Archive for ‘Architecture’ Category
Posted on 05:32, January 22nd, 2011 by Billy McCafferty
Over the years, a number of well-defined architectures have been proposed for a wide assortment of project domains; a few examples were described in Architectural Paradigms of Robotic Control. As described, the architectural approaches can typically be categorized as follows:
As with any project, a challenge is picking the appropriate architecture for the task at hand. Thankfully, much work has been done on analyzing agent-oriented architectures to assist development teams with making this selection. While a little dated, Jörg Müller et al.’s Intelligent Agents V provides a solid introduction to analyzing architectural approaches and comparing them to one another for particular project needs. For the project at hand, only those architectures which were hybrid, supporting deliberative and reactive capabilities, were considered. Furthermore, those supporting multi-agent coordination were particularly preferred. Specifically, the following architectures were considered in detail: RAPs/ATLANTIS/3T, Lyons & Hendriks, TouringMachines, InteRRaP, SimAgent, and NMRA. Without going into details as to why the choice was made, InteRRaP has been selected as the target architectural design for the project, providing a good balance of reactive and deliberative capabilities while supporting multi-agent communication and cooperation. This post introduces the major concepts of InteRRaP and the effects of this architectural selection on the O-MaSE project methodology.
InteRRaP is a hybrid, belief-desire-intention (BDI) architecture supporting the modeling of reactive, goal-directed, and interacting agents by providing 1) a set of hierarchical control layers, 2) a knowledge base that supports the representation of different abstraction levels of knowledge, and 3) a well-defined control structure that ensures coherent interaction among the control layers. For completeness, BDI should be further described. A BDI approach is broken down, conceptually, as the following mental categories:
It should be noted that InteRRaP is not a traditional BDI architecture; it attempts to leverage the advantages of a BDI architecture as a component of its hybrid approach to structuring multi-agent systems, distributing the mental categories over three layers. For example, beliefs are split into three layered models:
For action deliberation and execution, InteRRaP incorporates three hierarchical control layers described as:
The initiation of actions is triggered by specific situations, which are specific subsets of the agent's beliefs. Similar to the breakdown of belief modeling and control layering, situations are classified into three separate categories:
The implementation of the control architecture itself is based on the principles of:
The diagram above illustrates how the underlying principles were used in implementing the control architecture of InteRRaP. There are three primary modules: a world interface providing the agent’s perception, communication, and action interfaces with its environment; a knowledge base partitioned into three layers, consisting of the world, mental and social models described previously; and a control unit organized into the three control layers described previously (behavior-based, local-planning, and cooperative-planning). Furthermore, each control layer has two processes: a situation recognition and goal activation (SG) process and a planning, scheduling, and execution (PS) process. Control moves from the behavior layer up until a suitable layer competent for execution is found; action is then directed back down to the behavior layer, which is the only layer with direct access to sensors and actuators.
To help limit the scope of responsibility of each layer, each is limited to a respective portion of the knowledge base. For example, the behavior-based layer only has access to the world model and can only recognize situations warranting a purely reactive response. Conversely, the cooperative planning layer has access to the social, mental and world models, allowing it to recognize more complex situations and to plan and pass down execution commands, accordingly.
Implications on O-MaSE
As described previously, O-MaSE is a flexible methodology for the definition and design of multi-agent systems. While choosing InteRRaP as a preferred architecture does not preclude the use of any O-MaSE tasks, it implies the introduction of a new task: Model Situations Task. This task would define the situations that may be recognized and acted upon. Going a step further along these lines, a supporting O-MaSE task may be introduced – Refine Situations Task – to better assign which control layers should be responsible for recognizing and responding to each situation.
This introductory post to InteRRaP only touches upon the major components of this architectural approach in an effort to concisely describe its intent and organization. The interested reader is strongly encouraged to read the references found at the bottom of this post for more detailed information. In the next post, we’ll look at some examples of how O-MaSE was used to define requirements in alignment with the selected InteRRaP architecture.
It should be noted that it is not my intention to follow InteRRaP “to a tee”; rather, I find its overall organization to be very logical and will use it as inspiration for structuring current project work; for example, I could see trex-autonomy as being a suitable approach for implementing the behavior-based and local-planning layers without negating the underlying principles of InteRRaP, nor its implied organization. Time (and a lot of trial-and-error) will tell.
Müller, Jörg. 1996. The Design of Intelligent Agents.
Rao, Anand and Michael Georgeff. 1995. BDI-Agents: From Theory to Practice.
Fischer, Klaus, Jörg Müller, and Markus Pischel. 1995. Unifying Control in a Layered Agent Architecture.
Posted on 07:22, July 30th, 2010 by Billy McCafferty
Part VI: Adding a UI Layer to the Package
As the final chapter of this series of posts (Part I, II, III, IV, V), we’ll be adding a basic UI layer to facilitate user interaction with the underlying layers of our package. Specifically, a UI will be developed to allow the user (e.g., you) to start and stop the laser reporting application service via a wxWidgets interface. If you’re new to wxWidgets, it really is a terrific open-source UI package with very helpful online tutorials, a thriving community, and a very helpful book, Cross-Platform GUI Programming with wxWidgets – certainly a good reference to add to the bookshelf. Admittedly, the sample code discussed below is very simplistic and only touches upon wxWidgets; with that said, it should demonstrate how to put the basics in place and show how the UI layer interacts with the other layers of the package.
Developing a UI layer with wxWidgets is quite straightforward; the UI itself is made up of two primary elements: a wxApp which is used to initialize the UI and a wxFrame which serves as the primary window. For the task at hand, the wxApp in the UI layer will be used to perform three primary tasks, in the order listed:
As a rule of thumb, the UI layer should only communicate to the rest of the package elements via the application services layer. E.g., the UI layer should not be invoking functions directly on domain objects found within ladar_reporter_core; instead, it should call tasks exposed by the application services layer which then coordinates and delegates activity to lower levels.
Before we delve deeper, as a reminder of what the overall class diagram looks like, as developed over the previous posts, review the class diagram found within Part V. The current objective will be to add the UI layer, as illustrated in the package diagram found within Part I. To cut to the chase and download the end result of this post, click here.
Show me the code!
1. Setup the Package Skeleton, Domain Layer, Application Services Layer, and Message Endpoint Layer
2. Install wxWidgets
Download and install wxWidgets. Instructions for Ubuntu and Debian may be found at http://wiki.wxpython.org/InstallingOnUbuntuOrDebian.
3. Define the UI events that the user may raise
Create an enum class at src/ui/UiEvents.hpp to define UI events as follows:
As suggested by the enum values, the user will be able to start the reporting process, stop it, and quit the application altogether.
4. Create the wxWidgets application header class
Create src/ui/LadarReporterApp.hpp containing the following code:
A few notes:
5. Create the wxWidgets application implementation class
Create src/ui/LadarReporterApp.cpp containing the following code:
The direction for this class was taken from wxWidgets online tutorials along with reviewing the ROS turtlesim package, which is a real treasure trove for seeing how a much more sophisticated ROS UI is put together. (If you have not already, I strongly suggest you review the turtlesim code in detail.)
6. Create the wxWidgets frame header class
Now that the wxWidgets application is in place, the frame, representing the UI window itself, needs to be developed. Accordingly, create src/ui/LadarReporterFrame.hpp containing the following code:
There are a couple of interesting bits in the header:
7. Create the wxWidgets frame implementation class
Create src/ui/LadarReporterFrame.cpp containing the following code:
A few implementation notes:
There’s obviously a lot of wxWidgets-related information, beyond the scope of these posts, that I am glossing over. The wxWidgets documentation referenced earlier should fill in any remaining gaps.
8. Configure CMake to Include the Header and Implementation
With the header and implementation classes completed for both the wxWidgets application and frame, we need to make a couple of minor modifications to CMake for their inclusion in the build.
9. Add a ROS wxWidgets Dependency to manifest.xml
Since the package will be leveraging wxWidgets, a dependency needs to be added for the package to find and use this, accordingly:
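As a hedged sketch of the declaration (the exact dependency or rosdep key varied across early ROS releases, so treat the names below as assumptions to verify against your distro):

```xml
<!-- manifest.xml: declare the wxWidgets dependency; turtlesim-era packages
     typically used a rosdep entry along these lines (key may differ). -->
<package>
  <depend package="roscpp"/>
  <depend package="sensor_msgs"/>
  <rosdep name="wxwidgets"/>
</package>
```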
10. Build and try out the UI Functionality
We are now ready to try everything out. While it is generally possible to write unit tests for the UI layer, personal experience has shown that the UI changes too frequently to make such unit tests worthwhile. UI unit tests quickly become a maintenance headache and do not provide much more value than what the existing unit tests have already proven; i.e., we’ve already verified through unit tests that the heart of our package – the domain objects, the message endpoints, and the application services – is working as expected…the UI is now “simply” the final touch. Enough babble, let’s see this baby in action:
Well, that about wraps it up. We started by laying out our architecture and then systematically tackled each layer of the package, with proper separation of concerns and unit testing to make sure we were doing what we said we were doing. As demonstrated with the layering approach that we developed, higher layers (e.g., application services and core) didn’t depend on lower layers (e.g., message endpoints and the ROS API). In fact, when possible, the lower layers actually depended on interfaces defined in the higher layers; e.g., the message endpoint implemented an interface defined in the higher core layer. (Although the class diagrams show core on the bottom, they’re actually reflecting the dependency inversion that was introduced.) This dependency inversion enabled a clean separation of concerns while allowing us to unit test the various layers in isolation from each other.
I sincerely hope that this series has shed some light on how to properly architect a ROS package. While this series did not go into a granular level of detail with respect to using ROS and wxWidgets, it should have provided a good starting point for developing a solid package. The techniques described in this series have been honed over many years by demi-gods of development (e.g., Martin Fowler, Robert Martin, Kent Beck, Ward Cunningham, and many others) and continue to prove their value in enabling the development of maintainable, extensible applications which are enjoyable to work on. While ROS may be relatively new, the tried-and-true lessons of professional development are quite timeless indeed.
As always, your feedback, questions, comments, suggestions, and even rebuttals are most welcome. To delve a bit further into many of the patterns oriented topics discussed, I recommend reading Gregor Hohpe’s Enterprise Integration Patterns and Robert Martin’s Agile Software Development, Principles, Patterns, and Practices. And obviously, for anything ROS related, you’ll want to keep reading everything you can at http://www.ros.org/wiki/ (and here at sharprobotica.com, of course)!
Posted on 05:13, July 28th, 2010 by Billy McCafferty
Part V: Developing and Testing the ROS Message Endpoint
[Author's note, July 28, 2010: Introduced Boost::shared_ptr to manage reference to message endpoint so as to enable postponed construction, as will be required in Part VI.]
While this series (Part I, II, III, IV) has been specifically written to address writing well-designed packages for ROS, we’ve actually seen very little of ROS itself thus far. In fact, outside of the use of roscreate to generate the package basics and sensor_msgs::LaserScan for communicating laser scan data from the reader up to the application services layer, there’s been no indication that this application was actually intended to work with ROS now or ever. Ironically, this is exactly what we’d expect to see in a well designed ROS package.
Each layer that we’ve developed – as initially outlined in Part I – is logically separated from each other’s context of responsibility. To illustrate, the upper layers do not directly depend on “service” layers, such as message endpoints. Instead, the lower layers depend on abstract service interfaces declared in the upper layers. This dependency inversion was enabled in Part IV with the creation of ILaserScanEndpoint, a separated interface. If all of this dependency inversion and separated interface mumbo-jumbo has your head spinning at all, take some time to delve deeper into this subject in Dependency Injection 101.
While the actual message endpoint interface was created, only a test double was developed for testing the application service layer’s functionality. Accordingly, in this post, the concrete message endpoint “service,” which implements its separated interface, will be developed and tested. That’s right…we’ll finally actually talk to ROS! You can skip to the chase and download the source for this post.
Before digging into the code, it’s important to take a moment to better understand the purpose and usefulness of the message endpoint. The message endpoint encapsulates communications to the messaging middleware similarly to how a data repository encapsulates communications to a database. By encapsulating such communications, the rest of the application (ROS package, in our case) may remain blissfully oblivious to details such as how to publish messages to a topic or translate between messages and domain layer objects.
This separation of concerns helps to keep the application cleanly decoupled from the messaging middleware. Another benefit of this approach is enabling the development and testing of nearly the entirety of the application/package before “wiring” it up to the messaging middleware itself. This typically results in more reusable and readable code. If you haven’t already, I would encourage you to read the article Message-Based Systems for Maintainable, Asynchronous Development for a more complete discussion on message endpoints.
Target Class Diagram
The following diagram shows what the package will look like after completing the steps in this post…it’s beginning to look remarkably similar to the package diagram discussed in Part I of this series, isn’t it? If you’ve been following along, most of the elements have already been completed; only the concrete LaserScanEndpoint and LaserScanEndpointTests will need to be introduced along with a slight modification to the TestRunner.
1. Setup the Package Skeleton, Domain Layer and Application Services Layer
If not done already, follow the steps in Part II, Part III, and Part IV to create the package and develop/test the domain and application service layers. (Or just download the code from Part IV as a starting point to save some time.)
2. Create the message endpoint header class.
Create src/message_endpoints/LaserScanEndpoint.hpp containing the following code:
3. Create the message endpoint implementation class.
Create src/message_endpoints/LaserScanEndpoint.cpp containing the following code:
As you can see, there’s really not much to the actual publication process…which is what we were hoping for. The message endpoint should simply be a lightweight means to send and receive messages to/from the messaging middleware. This message endpoint does so as follows:
4. Configure CMake to Include the Header and Implementation
With the header and implementation classes completed, we need to make a couple of minor modifications to CMake for their inclusion in the build.
5. Build the message endpoints Class Library
In a terminal window, cd to /ladar_reporter and run
Like with everything else thus far…it’s now time to test our new functionality.
6. Unit Test the LaserScanEndpoint Functionality
While testing up to this point has been pretty straight-forward, we now need to incorporate ROS package initialization within the test itself.
The first five parts of this series conclude the primary elements of developing well-designed packages for Robot Operating System (ROS) using proven design patterns and proper separation of concerns. Obviously, this is not a trivially simple approach to developing ROS packages; indeed, it would be overkill for very simple packages. But as packages grow in size, scope, and complexity, techniques described in this series should help to establish a maintainable, extensible package which doesn’t get too unruly as it evolves. In Part VI, the final part in this series, we’ll look at adding a simple UI layer, using wxWidgets, to interact with the package functionality.
Download the source for this post.
Posted on 00:06, July 13th, 2010 by Billy McCafferty
In Architectural Paradigms of Robotic Control, I discussed a number of control architectures with a bias towards a hybrid approach, for facilitating reactive behaviors without precluding proper planning. With 3T, a common hybrid approach, the three layers include a skill layer for reactive behavior and actuator control, a sequencing (or execution) layer for sequencing behaviors based on relevant conditions, and a planning (or deliberative) layer for making plans for future actions.
While the skill layer is typically developed in a low level language such as C++, the sequencing and planning layers frequently require a “higher” language to manage complexity and required flexibility. (E.g., a language using XML to express and execute first-order predicate logic without worrying about the low level implementation details of C++ control structure could be considered a “higher” language.) Indeed, Douglas Hofstadter, in his classic work Gödel, Escher, Bach, suggests that such higher level languages will most certainly be a prerequisite for developing more intelligent machines.
ESL (Execution Support Language), developed by Ron Garret (the artist formerly known as Erann Gat), is one such higher language, built on Lisp, for the implementation of the sequencing layer of a hybrid control architecture. ESL is discussed in both Artificial Intelligence and Mobile Robotics and Springer Handbook of Robotics as being a language which:
After looking around for an implementation of ESL, I contacted Dr. Garret to find out where I might find it; he has graciously made ESL available for download from his site. While I admit that I have not yet used ESL, I look forward to digging into Ron’s code to learn more about this seemingly solid approach to developing and managing a proper sequencing layer.
While I am also familiar with Task Description Language (TDL) as an alternative to ESL, I am quite interested in hearing about any other approaches actively being taken to managing the sequencing/execution layer. I’ll certainly post more about ESL or other options as I research more on this topic. Incidentally, I’m also looking forward to digging into Herbal for the planning layer…but that’s for another post altogether!
Posted on 03:04, June 19th, 2010 by Billy McCafferty
Part IV: Developing and Testing the Application Services
[Author's note, July 28, 2010: Introduced Boost::shared_ptr to manage reference to message endpoint so as to enable postponed construction, as will be required in Part VI.]
Ah, we’ve made it to the application services layer. After defining the architecture, setting up the package, and implementing the core layer which contains the domain logic of the application, we’re ready to take on developing the application services layer. In short, the service layer provides the external API to the underlying domain layer. Ideally, one should be able to develop the entirety of the “non-UI” portions of an application and have it exposed via the application services layer. In other words, one should be able to swap out the front end – say, from a web front end to a Flash front end – without affecting the application services layer. If you’d like to go ahead and download the source for this article, click here.
The application services layer is analogous to a task or coordination manager; i.e., it doesn’t know how to carry out the low level details of a particular action but it does know who is responsible for carrying out particular tasks. Accordingly, the application services layer of the package (or any application for that matter) is mostly made up of a number of publicly accessible methods which, in turn, pass responsibilities on to external service dependencies (e.g., a message endpoint) and the domain layer for execution.
With that said, the services layer should still eschew direct dependencies on external services, such as communications with a messaging framework (e.g., ROS). Accordingly, in this post, we’ll materialize the application services layer of our package and give it two responsibilities:
But (there’s always a “but” isn’t there), while the application services layer should be responsible for passing messages received from the domain layer on to the messaging middleware, it should not have a direct dependency on that messaging middleware itself. This decoupling facilitates testing of the application services layer with test doubles, keeps a clean separation of concerns between task coordination and messaging, and provides greater flexibility with being able to modify/upgrade the messaging layer without affecting the application services layer.
A moment needs to be taken to clarify the differences among application services, domain services, and “external resource” services.
Before we delve in, let’s briefly review what we plan to accomplish:
Enough with the chatter, let’s see some code!
Target Class Diagram
The following diagram shows what the package will look like after completing the steps in this post. While the individual elements will be discussed in more detail, the class diagram should serve as a good bird’s-eye view of the current objectives. The elements with green check marks were completed in previous posts.
1. Setup the Package Skeleton and Domain Layer
2. Create an interface for the message endpoint service which the application service layer will leverage to communicate with ROS.
There shouldn’t be anything too surprising in this interface. It simply exposes a method to publish a laser scan to the underlying messaging middleware. Why not put the interface in the application services layer, which intends to use it? A couple of good reasons come to mind: 1) since it’s a pure interface, having elements within core aware of it (or even directly dependent upon it) does not introduce any further coupling to the underlying external resource, the messaging middleware; and 2) in very simple packages, an application services layer might be overkill, so keeping the “external resource service interface” (there’s a mouthful) in the core layer facilitates either approach without having to move anything around if the selected approach changes during development.
3. Create the application service header class.
Create src/application_services/LaserScanReportingService.hpp containing the following code:
A few notes:
4. Create the application service implementation class.
Create src/application_services/LaserScanReportingService.cpp containing the following code:
A few notes:
As mentioned previously, the above implementation could be done without a private implementation design pattern, but this serves to illustrate how such a pattern may be leveraged when warranted.
5. Configure CMake to Include the Header and Implementation
With the header and implementation classes completed, we need to make a couple of minor modifications to CMake for their inclusion in the build.
6. Build the application services Class Library
In a terminal window, cd to /ladar_reporter and run
Like before, we’re not done yet…it’s now time to test our new functionality.
7. Unit Test the LaserScanReportingService Functionality
When we go to test the functionality of the laser scan reporting application service, we’ll quickly notice a missing dependency: a concrete implementation of ILaserScanEndpoint.hpp. The job of the message endpoint class will be to take laser scans and publish them to the appropriate topic on ROS. But we’re just not there yet…what we’d really like to do is test the functionality of the application service layer – LaserScanReportingService.cpp to be specific – without necessitating the presence of the message endpoint and ROS itself. While we’ll have to cross that bridge eventually (in Part V to be exact), we’re not currently interested in doing integration tests. Instead, we’re only interested in testing the behavior of the application service regardless of its integration with ROS.
Accordingly, a “stub” object will be employed to stand in for an actual message endpoint. In this case, the stub object is nothing more than a concrete implementation of ILaserScanEndpoint.hpp; but instead of publishing the laser scan to ROS, it’ll do something testable, such as keep a tally, or possibly a temporary queue, of laser scans “published” which can then be verified with testing asserts. If you’re new to unit testing, you’ll want to read about test doubles, Martin Fowler’s Mocks Aren’t Stubs, and XUnit Test Patterns for a more comprehensive treatment of the subject.
Onward with testing…
While running the tests, you should see a few messages published to the message endpoint stub. This demonstrates that all of the interactions among our core and application service layers are occurring exactly as expected. Now for the fun part…
In Part V, we’ll finally take a look at using all of these pieces together in order to publish messages to a ROS topic via a message endpoint.
Posted on 05:47, May 20th, 2010 by Billy McCafferty
Part III: Developing and Testing the Domain Layer
[Author's note, July 28, 2010: Fixed minor bug in LaserScanReader.cpp wherein it couldn't be restarted after being stopped; had to reset _stopRequested to false.]
In Part II of this series, we created the humble beginnings of the package and added folders to accommodate all of the layers of our end product, a well-designed ROS package that reports (fake) laser scan reports. In this post, the domain layer of the package will be fleshed out along with unit tests to verify the model and functionality, accordingly. The entire focus will be on implementing just one of the requirements initially described in Part I: The package will read laser reports coming from a laser range-finder. If you’d like to download the resulting source for this article, click here.
That certainly sounds easy enough. Disregarding the previous discussions concerning architecture, the gut reaction might be to start adding code to main(), simply taking the results from the range-finder, turning them directly into a ROS message, and publishing the messages to the appropriate ROS topic. This myopic “get ‘er done” approach quickly gets out of hand as main() turns into a tangled mess of code managing a variety of responsibilities. Object-oriented principles aside, having all of these separate concerns mashed into main() turns the little package into a maintenance nightmare with little ability to reuse code.

As mentioned, the first concern that we’ll want to tackle is the ability to read laser range-finder reports. We’ll tackle this requirement by encapsulating the range-finder integration code within a class called LaserScanReader.cpp. By doing so, all of the communications to the range-finder are properly encapsulated within one or more classes, making the integration code easier to reuse and maintain.

To keep our focus on the overall architecture, and to avoid the need to have a physical range-finder handy, we’ll simulate range-finder communications within LaserScanReader.cpp. An added benefit of this approach, if we were doing this for a real-world package, is that one group could work on the “rest” of the package while another group works on the actual range-finder communications; when ready, LaserScanReader.cpp could be switched out with the “real” range-finder integration code.
Before proceeding, recall that, as described in Part I, the domain layer of the package should not have knowledge concerning how to communicate with the messaging middleware directly (e.g., ROS). This implies that the domain layer should have no direct dependencies on the messaging middleware. This allows the domain layer to be more easily reused with another messaging middleware solution. Additionally, keeping this clean separation of concerns facilitates the testing of the domain layer independently from its interactions with the messaging middleware. Accordingly, the simple domain layer developed in this post will adhere to this guidance along with full testing for verification of capabilities as well.
Our LaserScanReader class will expose two methods, beginReading() and stopReading(), along with an observer hook to provide a call-back to be invoked whenever a new reading is available. For now, we won’t worry about what exactly will be called back in the completed package, as that’ll be a concern of the application services layer; but we’ll need to prepare for it by including an interface for the laser scan observer.
Target Class Diagram
The following diagram shows what the package will look like after completing the steps in this post. While the individual elements will be discussed in more detail, the class diagram should serve as a good bird’s-eye view of the current objectives.
1. Setup the Package Skeleton
If not done already, follow the steps in Part II to create the beginnings of the package.
2. Create the ILaserScanListener Observer Header
Whenever a laser scan is read, it’ll need to be given to whomever is interested in it. It should not be the responsibility of the laser scan reader to predict who will want the laser scans. Accordingly, an observer interface should be introduced which the laser scan reader will communicate through to raise laser scan events to an arbitrary number of listeners.
Add a new interface header file to /ladar_reporter/src/core called ILaserScanListener.hpp containing the following code:
As you can see, this C++ interface (or as close as you can get to an interface in C++) simply exposes a single function to handle laser scan events.
3. Create the LaserScanReader Header
Add a new class header file to /ladar_reporter/src/core called LaserScanReader.hpp containing the following code, which we’ll discuss in detail below.
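Since the original listing was lost, here is a sketch of the shape the header likely takes. Stand-in types replace sensor_msgs::LaserScan and the real ILaserScanListener header so the sketch stands on its own, and all member names are illustrative:

```cpp
// LaserScanReader.hpp -- sketch of the reader's public surface.

#include <cstddef>
#include <vector>

struct LaserScan { float angle_min; };  // stand-in for sensor_msgs::LaserScan

class ILaserScanListener {
public:
  virtual ~ILaserScanListener() {}
  virtual void onLaserScan(const LaserScan& scan) = 0;
};

class LaserScanReader {
public:
  LaserScanReader() : running_(false) {}

  // Listeners registered here are notified of every new scan.
  void attachListener(ILaserScanListener& listener) {
    listeners_.push_back(&listener);
  }

  // Start/stop the reading cycle; bodies live in LaserScanReader.cpp.
  void beginReading();
  void stopReading();

  // Raises the scan event to every attached listener.  (Public in this
  // sketch so it is easy to exercise; it may well be private in the
  // real header.)
  void notifyListeners(const LaserScan& scan) {
    for (std::size_t i = 0; i < listeners_.size(); ++i)
      listeners_[i]->onLaserScan(scan);
  }

private:
  bool running_;
  std::vector<ILaserScanListener*> listeners_;
};

// Small concrete listener so the sketch can be exercised.
class CountingListener : public ILaserScanListener {
public:
  CountingListener() : scansReceived(0), lastAngleMin(0.0f) {}
  virtual void onLaserScan(const LaserScan& scan) {
    ++scansReceived;
    lastAngleMin = scan.angle_min;
  }
  int scansReceived;
  float lastAngleMin;
};
```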
Let’s now review the more interesting parts of the header class:
4. Create the LaserScanReader Class Implementation
Add a new class file to /ladar_reporter/src/core called LaserScanReader.cpp containing the following code, which we’ll discuss in detail below.
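The original implementation listing is likewise missing; the following self-contained sketch shows one plausible shape for it. The era’s code would likely have used boost::thread, while std::thread is used here; a stand-in LaserScan replaces sensor_msgs::LaserScan; and readScan() fabricates data where the real class would pull readings from the ladar device:

```cpp
// LaserScanReader.cpp -- sketch of the reader implementation: a
// background loop reads scans and raises each one to the listeners.

#include <atomic>
#include <chrono>
#include <thread>
#include <vector>

struct LaserScan { float angle_min; };

class ILaserScanListener {
public:
  virtual ~ILaserScanListener() {}
  virtual void onLaserScan(const LaserScan& scan) = 0;
};

class LaserScanReader {
public:
  LaserScanReader() : running_(false) {}
  ~LaserScanReader() { stopReading(); }

  void attachListener(ILaserScanListener& listener) {
    listeners_.push_back(&listener);
  }

  // Spawns the background reading cycle.
  void beginReading() {
    if (running_) return;
    running_ = true;
    readerThread_ = std::thread(&LaserScanReader::readingLoop, this);
  }

  // Signals the cycle to end and waits for it to finish.
  void stopReading() {
    running_ = false;
    if (readerThread_.joinable()) readerThread_.join();
  }

private:
  void readingLoop() {
    while (running_) {
      LaserScan scan = readScan();
      for (std::size_t i = 0; i < listeners_.size(); ++i)
        listeners_[i]->onLaserScan(scan);
      std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
  }

  // Placeholder for the actual device read.
  LaserScan readScan() {
    LaserScan scan;
    scan.angle_min = -1.57f;
    return scan;
  }

  std::atomic<bool> running_;
  std::thread readerThread_;
  std::vector<ILaserScanListener*> listeners_;
};

// Listener used to observe the reader from the outside.
class CountingListener : public ILaserScanListener {
public:
  CountingListener() : scansReceived(0), lastAngleMin(0.0f) {}
  void onLaserScan(const LaserScan& scan) override {
    lastAngleMin = scan.angle_min;
    ++scansReceived;
  }
  std::atomic<int> scansReceived;
  float lastAngleMin;
};
```

Note that stopReading() joins the reader thread, so callers can rely on no further events being raised once it returns.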
Let’s now review the more interesting parts of the class implementation:
5. Add a ROS sensor_msgs Dependency to manifest.xml
Since the code above refers to sensor_msgs::LaserScan, a dependency needs to be added for the package to use this class, accordingly:
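In the rosbuild era this is a one-line addition to the package’s manifest.xml; the surrounding elements are elided here:

```xml
<!-- /ladar_reporter/manifest.xml: add alongside the existing depend elements -->
<depend package="sensor_msgs"/>
```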
6. Configure CMake to Include both the Header and Implementation
With the header and implementation classes completed, we need to make a couple of minor modifications to CMake for their inclusion in the build.
First, open /ladar_reporter/CMakeLists.txt and make the following modifications:
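The exact listing is not reproduced here, but for a rosbuild-era package the result would look roughly like the following, with the include_directories and add_subdirectory lines being the additions:

```cmake
cmake_minimum_required(VERSION 2.4.6)
include($ENV{ROS_ROOT}/core/rosbuild/rosbuild.cmake)
rosbuild_init()

# Make the headers under src available everywhere, including the unit tests.
include_directories(${PROJECT_SOURCE_DIR}/src)

# Hand off the rest of the build to the CMake files under src.
add_subdirectory(src)
```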
In doing so, the LaserScanReader header has been included for consumption by other classes and application layers. This inclusion has been done in the root CMakeLists.txt as this will make the headers available to the unit tests as well. For more complex packages, this approach of including the headers in the root CMakeLists.txt, to make them globally accessible, may become a bit messy; in that case, it may be more appropriate to place the header inclusions only in the CMakeLists.txt files which actually require the respective headers.
At this point, a new CMake file is needed under /ladar_reporter/src:
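A sketch of that file, which does nothing but delegate to the layer folders (the commented lines anticipate layers added later in the series):

```cmake
# /ladar_reporter/src/CMakeLists.txt: delegate to each layer's own CMake file.
add_subdirectory(core)
# The remaining layers are added in later parts of the series:
# add_subdirectory(application_services)
# add_subdirectory(message_endpoints)
# add_subdirectory(ui)
```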
You’ll quickly notice that this CMake file has merely passed the buck of defining class libraries further down the chain. Accordingly, a CMakeLists.txt will be set up for each of the package layers including application_services, core, message_endpoints, and ui. All of these layers will be compiled into separate class libraries and, finally, an executable. Arguably, all of these layers could be combined into a single executable with a single CMakeLists.txt file. But keeping them in separate class libraries keeps a clean separation of concerns in their respective responsibilities and makes each aspect of our package more easily testable in isolation from the other layers.
Next, in order to create the core class library, a new CMake file is needed under /ladar_reporter/src/core:
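Something like the following, using rosbuild’s library macro; the library name here is illustrative:

```cmake
# /ladar_reporter/src/core/CMakeLists.txt: compile the core layer into
# its own class library.
rosbuild_add_library(ladar_reporter_core
  LaserScanReader.cpp
)
```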
We’re now ready to compile the class library for the “core” layer of the package…
7. Build the core Class Library
In a terminal window, cd to /ladar_reporter and run
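The command itself was most likely just make, since roscreate-pkg generates a Makefile that wraps rosbuild:

```shell
cd /ladar_reporter
make
```

Alternatively, `rosmake ladar_reporter` builds the package along with its dependencies.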
Woohoo! Done, right? Well, not yet…time to test our new functionality.
8. Unit Test the LaserScanReader Functionality
So far, you’ve had to simply assume that a successful build means everything is working as expected. Obviously, when developing a ROS package, we’ll want a bit more reassurance than a successful build to be confident that the developed capabilities are working as expected. Accordingly, unit tests should be developed to test the functionality; in the case at hand, a unit test will be developed to initialize, begin and stop the laser reading cycle to ensure that it is raising laser scan events as designed.
The standard ROS testing tool is gtest; this is a great choice as gtest is very easy to set up and provides informative output during unit test execution. To set up and run unit tests for the package, only two elements are needed: a “test runner” to act as the unit tests’ main and to execute all of the package unit tests, and the unit tests themselves, which may be spread out among a variety of folders and classes. When unit testing, I typically write one unit testing class (a test fixture) per class being tested. Furthermore, I include one unit test for each public function or behavior of the class being tested. As for a couple of other unit testing best practices, be sure to keep the unit tests independent of each other – they should be able to be run in isolation without depending on the running of other unit tests. Additionally, each unit test is organized as three stages: “establish context” wherein the testing context is set up, “act” wherein the desired behavior is invoked, and “assert” wherein the results of the behavior are verified. Finally, no testing code should be added to the “production” source code itself; all tests should be maintained in a separate executable to keep a clean separation of concerns between application code and tests. We’ll see an example of this as we proceed.
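The three-stage shape described above can be sketched as follows. In the package this would live in the test runner executable and use gtest’s TEST macros; plain asserts and a minimal synchronous stand-in for the reader are used here so the sketch stands alone, and all names are illustrative:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct LaserScan { float angle_min; };

class ILaserScanListener {
public:
  virtual ~ILaserScanListener() {}
  virtual void onLaserScan(const LaserScan& scan) = 0;
};

// Minimal synchronous stand-in for the reader, used purely to
// illustrate the shape of the test; the real test would exercise the
// actual LaserScanReader.
class LaserScanReader {
public:
  LaserScanReader() : reading_(false) {}
  void attachListener(ILaserScanListener& l) { listeners_.push_back(&l); }
  void beginReading() { reading_ = true; emitScan(); }
  void stopReading() { reading_ = false; }
  bool isReading() const { return reading_; }
private:
  void emitScan() {
    LaserScan scan;
    scan.angle_min = -1.57f;
    for (std::size_t i = 0; i < listeners_.size(); ++i)
      listeners_[i]->onLaserScan(scan);
  }
  bool reading_;
  std::vector<ILaserScanListener*> listeners_;
};

// Mock listener recording every scan it is handed.
class RecordingListener : public ILaserScanListener {
public:
  virtual void onLaserScan(const LaserScan& scan) { scans.push_back(scan); }
  std::vector<LaserScan> scans;
};

// One test per public behavior, organized into the three stages.
void testReaderRaisesScanEvents() {
  // Establish context: a reader with a mock listener attached.
  LaserScanReader reader;
  RecordingListener listener;
  reader.attachListener(listener);

  // Act: run a reading cycle.
  reader.beginReading();
  reader.stopReading();

  // Assert: the listener was handed a scan and reading has stopped.
  assert(!listener.scans.empty());
  assert(listener.scans[0].angle_min == -1.57f);
  assert(!reader.isReading());
}
```

Because the test talks to the reader only through attachListener and the observer interface, no testing code leaks into the production classes.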
When the test runs, you should see a few laser scan angle_min values get printed to the terminal along with the final report that the test successfully passed.
Now that we’ve done all of the above to complete the core domain layer of the package, let’s review what has been accomplished:
Posted on 19:38, April 27th, 2010 by Billy McCafferty
Part II: Creating the Package Skeleton
In Part I of this series, we examined the overall architecture that will be put in place with the development of the ROS package. It’s time to get our hands dirty and put the theory to work! Accordingly, this post in the series will focus on putting together the skeletal package which will contain the various layers of the described architecture. If you’d like to cut to the chase and download the source for this article, click here.
If you went through the excellently written ROS tutorials (you did, didn’t you?), you’re no doubt aware that ROS brings with it a number of helpers to make our lives easier. Accordingly, we’ll start by using roscreate-pkg to create the ROS package itself followed by placeholder folders for the remainder of the package.
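The commands would look roughly like the following; the layer folder names mirror the layers named later in the series and are otherwise illustrative:

```shell
roscreate-pkg ladar_reporter
cd ladar_reporter
mkdir -p src/application_services src/core src/message_endpoints src/ui test
```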
And with that, the package skeleton is now in place. Folders have been added to represent all of the application layers that will come to fruition over the next few posts, as we see the package evolve. (Typically, folders would only be added as needed, but this post provides a concise means to see how the end product will be structured.)
In Part III of this series on developing a well-designed ROS package, the domain elements will be added to the core layer, along with unit tests.
Posted on 04:21, April 25th, 2010 by Billy McCafferty
In previous posts, I discussed general patterns of message-based systems along with basic guidance on developing components for such systems. While these posts may have provided basic information about their respective topics, there’s a clear difference between theory and implementation. (The former is easier to fake.) Accordingly, it’s time to get our hands dirty and provide concrete guidance on developing packages for Robot Operating System (ROS) using proven messaging-system and other developmental design patterns.
This post is the first in a number of posts which walk the reader through nearly every facet of developing a package for ROS. This series will focus primarily on:
A ROS package is a complete application which happens to send and receive messages via messaging middleware (e.g., ROS) to help it complete its job. This isn’t too different from an application which simply retrieves and saves data to a database. Accordingly, the development of the package deserves the same design considerations that a stand-alone application would receive. This series of posts will walk the reader through the development of a package with such considerations taken into account. This will be a six-part series including:
At the end of this series, the reader should feel comfortable with the concepts behind designing and developing an extensible and maintainable ROS package adhering to proven design patterns and solid coding practices.
A Ladar Reporting Package
Before delving straight into implementation, it’s important to establish the context of the project to assist in guiding architectural and developmental decisions. Accordingly, the package that will be developed will be a ladar reporting package. The requirements of the package are simple enough:
Our first stop will be to examine the overall architecture of the package in order to conceptualize how the end product will be layered and developed.
Part I: Planning the Package Architecture
When developing any application, it’s important to strike the appropriate balance between extensibility and maintainability. We’ve all seen as many “over-architected” solutions as those that did not receive a single thought toward design. Such applications usually meet their demise in the middle of the night when a wearied developer throws up their hands in debugging exhaustion before starting The (infamous) Rewrite. Accordingly, when designing a package, “just enough” prefactoring should take place to architect a solution that will smoothly evolve to meet the package’s requirements without over-complicating the solution. For the task at hand, when architecting a ROS package, there are a number of minimal architectural guidelines which, if adhered to, will lead to an extensible messaging component that remains maintainable and easy to understand; a few guiding principles are as follows:
At first glance, this may appear to be a lot more than just “minimal prefactoring.” But after more than a dozen years of professional development, and many painful lessons along the way, I can assure you that this is a solid approach to architecting your package. With that said, I certainly encourage you and your team to discuss what architectural approach is most appropriate for the task at hand – and the capabilities of the team – before agreeing upon a particular approach.
The best way to further discuss how the architectural guidelines will be implemented in the package is to review a UML package diagram expressing all of the layers and the direction of dependencies, accordingly:
Keep in mind that this is not a diagram for the overall message-based system; this represents how each individual ROS package is architected. Accordingly, in the above UML package diagram, each box represents a separate library encapsulating a distinct functional concern. The arrows represent the direction of dependencies; e.g., application_services depends on core. Let us now review each layer in more detail, beginning from the bottom up.
As described previously, the domain layer encapsulates the core capabilities of the package. For instance, if this were a package providing SLAM capabilities, then this layer would contain the associated algorithms and logic. In addition to domain logic, the core library contains interfaces for services which depend on external resources, such as message endpoints and (if applicable) data repositories. Having interfaces in the core library allows both the core and application services layers to communicate with services while remaining loosely coupled to the implementation details of those services.
As described previously, the application services layer acts as the “task manager” of the application, delegating execution responsibilities to the domain layer and services. Note that the application services layer, similar to core, has no direct dependency on the service implementation classes, e.g., the message endpoints. It only has knowledge of the service interfaces for remaining loosely coupled and for facilitating unit testing.
Arguably, in smaller packages and applications, an application services layer may not add much value, serving only to complicate an otherwise simple design. With that said, care should be taken at the beginning of a project to fully consider the pros and cons of including an application services layer before deciding to discard its use.
If the package were to retrieve and save data to a database or other external data source, data repositories would encapsulate the details of communicating with the external data source. Note that the data repositories implement the repository interfaces defined in core.
This library contains the concrete implementations of the message endpoints. The message endpoints are the only classes which actually know how to publish, subscribe, and otherwise communicate with the ROS messaging middleware. Note that the message endpoints implement the endpoint interfaces defined in core.
This layer contains the user interface for interacting with the package, if needed.
Finally, this library contains all of the unit tests for testing all layers of the ROS package.
This is our package architecture in a nutshell. In the next post, we’ll begin developing each layer from Core on up to the UI, starting with the overall package structure itself.
Posted on 21:34, March 23rd, 2010 by Billy McCafferty
Even when developing the most basic CRUD application, we ask ourselves a number of questions – whether we realize it or not – during the initial phases of development concerning the architecture and construction of the project. Where will the data be persisted? What mechanism will be used to communicate with the database? How will data from the database be transformed into business objects? Should separated interfaces and dependency injection be employed to maintain a clean separation of concerns between application logic and data access objects? What UI components will be leveraged to speed development of the UI layer? When developing message-based systems (see Message-Based Systems for Maintainable, Asynchronous Development for an introduction) it’s immensely helpful to formalize such questions about how the systems will be designed and developed. Creating a checklist of such items to decide upon, and formalizing answers for the application context, helps to:
This post provides an addendum, if you will (I’m sure you will), to Message-Based Systems for Maintainable, Asynchronous Development for complementing the article with a checklist of questions and architectural topics that should be discussed, decided upon, and/or spiked before beginning development of your budding message-based system. Accordingly, this does not describe a methodology for developing message-based systems, but simply a checklist of topics which should be taken into consideration before development begins. Many of the checklist items below serve as good starting points for team discussion and for the development of architectural spikes for demonstrating implementation details.
Integration with Messaging Middleware
Channels and Routing
As stated previously, this checklist is not intended to provide a methodology for developing message-based systems, but should serve as a good basis to make sure “T’s are crossed and I’s are dotted” when deciding upon the major architectural aspects and development techniques that will be leveraged throughout the development of your (team’s) message-based system.
Posted on 15:23, March 12th, 2010 by Billy McCafferty
Preface (you know it’s good if there’s a preface)
In Architectural Paradigms of Robotic Control, a number of architectures were reviewed including deliberative, reactive, and hybrid architectures. Each of these exhibits a clean separation of concerns with layering and encapsulation of defined behaviors. When implemented, the various capabilities, such as planners and mobility controllers, are encapsulated into discrete components for better reusability and maintainability. A pivotal aspect not discussed in the previous article is how the various system layers and components communicate with each other, such as reporting sensor feedback and sending commands to actuator controllers. Effectively resolving this communication challenge is not only important to robotic systems but to many other industries and domains for the successful integration of disparate applications.
To give credit where credit is due, this article pulls quite heavily from the patterns, taxonomy, and best practices presented in Enterprise Integration Patterns (Hohpe, 2003). This well organized book is chock full of hard learned lessons and solid guidelines for developing maintainable message-based systems. This article should not be seen as an adequate replacement for that book (it’s more like cliff notes with a spackling of robotics bias); indeed, Enterprise Integration Patterns should have a prominent place on your bookshelf if you’re developing message-based systems – so read this post and then browse http://www.eaipatterns.com/ while waiting for your copy to arrive to delve deeper.
A Need for Message-Based Systems
A few industries in particular, such as finance, healthcare and robotics, are demanding integration of an intimidating number of separate technologies that may be spread across computers, networks, and/or built upon a variety of technological platforms. Not only is this integration tricky, it can come with a significant cost to performance and maintainability if not implemented correctly. Accordingly, a solution is needed which facilitates loosely coupled integration while accommodating the performance demands of the task at hand. Taking a message-oriented approach to inter-application communications is one such way to accommodate these demands in a maintainable manner without sacrificing performance. This article gives an introduction to developing message-based systems using messaging middleware, describes taxonomy for discussing messaging topics and patterns, and includes a number of best practices.
Before delving further, it’s important to clarify a few terms that will be used frequently:
As stated, messaging provides one means of facilitating inter-component communications. But as with any design approach, the project requirements must be carefully considered to determine if messaging is the appropriate mechanism for integration. While messaging is robust and facilitates integration, it also adds complexity and indirection. So before deciding to use messaging as the means of integration, consider all component integration options including (Hohpe, 2003):
Determining which integration approach is most suitable to your project’s needs is beyond the scope of this article, which focuses specifically on messaging. In turn, we’ll review important elements of developing a message-based system including: message channels, messages, message routers, and message endpoints.
When a component sends information to another component in a message-based system, it adds the information to a message channel. The receiving component then retrieves the information from the message channel. Different channels are created for each kind of data to be carried; having a separate channel for each datatype better enables receiving components to know what kind of data will be retrieved from a given channel. To be usable, each channel is addressable for sending messages to and retrieving messages from it. How a channel is addressed varies depending on the messaging middleware being leveraged, but it’s usually a port number or a unique string identifier. As a good practice for keeping channels organized, if string identifiers are available, a hierarchical naming convention may be employed to label channels by type and name; e.g., a channel carrying laser scans might be called “Perception/LaserScans.”
There are two basic kinds of message channels:
In addition to channels intended to carry information among components, it is a good practice to set up an invalid message channel to which malformed or unreadable messages may be forwarded for logging and to assist with debugging.
With a function call, a simple parameter or object reference may be passed and retrieved by the invoked method, sharing the same memory space. But when passing data between two processes with separate memory spaces, the data must be packaged into a “message” adhering to an agreed upon format which the receiver will be able to disassemble and understand. The sender of the message passes the message via a message channel. The receiver retrieves the message from the message channel and transforms the message into internal data structures appropriate for the task at hand.
A message is made up of two parts:
When sending a message, the sender intends for the message to be used, or responded to, in a particular way. The intention of the message may be described as being one of the following:
Event messages deserve a bit more discussion. In its simplest form, an event message would simply be informational, letting subscribers know that an event has occurred; e.g., a new laser scan is available. If subscribers would like details concerning the event, they would send a request-reply to the sender of the event to provide further details; e.g., the laser scan details. Alternatively, a document message could serve as the event, informing subscribers that an event has occurred while including the details of that event; e.g., a new laser scan is available with laser scan details included. The size of the event details and the rapidity with which the event occurs should be considered when deciding between publishing simple event messages and document messages as events.
Request-reply messages also deserve a bit more description. A request-reply is usually implemented as two point-to-point channels. The first channel delivers the request as a command message, while the second carries the reply back to the requestor as a document message. To keep the replying component more loosely coupled and reusable, the requestor should include a return address indicating the channel that the replier should use to publish the reply. After receiving the reply, a challenge for the requestor is to then correlate the reply to the original request. If the requestor is sending a number of requests in succession, it will likely be difficult to keep clear – if it matters – which request a reply is associated with. To resolve this, every request may include a unique message identifier that the replier would then include as a correlation identifier. (A message could have both a message Id and a correlation Id.) The requestor uses the correlation identifier to “jog its memory” concerning which request the response is for. But frequently, a request-reply is in the context of a particular domain object, such as a terrain map or a bank transaction; the correlation Id doesn’t include such information. To assist, the requestor can maintain a mapping (e.g., a hashtable) between message Ids and the relevant domain object Ids related to the original request. When the reply is received, the mapping may be used to load the appropriate domain objects and take further action, accordingly.
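The correlation technique described above might be sketched as follows; all names and types are illustrative stand-ins rather than any particular middleware’s API:

```cpp
// Sketch of request/reply correlation: the requestor remembers which
// domain object each outgoing message id concerned, then uses the
// reply's correlation id to find its way back.

#include <map>
#include <string>

struct Request { std::string messageId;     std::string payload; };
struct Reply   { std::string correlationId; std::string payload; };

class Requestor {
public:
  // Record the domain object the request relates to, keyed by message id,
  // then hand the request off to the outgoing point-to-point channel.
  void sendRequest(const Request& request, int domainObjectId) {
    pendingRequests_[request.messageId] = domainObjectId;
    // ... publish request on the outgoing channel ...
  }

  // On reply, the correlation id "jogs the memory": look up the domain
  // object id the original request was about.  Returns -1 if the reply
  // matches no outstanding request.
  int onReply(const Reply& reply) {
    std::map<std::string, int>::iterator it =
        pendingRequests_.find(reply.correlationId);
    if (it == pendingRequests_.end()) return -1;
    int domainObjectId = it->second;
    pendingRequests_.erase(it);  // the request is now answered
    return domainObjectId;
  }

private:
  std::map<std::string, int> pendingRequests_;
};
```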
Obviously, it is important that the senders and receivers of a message system agree upon the format that messages will take for clear interoperability, better reusability of components, and extensibility of the system. Consequently, a canonical data model should be well defined that all applications will adhere to. The canonical data model does not dictate how each application’s domain model must be structured, only how each application must format data within messages. Message translators are developed to convert the sending application’s domain model into the canonical data model before sending a message; receivers of messages then use their own message translators to translate the message into their own domain. This mechanism allows applications built on completely different technologies (e.g., C#, Lisp, and C++) to communicate with each other and exchange data. Many off-the-shelf messaging systems define their canonical data model which must be adhered to. For example, the Robot Operating System (ROS), which we’ll be looking at in more detail in subsequent posts, defines its canonical model at http://www.ros.org/wiki/msg. But the canonical data model need not be limited to defining the types of primitives available and how to include them in messages.
Domain-specific canonical data models may augment message formatting rules, adding semantic meaning to the data within a message. For example, the Joint Architecture For Unmanned Systems (JAUS) is a set of message guidelines for the domain of unmanned systems, such as autonomous vehicles. The JAUS guidelines provide domain specific rules for communicating data, such as propulsion and braking commands, sensor events, pose and location information, etc. To demonstrate, JAUS message types include (Siciliano, 2008):
A challenge in dealing with canonical data models is how to handle changes to the model. In order to support backwards compatibility of existing components when the canonical data model changes, new message channels could be created to carry the messages adhering to the new model; e.g., “Perception/LaserScans_V1″ and “Perception/LaserScans_V2.” Alternatively, the existing channels could continue to be leveraged to carry messages adhering to different versions of the canonical data model. To do so, the message, within its header, would include a format indicator, such as a version number or format document (e.g., DTD) reference. But if a sender knows that receivers of a particular message are mixed in which format they use, the sender would need to send two messages, one for each version of the canonical data model. Certainly, this is an important consideration when deciding which components should (or even can) be upgraded to newer formats, and in what order.
While the heavy lifting of the message routing is handled by the messaging middleware itself, there are times when it is useful to augment the middleware with custom message routers to support unique scenarios.
Suppose the destination of a message may change based on the number of messages that have already passed over a channel. In this scenario, the sender of a message may not know how many messages have been passed over a channel since other senders may have been publishing messages on the same channel. Consequently, a message router may subscribe to the channel to determine where each message should be forwarded to, based on the described business rules. Once the destination is determined, the router would then place the message on a subsequent channel to be delivered to the appropriate destination. This intermediary routing is described as predictive routing as the message router is aware of every possible destination and the rules for routing, accordingly. If the routing is based on content within the message itself, such as threshold values, then the custom router is known as a content-based router.
A drawback to using message routers is that if the routing rules change frequently, the message router will need to be modified just as often. To help remedy this, if the rules are expected to change frequently, configurable routing rules (e.g., via XML) could be employed to make them easier to manage.
Let’s now consider another scenario wherein it’s left up to the subscribers to determine which messages they’re interested in; i.e., subscribers will be responsible for filtering out the messages they’re uninterested in. In this reactive routing scenario, each subscriber would provide a respective message filter which is similar to a message router, but simply forwards, or does not forward, a message onto a subsequent channel that the destination subscriber is listening to. Frequently, message filters decide to forward, or not forward, based on content in the message itself; e.g., only forwarding orders that have a coupon included. While being similar to a message router in basic functionality, a message filter only has one possible channel to forward the message onto.
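The “orders with a coupon” rule mentioned above can be sketched as a filter; the channel and message types are illustrative stand-ins:

```cpp
// Sketch of a reactive message filter: forward a message onto the
// subscriber's channel only when its content passes the filter's test.

#include <string>
#include <vector>

struct OrderMessage {
  std::string orderId;
  bool hasCoupon;
};

// Stand-in for the channel the destination subscriber listens to.
class Channel {
public:
  void publish(const OrderMessage& message) { messages.push_back(message); }
  std::vector<OrderMessage> messages;
};

class CouponOrderFilter {
public:
  explicit CouponOrderFilter(Channel& outputChannel)
    : outputChannel_(outputChannel) {}

  // Called for every message on the input channel.  Unlike a router,
  // there is only one possible destination -- forward or drop.
  void onMessage(const OrderMessage& message) {
    if (message.hasCoupon)
      outputChannel_.publish(message);
  }

private:
  Channel& outputChannel_;
};
```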
Deciding between predictive and reactive filtering must take into account a number of considerations. Is the message content sensitive? Do you need to minimize network traffic? Do you need to be able to add and remove subscribers easily? Is the predictive router becoming a bottleneck of message dissemination? For further guidance on selecting among routing options, see (Hohpe, 2003), pp. 241-242.
Each messaging middleware option (e.g., CCR, ROS) has unique requirements for communicating with it. Each has its own API, its own means of addressing channels, and its own rules for packaging messages. Ideally, the components of the system should not be aware of the specifics of communicating with the messaging middleware. Furthermore, while such a change is unlikely, it should be possible to replace the middleware with another, requiring few, if any, changes to the components. Accordingly, the components must be loosely coupled to the messaging middleware. Message endpoints provide the bridge between the domain of each component and the API of the messaging middleware.
Message endpoints are similar in nature to repositories when communicating with a database. Repositories encapsulate the code required to communicate with a database to store and retrieve data while being able to convert information from the database into domain objects. If the database changes, or if the mechanism for database communication changes (e.g., ADO.NET to NHibernate), then, ideally, only the repositories are affected. The rest of the application knows little about database communications outside of the repository interfaces. Likewise, components of a message-based system should not be aware of messaging details outside of the message endpoint interfaces, which provide the means to send and receive data. The message endpoint accepts a command or data, converts it into a message, and publishes it onto the correct channel. Additionally, the message endpoint receives messages from a channel, converts the content into the domain of the component, and passes the domain objects to the component for further action. Internally, the message endpoint implements a message mapper to convert between the component domain objects and the canonical data model.
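The loose coupling described above might be sketched as follows; every name here is illustrative, and the fake endpoint shows how the interface enables unit testing without any middleware:

```cpp
// Sketch of the endpoint idea: the component depends only on an
// endpoint interface, so the middleware-speaking implementation can be
// swapped for a fake in unit tests.

#include <vector>

struct LaserScan { float angle_min; };

// Defined in the component's core; no middleware types appear here.
class ILaserScanEndpoint {
public:
  virtual ~ILaserScanEndpoint() {}
  virtual void publish(const LaserScan& scan) = 0;
};

// Test double: records what the application services layer publishes.
// The real implementation would translate the scan to the canonical
// data model and place it on the appropriate channel.
class FakeLaserScanEndpoint : public ILaserScanEndpoint {
public:
  void publish(const LaserScan& scan) override { published.push_back(scan); }
  std::vector<LaserScan> published;
};

// Application services layer: holds only the interface, injected in.
class LaserScanReportingService {
public:
  explicit LaserScanReportingService(ILaserScanEndpoint& endpoint)
    : endpoint_(endpoint) {}

  void reportScan(const LaserScan& scan) { endpoint_.publish(scan); }

private:
  ILaserScanEndpoint& endpoint_;
};
```

Because the service takes the endpoint through its interface, swapping the fake for a ROS-backed implementation requires no change to the service itself.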
While posting to a channel is rather straight forward, a message endpoint may receive a message by acting as a:
Many messaging middleware solutions include support for transactions that the message endpoints may leverage, making the endpoints transactional clients. To illustrate the need for this, suppose the receiver of a request-reply command crashes just moments after the command message is consumed and removed from the channel. When it recovers, the command message is lost and the sender will never receive a reply. Using a transaction, the command message is not removed from the channel until the response is completed and sent. Committing the transaction removes the command message from the channel and adds the reply document message to the reply channel.
There are a few recommendations which should be considered when developing message endpoints. In accordance with the single responsibility principle (SRP), message endpoints should be able to receive messages or send messages, but not both in the same message endpoint. Furthermore, a message endpoint should only communicate with one message channel. If a component needs to send a message on separate channels, it would leverage multiple message endpoints to do so. If your components are developed in line with DDD, it would be each component’s application services layer which would communicate with the message endpoints, preferably via their interfaces instead of concrete instances. This facilitates swapping out the message endpoints with mock objects for unit testing. Leveraging separated interfaces and dependency injection helps enable this approach. While these guidelines introduce more objects and indirection, they are proven practices for increasing maintainability of the component and reusability of the message endpoints.
Performance of Message-Based Systems
One would likely be quick to assume that developing a message-based system has a huge cost to performance. Domain objects are converted to messages, messages are passed over channels, routers and filters intercept and forward messages, and messages are converted back into domain objects…this sounds like a heck of a lot going on. But because each component executes in its own thread or process, they need not wait for other components to complete their job before being able to move on to handling another message or task.
In the figure above (adapted from Hohpe, 2003, pg. 73), note that a sequential process requires that each message life cycle be completed in full before moving on to the next message. But in an asynchronous, message-based system, each component can move on to a subsequent message as soon as it has completed its part in the last one. This greatly compensates for the extra overhead imparted by the messaging infrastructure.
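This pipelining effect can be demonstrated with a minimal sketch using in-process queues as channels, assuming a simple two-stage pipeline with sentinel-based shutdown (the stage functions and transforms are invented for illustration):

```python
import queue
import threading


def stage(inbox, outbox, transform):
    """A pipeline component: process one message, hand it off, and
    immediately pick up the next -- it never waits on downstream stages."""
    while True:
        message = inbox.get()
        if message is None:          # sentinel: propagate shutdown
            outbox.put(None)
            break
        outbox.put(transform(message))


# Wire up two stages with channels (queues) between them.
ch_in, a_to_b, b_to_out = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=stage, args=(ch_in, a_to_b, lambda m: m + 1)).start()
threading.Thread(target=stage, args=(a_to_b, b_to_out, lambda m: m * 2)).start()

for m in [1, 2, 3]:
    ch_in.put(m)
ch_in.put(None)

results = []
while (m := b_to_out.get()) is not None:
    results.append(m)
print(results)  # [4, 6, 8]
```

While stage two is doubling message 1, stage one is already incrementing message 2; total throughput approaches that of the slowest stage rather than the sum of all stages.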
With that said, there are some component responsibilities that take a long time to complete and may impede the rate at which messages are processed. For example, imagine a component which takes images from a web cam and extracts information such as human figures or road signs. This is likely a time-consuming process and would impact the turnaround time in which the component could process subsequent messages. To alleviate this bottleneck, multiple instances of the same component may subscribe to the same point-to-point channel. Recall that a point-to-point channel ensures that each message only has a single receiver. The instances of the component then become competing consumers of each message. So if one instance of the component is still processing an earlier message, another instance can grab the next message that arrives for concurrent processing. Power in numbers!
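The competing-consumers pattern can be sketched with a shared queue as the point-to-point channel; the worker function and message payloads below are invented for illustration:

```python
import queue
import threading
import time

work = queue.Queue()      # point-to-point channel: each message has one receiver
processed = []
lock = threading.Lock()


def consumer(name):
    """Competing consumer: the queue delivers each message to exactly one
    instance, so a slow message does not block the remaining messages."""
    while True:
        msg = work.get()
        if msg is None:   # sentinel: shut this instance down
            break
        time.sleep(0.01)  # stand-in for slow work (e.g. image analysis)
        with lock:
            processed.append((name, msg))


workers = [threading.Thread(target=consumer, args=(f"worker-{i}",))
           for i in range(3)]
for w in workers:
    w.start()
for msg in range(6):
    work.put(msg)
for _ in workers:
    work.put(None)        # one sentinel per consumer instance
for w in workers:
    w.join()

print(sorted(m for _, m in processed))  # [0, 1, 2, 3, 4, 5]
```

Every message is processed exactly once, but which instance handles which message is nondeterministic; that is precisely the load-balancing behavior the pattern trades message ordering for.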
Monitoring and Debugging
Due to the widely asynchronous nature of message-based systems, attention must be given to monitoring and debugging techniques to observe system behavior and iron out problems.
Logging message content is an invaluable measure towards getting a clear look at what communications are taking place. To facilitate this, logging could be added directly into sending and receiving components, but logging message content should not be a concern of components; accordingly, using a monitoring utility to log such information is a cleaner separation of concerns. Certainly a benefit of publish-subscribe channels is that a monitoring component may subscribe to all messages and log the information to a file or console window. While it’s just as valuable to monitor messages on point-to-point channels, if a monitor were to consume a message over a point-to-point channel, the message would be noted as consumed and would no longer be available to the intended receiver. Because such monitoring capability is so helpful in developing and debugging, many messaging middleware options include a “peek” option which allows a monitoring utility to review message contents on a point-to-point channel without actually consuming the message. This capability should be taken into consideration when comparing messaging middleware alternatives.
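The monitor-as-subscriber idea is easy to see with a toy publish-subscribe channel, where every subscriber gets its own copy of each message (the class and callbacks here are illustrative, not a real middleware API):

```python
class PublishSubscribeChannel:
    """Toy pub-sub channel: each subscriber receives every published
    message, so a monitor can log traffic without consuming it."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, message):
        for callback in self._subscribers:
            callback(message)


monitor_log = []
channel = PublishSubscribeChannel()
channel.subscribe(monitor_log.append)              # monitoring utility
channel.subscribe(lambda m: None)                  # the intended receiver
channel.publish({"type": "status", "ok": True})
```

On a point-to-point channel this trick would steal the message from the intended receiver, which is why middleware-provided "peek" support matters there.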
In addition to monitoring message content sent over channels, it’s also helpful to monitor active subscriptions to various channels to accurately determine which components are receiving which messages. This capability is typically built into messaging middleware solutions and is very useful during development.
It’s possible that even if a message is routed appropriately, the receiving component may not know what to do with the message due to an incorrectly formatted header or body content. Invalid messages such as this should be forwarded to an invalid message channel, which an error logging utility would monitor and log accordingly. An invalid message channel is set up just like any other channel but with the intention of exposing such messages for debugging purposes.
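Routing malformed messages to an invalid message channel might look like the following sketch, where the validity check and channel wiring are invented for illustration:

```python
import queue

valid_channel = queue.Queue()
invalid_channel = queue.Queue()   # exists purely so bad messages are visible


def receive(message):
    """Divert messages the component cannot interpret to the invalid
    message channel, where an error-logging utility can observe them."""
    if not isinstance(message, dict) or "type" not in message:
        invalid_channel.put(message)
    else:
        valid_channel.put(message)


receive({"type": "command", "body": "move"})
receive("garbled bytes")          # malformed: wrong structure entirely
```

The key point is that the bad message is preserved somewhere observable rather than being silently dropped or crashing the receiver.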
Developing asynchronous, message-based systems requires a paradigm shift from the more traditional, synchronously executed applications that most people are familiar with. It is not appropriate for every application domain and should be seen as one additional architectural option to consider when developing applications. But in some domains, such as robotics or the integration of disparate applications, this approach to development is absolutely pivotal in providing responsive behavior without sacrificing maintainability of the overall system. Indeed, by splitting responsibilities into discrete components, loosely coupled to each other via messaging middleware, immensely complex problems can be broken down into understandable chunks while remaining flexible enough to accommodate changes to the underlying middleware or the introduction of new components.
In the next couple of posts, we’ll look at a checklist for developing message-based systems followed with examples in CCR and ROS.
Hohpe, G., Woolf, B. 2003. Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions. Addison-Wesley.
Siciliano, B., Khatib, O. 2008. Springer Handbook of Robotics. Springer.
© 2011-2014 Codai, Inc. All Rights Reserved