Sharp ideas for the software side of robotics

Serial Communications with SICK S300 Standard

Note that this is for communications with the SICK S300 Standard, not the S300 Professional. If you have an S300 Professional, see the cob_sick_s300 ROS package.

As much as I am a software guy, you’re simply not going to avoid hardware for long in the world of robotics.  Sure, simulators are great, but it’s impossible to fully replace the nuances (and excitement and frustration) of developing robots with actual hardware.  Furthermore, the “noise” generated by live robotics is hard to adequately mimic in the simulated environment; e.g., missed laser scans, bumpy surfaces, sensors that aren’t perfectly aligned with where you think they are, dealing with hardware drivers, etc.

Unfortunately, the hardware side of robotics is far from cheap if your research is self-funded (read – I am still accepting research donations).  One of the most essential, and expensive, elements of modern robotics is a solid range-finding sensor.  While they have been around for some time, laser range-finders are finally compact and affordable enough to augment any roboticist’s tool bench.  The challenge is finding one with the right balance of size, price, power consumption and capability for your project’s needs.

The Hokuyo URG-04LX-UG01 Laser is a great choice for basic range-finding needs; it’s very compact (e.g., can fit in the head of an Aldebaran Nao), weighs little, has a great price point (around $1,200…Nao head not included) and provides precise 2D range details out to 5.6 m.  Moving up to 3D point cloud information, the Mesa Imaging SR4000 provides an insane amount of range information within a narrower range of sensing (43° or 69°) out to 10 m, ideal for highly precise applications, such as interpreting facial expressions (if you’re not using a camera for some reason) or tracking slight body motions…all for a cool $9,000.  On the other end of the spectrum, the XBox 360 Kinect also provides point cloud information out to 5 m at a fraction of the cost ($150, assuming you can find it during the Christmas rush); the Kinect also has newly available drivers for ROS.  (I can’t imagine a better starting point than the Kinect.)

For my own needs, I was looking for something that could provide a greater range than the URG-04LX-UG01 without precluding the ability to collect point cloud information at an extended range as well.  This can be achieved, without spending your entire research budget, by obtaining a 30 m laser range-finder and mounting it on a tilt system, such as a SPT400 Tilt System.  The pairing won’t be as real-time as an SR4000 (you’ll need to remain stationary while tilting the laser for the most accurate point cloud), but it makes the 30 m point-cloud generator far more affordable.  (If you don’t need 30 m capabilities…stop here and go for the Kinect!)

As for the range finder, a 30 m laser range finder can cost around $4,000 – $5,500 brand spanking new; great choices being the Hokuyo UTM-30LX and the SICK LMS100.  But if you keep your eye on eBay, you can pick up a comparable laser range finder for far less; I was fortunate enough to find a new SICK S300 Standard priced more like a URG-04LX-UG01, providing the 30 m range data I was looking for.  Unfortunately, the S300 Standard does not provide programmatic access through its “system plug” (the base port) like its bigger (and more expensive) brother, the SICK S300 Professional.  With that said, the S300 Standard does provide communications via an RS-232 diagnostics port on the front of the scanner.  It’s limited to 38400 baud, but it’s enough to provide 2.75 scans per second, encompassing 270° of range information at 1/2° increments.  Unfortunately, the S300 Standard documentation is almost useless for taking this approach; accordingly, this post (after the above segue) provides an overview of programmatic communications with the SICK S300 Standard via the diagnostics port.

Serial Communications with SICK S300 Standard

The SICK S300 Standard diagnostic port provides 38400 baud reporting of laser scan data over RS-232 serial communications.  There are three general steps to requesting laser scans via this approach:

  1. A “token” must be programmatically requested and allocated by the S300 in order for scan data to be requested.  Requesting a token informs the S300 which communication port will be used for data exchange.
  2. After successfully requesting a token to be allocated, requests (max 2.75 Hz) may be sent for laser scan information.
  3. The token may be released when no more scans are needed.  If not released, the token will be automatically released during a power reset.  (There isn’t much need to release the token if you’re only communicating via the diagnostics port.)

We’ll now detail how each phase of communications is accomplished, beginning with setting up the system for initial testing.

Initial System Setup

Instead of diving right into writing a C++ application for managing the communications, it’s best to confirm and demonstrate successful communications with the S300 via raw serial communications using a serial utility.  But before we get even that far, the hardware itself needs to be set up.  To communicate with the S300 Standard, the following equipment is required (model numbers are further detailed here):

  • SICK S300 Standard S30B-2011BA (obviously)
  • Standard 11-wire System Plug SX0B-B1105G.  The system plug provides separate wire access for powering the device and for accessing the safety output wires.  Accordingly, if you’re simply looking to use the S300 Standard for laser range scans, and do not require the safety features, the system plug is optional; alternatively, power source wires may be soldered directly to the base of the S300.  (But the system plug sure looks nice, gives the S300 a flat surface for mounting it on a mobile robot base, and decreases the chance of damaging the S300 with a solder gun.)
  • S300 Programming Cable 6021195.  This cable plugs into the front of the S300, via the diagnostic port, and has an RS-232 connector at the other end for communications from a computer.
  • 24 VDC, 2.1 Amp Power Supply (PS) 50W-24V.  This power supply facilitates running the S300 from an AC outlet.  Alternatively, you can use a 24 VDC battery (and surely will on a mobile robot)…but the AC connection is very handy during development.  This power supply can typically be found on eBay for far less than retail.

The hardware setup is then quite simple, as follows:

  1. Wire the power supply by taking a three-wire cord, such as an old monitor cord, and splicing the wires at the monitor end; e.g., ending up with brown (positive), blue (neutral), and green-yellow (ground) wires.  The wires will be inserted into the bottom-front of the power supply with brown going to “L,” blue going to “N” and green-yellow going to the remaining ground connector.
  2. From the S300, only three wires are used coming from the system plug for powering the laser scanner; all other wires are unused and may be wrapped (separately) in electrical tape.  Line FE (ground – green) should be inserted into the same power supply connector that the green-yellow ground wire was inserted into.  Line 1 (+24V DC – brown) should be inserted into the top-front connector of the power supply labeled “+.”  Finally, line 2 (0V DC – blue) should be inserted into the connector, two over from line 1, labeled “-.”

After providing power to the S300 Standard, and before serial communications are possible, the S300 Standard must be initialized via the Configuration & Diagnostics Software (CDS); until that is done, it’ll show the number “6” on its display when powered up.  Since the S300 Standard is a safety scanner, with its default settings the laser scanning will stop (and it would send a signal to the vehicle it’s mounted on) if an object enters the protective field range.  So within the CDS, set the scanner for automatic interlock restart to disable this stop-on-intrusion behavior.  (Certainly not recommended if you actually need the safety features…the S300 operating instructions give all the details on this.)  Once properly configured, the S300 Standard should show “1” on its display with a green checkbox displayed.

Now that the scanner is powered and configured, it’s time to test basic communications.  If using Windows, a tool such as Advanced Serial Port Monitor may be used to send serial requests to the S300 and to read from it, accordingly.  In a Linux environment, such as Ubuntu Lucid Lynx, there are two tools that I prefer for testing serial communications: jpnevulator to write commands and slsnif to listen for responses.  Using these utilities, take the following steps to set up:

  1. Hook up the S300 Standard diagnostic cable to a port, such as /dev/ttyUSB0.  If you’re unsure of which port you are plugging in to, run the following command in a terminal, before plugging it into the port; it’ll let you know which port has been connected to:
    tail -f /var/log/messages
  2. Open a terminal for monitoring responses from the S300:
    slsnif -s 38400 -n -x -u /dev/ttyUSB0
  3. Open a second terminal for manually sending serial commands to the S300:
    jpnevulator --ascii --tty /dev/ttyUSB0 --write

We’re now ready to request a token and laser scans…

Token Request

As mentioned, before being able to request laser scan data, a token must be requested to inform the S300 which communication port we’ll be requesting scan data on.  In this case, communications are over the diagnostic port.  To request a token, send the following hex command via the jpnevulator terminal by simply copying/pasting the following command:

00 00 41 44 19 00 00 05 FF 07 19 00 00 05 FF 07 07 0F 9F D0

If this command was successfully sent, the slsnif terminal should return 4 null byte-pairs as 00 00 00 00.  If this response was not received, verify the port of communication (e.g., /dev/ttyUSB0 vs. /dev/ttyUSB1) and try again.
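For reference, this exchange can also be scripted.  The sketch below assumes the pyserial library and the /dev/ttyUSB0 port from the setup steps above; request_token is my own helper name, and the 8-byte read length is an assumption to adjust against what your slsnif session actually shows:

```python
# Token request telegram from above, as raw bytes:
TOKEN_REQUEST = bytes.fromhex(
    "00 00 41 44 19 00 00 05 FF 07 19 00 00 05 FF 07 07 0F 9F D0"
)

def request_token(port="/dev/ttyUSB0"):
    """Send the token request and return the S300's raw reply."""
    import serial  # pyserial; imported lazily so the telegram is usable without it
    with serial.Serial(port, baudrate=38400, timeout=1.0) as s:
        s.write(TOKEN_REQUEST)
        return s.read(8)  # null byte-pairs expected back on success
```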

Scan Request

After successfully requesting a token, laser scans may now be requested.  Send the following to request a laser scan:

00 00 45 44 0C 00 02 22 FF 07

This request should return a 1096 byte-pair response.  The response is interpreted as follows:

Description                            Position    # of Byte-Pairs   Example
Telegram identifier                    0-1         2                 00 00
Designation indicating reply           2           1                 00
Error indicator (none if 0)            3           1                 00
Echo of last 6 byte-pairs of request   4-9         6                 0c 00 02 22 ff 07
Unknown                                10-11       2                 00 08
Data of scans 1-541                    12-1093     1082              See below
TGM CRC (checksum)                     1094-1095   2                 cd 8c

In practice, the first 12 byte-pairs may be ignored assuming the fourth byte-pair (position 3) is null (00).
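A sketch of slicing a reply according to this table.  Treating each byte-pair as two bytes (so byte-pair position p spans bytes 2p and 2p+1) is my assumption about the layout, as are the helper names:

```python
# Scan request telegram from above, as raw bytes:
SCAN_REQUEST = bytes.fromhex("00 00 45 44 0C 00 02 22 FF 07")

def check_scan_header(resp):
    """True if the reply's error indicator (byte-pair position 3) is null."""
    if len(resp) != 2192:  # 1096 byte-pairs of two bytes each
        raise ValueError("unexpected reply length: %d bytes" % len(resp))
    return resp[6:8] == b"\x00\x00"

def scan_data(resp):
    """The 1082 byte-pairs of scan data (byte-pair positions 12-1093)."""
    return resp[24:2188]
```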

Each element of the scan data is made up of two byte-pairs (four bytes) and must be broken down to the bit level to parse the information. The first byte-pair may be ignored as it concerns itself with the S300’s safety features. The second byte-pair contains information stored across 16 bits. Bits 0-12 express the range distance in cm. The only other useful bit (pun intended) is bit 13, which indicates if glare (dazzling) was detected; an indication that the range measurement may be subjectively ignored, or its confidence variance increased for that datum.
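That bit-level breakdown can be sketched as follows; the little-endian byte order within the byte-pair is an assumption on my part, so swap to "big" if distances look wrong on your unit:

```python
def parse_scan_element(elem):
    """Parse one 4-byte scan element into (distance_cm, glare)."""
    assert len(elem) == 4
    word = int.from_bytes(elem[2:4], "little")  # second byte-pair only
    distance_cm = word & 0x1FFF                 # bits 0-12: range in cm
    glare = bool(word & (1 << 13))              # bit 13: dazzling detected
    return distance_cm, glare
```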

The TGM CRC is a cyclic-redundancy-check (or checksum) to test the integrity of the scan report. Googling for that fancy phrase will provide algorithms for its implementation.
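As a starting point, here is one common bitwise CRC-16 formulation (the CCITT polynomial 0x1021 with initial value 0xFFFF).  Whether this exact variant matches the S300's TGM CRC is an assumption; verify it against a captured telegram or the cob_sick_s300 source before trusting it:

```python
def crc16(data, poly=0x1021, init=0xFFFF):
    """Bitwise CRC-16 sketch; polynomial and init value are assumptions."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) if crc & 0x8000 else crc << 1
            crc &= 0xFFFF  # keep it a 16-bit register
    return crc
```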

The care-o-bot‘s S300 ROS package provides terrific example code for parsing range data and computing the CRC. (This package won’t work out-of-the-box as the care-o-bot’s S300 driver is intended for the S300 Professional which has quite different header information…but it doesn’t take much effort to modify it for the S300 Standard. I’ll make an announcement once I publish the ROS package that I’ve developed which works with the S300 Standard.)

Token Release

If you’re dotting your i’s and crossing your t’s, you should request a token release when complete.  Do so by sending the following command:

00 00 41 44 19 00 00 05 FF 07 19 00 00 05 FF 07 00 00 E7 B8

This isn’t required if you’ll only ever be communicating via the diagnostics port, but it surprisingly makes you feel clean and tidy if you do it, nonetheless.

The next step is to write a program which sends the described commands to the S300 Standard and parses responses, accordingly. If you don’t want to build on the care-o-bot code, simple, reusable serial-port communications code for sending and receiving data in C++ may be found within the ROS package serial_port; an example package which uses it is the ax12-arm.

With this, the 30 meter ranging capabilities of the S300 Standard are available for programmatic use.

Billy McCafferty

Introduction to the Kalman Filter, Part II

As discussed in Part I, the Kalman filter is an optimal, prediction/correction estimation technique which minimizes estimated error and can be used on a wide array of problems.  While Part I focused on a one-dimensional problem space to more easily convey the underlying concepts of the Kalman filter, Part II will now expand the discussion to bring the technique to higher dimensions (e.g., 2D and 3D) while still being constrained to linear problems.

What’s so cool about the Kalman filter you ask?  Let’s highlight a few areas where the Kalman filter may provide value.  (This should help you remain motivated while you’re delving into a myriad of Greek symbols and matrix transformations!)  The Kalman filter can help:

  • Determine the true pose of a mobile robot given its control input and measurements;
  • Provide sensor fusion capabilities by further correcting an estimate with each subsequent measurement input;
  • Track an object within a video (e.g., face tracking); and
  • Determine the attitude of a satellite using measurement of star locations.

These few examples demonstrate just how useful the Kalman filter can be on a wide variety of problems.  To limit the scope of discussions, the context of this tutorial will be determining the true pose of a mobile robot given noisy control inputs and measurement data.

In examining the Kalman filter a bit more, we’ll discuss the following topics:

  • Applicable systems for use of the Kalman filter
  • Kalman filter algorithm, inputs and outputs
  • Kalman filter algorithm formalized
  • Limitations of the Kalman filter
  • Direction for further study

The current discussion will avoid the derivations of the associated, base equations in favor of pragmatic use of the filter itself.  For a comprehensive derivation and discussion of the involved equations and mathematical roots, see Robert Stengel’s Optimal Control and Estimation.

Applicable Systems for Use of the Kalman filter

In Part I, it was discussed that a system must adhere to three constraints for Kalman filter applicability:  1) it must be describable by a linear model, 2) the noise within the model must be white, and 3) the noise within the model must be Gaussian.  More precisely, the state must be estimable via:

xk = Axk-1 + Buk + wk-1

The state estimation equation is broken down as follows:

  • xk is the estimate of the state after taking into consideration the previous state, the control input, and process noise.
  • xk-1 is the state estimate from the previous time iteration (k-1) where k is the current time iteration.  The state will almost surely be a vector; e.g., (x y θ)T for a mobile robot on a 2D plane having a location (x, y) and orientation θ.
  • uk is the control input for the current time iteration.  While being sensor data in the strict sense, odometry measurements may be considered control input; and would be a good fit for our mobile robot example. Accordingly, control input could be represented as a three dimensional vector containing an initial rotation (in radians), a translation distance (distance travelled in a straight line), and a second rotation (δrot1 δtrans δrot2)T.
  • wk-1 is the process noise, or the amount of error inherent in the state estimation when taking into consideration noisy control input.  (The process noise is Gaussian, having mean μ of 0 and covariance Σ.)  The process noise is a vector having the same dimension n as the state vector.
  • A is a matrix which transforms the previous state into the current state without regard to any control input.  For example, if the system being modeled were a satellite orbiting the earth, A would be a matrix which would modify the state to reflect the orbital distance traveled between time iterations k and (k – 1).  In more earth-bound scenarios, A would be an identity matrix.  Regardless, A is a square matrix of size (nxn) where n is the dimension of the state vector xk.
  • B is a matrix which transforms the control input to be compatible for summing to the previous state to reflect the current state.  Within our mobile robot context, the control input is (δrot1 δtrans δrot2)T which cannot simply be added to the previous state to determine the current state; accordingly, B must transform uk into a vector reflecting the relative state change induced by the control input.  For example, if the control input were to move the robot 55 mm on the x axis, 120 mm on the y axis, and 0.8 rad, then the result of Buk would be (55mm 120mm 0.8 rad) which could then be easily summed to the previous state to get the current state.  B is a matrix of size (nxl) where n is the dimension of the state vector and l is the dimension of the control vector.
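To make the process equation concrete, here is a small numpy sketch using the numbers from the example above.  Since B for a real robot depends on the current heading (a hint of the nonlinearity that eventually leads to the Extended Kalman filter), the sketch uses the already-transformed result Buk directly; the starting pose is made up:

```python
import numpy as np

A = np.eye(3)                            # earth-bound scenario: identity matrix
x_prev = np.array([100.0, 200.0, 0.1])   # previous pose (x, y, theta) in mm and rad
Bu = np.array([55.0, 120.0, 0.8])        # B @ u_k, precomputed per the example above

x_k = A @ x_prev + Bu                    # process noise w_{k-1} omitted in this sketch
# x_k is now approximately (155, 320, 0.9)
```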

In addition to adhering to the linear state estimation just discussed, in order to be Kalman-filter-compatible, the system being modeled must also have a measurement model estimable via:

zk = Hxk + vk

The measurement equation is broken down as follows:

  • zk is the predicted measurement model after taking into account the estimated state and known measurement noise.  Don’t take that at face value; ask yourself why it’s important to be able to predict what the measurement model will be for a given state.  Spoiler alert…  If we’re able to predict what the measurement model should be for a given state, then we can compare the predicted measurement model against the actual measurement data returned by our sensors.  The difference between the two will be used for improving the state estimation within the Kalman filter algorithm.  For our little robot, the measurement model may be a vector of laser scans (s1 s2 … sn)T, with each scan having an orientation and range (x θ)T.
  • xk is the result of the state estimation equation discussed above.
  • vk is the measurement noise, or the amount of error inherent in the measurement estimation when taking into consideration noisy sensor input.  (The measurement noise is Gaussian, having mean μ of 0 and covariance Σ.)  The measurement noise is a vector having the same dimension m as the resulting measurement vector.
  • H is a matrix which transforms the state xk into the predicted measurement model.  In other words, if given the state xk, Hxk will calculate what the measurement model should look like if there were no uncertainty involved.  H is of size (mxn) where m is the dimension of the measurement vector zk and n is the dimension of the state vector.
    And to answer your question, yes, H can be quite onerous to fabricate.  In fact, H may actually be implemented as a function, accepting a state and returning a measurement model based on the known map, estimated location and orientation; the Extended Kalman filter or other extension would need to be leveraged if we digressed in this way from our “simple” linear model.

In summary, if the system can be modeled by the process and measurement equations described above, then the Kalman filter may be used on the system to estimate state, if given control and measurement inputs.  Let’s now look at the general Kalman filter algorithm, at a very high level, including specific inputs and outputs.

Kalman Filter Algorithm, Inputs and Outputs

The Kalman filter follows a surprisingly straightforward algorithm broken down into two phases.  The first phase is called the time update (or prediction) in which the previous state and control input are used to estimate the current state and estimate covariance.  The second phase is called the measurement update (or correction) in which the Kalman gain is calculated and the state estimate and covariance are improved upon using measurement data and the Kalman gain.  Roughly, the algorithm is as follows:

  1. Estimate the predicted state based on process (control) input.  This estimate is the a priori estimate.
  2. Calculate the state estimate covariance (our confidence in the state estimate).
  3. Calculate the Kalman gain which will be used for weighting the amount of correction the measurement data will have on the state estimate.
  4. Refine the estimated state using measurement input.  This refined estimate is the a posteriori estimate.
  5. Refine the state estimate covariance (our confidence in the state estimate after taking measurement data into account).

During the first time iteration t0, the Kalman filter accepts as input the initial state and estimate covariance (which may be zero if the initial state is known with 100% certainty) along with the control input u and measurement data z.  On subsequent time iterations tn, the Kalman filter accepts as input the output from the previous run (with mean and covariance – discussed more below) along with the control input u and measurement data z from tn.

The output of the Kalman filter is an estimate of the state represented by a normal distribution having mean μ (the estimated state) and covariance Σ (the confidence, or more accurately, the noise, in that estimate).  (As a reminder, the covariance of a normal distribution is the standard deviation squared σ2.)  Note that μ need not be limited to a scalar value; in fact, it’ll almost always be a vector.  For example, the pose of a mobile robot may be a three dimensional vector containing the location and orientation (x y θ)T.  Accordingly, this vector would be the resulting mean value.  Furthermore, with a three dimensional state vector as the mean, the covariance Σ would be a (3x3) diagonal matrix having a covariance for each corresponding value of the vector, as shown at right.

Kalman Filter Algorithm Formalized

We’ve discussed the initial Kalman filter equations for process and measurement estimation; we’ve also discussed the overall algorithm for implementation, broken down into prediction and correction phases.  What’s missing are the actual calculations for concretely carrying out the estimation process itself.  The concrete calculations for implementing the Kalman filter algorithm are “easily” derived from the process and measurement equations by taking the partial derivatives of them and setting them to zero for minimizing error…and jumping around three times and standing on your head for π minutes.  (My eyes quickly begin to glaze over when I start to follow derivations of this nature…but if you like this kind of stuff, Sebastian Thrun shows the complete derivation within Probabilistic Robotics; Robert Stengel takes it to 11 within Optimal Control and Estimation with more Greek symbols than you can shake a stick at.)  But I digress…

To formalize, the Kalman filter algorithm accepts four inputs:

  • μt-1 – the mean state vector
  • Σt-1 – the covariance of the mean (the error in the state estimate)
  • ut – the process (control) input
  • zt – the measurement data

With the given inputs, the Kalman filter algorithm is implemented as follows (where the bar denotes an a priori estimate):

  1. μ̄t = Aμt-1 + But
  2. Σ̄t = AΣt-1AT + R
  3. Kt = Σ̄tCT(CΣ̄tCT + Q)-1
  4. μt = μ̄t + Kt(zt – Cμ̄t)
  5. Σt = (I – KtC)Σ̄t

Line 1 should be comfortingly familiar; this is the calculation for estimating the current state given the previous state and control input.  But what’s missing from the original process equation?  Have you spotted it yet?  I’ll give you a noisy hint.  (No mean for the pun…thank you, thank you, I’ll be here all week.)  That’s right, the noise parameter has been left off of the state estimation equation in line 1.  Line 1 simply calculates the a priori state estimate, ignoring process noise.

Line 2 calculates the covariance of the current state estimate, taking process noise into consideration.  Matrix A has already been discussed; it comes from the Kalman filter state estimate equation described earlier.  R is a diagonal matrix representing the process noise covariance.

Line 3 calculates the Kalman gain which will be used to weight the effect of the measurement model when correcting the estimate.  C is identical to the matrix H described earlier in the base Kalman filter measurement equation.  As tricky as this line looks (and some of those matrix calculations can make your head hurt a bit), the only thing new is Q; this diagonal matrix is the measurement noise covariance.  The resulting Kalman gain K is a matrix having dimensions (nxm) where n is the dimension of the state vector and m is the dimension of the measurement vector.

(As a side, take note that in different reading sources, the meaning of R and Q may be switched; Q would be process noise and R would be measurement noise and would have their locations in the equations swapped, accordingly.  Just be cognizant of which is which within the source you’re reading from.)

Line 4 updates the state estimate taking into account the weighted measurement information.  Note that the Kalman gain is multiplied by the difference between the actual measurement model and the predicted measurement model.  What happens if they happen to be identical?  …Jeopardy daily double sounds playing in the background…  If the actual and predicted measurement models happen to be identical, then the estimated state will not be corrected at all since our sensors have verified that we’re exactly where we thought we were.  I.e., don’t fix what ain’t broken.  The result of line 4 is the a posteriori state estimate.

Finally, line 5 corrects the covariance, taking into account the Kalman gain used to correct the state estimate.

As output, the Kalman filter algorithm returns two values:

  • μt – the current mean state vector.
  • Σt – the current covariance of the mean (the error in the state estimate).  The covariance matrix would be a diagonal matrix having dimensions (nxn) where n is the dimension of the state vector.

With these outputs, it is now known with some Σ amount of error what the current state of the system is; or where our intrepid little robot is on the map.
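The five numbered lines of the algorithm translate almost directly into code.  Below is a minimal numpy sketch; the function and variable names are my own, not from any particular library:

```python
import numpy as np

def kalman_step(mu, Sigma, u, z, A, B, C, R, Q):
    """One prediction/correction cycle of the linear Kalman filter."""
    # Prediction (lines 1-2): a priori estimate from the process model
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R
    # Correction (lines 3-5): weight in the measurement via the Kalman gain
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new
```

As a quick sanity check with made-up 1D values: with every matrix set to [[1.0]], a measurement that exactly matches the prediction leaves the mean at its predicted value, while the covariance still shrinks from the predicted 2.0 down to 2/3.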

Limitations of the Kalman Filter

The Kalman filter is incredibly powerful and can be used in a surprising number of scenarios.  The primary limitation of the Kalman filter is that it assumes use within a linear system.  Many systems are non-linear (such as a mobile robot moving with a rotational trajectory) yet may still benefit from the Kalman filter.  The applicable approach is to form a linear estimate of the non-linear system for use by the Kalman filter; similar in effect to a Taylor series expansion.  Popular extensions to the Kalman filter to support non-linear systems include the Extended Kalman filter and, even better, the Unscented Kalman filter.  Specifically, chapter 7 of Sebastian Thrun’s Probabilistic Robotics goes into good detail on describing how to apply both of these extensions to the context of mobile robotics.

Googling for “Kalman filter” will quickly show just how much more there is to this topic.  But I certainly hope this two part series has helped to clarify the overall algorithm with particular attention to describing the various elements of the calculations themselves.

Billy McCafferty

Fundamentals: Vectors, Matrices and Matrix Operations

Our initial introduction to the Kalman filter was easy to understand because both the motion and measurement models were assumed to be one-dimensional. That’s great if you’re a lustrous point in Lineland, but the three dimensional world must be dealt with sooner or later. Specifically, within the initial introduction, location (or state) x, the control input u, and the measurement z were all scalar (numeric) values along a one-dimensional line. For actual use of the Kalman filter, x, u, and z are much more frequently vectors instead of scalar units. In order for the vectors to play nicely with one another (to add and subtract them from each other), matrices must be used to transform the vectors into a common form. Accordingly, before delving further into the Kalman filter, this post provides a basic review of matrices and matrix operations to better prepare ourselves for more gory Kalman filter details.

To better visualize why we need to be concerned with matrices, assume you are using the Kalman filter for localization of your humble robot on a 2D map. Furthermore, assume that the robot is holonomic on a 2D plane (can turn in a circle on a dime). We now need to create a model to adequately represent the pose (or “kinematic configuration” if you’re feeling fancy), the motion model (the control input), and the measurement model (which we’ll ignore for now to focus on matrices).

The state, x, or pose of our robot, is succinctly represented as a three-dimensional column vector made up of the x and y coordinates of the robot on the two-dimensional map along with the robot’s orientation relative to the x axis, represented as θ. (Note that the x positional component here is decidedly different than the x vector variable representing the overall pose.) This 3D column vector, representing the pose on a 2D plane, is shown at right.

The control input, u, or motion model of our robot, can be represented in various forms, examples of which are described in detail in Probabilistic Robotics [1]; but for the topic at hand, assume that the motion model is simply a constant velocity, v, between two ticks of time, represented as a 2D vector containing speed and direction.

With this information, if given the previous pose and the motion model over a given timeframe, we can then calculate the current pose. To do so, we’ll need a linear equation which adds the previous pose to the control input. But the velocity vector can’t simply be added to the vector representing the previous pose – we’re talking apples and oranges here. We’ll need a transformation matrix to transform the velocity into a 3D vector which can be added to the pose. This is starting to get into Part II of the Kalman filter introduction, but this starts to give you an idea of how matrices will be used in the Kalman filter.

So onward with our matrix primer!


As illustrated above, a column vector is an ordered set of values with n dimensions, where n is the number of values within the vector. The values within a vector need not be limited to being scalars; e.g., one or more values within the vector could also be a vector. By convention, a vector is assumed to be a column vector unless otherwise noted. A vector is symbolized as a bold-face, lower-case letter.

If all of the elements of a vector are 0, the vector is a null vector.

The transpose of a column vector is a row vector (and vice-versa) and has a superscript T to denote as such.


A matrix is a two-dimensional array of scalar values (or coefficients) having r rows and n columns, noted as having (rxn) dimensions. If both r and n are one, then the matrix is a scalar value. If just n is one, then the matrix is a column vector. If just r is one, then the matrix is a row vector. If r = n then the matrix is a square matrix. Matrices are symbolized as a bold-face, upper-case letter.

If all of the elements of a matrix are 0, the matrix is a null matrix. If all of the diagonal elements of a square matrix (e.g., a11, a22, …, ann) have a value while all others do not, the matrix is a diagonal matrix. If all of the diagonal elements of a diagonal, square matrix are 1, then the matrix is an identity matrix. An example identity matrix is shown at right.

The transpose of a matrix is the matrix “flipped” on its diagonal; it is created by writing the rows of A as the columns of AT.  Accordingly, the columns (n) and rows (r) of A will equal the rows (r) and columns (n) of AT, respectively; e.g., if A has the dimensions (2x3) then AT has the dimensions (3x2).

Matrix Operations

Scalar/Matrix Multiplication

When looking at available operations among scalars, vectors, and matrices, it’s easiest to start with the multiplication of a matrix by a scalar value.  Each value within the matrix is simply multiplied by the scalar; quite elementary indeed.

Matrix/Matrix Addition & Subtraction

The next trivial operation is matrix-to-matrix addition and subtraction.  Each value in the first matrix is added to, or subtracted from, the corresponding element in the second matrix.  In order to add or subtract two matrices, the matrices must have the same (r × n) dimensions.
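Both of these operations can be sketched in a few lines of plain Python, using nested lists as matrices (function names are my own):

```python
def scalar_mul(c, A):
    """Multiply every element of A by the scalar c."""
    return [[c * a for a in row] for row in A]

def mat_add(A, B):
    """Element-wise addition; A and B must share the same (r x n) dimensions."""
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "dimension mismatch"
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(scalar_mul(2, A))  # [[2, 4], [6, 8]]
print(mat_add(A, B))     # [[6, 8], [10, 12]]
```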

Matrix/Vector Multiplication

As mentioned in the opening of this review, it is necessary within the Kalman filter to transform a control vector, for example, into a state vector, so that it may be added to the previous state to calculate the current state.  This transformation is achieved by multiplying the control vector by a matrix representing how the control vector relates to the state.

In more generic terms, a resulting variable may be the result of a linear function of another vector and a matrix representing how the vector being acted upon relates to the result.  (You might want to read that again.)  The linear function for the result is written as y = Ax.  More simply put, A is a matrix which represents how the vector x relates to y, the result; accordingly, A transforms x into y.  In matrix-speak, this is a linear transformation.

In order to transform a vector by a matrix, the number of columns (n) of A must equal the dimension (n) of x.  Additionally, the number of rows (r) of A will equal the dimension (n) of y.  If these constraints hold, then A is said to be conformable to x.

The following demonstrates how each value of y is calculated:

  • y1 = a11x1 + a12x2 + … + a1nxn
  • y2 = a21x1 + a22x2 + … + a2nxn
  • …
  • yr = ar1x1 + ar2x2 + … + arnxn

Interestingly, if the matrix A is a diagonal matrix (square by implication), then each y value is the product of the corresponding x and diagonal value in the matrix. If the matrix A is an identity matrix (also square by implication), then each y value is equal to the corresponding x value.  Examples of each are shown at right.
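The calculation above can be sketched in plain Python (the function name is my own; a diagonal case and an identity case are included to illustrate the previous paragraph):

```python
def mat_vec(A, x):
    """y = Ax: the i-th entry of y is the dot product of row i of A with x."""
    assert len(A[0]) == len(x), "columns of A must equal the dimension of x"
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 2],
     [3, 4],
     [5, 6]]                   # (3 x 2): transforms a 2-vector into a 3-vector
print(mat_vec(A, [1, 1]))      # [3, 7, 11]

# A diagonal matrix just scales each component; an identity matrix leaves x unchanged.
print(mat_vec([[2, 0], [0, 3]], [4, 5]))  # [8, 15]
print(mat_vec([[1, 0], [0, 1]], [4, 5]))  # [4, 5]
```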

Matrix/Matrix Multiplication

The last topic worth mentioning in detail, in our rather elementary review of matrices and matrix operations, is that of multiplying two matrices together.

In order to take the product of two matrices, e.g., C = AB, the number of columns (n) of A must equal the number of rows (r) of B.  The result, C, will have the number of rows (r) of A and the number of columns (n) of B.

The following demonstrates how each value of C is calculated:

  • c11 = a11b11 + a12b21
  • c12 = a11b12 + a12b22
  • c21 = a21b11 + a22b21
  • c22 = a21b12 + a22b22
  • c31 = a31b11 + a32b21
  • c32 = a31b12 + a32b22

Even if both A and B are square, AB ≠ BA in general, due to the order in which rows and columns are multiplied and summed.  But when multiplying by an identity matrix, A = AI = IA.
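A quick sketch in plain Python, using nested lists as matrices (function name mine), demonstrates both the calculation and the two properties just mentioned:

```python
def mat_mul(A, B):
    """C = AB: c_ij is the dot product of row i of A with column j of B."""
    assert len(A[0]) == len(B), "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
I = [[1, 0], [0, 1]]
print(mat_mul(A, B))  # [[2, 1], [4, 3]]
print(mat_mul(B, A))  # [[3, 4], [1, 2]] -- AB != BA in general
print(mat_mul(A, I))  # [[1, 2], [3, 4]] -- A = AI = IA
```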

There is certainly much more to matrices and matrix operations, but the above gives enough to move on to Part II of our introduction to the Kalman filter and to understand the implications of matrices when used within controls and robotics literature.  Incidentally, this should also be enough information to understand just about every use of a vector and matrix within Sebastian Thrun’s Probabilistic Robotics (a highly recommended read if you’re interested in mobile robotics).  For a more comprehensive review of matrices and their use within control systems, there are few texts better (albeit a bit daunting) than Robert Stengel’s Optimal Control and Estimation.

Billy McCafferty

Introduction to the Kalman Filter, Part I

Dealing with the real world is rough. (With an opening like that, you can probably guess how my high school days went.) To be clear here, we’re talking about robots having to deal with the real world (not me, luckily). It’s hard enough for our little silicon brethren to have limited sensory capabilities, but on top of that, the data that’s coming in is usually noisy. For example, sonar range data can be inconsistent, throwing off distance measurements by many centimeters. Even laser data isn’t perfect; if you watch the range data coming in from stationary laser sensors, you’ll notice the range data shake indecisively around the actual range. What’s one to do? Well, you could shell out a little more money for ultra-sensitive sensors, but even they have their limitations and are subject to error. To add insult to injury, our robot’s misjudgment isn’t limited to data coming in from sensors, it’s also prone to misjudging what it’s done from a control perspective. For example, tell your robot to move forward 1 meter and it’ll comply as best it can; but due to various surface textures (road vs. dirt vs. carpet vs. sand), the robot may report that it’s traveled 1 meter when in fact it’s gone a bit further or a bit less. What we’d like to do is to filter out the chaff from the wheat…or the noise from the useful data in this case. Likely the most widely used, researched, and proven filtering technique is the Kalman filter. (Read: this is an important concept to understand!)

In short, if not a bit curtly, the Kalman filter is a recursive data processing algorithm which processes all available measurements, regardless of their precision, to estimate the current value of variables of interest, such as a robot’s position and orientation. There is a plethora of literature written concerning the Kalman filter, some useful and some otherwise; accordingly, to help you better understand the Kalman filter, I’ll guide you through a series of readings which I’ve found pivotally assistive in understanding this technique and discuss key points from those references.

The first step on our journey into the Kalman filter rabbit hole is Peter Maybeck’s Stochastic Models, Estimation, and Control, Vol. 1, Chapter 1 [1]. There is simply no writing which gives a more tractable and approachable overview of the concepts behind the Kalman filter. If you’re following along at home (singing along with the bouncing dot), please take the time to now read Maybeck’s introduction and take the time to understand what you’re reading…I’ll wait patiently for your return.

So what did we just learn? Below are a few points to emphasize key ideas from Maybeck’s introduction…

  • A Kalman filter can only be applied to a system exhibiting three attributes: it must be a linear model, the noise within the model must be white, and the noise within the model must be Gaussian (normal bell-shaped curve having a mean and standard deviation). (Interestingly, if we relax the first condition to accommodate non-linear models, we must use a technique known as an Extended Kalman filter which will be a subject of a successive post; and this is very important as once a robot starts taking a curved path, it becomes a non-linear problem.)
  • The Kalman filter is built upon the ideas encapsulated by Bayes’ filter. (For a great introduction to the Bayes filter, see chapter 2.4 of Sebastian Thrun’s Probabilistic Robotics [2].) Indeed, the Kalman filter is an optimal incarnation of Bayes’ filter: its result is the mean and the mode of the Gaussian result, it is the maximum likelihood estimate, and it is the linear estimate whose variance is less than that of any other linear unbiased estimate; in other words, from whichever angle you look at it, the result is just about as good as it gets.
  • There are two kinds of noise which need to be considered: process (or control) noise, which results from imprecise movement and manipulation control, and measurement noise, which results from imprecise sensory data, such as sonar and laser data. When calculating an estimate, the process and process noise are considered first, with the measurement and measurement noise being applied next.
  • The “white noise” of the system being analyzed need not be constant; a “shaping filter” can be used to describe how the noise changes over time assuming that the amplitude of the noise can be described at any point in time as a Gaussian distribution.
  • In reading Maybeck’s introduction, you may have been surprised to see that the control input is absent from equation 1-1 of the introduction. If you skip back to equation 1-16, you’ll be reminded why this is the case. As the variance of the process/control noise approaches infinity and the variance of the process prediction approaches infinity, the gain approaches 1. Accordingly, no “weight” is put on the control input while all trust is put on the measurement result itself. Now, going back to equation 1-1, since control isn’t being considered at all, it is effectively removed from the equation, just as if it had an infinite error variance.

Algorithmically, in our one-dimensional context, the Kalman filter takes the following steps to estimate the variable of interest:

  1. Calculate the predicted state based on process (control) input (1-11),
  2. Calculate the error variance of the predicted state (1-12),
  3. Calculate the gain which will be used to “correct” the predicted state when the measurement data is applied (1-15),
  4. Re-calculate the predicted state, taking into account measurement data and the gain (1-13), and
  5. Re-calculate the error variance taking into account the improvement that the measurement data has added to the prediction (1-14).
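Under Maybeck’s one-dimensional simplification, these five steps can be sketched in plain Python as follows (the equation numbers in the comments refer to Maybeck’s introduction; variable names are my own):

```python
def kalman_1d(x, p, u, z, q, r):
    """One cycle of the scalar Kalman filter per Maybeck's introduction.

    x, p : previous state estimate and its error variance
    u    : control input (e.g., commanded displacement)
    z, r : measurement and measurement-noise variance
    q    : process (control) noise variance
    """
    # 1. Predict the state from the control input (1-11).
    x_pred = x + u
    # 2. The prediction is less certain than the prior: variances add (1-12).
    p_pred = p + q
    # 3. Gain: how much to trust the measurement vs. the prediction (1-15).
    k = p_pred / (p_pred + r)
    # 4. Correct the prediction with the measurement (1-13).
    x_new = x_pred + k * (z - x_pred)
    # 5. The corrected estimate is more certain than the prediction (1-14).
    p_new = (1 - k) * p_pred
    return x_new, p_new

# Move 1 m from x = 0, then measure 1.2 m; with equal prediction and
# measurement variances, the gain is 0.5 and the estimate splits the difference.
x, p = kalman_1d(0.0, 1.0, 1.0, 1.2, 1.0, 2.0)
print(round(x, 3), round(p, 3))  # 1.1 1.0
```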

Isn’t that brilliant? The Kalman filter takes into account the previous state estimation, and the confidence of that estimation, to then determine the current estimate based on control input and measurement output, all the while tweaking itself – with the gain – along the way. And since it only needs to maintain the last estimate and confidence of the estimate, it takes up very little memory to implement. While very approachable, Maybeck’s introduction to the Kalman filter is greatly simplified…for that very reason. For example, Maybeck’s introduction assumes movement on a one-dimensional plane (an x value) with one-dimensional control input (the velocity). In more realistic contexts, we store the position and orientation of a robot as a vector and the control and measurement data as vectors as well. In the next post, we’ll examine what modifications need to be made to Maybeck’s simplification to apply the Kalman filter to more real world scenarios. This is where we must get our old college textbooks out to realize that we should have paid more attention in our matrices class (or you can refresh yourself here).

Until next time!
Billy McCafferty


[1] Maybeck, P. 1979. Stochastic Models, Estimation, and Control, Vol. 1.

[2] Thrun, S., Burgard, W., Fox, D. 2005. Probabilistic Robotics.

Fundamentals: Coordinate Frames

The world of robotics has a dizzying number of subjects; it’s quite overwhelming at first glance to figure out which topics someone “really needs to get” and which topics require a more cursory understanding. Accordingly, this will be the first in a number of posts (“number” being linearly proportional to my motivation) that I will be doing on some of the more fundamental topics within the realm of robotics. We’ll begin our travels with “coordinate frames.” What are coordinate frames you ask…well slow down there fella…let’s first take a step back to figure out the motivations for wanting to ask such a question to begin with.

A “robot” is typically defined (more or less) as an autonomous system which interacts with its environment. Interaction may include actual manipulation of the robot’s environment; this requires some sort of manipulator. In (Siciliano, 2009), a manipulator is described as “a kinematic chain of rigid bodies connected by means of revolute or prismatic joints. One end of the chain is constrained to a base, while an end-effector is mounted to the other end.” This academic explanation can be more easily understood by looking at an AX-12 Smart Robot Arm: the aluminum links are the “rigid bodies,” the AX-12 servos are the “revolute joints,” and the gripper is the “end effector.” So what’s this have to do with coordinate frames? Well, a challenge in having a robot manipulate its environment is being able to determine and describe the position and orientation (together describing the pose) of the end effector in relation to what needs to be manipulated (and yes, this is very challenging). More specifically, one needs to describe both the pose of the end effector and target object in relation to a reference frame.

When considering an object’s pose within a reference frame, one first needs to know what a reference frame is to begin with. In short, a reference frame is “how the world is oriented”; i.e., which way’s North, South, up, down, etc. To describe the reference frame, and – more importantly – to enable one to provide a bearing for where an object is within that frame, the frame is described with three axes: x, y, and (you guessed it) z. Without being oriented, if y points to the right and z points up, which way does x point? To determine this, use a handy trick (pun intended) known as the “right-hand rule” (sorry, southpaws). To demonstrate, hold out your hand in front of your face like you were about to karate chop a board, with your thumb sticking towards your face. If you point your index finger towards y and curl your other fingers towards z, then your thumb will point towards the positive direction of x.
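The handedness convention can also be checked numerically: in a right-handed frame, the cross product of the y and z axes yields the x axis, matching the curled-fingers mnemonic above. A quick sketch in plain Python (the function name is my own):

```python
def cross(a, b):
    """Right-handed cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# In a right-handed frame, y cross z points along positive x.
print(cross((0, 1, 0), (0, 0, 1)))  # (1, 0, 0)
```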

The origin of the frame, the [0, 0, 0] value of the x, y, and z axes, is located at an arbitrary, but known, point within the environment or on an object. There can be a frame, with a different origin accordingly, for each reference perspective in a given context; each is known as the coordinate frame of that context.

For example, suppose one is developing a robot to pick up toys and put them into a toy bin (can you tell I have kids?). In this case, there would likely be three coordinate frames of interest. The first would be a reference frame which would allow one to describe the pose of the toy and the manipulator in relation to that reference frame. For instance, the reference frame could have its origin in the corner of the room and be tied to the orientation of the room itself; a “reference frame” is simply a coordinate frame which does not change pose as other objects move through it. A second coordinate frame would be the coordinate frame from the perspective of the end effector. By applying a separate coordinate frame to the end effector, it’s now tractable to determine not only where the end effector is found in relation to the reference frame, but also how the end effector is oriented in relation to the reference frame and how the pose needs to be modified to reach another pose (with fun techniques like matrix transformations). As the end effector would be moved, its coordinate frame would move with it, figuratively fixed to a point on the end effector. Finally, a third coordinate frame would be that of the toy being picked up; this frame, in relation to the reference frame, would facilitate determining how the end effector’s pose needs to change to be in proper alignment for picking up the toy, taking into account the toy’s pose as well (read, lots more matrix transformations).

When applying a coordinate frame to an object or environment, two decisions must be made:

  • What fixed point on the object or environment should the coordinate frame be applied to? If using RobotIQ’s way cool, underactuated Adaptive Gripper, the coordinate frame might have its origin at the base of the “solo” finger. If talking about an airport, the coordinate frame (in this case the “reference frame”) might have its origin at the base of the control tower.
  • How will the coordinate frame be oriented with respect to the environment or object it is applied to; i.e., which way should x, y, and z point out of the origin? By convention, z would point “up” out of the origin. With the airport example, z would point towards the sky through the control tower. For movable objects, it’s not so straightforward to pick which way is “up.” So when applying a coordinate frame to a movable object (e.g., an end effector), what’s most important is to pick a point and an orientation of the frame, with respect to the object it’s applied to, and keep that pose relationship between the object and the frame fixed as the object changes pose. For example, if a coordinate frame were applied to a mobile robot base and the robot turned 90 degrees clockwise, then its coordinate frame would turn 90 degrees clockwise with it while the reference frame remained static.
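The relationship between a moving frame and the static reference frame can be sketched numerically. Below is a minimal plain-Python example, assuming a planar robot whose frame is described by an (x, y, θ) pose within the reference frame (function and variable names are my own):

```python
import math

def body_to_world(point_body, robot_pose):
    """Express a point given in the robot's own frame in the reference frame.

    robot_pose: (x, y, theta) of the robot's frame within the reference frame.
    """
    px, py = point_body
    x, y, theta = robot_pose
    # Rotate the point by the robot's heading, then translate by its position.
    return (x + px * math.cos(theta) - py * math.sin(theta),
            y + px * math.sin(theta) + py * math.cos(theta))

# A point 1 m "ahead" of a robot sitting at (2, 3) that has turned 90 degrees
# clockwise (theta = -pi/2) lands at (2, 2) in the reference frame.
wx, wy = body_to_world((1.0, 0.0), (2.0, 3.0, -math.pi / 2))
print(round(wx, 6), round(wy, 6))  # 2.0 2.0
```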

There is certainly a ton more to coordinate frames – manipulating them, comparing them to each other, and transforming them – than the light introduction provided here, but this should at least help remove the deer-in-headlights look if you’re unfamiliar with the term and someone brings it up in conversation at a cocktail party…which always happens.

Billy McCafferty


Siciliano, B. 2009. Robotics: Modelling, Planning and Control.

A Sampling of ROS Integration Packages

A computer science professor of mine, “back in the day,” once said that the greatest challenge in the future of software development will rest in the realm of integration. This is certainly true in robotics, if nowhere else. Due to the complexity involved with robotics, developers typically focus on very specific problems and develop very specific solutions to meet those challenges, accordingly. Path planning, SLAM, edge finding, handle manipulation, road recognition, planning with resource constraints, pattern and object recognition, D*, data mining algorithms, reinforcement learning, Kalman and particle filters are just some examples of “specialty” subjects in robotics and AI which people have developed pivotal software solutions to. An immense opportunity has existed, and will continue to exist, in bringing these together into more generalized solutions which are able to reap the benefits of the more specialized solutions within a cohesive whole.

An obstinate challenge in doing just that has been the technical complexity involved in integrating the plethora of solutions into a grouping which facilitates seamless communications among those solutions. If someone were to attempt to integrate more than a handful of specialized solutions and frameworks, the time involved would be nearly prohibitive to accomplishing the desired goal. Robot Operating System (ROS) seems to be changing the game a bit to allow just such an endeavor to be more tractable. Now that core ROS development has stabilized, a large number of groups have been feverishly working on providing their “specialty” solutions as packages which seamlessly integrate with ROS. And that means that they are then much more interoperable with just about any other package that works with ROS.

There is truly a dizzying number of packages that have been provided for ROS. The packages range from algorithms for SLAM to hardware communication packages to wrappers for programming languages to wrappers for off-the-shelf systems and other existing software. It is this last category that gets me really jazzed. With such wrappers being introduced (not to be confused with “rappers,” a slightly different breed), we’re now able to leverage a number of very solid and mature tools and frameworks without being bogged down with low-level integration and communication issues. I’d like to briefly highlight a few ROS packages which provide just such accessibility to existing tools and frameworks:

  • trex_ros for trex-autonomy: The Teleo-Reactive Executive (T-REX) is a hybrid executive combining goal-driven and event-driven behavior in a unified framework based on temporal plans and temporal planning. (I’ve been tearing through the documentation here and here with the hopes of trying it out soon on my home projects.) A cool did-ya-know about T-REX is that it leverages the europa-pso planning framework available from NASA Ames Research Center.
  • antlr for ANTLR: ANTLR is a language tool that provides a framework for constructing recognizers, interpreters, compilers, and translators from grammatical descriptions containing actions. That’s a mouthful! Really what ANTLR provides is a means to formalize and create your own domain-specific language…imagine a language specifically geared towards commanding your robot in a particular domain; ANTLR enables you to do just that.
  • rosglue for RL-Glue: Recently highlighted via ROS News, RL-Glue (Reinforcement Learning Glue) provides a standard interface that allows you to connect reinforcement learning agents, environments, and experiment programs together, even if they are written in different languages.
  • openrave for OpenRAVE: OpenRAVE is targeted for real-world autonomous robot applications, and includes a seamless integration of 3-D simulation, visualization, planning, scripting and control. OpenRAVE is particularly useful for planning movements for robotic arms and manipulators.
  • thea for Thea OWL Parser: Thea is a Prolog library that provides complete support for querying and processing OWL2 ontologies directly from within Prolog programs. (Honestly, I can’t say that I know much about this, but I keep reading more and more about OWL2…it seems to be gaining and maintaining widespread interest in AI and autonomous robotics.)
  • karto for Karto Mapping: Karto has been around for a while and came around to open sourcing their mapping libraries earlier this year. The karto ROS package provides a means to leverage this solid library with ROS.
  • vision_opencv for OpenCV: I don’t think I need to say much about this…just use it!
  • gmapping for GMapping: GMapping is a SLAM (Simultaneous Localization and Mapping) solution using particle filters; the gmapping ROS package wraps the entire application for use within ROS. (If you’d like to learn more about SLAM, there’s no better resource than Sebastian Thrun’s Probabilistic Robotics.)
  • While I’ve been focusing on integration with software packages, I’ve got to mention nxt for Lego NXT: Recently announced on ROS News, this stack provides a number of tools for integrating NXT with ROS, along with a novel use of NXT’s Lego Digital Designer for use within the rviz visualizer…very slick.

The sampling above highlights just a few of the efforts by various groups to facilitate the integration of solid, existing tools and frameworks into ROS for easier communications with other packages and custom development. Ultimately, these efforts are lowering the barrier to integrate many great ideas and solutions into a more cohesive whole. This seems like a great indication of the current state of robotics…a sign that the industry is finally maturing enough that we’re now working towards integrating existing solutions – rather than re-inventing the wheel – in order to more aggressively push the envelope of what’s attainable and what’s imaginable.

Billy McCafferty

Are additional layers of abstraction warranted?

I was presented with the following discussion opener concerning the management of complexity via the introduction of abstraction…

How do we handle complexity? I asked a few software architects I know and all of them answered, “Abstraction.” Basically they’re right, but being a math major in college, there is a principle I believe that software architects and designers miss – complexity is constant. In other words, if you’re designing a system that is inherently complex, you can’t reduce it. In chemistry I remember learning that energy is constant; it can’t be created or destroyed, it’s just there. In software, if the solution to a problem is complex, the complexity is always going to be there.

What about abstraction? Abstraction basically hides complexity. This is good, right? The problem is, in a lot of designs, once abstraction hides complexity the designers tend to forget about it. If you work hard at a good design to hide the complexity and forget about it, it will come back and haunt you some day. So what does one do?

I love this discussion! The heart of this is determining if the introduction of abstractions, for the sake of hiding complexity, is truly an overall benefit. In robotics, we’re ever dealing with increasing levels of abstraction for this very reason. For example, in Architectural Paradigms of Robotic Control, I briefly discussed the 3T architecture which has three separate layers, implemented as increasing levels of abstraction. E.g., the skill/servo layer would likely be implemented in C++, the sequencing/execution layer might be implemented as a sequence modeling language, such as ESL or NDDL, while the planning/deliberative layer might be implemented with a higher abstraction yet, such as with the Planning Domain Definition Language or Ontology with Polymorphic Types (Opt).

In response to the concerns put forth, I would tend to agree that abstraction hides complexity and that it does make it more likely that you may be bitten by the hiding of the complexity. With that said, encapsulation of complexity into well formed abstractions is an inevitable step to facilitate taking on increasingly complex problems. For example, in the .NET world of data access, Fluent NHibernate is really just a way of hiding the complexities of NHibernate. NHibernate is really just a way of hiding the complexities of ADO.NET. ADO.NET is really just a way of hiding the complexities of communicating to a database via TCP/IP sockets, or whatever underlying mechanism is employed. Along the same vein, tools such as Herbal, NDDL, and ESL are similarly provided as a means to provide an abstraction for hiding complexity.

Because these layers of complexity have been encapsulated in a manageable fashion, we’re now able to take on project work which would be far too complex to manage if we were using a lower level of implementation, e.g., pure C++, or Assembly for that matter. Indeed, there will be times when the added layers of abstraction will make it more difficult to tweak a low level capability, but the improved complexity management that the abstractions provide should far outweigh the sacrifice of losing some low level capabilities.

I think the crux of determining whether an abstraction is worthwhile comes down to the following questions:

  • Does the abstraction reduce complexity of interacting with the underlying layer that it encapsulates?
  • Does the abstraction make it easier to tackle increasingly complex problems?
  • Does the abstraction provide enough tweak points to accommodate the 5-10% of times that more low level control is needed?
  • Does the abstraction increase maintainability and ease of understanding of the overall goals of the system?

If the answers to the above are yes, then I believe that the encapsulation of complexity, codified as a new layer of abstraction, is pulling its weight. Otherwise, you might not want to throw that Assembly language reference away just yet.

Billy McCafferty

Simulation Environments for Mobile Robotics

While I wait for my $200K grant, my Darpa project award, Aldebaran to send me a few Nao’s, or Willow Garage to mail a PR2 my way (please contact me for shipping details), I spend much of my research time on the simulation side of robotics. In addition to being far less costly than purchasing hardware, working via simulators actually provides a number of benefits:

  • One need not spend time hacking a hardware platform together. Instead, simulators allow the robot to be completely available, in perfect working condition, before the first line of code is written. The time trade off enables a much greater amount of effort to be spent on algorithmic development…assuming that’s your interest.
  • One need not worry as much about hardware obsolescence. A vexing challenge for roboticists is deciding when to invest in new hardware. It always seems that the moment hardware arrives in the mail, announcements are made concerning the availability of even better options.
  • As a positive side effect of the time and cost savings that simulators typically provide, the simulation route also enables one to try out many more platforms and segments of the robotics industry in a shorter amount of time. For example, instead of putting all your eggs in one basket, it becomes far less costly to abandon a platform for another when you can simply install a new simulation environment or simulated platform.
  • Simulators enable researchers to perform otherwise impractical research, such as evolutionary development, robotic injury recovery, or simulation within dangerous or hard-to-replicate environments, such as mines, nuclear waste facilities, and natural disaster sites.

Obviously, it’s difficult to replace the experiences of working on real robots in real environments with real-world sensor errors, data fusion that doesn’t agree, and unpredictable dynamics, but simulators certainly provide a convenient means to try out new ideas or experiment in new areas.

Along those lines, I’d like to highlight a few simulators for mobile robotics which may pique your interest:

  • Microsoft Robotics Developer Studio (RDS): In addition to being a complete robotic operating system, Microsoft RDS includes a powerful simulation environment with a large number of robotic platforms readily available for use. Beyond the Microsoft provided content, an amazing breadth of tutorials and guidance for using the RDS simulator may be found here.
  • Player: Player is a very popular and widely used robotic software platform which is actually three tools in one. The first tool is Player itself; Player is the server component of the platform which facilitates communications with sensors and actuators over the network. The next two tools provide the simulation capabilities. Stage is a 2D simulation environment supporting multiple agents with sensor feedback. Gazebo is a 3D simulation environment with sensor support as well as a physics engine for simulating object interaction in the 3D context. A great tutorial for getting started with Player/Stage is found here. Luckily, moving your code between the Stage and Gazebo environments requires very few changes; accordingly, once you learn one, you’re already quite familiar with the other.
  • Robot Operating System (ROS): Like Microsoft RDS, ROS is far more than just a simulation environment; indeed, ROS is a full robot operating system providing messaging infrastructure as well as a means to easily develop and share solutions with other users. For the subject at hand, ROS leverages the Stage 2D simulator and Gazebo 3D simulator for many of its simulation oriented packages. You can find a lot more about ROS’ use of Gazebo for robotic simulation, and learn how to try it out yourself, with these tutorials. For a Stage tutorial, be sure to also check out this tutorial.
  • Webots: Webots is completely geared towards robotic simulation. Offering plenty of physics options for modeling the real world along with many available sensors and actuators makes Webots a very solid and mature simulation platform. The Webots overview page does a very concise job of describing the software’s capabilities, which I need not repeat here. Webots has a well organized introduction within the documentation; many user generated tutorials are also readily available online, such as this one.
  • RoboCup Soccer Simulator: While the previously discussed packages are geared towards being “one size fits all,” the RoboCup Soccer Simulator is honed to a single purpose, allowing developers to participate in the RoboCup Soccer Simulation league via a standard platform. This platform is made up of a number of elements: the Server, which runs the simulation and acts as a host for participating clients to send commands and receive sensor feedback; the Monitor, which facilitates viewing the simulation in real time; and the Log Player, which provides the ability to replay games at a later time. While official documentation may be found here, its 2003 last-modified date makes me concerned that it may have fallen out of sync with the software itself, which is still being actively maintained. While the online manual is much more current, it is missing quite a few pieces.
  • RoboCup Rescue: This more seriously toned simulation environment provides a platform for researching topics such as modeling disaster environments and robotic assistance in disaster recovery. The Agents Simulation league models the disaster environment, including dynamics such as traffic, fire, and civilian movements; agents are then developed to take on the role of police and other recovery participants. The Virtual Robotics league facilitates the development of autonomous robots with sensors and actuators to assist in the disaster recovery effort. Although I’ve found the documentation for RoboCup Rescue to be a bit less approachable than that of the other simulation environments, there is good content to be found on the project’s wiki.

The above list is certainly not exhaustive, but should give a good introduction to available simulation environments. Other environments not mentioned which also support mobile robotics development in simulation include MapleSim, Simbad, Carmen, Urbi (compatible with Webots), lpzrobots, Moby, and OpenSim. Finally, robot competitions occasionally include simulated environments in their challenges; a terrific listing of available competitions is maintained online; the AAAI conferences and competitions are particularly good at coming up with novel hardware and simulation challenges which truly push the envelope of progress.

Certainly enough to keep you busy for a while…not bad when you consider the many projects you can carry out before committing to buying a single piece of hardware!

Billy McCafferty

Developing Well-Designed Packages for Robot Operating System (ROS), Part VI

Part VI: Adding a UI Layer to the Package

As the final chapter of this series of posts (Part I, II, III, IV, V), we’ll be adding a basic UI layer to facilitate user interaction with the underlying layers of our package. Specifically, a UI will be developed to allow the user (e.g., you) to start and stop the laser reporting application service via a wxWidgets interface. If you’re new to wxWidgets, it really is a terrific open-source UI package with very helpful online tutorials, a thriving community, and a very helpful book, Cross-Platform GUI Programming with wxWidgets – certainly a good reference to add to the bookshelf. Admittedly, the sample code discussed below is very simplistic and only touches upon wxWidgets; with that said, it should demonstrate how to put the basics in place and show how the UI layer interacts with the other layers of the package.

Developing a UI layer with wxWidgets is quite straightforward; the UI itself is made up of two primary elements: a wxApp which is used to initialize the UI and a wxFrame which serves as the primary window. For the task at hand, the wxApp in the UI layer will be used to perform three primary tasks, in the order listed:

  1. Initialize ROS,
  2. Initialize application services and dependencies for those services (e.g., message endpoints), and
  3. Create the initial frame/window; application services will be passed to the frame to enable wiring up UI events to the respective application service functions.

As a rule of thumb, the UI layer should only communicate with the rest of the package elements via the application services layer. For example, the UI layer should not be invoking functions directly on domain objects found within ladar_reporter_core; instead, it should call tasks exposed by the application services layer, which then coordinates and delegates activity to lower levels.

Before we delve deeper, as a reminder of what the overall class diagram looks like, as developed over the previous posts, review the class diagram found within Part V. The current objective will be to add the UI layer, as illustrated in the package diagram found within Part I. To cut to the chase and download the end result of this post, click here.

Show me the code!

1. Setup the Package Skeleton, Domain Layer, Application Services Layer, and Message Endpoint Layer

If not done already, follow the steps in Part II, III, IV, and V to get everything in place. (Or simply download the source from Part V to skip all the action packed steps leading up to this post.)

2. Install wxWidgets

Download and install wxWidgets. Instructions for Ubuntu and Debian may be found at

3. Define the UI events that the user may raise

Create an enum class at src/ui/UiEvents.hpp to define UI events as follows:

// UiEvents.hpp
#ifndef GUARD_UiEvents
#define GUARD_UiEvents

namespace ladar_reporter_ui
{
  enum UiEventType
  {
    UI_EVENT_Quit = 1,
    UI_EVENT_StartReporting = 2,
    UI_EVENT_StopReporting = 3
  };
}

#endif /* GUARD_UiEvents */

As suggested by the enum values, the user will be able to start the reporting process, stop it, and quit the application altogether.

4. Create the wxWidgets application header class

Create src/ui/LadarReporterApp.hpp containing the following code:

// LadarReporterApp.hpp
#ifndef GUARD_LadarReporterApp
#define GUARD_LadarReporterApp

#include <boost/shared_ptr.hpp>
#include <ros/ros.h>
#include <wx/wx.h>
#include "LaserScanEndpoint.hpp"
#include "LaserScanReportingService.hpp"

namespace ladar_reporter_ui
{
  class LadarReporterApp : public wxApp
  {
    public:
      virtual bool OnInit();
      virtual int OnExit();

    private:
      void InitializeRos();
      void InitializeApplicationServices();
      void CreateMainWindow();

      char** _argvForRos;
      ros::NodeHandlePtr _nodeHandlePtr;

      // Application services and dependencies.
      // Stored as pointers to postpone creation until ready to initialize.
      boost::shared_ptr<ladar_reporter_core::ILaserScanEndpoint> _laserScanEndpoint;
      boost::shared_ptr<ladar_reporter_application_services::LaserScanReportingService> _laserScanReportingService;
  };
}

#endif /* GUARD_LadarReporterApp */

A few notes:

  • LadarReporterApp inherits from wxApp to create the “main” equivalent for wxWidgets.
  • OnInit() and OnExit() are events called by wxWidgets when the UI is created and upon exiting (obviously).
  • InitializeRos(), InitializeApplicationServices(), and CreateMainWindow() lay out the three primary tasks previously described.
  • _nodeHandlePtr will hold our initialized reference to ROS during the lifetime of the UI.
  • Finally, the application services and dependencies which will be leveraged by the UI are declared. While _laserScanEndpoint won’t be used by the UI directly, it’ll need to be injected into the constructor of _laserScanReportingService and will need to be kept alive so as to continue advertising on its respective topic.

5. Create the wxWidgets application implementation class

Create src/ui/LadarReporterApp.cpp containing the following code:

// LadarReporterApp.cpp
#include <cstring>
#include <wx/wx.h>
#include "LadarReporterApp.hpp"
#include "LadarReporterFrame.hpp"
#include "LaserScanEndpoint.hpp"
#include "UiEvents.hpp"

using namespace ladar_reporter_application_services;
using namespace ladar_reporter_core;
using namespace ladar_reporter_message_endpoints;

// Inform wxWidgets what to use as the wxApp;
// implements LadarReporterApp& wxGetApp() globally
IMPLEMENT_APP(ladar_reporter_ui::LadarReporterApp)

namespace ladar_reporter_ui
{
  bool LadarReporterApp::OnInit() {
    // Order of initialization functions is critical:
    // 1) ROS must be initialized before message endpoint(s) can advertise
    InitializeRos();
    // 2) Application services must be initialized before being passed to UI
    InitializeApplicationServices();
    // 3) UI can be created with properly initialized ROS and application services
    CreateMainWindow();

    return true;
  }

  int LadarReporterApp::OnExit() {
    for (int i = 0; i < argc; ++i) {
      free(_argvForRos[i]);
    }
    delete [] _argvForRos;

    return 0;
  }

  void LadarReporterApp::InitializeRos() {
    // Create our own copy of argv, with regular char*s.
    _argvForRos = new char*[argc];
    for (int i = 0; i < argc; ++i) {
      _argvForRos[i] = strdup( wxString( argv[i] ).mb_str() );
    }

    ros::init(argc, _argvForRos, "ladar_reporter");
    _nodeHandlePtr.reset(new ros::NodeHandle);
  }

  void LadarReporterApp::InitializeApplicationServices() {
    _laserScanEndpoint = boost::shared_ptr<ILaserScanEndpoint>(
      new LaserScanEndpoint());
    _laserScanReportingService = boost::shared_ptr<LaserScanReportingService>(
      new LaserScanReportingService(_laserScanEndpoint));
  }

  void LadarReporterApp::CreateMainWindow() {
    LadarReporterFrame * frame = new LadarReporterFrame(
      _laserScanReportingService, _("Ladar Reporter"), wxPoint(50, 50), wxSize(450, 200));

    frame->Connect( ladar_reporter_ui::UI_EVENT_Quit, wxEVT_COMMAND_MENU_SELECTED,
      (wxObjectEventFunction) &LadarReporterFrame::OnQuit );
    frame->Connect( ladar_reporter_ui::UI_EVENT_StartReporting, wxEVT_COMMAND_MENU_SELECTED,
      (wxObjectEventFunction) &LadarReporterFrame::OnStartReporting );
    frame->Connect( ladar_reporter_ui::UI_EVENT_StopReporting, wxEVT_COMMAND_MENU_SELECTED,
      (wxObjectEventFunction) &LadarReporterFrame::OnStopReporting );

    frame->Show(true);
    SetTopWindow(frame);
  }
}

The direction for this class was taken from wxWidgets online tutorials along with reviewing the ROS turtlesim package, which is a real treasure trove for seeing how a much more sophisticated ROS UI is put together. (If you have not already, I strongly suggest you review the turtlesim code in detail.)

6. Create the wxWidgets frame header class

Now that the wxWidgets application is in place, the frame, representing the UI window itself, needs to be developed. Accordingly, create src/ui/LadarReporterFrame.hpp containing the following code:

// LadarReporterFrame.hpp
#ifndef GUARD_LadarReporterFrame
#define GUARD_LadarReporterFrame

#include <boost/shared_ptr.hpp>
#include <wx/wx.h>
#include "LaserScanReportingService.hpp"

namespace ladar_reporter_ui
{
  class LadarReporterFrame : public wxFrame
  {
    public:
      LadarReporterFrame(
        boost::shared_ptr<ladar_reporter_application_services::LaserScanReportingService> laserScanReportingService,
        const wxString& title, const wxPoint& pos, const wxSize& size);

      void OnQuit(wxCommandEvent& event);
      void OnStartReporting(wxCommandEvent& event);
      void OnStopReporting(wxCommandEvent& event);

    private:
      boost::shared_ptr<ladar_reporter_application_services::LaserScanReportingService> _laserScanReportingService;
  };
}

#endif /* GUARD_LadarReporterFrame */

There are a couple of interesting bits in the header:

  • The frame header declares the events which will be handled; e.g., OnQuit.
  • It accepts and stores an instance of the LaserScanReportingService in order to invoke the application service layer in response to user interaction.

7. Create the wxWidgets frame implementation class

Create src/ui/LadarReporterFrame.cpp containing the following code:

// LadarReporterFrame.cpp
#include "LadarReporterFrame.hpp"
#include "UiEvents.hpp"

using namespace ladar_reporter_application_services;

namespace ladar_reporter_ui
{
  LadarReporterFrame::LadarReporterFrame(
    boost::shared_ptr<LaserScanReportingService> laserScanReportingService,
    const wxString& title, const wxPoint& pos, const wxSize& size)
      : wxFrame( NULL, -1, title, pos, size ), _laserScanReportingService(laserScanReportingService)
  {
    wxMenuBar *menuBar = new wxMenuBar;
    wxMenu *menuAction = new wxMenu;
    menuAction->Append( UI_EVENT_StartReporting, _("&Start Reporting") );
    menuAction->Append( UI_EVENT_StopReporting, _("S&top Reporting") );
    menuAction->Append( UI_EVENT_Quit, _("E&xit") );
    menuBar->Append( menuAction, _("&Action") );
    SetMenuBar( menuBar );

    CreateStatusBar();
    SetStatusText( _("Ready to begin reporting") );
  }

  void LadarReporterFrame::OnStartReporting(wxCommandEvent& WXUNUSED(event)) {
    // Delegate to the application services layer
    _laserScanReportingService->beginReporting();
    SetStatusText( _("Laser scan reporting is running") );
  }

  void LadarReporterFrame::OnStopReporting(wxCommandEvent& WXUNUSED(event)) {
    _laserScanReportingService->stopReporting();
    SetStatusText( _("Laser scan reporting has been stopped") );
  }

  void LadarReporterFrame::OnQuit(wxCommandEvent& WXUNUSED(event)) {
    Close(true);
  }
}

A few implementation notes:

  • The LadarReporterFrame() constructor initializes the menubar for the window along with other rendering details.
  • Each of the respective event handlers is defined, invoking the application services layer when applicable. As discussed previously, the UI should interact with the rest of the package layers via the application services, as demonstrated above.

There’s obviously a lot of wxWidgets-related information being glossed over here that is beyond the scope of these posts. The wxWidgets documentation referenced earlier should fill in any remaining gaps.

8. Configure CMake to Include the Header and Implementation

With the header and implementation classes completed for both the wxWidgets application and frame, we need to make a couple of minor modifications to CMake for their inclusion in the build.

  1. Open /ladar_reporter/CMakeLists.txt. Under the commented line #rosbuild_gensrv(), add inclusions for wxWidgets as follows:
    find_package(wxWidgets REQUIRED)
    include_directories( ${wxWidgets_INCLUDE_DIRS} )
  2. With /ladar_reporter/CMakeLists.txt still open, add a line to the include_directories statement for the following directory in order to include the wxWidgets application and frame headers:
  3. Open /ladar_reporter/src/CMakeLists.txt. Add an inclusion for the ui directory at the end of the file:
  4. Now, in order to create the UI executable itself, a new CMake file is needed under /ladar_reporter/src/ui. Accordingly, create a new CMakeLists.txt under /ladar_reporter/src/ui, containing the following:
      # Important that core comes after application_services due to direction of dependencies

9. Add a ROS wxWidgets Dependency to manifest.xml

Since the package will be leveraging wxWidgets, a dependency needs to be added so that the package can find and use it:

  • Under /ladar_reporter, open manifest.xml and add the following line just before the closing tag:

10. Build and try out the UI Functionality

We are now ready to try everything out. While it is generally possible to write unit tests for the UI layer, personal experience has shown that the UI changes too frequently to make such unit tests worthwhile. UI unit tests quickly become a maintenance headache and do not provide much more value than what the existing unit tests have already proven; i.e., we’ve already verified through unit tests that the heart of our package – the domain objects, the message endpoints, and the application services – is working as expected…the UI is now “simply” the final touch. Enough babble; let’s see this baby in action:

  1. Build the application by running make within a terminal at the root of the package (at /ladar_reporter).
  2. Open a new terminal and run roscore to begin ROS
  3. Open a third terminal window and run rostopic echo /laser_report to observe any laser reports published to ROS
  4. In the terminal that you’ve been building in…
    cd bin
    ./ladar_reporter_tests

    Wait a second, that’s not showing anything new…that’s right! Always run your unit tests after making changes, even if you haven’t added any new tests, to make sure that you haven’t broken any existing code.

  5. Still within the bin folder – in the terminal – run ./ladar_reporter_node. A window should pop up allowing you to select an “Action” menu to start and stop the reporting service, along with quitting the application altogether. You can see the laser reports going out to ROS via the third terminal window that was opened.
  6. Click the “Quit” menu item when you just can’t handle any more excitement.

Well, that about wraps it up. We started by laying out our architecture and then systematically tackled each layer of the package with proper separation of concerns and unit testing to make sure we were doing what we said we were doing. As demonstrated with the layering approach that we developed, higher layers (e.g., application services and core) didn’t depend on lower layers (e.g., message endpoints and the ROS API). In fact, when possible, the lower layers actually depended on interfaces defined in the higher layers; e.g., the message endpoint implemented an interface defined in the higher core layer. (Although the class diagrams show core on the bottom, this actually reflects the dependency inversion that was introduced.) This dependency inversion enabled a clean separation of concerns while allowing us to unit test the various layers in isolation from each other.

I sincerely hope that this series has shed some light on how to properly architect a ROS package. While this series did not go into a granular level of detail with respect to using ROS and wxWidgets, it should have provided a good starting point for developing a solid package. The techniques described in this series have been honed over many years by demi-gods of development (e.g., Martin Fowler, Robert Martin, Kent Beck, Ward Cunningham, and many others) and continue to prove their value in enabling the development of maintainable, extensible applications which are enjoyable to work on. While ROS may be relatively new, the tried-and-true lessons of professional development are quite timeless indeed.

As always, your feedback, questions, comments, suggestions, and even rebuttals are most welcome. To delve a bit further into many of the patterns oriented topics discussed, I recommend reading Gregor Hohpe’s Enterprise Integration Patterns and Robert Martin’s Agile Software Development, Principles, Patterns, and Practices. And obviously, for anything ROS related, you’ll want to keep reading everything you can at (and here at, of course)!

Billy McCafferty

Download the source for this article.

Developing Well-Designed Packages for Robot Operating System (ROS), Part V

Part V: Developing and Testing the ROS Message Endpoint

[Author’s note, July 28, 2010: Introduced Boost::shared_ptr to manage reference to message endpoint so as to enable postponed construction, as will be required in Part VI.]

While this series (Part I, II, III, IV) has been specifically written to address writing well-designed packages for ROS, we’ve actually seen very little of ROS itself thus far. In fact, outside of the use of roscreate to generate the package basics and sensor_msgs::LaserScan for communicating laser scan data from the reader up to the application services layer, there’s been no indication that this application was actually intended to work with ROS now or ever. Ironically, this is exactly what we’d expect to see in a well designed ROS package.

Each layer that we’ve developed – as initially outlined in Part I – is logically separated from each other’s context of responsibility. To illustrate, the upper layers do not directly depend on “service” layers, such as message endpoints. Instead, the lower layers depend on abstract service interfaces declared in the upper layers. This dependency inversion was enabled in Part IV with the creation of ILaserScanEndpoint, a separated interface. If all of this dependency inversion and separated interface mumbo-jumbo has your head spinning at all, take some time to delve deeper into this subject in Dependency Injection 101.

While the actual message endpoint interface was created, only a test double was developed for testing the application service layer’s functionality. Accordingly, in this post, the concrete message endpoint “service,” which implements its separated interface, will be developed and tested. That’s right…we’ll finally actually talk to ROS! You can skip to the chase and download the source for this post.

Before digging into the code, it’s important to take a moment to better understand the purpose and usefulness of the message endpoint. The message endpoint encapsulates communications to the messaging middleware similarly to how a data repository encapsulates communications to a database. By encapsulating such communications, the rest of the application (ROS package, in our case) may remain blissfully oblivious to details such as how to publish messages to a topic or translate between messages and domain layer objects.

This separation of concerns helps to keep the application cleanly decoupled from the messaging middleware. Another benefit of this approach is enabling the development and testing of nearly the entire application/package before “wiring” it up to the messaging middleware itself. This typically results in more reusable and readable code. If you haven’t already, I would encourage you to read the article Message-Based Systems for Maintainable, Asynchronous Development for a more complete discussion of message endpoints.


Target Class Diagram

The following diagram shows what the package will look like after completing the steps in this post…it’s beginning to look remarkably similar to the package diagram discussed in Part I of this series, isn’t it? If you’ve been following along, most of the elements have already been completed; only the concrete LaserScanEndpoint and LaserScanEndpointTests will need to be introduced, along with a slight modification to the TestRunner.

1. Setup the Package Skeleton, Domain Layer and Application Services Layer

If not done already, follow the steps in Part II, Part III, and Part IV to create the package and develop/test the domain and application service layers. (Or just download the code from Part IV as a starting point to save some time.)

2. Create the message endpoint header class.

Create src/message_endpoints/LaserScanEndpoint.hpp containing the following code:

// LaserScanEndpoint.hpp
#ifndef GUARD_LaserScanEndpoint
#define GUARD_LaserScanEndpoint

#include <ros/ros.h>
#include "sensor_msgs/LaserScan.h"
#include "ILaserScanEndpoint.hpp"

namespace ladar_reporter_message_endpoints
{
  class LaserScanEndpoint : public ladar_reporter_core::ILaserScanEndpoint
  {
    public:
      LaserScanEndpoint();
      void publish(const sensor_msgs::LaserScan& laserScan) const;

    private:
      // Create handle to node
      ros::NodeHandle _ladarReporterNode;
      ros::Publisher _laserReportPublisher;
  };
}

#endif /* GUARD_LaserScanEndpoint */

The message endpoint header simply implements ILaserScanEndpoint and declares members to hold the ROS NodeHandle and Publisher. The more interesting bits are found in the implementation details…

3. Create the message endpoint implementation class.

Create src/message_endpoints/LaserScanEndpoint.cpp containing the following code:

// LaserScanEndpoint.cpp
#include <ros/ros.h>
#include "sensor_msgs/LaserScan.h"
#include "LaserScanEndpoint.hpp"

namespace ladar_reporter_message_endpoints
{
  LaserScanEndpoint::LaserScanEndpoint()
    // Setup topic for publishing laser scans to
    : _laserReportPublisher(
      _ladarReporterNode.advertise<sensor_msgs::LaserScan>("laser_report", 100)) { }

  void LaserScanEndpoint::publish(const sensor_msgs::LaserScan& laserScan) const {
    _laserReportPublisher.publish(laserScan);
    ros::spinOnce();
    ROS_INFO("Published laser scan to laser_report topic with angle_min of: %f", laserScan.angle_min);
  }
}

As you can see, there’s really not much to the actual publication process…which is what we were hoping for. The message endpoint should simply be a lightweight means to send and receive messages to/from the messaging middleware. This message endpoint does so as follows:

  • The constructor initializes the publisher by advertising on a topic with the name of “laser_report.”
  • The publish() function simply takes the received laserScan and moves it along to be published via ROS. Although not necessary for this specific package code, the call to spinOnce() will be important when the package has callbacks based on messages received. (See for more details.)

4. Configure CMake to Include the Header and Implementation

With the header and implementation classes completed, we need to make a couple of minor modifications to CMake for their inclusion in the build.

  1. Open /ladar_reporter/CMakeLists.txt. Within the include_directories statement, add an include for the following directory to include the concrete message endpoint header:
  2. Open /ladar_reporter/src/CMakeLists.txt. Add an inclusion for the message_endpoints directory at the end of the file:
  3. Now, in order to create the message endpoints class library itself, a new CMake file is needed under /ladar_reporter/src/message_endpoints. Accordingly, create a new CMakeLists.txt under /ladar_reporter/src/message_endpoints, containing the following:
    # Create the library

5. Build the message endpoints Class Library

In a terminal window, cd to /ladar_reporter and run make. The class library should build and link successfully.

As with everything else thus far…it’s now time to test our new functionality.

6. Unit Test the LaserScanEndpoint Functionality

While testing up to this point has been pretty straightforward, we now need to incorporate ROS package initialization within the test itself.

  1. Our package is going to act as a single ROS node; accordingly, we need to modify /ladar_reporter/test/TestRunner.cpp to initialize ROS within the package and to register itself as a node which we’ll call “ladar_reporter.” While I’m hard-coding the node name within the test itself, you may want to consider putting the name into a config file as it’ll likely be referenced in multiple places; having it centralized within a config file gets rid of the magic string, making it easier to change while reducing the likelihood of typing it wrong in some hard-to-track-down spot.

    Open /ladar_reporter/test/TestRunner.cpp and modify the code to reflect the following:

    #include <gtest/gtest.h>
    #include <ros/ros.h>

    // Run all the tests that were declared with TEST()
    int main(int argc, char **argv){
      testing::InitGoogleTest(&argc, argv);

      // Initialize ROS and set node name
      ros::init(argc, argv, "ladar_reporter");

      return RUN_ALL_TESTS();
    }
  2. Create a new testing class, /ladar_reporter/test/message_endpoints/LaserScanEndpointTests.cpp:
    // LaserScanEndpointTests.cpp
    #include <unistd.h>
    #include <gtest/gtest.h>
    #include "sensor_msgs/LaserScan.h"
    #include "LaserScanEndpoint.hpp"
    #include "LaserScanReportingService.hpp"

    using namespace ladar_reporter_application_services;
    using namespace ladar_reporter_message_endpoints;

    namespace ladar_reporter_test_message_endpoints
    {
      // Define unit test to verify ability to publish laser scans 
      // to ROS using the concrete message endpoint.
      TEST(LaserScanEndpointTests, canPublishLaserScanWithEndpoint) {
        // Establish Context
        LaserScanEndpoint laserScanEndpoint;
        sensor_msgs::LaserScan laserScan;

        // Give ROS time to fully initialize and for the laserScanEndpoint to advertise
        sleep(1);

        // Act
        laserScan.angle_min = 1;
        laserScanEndpoint.publish(laserScan);

        laserScan.angle_min = 2;
        laserScanEndpoint.publish(laserScan);

        // Assert
        // Nothing to assert other than using terminal windows to 
        // watch publication activity. Alternatively, for better testing, 
        // you could create a subscriber and subscribe to the reports. 
        // You could then track how many reports were received and 
        // assert checks, accordingly.
      }

      // Define unit test to verify ability to leverage the reporting 
      // service using the concrete message endpoint. This is more of a 
      // package integration test than a unit test, making sure that all 
      // of the pieces are playing together nicely within the package.
      TEST(LaserScanEndpointTests, canStartAndStopLaserScanReportingServiceWithEndpoint) {
        // Establish Context
        boost::shared_ptr<LaserScanEndpoint> laserScanEndpoint =
          boost::shared_ptr<LaserScanEndpoint>(new LaserScanEndpoint());
        LaserScanReportingService laserScanReportingService(laserScanEndpoint);

        // Give ROS time to fully initialize and for the laserScanEndpoint to advertise
        sleep(1);

        // Act (service method names as introduced in Part IV)
        laserScanReportingService.beginReporting();
        sleep(2);
        laserScanReportingService.stopReporting();

        // Assert
        // See assertion note above from 
        // LaserScanEndpointTests.canPublishLaserScanWithEndpoint
      }
    }

    The comments within the test class above should clarify what is occurring. In summary, the canPublishLaserScanWithEndpoint test bypasses all of the layers and tests the publishing of messages directly via the message endpoint. The canStartAndStopLaserScanReportingServiceWithEndpoint test takes this a step further and injects the LaserScanEndpoint message endpoint into the LaserScanReportingService application service and starts/stops the laser scan reporting, accordingly. This latter test should be seen more as an integration test than a unit test, as it exercises all of the layers working together.

  3. Open /ladar_reporter/test/CMakeLists.txt. Within the rosbuild_add_executable statement, add the following to include the message endpoint tests in the build:
  4. While we’re at it, we’ll also need to link the new message_endpoints class library to the unit testing executable; accordingly, also within /ladar_reporter/test/CMakeLists.txt, modify target_link_libraries to reflect the following:
    # Link the libraries
      # Important that core comes after application_services due to direction of dependencies
  5. Verify that everything builds OK by running make within a terminal from the root folder, /ladar_reporter.
  6. Now for the fun part…time to run our tests with ROS running. The steps closely follow those described in the ROS tutorial, Examining Publisher/Subscriber:
    1. Open a new terminal and run roscore to begin ROS
    2. In the terminal that you’ve been building in…
      cd bin
      ./ladar_reporter_tests

    With a little luck, you should see a few messages being published to ROS while running the unit tests. And just to prove it…

    1. With roscore still running, open a third terminal window and run rostopic echo /laser_report. You’ll likely be warned that the topic is not publishing yet…let’s change that.
    2. Back in your original terminal, the one you ran the unit tests in, rerun ./ladar_reporter_tests. You should now see laser scans echoed to the third terminal window for both LaserScanEndpointTests tests, canPublishLaserScanWithEndpoint and canStartAndStopLaserScanReportingServiceWithEndpoint. Seriously now, how cool is that?

The first five parts of this series have covered the primary elements of developing well-designed packages for Robot Operating System (ROS) using proven design patterns and proper separation of concerns. Obviously, this is not a trivially simple approach to developing ROS packages; indeed, it would be overkill for very simple packages. But as packages grow in size, scope, and complexity, the techniques described in this series should help to establish a maintainable, extensible package which doesn’t get too unruly as it evolves. In Part VI, the final part in this series, we’ll look at adding a simple UI layer, using wxWidgets, to interact with the package functionality.

Billy McCafferty

Download the source for this post.

© 2011-2016 Codai, Inc. All Rights Reserved