Posted on 23:59, November 22nd, 2010 by Billy McCafferty
Note that this is for communications with the SICK S300 Standard, not the S300 Professional. If you have an S300 Professional, see the cob_sick_s300 ROS package.
As much of a software guy as I am, there's simply no avoiding hardware for long in the world of robotics. Sure, simulators are great, but it's impossible to fully replace the nuances (and excitement and frustration) of developing robots with actual hardware. Furthermore, the "noise" generated by live robotics is hard to adequately mimic in a simulated environment; e.g., missed laser scans, bumpy surfaces, sensors that aren't perfectly aligned with where you think they are, dealing with hardware drivers, etc.
Unfortunately, the hardware side of robotics is far from cheap if your research is self-funded (read – I am still accepting research donations). One of the most essential, and most expensive, elements of modern robotics is a solid range-finding sensor (or several). While laser range-finders have been around for some time, they are finally compact and affordable enough to augment any roboticist's tool bench. The challenge is finding one that is the right balance of size, price, power consumption, and capability for your project's needs.
The Hokuyo URG-04LX-UG01 laser is a great choice for basic range-finding needs; it's very compact (e.g., it can fit in the head of an Aldebaran Nao), weighs little, has a great price point (around $1,200…Nao head not included), and provides precise 2D range details out to 5.6 m. For 3D point cloud information, the Mesa Imaging SR4000 provides an insane amount of range data within a narrower field of sensing (43° or 69°) out to 10 m, ideal for highly precise applications, such as interpreting facial expressions (if you're not using a camera for some reason) or tracking slight body motions…all for a cool $9,000. On the other end of the cost spectrum, the Xbox 360 Kinect also provides point cloud information out to 5 m at a fraction of the cost, $150 (assuming you can find it during the Christmas rush); the Kinect also has newly available drivers for ROS. (I can't imagine a better starting point than the Kinect.)
For my own needs, I was looking for something that could provide a greater range than the URG-04LX-UG01 without precluding the ability to collect point cloud information at an extended range as well. Without spending your entire research budget, this can be achieved by obtaining a 30 m laser range-finder and mounting it on a tilt system, such as an SPT400 Tilt System. The pairing won't be as real-time as an SR4000 (you'll need to remain stationary while tilting the laser for the most accurate point cloud), but it makes the 30 m point-cloud generator far more affordable. (If you don't need 30 m capabilities…stop here and go for the Kinect!)
As for the range finder, a 30 m laser range finder can cost around $4,000 – $5,500 brand spanking new; great choices are the Hokuyo UTM-30LX and the SICK LMS100. But if you keep your eye on eBay, you can pick up a comparable laser range finder for far less; I was fortunate enough to find a new SICK S300 Standard priced more like a URG-04LX-UG01, providing the 30 m range data I was looking for. Unfortunately, the S300 Standard does not provide programmatic access through its "system plug" (the base port) like its bigger (and more expensive) brother, the SICK S300 Professional. With that said, the S300 Standard does provide communications via an RS-232 diagnostics port on the front of the scanner. It's limited to 38400 baud, but that's enough to provide 2.75 scans per second, encompassing 270° of range information at 1/2° increments. Unfortunately, the S300 Standard documentation is almost useless for taking this approach; accordingly, this post (after the above segue) provides an overview of programmatic communications with the SICK S300 Standard via the diagnostics port.
Serial Communications with SICK S300 Standard
The SICK S300 Standard diagnostic port provides 38400 baud reporting of laser scan data over RS-232 serial communications. There are three general steps to requesting laser scans via this approach:

1. Request a token, informing the scanner which port scan data will be requested on.
2. Request laser scans, as often as needed.
3. Release the token when finished.
We’ll now detail how each phase of communications is accomplished, beginning with setting up the system for initial testing.
Initial System Setup
Instead of diving right into writing a C++ application for managing the communications, it's best to confirm and demonstrate successful communications with the S300 via raw serial communications using a serial utility. But before we get even that far, the hardware itself needs to be set up. To communicate with the S300 Standard, the following equipment is required (model numbers are further detailed here):
The hardware setup is then quite simple, as follows:
After providing power to the S300 Standard, and before serial communications are possible, the S300 Standard must be initialized via the Configuration & Diagnostics Software (CDS); until that is done, it'll show the number "6" on its display when powered up. Since the S300 Standard is a safety scanner, with its default settings the laser scanning will stop (and a signal would be sent to the vehicle it's mounted on) if an object enters the protective field range. So within the CDS, set the scanner for automatic interlock restart to disable this trigger behavior. (Certainly not recommended if you actually need the safety features…the S300 operating instructions give all the details on this.) Once properly configured, the S300 Standard should show "1" on its display with a green checkbox displayed.
Now that the scanner is powered and configured, it's time to test basic communications. If using Windows, a tool such as Advanced Serial Port Monitor may be used to send serial requests to the S300 and to read from it, accordingly. In a Linux environment, such as Ubuntu Lucid Lynx, there are two tools that I prefer for testing serial communications: jpnevulator to write commands and slsnif to listen for responses. Using these utilities, take the following steps to set up:
We’re now ready to request a token and laser scans…
As mentioned, before being able to request laser scan data, a token must be requested to inform the S300 which communication port we’ll be requesting scan data on. In this case, communications are over the diagnostic port. To request a token, send the following hex command via the jpnevulator terminal by simply copying/pasting the following command:
00 00 41 44 19 00 00 05 FF 07 19 00 00 05 FF 07 07 0F 9F D0
If this command was successfully sent, the slsnif terminal should show a response of four null byte-pairs (i.e., eight 00 bytes).
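If you'd rather script this exchange than paste hex into jpnevulator, it can be sketched in Python using pyserial. This is only a sketch under assumptions: pyserial must be installed, and the device path `/dev/ttyUSB0` is a placeholder to adjust for your system.

```python
# Token request bytes from the post, converted from the hex string to raw bytes.
TOKEN_REQUEST = bytes.fromhex(
    "00 00 41 44 19 00 00 05 FF 07 19 00 00 05 FF 07 07 0F 9F D0"
)

def request_token(port_name="/dev/ttyUSB0"):
    """Send the token request over the diagnostics port at 38400 baud and
    check for the expected reply of four null byte-pairs (eight 0x00 bytes).

    The port name is an assumption; adjust it for your system.
    """
    import serial  # pyserial; assumed installed (pip install pyserial)
    with serial.Serial(port_name, baudrate=38400, timeout=1.0) as port:
        port.write(TOKEN_REQUEST)
        reply = port.read(8)
        return reply == b"\x00" * 8
```

The scan request and token release commands described below can be sent the same way, by swapping in their byte sequences.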
After successfully requesting a token, laser scans may now be requested. Send the following to request a laser scan:
00 00 45 44 0C 00 02 22 FF 07
This request should return a 1096 byte-pair response. The response is interpreted as follows:
In practice, the first 12 byte-pairs may be ignored, assuming the fourth byte-pair (position 3) is null (00 00).
Each element of the scan data is made up of two byte-pairs (four bytes) and must be broken down to the bit level to parse the information. The first byte-pair may be ignored, as it concerns itself with the S300's safety features. The second byte-pair contains information stored across 16 bits. Bits 0-12 express the range distance in cm. The only other useful bit (pun intended) is bit 13, which indicates whether glare (dazzling) was detected; an indication that the range measurement may be subjectively ignored, or its confidence variance increased for that datum.
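Assuming the second byte-pair has already been assembled into a single 16-bit integer (check the byte order against a known-good driver such as cob_sick_s300), extracting the useful bits looks roughly like this; the function name is mine, not from the protocol:

```python
def parse_scan_datum(word):
    """Extract range and glare flag from the second byte-pair of a scan datum.

    `word` is the 16-bit value of the second byte-pair. Bits 0-12 hold the
    measured distance in cm; bit 13 is set when glare (dazzling) was detected.
    """
    distance_cm = word & 0x1FFF       # lower 13 bits: range in cm
    glare = bool(word & (1 << 13))    # bit 13: dazzling flag
    return distance_cm, glare
```

For example, a word of 0x2123 decodes to a range of 291 cm with the glare flag set.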
The TGM CRC is a cyclic-redundancy-check (or checksum) to test the integrity of the scan report. Googling for that fancy phrase will provide algorithms for its implementation.
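The general shape of such a bitwise CRC-16 is sketched below. Note the caveat: the polynomial and seed shown here are the common CCITT-style placeholders, not necessarily the values the S300's TGM CRC actually uses; take the real parameters (or the whole table-driven routine) from a known-good implementation such as the care-o-bot driver mentioned next.

```python
def crc16(data, poly=0x1021, seed=0xFFFF):
    """Generic MSB-first bitwise CRC-16 over a bytes object.

    The default polynomial and seed are CCITT-style placeholders; substitute
    the values the S300's TGM CRC actually uses before relying on this.
    """
    crc = seed
    for byte in data:
        crc ^= byte << 8          # fold the next byte into the high bits
        for _ in range(8):        # process one bit at a time
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc
```

With the default parameters this is CRC-16/CCITT-FALSE, whose well-known check value over the ASCII string "123456789" is 0x29B1; that makes a handy sanity test for the bit-twiddling even before the S300-specific parameters are dropped in.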
The care-o-bot's S300 ROS package provides terrific example code for parsing range data and computing the CRC. (This package won't work out-of-the-box, as the care-o-bot's S300 driver is intended for the S300 Professional, which has quite different header information…but it doesn't take much effort to modify it for the S300 Standard. I'll make an announcement once I publish the ROS package that I've developed which works with the S300 Standard.)
If you’re dotting your i’s and crossing your t’s, you should request a token release when complete. Do so by sending the following command:
00 00 41 44 19 00 00 05 FF 07 19 00 00 05 FF 07 00 00 E7 B8
This isn’t required if you’ll only ever be communicating via the diagnostics port, but it surprisingly makes you feel clean and tidy if you do it, nonetheless.
The next step is to write a program which sends the described commands to the S300 Standard and parses responses, accordingly. If you don't want to go with what the care-o-bot code includes, simple, reusable serial port communications code for sending and receiving data in C++ may be found within the ROS package serial_port; an example package which uses it is the ax12-arm.
With this, the 30 meter ranging capabilities of the S300 Standard are available for programmatic use.
Posted on 23:40, October 22nd, 2010 by Billy McCafferty
As discussed in Part I, the Kalman filter is an optimal, prediction/correction estimation technique which minimizes estimated error and can be used on a wide array of problems. While Part I focused on a one-dimensional problem space to more easily convey the underlying concepts of the Kalman filter, Part II will now expand the discussion to bring the technique to higher dimensions (e.g., 2D and 3D) while still being constrained to linear problems.
What’s so cool about the Kalman filter you ask? Let’s highlight a few areas where the Kalman filter may provide value. (This should help you remain motivated while you’re delving into a myriad of Greek symbols and matrix transformations!) The Kalman filter can help:
These few examples demonstrate just how useful the Kalman filter can be on a wide variety of problems. To limit the scope of discussion, this tutorial will focus on determining the true pose of a mobile robot given noisy control inputs and measurement data.
In examining the Kalman filter a bit more, we’ll discuss the following topics:
The current discussion will avoid the derivations of the associated, base equations in favor of pragmatic use of the filter itself. For a comprehensive derivation and discussion of the involved equations and mathematical roots, see Robert Stengel’s Optimal Control and Estimation.
Applicable Systems for Use of the Kalman Filter
In Part I, it was discussed that a system must adhere to three constraints for Kalman filter applicability: 1) it must be describable by a linear model, 2) the noise within the model must be white, and 3) the noise within the model must be Gaussian. More precisely, the state must be estimable via:

xₜ = A xₜ₋₁ + B uₜ + εₜ
The state estimation equation is broken down as follows: xₜ is the current state; A is a matrix relating the previous state xₜ₋₁ to the current state; B is a matrix mapping the control input uₜ into state space; and εₜ is the white, Gaussian process noise.
In addition to adhering to the linear state estimation just discussed, in order to be Kalman-filter-compatible, the system being modeled must also have a measurement model estimable via:

zₜ = H xₜ + δₜ
The measurement equation is broken down as follows: zₜ is the measurement; H is a matrix mapping the state xₜ into measurement space; and δₜ is the white, Gaussian measurement noise.
In summary, if the system can be modeled by the process and measurement equations described above, then the Kalman filter may be used on the system to estimate state, if given control and measurement inputs. Let's now look at the general Kalman filter algorithm, at a very high level, including specific inputs and outputs.
Kalman Filter Algorithm, Inputs and Outputs
The Kalman filter follows a surprisingly straightforward algorithm broken down into two phases. The first phase is the time update (or prediction), in which the previous state and control input are used to estimate the current state and estimate covariance. The second phase is the measurement update (or correction), in which the Kalman gain is calculated and the state estimate and covariance are improved upon using the measurement data and the Kalman gain. Roughly, the algorithm is as follows:
During the first time iteration t₀, the Kalman filter accepts as input the initial state and estimate covariance (which may be zero if the initial state is known with 100% certainty) along with the control input u and measurement data z. On subsequent time iterations tₙ, the Kalman filter accepts as input the output from the previous run (with mean and covariance – discussed more below) along with the control input u and measurement data z from tₙ.
The output of the Kalman filter is an estimate of the state represented by a normal distribution having mean μ (the estimated state) and covariance Σ (the confidence, or more accurately, the noise, in that estimate). (As a reminder, the covariance of a normal distribution is the standard deviation squared, σ².) Note that μ need not be limited to a scalar value; in fact, it'll almost always be a vector. For example, the pose of a mobile robot may be a three-dimensional vector containing the location and orientation, (x, y, θ)ᵀ. Accordingly, this vector would be the resulting mean value. Furthermore, with a three-dimensional state vector as the mean, the covariance Σ would be a (3×3) diagonal matrix having a covariance for each corresponding value of the vector, as shown at right.
Kalman Filter Algorithm Formalized
We’ve discussed the initial Kalman filter equations for process and measurement estimation; we’ve also discussed the overall algorithm for implementation, broken down into prediction and correction phases. What’s missing are the actual calculations for concretely carrying out the estimation process itself. The concrete calculations for implementing the Kalman filter algorithm are “easily” derived from the process and measurement equations by taking the partial derivatives of them and setting them to zero for minimizing error…and jumping around three times and standing on your head for π minutes. (My eyes quickly begin to glaze over when I start to follow derivations of this nature…but if you like this kind of stuff, Sebastian Thrun shows the complete derivation within Probabilistic Robotics; Robert Stengel takes it to 11 within Optimal Control and Estimation with more Greek symbols than you can shake a stick at.) But I digress…
To formalize, the Kalman filter algorithm accepts four inputs:

1. the previous state estimate (mean), μₜ₋₁
2. the previous estimate covariance, Σₜ₋₁
3. the control input, uₜ
4. the measurement data, zₜ
With the given inputs, the Kalman filter algorithm is implemented as follows:

1. μ̄ₜ = A μₜ₋₁ + B uₜ
2. Σ̄ₜ = A Σₜ₋₁ Aᵀ + R
3. Kₜ = Σ̄ₜ Cᵀ (C Σ̄ₜ Cᵀ + Q)⁻¹
4. μₜ = μ̄ₜ + Kₜ (zₜ − C μ̄ₜ)
5. Σₜ = (I − Kₜ C) Σ̄ₜ
Line 1 should be comfortingly familiar; this is the calculation for estimating the current state given the previous state and control input. But what’s missing from the original process equation? Have you spotted it yet? I’ll give you a noisy hint. (No mean for the pun…thank you, thank you, I’ll be here all week.) That’s right, the noise parameter has been left off of the state estimation equation in line 1. Line 1 simply calculates the a priori state estimate, ignoring process noise.
Line 2 calculates the covariance of the current state estimate, taking process noise into consideration. Matrix A has already been discussed; it comes from the Kalman filter state estimate equation described earlier. R is a diagonal matrix representing the process noise covariance.
Line 3 calculates the Kalman gain which will be used to weight the effect of the measurement model when correcting the estimate. C is identical to the matrix H described earlier in the base Kalman filter measurement equation. As tricky as this line looks (and some of those matrix calculations can make your head hurt a bit), the only thing new is Q; this diagonal matrix is the measurement noise covariance. The resulting Kalman gain K is a matrix having dimensions (n×m) where n is the dimension of the state vector and m is the dimension of the measurement vector.
(As an aside, take note that in different reading sources, the meaning of R and Q may be switched: Q would be process noise and R would be measurement noise, with their locations in the equations swapped, accordingly. Just be cognizant of which is which within the source you're reading.)
Line 4 updates the state estimate taking into account the weighted measurement information. Note that the Kalman gain is multiplied by the difference between the actual measurement model and the predicted measurement model. What happens if they happen to be identical? …Jeopardy daily double sounds playing in the background… If the actual and predicted measurement models happen to be identical, then the estimated state will not be corrected at all since our sensors have verified that we’re exactly where we thought we were. I.e., don’t fix what ain’t broken. The result of line 4 is the a posteriori state estimate.
Finally, line 5 corrects the covariance, taking into account the Kalman gain used to correct the state estimate.
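The five lines discussed above can be sketched in pure Python for a small example: a robot moving along a line with state (position, velocity) and a scalar position measurement. The matrix helpers, variable names, and all numeric values below are illustrative assumptions, not from the post; with a scalar measurement, the matrix inverse in line 3 conveniently reduces to a division.

```python
# Minimal helpers for small list-of-lists matrices.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def mat_sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def kalman_step(mu, Sigma, u, z, A, B, C, R, Q):
    """One Kalman filter iteration (lines 1-5), for a scalar measurement."""
    # Line 1: a priori state estimate (process noise ignored).
    mu_bar = mat_add(mat_mul(A, mu), mat_mul(B, u))
    # Line 2: a priori covariance, inflated by process noise R.
    Sigma_bar = mat_add(mat_mul(mat_mul(A, Sigma), transpose(A)), R)
    # Line 3: Kalman gain; the 1x1 innovation covariance inverts by division.
    S = mat_add(mat_mul(mat_mul(C, Sigma_bar), transpose(C)), Q)
    K = [[row[0] / S[0][0]] for row in mat_mul(Sigma_bar, transpose(C))]
    # Line 4: a posteriori state estimate, corrected by the weighted innovation.
    innovation = mat_sub(z, mat_mul(C, mu_bar))
    mu_new = mat_add(mu_bar, mat_mul(K, innovation))
    # Line 5: corrected covariance.
    n = len(mu)
    I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    Sigma_new = mat_mul(mat_sub(I, mat_mul(K, C)), Sigma_bar)
    return mu_new, Sigma_new
```

Feeding one step with A = [[1, 1], [0, 1]] (constant-velocity motion), C = [[1, 0]] (we measure position only), and an uncertain prior pulls the estimate toward the measurement and shrinks the covariance, just as lines 4 and 5 promise.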
As output, the Kalman filter algorithm returns two values:
With these outputs, it is now known with some Σ amount of error what the current state of the system is; or where our intrepid little robot is on the map.
Limitations of the Kalman Filter
The Kalman filter is incredibly powerful and can be used in a surprising number of scenarios. The primary limitation of the Kalman filter is that it assumes use within a linear system. Many systems are non-linear (such as a mobile robot moving with a rotational trajectory) yet may still benefit from the Kalman filter. The applicable approach is to form a linear estimate of the non-linear system for use by the Kalman filter; similar in effect to a Taylor series expansion. Popular extensions to the Kalman filter to support non-linear systems include the Extended Kalman filter and, even better, the Unscented Kalman filter. Specifically, chapter 7 of Sebastian Thrun’s Probabilistic Robotics goes into good detail on describing how to apply both of these extensions to the context of mobile robotics.
Googling for “Kalman filter” will quickly show just how much more there is to this topic. But I certainly hope this two part series has helped to clarify the overall algorithm with particular attention to describing the various elements of the calculations themselves.
Posted on 22:16, October 14th, 2010 by Billy McCafferty
Our initial introduction to the Kalman filter was easy to understand because both the motion and measurement models were assumed to be one-dimensional. That's great if you're a lustrous point in Lineland, but the three-dimensional world must be dealt with sooner or later. Specifically, within the initial introduction, location (or state) x, the control input u, and the measurement z were all scalar (numeric) values along a one-dimensional line. For actual use of the Kalman filter, x, u, and z are much more frequently vectors instead of scalar units. In order for the vectors to play nicely with one another (to add and subtract them from each other), matrices must be used to transform the vectors into a common form. Accordingly, before delving further into the Kalman filter, this post provides a basic review of matrices and matrix operations to better prepare ourselves for more gory Kalman filter details.
To better visualize why we need to be concerned with matrices, assume you are using the Kalman filter for localization of your humble robot on a 2D map. Furthermore, assume that the robot is holonomic on a 2D plane (can turn in a circle on a dime). We now need to create a model to adequately represent the pose (or “kinematic configuration” if you’re feeling fancy), the motion model (the control input), and the measurement model (which we’ll ignore for now to focus on matrices).
The state, x, or pose of our robot, is succinctly represented as a three-dimensional column vector made up of the x and y coordinates of the robot on the two-dimensional map along with the robot’s orientation relative to the x axis, represented as θ. (Note that the x positional component here is decidedly different than the x vector variable representing the overall pose.) This 3D column vector, representing the pose on a 2D plane, is shown at right.
The control input, u, or motion model of our robot, can be represented in various forms, examples of which are described in detail in Probabilistic Robotics; but for the topic at hand, assume that the motion model is simply a constant velocity, v, between two ticks of time, represented as a 2D vector containing speed and direction.
With this information, if given the previous pose and the motion model over a given timeframe, we can then calculate the current pose. To do so, we’ll need a linear equation which adds the previous pose to the control input. But the velocity vector can’t simply be added to the vector representing the previous pose – we’re talking apples and oranges here. We’ll need a transformation matrix to transform the velocity into a 3D vector which can be added to the pose. This is starting to get into Part II of the Kalman filter introduction, but this starts to give you an idea of how matrices will be used in the Kalman filter.
So onward with our matrix primer!
As illustrated above, a column vector is an ordered set of values with n dimensions, where n is the number of values within the vector. The values within a vector need not be limited to being scalars; e.g., one or more values within the vector could also be a vector. By convention, a vector is assumed to be a column vector unless otherwise noted. A vector is symbolized as a bold-face, lower-case letter.
If all of the elements of a vector are 0, the vector is a null vector.
The transpose of a column vector is a row vector (and vice-versa) and has a superscript T to denote as such.
A matrix is a two-dimensional array of scalar values (or coefficients) having r rows and n columns, noted as having (r×n) dimensions. If both r and n are one, then the matrix is a scalar value. If just n is one, then the matrix is a column vector. If just r is one, then the matrix is a row vector. If r = n, then the matrix is a square matrix. Matrices are symbolized as bold-face, upper-case letters.
If all of the elements of a matrix are 0, the matrix is a null matrix. If all of the diagonal elements of a square matrix (e.g., a₁₁, a₂₂, …, aₙₙ) are nonzero while all off-diagonal elements are zero, the matrix is a diagonal matrix. If all of the diagonal elements of a diagonal matrix are 1, then the matrix is an identity matrix. An example identity matrix is shown at right.
The transpose of a matrix is the matrix “flipped” on its diagonal; it is created by writing the rows of A as the columns of AT. Accordingly, the columns (n) and rows (r) of A will equal the rows (r) and columns (n) of AT, respectively; e.g., if A has the dimensions (2×3) then AT has the dimensions (3×2).
When looking at available operations among scalars, vectors, and matrices, it's easiest to start with the multiplication of a matrix by a scalar value. Simply enough, each value within the matrix is multiplied by the scalar; quite elementary indeed.
Matrix/Matrix Addition & Subtraction
The next trivial operation is that of matrix-to-matrix addition and subtraction. Simply enough, each value in the first matrix is added to, or subtracted by, the respective element in the second matrix. In order to add or subtract two matrices, the matrices must have the same (r×n) dimensions.
As mentioned in the opening of this review, it is necessary within the Kalman filter to transform a control vector, for example, into a state vector, so that it may be added to the previous state to calculate the current state. This transformation is achieved by multiplying the control vector by a matrix representing how the control vector relates to the state.
In more generic terms, a resulting variable may be the result of a linear function of another vector and a matrix representing how the vector being acted upon relates to the result. (You might want to read that again.) The linear function for the result is written as y = Ax. More simply put, A is a matrix which represents how the vector x relates to y, the result; accordingly, A transforms x into y. In matrix-speak, this is a linear transformation.
In order to transform a vector by a matrix, the number of columns (n) of A must equal the dimension (n) of x. Additionally, the number of rows (r) of A will equal the dimension (n) of y. If these constraints hold, then A is said to be conformable to x.
The following demonstrates how each value of y is calculated:
Interestingly, if the matrix A is a diagonal matrix (square by implication), then each y value is the product of the corresponding x and diagonal value in the matrix. If the matrix A is an identity matrix (also square by implication), then each y value is equal to the corresponding x value. Examples of each are shown at right.
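A linear transformation y = Ax can be sketched in a few lines of Python; the function name and sample values are mine, not from the post. Each entry of y is the dot product of a row of A with x, which also makes the diagonal and identity special cases easy to verify.

```python
def transform(A, x):
    """Linear transformation y = Ax.

    A is an (r x n) list-of-lists matrix and x a length-n list; the result y
    has r entries, each the dot product of a row of A with x.
    """
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]
```

For instance, a diagonal A simply scales each component of x, and an identity A leaves x unchanged, matching the observations above.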
The last topic worth mentioning in detail, in our rather elementary review of matrices and matrix operations, is that of multiplying two matrices together.
In order to get the result of the product of two matrices, e.g., C = AB, the number of columns (n) of A must equal the number of rows (r) of B. The result, C, will have the number of rows (r) of A and the number of columns (n) of B.
The following demonstrates how each value of C is calculated:
Even when both A and B are square, in general AB ≠ BA, due to the order in which rows and columns are multiplied and summed. When multiplying by an identity matrix, however, A = AI = IA.
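The product C = AB, along with the non-commutativity and identity properties just mentioned, can be checked with a short sketch (function name mine, not from the post):

```python
def mat_mul(A, B):
    """Matrix product C = AB.

    Requires cols(A) == rows(B); c_ij is the dot product of row i of A
    with column j of B, so C has rows(A) rows and cols(B) columns.
    """
    assert len(A[0]) == len(B), "cols(A) must equal rows(B)"
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]
```

Trying it with two square matrices quickly confirms that swapping the operands changes the result, while multiplying by the identity does not.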
There is certainly much more to matrices and matrix operations, but the above gives enough to move on to Part II of our introduction to the Kalman filter and to understand the implication of matrices when used within signals, control, and robotics literature. Incidentally, this should also be enough information to understand just about every use of a vector and matrix within Sebastian Thrun's Probabilistic Robotics (a highly recommended read if you're interested in mobile robotics). For a more comprehensive review of matrices and their use within control systems, there are few texts better (albeit a bit daunting) than Robert Stengel's Optimal Control and Estimation.
Posted on 17:17, September 22nd, 2010 by Billy McCafferty
Dealing with the real world is rough. (With an opening like that, you can probably guess how my high school days went.) To be clear here, we’re talking about robots having to deal with the real world (not me, luckily). It’s hard enough for our little silicon brethren to have limited sensory capabilities, but on top of that, the data that’s coming in is usually noisy. For example, sonar range data can be inconsistent, throwing off distance measurements by many centimeters. Even laser data isn’t perfect; if you watch the range data coming in from stationary laser sensors, you’ll notice the range data shake indecisively around the actual range. What’s one to do? Well, you could shell out a little more money for ultra-sensitive sensors, but even they have their limitations and are subject to error. To add insult to injury, our robot’s misjudgment isn’t limited to data coming in from sensors, it’s also prone to misjudging what it’s done from a control perspective. For example, tell your robot to move forward 1 meter and it’ll comply as best it can; but due to various surface textures (road vs. dirt vs. carpet vs. sand), the robot may report that it’s traveled 1 meter when in fact it’s gone a bit further or a bit less. What we’d like to do is to filter out the chaff from the wheat…or the noise from the useful data in this case. Likely the most widely used, researched, and proven filtering technique is the Kalman filter. (Read: this is an important concept to understand!)
In short, if not a bit curtly, the Kalman filter is a recursive data processing algorithm which processes all available measurements, regardless of their precision, to estimate the current value of the variables of interest, such as a robot's position and orientation. There is a plethora of literature written concerning the Kalman filter, some useful and some otherwise; accordingly, to help you better understand the Kalman filter, I'll guide you through a series of readings which I've found pivotal in understanding this technique and discuss key points from those references.
The first step on our journey into the Kalman filter rabbit hole is Peter Maybeck's Stochastic Models, Estimation, and Control, Vol. 1, Chapter 1. There is simply no writing which gives a more tractable and approachable overview of the concepts behind the Kalman filter. If you're following along at home (singing along with the bouncing dot), please take the time now to read Maybeck's introduction and to understand what you're reading…I'll wait patiently for your return.
<insert you reading intently here>
So what did we just learn? Below are a few points to emphasize key ideas from Maybeck’s introduction…
Algorithmically, in our one-dimensional context, the Kalman filter takes the following steps to estimate the variable of interest:
Isn't that brilliant? The Kalman filter takes into account the previous state estimate, and the confidence of that estimate, to then determine the current estimate based on control input and measurement output, all the while tweaking itself – with the gain – along the way. And since it only needs to maintain the last estimate and the confidence of that estimate, it takes up very little memory to implement. While very approachable, Maybeck's introduction to the Kalman filter is greatly simplified…for that very reason. For example, Maybeck's introduction assumes movement along a one-dimensional line (an x value) with one-dimensional control input (the velocity). In more realistic contexts, we store the position and orientation of a robot as a vector, and the control and measurement data as vectors as well. In the next post, we'll examine what modifications need to be made to Maybeck's simplification to apply the Kalman filter to more real-world scenarios. This is where we must get our old college textbooks out to realize that we should have paid more attention in our matrices class (or you can refresh yourself here).
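To make the one-dimensional cycle concrete, here is a minimal predict/correct sketch in Python. The variable names and noise values are illustrative assumptions, not from Maybeck; the point is the shape of the computation: predict with the control input, then correct by a gain-weighted innovation.

```python
def kalman_1d(mu, var, u, z, process_var, meas_var):
    """One predict/correct cycle of a one-dimensional Kalman filter.

    mu, var: previous estimate and its variance
    u: control input (distance commanded this tick)
    z: position measurement
    process_var, meas_var: noise variances (illustrative values)
    """
    # Predict: move by u; uncertainty grows by the process noise.
    mu_bar = mu + u
    var_bar = var + process_var
    # Correct: the gain weights the measurement by relative confidence.
    K = var_bar / (var_bar + meas_var)
    mu_new = mu_bar + K * (z - mu_bar)
    var_new = (1 - K) * var_bar
    return mu_new, var_new
```

Notice that the corrected variance is always smaller than the predicted one: incorporating a measurement can only increase confidence, exactly as Maybeck's introduction describes.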
Until next time!
Maybeck, P. 1979. Stochastic Models, Estimation, and Control, Vol. 1.
Thrun, S., Burgard, W., Fox, D. 2005. Probabilistic Robotics.
Posted on 04:34, September 9th, 2010 by Billy McCafferty
The world of robotics has a dizzying number of subjects; it’s quite overwhelming at first glance to figure out which topics someone “really needs to get” and which topics require a more cursory understanding. Accordingly, this will be the first in a number of posts (“number” being linearly proportional to my motivation) that I will be doing on some of the more fundamental topics within the realm of robotics. We’ll begin our travels with “coordinate frames.” What are coordinate frames you ask…well slow down there fella…let’s first take a step back to figure out the motivations for wanting to ask such a question to begin with.
A “robot” is typically defined (more or less) as an autonomous system which interacts with its environment. Interaction may include actual manipulation of the robot’s environment; this requires some sort of manipulator. In (Siciliano, 2009), a manipulator is described as “a kinematic chain of rigid bodies connected by means of revolute or prismatic joints. One end of the chain is constrained to a base, while an end-effector is mounted to the other end.” This academic explanation can be more easily understood by looking at an AX-12 Smart Robot Arm: the aluminum links are the “rigid bodies,” the AX-12 servos are the “revolute joints,” and the gripper is the “end effector.” So what’s this have to do with coordinate frames? Well, a challenge in having a robot manipulate its environment is being able to determine and describe the position and orientation (together describing the pose) of the end effector in relation to what needs to be manipulated (and yes, this is very challenging). More specifically, one needs to describe both the pose of the end effector and target object in relation to a reference frame.
When considering an object’s pose within a reference frame, one first needs to know what a reference frame is to begin with. In short, a reference frame is “how the world is oriented”; i.e., which way’s North, South, up, down, etc. To describe the reference frame, and – more importantly – to enable one to provide bearing for where an object is within that frame, the frame is described with three axes: x, y, and (you guessed it) z. Without being oriented, if y points to the right and z points up, which way does x point? To determine this, use a handy trick (no mean for the pun) known as the “right-handed rule” (sorry south paws). To demonstrate, hold out your hand in front of your face like you were about to karate chop a board with your thumb sticking towards your face. If you point your index finger towards y, curl your other fingers towards z, then your thumb will point towards the positive direction of x.
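The right-handed convention can also be checked numerically with a cross product: in a right-handed frame, y × z = x (and cyclically, x × y = z and z × x = y). A small sketch, assuming unit vectors along each axis:

```python
def cross(a, b):
    """Cross product of two 3-vectors; the result follows the right-hand rule."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# Unit vectors of a right-handed frame.
X, Y, Z = (1, 0, 0), (0, 1, 0), (0, 0, 1)
```

So with y pointing right and z pointing up, the cross product confirms the thumb's answer: x points back toward you.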
The origin of the frame, the [0, 0, 0] value of the x, y, and z axes, is located at an arbitrary, but known, point within the environment or on an object. There can be a separate frame, each with its own origin, for each reference perspective in a given context; each is known as the coordinate frame of that context.
For example, suppose one is developing a robot to pick up toys and put them into a toy bin (can you tell I have kids?). In this case, there would likely be three coordinate frames of interest. The first would be a reference frame which would allow one to describe the pose of the toy and the manipulator in relation to it. For instance, the reference frame could have its origin in the corner of the room and be tied to the orientation of the room itself; a “reference frame” is simply a coordinate frame which does not change pose as other objects move through it. A second coordinate frame would be the coordinate frame from the perspective of the end effector. By applying a separate coordinate frame to the end effector, it’s now tractable to determine not only where the end effector is found in relation to the reference frame, but also how the end effector is oriented in relation to the reference frame and how its pose needs to be modified to reach another pose (with fun techniques like matrix transformations). As the end effector moves, its coordinate frame moves with it, as though fixed to a point on the end effector. Finally, a third coordinate frame would be that of the toy being picked up; this frame, in relation to the reference frame, would facilitate determining how the end effector’s pose needs to change to be in proper alignment for picking up the toy, taking the toy’s pose into account as well (read: lots more matrix transformations).
When applying a coordinate frame to an object or environment, two decisions must be made:
There is certainly a ton more to coordinate frames, manipulating them, comparing them to each other, and transforming them, than the light introduction provided here, but this should at least assist in removing the deer-in-headlights look if you’re unfamiliar with the term and someone brings it up in conversation at a cocktail party…which always happens.
Siciliano, B., Sciavicco, L., Villani, L., and Oriolo, G. 2009. Robotics: Modelling, Planning and Control. Springer.
Posted on 20:56, September 2nd, 2010 by Billy McCafferty
A computer science professor of mine, “back in the day,” once said that the greatest challenge in the future of software development will rest in the realm of integration. This is certainly true in robotics, if nowhere else. Due to the complexity involved with robotics, developers typically focus on very specific problems and develop very specific solutions to meet those challenges, accordingly. Path planning, SLAM, edge finding, handle manipulation, road recognition, planning with resource constraints, pattern and object recognition, D*, data mining algorithms, reinforcement learning, and Kalman and particle filters are just some examples of “specialty” subjects in robotics and AI for which people have developed pivotal software solutions. An immense opportunity has existed, and will continue to exist, in bringing these together into more generalized solutions which are able to reap the benefits of the more specialized solutions within a cohesive whole.
A persistent challenge in doing just that has been the technical complexity involved with integrating the plethora of solutions into a grouping which facilitates seamless communications among them. If someone were to attempt to integrate more than a handful of specialized solutions and frameworks, the time involved would nearly prohibit accomplishing the desired goal. Robot Operating System (ROS) seems to be changing the game a bit to make just such an endeavor more tractable. Now that core ROS development has stabilized, a large number of groups have been feverishly working on providing their “specialty” solutions as packages which seamlessly integrate with ROS. And that means that they are then much more interoperable with just about any other package that works with ROS.
There is truly a dizzying number of packages that have been provided for ROS (navigable at http://www.ros.org/browse/list.php). The packages range from algorithms for SLAM to hardware communication packages to wrappers for programming languages to wrappers for off-the-shelf systems and other existing software. It is this last category that gets me really jazzed. With such wrappers being introduced (not to be confused with “rappers,” a slightly different breed), we’re now able to leverage a number of very solid and mature tools and frameworks without being bogged down with low-level integration and communication issues. I’d like to briefly highlight a few ROS packages which provide just such accessibility to existing tools and frameworks:
The sampling above highlights just a few of the efforts by various groups to facilitate the integration of solid, existing tools and frameworks into ROS for easier communications with other packages and custom development. Ultimately, these efforts are lowering the barrier to integrate many great ideas and solutions into a more cohesive whole. This seems like a great indication of the current state of robotics…a sign that the industry is finally maturing enough that we’re now working towards integrating existing solutions – rather than re-inventing the wheel – in order to more aggressively push the envelope of what’s attainable and what’s imaginable.
Posted on 22:25, August 18th, 2010 by Billy McCafferty
I was presented with the following discussion opener concerning the management of complexity via the introduction of abstraction…
I love this discussion! The heart of this is determining if the introduction of abstractions, for the sake of hiding complexity, is truly an overall benefit. In robotics, we’re ever dealing with increasing levels of abstraction for this very reason. For example, in Architectural Paradigms of Robotic Control, I briefly discussed the 3T architecture which has three separate layers, implemented as increasing levels of abstraction. E.g., the skill/servo layer would likely be implemented in C++, the sequencing/execution layer might be implemented as a sequence modeling language, such as ESL or NDDL, while the planning/deliberative layer might be implemented with a higher abstraction yet, such as with the Planning Domain Definition Language or Ontology with Polymorphic Types (Opt).
In response to the concerns put forth, I would tend to agree that abstraction hides complexity and that it does make it more likely that you may be bitten by the hiding of the complexity. With that said, encapsulation of complexity into well-formed abstractions is an inevitable step to facilitate taking on increasingly complex problems. For example, in the .NET world of data access, Fluent NHibernate is really just a way of hiding the complexities of NHibernate. NHibernate is really just a way of hiding the complexities of ADO.NET. ADO.NET is really just a way of hiding the complexities of communicating to a database via TCP/IP sockets, or whatever underlying mechanism is employed. In the same vein, tools such as Herbal, NDDL, and ESL are similarly provided as a means of abstraction for hiding complexity.
Because these layers of complexity have been encapsulated in a manageable fashion, we’re now able to take on project work which would be far too complex to manage if we were using a lower level of implementation, e.g., pure C++, or Assembly for that matter. Indeed, there will be times when the added layers of abstraction will make it more difficult to tweak a low level capability, but the improved complexity management that the abstractions provide should far outweigh the sacrifice of losing some low level capabilities.
I think the crux of determining whether an abstraction is worthwhile comes down to the following questions:
If the answers to the above are yes, then I believe that the encapsulation of complexity, codified as a new layer of abstraction, is pulling its weight. Otherwise, you might not want to throw that Assembly language reference away just yet.
Posted on 05:23, August 3rd, 2010 by Billy McCafferty
While I wait for my $200K grant, my DARPA project award, Aldebaran to send me a few Naos, or Willow Garage to mail a PR2 my way (please contact me for shipping details), I spend much of my research time on the simulation side of robotics. In addition to being far less costly than purchasing hardware, working via simulators actually provides a number of benefits:
Obviously, it’s difficult to replace the experiences of working on real robots in real environments with real-world sensor errors, data fusion that doesn’t agree, and unpredictable dynamics, but simulators certainly provide a convenient means to try out new ideas or experiment in new areas.
Along those lines, I’d like to highlight a few simulators for mobile robotics which may pique your interest:
The above list is certainly not exhaustive, but should give a good introduction to available simulation environments. Other environments not mentioned which also have support for mobile robotics development in simulated environments include MapleSim, Simbad, Carmen, Urbi (compatible with Webots), lpzrobots, Moby, and OpenSim. Finally, robot competitions occasionally include simulated environments in their challenges; robots.net keeps a terrific listing of available competitions at http://robots.net/rcfaq.html; the AAAI conferences and competitions are particularly good at coming up with novel hardware and simulation challenges which truly push the envelope of progress.
Certainly enough to keep you busy for a while…not bad when you consider the many projects you can carry out before committing to buying a single piece of hardware!
Posted on 07:22, July 30th, 2010 by Billy McCafferty
Part VI: Adding a UI Layer to the Package
As the final chapter in this series of posts (Part I, II, III, IV, V), we’ll be adding a basic UI layer to facilitate user interaction with the underlying layers of our package. Specifically, a UI will be developed to allow the user (e.g., you) to start and stop the laser reporting application service via a wxWidgets interface. If you’re new to wxWidgets, it really is a terrific open-source UI package with very helpful online tutorials, a thriving community, and a very helpful book, Cross-Platform GUI Programming with wxWidgets – certainly a good reference to add to the bookshelf. Admittedly, the sample code discussed below is very simplistic and only touches upon wxWidgets; with that said, it should demonstrate how to put the basics in place and how the UI layer interacts with the other layers of the package.
Developing a UI layer with wxWidgets is quite straightforward; the UI itself is made up of two primary elements: a wxApp which is used to initialize the UI and a wxFrame which serves as the primary window. For the task at hand, the wxApp in the UI layer will be used to perform three primary tasks, in the order listed:
As a rule of thumb, the UI layer should only communicate with the rest of the package elements via the application services layer. E.g., the UI layer should not be invoking functions directly on domain objects found within ladar_reporter_core; instead, it should call tasks exposed by the application services layer, which then coordinates and delegates activity to lower levels.
Before we delve deeper, as a reminder of what the overall class diagram looks like, as developed over the previous posts, review the class diagram found within Part V. The current objective will be to add the UI layer, as illustrated in the package diagram found within Part I. To cut to the chase and download the end result of this post, click here.
Show me the code!
1. Setup the Package Skeleton, Domain Layer, Application Services Layer, and Message Endpoint Layer
2. Install wxWidgets
Download and install wxWidgets. Instructions for Ubuntu and Debian may be found at http://wiki.wxpython.org/InstallingOnUbuntuOrDebian.
3. Define the UI events that the user may raise
Create an enum class at src/ui/UiEvents.hpp to define UI events as follows:
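A sketch of what such a header might look like (the identifier names and values here are illustrative, not necessarily the original listing):

```cpp
// Sketch of src/ui/UiEvents.hpp (illustrative; the actual listing may differ).
#ifndef UI_EVENTS_HPP
#define UI_EVENTS_HPP

namespace ladar_reporter {
  // IDs for the UI events the user may raise from the frame's controls.
  enum UiEvents {
    ID_START_REPORTING = 1,
    ID_STOP_REPORTING,
    ID_QUIT
  };
}

#endif  // UI_EVENTS_HPP
```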
As suggested by the enum values, the user will be able to start the reporting process, stop it, and quit the application altogether.
4. Create the wxWidgets application header class
Create src/ui/LadarReporterApp.hpp containing the following code:
A few notes:
5. Create the wxWidgets application implementation class
Create src/ui/LadarReporterApp.cpp containing the following code:
The direction for this class was taken from wxWidgets online tutorials along with reviewing the ROS turtlesim package, which is a real treasure trove for seeing how a much more sophisticated ROS UI is put together. (If you have not already, I strongly suggest you review the turtlesim code in detail.)
6. Create the wxWidgets frame header class
Now that the wxWidgets application is in place, the frame, representing the UI window itself, needs to be developed. Accordingly, create src/ui/LadarReporterFrame.hpp containing the following code:
There are a couple of interesting bits in the header:
7. Create the wxWidgets frame implementation class
Create src/ui/LadarReporterFrame.cpp containing the following code:
A few implementation notes:
There’s obviously a lot of wxWidgets related information which I am glossing over which is beyond the scope of these posts. The wxWidgets documentation referenced earlier should fill in any remaining gaps.
8. Configure CMake to Include the Header and Implementation
With the header and implementation classes completed for both the wxWidgets application and frame, we need to make a couple of minor modifications to CMake for their inclusion in the build.
9. Add a ROS wxWidgets Dependency to manifest.xml
Since the package will be leveraging wxWidgets, a dependency needs to be added for the package to find and use this, accordingly:
10. Build and try out the UI Functionality
We are now ready to try everything out. While it is generally possible to write unit tests for the UI layer, personal experience has shown that the UI changes too frequently to make such unit tests worthwhile. UI unit tests quickly become a maintenance headache and do not provide much more value than what the existing unit tests have already proven; i.e., we’ve already verified through unit tests that the heart of our package – the domain objects, the message endpoints, and the application services – are all working as expected…the UI is now “simply” the final touch. Enough babble, let’s see this baby in action:
Well, that about wraps it up. We started by laying out our architecture and then systematically tackled each layer of the package with proper separation of concerns and unit testing to make sure we were doing what we said we were doing. As demonstrated with the layering approach that we developed, higher layers (e.g., application services and core) didn’t depend on lower layers (e.g., message endpoints and the ROS API). In fact, when possible, the lower layers actually depended on interfaces defined in the higher layers; e.g., the message endpoint implemented an interface defined in the higher core layer. (Although the class diagrams show core on the bottom, that’s actually reflecting the dependency inversion that was introduced.) This dependency inversion enabled a clean separation of concerns while allowing us to unit test the various layers in isolation from each other.
I sincerely hope that this series has shed some light on how to properly architect a ROS package. While this series did not go into a granular level of detail with respect to using ROS and wxWidgets, it should have provided a good starting point for developing a solid package. The techniques described in this series have been honed over many years by demi-gods of development (e.g., Martin Fowler, Robert Martin, Kent Beck, Ward Cunningham, and many others) and continue to prove their value in enabling the development of maintainable, extensible applications which are enjoyable to work on. While ROS may be relatively new, the tried-and-true lessons of professional development are quite timeless indeed.
As always, your feedback, questions, comments, suggestions, and even rebuttals are most welcome. To delve a bit further into many of the patterns oriented topics discussed, I recommend reading Gregor Hohpe’s Enterprise Integration Patterns and Robert Martin’s Agile Software Development, Principles, Patterns, and Practices. And obviously, for anything ROS related, you’ll want to keep reading everything you can at http://www.ros.org/wiki/ (and here at sharprobotica.com, of course)!
Posted on 05:13, July 28th, 2010 by Billy McCafferty
Part V: Developing and Testing the ROS Message Endpoint
[Author's note, July 28, 2010: Introduced boost::shared_ptr to manage the reference to the message endpoint so as to enable postponed construction, as will be required in Part VI.]
While this series (Part I, II, III, IV) has been specifically written to address writing well-designed packages for ROS, we’ve actually seen very little of ROS itself thus far. In fact, outside of the use of roscreate to generate the package basics and sensor_msgs::LaserScan for communicating laser scan data from the reader up to the application services layer, there’s been no indication that this application was actually intended to work with ROS now or ever. Ironically, this is exactly what we’d expect to see in a well designed ROS package.
Each layer that we’ve developed – as initially outlined in Part I – is logically separated from each other’s context of responsibility. To illustrate, the upper layers do not directly depend on “service” layers, such as message endpoints. Instead, the lower layers depend on abstract service interfaces declared in the upper layers. This dependency inversion was enabled in Part IV with the creation of ILaserScanEndpoint, a separated interface. If all of this dependency inversion and separated interface mumbo-jumbo has your head spinning at all, take some time to delve deeper into this subject in Dependency Injection 101.
While the actual message endpoint interface was created, only a test double was developed for testing the application service layer’s functionality. Accordingly, in this post, the concrete message endpoint “service,” which implements its separated interface, will be developed and tested. That’s right…we’ll finally actually talk to ROS! You can cut to the chase and download the source for this post.
Before digging into the code, it’s important to take a moment to better understand the purpose and usefulness of the message endpoint. The message endpoint encapsulates communications to the messaging middleware similarly to how a data repository encapsulates communications to a database. By encapsulating such communications, the rest of the application (ROS package, in our case) may remain blissfully oblivious to details such as how to publish messages to a topic or translate between messages and domain layer objects.
This separation of concerns helps to keep the application cleanly decoupled from the messaging middleware. Another benefit of this approach is enabling the development and testing of nearly the entirety of the application/package before “wiring” it up to the messaging middleware itself. This typically results in more reusable and readable code. If you haven’t already, I would encourage you to read the article Message-Based Systems for Maintainable, Asynchronous Development for a more complete discussion on message endpoints.
Target Class Diagram
The following diagram shows what the package will look like after completing the steps in this post…it’s beginning to look oddly familiar to the package diagram discussed in Part I of this series, isn’t it? If you’ve been following along, most of the elements have already been completed; only the concrete LaserScanEndpoint and LaserScanEndpointTests will need to be introduced along with a slight modification to the TestRunner.
1. Setup the Package Skeleton, Domain Layer and Application Services Layer
If not done already, follow the steps in Part II, Part III, and Part IV to create the package and develop/test the domain and application service layers. (Or just download the code from Part IV as a starting point to save some time.)
2. Create the message endpoint header class.
Create src/message_endpoints/LaserScanEndpoint.hpp containing the following code:
3. Create the message endpoint implementation class.
Create src/message_endpoints/LaserScanEndpoint.cpp containing the following code:
As you can see, there’s really not much to the actual publication process…which is what we were hoping for. The message endpoint should simply be a lightweight means of sending and receiving messages to/from the messaging middleware. This message endpoint does so as follows:
4. Configure CMake to Include the Header and Implementation
With the header and implementation classes completed, we need to make a couple of minor modifications to CMake for their inclusion in the build.
5. Build the message endpoints Class Library
In a terminal window, cd to /ladar_reporter and run
Like with everything else thus far…it’s now time to test our new functionality.
6. Unit Test the LaserScanEndpoint Functionality
While testing up to this point has been pretty straightforward, we now need to incorporate ROS package initialization within the test itself.
The first five parts of this series cover the primary elements of developing well-designed packages for Robot Operating System (ROS) using proven design patterns and proper separation of concerns. Obviously, this is not a trivially simple approach to developing ROS packages; indeed, it would be overkill for very simple packages. But as packages grow in size, scope, and complexity, the techniques described in this series should help to establish a maintainable, extensible package which doesn’t get too unruly as it evolves. In Part VI, the final part in this series, we’ll look at adding a simple UI layer, using wxWidgets, to interact with the package functionality.
Download the source for this post.
© 2011-2014 Codai, Inc. All Rights Reserved