Posts Tagged ‘Fundamentals’

Introduction to the Kalman Filter, Part II

Posted on 23:40, October 22nd, 2010 by Billy McCafferty

As discussed in Part I, the Kalman filter is an optimal, prediction/correction estimation technique which minimizes estimated error and can be used on a wide array of problems.  While Part I focused on a one-dimensional problem space to more easily convey the underlying concepts of the Kalman filter, Part II will now expand the discussion to bring the technique to higher dimensions (e.g., 2D and 3D) while still being constrained to linear problems.

What’s so cool about the Kalman filter you ask?  Let’s highlight a few areas where the Kalman filter may provide value.  (This should help you remain motivated while you’re delving into a myriad of Greek symbols and matrix transformations!)  The Kalman filter can help:

  • Determine the true pose of a mobile robot given its control input and measurements;
  • Provide sensor fusion capabilities by further correcting an estimate with each subsequent measurement input;
  • Track an object within a video (e.g., face tracking); and
  • Determine the attitude of a satellite using measurement of star locations.

These few examples demonstrate just how useful the Kalman filter can be on a wide variety of problems.  To limit the scope of discussions, the context of this tutorial will be on determining the true pose of a mobile robot given noisy control inputs and measurement data.


In examining the Kalman filter a bit more, we’ll discuss the following topics:

  • Applicable systems for use of the Kalman filter
  • Kalman filter algorithm, inputs and outputs
  • Kalman filter algorithm formalized
  • Limitations of the Kalman filter
  • Direction for further study

The current discussion will avoid the derivations of the associated base equations in favor of pragmatic use of the filter itself.  For a comprehensive derivation and discussion of the involved equations and mathematical roots, see Robert Stengel’s Optimal Control and Estimation.

Applicable Systems for Use of the Kalman Filter

In Part I, it was discussed that a system must adhere to three constraints for Kalman filter applicability:  1) it must be describable by a linear model, 2) the noise within the model must be white, and 3) the noise within the model must be Gaussian.  More precisely, the state must be estimable via:

xk = A xk-1 + B uk + wk-1

The state estimation equation is broken down as follows:

  • xk is the estimate of the state after taking into consideration the previous state, the control input, and process noise.
  • xk-1 is the state estimate from the previous time iteration (k-1), where k is the current time iteration.  The state will almost surely be a vector; e.g., (x y θ)T for a mobile robot on a 2D plane having a location (x, y) and orientation θ.
  • uk is the control input for the current time iteration.  While sensor data in the strict sense, odometry measurements may be considered control input, and would be a good fit for our mobile robot example. Accordingly, control input could be represented as a three-dimensional vector containing an initial rotation (in radians), a translation distance (distance travelled in a straight line), and a second rotation: (δrot1 δtrans δrot2)T.
  • wk-1 is the process noise, or the amount of error inherent in the state estimation when taking into consideration noisy control input.  (The process noise is Gaussian, having mean μ of 0 and covariance Σ.)  The process noise is a vector having the same dimension n as the state vector.
  • A is a matrix which transforms the previous state into the current state without regard to any control input.  For example, if the system being modeled were a satellite orbiting the earth, A would be a matrix which would modify the state to reflect the orbital distance traveled between time iterations k and (k – 1).  In more earth-bound scenarios, A would be an identity matrix.  Regardless, A is a square matrix of size (nxn) where n is the dimension of the state vector xk.
  • B is a matrix which transforms the control input to be compatible for summing to the previous state to reflect the current state.  Within our mobile robot context, the control input is (δrot1 δtrans δrot2)T which cannot simply be added to the previous state to determine the current state; accordingly, B must transform uk into a vector reflecting the relative state change induced by the control input.  For example, if the control input were to move the robot 55 mm on the x axis, 120 mm on the y axis, and 0.8 rad, then the result of Buk would be (55mm 120mm 0.8 rad) which could then be easily summed to the previous state to get the current state.  B is a matrix of size (nxl) where n is the dimension of the state vector and l is the dimension of the control vector.
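
To make the process equation concrete, here is a minimal sketch in Python/NumPy of the prediction xk = A xk-1 + B uk for the mobile robot example. For simplicity it assumes the control input has already been converted into a pose increment (as in the 55 mm / 120 mm / 0.8 rad example above), so both A and B are identity matrices; the raw (δrot1 δtrans δrot2)T odometry conversion is glossed over here, just as it is in the text.

    import numpy as np

    # State: pose (x, y, theta) of the robot on a 2D plane.
    x_prev = np.array([1000.0, 2000.0, 0.0])   # previous pose estimate (mm, mm, rad)

    A = np.eye(3)   # the robot stays put unless commanded to move
    B = np.eye(3)   # control is assumed to already be a pose increment
    u_k = np.array([55.0, 120.0, 0.8])         # (delta x, delta y, delta theta)

    # Process equation, ignoring the noise term w_{k-1}:
    x_pred = A @ x_prev + B @ u_k
    print(x_pred)   # predicted pose: roughly (1055 mm, 2120 mm, 0.8 rad)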

In addition to adhering to the linear state estimation just discussed, in order to be Kalman-filter-compatible, the system being modeled must also have a measurement model estimable via:

zk = H xk + vk

The measurement equation is broken down as follows:

  • zk is the predicted measurement model after taking into account the estimated state and known measurement noise.  Don’t take that at face value; ask yourself why it’s important to be able to predict what the measurement model will be for a given state.  Spoiler alert…  If we’re able to predict what the measurement model should be for a given state, then we can compare the predicted measurement model against the actual measurement data returned by our sensors.  The difference between the two will be used for improving the state estimation within the Kalman filter algorithm.  For our little robot, the measurement model may be a vector of laser scans (s1 s2 … sn)T, with each scan having a range and orientation (x θ)T.
  • xk is the result of the state estimation equation discussed above.
  • vk is the measurement noise, or the amount of error inherent in the measurement estimation when taking into consideration noisy sensor input.  (The measurement noise is Gaussian, having mean μ of 0 and covariance Σ.)  The measurement noise is a vector having the same dimension m as the resulting measurement vector.
  • H is a matrix which transforms the state xk into the predicted measurement model.  In other words, if given the state xk, Hxk will calculate what the measurement model should look like if there were no uncertainty involved.  H is of size (mxn) where m is the dimension of the measurement vector zk and n is the dimension of the state vector.
    And to answer your question, yes, H can be quite onerous to fabricate.  In fact, H may actually be implemented as a function, accepting a state and returning a measurement model based on the known map, estimated location and orientation; the Extended Kalman filter or other extension would need to be leveraged if we digressed in this way from our “simple” linear model.
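
As a much simpler illustration than the laser-scan model (which, as just noted, would realistically be a function), here is a hypothetical position-only sensor whose H simply drops θ from the state; a minimal sketch in Python/NumPy:

    import numpy as np

    x_k = np.array([1055.0, 2120.0, 0.8])    # state estimate (x, y, theta)

    # Hypothetical sensor that observes only position, not orientation.
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])          # (m x n) = (2 x 3)

    z_pred = H @ x_k                         # predicted measurement, ignoring noise v_k
    print(z_pred)                            # -> [1055. 2120.]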

In summary, if the system can be modeled by the process and measurement equations described above, then the Kalman filter may be used on the system to estimate state, if given control and measurement inputs.  Let’s now look at the general Kalman filter algorithm, at a very high level, including specific inputs and outputs.

Kalman Filter Algorithm, Inputs and Outputs

The Kalman filter follows a surprisingly straightforward algorithm broken down into two phases.  The first phase is called the time update (or prediction), in which the previous state and control input are used to estimate the current state and estimate covariance.  The second phase is called the measurement update (or correction), in which the Kalman gain is calculated and the state estimate and covariance are improved upon using measurement data and the Kalman gain.  Roughly, the algorithm is as follows:

  1. Estimate the predicted state based on process (control) input.  This estimate is the a priori estimate.
  2. Calculate the state estimate covariance (our confidence in the state estimate).
  3. Calculate the Kalman gain which will be used for weighting the amount of correction the measurement data will have on the state estimate.
  4. Refine the estimated state using measurement input.  This refined estimate is the a posteriori estimate.
  5. Refine the state estimate covariance (our confidence in the state estimate after taking measurement data into account).

During the first time iteration t0, the Kalman filter accepts as input the initial state and estimate covariance (which may be zero if the initial state is known with 100% certainty) along with the control input u and measurement data z.  On subsequent time iterations tn, the Kalman filter accepts as input the output from the previous run (with mean and covariance – discussed more below) along with the control input u and measurement data z from tn.

The output of the Kalman filter is an estimate of the state represented by a normal distribution having mean μ (the estimated state) and covariance Σ (the confidence, or more accurately, the noise, in that estimate).  (As a reminder, the variance of a one-dimensional normal distribution is the standard deviation squared, σ2.)  Note that μ need not be limited to a scalar value; in fact, it’ll almost always be a vector.  For example, the pose of a mobile robot may be a three-dimensional vector containing the location and orientation (x y θ)T.  Accordingly, this vector would be the resulting mean value.  Furthermore, with a three-dimensional state vector as the mean, the covariance Σ would be a (3×3) diagonal matrix having a covariance for each corresponding value of the vector, as sketched in the example below.
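
Since the original figure isn’t reproduced here, a quick sketch of what such an output pair might look like for the mobile robot (the numbers are purely illustrative):

    import numpy as np

    mu = np.array([1055.0, 2120.0, 0.8])     # estimated pose (x, y, theta)
    sigma = np.diag([25.0, 25.0, 0.01])      # (3x3) diagonal covariance: variances of
                                             # x (mm^2), y (mm^2), and theta (rad^2)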

Kalman Filter Algorithm Formalized

We’ve discussed the initial Kalman filter equations for process and measurement estimation; we’ve also discussed the overall algorithm for implementation, broken down into prediction and correction phases.  What’s missing are the actual calculations for concretely carrying out the estimation process itself.  The concrete calculations for implementing the Kalman filter algorithm are “easily” derived from the process and measurement equations by taking the partial derivatives of them and setting them to zero for minimizing error…and jumping around three times and standing on your head for π minutes.  (My eyes quickly begin to glaze over when I start to follow derivations of this nature…but if you like this kind of stuff, Sebastian Thrun shows the complete derivation within Probabilistic Robotics; Robert Stengel takes it to 11 within Optimal Control and Estimation with more Greek symbols than you can shake a stick at.)  But I digress…

To formalize, the Kalman filter algorithm accepts four inputs:

  • μt-1 – the mean state vector
  • Σt-1 – the covariance of the mean (the error in the state estimate)
  • ut – the process (control) input
  • zt – the measurement data

With the given inputs, the Kalman filter algorithm is implemented as follows (with μ̄t and Σ̄t denoting the intermediate, a priori estimates):

  1. μ̄t = A μt-1 + B ut
  2. Σ̄t = A Σt-1 AT + R
  3. Kt = Σ̄t CT (C Σ̄t CT + Q)-1
  4. μt = μ̄t + Kt (zt – C μ̄t)
  5. Σt = (I – Kt C) Σ̄t

Line 1 should be comfortingly familiar; this is the calculation for estimating the current state given the previous state and control input.  But what’s missing from the original process equation?  Have you spotted it yet?  I’ll give you a noisy hint.  (No mean for the pun…thank you, thank you, I’ll be here all week.)  That’s right, the noise parameter has been left off of the state estimation equation in line 1.  Line 1 simply calculates the a priori state estimate, ignoring process noise.

Line 2 calculates the covariance of the current state estimate, taking process noise into consideration.  Matrix A has already been discussed; it comes from the Kalman filter state estimate equation described earlier.  R is a diagonal matrix representing the process noise covariance.

Line 3 calculates the Kalman gain which will be used to weight the effect of the measurement model when correcting the estimate.  C is identical to the matrix H described earlier in the base Kalman filter measurement equation.  As tricky as this line looks (and some of those matrix calculations can make your head hurt a bit), the only thing new is Q; this diagonal matrix is the measurement noise covariance.  The resulting Kalman gain K is a matrix having dimensions (nxm) where n is the dimension of the state vector and m is the dimension of the measurement vector.

(As a side, take note that in different reading sources, the meaning of R and Q may be switched; Q would be process noise and R would be measurement noise and would have their locations in the equations swapped, accordingly.  Just be cognizant of which is which within the source you’re reading from.)

Line 4 updates the state estimate taking into account the weighted measurement information.  Note that the Kalman gain is multiplied by the difference between the actual measurement model and the predicted measurement model.  What happens if they happen to be identical?  …Jeopardy daily double sounds playing in the background…  If the actual and predicted measurement models happen to be identical, then the estimated state will not be corrected at all since our sensors have verified that we’re exactly where we thought we were.  I.e., don’t fix what ain’t broken.  The result of line 4 is the a posteriori state estimate.

Finally, line 5 corrects the covariance, taking into account the Kalman gain used to correct the state estimate.

As output, the Kalman filter algorithm returns two values:

  • μt – the current mean state vector.
  • Σt – the current covariance of the mean (the error in the state estimate).  The covariance matrix would be a diagonal matrix having dimensions (nxn) where n is the dimension of the state vector.

With these outputs, it is now known with some Σ amount of error what the current state of the system is; or where our intrepid little robot is on the map.
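
To tie lines 1 through 5 together, here is a minimal sketch of one full prediction/correction cycle in Python/NumPy, using the same symbols as above (A, B, C, R, Q). The example matrices and values at the bottom are illustrative placeholders, not numbers from the article.

    import numpy as np

    def kalman_filter(mu_prev, sigma_prev, u, z, A, B, C, R, Q):
        """One iteration of the (linear) Kalman filter.

        mu_prev, sigma_prev : previous state mean and covariance
        u, z                : control input and measurement for this step
        A, B                : process model matrices
        C                   : measurement model matrix (H in the earlier notation)
        R, Q                : process and measurement noise covariances
        """
        # --- Prediction (time update) ---
        mu_bar = A @ mu_prev + B @ u                    # line 1: a priori state
        sigma_bar = A @ sigma_prev @ A.T + R            # line 2: a priori covariance

        # --- Correction (measurement update) ---
        S = C @ sigma_bar @ C.T + Q                     # innovation covariance
        K = sigma_bar @ C.T @ np.linalg.inv(S)          # line 3: Kalman gain
        mu = mu_bar + K @ (z - C @ mu_bar)              # line 4: a posteriori state
        sigma = (np.eye(len(mu)) - K @ C) @ sigma_bar   # line 5: a posteriori covariance
        return mu, sigma

    # Illustrative (made-up) example: pose (x, y, theta), position-only sensor.
    A = np.eye(3)
    B = np.eye(3)
    C = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
    R = np.diag([10.0, 10.0, 0.01])     # process noise covariance
    Q = np.diag([50.0, 50.0])           # measurement noise covariance

    mu, sigma = np.zeros(3), np.diag([1.0, 1.0, 0.1])
    u = np.array([55.0, 120.0, 0.8])    # control, already expressed as a pose increment
    z = np.array([60.0, 115.0])         # noisy position measurement

    mu, sigma = kalman_filter(mu, sigma, u, z, A, B, C, R, Q)
    print(mu)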

Limitations of the Kalman Filter

The Kalman filter is incredibly powerful and can be used in a surprising number of scenarios.  The primary limitation of the Kalman filter is that it assumes use within a linear system.  Many systems are non-linear (such as a mobile robot moving with a rotational trajectory) yet may still benefit from the Kalman filter.  The applicable approach is to form a linear estimate of the non-linear system for use by the Kalman filter; similar in effect to a Taylor series expansion.  Popular extensions to the Kalman filter to support non-linear systems include the Extended Kalman filter and, even better, the Unscented Kalman filter.  Specifically, chapter 7 of Sebastian Thrun’s Probabilistic Robotics goes into good detail on describing how to apply both of these extensions to the context of mobile robotics.

Googling for “Kalman filter” will quickly show just how much more there is to this topic.  But I certainly hope this two part series has helped to clarify the overall algorithm with particular attention to describing the various elements of the calculations themselves.

Enjoy!
Billy McCafferty

Fundamentals: Vectors, Matrices and Matrix Operations

Posted on 22:16, October 14th, 2010 by Billy McCafferty

Our initial introduction to the Kalman filter was easy to understand because both the motion and measurement models were assumed to be one-dimensional. That’s great if you’re a lustrous point in Lineland, but the three-dimensional world must be dealt with sooner or later. Specifically, within the initial introduction, location (or state) x, the control input u, and the measurement z were all scalar (numeric) values along a one-dimensional line. For actual use of the Kalman filter, x, u, and z are much more frequently vectors instead of scalar units. In order for the vectors to play nicely with one another (to add and subtract them from each other), matrices must be used to transform the vectors into a common form. Accordingly, before delving further into the Kalman filter, this post provides a basic review of matrices and matrix operations to better prepare ourselves for more gory Kalman filter details.

To better visualize why we need to be concerned with matrices, assume you are using the Kalman filter for localization of your humble robot on a 2D map. Furthermore, assume that the robot is holonomic on a 2D plane (can turn in a circle on a dime). We now need to create a model to adequately represent the pose (or “kinematic configuration” if you’re feeling fancy), the motion model (the control input), and the measurement model (which we’ll ignore for now to focus on matrices).

The state, x, or pose of our robot, is succinctly represented as a three-dimensional column vector made up of the x and y coordinates of the robot on the two-dimensional map along with the robot’s orientation relative to the x axis, represented as θ. (Note that the x positional component here is decidedly different than the x vector variable representing the overall pose.) This 3D column vector, representing the pose on a 2D plane, is shown at right.

The control input, u, or motion model of our robot, can be represented in various forms, examples of which are described in detail in Probabilistic Robotics [1]; but for the topic at hand, assume that the motion model is simply a constant velocity, v, between two ticks of time, represented as a 2D vector containing speed and direction.

With this information, if given the previous pose and the motion model over a given timeframe, we can then calculate the current pose. To do so, we’ll need a linear equation which adds the previous pose to the control input. But the velocity vector can’t simply be added to the vector representing the previous pose – we’re talking apples and oranges here. We’ll need a transformation matrix to transform the velocity into a 3D vector which can be added to the pose. This is starting to get into Part II of the Kalman filter introduction, but this starts to give you an idea of how matrices will be used in the Kalman filter.

So onward with our matrix primer!


Vectors

As illustrated above, a column vector is an ordered set of values with n dimensions, where n is the number of values within the vector. The values within a vector need not be limited to being scalars; e.g., one or more values within the vector could also be a vector. By convention, a vector is assumed to be a column vector unless otherwise noted. A vector is symbolized as a bold-face, lower-case letter.

If all of the elements of a vector are 0, the vector is a null vector.

The transpose of a column vector is a row vector (and vice-versa) and has a superscript T to denote as such.

Matrices

A matrix is a two-dimensional array of scalar values (or coefficients) having r rows and n columns, noted as having (rxn) dimensions. If both r and n are one, then the matrix is a scalar value. If just n is one, then the matrix is a column vector. If just r is one, then the matrix is a row vector. If r = n then the matrix is a square matrix. Matrices are symbolized as a bold-face, upper-case letter.

If all of the elements of a matrix are 0, the matrix is a null matrix. If the diagonal elements of a square matrix (e.g., a11, a22, …, ann) are the only nonzero elements, the matrix is a diagonal matrix. If all of the diagonal elements of a diagonal matrix are 1, then the matrix is an identity matrix. An example identity matrix is shown at right.

The transpose of a matrix is the matrix “flipped” on its diagonal; it is created by writing the rows of A as the columns of AT.  Accordingly, the columns (n) and rows (r) of A will equal the rows (r) and columns (n) of AT, respectively; e.g., if A has the dimensions (2×3) then AT has the dimensions (3×2).

Matrix Operations

Scalar/Matrix Multiplication

When looking at available operations among scalars, vectors, and matrices, it’s easiest to start with the multiplication of a matrix by a scalar value.  Simply enough, each value within the matrix is multiplied by the scalar; quite elementary indeed.

Matrix/Matrix Addition & Subtraction

The next trivial operation is that of matrix-to-matrix addition and subtraction.  Simply enough, corresponding elements of the two matrices are added or subtracted, element by element.  In order to add or subtract two matrices, the matrices must have the same (rxn) dimensions.
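
A quick NumPy sanity check of both of these element-wise operations:

    import numpy as np

    A = np.array([[1, 2],
                  [3, 4]])
    B = np.array([[10, 20],
                  [30, 40]])

    print(3 * A)      # scalar multiplication: every element times 3
    print(A + B)      # element-wise addition (same (r x n) dimensions required)
    print(B - A)      # element-wise subtraction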

Matrix/Vector Multiplication

As mentioned in the opening of this review, it is necessary within the Kalman filter to transform a control vector, for example, into a state vector, so that it may be added to the previous state to calculate the current state.  This transformation is achieved by multiplying the control vector by a matrix representing how the control vector relates to the state.

In more generic terms, a resulting variable may be the result of a linear function of another vector and a matrix representing how the vector being acted upon relates to the result.  (You might want to read that again.)  The linear function for the result is written as y = Ax.  More simply put, A is a matrix which represents how the vector x relates to y, the result; accordingly, A transforms x into y.  In matrix-speak, this is a linear transformation.

In order to transform a vector by a matrix, the number of columns (n) of A must equal the dimension (n) of x.  Additionally, the number of rows (r) of A will equal the dimension of y.  If these constraints hold, then A is said to be conformable to x.

The following demonstrates how each value of y is calculated:

  • y1 = a11x1 + a12x2 + … + a1nxn
  • y2 = a21x1 + a22x2 + … + a2nxn
  • …
  • yr = ar1x1 + ar2x2 + … + arnxn

Interestingly, if the matrix A is a diagonal matrix (square by implication), then each y value is the product of the corresponding x and diagonal value in the matrix. If the matrix A is an identity matrix (also square by implication), then each y value is equal to the corresponding x value.  Examples of each are shown at right.
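
A small sketch of y = Ax in NumPy, including the diagonal and identity special cases just mentioned:

    import numpy as np

    A = np.array([[1, 2, 3],
                  [4, 5, 6]])              # (2 x 3): maps a 3-vector to a 2-vector
    x = np.array([1, 0, 2])

    y = A @ x
    print(y)                               # -> [ 7 16]  (y1 = 1*1 + 2*0 + 3*2, etc.)

    D = np.diag([2, 3, 4])                 # diagonal matrix: each y_i = d_ii * x_i
    print(D @ x)                           # -> [2 0 8]

    I = np.eye(3, dtype=int)               # identity matrix: y equals x
    print(I @ x)                           # -> [1 0 2]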

Matrix/Matrix Multiplication

The last topic worth mentioning in detail, in our rather elementary review of matrices and matrix operations, is that of multiplying two matrices together.

In order to get the result of the product of two matrices, e.g., C = AB, the number of columns (n) of A must equal the number of rows (r) of B.  The result, C, will have the number of rows (r) of A and the number of columns (n) of B.

The following demonstrates how each value of C is calculated:

  • c11 = a11b11 + a12b21
  • c12 = a11b12 + a12b22
  • c21 = a21b11 + a22b21
  • c22 = a21b12 + a22b22
  • c31 = a31b11 + a32b21
  • c32 = a31b12 + a32b22

Even if both A and B are square, in general AB ≠ BA, due to the order in which rows and columns are multiplied and summed.  But when multiplying by an identity matrix, A = AI = IA.
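
The same (3×2)·(2×2) product in NumPy, along with a quick check of the non-commutativity and identity properties:

    import numpy as np

    A = np.array([[1, 2],
                  [3, 4],
                  [5, 6]])                 # (3 x 2)
    B = np.array([[7, 8],
                  [9, 10]])                # (2 x 2)

    C = A @ B                              # (3 x 2): columns of A match rows of B
    print(C)                               # rows: [25 28], [57 64], [89 100]

    # For square matrices, AB and BA generally differ:
    P = np.array([[1, 2], [3, 4]])
    Q = np.array([[0, 1], [1, 0]])
    print(np.array_equal(P @ Q, Q @ P))    # -> False

    # Multiplying by the identity changes nothing: A = AI = IA
    I2, I3 = np.eye(2, dtype=int), np.eye(3, dtype=int)
    print(np.array_equal(A, A @ I2), np.array_equal(A, I3 @ A))   # -> True True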


There is certainly much more to matrices and matrix operations, but the above gives enough to move on to Part II of our introduction to the Kalman filter and to understand the implication of matrices when used within signals, control, and robotics literature.  Incidentally, this should also be enough information to understand just about every use of a vector and matrix within Sebastian Thrun’s Probabilistic Robotics (a highly recommended read if you’re interested in mobile robotics).  For a more comprehensive review of matrices and their use within control systems, there are few better texts (albeit a bit daunting) than Robert Stengel’s Optimal Control and Estimation.

Enjoy!
Billy McCafferty

Introduction to the Kalman Filter, Part I

Posted on 17:17, September 22nd, 2010 by Billy McCafferty

Dealing with the real world is rough. (With an opening like that, you can probably guess how my high school days went.) To be clear here, we’re talking about robots having to deal with the real world (not me, luckily). It’s hard enough for our little silicon brethren to have limited sensory capabilities, but on top of that, the data that’s coming in is usually noisy. For example, sonar range data can be inconsistent, throwing off distance measurements by many centimeters. Even laser data isn’t perfect; if you watch the range data coming in from stationary laser sensors, you’ll notice the range data shake indecisively around the actual range. What’s one to do? Well, you could shell out a little more money for ultra-sensitive sensors, but even they have their limitations and are subject to error. To add insult to injury, our robot’s misjudgment isn’t limited to data coming in from sensors, it’s also prone to misjudging what it’s done from a control perspective. For example, tell your robot to move forward 1 meter and it’ll comply as best it can; but due to various surface textures (road vs. dirt vs. carpet vs. sand), the robot may report that it’s traveled 1 meter when in fact it’s gone a bit further or a bit less. What we’d like to do is to filter out the chaff from the wheat…or the noise from the useful data in this case. Likely the most widely used, researched, and proven filtering technique is the Kalman filter. (Read: this is an important concept to understand!)

In short, if not a bit curtly, the Kalman filter is a recursive data processing algorithm which processes all available measurements, regardless of their precision, to estimate the current value of variables of interest, such as a robot’s position and orientation. There is a plethora of literature written concerning the Kalman filter, some useful and some otherwise; accordingly, to help you better understand the Kalman filter, I’ll guide you through a series of readings which I’ve found pivotally assistive in understanding this technique and discuss key points from those references.

The first step on our journey into the Kalman filter rabbit hole is Peter Maybeck’s Stochastic Models, Estimation, and Control, Vol. 1, Chapter 1 [1]. There is simply no writing which gives a more tractable and approachable overview of the concepts behind the Kalman filter. If you’re following along at home (singing along with the bouncing dot), please take the time to now read Maybeck’s introduction and take the time to understand what you’re reading…I’ll wait patiently for your return.

<insert you reading intently here>

So what did we just learn? Below are a few points to emphasize key ideas from Maybeck’s introduction…

  • A Kalman filter can only be applied to a system exhibiting three attributes: it must be a linear model, the noise within the model must be white, and the noise within the model must be Gaussian (normal bell-shaped curve having a mean and standard deviation). (Interestingly, if we relax the first condition to accommodate non-linear models, we must use a technique known as an Extended Kalman filter which will be a subject of a successive post; and this is very important as once a robot starts taking a curved path, it becomes a non-linear problem.)
  • The Kalman filter is built upon the ideas encapsulated by Bayes’ filter. (For a great introduction to the Bayes filter, see chapter 2.4 of Sebastian Thrun’s Probabilistic Robotics [2].)  Indeed, the Kalman filter is an optimal incarnation of Bayes’ filter: its result is the mean and the mode of the Gaussian result, it is the maximum likelihood estimate, and it is the linear estimate whose variance is less than that of any other linear unbiased estimate; in other words, whichever angle you look at it from, the result is just about as good as it gets.
  • There are two kinds of noise which need to be considered: process (or control) noise, which results from imprecise movement and manipulation control, and measurement noise, which results from imprecise sensory data, such as sonar and laser data. When calculating an estimate, the process and process noise are considered first, with the measurement and measurement noise being applied next.
  • The “white noise” of the system being analyzed need not be constant; a “shaping filter” can be used to describe how the noise changes over time assuming that the amplitude of the noise can be described at any point in time as a Gaussian distribution.
  • In reading Maybeck’s introduction, you may have been surprised by the form of equation 1-1 of the introduction. If you skip back to equation 1-16, you’ll be reminded why this is the case. As the variance of the process/control noise and the variance of the process prediction approach infinity, the gain approaches 1. Accordingly, no “weight” is put on the control input while all trust is put on the measurement result itself. Now, going back to equation 1-1, since control isn’t being considered at all, it is effectively removed from the equation, much as if it had an infinite error variance.

Algorithmically, in our one-dimensional context, the Kalman filter takes the following steps to estimate the variable of interest:

  1. Calculate the predicted state based on process (control) input (1-11),
  2. Calculate the error variance of the predicted state (1-12),
  3. Calculate the gain which will be used to “correct” the predicted state when the measurement data is applied (1-15),
  4. Re-calculate the predicted state, taking into account measurement data and the gain (1-13), and
  5. Re-calculate the error variance taking into account the improvement that the measurement data has added to the prediction (1-14).

Isn’t that brilliant? The Kalman filter takes into account the previous state estimation, and the confidence of that estimation, to then determine the current estimate based on control input and measurement output, all the while tweaking itself – with the gain – along the way. And since it only needs to maintain the last estimate and the confidence of that estimate, it takes up very little memory to implement. While very approachable, Maybeck’s introduction to the Kalman filter is greatly simplified…for that very reason. For example, Maybeck’s introduction assumes movement along a one-dimensional line (an x value) with one-dimensional control input (the velocity). In more realistic contexts, we store the position and orientation of a robot as a vector and the control and measurement data as vectors as well. In the next post, we’ll examine what modifications need to be made to Maybeck’s simplification to apply the Kalman filter to more real-world scenarios. This is where we must get our old college textbooks out to realize that we should have paid more attention in our matrices class (or you can refresh yourself here).
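
For the one-dimensional case Maybeck walks through, the entire filter fits in a few lines. The sketch below (plain Python, with made-up noise variances) mirrors the five steps listed above: predict from the control input, then correct with the measurement.

    def kalman_1d(x_prev, var_prev, u, z, var_process, var_meas):
        """One prediction/correction cycle for a one-dimensional state."""
        # 1. Predict the state from the control (e.g., commanded displacement).
        x_pred = x_prev + u
        # 2. Error variance of the prediction grows by the process noise.
        var_pred = var_prev + var_process
        # 3. Gain: how much to trust the measurement vs. the prediction.
        k = var_pred / (var_pred + var_meas)
        # 4. Correct the prediction with the measurement.
        x_new = x_pred + k * (z - x_pred)
        # 5. Corrected error variance (always smaller than var_pred).
        var_new = (1 - k) * var_pred
        return x_new, var_new

    # Illustrative numbers: robot told to move 1.0 m, sensor reads 0.9 m travelled.
    x, var = kalman_1d(x_prev=0.0, var_prev=0.0, u=1.0, z=0.9,
                       var_process=0.04, var_meas=0.01)
    print(x, var)   # estimate lands nearer the measurement (0.92 m), with reduced variance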

Until next time!
Billy McCafferty


References

Maybeck, P. 1979. Stochastic Models, Estimation, and Control, Vol. 1.

Thrun, S., Burgard, W., Fox, D. 2005. Probabilistic Robotics.

Fundamentals: Coordinate Frames

Posted on 04:34, September 9th, 2010 by Billy McCafferty

The world of robotics has a dizzying number of subjects; it’s quite overwhelming at first glance to figure out which topics someone “really needs to get” and which topics require a more cursory understanding. Accordingly, this will be the first in a number of posts (“number” being linearly proportional to my motivation) that I will be doing on some of the more fundamental topics within the realm of robotics. We’ll begin our travels with “coordinate frames.” What are coordinate frames you ask…well slow down there fella…let’s first take a step back to figure out the motivations for wanting to ask such a question to begin with.

A “robot” is typically defined (more or less) as an autonomous system which interacts with its environment. Interaction may include actual manipulation of the robot’s environment; this requires some sort of manipulator. In (Siciliano, 2009), a manipulator is described as “a kinematic chain of rigid bodies connected by means of revolute or prismatic joints. One end of the chain is constrained to a base, while an end-effector is mounted to the other end.” This academic explanation can be more easily understood by looking at an AX-12 Smart Robot Arm: the aluminum links are the “rigid bodies,” the AX-12 servos are the “revolute joints,” and the gripper is the “end effector.” So what’s this have to do with coordinate frames? Well, a challenge in having a robot manipulate its environment is being able to determine and describe the position and orientation (together describing the pose) of the end effector in relation to what needs to be manipulated (and yes, this is very challenging). More specifically, one needs to describe both the pose of the end effector and the target object in relation to a reference frame.

When considering an object’s pose within a reference frame, one first needs to know what a reference frame is to begin with. In short, a reference frame is “how the world is oriented”; i.e., which way’s North, South, up, down, etc. To describe the reference frame, and – more importantly – to enable one to provide a bearing for where an object is within that frame, the frame is described with three axes: x, y, and (you guessed it) z. Without being oriented, if y points to the right and z points up, which way does x point? To determine this, use a handy trick (no mean for the pun) known as the “right-hand rule” (sorry, southpaws). To demonstrate, hold out your hand in front of your face like you were about to karate chop a board, with your thumb sticking towards your face. If you point your index finger towards y and curl your other fingers towards z, then your thumb will point towards the positive direction of x.

The origin of the frame, the [0, 0, 0] value of the x, y, and z axes, is located on an arbitrary, but known, point within the environment or on an object. There can be a frame, and a different origin accordingly, for each reference perspective for a given context; each would be known as the coordinate frame of the given context.

For example, suppose one is developing a robot to pick up toys and put them into a toy bin (can you tell I have kids?). In this case, there would likely be three coordinate frames of interest. The first would be a reference frame which would allow one to describe the pose of the toy and the manipulator in relation to that reference frame. For instance, the reference frame could have its origin in the corner of the room and be tied to the orientation of the room itself; a “reference frame” is simply a coordinate frame which does not change pose as other objects move through it. A second coordinate frame would be the coordinate frame from the perspective of the end effector. By applying a separate coordinate frame to the end effector, it’s now tractable to determine not only where the end effector is found in relation to the reference frame, but also how the end effector is oriented in relation to the reference frame and how the pose needs to be modified to reach another pose (with fun techniques like matrix transformations). As the end effector would be moved, its coordinate frame would move with it, figuratively fixed to a point on the end effector. Finally, a third coordinate frame would be that of the toy being picked up; this frame, in relation to the reference frame, would facilitate determining how the end effector’s pose needs to change to be in proper alignment for picking up the toy, taking into account the toy’s pose as well (read, lots more matrix transformations).

When applying a coordinate frame to an object or environment, two decisions must be made:

  • What fixed point on the object or environment should the coordinate frame be applied to? If using RobotIQ’s way cool, underactuated Adaptive Gripper, the coordinate frame might have its origin at the base of the “solo” finger. If talking about an airport, the coordinate frame (in this case the “reference frame”) might have its origin at the base of the control tower.
  • How will the coordinate frame be oriented with respect to the environment or object it is applied to; i.e., which way should x, y, and z point out of the origin? By convention, z would point “up” out of the origin. With the airport example, z would point towards the sky through the control tower. For movable objects, it’s not so straightforward to pick which way is “up.” So when applying a coordinate frame to a movable object (e.g., an end effector), what’s most important is to pick a point and an orientation of the frame, with respect to the object it’s applied to, and keep that pose relationship between the object and the frame fixed as the object changes pose. For example, if a coordinate frame were applied to a mobile robot base, and the robot turns 90 degrees clockwise, then its coordinate frame would turn 90 degrees clockwise with it while the reference frame would remain static.
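
To make the “frame turns with the robot” idea concrete, here is a small Python/NumPy sketch (2D for brevity) that expresses a point known in the robot’s own frame in the reference frame, before and after the robot turns 90 degrees clockwise; the poses are made up purely for illustration.

    import numpy as np

    def pose_to_matrix(x, y, theta):
        """Homogeneous transform of a robot frame posed at (x, y, theta) in the reference frame."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, x],
                         [s,  c, y],
                         [0,  0, 1]])

    point_in_robot = np.array([1.0, 0.0, 1.0])    # 1 m straight ahead of the robot (homogeneous)

    T = pose_to_matrix(2.0, 3.0, 0.0)             # robot at (2, 3), facing along +x
    print(T @ point_in_robot)                     # -> [3. 3. 1.]

    T = pose_to_matrix(2.0, 3.0, -np.pi / 2)      # same spot, turned 90 degrees clockwise
    print(T @ point_in_robot)                     # -> [2. 2. 1.]: the point "ahead" now lies toward -y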

There is certainly a ton more to coordinate frames, manipulating them, comparing them to each other, and transforming them, than the light introduction provided here, but this should at least assist in removing a deer in headlights look if you’re unfamiliar with this term and someone brings it up in conversation during a cocktail party…which always happens.

Enjoy!
Billy McCafferty


References

Siciliano, B. 2009. Robotics: Modelling, Planning and Control.
