## Introduction to the Kalman Filter, Part II

Posted on 23:40, October 22nd, 2010 by Billy McCafferty

As discussed in Part I, the Kalman filter is an optimal, prediction/correction estimation technique which minimizes estimated error and can be used on a wide array of problems. While Part I focused on a one-dimensional problem space to more easily convey the underlying concepts of the Kalman filter, Part II will now expand the discussion to bring the technique to higher dimensions (e.g., 2D and 3D) while still being constrained to linear problems. What’s so cool about the Kalman filter, you ask? Let’s highlight a few areas where the Kalman filter may provide value. (This should help you remain motivated while you’re delving into a myriad of Greek symbols and matrix transformations!) The Kalman filter can help:

- Determine the true pose of a mobile robot given its control input and measurements;
- Provide sensor fusion capabilities by further correcting an estimate with each subsequent measurement input;
- Track an object within a video (e.g., face tracking); and
- Determine the attitude of a satellite using measurement of star locations.
These few examples demonstrate just how useful the Kalman filter can be on a wide variety of problems. To limit the scope of discussion, the context of this tutorial will be determining the true pose of a mobile robot given noisy control inputs and measurement data. In examining the Kalman filter a bit more, we’ll discuss the following topics:

- Applicable systems for use of the Kalman filter
- Kalman filter algorithm, inputs and outputs
- Kalman filter algorithm formalized
- Limitations of the Kalman filter
- Direction for further study
The current discussion will avoid derivations of the associated base equations in favor of pragmatic use of the filter itself. For a comprehensive derivation and discussion of the involved equations and their mathematical roots, see Robert Stengel’s Optimal Control and Estimation.
In Part I, it was discussed that a system must adhere to three constraints for Kalman filter applicability: 1) it must be describable by a linear model, 2) the noise within the model must be white, and 3) the noise within the model must be Gaussian. More precisely, the state must be estimable via:

**x**_{k} = **A** **x**_{k-1} + **B** **u**_{k} + **w**_{k-1}

The state estimation equation is broken down as follows:

- **x**_{k} is the estimate of the state after taking into consideration the previous state, the control input, and process noise.
- **x**_{k-1} is the state estimate from the previous time iteration (k-1), where k is the current time iteration. The state will almost surely be a vector; e.g., (*x* *y* θ)^{T} for a mobile robot on a 2D plane having a location (*x*, *y*) and orientation θ.
- **u**_{k} is the control input for the current time iteration. While being sensor data in the strict sense, odometry measurements may be considered control input, and would be a good fit for our mobile robot example. Accordingly, control input could be represented as a three-dimensional vector containing an initial rotation (in radians), a translation distance (distance travelled in a straight line), and a second rotation: (δ_{rot1} δ_{trans} δ_{rot2})^{T}.
- **w**_{k-1} is the process noise, or the amount of error inherent in the state estimation when taking into consideration noisy control input. (The process noise is Gaussian, having mean μ of 0 and covariance Σ.) The process noise is a vector having the same dimension *n* as the state vector.
- **A** is a matrix which transforms the previous state into the current state without regard to any control input. For example, if the system being modeled were a satellite orbiting the earth, **A** would modify the state to reflect the orbital distance traveled between time iterations (k-1) and k. In more earth-bound scenarios, **A** would be an identity matrix. Regardless, **A** is a square matrix of size (*n* x *n*) where *n* is the dimension of the state vector **x**_{k}.
- **B** is a matrix which transforms the control input to be compatible for summing with the previous state to reflect the current state. Within our mobile robot context, the control input is (δ_{rot1} δ_{trans} δ_{rot2})^{T}, which cannot simply be added to the previous state to determine the current state; accordingly, **B** must transform **u**_{k} into a vector reflecting the relative state change induced by the control input. For example, if the control input were to move the robot 55 mm on the *x* axis, 120 mm on the *y* axis, and 0.8 rad, then the result of **B** **u**_{k} would be (55 mm 120 mm 0.8 rad)^{T}, which could then easily be summed with the previous state to get the current state. **B** is a matrix of size (*n* x *l*) where *n* is the dimension of the state vector and *l* is the dimension of the control vector.
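To make the process equation concrete, here is a minimal NumPy sketch of the noise-free prediction for the mobile robot example. The pose values are hypothetical, and **B** **u**_{k} is supplied directly as the relative state change borrowed from the example above:

```python
import numpy as np

# Previous pose estimate: x = 100 mm, y = 200 mm, theta = 0.5 rad
x_prev = np.array([100.0, 200.0, 0.5])

# A is the identity for our earth-bound robot: left alone, the state
# carries over unchanged from one time iteration to the next.
A = np.eye(3)

# B u_k, the relative state change induced by the control input,
# using the example from the text: 55 mm on x, 120 mm on y, 0.8 rad.
Bu = np.array([55.0, 120.0, 0.8])

# Noise-free prediction: x_k = A x_{k-1} + B u_k
x_pred = A @ x_prev + Bu
# x_pred is now (155 mm, 320 mm, 1.3 rad)
```
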
In addition to adhering to the linear state estimation just discussed, to be Kalman-filter-compatible the system being modeled must also have a measurement model estimable via:

**z**_{k} = **H** **x**_{k} + **v**_{k}

The measurement equation is broken down as follows:

- **z**_{k} is the predicted measurement model after taking into account the estimated state and known measurement noise. Don’t take that at face value; ask yourself why it’s important to be able to predict what the measurement model will be for a given state. Spoiler alert… If we’re able to predict what the measurement model should be for a given state, then we can compare the *predicted* measurement model against the *actual* measurement data returned by our sensors. The difference between the two will be used for improving the state estimation within the Kalman filter algorithm. For our little robot, the measurement model may be a vector of laser scans (**s**_{1} **s**_{2} … **s**_{n})^{T}, with each scan having an orientation and range (*x* θ)^{T}.
- **x**_{k} is the result of the state estimation equation discussed above.
- **v**_{k} is the measurement noise, or the amount of error inherent in the measurement estimation when taking into consideration noisy sensor input. (The measurement noise is Gaussian, having mean μ of 0 and covariance Σ.) The measurement noise is a vector having the same dimension *m* as the resulting measurement vector.
- **H** is a matrix which transforms the state **x**_{k} into the predicted measurement model. In other words, if given the state **x**_{k}, **H** **x**_{k} will calculate what the measurement model should look like if there were no uncertainty involved. **H** is of size (*m* x *n*) where *m* is the dimension of the measurement vector **z**_{k} and *n* is the dimension of the state vector. And to answer your question: yes, **H** can be quite onerous to fabricate. In fact, **H** may actually be implemented as a function, accepting a state and returning a measurement model based on the known map, estimated location and orientation; the Extended Kalman filter or another extension would need to be leveraged if we digressed in this way from our “simple” linear model.
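As a small illustration of the measurement equation, the sketch below assumes a hypothetical sensor that observes the robot’s (x, y) position directly but not its orientation (a far simpler **H** than the laser-scan model described above):

```python
import numpy as np

# Estimated state (x, y, theta), continuing the prediction example.
x_k = np.array([155.0, 320.0, 1.3])

# Hypothetical sensor observing x and y directly but not theta;
# H is then (m x n) = (2 x 3).
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

# Noise-free predicted measurement: z_k = H x_k
z_pred = H @ x_k
# z_pred is now (155 mm, 320 mm)
```
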
In summary, if the system can be modeled by the process and measurement equations described above, then the Kalman filter may be used on the system to estimate state, given control and measurement inputs. Let’s now look at the general Kalman filter algorithm, at a very high level, including specific inputs and outputs.
The Kalman filter follows a surprisingly straight-forward algorithm broken down into two phases. The first phase is the *prediction* phase:

- Estimate the predicted state based on process (control) input. This estimate is the *a priori* estimate.
- Calculate the state estimate covariance (our confidence in the state estimate).

The second phase is the *correction* phase:

- Calculate the Kalman gain which will be used for weighting the amount of correction the measurement data will have on the state estimate.
- Refine the estimated state using measurement input. This refined estimate is the *a posteriori* estimate.
- Refine the state estimate covariance (our confidence in the state estimate after taking measurement data into account).

During the first time iteration, initial values must be supplied for the state and its covariance. The output of the Kalman filter is an estimate of the state represented by a normal distribution having a mean and a covariance.
We’ve discussed the initial Kalman filter equations for process and measurement estimation; we’ve also discussed the overall algorithm for implementation, broken down into prediction and correction phases. What’s missing are the actual calculations for concretely carrying out the estimation process itself. The concrete calculations for implementing the Kalman filter algorithm are “easily” derived from the process and measurement equations by taking their partial derivatives and setting them to zero to minimize error…and jumping around three times and standing on your head for π minutes. (My eyes quickly begin to glaze over when I start to follow derivations of this nature…but if you like this kind of stuff, Sebastian Thrun shows the complete derivation within Probabilistic Robotics; Robert Stengel takes it to 11 within Optimal Control and Estimation with more Greek symbols than you can shake a stick at.) But I digress… To formalize, the Kalman filter algorithm accepts four inputs:

- *μ*_{t-1} – the mean state vector
- **Σ**_{t-1} – the covariance of the mean (the error in the state estimate)
- *u*_{t} – the process (control) input
- *z*_{t} – the measurement data
With the given inputs, the Kalman filter algorithm is implemented as follows (in the notation of Thrun’s Probabilistic Robotics):

1. *μ̄*_{t} = **A** *μ*_{t-1} + **B** *u*_{t}
2. **Σ̄**_{t} = **A** **Σ**_{t-1} **A**^{T} + **R**_{t}
3. **K**_{t} = **Σ̄**_{t} **H**^{T} (**H** **Σ̄**_{t} **H**^{T} + **Q**_{t})^{-1}
4. *μ*_{t} = *μ̄*_{t} + **K**_{t} (*z*_{t} - **H** *μ̄*_{t})
5. **Σ**_{t} = (**I** - **K**_{t} **H**) **Σ̄**_{t}

Line 1 should be comfortingly familiar; this is the calculation for estimating the current state given the previous state and control input. But what’s missing from the original process equation? Have you spotted it yet? I’ll give you a noisy hint. (No mean for the pun…thank you, thank you, I’ll be here all week.) That’s right, the noise parameter has been left off of the state estimation equation in line 1. Line 1 simply calculates the *mean* of the state estimate, and the process noise, having a mean of 0, does not shift the mean. Line 2 calculates the covariance of the current state estimate, taking the process noise covariance **R**_{t} into consideration. Line 3 calculates the Kalman gain which will be used to weight the effect of the measurement model when correcting the estimate. (As an aside, take note that in different reading sources, the meanings of the noise symbols vary; **R** and **Q**, for instance, are sometimes swapped.) Line 4 updates the state estimate taking into account the weighted measurement information. Note that the Kalman gain is multiplied by the difference between the actual measurement *z*_{t} and the predicted measurement **H** *μ̄*_{t}. Finally, line 5 corrects the covariance, taking into account the Kalman gain used to correct the state estimate. As output, the Kalman filter algorithm returns two values:

- *μ*_{t} – the current mean state vector.
- **Σ**_{t} – the current covariance of the mean (the error in the state estimate). The covariance matrix has dimensions (*n* x *n*) where *n* is the dimension of the state vector.
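The five lines above translate almost directly into NumPy. This is a minimal sketch, not production code; the 1x1 matrices at the end are just a scalar sanity check with made-up numbers:

```python
import numpy as np

def kalman_filter(mu, Sigma, u, z, A, B, H, R, Q):
    """One Kalman filter iteration (lines 1-5).

    R is the process noise covariance and Q the measurement noise
    covariance; as noted, symbol choices vary between sources.
    """
    # Line 1: a priori state estimate (the mean; zero-mean noise drops out)
    mu_bar = A @ mu + B @ u
    # Line 2: a priori covariance, inflated by the process noise
    Sigma_bar = A @ Sigma @ A.T + R
    # Line 3: Kalman gain
    K = Sigma_bar @ H.T @ np.linalg.inv(H @ Sigma_bar @ H.T + Q)
    # Line 4: a posteriori estimate, corrected by the weighted innovation
    mu_new = mu_bar + K @ (z - H @ mu_bar)
    # Line 5: a posteriori covariance
    Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma_bar
    return mu_new, Sigma_new

# Scalar sanity check using 1x1 matrices and made-up numbers.
mu, Sigma = kalman_filter(
    mu=np.array([0.0]), Sigma=np.array([[1.0]]),
    u=np.array([1.0]), z=np.array([1.2]),
    A=np.eye(1), B=np.eye(1), H=np.eye(1),
    R=np.array([[0.1]]), Q=np.array([[0.5]]))
```

Note how the gain mediates the correction: with a large measurement noise Q, K shrinks and the filter trusts the prediction; with a large process noise R, K grows and the filter trusts the measurement.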
With these outputs, it is now known, with a quantifiable level of confidence (the covariance), where the robot is believed to be.
The Kalman filter is incredibly powerful and can be used in a surprising number of scenarios. The primary limitation of the Kalman filter is that it assumes use within a linear system. Many systems are non-linear (such as a mobile robot moving with a rotational trajectory) yet may still benefit from the Kalman filter. The applicable approach is to form a linear estimate of the non-linear system for use by the Kalman filter; similar in effect to a Taylor series expansion. Popular extensions to the Kalman filter to support non-linear systems include the Extended Kalman filter and, even better, the Unscented Kalman filter. Specifically, chapter 7 of Sebastian Thrun’s Probabilistic Robotics goes into good detail on describing how to apply both of these extensions to the context of mobile robotics. Googling for “Kalman filter” will quickly show just how much more there is to this topic. But I certainly hope this two part series has helped to clarify the overall algorithm with particular attention to describing the various elements of the calculations themselves. Enjoy!
## Fundamentals: Vectors, Matrices and Matrix Operations

Posted on 22:16, October 14th, 2010 by Billy McCafferty

Our initial introduction to the Kalman filter was easy to understand because both the motion and measurement models were assumed to be one-dimensional. That’s great if you’re a lustrous point in Lineland, but the three dimensional world must be dealt with sooner or later. Specifically, within the initial introduction, location (or state) was represented as a single scalar value. To better visualize why we need to be concerned with matrices, assume you are using the Kalman filter for localization of your humble robot on a 2D map. Furthermore, assume that the robot is holonomic on a 2D plane (can turn in a circle on a dime). We now need to create a model to adequately represent the pose (or “kinematic configuration” if you’re feeling fancy), the motion model (the control input), and the measurement model (which we’ll ignore for now to focus on matrices). The state, or pose, can be represented as a vector holding the robot’s location and orientation. The control input, in turn, can be represented as a velocity vector. With this information, if given the previous pose and the motion model over a given timeframe, we can then calculate the current pose. To do so, we’ll need a linear equation which adds the previous pose to the control input. But the velocity vector can’t simply be added to the vector representing the previous pose – we’re talking apples and oranges here. We’ll need a matrix to transform the control input into terms which can be added to the pose. So onward with our matrix primer!

### Vectors
As illustrated above, a vector is a one-dimensional array of scalar values. If all of the elements of a vector are 0, the vector is a null vector. The transpose of a vector, written **x**^{T}, turns a column vector into a row vector, and vice versa.
### Matrices

A matrix is a two-dimensional array of scalar values (or coefficients) having a number of rows and columns. If all of the elements of a matrix are 0, the matrix is a null matrix. The transpose of a matrix is the matrix “flipped” on its diagonal; it is created by writing the rows of **A** as the columns of **A**^{T}.
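A quick NumPy illustration of the transpose, with made-up values; the rows of **A** become the columns of **A**^{T}:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])   # a 2 x 3 matrix

A_T = A.T                   # the rows of A become the columns of A^T
# A_T is the 3 x 2 matrix [[1, 4], [2, 5], [3, 6]]
```
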
When looking at available operations among scalars, vectors, and matrices, it’s easiest to start with the multiplication of a matrix by a scalar value. Each value within the matrix is simply multiplied by the scalar; quite elementary indeed.
The next trivial operation is that of matrix-to-matrix addition and subtraction. Simply enough, each value in the first matrix is added to, or subtracted by, the respective element in the second matrix. In order to add or subtract two matrices, the matrices must have the same dimensions (the same number of rows and columns).
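Both of these elementary operations can be shown in a few lines of NumPy (values are arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

# Scalar multiplication: every element is multiplied by the scalar.
C = 2 * A          # [[2, 4], [6, 8]]

# Addition and subtraction are element-wise and require that both
# matrices share the same dimensions.
D = A + B          # [[6, 8], [10, 12]]
E = B - A          # [[4, 4], [4, 4]]
```
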
As mentioned in the opening of this review, it is necessary within the Kalman filter to transform a control vector, for example, into a state vector, so that it may be added to the previous state to calculate the current state. This transformation is achieved by multiplying the control vector by a matrix representing how the control vector relates to the state. In more generic terms, a resulting variable may be the result of a linear function of another vector and a matrix representing how the vector being acted upon relates to the result. (You might want to read that again.) The linear function for the result is written as **y** = **A** **x**. In order to transform a vector by a matrix, the number of columns (*n*) of the matrix must equal the dimension (*n*) of the vector being transformed. The following demonstrates how each value of **y** is calculated:

- *y*_{1} = *a*_{11} *x*_{1} + *a*_{12} *x*_{2} + … + *a*_{1n} *x*_{n}
- *y*_{2} = *a*_{21} *x*_{1} + *a*_{22} *x*_{2} + … + *a*_{2n} *x*_{n}
- …
- *y*_{r} = *a*_{r1} *x*_{1} + *a*_{r2} *x*_{2} + … + *a*_{rn} *x*_{n}
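The component equations above are exactly what a matrix-vector product computes; a small NumPy check with arbitrary values:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])   # 2 x 3, so n = 3 columns
x = np.array([1.0, 0.0, 2.0])     # a vector of dimension 3

# Each y_i is the dot product of row i of A with x, exactly as in
# the component equations above.
y = A @ x
# y is (7, 16): 1*1 + 2*0 + 3*2 = 7 and 4*1 + 5*0 + 6*2 = 16
```
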
Interestingly, if the matrix **A** is the identity matrix, then **y** = **x**; the transformation leaves the vector unchanged.
The last topic worth mentioning in detail, in our rather elementary review of matrices and matrix operations, is that of multiplying two matrices together. In order to take the product of two matrices, e.g., **C** = **AB**, the number of columns of **A** must equal the number of rows of **B**. The following demonstrates how each value of **C** is calculated for a (3 x 2) matrix **A** multiplied by a (2 x 2) matrix **B**:

- *c*_{11} = *a*_{11} *b*_{11} + *a*_{12} *b*_{21}
- *c*_{12} = *a*_{11} *b*_{12} + *a*_{12} *b*_{22}
- *c*_{21} = *a*_{21} *b*_{11} + *a*_{22} *b*_{21}
- *c*_{22} = *a*_{21} *b*_{12} + *a*_{22} *b*_{22}
- *c*_{31} = *a*_{31} *b*_{11} + *a*_{32} *b*_{21}
- *c*_{32} = *a*_{31} *b*_{12} + *a*_{32} *b*_{22}
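A NumPy sketch of the (3 x 2) by (2 x 2) product expanded above, with arbitrary values:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])   # 3 x 2
B = np.array([[1.0, 0.0],
              [2.0, 1.0]])   # 2 x 2: columns of A match rows of B

# c_ij = a_i1 * b_1j + a_i2 * b_2j, exactly as expanded above.
C = A @ B
# C is the 3 x 2 matrix [[5, 2], [11, 4], [17, 6]]
```
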
If both matrices are square and of the same dimensions, the product can be taken in either order, though in general **AB** ≠ **BA**. There is certainly much more to matrices and matrix operations, but the above gives enough to move on to Part II of our introduction to the Kalman filter and to understand the implication of matrices when used within control systems and robotics literature. Incidentally, this should also be enough information to understand just about every use of a vector and matrix within Sebastian Thrun’s Probabilistic Robotics (a highly recommended read if you’re interested in mobile robotics). For a more comprehensive review of matrices and their use within control systems, there are few texts better (albeit, a bit daunting) than Robert Stengel’s Optimal Control and Estimation.
Enjoy!
## Introduction to the Kalman Filter, Part I

Posted on 17:17, September 22nd, 2010 by Billy McCafferty

Dealing with the real world is rough. (With an opening like that, you can probably guess how my high school days went.) To be clear here, we’re talking about robots having to deal with the real world (not me, luckily). It’s hard enough for our little silicon brethren to have limited sensory capabilities, but on top of that, the data that’s coming in is usually noisy. For example, sonar range data can be inconsistent, throwing off distance measurements by many centimeters. Even laser data isn’t perfect; if you watch the range data coming in from stationary laser sensors, you’ll notice the range data shake indecisively around the actual range. What’s one to do? Well, you could shell out a little more money for ultra-sensitive sensors, but even they have their limitations and are subject to error. To add insult to injury, our robot’s misjudgment isn’t limited to data coming in from sensors; it’s also prone to misjudging what it’s done from a control perspective. For example, tell your robot to move forward 1 meter and it’ll comply as best it can; but due to various surface textures (road vs. dirt vs. carpet vs. sand), the robot may report that it’s traveled 1 meter when in fact it’s gone a bit further or a bit less. What we’d like to do is filter out the chaff from the wheat…or the noise from the useful data in this case. Likely the most widely used, researched, and proven filtering technique is the Kalman filter. (Read: this is an important concept to understand!) In short, if not a bit curtly, the Kalman filter is a recursive data processing algorithm which processes all available measurements, regardless of their precision, to estimate the current value of variables of interest, such as a robot’s position and orientation.
There is a plethora of literature written concerning the Kalman filter, some useful and some otherwise; accordingly, to help you better understand the Kalman filter, I’ll guide you through a series of readings which I’ve found pivotally assistive in understanding this technique and discuss key points from those references. The first step on our journey into the Kalman filter rabbit hole is Peter Maybeck’s Stochastic Models, Estimation, and Control, Vol. 1, Chapter 1 [1]. There is simply no writing which gives a more tractable and approachable overview of the concepts behind the Kalman filter. If you’re following along at home (singing along with the bouncing dot), please take the time to now read Maybeck’s introduction and take the time to understand what you’re reading…I’ll wait patiently for your return. <insert you reading intently here> So what did we just learn? Below are a few points to emphasize key ideas from Maybeck’s introduction… - A Kalman filter can only be applied to a system exhibiting three attributes: it must be a linear model, the noise within the model must be white, and the noise within the model must be Gaussian (normal bell-shaped curve having a mean and standard deviation). (Interestingly, if we relax the first condition to accommodate non-linear models, we must use a technique known as an Extended Kalman filter which will be a subject of a successive post; and this is very important as once a robot starts taking a curved path, it becomes a non-linear problem.)
- The Kalman filter is built upon the ideas encapsulated by Bayes’ filter. (For a great introduction to the Bayes filter, see chapter 2.4 of Sebastian Thrun’s Probabilistic Robotics [2].) Indeed, the Kalman filter is an optimal incarnation of Bayes’ filter: its result is the mean and the mode of the Gaussian result, it is the maximum likelihood estimate, and it is the linear estimate whose variance is less than that of any other linear unbiased estimate; in other words, from whichever angle you look at it, the result is just about as good as it gets.
- There are two kinds of noise which need to be considered: process (or control) noise, which results from imprecise movement and manipulation control, and measurement noise, which results from imprecise sensory data, such as sonar and laser data. When calculating an estimate, the process input and process noise are considered first, with the measurement and measurement noise being applied next.
- The “white noise” of the system being analyzed need not be constant; a “shaping filter” can be used to describe how the noise changes over time assuming that the amplitude of the noise can be described at any point in time as a Gaussian distribution.
- In reading Maybeck’s introduction, you may have been surprised to see that the control input plays no role in the state estimate, as described in equation 1-1 of the introduction. If you skip back to equation 1-16, you’ll be reminded why this is the case. As the variance of the process/control noise approaches infinity and the variance of the process prediction approaches infinity, the gain approaches 1. Accordingly, no “weight” is put on the control input while all trust is put on the measurement result itself. Now, going back to equation 1-1, since control isn’t being considered at all, it is effectively removed from the equation, just as if it had an infinite error variance.
Algorithmically, in our one-dimensional context, the Kalman filter takes the following steps to estimate the variable of interest:

- Calculate the predicted state based on process (control) input (1-11),
- Calculate the error variance of the predicted state (1-12),
- Calculate the gain which will be used to “correct” the predicted state when the measurement data is applied (1-15),
- Re-calculate the predicted state, taking into account measurement data and the gain (1-13), and
- Re-calculate the error variance taking into account the improvement that the measurement data has added to the prediction (1-14).
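The five steps translate almost line-for-line into code. Below is a one-dimensional sketch in Python; the variable names and sample numbers are mine, not Maybeck’s:

```python
def kalman_1d(x, p, u, z, q, r):
    """One scalar Kalman filter iteration.

    x, p: previous state estimate and its error variance
    u:    control input (e.g., commanded displacement)
    z:    measurement of the state
    q, r: process and measurement noise variances
    """
    x_pred = x + u                     # 1. predict the state (1-11)
    p_pred = p + q                     # 2. predicted error variance (1-12)
    k = p_pred / (p_pred + r)          # 3. gain (1-15)
    x_new = x_pred + k * (z - x_pred)  # 4. correct with measurement (1-13)
    p_new = (1 - k) * p_pred           # 5. corrected error variance (1-14)
    return x_new, p_new

# Made-up numbers: start at 0 with variance 1, command a 1 m move,
# then measure 0.8 m; the estimate lands between the prediction and
# the measurement, weighted by the gain.
x, p = kalman_1d(x=0.0, p=1.0, u=1.0, z=0.8, q=0.1, r=0.5)
```
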
Isn’t that brilliant? The Kalman filter takes into account the previous state estimation, and the confidence of that estimation, to then determine the current estimate based on control input and measurement output, all the while tweaking itself – with the gain – along the way. And since it only needs to maintain the last estimate and confidence of the estimate, it takes up very little memory to implement. While very approachable, Maybeck’s introduction to the Kalman filter is greatly simplified…for that very reason. For example, Maybeck’s introduction assumes movement on a one-dimensional plane (an x value) with one-dimensional control input (the velocity). In more realistic contexts, we store the position and orientation of a robot as a vector and the control and measurement data as vectors as well. In the next post, we’ll examine what modifications need to be made to Maybeck’s simplification to apply the Kalman filter to more real world scenarios. This is where we must get our old college textbooks out to realize that we should have paid more attention in our matrices class (or you can refresh yourself here). Until next time!

### References
1. Maybeck, P. 1979. Stochastic Models, Estimation, and Control, Vol. 1.
2. Thrun, S., Burgard, W., Fox, D. 2005. Probabilistic Robotics.
## Fundamentals: Coordinate Frames

Posted on 04:34, September 9th, 2010 by Billy McCafferty

The world of robotics has a dizzying number of subjects; it’s quite overwhelming at first glance to figure out which topics someone “really needs to get” and which topics require a more cursory understanding. Accordingly, this will be the first in a number of posts (“number” being linearly proportional to my motivation) that I will be doing on some of the more fundamental topics within the realm of robotics. We’ll begin our travels with “coordinate frames.” What are coordinate frames, you ask…well slow down there fella…let’s first take a step back to figure out the motivations for wanting to ask such a question to begin with. A “robot” is typically defined (more or less) as an autonomous system which interacts with its environment. Interaction with an environment requires knowing the pose (position and orientation) of the robot and of the objects around it, and a pose only has meaning when described within a reference frame. When considering an object’s pose within a reference frame, one first needs to know what a reference frame is to begin with. In short, a reference frame is “how the world is oriented”; i.e., which way’s North, South, up, down, etc. To describe the reference frame, and – more importantly – to enable one to provide bearing for where an object is within that frame, the frame is described with three axes: x, y, and (you guessed it) z. Without being oriented, if y points to the right and z points up, which way does x point? To determine this, use a handy trick (no mean for the pun) known as the “right-handed rule” (sorry, south paws). To demonstrate, hold out your hand in front of your face like you were about to karate chop a board, with your thumb sticking towards your face. If you point your index finger towards y and curl your other fingers towards z, then your thumb will point towards the positive direction of x. The origin of the frame, the [0, 0, 0] value of the x, y, and z axes, is located on an arbitrary, but known, point within the environment or on an object.
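The right-handed rule has a numeric counterpart: the cross product. A small NumPy check that, with y pointing right and z pointing up, x = y × z:

```python
import numpy as np

# Unit vectors for the y (right) and z (up) axes of the frame.
y_axis = np.array([0.0, 1.0, 0.0])
z_axis = np.array([0.0, 0.0, 1.0])

# In a right-handed frame, x = y cross z: the numeric counterpart of
# the finger-curling trick described above.
x_axis = np.cross(y_axis, z_axis)
# x_axis is (1, 0, 0): x points toward your thumb, out of the board
```
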
There can be a frame, and a different origin accordingly, for each reference perspective in a given context; each would be known as the coordinate frame of the given context. For example, suppose one is developing a robot to pick up toys and put them into a toy bin (can you tell I have kids?). In this case, there would likely be three coordinate frames of interest. The first would be a reference frame which would allow one to describe the pose of the toy and the manipulator in relation to that reference frame. For instance, the reference frame could have its origin in the corner of the room and be tied to the orientation of the room itself; a “reference frame” is simply a coordinate frame which does not change pose as other objects move through it. A second coordinate frame would be the coordinate frame from the perspective of the end effector. By applying a separate coordinate frame to the end effector, it’s now tractable to determine not only where the end effector is found in relation to the reference frame, but also how the end effector is oriented in relation to the reference frame and how the pose needs to be modified to reach another pose (with fun techniques like matrix transformations). As the end effector moved, its coordinate frame would move with it, figuratively fixed to a point on the end effector. Finally, a third coordinate frame would be that of the toy being picked up; this frame, in relation to the reference frame, would facilitate determining how the end effector’s pose needs to change to be in proper alignment for picking up the toy, taking into account the toy’s pose as well (read: lots more matrix transformations). When applying a coordinate frame to an object or environment, two decisions must be made:

- What fixed point on the object or environment should the coordinate frame be applied to? If using RobotIQ’s way cool, underactuated Adaptive Gripper, the coordinate frame might have its origin at the base of the “solo” finger.
If talking about an airport, the coordinate frame (in this case the “reference frame”) might have its origin at the base of the control tower.
- How will the coordinate frame be oriented with respect to the environment or object it is applied to; i.e., which way should x, y, and z point out of the origin? By convention, z would point “up” out of the origin. With the airport example, z would point towards the sky through the control tower. For movable objects, it’s not so straightforward to pick which way is “up.” So when applying a coordinate frame to a movable object (e.g., an end effector), what’s most important is to pick a point and an orientation of the frame, with respect to the object it’s applied to, and keep that pose relationship between the object and the frame fixed as the object changes pose. For example, if a coordinate frame were applied to a mobile robot base, and the robot turns 90 degrees clockwise, then its coordinate frame would turn 90 degrees clockwise with it while the reference frame would remain static.
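The “frame turns with the object” idea can be sketched with a simple 2D rotation matrix; the hypothetical robot below starts with its x axis along the world x axis:

```python
import numpy as np

def rot2d(theta):
    """2D rotation matrix; positive theta rotates counter-clockwise."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

# The robot's frame starts with its x axis along the world x axis.
x_axis = np.array([1.0, 0.0])

# Turn the robot (and hence its attached frame) 90 degrees clockwise.
x_axis_after = rot2d(-np.pi / 2) @ x_axis
# x_axis_after is (0, -1): the frame's x axis now points along the
# world's negative y axis, while the reference frame itself is unmoved.
```
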
There is certainly a ton more to coordinate frames, manipulating them, comparing them to each other, and transforming them, than the light introduction provided here, but this should at least assist in removing the deer-in-headlights look if you’re unfamiliar with this term and someone brings it up in conversation during a cocktail party…which, of course, happens all the time. Enjoy!

© 2011-2014 **Codai, Inc.** All Rights Reserved