Posted on 23:40, October 22nd, 2010 by Billy McCafferty
As discussed in Part I, the Kalman filter is an optimal prediction/correction estimation technique which minimizes estimated error and can be applied to a wide array of problems. While Part I focused on a one-dimensional problem space to more easily convey the underlying concepts of the Kalman filter, Part II will now expand the discussion to bring the technique to higher dimensions (e.g., 2D and 3D) while still being constrained to linear problems.
What’s so cool about the Kalman filter you ask? Let’s highlight a few areas where the Kalman filter may provide value. (This should help you remain motivated while you’re delving into a myriad of Greek symbols and matrix transformations!) The Kalman filter can help:
These few examples demonstrate just how useful the Kalman filter can be on a wide variety of problems. To limit the scope of discussion, this tutorial will focus on determining the true pose of a mobile robot given noisy control inputs and measurement data.
In examining the Kalman filter a bit more, we’ll discuss the following topics:
The current discussion will avoid the derivations of the associated, base equations in favor of pragmatic use of the filter itself. For a comprehensive derivation and discussion of the involved equations and mathematical roots, see Robert Stengel’s Optimal Control and Estimation.
Applicable Systems for Use of the Kalman filter
In Part I, it was discussed that a system must adhere to three constraints for Kalman filter applicability: 1) it must be describable by a linear model, 2) the noise within the model must be white, and 3) the noise within the model must be Gaussian. More precisely, the state must be estimable via:

x_t = A x_t−1 + B u_t + ε_t

The state estimation equation is broken down as follows: x_t is the state vector at time t; A is an (n×n) matrix relating the previous state to the current state; B is a matrix relating the control input u_t to the state; and ε_t is the process noise, assumed to be white and Gaussian.
In addition to adhering to the linear state estimation just discussed, in order to be Kalman-filter-compatible, the system being modeled must also have a measurement model estimable via:

z_t = H x_t + δ_t

The measurement equation is broken down as follows: z_t is the measurement vector at time t; H is an (m×n) matrix relating the state to the measurement; and δ_t is the measurement noise, likewise white and Gaussian.
In summary, if the system can be modeled by the process and measurement equations described above, then the Kalman filter may be used on the system to estimate state, if given control and measurement inputs. Let’s now look at the general Kalman filter algorithm, at a very high level, including specific inputs and outputs.
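To make these constraints concrete, here is a minimal sketch in Python with NumPy of a system the Kalman filter could be applied to: a linear process model and a linear measurement model, each corrupted by white Gaussian noise. (The particular matrices – a constant-velocity system with an acceleration input and a position-only sensor – are my own illustrative choices, not something from the discussion above.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical system: state = [position, velocity]
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])     # relates previous state to current state
B = np.array([[0.5],
              [1.0]])          # relates control input (acceleration) to state
C = np.array([[1.0, 0.0]])     # measurement matrix: we observe position only
R = np.diag([0.01, 0.01])      # process noise covariance
Q = np.array([[0.25]])         # measurement noise covariance

x = np.zeros((2, 1))           # true state
u = np.array([[0.1]])          # constant control input

for t in range(3):
    # Process model: x_t = A x_{t-1} + B u_t + white Gaussian process noise
    x = A @ x + B @ u + rng.multivariate_normal([0, 0], R).reshape(2, 1)
    # Measurement model: z_t = C x_t + white Gaussian measurement noise
    z = C @ x + rng.normal(0.0, np.sqrt(Q[0, 0]))
    print(t, x.ravel(), z.ravel())
```

Any system expressible in this shape – linear dynamics, linear measurements, Gaussian noise on both – is fair game for the filter.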
Kalman Filter Algorithm, Inputs and Outputs
The Kalman filter follows a surprisingly straightforward algorithm broken down into two phases. The first phase is the time update (or prediction), in which the previous state and control input are used to estimate the current state and estimate covariance. The second phase is the measurement update (or correction), in which the Kalman gain is calculated and the state estimate and covariance are improved upon using measurement data and the Kalman gain. Roughly, the algorithm is as follows:
During the first time iteration t0, the Kalman filter accepts as input the initial state and estimate covariance (the covariance may be zero if the initial state is known with 100% certainty) along with the control input u and measurement data z. On subsequent time iterations tn, the Kalman filter accepts as input the output of the previous run (the mean and covariance – discussed more below) along with the control input u and measurement data z from tn.
The output of the Kalman filter is an estimate of the state represented by a normal distribution having mean μ (the estimated state) and covariance Σ (the confidence, or more accurately, the noise, in that estimate). (As a reminder, the variance of a normal distribution is the standard deviation squared, σ².) Note that μ need not be limited to a scalar value; in fact, it’ll almost always be a vector. For example, the pose of a mobile robot may be a three-dimensional vector containing the location and orientation (x, y, θ)ᵀ. Accordingly, this vector would be the resulting mean value. Furthermore, with a three-dimensional state vector as the mean, the covariance Σ would be a (3×3) diagonal matrix having a variance for each corresponding element of the vector.
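As a concrete (and entirely hypothetical – the numbers are mine) illustration of such an output, a robot pose estimate and its diagonal covariance could be represented as:

```python
import numpy as np

# Estimated pose of a mobile robot: location (x, y) plus heading theta
mu = np.array([1.2, 3.4, 0.5])        # mean of the estimate

# Confidence in each component: a (3x3) diagonal covariance matrix,
# one variance per element of the pose vector
Sigma = np.diag([0.04, 0.04, 0.01])

print(mu.shape, Sigma.shape)  # (3,) (3, 3)
```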
Kalman Filter Algorithm Formalized
We’ve discussed the initial Kalman filter equations for process and measurement estimation; we’ve also discussed the overall algorithm for implementation, broken down into prediction and correction phases. What’s missing are the actual calculations for concretely carrying out the estimation process itself. The concrete calculations for implementing the Kalman filter algorithm are “easily” derived from the process and measurement equations by taking the partial derivatives of them and setting them to zero for minimizing error…and jumping around three times and standing on your head for π minutes. (My eyes quickly begin to glaze over when I start to follow derivations of this nature…but if you like this kind of stuff, Sebastian Thrun shows the complete derivation within Probabilistic Robotics; Robert Stengel takes it to 11 within Optimal Control and Estimation with more Greek symbols than you can shake a stick at.) But I digress…
To formalize, the Kalman filter algorithm accepts four inputs: the previous state estimate μ, its covariance Σ, the current control input u, and the current measurement z.
With the given inputs, the Kalman filter algorithm is implemented as follows (where μ̄ and Σ̄ denote the a priori prediction, and the μ and Σ on the right-hand sides are the previous iteration’s output):

1. μ̄ = A μ + B u
2. Σ̄ = A Σ Aᵀ + R
3. K = Σ̄ Cᵀ (C Σ̄ Cᵀ + Q)⁻¹
4. μ = μ̄ + K (z − C μ̄)
5. Σ = (I − K C) Σ̄
Line 1 should be comfortingly familiar; this is the calculation for estimating the current state given the previous state and control input. But what’s missing from the original process equation? Have you spotted it yet? I’ll give you a noisy hint. (No mean for the pun…thank you, thank you, I’ll be here all week.) That’s right, the noise parameter has been left off of the state estimation equation in line 1. Line 1 simply calculates the a priori state estimate, ignoring process noise.
Line 2 calculates the covariance of the current state estimate, taking process noise into consideration. Matrix A has already been discussed; it comes from the Kalman filter state estimate equation described earlier. R is a diagonal matrix representing the process noise covariance.
Line 3 calculates the Kalman gain which will be used to weight the effect of the measurement model when correcting the estimate. C is identical to the matrix H described earlier in the base Kalman filter measurement equation. As tricky as this line looks (and some of those matrix calculations can make your head hurt a bit), the only thing new is Q; this diagonal matrix is the measurement noise covariance. The resulting Kalman gain K is a matrix having dimensions (n×m) where n is the dimension of the state vector and m is the dimension of the measurement vector.
(As a side, take note that in different reading sources, the meaning of R and Q may be switched; Q would be process noise and R would be measurement noise and would have their locations in the equations swapped, accordingly. Just be cognizant of which is which within the source you’re reading from.)
Line 4 updates the state estimate taking into account the weighted measurement information. Note that the Kalman gain is multiplied by the difference between the actual measurement model and the predicted measurement model. What happens if they happen to be identical? …Jeopardy daily double sounds playing in the background… If the actual and predicted measurement models happen to be identical, then the estimated state will not be corrected at all since our sensors have verified that we’re exactly where we thought we were. I.e., don’t fix what ain’t broken. The result of line 4 is the a posteriori state estimate.
Finally, line 5 corrects the covariance, taking into account the Kalman gain used to correct the state estimate.
As output, the Kalman filter algorithm returns two values: the corrected state estimate μ and its covariance Σ.
With these outputs, it is now known, with some amount of error Σ, what the current state of the system is – or where our intrepid little robot is on the map.
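Pulling lines 1–5 together, a minimal sketch of one prediction/correction cycle might look like the following in Python with NumPy. (The matrix names follow this article’s notation; the scalar sanity-check values at the end are my own.)

```python
import numpy as np

def kalman_step(mu, Sigma, u, z, A, B, C, R, Q):
    """One Kalman filter iteration, following lines 1-5 above.

    R is the process noise covariance and Q the measurement noise
    covariance (recall that some sources swap the two symbols)."""
    # Line 1: a priori state estimate (process noise ignored)
    mu_bar = A @ mu + B @ u
    # Line 2: a priori covariance, inflated by the process noise R
    Sigma_bar = A @ Sigma @ A.T + R
    # Line 3: Kalman gain, an (n x m) matrix weighting the measurement
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)
    # Line 4: a posteriori estimate, corrected by the innovation z - C mu_bar
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    # Line 5: corrected covariance
    Sigma_new = (np.eye(mu.shape[0]) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new

# Scalar sanity check: predict from (mu=0, var=1), then correct with z=1
one = np.array([[1.0]])
mu, Sigma = kalman_step(np.array([[0.0]]), one, u=np.array([[0.0]]),
                        z=one, A=one, B=one, C=one,
                        R=np.array([[0.1]]), Q=one)
```

Note how the correction pulls the estimate partway toward the measurement (the gain never exceeds 1) and shrinks the covariance, exactly as the line-by-line discussion above describes.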
Limitations of the Kalman Filter
The Kalman filter is incredibly powerful and can be used in a surprising number of scenarios. The primary limitation of the Kalman filter is that it assumes a linear system. Many systems are non-linear (such as a mobile robot moving along a rotational trajectory) yet may still benefit from the Kalman filter. The applicable approach is to form a linear approximation of the non-linear system for use by the Kalman filter, e.g., via a first-order Taylor series expansion. Popular extensions to the Kalman filter supporting non-linear systems include the Extended Kalman filter and, even better, the Unscented Kalman filter. Specifically, chapter 7 of Sebastian Thrun’s Probabilistic Robotics goes into good detail describing how to apply both of these extensions in the context of mobile robotics.
Googling for “Kalman filter” will quickly show just how much more there is to this topic. But I certainly hope this two-part series has helped to clarify the overall algorithm with particular attention to describing the various elements of the calculations themselves.
Posted on 17:17, September 22nd, 2010 by Billy McCafferty
Dealing with the real world is rough. (With an opening like that, you can probably guess how my high school days went.) To be clear, we’re talking about robots having to deal with the real world (not me, luckily). It’s hard enough for our little silicon brethren to have limited sensory capabilities, but on top of that, the data that’s coming in is usually noisy. For example, sonar range data can be inconsistent, throwing off distance measurements by many centimeters. Even laser data isn’t perfect; if you watch the range data coming in from stationary laser sensors, you’ll notice the range data shake indecisively around the actual range. What’s one to do? Well, you could shell out a little more money for ultra-sensitive sensors, but even they have their limitations and are subject to error. To add insult to injury, our robot’s misjudgment isn’t limited to data coming in from sensors; it’s also prone to misjudging what it’s done from a control perspective. For example, tell your robot to move forward 1 meter and it’ll comply as best it can; but due to various surface textures (road vs. dirt vs. carpet vs. sand), the robot may report that it’s traveled 1 meter when in fact it’s gone a bit farther or not quite as far. What we’d like to do is separate the wheat from the chaff…or the useful data from the noise in this case. Likely the most widely used, researched, and proven filtering technique is the Kalman filter. (Read: this is an important concept to understand!)
In short, if not a bit curtly: the Kalman filter is a recursive data processing algorithm which processes all available measurements, regardless of their precision, to estimate the current value of variables of interest, such as a robot’s position and orientation. There is a plethora of literature concerning the Kalman filter, some useful and some otherwise; accordingly, to help you better understand the Kalman filter, I’ll guide you through a series of readings which I’ve found particularly helpful in understanding this technique and discuss key points from those references.
The first step on our journey down the Kalman filter rabbit hole is Peter Maybeck’s Stochastic Models, Estimation, and Control, Vol. 1, Chapter 1. There is simply no writing which gives a more tractable and approachable overview of the concepts behind the Kalman filter. If you’re following along at home (singing along with the bouncing dot), please take the time now to read Maybeck’s introduction and to understand what you’re reading…I’ll wait patiently for your return.
<insert you reading intently here>
So what did we just learn? Below are a few points to emphasize key ideas from Maybeck’s introduction…
Algorithmically, in our one-dimensional context, the Kalman filter takes the following steps to estimate the variable of interest: 1) predict the current value from the previous estimate and the control input; 2) grow the variance of that prediction by the process noise; 3) compute the gain from the relative confidence of the prediction versus the measurement; 4) correct the prediction toward the measurement, weighted by the gain; and 5) shrink the variance accordingly.
Isn’t that brilliant? The Kalman filter takes into account the previous state estimate, and the confidence of that estimate, to then determine the current estimate based on control input and measurement output, all the while tweaking itself – with the gain – along the way. And since it only needs to maintain the last estimate and the confidence of that estimate, it takes up very little memory to implement. While very approachable, Maybeck’s introduction to the Kalman filter is greatly simplified…for that very reason. For example, Maybeck’s introduction assumes movement along a single dimension (an x value) with one-dimensional control input (the velocity). In more realistic contexts, we store the position and orientation of a robot as a vector, and the control and measurement data as vectors as well. In the next post, we’ll examine what modifications need to be made to Maybeck’s simplification to apply the Kalman filter to more real-world scenarios. This is where we must get our old college textbooks out to realize that we should have paid more attention in our linear algebra class (or you can refresh yourself here).
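The one-dimensional cycle just described can be sketched in a few lines of Python. (The variable names are my own; process_var and meas_var stand in for the process and measurement noise variances.)

```python
def kalman_1d(x, var, u, z, process_var, meas_var, dt=1.0):
    """One scalar Kalman filter step for the 1D setting.

    x, var: previous position estimate and its variance
    u: commanded velocity; z: noisy position measurement."""
    # Predict: dead-reckon forward; uncertainty grows by the process noise
    x_pred = x + u * dt
    var_pred = var + process_var
    # Correct: the gain trades off prediction vs. measurement confidence
    gain = var_pred / (var_pred + meas_var)
    x_new = x_pred + gain * (z - x_pred)
    var_new = (1 - gain) * var_pred
    return x_new, var_new

# e.g., start at x=0 (variance 1), command 1 m/s for 1 s, measure 1.2 m
x, var = kalman_1d(0.0, 1.0, u=1.0, z=1.2, process_var=0.1, meas_var=0.5)
```

Only the latest (x, var) pair is carried forward between calls, which is exactly why the filter’s memory footprint is so small.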
Until next time!
Maybeck, P. 1979. Stochastic Models, Estimation, and Control, Vol. 1.
Stengel, R. 1994. Optimal Control and Estimation.
Thrun, S., Burgard, W., Fox, D. 2005. Probabilistic Robotics.
© 2011-2013 Codai, Inc. All Rights Reserved