## Introduction to the Kalman Filter, Part II

Posted on 23:40, October 22nd, 2010 by Billy McCafferty

As discussed in Part I, the Kalman filter is an optimal prediction/correction estimation technique which minimizes estimated error and can be used on a wide array of problems. While Part I focused on a one-dimensional problem space to more easily convey the underlying concepts of the Kalman filter, Part II will now expand the discussion to bring the technique to higher dimensions (e.g., 2D and 3D) while still being constrained to linear problems. What’s so cool about the Kalman filter, you ask? Let’s highlight a few areas where the Kalman filter may provide value. (This should help you remain motivated while you’re delving into a myriad of Greek symbols and matrix transformations!) The Kalman filter can help:

- Determine the true pose of a mobile robot given its control input and measurements;
- Provide sensor fusion capabilities by further correcting an estimate with each subsequent measurement input;
- Track an object within a video (e.g., face tracking); and
- Determine the attitude of a satellite using measurement of star locations.
These few examples demonstrate just how useful the Kalman filter can be on a wide variety of problems. To limit the scope of discussion, the context of this tutorial will be determining the true pose of a mobile robot given noisy control inputs and measurement data. In examining the Kalman filter a bit more, we’ll discuss the following topics:

- Applicable systems for use of the Kalman filter
- Kalman filter algorithm, inputs and outputs
- Kalman filter algorithm formalized
- Limitations of the Kalman filter
- Direction for further study
The current discussion will avoid the derivations of the associated base equations in favor of pragmatic use of the filter itself. For a comprehensive derivation and discussion of the involved equations and their mathematical roots, see Robert Stengel’s Optimal Control and Estimation.
In Part I, it was discussed that a system must adhere to three constraints for Kalman filter applicability: 1) it must be describable by a linear model, 2) the noise within the model must be white, and 3) the noise within the model must be Gaussian. More precisely, the state must be estimable via:

**x**_{k} = **Ax**_{k-1} + **Bu**_{k} + **w**_{k-1}

The state estimation equation is broken down as follows:

- **x**_{k} is the estimate of the state after taking into consideration the previous state, the control input, and process noise.
- **x**_{k-1} is the state estimate from the previous time iteration (k-1), where k is the current time iteration. The state will almost surely be a vector; e.g., (*x* *y* θ)^{T} for a mobile robot on a 2D plane having a location (*x*, *y*) and orientation θ.
- **u**_{k} is the control input for the current time iteration. While being sensor data in the strict sense, odometry measurements may be considered control input, and would be a good fit for our mobile robot example. Accordingly, control input could be represented as a three-dimensional vector containing an initial rotation (in radians), a translation distance (distance travelled in a straight line), and a second rotation: (δ_{rot1} δ_{trans} δ_{rot2})^{T}.
- **w**_{k-1} is the process noise, or the amount of error inherent in the state estimation when taking into consideration noisy control input. (The process noise is Gaussian, having mean μ of 0 and covariance Σ.) The process noise is a vector having the same dimension *n* as the state vector.
- **A** is a matrix which transforms the previous state into the current state without regard to any control input. For example, if the system being modeled were a satellite orbiting the earth, **A** would be a matrix which would modify the state to reflect the orbital distance traveled between time iterations k and (k-1). In more earth-bound scenarios, **A** would be an identity matrix. Regardless, **A** is a square matrix of size (*n* x *n*) where *n* is the dimension of the state vector **x**_{k}.
- **B** is a matrix which transforms the control input to be compatible for summing with the previous state to reflect the current state. Within our mobile robot context, the control input is (δ_{rot1} δ_{trans} δ_{rot2})^{T}, which cannot simply be added to the previous state to determine the current state; accordingly, **B** must transform **u**_{k} into a vector reflecting the relative state change induced by the control input. For example, if the control input were to move the robot 55 mm on the *x* axis, 120 mm on the *y* axis, and rotate it 0.8 rad, then the result of **Bu**_{k} would be (55mm 120mm 0.8rad)^{T}, which could then easily be summed with the previous state to get the current state. **B** is a matrix of size (*n* x *l*) where *n* is the dimension of the state vector and *l* is the dimension of the control vector.
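To make the state propagation concrete, here is a minimal sketch in Python (using NumPy) for the mobile robot example. All numeric values are illustrative, and **Bu**_{k} is assumed to have already been computed as the per-axis displacement induced by the odometry reading:

```python
import numpy as np

# Sketch of the linear state propagation x_k = A*x_{k-1} + B*u_k for the
# mobile-robot example. All values are illustrative. A is the identity
# (an "earth-bound" scenario), and Bu has already been computed as the
# per-axis change (55 mm, 120 mm, 0.8 rad) induced by the control input.

x_prev = np.array([1000.0, 2000.0, 0.5])  # previous state: x (mm), y (mm), theta (rad)
A = np.eye(3)                             # no state change without control input
Bu = np.array([55.0, 120.0, 0.8])         # control input mapped into state space

# The process noise w_{k-1} has zero mean, so it drops out of the mean estimate.
x_pred = A @ x_prev + Bu                  # predicted state: (1055, 2120, 1.3)
```

Note that the noise term contributes nothing to the mean here; its covariance will instead show up when we track our confidence in the estimate.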
In addition to adhering to the linear state estimation just discussed, in order to be Kalman-filter-compatible, the system being modeled must also have a measurement model estimable via:

**z**_{k} = **Hx**_{k} + **v**_{k}

The measurement equation is broken down as follows:

- **z**_{k} is the predicted measurement model after taking into account the estimated state and known measurement noise. Don’t take that at face value; ask yourself why it’s important to be able to predict what the measurement model will be for a given state. Spoiler alert… If we’re able to predict what the measurement model should be for a given state, then we can compare the *predicted* measurement model against the *actual* measurement data returned by our sensors. The difference between the two will be used for improving the state estimation within the Kalman filter algorithm. For our little robot, the measurement model may be a vector of laser scans (**s**_{1} **s**_{2} … **s**_{n})^{T}, with each scan having an orientation θ and range *r*: (θ *r*)^{T}.
- **x**_{k} is the result of the state estimation equation discussed above.
- **v**_{k} is the measurement noise, or the amount of error inherent in the measurement estimation when taking into consideration noisy sensor input. (The measurement noise is Gaussian, having mean μ of 0 and covariance Σ.) The measurement noise is a vector having the same dimension *m* as the resulting measurement vector.
- **H** is a matrix which transforms the state **x**_{k} into the predicted measurement model. In other words, if given the state **x**_{k}, **Hx**_{k} will calculate what the measurement model should look like if there were no uncertainty involved. **H** is of size (*m* x *n*) where *m* is the dimension of the measurement vector **z**_{k} and *n* is the dimension of the state vector. And to answer your question, yes, **H** can be quite onerous to fabricate. In fact, **H** may actually be implemented as a function, accepting a state and returning a measurement model based on the known map, estimated location, and orientation; the Extended Kalman filter or another extension would need to be leveraged if we digressed in this way from our “simple” linear model.
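As a toy illustration of the measurement equation, suppose (hypothetically) that our sensor reports only the robot’s (x, y) position and not its heading; **H** then simply selects those two components of the state:

```python
import numpy as np

# Hypothetical sketch of the measurement prediction z_k = H*x_k. The
# sensor here observes only position (x, y), not heading, so H selects
# the first two components of the 3-D state (m = 2, n = 3).

x_k = np.array([1055.0, 2120.0, 1.3])  # estimated state: x (mm), y (mm), theta (rad)
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])        # (m x n) measurement matrix

z_pred = H @ x_k                       # predicted measurement; noise v_k has zero mean
z_actual = np.array([1050.0, 2128.0])  # what the (noisy) sensor actually reported
innovation = z_actual - z_pred         # the difference the filter will use to correct
```

That final difference between actual and predicted measurements is exactly what the correction phase of the algorithm will weight and fold back into the state estimate.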
In summary, if the system can be modeled by the process and measurement equations described above, then the Kalman filter may be used on the system to estimate state, given control and measurement inputs. Let’s now look at the general Kalman filter algorithm, at a very high level, including specific inputs and outputs.
The Kalman filter follows a surprisingly straight-forward algorithm broken down into two phases. The first phase is the prediction phase:

- Estimate the predicted state based on process (control) input. This estimate is the *a priori* estimate.
- Calculate the state estimate covariance (our confidence in the state estimate).

The second phase is the correction phase:

- Calculate the Kalman gain which will be used for weighting the amount of correction the measurement data will have on the state estimate.
- Refine the estimated state using measurement input. This refined estimate is the *a posteriori* estimate.
- Refine the state estimate covariance (our confidence in the state estimate after taking measurement data into account).
During the first time iteration, the filter must be seeded with an initial state estimate and covariance to serve as the “previous” values. The output of the Kalman filter is an estimate of the state represented by a normal distribution having mean μ and covariance Σ.
We’ve discussed the initial Kalman filter equations for process and measurement estimation; we’ve also discussed the overall algorithm for implementation, broken down into prediction and correction phases. What’s missing are the actual calculations for concretely carrying out the estimation process itself. The concrete calculations for implementing the Kalman filter algorithm are “easily” derived from the process and measurement equations by taking their partial derivatives and setting them to zero to minimize error…and jumping around three times and standing on your head for π minutes. (My eyes quickly begin to glaze over when I start to follow derivations of this nature…but if you like this kind of stuff, Sebastian Thrun shows the complete derivation within Probabilistic Robotics; Robert Stengel takes it to 11 within Optimal Control and Estimation with more Greek symbols than you can shake a stick at.) But I digress… To formalize, the Kalman filter algorithm accepts four inputs:

- *μ*_{t-1} – the mean state vector
- **Σ**_{t-1} – the covariance of the mean (the error in the state estimate)
- *u*_{t} – the process (control) input
- *z*_{t} – the measurement data
With the given inputs, the Kalman filter algorithm is implemented as follows:

1. *μ̄*_{t} = **A***μ*_{t-1} + **B***u*_{t}
2. **Σ̄**_{t} = **AΣ**_{t-1}**A**^{T} + **R**
3. **K**_{t} = **Σ̄**_{t}**H**^{T}(**HΣ̄**_{t}**H**^{T} + **Q**)^{-1}
4. *μ*_{t} = *μ̄*_{t} + **K**_{t}(*z*_{t} – **H***μ̄*_{t})
5. **Σ**_{t} = (**I** – **K**_{t}**H**)**Σ̄**_{t}

Line 1 should be comfortingly familiar; this is the calculation for estimating the current state given the previous state and control input. But what’s missing from the original process equation? Have you spotted it yet? I’ll give you a noisy hint. (No mean for the pun…thank you, thank you, I’ll be here all week.) That’s right, the noise parameter has been left off of the state estimation equation in line 1; since the process noise has a mean of zero, line 1 simply calculates the mean of the *a priori* state estimate.

Line 2 calculates the covariance of the current state estimate, taking process noise into consideration; matrix **R** is the covariance of the process noise.

Line 3 calculates the Kalman gain which will be used to weight the effect of the measurement model when correcting the estimate; matrix **Q** is the covariance of the measurement noise. (As an aside, take note that in different reading sources, the meanings of **R** and **Q** are sometimes swapped.)

Line 4 updates the state estimate taking into account the weighted measurement information. Note that the Kalman gain is multiplied by the difference between the actual measurement data and the predicted measurement model.

Finally, line 5 corrects the covariance, taking into account the Kalman gain used to correct the state estimate. As output, the Kalman filter algorithm returns two values:

- *μ*_{t} – the current mean state vector.
- **Σ**_{t} – the current covariance of the mean (the error in the state estimate), a square matrix having dimensions (*n* x *n*) where *n* is the dimension of the state vector.
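The prediction and correction calculations translate almost directly into code. Below is a minimal sketch of one full filter iteration in Python with NumPy; the function and variable names are my own, and R and Q follow Thrun’s convention (process and measurement noise covariance, respectively):

```python
import numpy as np

def kalman_step(mu, Sigma, u, z, A, B, H, R, Q):
    """One prediction/correction iteration of the linear Kalman filter.
    R is the process noise covariance; Q is the measurement noise
    covariance (note that some sources swap these two symbols)."""
    # Prediction phase: the a priori estimate and its covariance.
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R
    # The Kalman gain, weighting how strongly the measurement corrects us.
    K = Sigma_bar @ H.T @ np.linalg.inv(H @ Sigma_bar @ H.T + Q)
    # Correction phase: the a posteriori estimate and its covariance.
    mu_new = mu_bar + K @ (z - H @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma_bar
    return mu_new, Sigma_new
```

Each call consumes the previous μ and Σ and returns the corrected pair, which is all the filter needs to remember between iterations.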
With these outputs, it is now known, with some measurable degree of confidence, what the current state of the system is; this estimate and its covariance then feed directly into the next iteration of the filter.
The Kalman filter is incredibly powerful and can be used in a surprising number of scenarios. The primary limitation of the Kalman filter is that it assumes use within a linear system. Many systems are non-linear (such as a mobile robot moving with a rotational trajectory) yet may still benefit from the Kalman filter. The applicable approach is to form a linear estimate of the non-linear system for use by the Kalman filter, similar in effect to a Taylor series expansion. Popular extensions to the Kalman filter to support non-linear systems include the Extended Kalman filter and, even better, the Unscented Kalman filter. Specifically, chapter 7 of Sebastian Thrun’s Probabilistic Robotics goes into good detail on describing how to apply both of these extensions to the context of mobile robotics. Googling for “Kalman filter” will quickly show just how much more there is to this topic. But I certainly hope this two-part series has helped to clarify the overall algorithm with particular attention to describing the various elements of the calculations themselves. Enjoy!
## Introduction to the Kalman Filter, Part I

Posted on 17:17, September 22nd, 2010 by Billy McCafferty

Dealing with the real world is rough. (With an opening like that, you can probably guess how my high school days went.) To be clear here, we’re talking about robots having to deal with the real world (not me, luckily). It’s hard enough for our little silicon brethren to have limited sensory capabilities, but on top of that, the data that’s coming in is usually noisy. For example, sonar range data can be inconsistent, throwing off distance measurements by many centimeters. Even laser data isn’t perfect; if you watch the range data coming in from stationary laser sensors, you’ll notice the range data shake indecisively around the actual range. What’s one to do? Well, you could shell out a little more money for ultra-sensitive sensors, but even they have their limitations and are subject to error. To add insult to injury, our robot’s misjudgment isn’t limited to data coming in from sensors; it’s also prone to misjudging what it’s done from a control perspective. For example, tell your robot to move forward 1 meter and it’ll comply as best it can; but due to various surface textures (road vs. dirt vs. carpet vs. sand), the robot may report that it’s traveled 1 meter when in fact it’s gone a bit further or a bit less. What we’d like to do is to filter out the chaff from the wheat…or the noise from the useful data in this case. Likely the most widely used, researched, and proven filtering technique is the Kalman filter. (Read: this is an important concept to understand!) In short, if not a bit curtly, the Kalman filter is a recursive data processing algorithm which processes all available measurements, regardless of their precision, to estimate the current value of variables of interest, such as a robot’s position and orientation.
There is a plethora of literature written concerning the Kalman filter, some useful and some otherwise; accordingly, to help you better understand the Kalman filter, I’ll guide you through a series of readings which I’ve found pivotally assistive in understanding this technique and discuss key points from those references. The first step on our journey into the Kalman filter rabbit hole is Peter Maybeck’s Stochastic Models, Estimation, and Control, Vol. 1, Chapter 1 [1]. There is simply no writing which gives a more tractable and approachable overview of the concepts behind the Kalman filter. If you’re following along at home (singing along with the bouncing dot), please take the time to now read Maybeck’s introduction and take the time to understand what you’re reading…I’ll wait patiently for your return. <insert you reading intently here> So what did we just learn? Below are a few points to emphasize key ideas from Maybeck’s introduction…

- A Kalman filter can only be applied to a system exhibiting three attributes: it must be a linear model, the noise within the model must be white, and the noise within the model must be Gaussian (a normal bell-shaped curve having a mean and standard deviation). (Interestingly, if we relax the first condition to accommodate non-linear models, we must use a technique known as an Extended Kalman filter, which will be the subject of a successive post; this is very important, as once a robot starts taking a curved path, it becomes a non-linear problem.)
- The Kalman filter is built upon the ideas encapsulated by Bayes’ filter. (For a great introduction to the Bayes filter, see chapter 2.4 of Sebastian Thrun’s Probabilistic Robotics [2].) Indeed, the Kalman filter is an optimal incarnation of Bayes’ filter: its result is both the mean and the mode of the Gaussian result, it is the maximum likelihood estimate, and it is the linear estimate whose variance is less than that of any other linear unbiased estimate; in other words, from whichever angle you look at it, the result is just about as good as it gets.
- There are two kinds of noise which need to be considered: process (or control) noise, which results from imprecise movement and manipulation control, and measurement noise, which results from imprecise sensory data, such as sonar and laser data. When calculating an estimate, the process input and process noise are considered first, with the measurement and measurement noise applied next.
- The “white noise” of the system being analyzed need not be constant; a “shaping filter” can be used to describe how the noise changes over time assuming that the amplitude of the noise can be described at any point in time as a Gaussian distribution.
- In reading Maybeck’s introduction, you may have been surprised to see that the control input plays no part in the state estimate, as described in equation 1-1 of the introduction. If you skip back to equation 1-16, you’ll be reminded why this is the case. As the variance of the process/control noise approaches infinity and the variance of the process prediction approaches infinity, the gain approaches 1. Accordingly, no “weight” is put on the control input while all trust is put on the measurement result itself. Now, going back to equation 1-1, since control isn’t being considered at all, it is effectively removed from the equation, just as if it had an infinite error variance.
Algorithmically, in our one-dimensional context, the Kalman filter takes the following steps to estimate the variable of interest:

- Calculate the predicted state based on process (control) input (1-11),
- Calculate the error variance of the predicted state (1-12),
- Calculate the gain which will be used to “correct” the predicted state when the measurement data is applied (1-15),
- Re-calculate the predicted state, taking into account measurement data and the gain (1-13), and
- Re-calculate the error variance taking into account the improvement that the measurement data has added to the prediction (1-14).
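In the one-dimensional setting, those five steps reduce to simple scalar arithmetic. Here is a small Python sketch (variable names are my own; the parenthesized equation numbers refer to Maybeck’s introduction):

```python
def kalman_1d(x, p, u, z, q, r):
    """One iteration of a scalar Kalman filter.
    x, p: previous state estimate and its error variance
    u: control input (e.g., commanded movement)
    z: measurement; q, r: process and measurement noise variances."""
    x_pred = x + u                     # (1-11) predict state from control input
    p_pred = p + q                     # (1-12) error variance of the prediction
    k = p_pred / (p_pred + r)          # (1-15) gain used to weight the measurement
    x_new = x_pred + k * (z - x_pred)  # (1-13) correct the state with measurement
    p_new = (1 - k) * p_pred           # (1-14) corrected error variance
    return x_new, p_new
```

Note that only x and p need to be carried forward to the next iteration, which is precisely why the filter’s memory footprint is so small.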
Isn’t that brilliant? The Kalman filter takes into account the previous state estimation, and the confidence of that estimation, to then determine the current estimate based on control input and measurement output, all the while tweaking itself – with the gain – along the way. And since it only needs to maintain the last estimate and the confidence of that estimate, it takes up very little memory to implement. While very approachable, Maybeck’s introduction to the Kalman filter is greatly simplified…for that very reason. For example, Maybeck’s introduction assumes movement on a one-dimensional plane (an x value) with one-dimensional control input (the velocity). In more realistic contexts, we store the position and orientation of a robot as a vector, and the control and measurement data as vectors as well. In the next post, we’ll examine what modifications need to be made to Maybeck’s simplification to apply the Kalman filter to more real-world scenarios. This is where we must get our old college textbooks out and realize that we should have paid more attention in our matrix algebra class. Until next time!

References
[1] Maybeck, P. 1979. Stochastic Models, Estimation, and Control, Vol. 1.
[2] Thrun, S., Burgard, W., Fox, D. 2005. Probabilistic Robotics.
