Kalman Filters

Colin Shaw

04.27.2017

- Introduction

- Welcome to Computer Science Club
- Thank you to RevUnit for sponsoring

- What are Kalman filters used for?

- General time series analysis

- Econometrics

- Motion planning

- Guidance
- Navigation
- Apollo lander and nuclear missiles

- Sensor fusion

- Lidar
- Radar

- How Kalman filters fit in the larger world

- Measurement

- Measuring the world around you
- Coordinates relative to you
- Examples

- Kalman filters

- Plain (linear) Kalman filters
- Extended Kalman filters
- Unscented Kalman filters

- Double Exponential Smoothing
- Recursive Total Least Squares

- Localization

- Measuring your position in the world
- Your position in larger map coordinates
- Examples

- Particle filters
- Histogram filters

- Both measurement and localization

- Simultaneous Localization and Mapping (SLAM)

- Troubled history of the introduction of the Kalman filter

- Rudolf Emil Kálmán’s 1960 paper

- Hidden model was looked at with great skepticism
- Not met with open arms in the community
- Electrical engineering journals would not publish it (it ended up in a mechanical engineering journal)

- Unscented Kalman filters

- Met with skepticism
- Again hard to publish

- What type of process is a Kalman filter?

- Linear quadratic estimator

- Linear system
- Mean squared error is minimized by minimizing a quadratic cost

- Bayesian estimator

- Prior
- Distribution
- Posterior
- Conjugate priors

- Markov process

- Future probabilities depend only on the most recent state, not the full history

- How are we presenting Kalman filters here?

- Developing intuition about a basic motion model
- Look at what we need to use the motion model
- Discuss noisy process and measurements
- Jump to talking about probability distributions
- Show the full set of Kalman filter equations
- Walk through a Kalman filter code example
- Discuss more complex motion models and sensor fusion
- Discuss deviation from a linear model

- Intuition regarding the motion model

- Draw measurements at time t = 0, 1, 2
- Infer measurement at time t = 3
- How do we know where the next point is?

- Mental model is more than the measurements
- It has implied velocity

- How do we correct for errors?

- Error is the actual minus predicted (residual)
- Apply scaled residual to update model
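The correction step above can be sketched in a few lines. This is not yet a Kalman filter, just the intuition: the fixed blending gain of 0.5 and the sample measurements are invented for illustration.

```python
# Minimal sketch of residual-based correction: predict the next position
# from an assumed velocity, then nudge the estimate by a scaled residual.

def update_estimate(position, velocity, measurement, dt=1.0, gain=0.5):
    """One predict-and-correct step with a fixed blending gain."""
    predicted = position + velocity * dt      # predict from the motion model
    residual = measurement - predicted        # error: actual minus predicted
    return predicted + gain * residual        # apply scaled residual

# Measurements drifting at roughly one unit per step (made-up values)
pos = 0.0
for z in [1.1, 1.9, 3.2, 4.0]:
    pos = update_estimate(pos, 1.0, z)
```

The Kalman filter's key improvement over this sketch is that the gain is not fixed; it is recomputed at every step from the relative uncertainties of the prediction and the measurement.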

- More formality to the motion model

- Vector of position x and velocity v (or ẋ)

- State updates occur with a state transition function

- We can only observe measurables, so there is a measurement function H

- The errors can be assumed to be acceleration noise

- There is a force causing the change in velocity
- There is an acceleration associated with the force
- This noise is distributed N(0, σₐ²)
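For a constant-velocity model, the matrices above take a standard form. A brief sketch (the dt and σₐ values are illustrative, not from the talk):

```python
import numpy as np

dt = 0.1          # time step (illustrative)
sigma_a = 0.5     # assumed acceleration-noise standard deviation

# State transition F advances the state [position, velocity] by one step
F = np.array([[1.0, dt],
              [0.0, 1.0]])

# Measurement function H: we can only observe position
H = np.array([[1.0, 0.0]])

# Process noise from acceleration noise N(0, sigma_a^2), which enters
# position as a*dt^2/2 and velocity as a*dt
G = np.array([[0.5 * dt**2],
              [dt]])
Q = G @ G.T * sigma_a**2
```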

- Adding the notion of noise

- What if we had noisy measurements only?

- For each t = 0, 1, 2, … we assume position error is distributed N(0, σₓ²)
- What does this give us?

- The standard deviation at these positions
- In effect, nothing

- We need more information to be able to refine estimates of uncertainty

- Think about the Bayesian estimation notion of the problem
- We can use a prior and a new distribution to compute a posterior

- Brief probability distribution review

- Gaussian.ipynb
- What is a normally distributed random variable?

- Mean
- Standard deviation

- Adding normally distributed random variables

- Analytical result

- μ = μ₁ + μ₂
- σ² = σ₁² + σ₂²

- Pointwise addition
- Convolution (ConvolutionAddition.ipynb)
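The analytical result is easy to check empirically by sampling. A quick sketch (the means and standard deviations are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(0)

mu1, sigma1 = 1.0, 2.0
mu2, sigma2 = 3.0, 1.0

# Sum of independent normals: mu = mu1 + mu2, sigma^2 = sigma1^2 + sigma2^2
samples = rng.normal(mu1, sigma1, 100_000) + rng.normal(mu2, sigma2, 100_000)

print(samples.mean())   # close to mu1 + mu2 = 4.0
print(samples.var())    # close to sigma1^2 + sigma2^2 = 5.0
```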

- Multiplying Gaussians

- Analytical result

- μ = (μ₁σ₂² + μ₂σ₁²) / (σ₁² + σ₂²) (normalized weighted average)
- σ² = σ₁²σ₂² / (σ₁² + σ₂²) (parallel sum)

- PDF demonstration
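The product formulas above are the heart of the measurement update, and they fit in a few lines. A sketch with arbitrary example values:

```python
def multiply_gaussians(mu1, var1, mu2, var2):
    """Product of two Gaussian densities, renormalized to a Gaussian."""
    mu = (mu1 * var2 + mu2 * var1) / (var1 + var2)   # precision-weighted mean
    var = (var1 * var2) / (var1 + var2)              # parallel sum of variances
    return mu, var

# The result lies between the two means, pulled toward the more certain
# one, and its variance is smaller than either input variance.
mu, var = multiply_gaussians(0.0, 4.0, 10.0, 1.0)
# mu = (0*1 + 10*4) / 5 = 8.0, var = 4/5 = 0.8
```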

- Kalman filter model

- Prediction

- Predicted State Estimate

- Predicted Estimate Covariance

- Measurement

- Innovation Residual

- Innovation Covariance

- Optimal Kalman Gain

- Updated State Estimate

- Updated Covariance Estimate
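The named equations above translate directly into NumPy, one line each. A sketch of the two steps (matrix names follow the usual convention: F state transition, Q process noise, H measurement function, R measurement noise):

```python
import numpy as np

def predict(x, P, F, Q):
    """Prediction step."""
    x_pred = F @ x                    # predicted state estimate
    P_pred = F @ P @ F.T + Q          # predicted estimate covariance
    return x_pred, P_pred

def update(x_pred, P_pred, z, H, R):
    """Measurement step."""
    y = z - H @ x_pred                              # innovation residual
    S = H @ P_pred @ H.T + R                        # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)             # optimal Kalman gain
    x_new = x_pred + K @ y                          # updated state estimate
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred  # updated covariance estimate
    return x_new, P_new
```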

- Intuition about what the Kalman filter is doing

- PositionVelocity.ipynb
- Steps

- Initial position estimate (poor velocity knowledge)
- Project to arbitrary velocity distribution
- New measurement (decent position, poor velocity knowledge)
- Posterior distribution now has much better position and velocity
- Repeat

- Basic idea

- Additive noise (both prediction and measurement)
- Multiplicative refinement in the Kalman gain calculation

- Measuring error

- Root Mean Squared Error (RMSE)

- How to compute

- Is not normalized
- Depends on knowing ground truth

- Normalized Innovation Squared (NIS)

- ChiSquared.ipynb
- How to compute

- Is normalized
- Does not depend on ground truth
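Both error measures are one-liners. A sketch (y is the innovation residual and S the innovation covariance from the filter equations):

```python
import numpy as np

def rmse(estimates, ground_truth):
    """Root Mean Squared Error: requires knowing ground truth."""
    err = np.asarray(estimates) - np.asarray(ground_truth)
    return float(np.sqrt((err ** 2).mean()))

def nis(y, S):
    """Normalized Innovation Squared: y^T S^-1 y.

    For a consistent filter this follows a chi-squared distribution with
    dim(y) degrees of freedom, so no ground truth is needed.
    """
    y = np.asarray(y).reshape(-1)
    return float(y @ np.linalg.inv(S) @ y)
```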

- Kalman filter code example

- KalmanFilter.ipynb
- NumPy for matrix arithmetic
- Steps

- Initialize state and covariance
- Compute state prediction
- Update with measurement
- Repeat

- Error estimation
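The steps above can be sketched as a self-contained loop (the matrices and the five measurement values are illustrative stand-ins, not the notebook's actual data):

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
H = np.array([[1.0, 0.0]])              # observe position only
Q = np.eye(2) * 1e-3                    # small process noise
R = np.array([[1.0]])                   # measurement noise variance

# Initialize state and covariance (large P = little prior knowledge)
x = np.array([[0.0], [0.0]])
P = np.eye(2) * 500.0

for z in [1.0, 2.1, 2.9, 4.2, 5.0]:     # noisy positions, ~1 unit per step
    # Compute state prediction
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement
    y = np.array([[z]]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

# After a few steps the filter infers a velocity near 1 even though
# it only ever observes positions.
```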

- More complex motion models

- Sensor fusion

- Merge two (or more) types of measurements
- Get better results than any provide alone

- Example

- Radar

- Radius (inaccurate)
- Radial velocity (accurate)
- Angle (inaccurate)

- Lidar

- Radius (accurate)
- Angle (accurate)

- Measurement and state transition functions in general could be nonlinear
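The radar case illustrates the nonlinearity: the state is Cartesian but the sensor reports polar quantities. A sketch of such a measurement function (the state layout [px, py, vx, vy] is a common convention, assumed here for illustration):

```python
import numpy as np

def radar_measurement(x):
    """Nonlinear measurement function h(x): map a Cartesian state
    [px, py, vx, vy] to radar's polar measurement [rho, phi, rho_dot]."""
    px, py, vx, vy = x
    rho = np.hypot(px, py)               # radius
    phi = np.arctan2(py, px)             # bearing angle
    rho_dot = (px * vx + py * vy) / rho  # radial velocity
    return np.array([rho, phi, rho_dot])
```

Because h is nonlinear, it cannot be written as a constant matrix H, which is exactly what pushes us beyond the plain linear Kalman filter.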

- Deviation from the linear Kalman filter model

- GaussianTransformation.ipynb
- Extended Kalman filters

- Linearize the nonlinear mappings
- Jacobian evaluation instead of matrix-vector multiplication
- Can be expensive to compute
- Not so good results in difficult cases

- Very nonlinear functions
- Non-normal distributions
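The linearization the EKF relies on is just a Jacobian evaluated at the current state estimate. A sketch using finite differences (analytic Jacobians are usual in practice; this numerical version is for illustration):

```python
import numpy as np

def numerical_jacobian(h, x, eps=1e-6):
    """Finite-difference Jacobian of h at x: the local linear
    approximation the EKF uses in place of a constant matrix H."""
    x = np.asarray(x, dtype=float)
    h0 = np.asarray(h(x))
    J = np.zeros((h0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (np.asarray(h(x + dx)) - h0) / eps
    return J
```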

- Unscented Kalman filters

- Sample sigma points
- Map sigma points
- Estimates the mapped distribution without needing the function’s derivatives
- Relatively inexpensive to compute
- More stable results in most cases
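The sigma-point machinery fits in a few lines. A sketch of the simple symmetric scheme (the scaled variants used in practice add tuning parameters; κ here is the basic spread parameter):

```python
import numpy as np

def sigma_points(mu, P, kappa=0.0):
    """Generate 2n+1 symmetric sigma points for mean mu, covariance P."""
    n = mu.size
    S = np.linalg.cholesky((n + kappa) * P)  # matrix square root of (n+k)P
    pts = [mu]
    for i in range(n):
        pts.append(mu + S[:, i])
        pts.append(mu - S[:, i])
    return np.array(pts)

def unscented_transform(pts, kappa=0.0):
    """Recover mean and covariance from (mapped) sigma points.
    kappa must match the value used to generate the points."""
    n = pts.shape[1]
    w = np.full(len(pts), 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    mu = w @ pts
    d = pts - mu
    P = (w[:, None] * d).T @ d
    return mu, P
```

Passing the points through the identity map recovers the original mean and covariance exactly; passing them through a nonlinear function gives the UKF's estimate of the transformed distribution.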