
Autonomous Mobile Manipulation

State Estimation: Bayesian Estimation – Kalman Filter

C. Papachristos

Robotic Workers (RoboWork) Lab

University of Nevada, Reno

CS-791


Probabilistic Robotics

Uncertainty defines the State Estimation Process

  • Uncertain Sensor observations
  • Uncertain robot Models
  • Uncertain Prior knowledge

Notation:

  • State: x_t
  • Measurement: z_t
  • Input: u_t

  • Probabilistic sensor model: p(z_t | x_t)
  • Probabilistic motion model: p(x_t | x_{t−1}, u_t)
  • Fusion of multiple sensors: p(x_t | z_t^1, …, z_t^n)
  • Sensor & model fusion over time (the belief): bel(x_t) = p(x_t | z_{1:t}, u_{1:t})

CS791 C. Papachristos



A probability distribution is a function (or table) over the values of a Random Variable – it integrates (sums) to 1. Evaluating it gives specific probability values:

  • JOINT probability – pick the 10 of Diamonds (card is a 10 AND card is a Diamond):
      P(10, Diamond) = 1/52
  • UNCONDITIONED probability – pick a 10, pick a Diamond, …:
      P(10) = 4/52,  P(Diamond) = 13/52
  • Probability "DISTRIBUTION" of card numbers and suits: the table of all such values

Conditional Probability (or Likelihood):

  GIVEN that we picked a Diamond, the probability of it being a 10:
      P(10 | Diamond) = 1/13

  Probability of being the 10 of Diamonds = Probability of being a 10 GIVEN that it is a Diamond × Probability of being a Diamond:
      P(10, Diamond) = P(10 | Diamond) · P(Diamond) = (1/13) · (13/52) = 1/52
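The chain-rule computation above can be checked with exact rational arithmetic:

```python
from fractions import Fraction

p_diamond = Fraction(13, 52)           # 13 Diamonds in a 52-card deck
p_ten_given_diamond = Fraction(1, 13)  # one 10 among the 13 Diamonds

# Chain rule: P(10, Diamond) = P(10 | Diamond) * P(Diamond)
p_ten_and_diamond = p_ten_given_diamond * p_diamond
print(p_ten_and_diamond)  # 1/52
```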

Bayes' Rule:

  p(x | z) = p(z | x) · p(x) / p(z)

  • p(x | z): probability of State given Observation (posterior)
  • p(z | x): probability of Observation given State (likelihood)
  • p(x): marginal probability of State (prior)
  • p(z): marginal probability of Observation (evidence)

Card-deck illustration – every (number, suit) combination is equally likely:

        D      H      S      C
  1    1/52   1/52   1/52   1/52
  2    1/52   1/52   1/52   1/52
  3    1/52   1/52   1/52   1/52
  …

Applying Bayes' rule to a sensor reading:

  p(x | z) = p(z | x) · p(x) / p(z),   with evidence p(z) = Σ_x p(z | x) · p(x)

  • p(x | z): probability of State given Observation
  • p(x): marginal probability of State
  • p(z): marginal probability of Observation

Example: a 3-Level Tactile Sensor.
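A Bayes update of this kind can be sketched numerically. The state space, sensor levels, and all probability values below are invented for illustration – they are not the numbers from the original slide:

```python
import numpy as np

# Invented example: state = surface type; the 3-level tactile sensor
# returns a reading z in {0, 1, 2} (e.g. low / medium / high pressure).
states = ["soft", "hard"]
prior = np.array([0.5, 0.5])                   # p(x)

# Likelihood table p(z | x): one row per state, one column per level.
likelihood = np.array([
    [0.7, 0.2, 0.1],                           # p(z | soft)
    [0.1, 0.3, 0.6],                           # p(z | hard)
])

z = 2                                          # observed sensor level
unnormalized = likelihood[:, z] * prior        # p(z | x) * p(x)
posterior = unnormalized / unnormalized.sum()  # normalize by evidence p(z)
print(dict(zip(states, np.round(posterior, 3))))
```

A high reading pulls the posterior toward "hard", exactly as the likelihood table dictates.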

Bayes Filter

Markov Chain

Markov Property:

  • The conditional probability distribution of future states of the process (conditioned on both past and present states) depends only upon the present state, i.e.:

    • Observations depend only on the current state

    • [If an action is available, the future state depends only on the current state and the current action]

Markov Chain:

  • A discrete-time stochastic process satisfying the Markov property
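A minimal numeric illustration of a Markov chain (the two-state "weather" transition matrix below is invented):

```python
import numpy as np

# Invented two-state Markov chain: state 0 = sunny, state 1 = rainy.
# T[i, j] = p(next state j | current state i); each row sums to 1.
T = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# The Markov property: the distribution over the next state depends only
# on the current state distribution, so p_{t+1} = p_t @ T.
p = np.array([1.0, 0.0])        # start surely sunny
for _ in range(50):
    p = p @ T
print(np.round(p, 3))           # converges to the stationary distribution
```

Iterating the one-step rule is all that is needed; no history of past states enters the recursion.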

 

 



 

 

 

 

 

 

HMM: the state variable isn't observed – only a noisy measurement of it is observed.

General description – given the conditional independences from the Markov process, the joint distribution factorizes as:

  p(x_{0:T}, z_{1:T}) = p(x_0) · ∏_{t=1}^{T} p(x_t | x_{t−1}, u_t) · p(z_t | x_t)



Kalman Filter

Bayes Filter for Multivariate Normal PDFs

Univariate Normal (Gaussian) Distribution – probability density function:

  p(x) = 1 / (σ √(2π)) · exp( −(x − μ)² / (2σ²) ),   x ~ N(μ, σ²)

Multivariate Normal (Gaussian) Distribution – probability density function:

  p(x) = (2π)^(−k/2) · det(Σ)^(−1/2) · exp( −½ (x − μ)ᵀ Σ⁻¹ (x − μ) ),   x ~ N(μ, Σ),  x ∈ R^k
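The two density formulas can be cross-checked in code; `mvn_pdf` below is a direct transcription of the multivariate formula, not a library routine:

```python
import numpy as np

def mvn_pdf(x, mu, Sigma):
    """Multivariate normal density, transcribed directly from the formula."""
    k = len(mu)
    diff = x - mu
    norm = (2 * np.pi) ** (-k / 2) * np.linalg.det(Sigma) ** (-0.5)
    return norm * np.exp(-0.5 * diff @ np.linalg.inv(Sigma) @ diff)

# Sanity check: for k = 1 it must reduce to the univariate formula.
mu, sigma, x = 1.0, 2.0, 0.3
univariate = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
multivariate = mvn_pdf(np.array([x]), np.array([mu]), np.array([[sigma ** 2]]))
print(np.isclose(univariate, multivariate))  # True
```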


Bayes Filter for Multivariate Normal Distributions

Linear Transformation of a Gaussian Distribution:

  If x ~ N(μ, Σ), then y = A x + b ~ N(A μ + b, A Σ Aᵀ)

Product of two Gaussian Probability Density Functions:

  N(x; μ₁, Σ₁) · N(x; μ₂, Σ₂) ∝ N(x; μ, Σ),  where
  Σ = (Σ₁⁻¹ + Σ₂⁻¹)⁻¹  and  μ = Σ · (Σ₁⁻¹ μ₁ + Σ₂⁻¹ μ₂)

(Note 1: this is not the distribution of the product of the 2 Random Variables themselves (!), but the product of the PDFs of the two RVs – which is again proportional to a Gaussian PDF.)
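The linear-transformation rule is easy to verify by sampling (the specific μ, Σ, A, b below are arbitrary test values):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
b = np.array([4.0, -1.0])

# Sample x ~ N(mu, Sigma) and push every sample through y = A x + b.
x = rng.multivariate_normal(mu, Sigma, size=200_000)
y = x @ A.T + b

# Empirical moments should match the closed forms A mu + b and A Sigma A^T.
print(np.round(y.mean(axis=0), 2))   # ~ A @ mu + b
print(np.round(np.cov(y.T), 1))      # ~ A @ Sigma @ A.T
```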


Assuming a Discrete-Time Stochastic Process that follows the Markov property, and assuming the state Probability Density Function is Gaussian:

  x_{t−1} ~ N(μ_{t−1}, Σ_{t−1})

Assuming that it evolves according to a Linear Process Model:

  x_t = A x_{t−1} + B u_t + w_t,   w_t ~ N(0, Q)

Note: these are the Gauss-Markov assumptions, under which Ordinary Least Squares provides the Best, Linear, Unbiased Estimation methodology (BLUE).


Assuming, in addition, that the Measurement Model is also linear:

  z_t = H x_t + v_t,   v_t ~ N(0, R)

Under these assumptions, every distribution the filter manipulates remains Gaussian, so it suffices to propagate means and covariances.


Recursive Bayes Estimator

  • Prediction:
      bel⁻(x_t) = ∫ p(x_t | x_{t−1}, u_t) · bel(x_{t−1}) dx_{t−1}

  • Update:
      bel(x_t) = η · p(z_t | x_t) · bel⁻(x_t)

  • Kalman Assumptions: linear process and measurement models with additive, zero-mean Gaussian noise, and a Gaussian initial belief bel(x_0)
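For a discrete state space, the prediction/update recursion becomes a histogram (discrete Bayes) filter. The ring world, motion probabilities, and sensor accuracy below are all invented for illustration:

```python
import numpy as np

# Toy world (invented): a robot moves around a 5-cell ring; cells 0 and 2
# have doors, and a noisy sensor reports door (1) / no door (0).
doors = np.array([1, 0, 1, 0, 0])
belief = np.full(5, 0.2)                     # uniform prior bel(x_0)

def predict(belief):
    """Motion model: move one cell right with prob 0.8, stay with prob 0.2."""
    return 0.8 * np.roll(belief, 1) + 0.2 * belief

def update(belief, z):
    """Sensor model: the reading is correct with prob 0.9."""
    likelihood = np.where(doors == z, 0.9, 0.1)
    posterior = likelihood * belief
    return posterior / posterior.sum()       # eta normalizes

for z in [1, 0, 1]:                          # a short measurement sequence
    belief = update(predict(belief), z)
print(np.round(belief, 3))
```

The integral of the continuous prediction step becomes the weighted sum inside `predict`, and η becomes the division by `posterior.sum()`.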



Applied, this gives the Kalman Filter Predict & Update (/Correct) steps:

  • Kalman Prediction:
      x̂⁻_t = A x̂_{t−1} + B u_t
      P⁻_t = A P_{t−1} Aᵀ + Q

  • Kalman Update:
      K_t = P⁻_t Hᵀ (H P⁻_t Hᵀ + R)⁻¹
      x̂_t = x̂⁻_t + K_t (z_t − H x̂⁻_t)
      P_t = (I − K_t H) P⁻_t

Note: the Prediction & Correction steps can take place in various orders, depending on the Markov Chain.
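The five equations can be run as-is on a scalar example (the random-walk model and the noise values Q, R below are made-up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up scalar model: random-walk state, direct noisy measurement.
A, H = 1.0, 1.0          # process and measurement matrices (scalars here)
Q, R = 0.01, 1.0         # process and measurement noise variances

x_true = 0.0
x_hat, P = 0.0, 1.0      # initial belief N(0, 1)
for _ in range(200):
    x_true += rng.normal(0.0, np.sqrt(Q))      # simulate the process
    z = x_true + rng.normal(0.0, np.sqrt(R))   # simulate the sensor
    # Kalman prediction (B u = 0 here)
    x_hat = A * x_hat
    P = A * P * A + Q
    # Kalman update
    K = P * H / (H * P * H + R)
    x_hat = x_hat + K * (z - H * x_hat)
    P = (1 - K * H) * P

print(round(P, 4), round(K, 4))  # covariance settles well below R
```

After a few iterations P converges to a steady state, which is why the filter trusts its state estimate more than any single measurement.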

Derivation of the update – the predicted belief (1) and the measurement likelihood (2) are both Gaussian:

  (1)  bel⁻(x_t) ∝ exp( −½ (x_t − x̂⁻_t)ᵀ (P⁻_t)⁻¹ (x_t − x̂⁻_t) )

  (2)  p(z_t | x_t) ∝ exp( −½ (z_t − H x_t)ᵀ R⁻¹ (z_t − H x_t) )

where x̂⁻_t, P⁻_t are the predicted mean and covariance. The Bayes update multiplies (1) and (2); the product is again proportional to a Gaussian. Setting the derivative of its exponent to zero and solving for the mean yields the "Kalman Gain":

  K_t = P⁻_t Hᵀ (H P⁻_t Hᵀ + R)⁻¹
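The resulting update can be cross-checked against the product-of-Gaussians (information) form for an arbitrary numeric case:

```python
import numpy as np

# Arbitrary test case: 2-D state, 1-D measurement.
P_minus = np.array([[2.0, 0.3],
                    [0.3, 1.0]])
H = np.array([[1.0, 0.0]])
R = np.array([[0.5]])

# Kalman-gain form of the posterior covariance:
K = P_minus @ H.T @ np.linalg.inv(H @ P_minus @ H.T + R)
P_kf = (np.eye(2) - K @ H) @ P_minus

# Information (product-of-Gaussians) form: inverse covariances add.
P_info = np.linalg.inv(np.linalg.inv(P_minus) + H.T @ np.linalg.inv(R) @ H)

print(np.allclose(P_kf, P_info))  # True
```

The two expressions agree because the Kalman gain is exactly the result of multiplying the two Gaussian PDFs from the derivation above.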


Kalman Filter – Recursive Estimation

  • Final Notes:
    • Statistics – Maximum Likelihood Estimation (MLE): estimate the unknown parameters of a statistical model by constructing the (log-)likelihood function of the Joint Distribution of the data, then maximizing this function over all possible parameter values
    • Statistics – Ordinary Least Squares (OLS): a linear least-squares method for estimating the unknown parameters of a linear regression model. Under the Gauss-Markov assumptions, it is the Optimal (Best) Linear Unbiased Estimator (BLUE)

    • OLS, under the additional assumption of Normally-distributed errors, is identical to MLE! Assuming a Multivariate Normal Distribution, constructing the Log-Likelihood function of the Joint Distribution of the data in order to perform MLE turns out to yield a form equivalent to the OLS method
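That equivalence is easy to probe numerically – the regression problem below is synthetic, with the noise standard deviation fixed and known:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic linear regression with Gaussian errors.
X = np.column_stack([np.ones(100), rng.normal(size=100)])
beta_true = np.array([2.0, -1.0])
y = X @ beta_true + rng.normal(0.0, 0.5, size=100)

# OLS estimate via the normal equations.
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

def gaussian_loglik(beta, sigma=0.5):
    """Gaussian log-likelihood of the data, up to an additive constant."""
    r = y - X @ beta
    return -0.5 * np.sum(r ** 2) / sigma ** 2

# The Gaussian log-likelihood peaks exactly at the OLS solution:
for d in [np.array([0.1, 0.0]), np.array([0.0, -0.1])]:
    assert gaussian_loglik(beta_ols) > gaussian_loglik(beta_ols + d)
print(np.round(beta_ols, 2))
```

Because the log-likelihood is a negative sum of squared residuals, maximizing it and minimizing the OLS cost are literally the same optimization.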

Prediction:

  • Project State Ahead:             x̂⁻_t = A x̂_{t−1} + B u_t
  • Project Error Covariance Ahead:  P⁻_t = A P_{t−1} Aᵀ + Q

Correction:

  • Compute Kalman Gain:               K_t = P⁻_t Hᵀ (H P⁻_t Hᵀ + R)⁻¹
  • Update Estimate with Measurement:  x̂_t = x̂⁻_t + K_t (z_t − H x̂⁻_t)
  • Update Error Covariance:           P_t = (I − K_t H) P⁻_t


  • Final Notes (continued):

    • The Kalman Filter is recursive, adheres to the Markov assumptions, and assumes Normally-distributed errors in state, process, and measurement; the resulting Joint Distributions are Multivariate Normal Distributions

    • Recursive OLS and Kalman-Filter MLE coincide if a conditional (log-)likelihood function is used

    • But when the model is time-varying, the MLE estimates are obtained with mis-specified errors; these are not asymptotically equivalent to those of the correct model
      • Thus the Kalman-Filter estimates are not Best Linear MSE (Mean Squared Error) ones



  • Final Notes (continued):

    • When the models are nonlinear, the Extended Kalman Filter (EKF) works by linearizing them
      • A and H then represent the Jacobian matrices of partial derivatives of the process and measurement models
      • Effectively, propagations are calculated based on "first-order" linearizations of the nonlinear system

    • So: the Distributions of the Random Variables are no longer Normal after the respective nonlinear transformations

    • The EKF does not work well when the model is highly nonlinear; another variant, the Unscented Kalman Filter (UKF – from the family of Sigma-Point Kalman Filters), which propagates a small, deterministically chosen set of sigma points through the nonlinear models to calculate the updates, works better
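A minimal sketch of the first-order idea (the scalar nonlinear function g and its derivative below are invented; G plays the role of the Jacobian):

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented scalar nonlinear process model g(x) and its derivative (Jacobian).
g = lambda x: np.sin(x) + 0.5 * x
G = lambda x: np.cos(x) + 0.5

mu, var = 0.8, 0.04                  # prior belief N(0.8, 0.04)

# EKF-style first-order propagation of mean and covariance:
mu_ekf = g(mu)
var_ekf = G(mu) ** 2 * var

# Monte Carlo ground truth for comparison:
samples = g(rng.normal(mu, np.sqrt(var), size=500_000))
print(round(mu_ekf, 3), round(samples.mean(), 3))   # close for small variance
print(round(var_ekf, 4), round(samples.var(), 4))
```

For a small prior variance the linearization tracks the true moments well; as the variance grows (or g curves more sharply), the gap widens – which is exactly the regime where the UKF's sigma points help.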

more on the EKF in an upcoming Lecture…

Time for Questions!