A Crash Course on Kalman Filtering - Cleveland State University · 2014-09-18



A Crash Course on Kalman Filtering

Dan Simon

Cleveland State University

Fall 2014

1 / 64


Outline

Linear Systems

Probability

State Means and Covariances

Least Squares Estimation

The Kalman Filter

Unknown Input Estimation

The Extended Kalman Filter

2 / 64


Linear Systems

KVL: u = vC + L diL/dt
KCL: iC + iR1 = iR2 + iL
iC = C dvC/dt, iR1 = vC/R, iR2 = (u − vC)/R
⟹ dvC/dt = −2vC/(RC) + iL/C + u/(RC)

Define x = [vC iL]^T:

ẋ = [−2/(RC) 1/C; −1/L 0] x + [1/(RC); 1/L] u = Ax + Bu

What is the "output"? Suppose y = vR2 = L diL/dt = [−1 0]x + u = Cx + Du

3 / 64


Linear Systems

Let R = C = L = 1 and u(t) = unit step function. How can we simulate the system in Matlab?

Control System Toolbox

Simulink

m-file

4 / 64


Linear Systems: Continuous-Time Simulation

Control System Toolbox:

R = 1; L = 1; C = 1;
A = [-2/R/C, 1/C ; -1/L, 0];
B = [1/R/C ; 1/L];
C = [-1, 0];   % C is reused here as the output matrix (A and B are already built)
D = 1;
sys = ss(A, B, C, D);
step(sys)

Simulink: LinearRLC1Model.slx

m-file: LinearRLC1.m

What time step should we use?

5 / 64


Linear Systems: Discretization

Continuous time: ẋ = Ax + Bu, y = Cx + Du

Discrete time: xk+1 = Fxk + Guk, yk = Cxk + Duk

F = exp(A∆t)

G = (F − I)A^−1 B

where ∆t is the integration step size and I is the identity matrix
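These formulas can be checked numerically. A sketch in Python/NumPy (the slides use Matlab; A and B are the RLC matrices from the earlier slide with R = L = C = 1):

```python
import numpy as np
from scipy.linalg import expm

# RLC example from the previous slides, with R = L = C = 1
A = np.array([[-2.0, 1.0],
              [-1.0, 0.0]])
B = np.array([[1.0],
              [1.0]])

dt = 0.1
F = expm(A * dt)                           # F = exp(A*dt)
G = np.linalg.solve(A, F - np.eye(2)) @ B  # (F - I) A^-1 B; F - I and A^-1 commute

# Cross-check G against the Taylor series (I*dt + A*dt^2/2 + A^2*dt^3/6) B
G_series = (np.eye(2) * dt + A * dt**2 / 2 + A @ A * dt**3 / 6) @ B
assert np.allclose(G, G_series, atol=1e-4)
```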

6 / 64


Linear Systems: Discrete-Time Simulation

Control System Toolbox:

dt = 0.1;
F = expm(A*dt);
G = (F - eye(2)) / A * B;
sysDiscrete = ss(F, G, C, D, dt);
step(sysDiscrete)

Simulink: LinearRLC1DiscreteModel.slx

m-file: LinearRLC1Discrete.m

7 / 64


Outline

Linear Systems

Probability

State Means and Covariances

Least Squares Estimation

The Kalman Filter

Unknown Input Estimation

The Extended Kalman Filter

8 / 64


Cumulative Distribution Function

X = random variable

CDF: FX(x) = P(X ≤ x)

FX(x) ∈ [0, 1]

FX(−∞) = 0, FX(∞) = 1

FX(a) ≤ FX(b) if a ≤ b

P(a < X ≤ b) = FX(b) − FX(a)

9 / 64


Probability Density Function

PDF: fX(x) = dFX(x)/dx

FX(x) = ∫_{−∞}^{x} fX(z) dz

fX(x) ≥ 0

∫_{−∞}^{∞} fX(x) dx = 1

P(a < X ≤ b) = ∫_{a}^{b} fX(x) dx

Expected value: E[g(X)] = ∫_{−∞}^{∞} g(x) fX(x) dx

E(X) = x̄ = μx = mean

E[(X − x̄)²] = σx² = variance

σx = standard deviation

10 / 64


Probability

Random numbers in Matlab: rand and randn

Random number seed

How can we create a random vector with given covariance R?

Probability density functions:

Uniform distribution
Gaussian, or normal, distribution
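One standard answer to the covariance question above: factor R = Lc Lc^T with a Cholesky decomposition and shape unit-variance white noise with Lc. A sketch in Python/NumPy (the slides use Matlab; the 2×2 R here is a made-up example):

```python
import numpy as np

# Target covariance (must be symmetric positive definite)
R = np.array([[2.0, 0.5],
              [0.5, 1.0]])

Lc = np.linalg.cholesky(R)        # R = Lc @ Lc.T
rng = np.random.default_rng(0)

# x = Lc z with z ~ N(0, I) gives Cov(x) = Lc I Lc.T = R
z = rng.standard_normal((2, 100_000))
x = Lc @ z

sample_cov = np.cov(x)
assert np.allclose(Lc @ Lc.T, R)
assert np.allclose(sample_cov, R, atol=0.05)
```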

11 / 64


Multiple Random Variables

CDF: FXY(x, y) = P(X ≤ x, Y ≤ y)

PDF: fXY(x, y) = ∂²FXY(x, y)/∂x∂y

Independence: P(X ≤ x, Y ≤ y) = P(X ≤ x)P(Y ≤ y) for all x, y

Covariance: CXY = E[(X − X̄)(Y − Ȳ)] = E(XY) − X̄Ȳ

Correlation: RXY = E(XY)

12 / 64


Random Vectors

X = [X1 X2]^T

CDF: FX(x) = P(X1 ≤ x1, X2 ≤ x2)

PDF: fX(x) = ∂²FX(x)/∂x1∂x2

Autocorrelation: RX = E[XX^T] > 0

Autocovariance: CX = E[(X − X̄)(X − X̄)^T] > 0

Gaussian RV:

fX(x) = 1 / [(2π)^(n/2) |CX|^(1/2)] exp[−(1/2)(x − x̄)^T CX^−1 (x − x̄)]

If Y = AX + b, then Y ∼ N(Ax̄ + b, ACXA^T)

13 / 64


Stochastic Processes

A stochastic process X(t) is an RV that varies with time

If X(t1) and X(t2) are independent for all t1 ≠ t2, then X(t) is white

Otherwise, X(t) is colored

Examples:

The high temperature on a given day

The closing price of the stock market

Measurement noise in a voltmeter

The amount of sleep you get each night

14 / 64


Outline

Linear Systems

Probability

State Means and Covariances

Least Squares Estimation

The Kalman Filter

Unknown Input Estimation

The Extended Kalman Filter

15 / 64


State Means and Covariances

xk = Fk−1xk−1 + Gk−1uk−1 + wk−1

wk ∼ (0, Qk)

x̄k = E(xk) = Fk−1x̄k−1 + Gk−1uk−1

(xk − x̄k)(· · ·)^T = (Fk−1xk−1 + Gk−1uk−1 + wk−1 − x̄k)(· · ·)^T
= [Fk−1(xk−1 − x̄k−1) + wk−1][· · ·]^T
= Fk−1(xk−1 − x̄k−1)(xk−1 − x̄k−1)^T Fk−1^T + wk−1wk−1^T +
  Fk−1(xk−1 − x̄k−1)wk−1^T + wk−1(xk−1 − x̄k−1)^T Fk−1^T

Pk = E[(xk − x̄k)(· · ·)^T] = Fk−1Pk−1Fk−1^T + Qk−1

This is the discrete-time Lyapunov equation, or Stein equation
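The covariance recursion can be sanity-checked against a Monte Carlo simulation. A Python/NumPy sketch (F and Q here are made-up illustrative values, not from the slides):

```python
import numpy as np

# Stable 2-state system x_k = F x_{k-1} + w_{k-1}, w ~ (0, Q)
F = np.array([[0.9, 0.2],
              [0.0, 0.8]])
Q = np.array([[0.1, 0.0],
              [0.0, 0.2]])

rng = np.random.default_rng(1)
N = 200_000                       # number of Monte Carlo runs
x = np.zeros((2, N))              # all runs start at x_0 = 0
P = np.zeros((2, 2))              # so P_0 = 0

for _ in range(20):
    w = rng.multivariate_normal(np.zeros(2), Q, size=N).T
    x = F @ x + w                 # propagate samples
    P = F @ P @ F.T + Q           # propagate covariance analytically

sample_P = np.cov(x)
assert np.allclose(sample_P, P, atol=0.02)
```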

16 / 64


Outline

Linear Systems

Probability

State Means and Covariances

Least Squares Estimation

The Kalman Filter

Unknown Input Estimation

The Extended Kalman Filter

17 / 64


Least Squares Estimation

Suppose x is a constant vector

Vector measurement at time k: yk = Hkx + vk, vk ∼ (0, Rk)

Estimate: x̂k = x̂k−1 + Kk(yk − Hkx̂k−1)

This is a recursive estimator
(re·cur·sive: adjective, meaning recursive)

Our goal: find the "best" estimator gain Kk

18 / 64


Least Squares Estimation

What is the mean of the estimation error?

E(εx,k) = E(x − x̂k)
= E[x − x̂k−1 − Kk(yk − Hkx̂k−1)]
= E[εx,k−1 − Kk(Hkx + vk − Hkx̂k−1)]
= E[εx,k−1 − KkHk(x − x̂k−1) − Kkvk]
= (I − KkHk)E(εx,k−1) − KkE(vk)

E(εx,k) = 0 if E(vk) = 0 and E(εx,k−1) = 0, regardless of Kk

Unbiased estimator

19 / 64


Least Squares Estimation

Objective function:

Jk = E[(x1 − x̂1)²] + · · · + E[(xn − x̂n)²]
= E(ε²x1,k + · · · + ε²xn,k)
= E(εx,k^T εx,k)
= E[Tr(εx,k εx,k^T)]
= Tr Pk

20 / 64


Least Squares Estimation

Pk = E(εx,k εx,k^T)
= E{[(I − KkHk)εx,k−1 − Kkvk][· · ·]^T}
= (I − KkHk)E(εx,k−1 εx,k−1^T)(I − KkHk)^T −
  KkE(vk εx,k−1^T)(I − KkHk)^T − (I − KkHk)E(εx,k−1 vk^T)Kk^T +
  KkE(vkvk^T)Kk^T
= (I − KkHk)Pk−1(I − KkHk)^T + KkRkKk^T

21 / 64


Least Squares Estimation

Recall that ∂Tr(ABA^T)/∂A = 2AB if B is symmetric

∂Jk/∂Kk = 2(I − KkHk)Pk−1(−Hk^T) + 2KkRk = 0

KkRk = (I − KkHk)Pk−1Hk^T

Kk(Rk + HkPk−1Hk^T) = Pk−1Hk^T

Kk = Pk−1Hk^T (HkPk−1Hk^T + Rk)^−1

22 / 64


Recursive least squares estimation of a constant

1. Initialization: x̂0 = E(x), P0 = E[(x − x̂0)(x − x̂0)^T]

   If no knowledge about x is available before measurements are taken, then P0 = ∞I. If perfect knowledge about x is available before measurements are taken, then P0 = 0.

2. For k = 1, 2, · · ·, perform the following.

   1. Obtain measurement yk:

      yk = Hkx + vk

      where vk ∼ (0, Rk) and E(vi vk^T) = Rkδk−i (white noise)

   2. Measurement update of estimate:

      Kk = Pk−1Hk^T (HkPk−1Hk^T + Rk)^−1
      x̂k = x̂k−1 + Kk(yk − Hkx̂k−1)
      Pk = (I − KkHk)Pk−1(I − KkHk)^T + KkRkKk^T
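The algorithm above is short enough to write out in full. A Python/NumPy sketch of recursive least squares estimating a constant 2-vector from scalar measurements (the slides' Matlab version is RLS.m; the H, R, and true x here are made-up):

```python
import numpy as np

rng = np.random.default_rng(2)

x = np.array([1.0, -2.0])          # unknown constant to be estimated
n = len(x)

xhat = np.zeros(n)                 # x̂_0: no prior knowledge of the value
P = 1e6 * np.eye(n)                # "P_0 = ∞ I" approximated by a large covariance
Rk = 0.01                          # scalar measurement noise variance

for k in range(200):
    H = rng.standard_normal((1, n))            # measurement matrix at time k
    y = H @ x + rng.normal(0, np.sqrt(Rk), 1)  # y_k = H_k x + v_k
    S = H @ P @ H.T + Rk                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # K_k
    xhat = xhat + (K @ (y - H @ xhat)).ravel() # measurement update
    IKH = np.eye(n) - K @ H
    P = IKH @ P @ IKH.T + K * Rk @ K.T         # Joseph-form covariance update

assert np.linalg.norm(xhat - x) < 0.1
```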

23 / 64


Alternate Estimator Equations

Kk = Pk−1Hk^T (HkPk−1Hk^T + Rk)^−1
   = PkHk^T Rk^−1

Pk = (I − KkHk)Pk−1(I − KkHk)^T + KkRkKk^T
   = (Pk−1^−1 + Hk^T Rk^−1 Hk)^−1
   = (I − KkHk)Pk−1   (valid only for optimal Kk)

Example: RLS.m

24 / 64


Outline

Linear Systems

Probability

State Means and Covariances

Least Squares Estimation

The Kalman Filter

Unknown Input Estimation

The Extended Kalman Filter

25 / 64


The Kalman filter

xk = Fk−1xk−1 + Gk−1uk−1 + wk−1
yk = Hkxk + vk

wk ∼ (0, Qk), vk ∼ (0, Rk)
E[wkwj^T] = Qkδk−j, E[vkvj^T] = Rkδk−j, E[vkwj^T] = 0

x̂k⁺ = E[xk | y1, y2, · · · , yk] = a posteriori estimate
Pk⁺ = E[(xk − x̂k⁺)(xk − x̂k⁺)^T]

x̂k⁻ = E[xk | y1, y2, · · · , yk−1] = a priori estimate
Pk⁻ = E[(xk − x̂k⁻)(xk − x̂k⁻)^T]

x̂k|k+N = E[xk | y1, y2, · · · , yk, · · · , yk+N] = smoothed estimate
x̂k|k−M = E[xk | y1, y2, · · · , yk−M] = predicted estimate

26 / 64


Time Update Equations

Initialization: x̂0⁺ = E(x0)

x̂1⁻ = F0x̂0⁺ + G0u0
P1⁻ = F0P0⁺F0^T + Q0

Generalize:

x̂k⁻ = Fk−1x̂k−1⁺ + Gk−1uk−1
Pk⁻ = Fk−1Pk−1⁺Fk−1^T + Qk−1

These are the Kalman filter time update equations

27 / 64


Measurement Update Equations

Recall the RLS estimate of a constant x:

Kk = Pk−1Hk^T (HkPk−1Hk^T + Rk)^−1
x̂k = x̂k−1 + Kk(yk − Hkx̂k−1)
Pk = (I − KkHk)Pk−1(I − KkHk)^T + KkRkKk^T

x̂k−1, Pk−1 = estimate and covariance before measurement yk
x̂k, Pk = estimate and covariance after measurement yk

Least squares estimator ⟹ Kalman filter:

x̂k−1 = estimate before yk ⟹ x̂k⁻ = a priori estimate
Pk−1 = covariance before yk ⟹ Pk⁻ = a priori covariance
x̂k = estimate after yk ⟹ x̂k⁺ = a posteriori estimate
Pk = covariance after yk ⟹ Pk⁺ = a posteriori covariance

28 / 64


Measurement Update Equations

Recursive least squares:

Kk = Pk−1Hk^T (HkPk−1Hk^T + Rk)^−1
x̂k = x̂k−1 + Kk(yk − Hkx̂k−1)
Pk = (I − KkHk)Pk−1(I − KkHk)^T + KkRkKk^T

Kalman filter:

Kk = Pk⁻Hk^T (HkPk⁻Hk^T + Rk)^−1
x̂k⁺ = x̂k⁻ + Kk(yk − Hkx̂k⁻)
Pk⁺ = (I − KkHk)Pk⁻(I − KkHk)^T + KkRkKk^T

These are the Kalman filter measurement update equations

29 / 64


Kalman Filter Equations

1. State equations:

   xk = Fk−1xk−1 + Gk−1uk−1 + wk−1
   yk = Hkxk + vk
   E(wkwj^T) = Qkδk−j, E(vkvj^T) = Rkδk−j, E(wkvj^T) = 0

2. Initialization: x̂0⁺ = E(x0), P0⁺ = E[(x0 − x̂0⁺)(x0 − x̂0⁺)^T]

3. For each time step k = 1, 2, · · ·:

   Pk⁻ = Fk−1Pk−1⁺Fk−1^T + Qk−1
   Kk = Pk⁻Hk^T (HkPk⁻Hk^T + Rk)^−1 = Pk⁺Hk^T Rk^−1
   x̂k⁻ = Fk−1x̂k−1⁺ + Gk−1uk−1 = a priori state estimate
   x̂k⁺ = x̂k⁻ + Kk(yk − Hkx̂k⁻) = a posteriori state estimate
   Pk⁺ = (I − KkHk)Pk⁻(I − KkHk)^T + KkRkKk^T
       = [(Pk⁻)^−1 + Hk^T Rk^−1 Hk]^−1
       = (I − KkHk)Pk⁻
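One pass through this loop, written out in Python/NumPy for a made-up constant-velocity tracking model with no input u (the slides' own examples are in Matlab):

```python
import numpy as np

rng = np.random.default_rng(3)

# Constant-velocity model: x = [position, velocity], scalar position measurement
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = 0.01 * np.array([[dt**3/3, dt**2/2], [dt**2/2, dt]])
H = np.array([[1.0, 0.0]])
R = np.array([[0.25]])

x_true = np.array([0.0, 1.0])
xhat = np.zeros(2)                 # x̂0+
P = np.eye(2)                      # P0+

for k in range(300):
    # Simulate the system
    w = rng.multivariate_normal(np.zeros(2), Q)
    x_true = F @ x_true + w
    y = H @ x_true + rng.normal(0, np.sqrt(R[0, 0]), 1)

    # Time update
    xhat = F @ xhat                          # x̂k- (no input u here)
    P = F @ P @ F.T + Q                      # Pk-

    # Measurement update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)           # Kk
    xhat = xhat + (K @ (y - H @ xhat)).ravel()
    IKH = np.eye(2) - K @ H
    P = IKH @ P @ IKH.T + K @ R @ K.T        # Joseph form, Pk+

assert P[0, 0] < R[0, 0]   # filtering beats using the raw measurement alone
```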

30 / 64


Kalman Filter Properties

Define estimation error x̃k = xk − x̂k

Problem: min E[x̃k^T Sk x̃k], where Sk > 0

If {wk} and {vk} are Gaussian, zero-mean, uncorrelated, and white, then the Kalman filter solves the problem.

If {wk} and {vk} are zero-mean, uncorrelated, and white, then the Kalman filter is the best linear solution to the problem.

If {wk} and {vk} are correlated or colored, then the Kalman filter can be easily modified to solve the problem.

For nonlinear systems, the Kalman filter can be modified to approximate the solution to the problem.

31 / 64


Kalman Filter Example: DiscreteKFEx1.m

d/dt [r; v; a] = [0 1 0; 0 0 1; 0 0 0][r; v; a] + w ⟹ ẋ = Ax + w

xk+1 = Fxk + wk

F = exp(AT) = [1 T T²/2; 0 1 T; 0 0 1]

wk ∼ (0, Qk)

x̂k⁻ = Fx̂k−1⁺
Pk⁻ = FPk−1⁺F^T + Qk−1

yk = Hkxk + vk = [1 0 0]xk + vk

vk ∼ (0, Rk), Rk = σ²
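A here is nilpotent (A³ = 0), so exp(AT) = I + AT + (AT)²/2 exactly, which is the F shown above. A quick numerical check in Python/NumPy (T = 0.5 is an arbitrary choice):

```python
import numpy as np
from scipy.linalg import expm

T = 0.5
# Position/velocity/acceleration chain: A is nilpotent (A^3 = 0)
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])

F = expm(A * T)
F_closed = np.array([[1.0, T, T**2/2],
                     [0.0, 1.0, T],
                     [0.0, 0.0, 1.0]])

# exp(AT) = I + AT + (AT)^2/2 exactly, because A^3 = 0
assert np.allclose(F, F_closed)
```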

32 / 64


Kalman Filter Divergence

Kalman filter theory is based on several assumptions. How to improve filter performance in the real world:

Increase arithmetic precision

Square root filtering

Use a fading-memory Kalman filter

Use fictitious process noise

Use a more robust filter (e.g., H-infinity)

33 / 64


Modeling Errors

True system:

x1,k+1 = x1,k + x2,k
x2,k+1 = x2,k
yk = x1,k + vk
vk ∼ (0, 1)

Incorrect model:

x1,k+1 = x1,k
yk = x1,k + vk
wk ∼ (0, Q), Q = 0
vk ∼ (0, 1)

34 / 64


Modeling Errors

Figure: Kalman filter divergence due to mismodeling (true state vs. estimated state over time)

35 / 64


Fictitious Process Noise

Figure: Kalman filter improvement due to fictitious process noise (true state and x̂ for Q = 1, 0.1, 0.01, 0)

36 / 64


Fictitious Process Noise

Figure: Kalman gain over time for various values of process noise (Q = 1, 0.1, 0.01, 0)

37 / 64


The Continuous-Time Kalman Filter

ẋ = Ax + Bu + w
y = Cx + v

w ∼ (0, Qc), v ∼ (0, Rc)

Discretize:

xk = Fxk−1 + Guk−1 + wk−1
yk = Hxk + vk

F = exp(AT) ≈ I + AT for small T
G = [exp(AT) − I]A^−1 B ≈ BT for small T
H = C

wk ∼ (0, Q), Q = QcT
vk ∼ (0, R), R = Rc/T

38 / 64


The Continuous-Time Kalman Filter

Recall the discrete-time Kalman gain:

Kk = Pk⁻H^T (HPk⁻H^T + R)^−1
   = Pk⁻C^T (CPk⁻C^T + Rc/T)^−1

Kk/T = Pk⁻C^T (CPk⁻C^T T + Rc)^−1

lim_{T→0} Kk/T = Pk⁻C^T Rc^−1

39 / 64


The Continuous-Time Kalman Filter

Recall the discrete-time estimation error covariance equations:

Pk⁺ = (I − KkH)Pk⁻

Pk+1⁻ = FPk⁺F^T + Q
= (I + AT)Pk⁺(I + AT)^T + QcT, for small T
= Pk⁺ + (APk⁺ + Pk⁺A^T + Qc)T + APk⁺A^T T²
= (I − KkC)Pk⁻ + APk⁺A^T T² + [A(I − KkC)Pk⁻ + (I − KkC)Pk⁻A^T + Qc]T

(Pk+1⁻ − Pk⁻)/T = −KkCPk⁻/T + APk⁺A^T T +
(APk⁻ − AKkCPk⁻ + Pk⁻A^T − KkCPk⁻A^T + Qc)

Ṗ = lim_{T→0} (Pk+1⁻ − Pk⁻)/T = −PC^T Rc^−1 CP + AP + PA^T + Qc

40 / 64


The Continuous-Time Kalman Filter

Recall the discrete-time state estimate equations:

x̂k⁻ = Fx̂k−1⁺ + Guk−1

x̂k⁺ = x̂k⁻ + Kk(yk − Hx̂k⁻)
= Fx̂k−1⁺ + Guk−1 + Kk(yk − HFx̂k−1⁺ − HGuk−1)
≈ (I + AT)x̂k−1⁺ + BTuk−1 + Kk[yk − C(I + AT)x̂k−1⁺ − CBTuk−1], for small T
= x̂k−1⁺ + ATx̂k−1⁺ + BTuk−1 + PC^T Rc^−1 T(yk − Cx̂k−1⁺ − CATx̂k−1⁺ − CBTuk−1)

dx̂/dt = lim_{T→0} (x̂k⁺ − x̂k−1⁺)/T = Ax̂ + Bu + PC^T Rc^−1 (y − Cx̂)

dx̂/dt = Ax̂ + Bu + K(y − Cx̂)

K = PC^T Rc^−1

41 / 64


The Continuous-Time Kalman Filter

Continuous-time system dynamics and measurement:

ẋ = Ax + Bu + w
y = Cx + v
w ∼ (0, Qc), v ∼ (0, Rc)

Continuous-time Kalman filter equations:

x̂(0) = E[x(0)]
P(0) = E[(x(0) − x̂(0))(x(0) − x̂(0))^T]
K = PC^T Rc^−1
dx̂/dt = Ax̂ + Bu + K(y − Cx̂)
Ṗ = −PC^T Rc^−1 CP + AP + PA^T + Qc

What if y includes the input also? That is, y = Cx + Du?
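Setting Ṗ = 0 turns the covariance equation into an algebraic Riccati equation for the steady-state P and K. A Python/SciPy sketch using the RLC A and C from the earlier slides (Qc and Rc are made-up noise levels):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# RLC example matrices (R = L = C = 1); Qc, Rc are made-up noise levels
A = np.array([[-2.0, 1.0],
              [-1.0, 0.0]])
C = np.array([[-1.0, 0.0]])
Qc = 0.1 * np.eye(2)
Rc = np.array([[0.05]])

# Steady state of Pdot = -P C^T Rc^-1 C P + A P + P A^T + Qc = 0
# (the filter ARE is the dual of the control ARE, hence A.T and C.T)
P = solve_continuous_are(A.T, C.T, Qc, Rc)
K = P @ C.T @ np.linalg.inv(Rc)   # steady-state Kalman gain

# Residual of the Riccati equation should be ~0
resid = A @ P + P @ A.T - P @ C.T @ np.linalg.inv(Rc) @ C @ P + Qc
assert np.allclose(resid, 0, atol=1e-8)
# Filter error dynamics A - K C should be stable
assert np.all(np.linalg.eigvals(A - K @ C).real < 0)
```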

42 / 64


Example: Regenerative Knee Prosthesis

43 / 64


System Equations

44 / 64


Input Torque M

D. Winter, Biomechanics and Motor Control of Human Movement, 4th Edition, Wiley, 2009, Appendix A

www.wiley.com/WileyCDA/WileyTitle/productCd-0470398183.html

bcs.wiley.com/he-bcs/Books?action=resource&bcsId=5453&itemId=0470398183&resourceId=19492

45 / 64


State Equations

x1 = φk
x2 = φ̇k
x3 = iA
x4 = vC

JT = JG + n²JM

ẋ = [0 1 0 0;
     −K/JT 0 −na/JT 0;
     0 an/LA −RA/LA −u/LA;
     0 0 u/C 0] x + [0; 1/JT; 0; 0] MK + w

y = [1 0 0 0]x + v

Matlab program: RegenerationKalman.m

46 / 64


Outline

Linear Systems

Probability

State Means and Covariances

Least Squares Estimation

The Kalman Filter

Unknown Input Estimation

The Extended Kalman Filter

47 / 64


Unknown Input Estimation

Continuous-time system dynamics and measurement:

ẋ = Ax + Bu + f + w
y = Cx + v
w ∼ (0, Qc), v ∼ (0, Rc)

Consider f as a state: z = [x^T f^T]^T

ż = [A I; 0 0]z + [B; 0]u + [w; w′]
y = [C 0]z + v

[w; w′] ∼ (0, Q̄), Q̄ = diag(Qc, Q′)
v ∼ (0, Rc)

w′ is fictitious process noise, and Q′ is a tuning parameter
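Building the augmented matrices is mostly bookkeeping. A Python/NumPy sketch (A, B, C reuse the RLC example from earlier slides; Qc and Q′ are made-up tuning values):

```python
import numpy as np

# 2-state RLC example (R = L = C = 1), single measurement
A = np.array([[-2.0, 1.0],
              [-1.0, 0.0]])
B = np.array([[1.0],
              [1.0]])
C = np.array([[-1.0, 0.0]])
n = A.shape[0]

# Augmented state z = [x; f]: f modeled as a random walk (noisy constant)
Abar = np.block([[A,                np.eye(n)],
                 [np.zeros((n, n)), np.zeros((n, n))]])
Bbar = np.vstack([B, np.zeros((n, 1))])
Cbar = np.hstack([C, np.zeros((1, n))])

Qc, Qprime = 0.1 * np.eye(n), 10.0 * np.eye(n)
Qbar = np.block([[Qc,               np.zeros((n, n))],
                 [np.zeros((n, n)), Qprime]])

# The n integrator states of f contribute n zero eigenvalues
assert int(np.sum(np.isclose(np.linalg.eigvals(Abar), 0))) == n
```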

48 / 64


Unknown Input Estimation: Rowing Machine

θ = position, ω = velocity, q = capacitor charge

k = spring constant, J = inertia, a = motor constant

R = resistance, u = power converter ratio, C = capacitance

r = radius, φ = friction

49 / 64


Unknown Input Estimation: Rowing Machine

System model:

θ̇ = ω
ω̇ = −(k/J)θ − (a²/RJ)ω + (au/RCJ)q + (r/J)F − φ(θ, ω)/J
q̇ = (au/R)ω − (u²/RC)q, φ(·, ·) = 0.12 sign(ω)

State space model, assuming ω > 0 (x = [θ ω q F]^T):

ẋ = [0 1 0 0;
     −k/J −a²/(RJ) au/(RCJ) r/J;
     0 au/R −u²/(RC) 0;
     0 0 0 0] x + [0; −0.12; 0; 0] + w

w ∼ (0, Q), Q = diag[q1, q2, q3, q4]

y = Cx + v = [1 0 0 0; 0 1 0 0; 0 0 1 0] x + v

v ∼ (0, R), R = diag[0.01², 0.01², (0.01C)²], since q = CV

50 / 64


Unknown Input Estimation: Rowing Machine

Q = diag([0.01², 0.01², 1², 1²]) - not responsive enough

51 / 64


Unknown Input Estimation: Rowing Machine

Q = diag([0.01², 0.01², 1², 10000²]) - too responsive

52 / 64


Unknown Input Estimation: Rowing Machine

Q = diag([0.01², 0.01², 1², 100²]) - just about right

53 / 64


Unknown Input Estimation: Rowing Machine

How can we improve our results?

We have modeled F as a noisy constant: Ḟ = w

Instead we can model F as a ramp:

Ḟ = Fv + w1
Ḟv = w2

This increases the number of states by 1 but gives the Kalman filter more flexibility to estimate a value for F that matches the measurements

RMS force estimation error decreases from 0.8 N to 0.4 N

54 / 64


Outline

Linear Systems

Probability

State Means and Covariances

Least Squares Estimation

The Kalman Filter

Unknown Input Estimation

The Extended Kalman Filter

55 / 64


Nonlinear Kalman Filtering

Nonlinear system:

ẋ = f(x, u, w, t)
y = h(x, v, t)
w ∼ (0, Q)
v ∼ (0, R)

Linearization about a nominal point (x0, u0, w0, v0):

ẋ ≈ f(x0, u0, w0, t) + ∂f/∂x|0 (x − x0) + ∂f/∂u|0 (u − u0) + ∂f/∂w|0 (w − w0)
  = f(x0, u0, w0, t) + A∆x + B∆u + L∆w

y ≈ h(x0, v0, t) + ∂h/∂x|0 (x − x0) + ∂h/∂v|0 (v − v0)
  = h(x0, v0, t) + C∆x + M∆v

56 / 64


Nonlinear Kalman Filtering

ẋ0 = f(x0, u0, w0, t)
y0 = h(x0, v0, t)
∆x = x − x0
∆y = y − y0

d(∆x)/dt = A∆x + Lw = A∆x + w̃, w̃ ∼ (0, Q̃), Q̃ = LQL^T
∆y = C∆x + Mv = C∆x + ṽ, ṽ ∼ (0, R̃), R̃ = MRM^T

We have a linear system with state ∆x and measurement ∆y

57 / 64


The Linearized Kalman Filter

System equations:

ẋ = f(x, u, w, t), w ∼ (0, Q)
y = h(x, v, t), v ∼ (0, R)

Nominal trajectory:

ẋ0 = f(x0, u0, 0, t), y0 = h(x0, 0, t)

Compute partial derivative matrices:

A = ∂f/∂x|0, L = ∂f/∂w|0, C = ∂h/∂x|0, M = ∂h/∂v|0

Compute Q̃ = LQL^T, R̃ = MRM^T, ∆y = y − y0

Kalman filter equations:

∆x̂(0) = 0, P(0) = E[(∆x(0) − ∆x̂(0))(∆x(0) − ∆x̂(0))^T]
d(∆x̂)/dt = A∆x̂ + K(∆y − C∆x̂), K = PC^T R̃^−1
Ṗ = AP + PA^T + Q̃ − PC^T R̃^−1 CP
x̂ = x0 + ∆x̂

58 / 64


The Extended Kalman Filter

Combine the ẋ0 and d(∆x̂)/dt equations:

ẋ0 + d(∆x̂)/dt = f(x0, u0, w0, t) + A∆x̂ + K[y − y0 − C(x̂ − x0)]

Choose x0(t) = x̂(t), so that ∆x̂(t) = 0 and d(∆x̂)/dt = 0

Then the nominal measurement becomes

y0 = h(x0, v0, t) = h(x̂, v0, t)

and the first equation above becomes

dx̂/dt = f(x̂, u, w0, t) + K[y − h(x̂, v0, t)]

59 / 64


The Extended Kalman Filter

System equations:

ẋ = f(x, u, w, t), w ∼ (0, Q)
y = h(x, v, t), v ∼ (0, R)

Compute partial derivative matrices:

A = ∂f/∂x|x̂, L = ∂f/∂w|x̂, C = ∂h/∂x|x̂, M = ∂h/∂v|x̂

Compute Q̃ = LQL^T, R̃ = MRM^T

Kalman filter equations:

x̂(0) = E[x(0)], P(0) = E[(x(0) − x̂(0))(x(0) − x̂(0))^T]
dx̂/dt = f(x̂, u, 0, t) + K[y − h(x̂, 0, t)], K = PC^T R̃^−1
Ṗ = AP + PA^T + Q̃ − PC^T R̃^−1 CP
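A minimal Euler-integrated sketch of these equations in plain Python, for a made-up scalar system ẋ = −x³ with direct measurement y = x; noise is set to zero in the simulation, so the estimate should converge to the true state:

```python
# Scalar nonlinear system xdot = -x^3, measurement y = x (made-up example)
f = lambda x: -x**3
h = lambda x: x
dfdx = lambda x: -3 * x**2       # A = df/dx evaluated at xhat
dhdx = lambda x: 1.0             # C = dh/dx

Q, R = 0.01, 0.04                # made-up noise intensities for the filter
dt = 0.001

x = 1.0                          # true state
xhat, P = 0.2, 1.0               # poor initial estimate, large initial covariance

for _ in range(10000):
    x += f(x) * dt               # simulate the (noise-free) system
    y = h(x)                     # noise-free measurement

    A = dfdx(xhat)
    C = dhdx(xhat)
    K = P * C / R                                      # K = P C^T R^-1 (scalar)
    xhat += (f(xhat) + K * (y - h(xhat))) * dt         # EKF state equation
    P += (A * P + P * A + Q - P * C / R * C * P) * dt  # Riccati equation

assert abs(xhat - x) < 0.01
```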

60 / 64


Robot State Estimation

Robot dynamics:

u = Mq̈ + Cq̇ + g + R + F

u = control forces/torques, q = joint coordinates
M(q) = mass matrix, C(q, q̇) = Coriolis matrix
g(q) = gravity vector, R(q̇) = friction vector
F(q) = external forces/torques

State space model:

q̈ = M^−1(u − Cq̇ − g − R − F)

x = [q1 q2 q3 q̇1 q̇2 q̇3]^T = [x1^T x2^T]^T

ẋ = [x2; M^−1(u − Cq̇ − g − R − F)] = f(x, u, w, t)

y = q3 = [0 0 1 0 0 0]x + v = h(x, v, t)

The detailed model is available at:
www.sciencedirect.com/science/article/pii/S0307904X14003096
dynamicsystems.asmedigitalcollection.asme.org/article.aspx?articleid=1809665

61 / 64


Robot State Estimation

System equations:

ẋ = f(x, u, w, t), w ∼ (0, Q)
y = h(x, v, t), v ∼ (0, R)

Compute partial derivative matrices:

A = ∂f/∂x|x̂, L = ∂f/∂w|x̂, C = ∂h/∂x|x̂, M = ∂h/∂v|x̂

Compute Q̃ = LQL^T, R̃ = MRM^T

Kalman filter equations:

x̂(0) = E[x(0)], P(0) = E[(x(0) − x̂(0))(x(0) − x̂(0))^T]
dx̂/dt = f(x̂, u, 0, t) + K[y − h(x̂, 0, t)], K = PC^T R̃^−1
Ṗ = AP + PA^T + Q̃ − PC^T R̃^−1 CP

62 / 64


Robot State Estimation: Robot.zip

First we write a simulation for the dynamic system model: simGRF.mdl and statederCCFforce.m

Then we write a controller: PBimpedanceCCFfull.m

Then we calculate the A matrix: CalcFMatrix.m and EvaluateFMatrix.m

Then we write a Kalman filter: zhatdot.m

Run the program:

Run setup.m
Run simGRF.mdl

Look at the output plots:

Run plotter.m to see control performance
Open the "Plotting - 1 meas" block to see estimator performance
Open the "q1, q1hat" scope to see hip position
Open the "q2, q2hat" scope to see thigh angle
Open the "q3meas, q3, q3hat" scope to see knee angle

63 / 64


Additional Estimation Topics

Nonlinear estimation:

Iterated EKF
Second-order EKF
Unscented Kalman filter
Particle filter
Many others ...

Parameter estimation

Smoothing

Adaptive filtering

Robust filtering (H∞ filtering)

Constrained filtering

64 / 64