Feedback Particle Filter and its Applications to Neuroscience
3rd IFAC Workshop on Distributed Estimation and Control in Networked Systems
Santa Barbara, Sep 14-15, 2012
Prashant G. Mehta
Department of Mechanical Science and Engineering and the Coordinated Science Laboratory
University of Illinois at Urbana-Champaign
Research supported by NSF and AFOSR
Background

Bayesian Inference/Filtering
Mathematics of prediction: Bayes' rule

Signal (hidden): X ∼ P(X) (prior, known)
Observation: Y (known)
Observation model: P(Y|X) (known)
Problem: What is X?

Solution
Bayes' rule: P(X|Y) ∝ P(Y|X) P(X), i.e. Posterior ∝ Likelihood × Prior

This talk is about implementing Bayes' rule in dynamic, nonlinear, non-Gaussian settings!
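As a concrete instance of the rule above, here is a minimal discrete Bayes update in Python; the coin-bias example and all numbers are assumed for illustration only:

```python
import numpy as np

# Discrete Bayes' rule: posterior ∝ likelihood × prior (toy coin-bias example)
theta = np.linspace(0.01, 0.99, 99)              # hidden quantity X: coin bias
prior = np.ones_like(theta) / theta.size         # uniform prior P(X)
heads, tosses = 7, 10                            # observation Y
likelihood = theta**heads * (1 - theta)**(tosses - heads)  # P(Y|X)
posterior = likelihood * prior                   # Bayes' rule, unnormalized
posterior /= posterior.sum()                     # normalize

print("MAP estimate:", theta[posterior.argmax()])  # ≈ 0.7
```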
Background

Applications
Engineering applications

Filtering is important to:
Air moving target indicator (AMTI) systems, space situational awareness
Remote sensing and surveillance: air traffic management, weather surveillance, geophysical surveys
Autonomous navigation & robotics: simultaneous localization and map building (SLAM)
Background

Applications in Biology
Bayesian model of sensory signal processing
Part I
Theory: Nonlinear Filtering
Nonlinear Filtering

Nonlinear Filtering
Mathematical Problem

Signal model: dX_t = a(X_t) dt + dB_t,  X_0 ∼ p*_0(·)
Observation model: dZ_t = h(X_t) dt + dW_t
Problem: What is X_t, given the observations up to time t, denoted Z_t?
Answer in terms of the posterior: P(X_t | Z_t) =: p*(x, t).

The posterior is an information state:
P(X_t ∈ A | Z_t) = ∫_A p*(x, t) dx
E(X_t | Z_t) = ∫_R x p*(x, t) dx
Nonlinear Filtering

Pretty Formulae in Mathematics
More often than not, these are simply stated

Euler's identity: e^{iπ} = −1
Euler's polyhedron formula: v − e + f = 2
Pythagorean theorem: x² + y² = z²

Kenneth Chang, "What Makes an Equation Beautiful", The New York Times, October 24, 2004
Nonlinear Filtering

Kalman filter
Solution in linear Gaussian settings

dX_t = αX_t dt + dB_t   (1)
dZ_t = γX_t dt + dW_t   (2)

Kalman filter: p* = N(X̂_t, Σ_t)
dX̂_t = αX̂_t dt + K (dZ_t − γX̂_t dt)   [update]

Observation: dZ_t = γX_t dt + dW_t
Prediction: dẐ_t = γX̂_t dt
Innovation error: dI_t = dZ_t − dẐ_t = dZ_t − γX̂_t dt
Control: dU_t = K dI_t
Gain: Kalman gain

R. E. Kalman, Trans. ASME, Ser. D: J. Basic Eng., 1961
Nonlinear Filtering

Kalman filter

dX̂_t = αX̂_t dt [prediction] + K (dZ_t − γX̂_t dt) [update]

This illustrates the key features of feedback control:
1. Use error to obtain control (dU_t = K dI_t)
2. Negative gain feedback serves to reduce error (K = (γ/σ_W²) Σ_t, where γ/σ_W² is the SNR)

Simple enough to be included in the first undergraduate course on control
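The prediction/update structure above can be sketched as a discrete-time simulation. The parameters α, γ, σ_W below are assumed toy values, with σ_B = 1:

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear model: dX = αX dt + dB,  dZ = γX dt + σ_W dW  (assumed toy parameters)
alpha, gamma, sW = -0.5, 1.0, 0.2
dt, T = 1e-3, 5.0

X = 1.0                       # hidden state
Xhat, Sigma = 0.0, 1.0        # filter mean and variance
for _ in range(int(T / dt)):
    X += alpha * X * dt + np.sqrt(dt) * rng.normal()
    dZ = gamma * X * dt + sW * np.sqrt(dt) * rng.normal()

    K = gamma * Sigma / sW**2                     # Kalman gain = SNR * Sigma
    dI = dZ - gamma * Xhat * dt                   # innovation error
    Xhat += alpha * Xhat * dt + K * dI            # prediction + update
    Sigma += (2 * alpha * Sigma + 1.0 - (gamma * Sigma / sW)**2) * dt  # Riccati equation

print(f"X = {X:.3f}, Xhat = {Xhat:.3f}")
```

The variance update is the continuous-time Riccati equation dΣ = (2αΣ + σ_B² − γ²Σ²/σ_W²) dt, discretized with the same Euler step as the state.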
Nonlinear Filtering

Filtering Problem
Nonlinear Model: Kushner-Stratonovich PDE

Signal & observations:
dX_t = a(X_t) dt + σ_B dB_t   (1)
dZ_t = h(X_t) dt + σ_W dW_t   (2)

The posterior distribution p* is a solution of a stochastic PDE:
dp* = L†(p*) dt + (1/σ_W²)(h − h̄)(dZ_t − h̄ dt) p*
where h̄ = E[h(X_t) | Z_t] = ∫ h(x) p*(x, t) dx
and L†(p*) = −∂(p* · a(x))/∂x + (σ_B²/2) ∂²p*/∂x²

No closed-form solution in general. Closure problem.

R. L. Stratonovich, SIAM Theory Probab. Appl., 1960. H. J. Kushner, SIAM J. Control, 1964
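Although the SPDE has no closed-form solution in general, it can be integrated numerically on a grid for a scalar toy model. The choices a(x) = −x and h(x) = x below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance of the model (assumed choices): a(x) = -x, h(x) = x
a = lambda x: -x
h = lambda x: x
sB, sW = 1.0, 0.3
dt, T = 1e-3, 1.0

# Grid for the posterior density p*(x, t)
x = np.linspace(-4.0, 4.0, 201)
dx = x[1] - x[0]
p = np.exp(-x**2 / 2.0)
p /= p.sum() * dx                      # Gaussian prior, normalized on the grid

X = rng.normal()                       # hidden signal
for _ in range(int(T / dt)):
    X += a(X) * dt + sB * np.sqrt(dt) * rng.normal()
    dZ = h(X) * dt + sW * np.sqrt(dt) * rng.normal()

    # L†(p) = -(a p)' + (sB²/2) p''  via finite differences
    Lp = -np.gradient(a(x) * p, dx) + 0.5 * sB**2 * np.gradient(np.gradient(p, dx), dx)
    hbar = np.sum(h(x) * p) * dx
    # Euler-Maruyama step of the Kushner-Stratonovich SPDE
    p = p + Lp * dt + (h(x) - hbar) * (dZ - hbar * dt) * p / sW**2
    p = np.clip(p, 0.0, None)
    p /= p.sum() * dx                  # keep a valid probability density

print(f"posterior mean {np.sum(x * p) * dx:.3f}, true state {X:.3f}")
```

Grid-based integration like this scales exponentially with the state dimension, which is one motivation for particle methods.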
Nonlinear Filtering

Particle Filter
An algorithm to solve the nonlinear filtering problem

Approximate the posterior in terms of particles: p*(x, t) ≈ (1/N) ∑_{i=1}^N δ_{X^i_t}(x)

Algorithm outline:
1. Initialization at time 0: X^i_0 ∼ p*_0(·)
2. At each discrete time step:
   Importance sampling (Bayes update step)
   Resampling (for variance reduction)

e.g. dZ_t = X_t dt + small noise
Innovation error, feedback? And most importantly, is this pretty?
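The outlined algorithm can be sketched as a bootstrap particle filter. The discrete-time toy model below (sinusoidal drift, linear observation) is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy discrete-time model (assumed): X advances by Euler step, Y = h(X) + noise
a = lambda x: np.sin(x)
h = lambda x: x
dt, sB, sW, N = 0.05, 0.5, 0.5, 1000

particles = rng.normal(size=N)         # X^i_0 ~ p*_0
X = 0.5                                # hidden state
for _ in range(100):
    X += a(X) * dt + sB * np.sqrt(dt) * rng.normal()
    Y = h(X) + sW * rng.normal()

    # 1. importance sampling: propagate particles, weight by likelihood P(Y|X^i)
    particles = particles + a(particles) * dt + sB * np.sqrt(dt) * rng.normal(size=N)
    w = np.exp(-0.5 * ((Y - h(particles)) / sW) ** 2)
    w /= w.sum()
    # 2. resampling (multinomial) for variance reduction
    particles = rng.choice(particles, size=N, p=w)

print(f"truth {X:.3f}, estimate {particles.mean():.3f}")
```

Note that the particles interact only through the weight-and-resample step; there is no innovation error or gain feedback in this construction, which is the contrast the talk draws.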
Control-Oriented Approach to Particle Filtering

Research goal: Bringing pretty back!

[Figure: MSE vs. N (number of particles), comparing Bootstrap (BPF) and Feedback (FPF)]
Control-Oriented Approach to Particle Filtering

Feedback Particle Filter

Signal & observations:
dX_t = a(X_t) dt + σ_B dB_t   (1)
dZ_t = h(X_t) dt + σ_W dW_t   (2)

Controlled system (N particles):
dX^i_t = a(X^i_t) dt + σ_B dB^i_t + dU^i_t,   i = 1, ..., N   (3)
where dU^i_t is a mean-field control and {B^i_t}_{i=1}^N are independent standard Wiener processes.

Objective: Choose the control U^i_t, as a function of the history {Z_s, X^i_s : 0 ≤ s ≤ t}, such that the two posteriors coincide:
∫_{x∈A} p*(x, t) dx = P{X_t ∈ A | Z_t}
∫_{x∈A} p(x, t) dx = P{X^i_t ∈ A | Z_t}

Motivation: Work of Huang, Caines and Malhamé on mean-field games (IEEE TAC 2007)
Control-Oriented Approach to Particle Filtering

FPF Solution
Linear model

Controlled system: for i = 1, ..., N:
dX^i_t = αX^i_t dt + σ_B dB^i_t [prediction] + K [dZ_t − γ (X^i_t + μ_t)/2 dt] [update, via mean-field control]   (3)
Control-Oriented Approach to Particle Filtering

FPF Update Steps
Linear model

                   Feedback particle filter                  Kalman filter
Observation:       dZ_t = γX_t dt + σ_W dW_t                 dZ_t = γX_t dt + σ_W dW_t
Prediction:        dẐ^i_t = (γX^i_t + γμ_t)/2 dt             dẐ_t = γX̂_t dt
Innovation error:  dI^i_t = dZ_t − dẐ^i_t                    dI_t = dZ_t − dẐ_t
                        = dZ_t − γ(X^i_t + μ_t)/2 dt             = dZ_t − γX̂_t dt
Control:           dU^i_t = K dI^i_t                         dU_t = K dI_t
Gain:              K is the Kalman gain                      K is the Kalman gain
Control-Oriented Approach to Particle Filtering

Linear Feedback Particle Filter
Mean-field model is the Kalman filter

Feedback particle filter:
dX^i_t = αX^i_t dt + σ_B dB^i_t + K [dZ_t − (γ/2)(X^i_t + (1/N) ∑_{j=1}^N X^j_t) dt]   (3)
X^i_0 ∼ p*(x, 0) = N(μ(0), Σ(0))

Mean-field model: the Kalman filter! Let p denote the conditional distribution of X^i_t given Z_t. Then p = N(μ_t, Σ_t), where
dμ_t = αμ_t dt + (γΣ_t/σ_W²)(dZ_t − γμ_t dt)
dΣ_t = (2αΣ_t + σ_B² − γ²Σ_t²/σ_W²) dt

As N → ∞, the empirical distribution approximates the posterior p*.
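The linear FPF above admits a direct simulation in which the gain is computed from the particles' empirical variance; the parameters below are assumed toy values:

```python
import numpy as np

rng = np.random.default_rng(3)

# Linear model: dX = αX dt + σ_B dB,  dZ = γX dt + σ_W dW  (assumed toy parameters)
alpha, gamma, sB, sW = -0.5, 1.0, 1.0, 0.3
dt, N = 1e-3, 500

Xp = rng.normal(size=N)        # particles X^i_0 ~ N(0, 1)
X = 1.0                        # hidden state
for _ in range(5000):
    X += alpha * X * dt + sB * np.sqrt(dt) * rng.normal()
    dZ = gamma * X * dt + sW * np.sqrt(dt) * rng.normal()

    mu = Xp.mean()
    Sigma = Xp.var()
    K = gamma * Sigma / sW**2                    # gain from the empirical variance
    dI = dZ - 0.5 * gamma * (Xp + mu) * dt       # particle innovation error
    Xp += alpha * Xp * dt + sB * np.sqrt(dt) * rng.normal(size=N) + K * dI

print(f"truth {X:.3f}, FPF mean {Xp.mean():.3f}")
```

Unlike the bootstrap filter, every particle is driven by the same innovation-feedback structure and no resampling step is needed.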
Control-Oriented Approach to Particle Filtering

Variance Reduction
Filtering for a simple linear model

Mean-square error: (1/T) ∫_0^T ((Σ^(N)_t − Σ_t)/Σ_t)² dt

[Figure: MSE vs. N (number of particles), comparing Bootstrap (BPF) and Feedback (FPF)]
Feedback Particle Filter

Methodology: Variational Formulation
How do we derive the feedback particle filter?

Time-stepping procedure:

Signal, observation process:
dX_t = a(X_t) dt + σ_B dB_t
Z_{t_n} = h(X_{t_n}) + W_{t_n}

Feedback particle filter:
Filter: dX^i_t = a(X^i_t) dt + σ_B dB^i_t
Control: X^i_{t_n} = X^i_{t_n⁻} + u(X^i_{t_n⁻})   [control]

Conditional distributions:
p*_n(·): conditional pdf of X_t | Z_t
p_n(·; u): conditional pdf of X^i_t | Z_t

Variational problem: min_u D(p_n(u) ‖ p*_n)

As Δt → 0:
The optimal control, u = u°, yields the feedback particle filter.
The nonlinear filter is the gradient flow and u° is the optimal transport.
Feedback Particle Filter

Feedback Particle Filter
Filtering in nonlinear non-Gaussian settings

Signal model: dX_t = a(X_t) dt + dB_t,  X_0 ∼ p*_0(·)
Observation model: dZ_t = h(X_t) dt + dW_t

FPF: dX^i_t = a(X^i_t) dt + dB^i_t + K(X^i_t) ∘ dI^i_t   [update]
Innovations: dI^i_t := dZ_t − (1/2)(h(X^i_t) + h̄) dt, with conditional mean h̄ = ⟨p, h⟩.
Feedback Particle Filter

Update Step
How does the feedback particle filter implement Bayes' rule?

                   Feedback particle filter                  Linear case
Observation:       dZ_t = h(X_t) dt + dW_t                   dZ_t = γX_t dt + dW_t
Prediction:        dẐ^i_t = (h(X^i_t) + h̄)/2 dt              dẐ^i_t = γ(X^i_t + μ_t)/2 dt
                   h̄ = (1/N) ∑_{i=1}^N h(X^i_t)
Innovation error:  dI^i_t = dZ_t − dẐ^i_t                    dI^i_t = dZ_t − dẐ^i_t
                        = dZ_t − (h(X^i_t) + h̄)/2 dt             = dZ_t − γ(X^i_t + μ_t)/2 dt
Control:           dU^i_t = K(X^i_t) ∘ dI^i_t                dU^i_t = K(X^i_t) ∘ dI^i_t
Gain:              K is a solution of a linear BVP           K is the Kalman gain
Feedback Particle Filter

Boundary Value Problem
Euler-Lagrange equation for the variational problem

Multi-dimensional boundary value problem:
∇ · (K p) = −(h − h̄) p
solved at each time-step.

[Figure: gain function K in the linear case (constant) and the nonlinear case (state-dependent)]
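In one dimension the BVP can be solved by integrating from the left boundary (where Kp → 0): K(x) = −(1/p(x)) ∫_{−∞}^x (h(y) − h̄) p(y) dy. A numerical sketch, with a Gaussian p and linear h assumed, in which case the formula recovers the constant gain K = γΣ:

```python
import numpy as np

# Grid, density, and observation function (assumed toy choices)
x = np.linspace(-6.0, 6.0, 2001)
dx = x[1] - x[0]
p = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)   # standard Gaussian, Sigma = 1
gamma = 2.0
h = gamma * x
hbar = np.sum(h * p) * dx                      # conditional mean of h

# Integrate d(Kp)/dx = -(h - hbar) p from the left boundary
Kp = -np.cumsum((h - hbar) * p) * dx
K = Kp / np.maximum(p, 1e-300)                 # guard against division at the tails

# Gaussian p with h = gamma*x gives the constant gain K = gamma * Sigma
print(K[1000])                                 # value at x = 0, expect ~ 2.0
```

In the nonlinear/non-Gaussian case the same integration gives a state-dependent gain K(x), which is what distinguishes the FPF update from the constant-gain Kalman update.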
Feedback Particle Filter

Consistency
Feedback particle filter is exact

p*: conditional pdf of X_t given Z_t,
dp* = L†(p*) dt + (h − h̄)(σ_W²)⁻¹ (dZ_t − h̄ dt) p*

p: conditional pdf of X^i_t given Z_t,
dp = L†(p) dt − ∂(Kp)/∂x dZ_t − ∂(up)/∂x dt + (σ_W²/2) ∂²(pK²)/∂x² dt

Consistency Theorem
Consider the two evolution equations for p and p*. Provided the FPF is initialized with p(x, 0) = p*(x, 0), then
p(x, t) = p*(x, t) for all t ≥ 0
Feedback Particle Filter

Kalman Filter:
Innovation error: dI_t = dZ_t − h(X̂_t) dt
Gain function: K = Kalman gain

Feedback Particle Filter:
Innovation error: dI^i_t = dZ_t − (1/2)(h(X^i_t) + h̄_t) dt
Gain function: K is the solution of a linear BVP.
Part II
Neural Rhythms, Bayesian Inference
Oscillators in Biology

Normal Form Reduction
Derivation of oscillator model

C dV/dt = −g_T · m²_∞(V) · h · (V − E_T) − g_h · r · (V − E_h) − ...
dh/dt = (h_∞(V) − h)/τ_h(V)
dr/dt = (r_∞(V) − r)/τ_r(V)

Normal form reduction →
dθ_i(t) = ω_i dt + u_i(t) · Φ(θ_i(t)) dt

J. Guckenheimer, J. Math. Biol., 1975; J. Moehlis et al., Neural Computation, 2004
Oscillators in Biology

Collective Dynamics of a Large Number of Oscillators
Synchrony, neural rhythms
Oscillators in Biology

Functional Role of Neural Rhythms
Is synchronization useful? Does it have a functional role?

Books/review papers: Buzsáki, Destexhe, Ermentrout, Izhikevich, Kopell, Traub and Whittington (2009), Llinás and Ribary (2001), Pareti and Palma (2004), Sejnowski and Paulsen (2006), Singer (1993), ...

Computations: computing with intrinsic network states — Destexhe and Contreras (2006); Izhikevich (2006); Zhang and Ballard (2001)

Synaptic plasticity: neurons that fire together wire together

And several other hypotheses: communication and information flow (Laughlin and Sejnowski); binding by synchrony (Singer); memory formation (Jutras and Fries); probabilistic decision making (Wang); stimulus competition and attention selection (Kopell); sleep/wakefulness/disease (Steriade)
Oscillators in Biology
Prediction: Brain as a reality emulator

“[Prediction] is the primary function of the neocortex, and the foundation of intelligence. If we want to understand how your brain works, and how to build intelligent machines, we must understand the nature of these predictions and how the cortex makes them.”

“The capacity to predict the outcome of future events – critical to successful movement – is, most likely, the ultimate and most common of all brain functions.”
Oscillators in Biology
Filtering in Brain? Bayesian model of sensory signal processing

Theory:
Lee and Mumford, Hierarchical Bayesian inference framework (2003)
Rao; Rao and Ballard; Rao and Sejnowski. Predictive coding framework (2002)
Dayan, Hinton, Neal and Zemel. The Helmholtz machine (1995)
Ma, Beck, Latham and Pouget. Probabilistic population codes (2006)
Kording and Wolpert. Bayesian decision theory (2006)

And others: see
Doya, Ishii, Pouget and Rao, Bayesian Brain, MIT Press (2007)
Rao, Olshausen & Lewicki, Probabilistic Models of the Brain, MIT Press (2002)
Oscillators in Biology
Filtering in Brain? Bayesian model of sensory signal processing

Experiments (see reviews):
Gold & Shadlen, The neural basis of decision making, Ann. Rev. of Neurosci. (2007)
R. T. Knight, Neural networks debunk phrenology, Science (2007)

Such theories naturally feed into computer vision and, more generally, into how to make computers “intelligent”.
Oscillators in Biology
Bayesian Inference in Neuroscience: Lee and Mumford’s hierarchical Bayesian inference framework

. . . Bayes’ rule   Bayes’ rule   Bayes’ rule

Similar ideas also appear in:
1. Dayan, Hinton, Neal and Zemel. The Helmholtz machine (1995)
2. Lewicki and Sejnowski. Bayesian unsupervised learning (1995)
3. Rao and Ballard; Rao and Sejnowski. Predictive coding framework (1999; 2002)
With each Bayes’ rule block implemented by a particle filter:

. . . Part. Filter   Part. Filter   Part. Filter
Part III
Application: Filtering with Rhythms
Gait Cycle: Biological Rhythm
Application: Ankle-foot Orthoses: Estimation of gait cycle using sensor measurements

Ankle-foot orthoses (AFOs): for lower-limb neuromuscular impairments.
Provides dorsiflexor (toe lift) and plantarflexor (toe push) torque assistance.
Sensors: heel, toe, and ankle joint.

AFO system components: power supply, compressed CO2, solenoid valves (control the flow of CO2 to the actuator), actuator, sensors.

Acknowledgement: Professor Liz Hsiao-Wecksler for sharing the AFO device picture and sensor data.
Gait Cycle: Signal model

Stance phase / Swing phase

Model (noisy oscillator):  dθ_t = ω_0 dt + noise   (ω_0: natural frequency)
Problem: Estimate the Gait Cycle θ_t
Sensor model

Observation model:  dZ_t = h(θ_t) dt + noise

Problem: What is θ_t given noisy observations?
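The signal and observation models can be simulated together with an Euler–Maruyama scheme; a minimal Python sketch, where the choice h = cos, the noise intensities, and all names and parameter values are illustrative assumptions:

```python
import numpy as np

def simulate_gait(omega0=2 * np.pi, sigma_b=0.3, sigma_w=0.1,
                  dt=1e-3, T=1.0, h=np.cos, rng=None):
    """Euler-Maruyama simulation of the noisy phase oscillator
    dtheta = omega0 dt + sigma_b dB (mod 2*pi) together with the
    observation increments dZ = h(theta) dt + sigma_w dW."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = int(round(T / dt))
    theta = np.empty(n)
    dZ = np.empty(n)
    th = 0.0
    for k in range(n):
        th = (th + omega0 * dt
              + sigma_b * np.sqrt(dt) * rng.standard_normal()) % (2 * np.pi)
        theta[k] = th
        dZ[k] = h(th) * dt + sigma_w * np.sqrt(dt) * rng.standard_normal()
    return theta, dZ
```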
Solution: Particle Filter
Algorithm to approximate the posterior distribution

“Large number of oscillators”

Posterior distribution:
P(φ1 < θ_t < φ2 | sensor readings) = fraction of the θ_t^i in the interval (φ1, φ2)

Circuit:
dθ_t^i = ω_i dt + noise^i + dU_t^i,   i = 1, ..., N
(ω_i: natural frequency of the i-th oscillator;  U_t^i: mean-field control)

Feedback Particle Filter: design the control law U_t^i
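The particle approximation of the posterior above amounts to counting particles in an interval; a minimal sketch (the function and variable names are illustrative):

```python
import numpy as np

def posterior_prob(particles, phi1, phi2):
    """Approximate P(phi1 < theta_t < phi2 | observations) as the
    fraction of particles whose phase lies in (phi1, phi2)."""
    th = np.mod(particles, 2 * np.pi)   # wrap phases to [0, 2*pi)
    return np.mean((th > phi1) & (th < phi2))

# Example: 4 particles, two of which lie in (0, pi)
p = posterior_prob(np.array([0.5, 1.0, 4.0, 5.0]), 0.0, np.pi)  # p == 0.5
```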
Filtering for Oscillators

Signal & Observations:
dθ_t = ω dt + dB_t   (mod 2π)
dZ_t = h(θ_t) dt + dW_t

Particle evolution:
dθ_t^i = ω_i dt + dB_t^i + K(θ_t^i) ◦ [dZ_t − (1/2)(h(θ_t^i) + ĥ) dt]   (mod 2π),   i = 1, ..., N,

where ω_i is sampled from a distribution and ◦ denotes the Stratonovich integral.
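The oscillator filter above can be sketched end to end. Here the gain K, defined as the solution of a linear BVP, is approximated by a Galerkin projection onto the Fourier basis {sin θ, cos θ} (one of the approximations used in the FPF literature), and the gain term is discretized with a plain Euler step rather than a Stratonovich-consistent scheme; this is an illustrative sketch under those assumptions, not the authors' exact implementation, and all names and parameter values are illustrative:

```python
import numpy as np

def galerkin_gain(theta, h_vals):
    """Galerkin approximation of the gain K on the circle with basis
    psi = {sin, cos}: solve A kappa = b, where A_lk = E[psi_l' psi_k']
    and b_l = E[(h - h_bar) psi_l]; return K(theta^i) = sum_k kappa_k psi_k'."""
    h_bar = h_vals.mean()
    dpsi = np.stack([np.cos(theta), -np.sin(theta)])   # derivatives of sin, cos
    psi = np.stack([np.sin(theta), np.cos(theta)])
    A = dpsi @ dpsi.T / theta.size
    b = psi @ (h_vals - h_bar) / theta.size
    kappa = np.linalg.solve(A, b)
    return dpsi.T @ kappa                              # gain at each particle

def fpf_step(theta, omega, dZ, dt, h=np.cos, sigma_b=0.3, rng=None):
    """One Euler step of the feedback particle filter for oscillators:
    dtheta^i = omega_i dt + dB^i + K(theta^i) * (dZ - 0.5*(h(theta^i)+h_bar) dt)."""
    rng = np.random.default_rng() if rng is None else rng
    h_vals = h(theta)
    K = galerkin_gain(theta, h_vals)
    dI = dZ - 0.5 * (h_vals + h_vals.mean()) * dt      # innovation error
    dB = sigma_b * np.sqrt(dt) * rng.standard_normal(theta.size)
    return (theta + omega * dt + dB + K * dI) % (2 * np.pi)
```

For a uniform particle population and h = cos, the Galerkin gain reduces to K(θ) ≈ −sin θ, which matches the exact BVP solution in that special case.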
Simulation Results: Solution of the Gait Cycle Estimation Problem

[Movie: simulation of the gait-cycle estimate; omitted from transcript]
Filtering of Biological Rhythms with Brain Rhythms: Connection to Lee and Mumford’s hierarchical Bayesian inference framework

[Diagram: Lee and Mumford’s cascade of Bayes’ rule blocks, each implemented as a particle filter. “Mumford’s box with neurons” is reduced, via normal form reduction, to “Mumford’s box with oscillators”; noisy measurements of the rhythmic movement enter the oscillator box, which produces the estimate, with the prior supplied by the higher level.]
Acknowledgement
Adam Tilton, Tao Yang, Huibing Yin, Liz Hsiao-Wecksler, Sean Meyn

1. T. Yang, P. G. Mehta, and S. P. Meyn. Feedback particle filter with mean-field coupling. In Procs. of IEEE Conf. on Decision and Control, December 2011.
2. T. Yang, P. G. Mehta, and S. P. Meyn. A mean-field control-oriented approach to particle filtering. In Procs. of American Control Conference, June 2011.
3. A. Tilton, E. Hsiao-Wecksler, and P. G. Mehta. Filtering with rhythms: Application to estimation of gait cycle. In Procs. of American Control Conference, 2012.
4. T. Yang, G. Huang, and P. G. Mehta. Joint probabilistic data association-feedback particle filter with applications to multiple target tracking. In Procs. of American Control Conference, 2012.
5. A. Tilton, T. Yang, H. Yin, and P. G. Mehta. Feedback particle filter-based multiple target tracking using bearing-only measurements. In Procs. of Information Fusion, 2012.
6. T. Yang, R. Laugesen, P. G. Mehta, and S. P. Meyn. Multivariable feedback particle filter. To appear in IEEE Conf. on Decision and Control, 2012.
7. T. Yang, P. G. Mehta, and S. P. Meyn. Feedback particle filter. Conditionally accepted to IEEE Transactions on Automatic Control.