

HCI / CprE / ComS 575:

Computational Perception

Instructor: Alexander Stoytchev
http://www.ece.iastate.edu/~alexs/classes/2010_Spring_575/

Particle Filters

HCI/ComS 575X: Computational Perception
Iowa State University
Copyright © Alexander Stoytchev

Sebastian Thrun, Wolfram Burgard, and Dieter Fox (2005). Probabilistic Robotics. MIT Press.

F. Dellaert, D. Fox, W. Burgard, and S. Thrun (1999). "Monte Carlo Localization for Mobile Robots." IEEE International Conference on Robotics and Automation (ICRA '99), May 1999.

Ioannis Rekleitis (2004). A Particle Filter Tutorial for Mobile Robot Localization. Technical Report TR-CIM-04-02, Centre for Intelligent Machines, McGill University, Montreal, Quebec, Canada.

Wednesday

Next Week

• Preliminary Project Presentations

Represent belief by random samples

Estimation of non-Gaussian, nonlinear processes

Monte Carlo filter, Survival of the fittest, Condensation, Bootstrap filter, Particle filter

Filtering: [Rubin, 88], [Gordon et al., 93], [Kitagawa 96]

Computer vision: [Isard and Blake 96, 98]

Dynamic Bayesian Networks: [Kanazawa et al., 95]
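The idea of representing a belief by random samples can be sketched in a few lines. The following is an illustrative example (the bimodal mixture and its parameters are my own, not from the slides): a non-Gaussian belief is approximated by samples, and quantities such as the mean are estimated from them.

```python
import random

random.seed(0)

def sample_belief():
    # Bimodal belief: 0.5 * N(-2, 0.5^2) + 0.5 * N(3, 0.5^2)
    # (a Kalman filter could not represent this, a sample set can)
    if random.random() < 0.5:
        return random.gauss(-2.0, 0.5)
    return random.gauss(3.0, 0.5)

# The sample set *is* the belief representation
samples = [sample_belief() for _ in range(10000)]

# Any expectation can be estimated directly from the samples;
# the true mean of the mixture is 0.5 * (-2) + 0.5 * 3 = 0.5
mean_est = sum(samples) / len(samples)
```

No parametric form is ever written down; the density is carried entirely by the sample set, which is what makes the approach work for nonlinear, non-Gaussian processes.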

Particle Filters

Example

Using Ceiling Maps for Localization

Vision-based Localization

Sensor model: measurement z = h(x), with likelihood P(z|x)

Under a Light: measurement z and P(z|x)

Next to a Light: measurement z and P(z|x)

Elsewhere: measurement z and P(z|x)

Global Localization Using Vision

Sample-based Localization (sonar)

Example

Importance Sampling with Resampling: Landmark Detection Example

Distributions


Wanted: samples distributed according to p(x| z1, z2, z3)

This is Easy! We can draw samples from p(x|z_l) by adding noise to the detection parameters.
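The trick of sampling from p(x|z_l) by perturbing the detection parameters can be sketched as follows. All concrete numbers (landmark position, measured range and bearing, noise levels) are illustrative assumptions, not values from the slides:

```python
import math
import random

random.seed(1)

# Hypothetical landmark and detection: range r and bearing b (illustrative)
landmark = (5.0, 5.0)
r, b = 3.0, math.pi / 4
sigma_r, sigma_b = 0.1, 0.05   # assumed detection noise levels

def sample_pose_from_detection():
    # Perturb the detection parameters with noise ...
    rn = r + random.gauss(0.0, sigma_r)
    bn = b + random.gauss(0.0, sigma_b)
    # ... then invert the measurement geometry: poses consistent with the
    # detection lie on a circle of radius rn around the landmark; the robot
    # heading theta is unconstrained, so draw it uniformly
    theta = random.uniform(-math.pi, math.pi)
    x = landmark[0] - rn * math.cos(theta + bn)
    y = landmark[1] - rn * math.sin(theta + bn)
    return (x, y, theta)

poses = [sample_pose_from_detection() for _ in range(1000)]

# Sanity check: every sampled pose sits (approximately) at range r
dists = [math.hypot(px - landmark[0], py - landmark[1]) for px, py, _ in poses]
```

Each sample is a full robot pose drawn from (an approximation of) p(x|z_l), which is exactly the ring-shaped distribution shown in the landmark detection slides.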

Importance Sampling with Resampling

Weighted samples → After resampling

Quick review of Kalman Filters

Conditional density of position based on measured value of z1

[Maybeck (1979)]

[Figure: Gaussian density over position, centered at the measured position, with width indicating the uncertainty]

Conditional density of position based on measurement of z2 alone

[Maybeck (1979)]

[Figure: Gaussian density centered at measured position 2, with width indicating uncertainty 2]

Conditional density of position based on data z1 and z2

[Maybeck (1979)]

[Figure: combined Gaussian giving the position estimate and its reduced uncertainty estimate]
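The fusion of the two measurements illustrated above is the one-dimensional Kalman measurement update. A minimal sketch (the numeric values are illustrative):

```python
def fuse(mu1, var1, mu2, var2):
    """Combine two independent 1-D Gaussian position estimates (Kalman update)."""
    k = var1 / (var1 + var2)      # Kalman gain: how much to trust measurement 2
    mu = mu1 + k * (mu2 - mu1)    # fused mean lies between the two estimates
    var = (1.0 - k) * var1        # fused variance is smaller than either input
    return mu, var

# Equal confidence in both measurements -> midpoint, half the variance
mu, var = fuse(2.0, 1.0, 4.0, 1.0)
```

Note that the fused uncertainty is always smaller than either input uncertainty, which is exactly what the narrowing density in the figure shows.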

Propagation of the conditional density

[Maybeck (1979)]

[Figure: the density is shifted along the movement vector and flattened, giving the expected position just prior to taking measurement 3]
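The propagation (prediction) step shown in the figure is equally simple in one dimension: the movement shifts the mean, and motion noise inflates the variance. A sketch with illustrative numbers:

```python
def predict(mu, var, u, motion_var):
    """1-D Kalman prediction: shift by the movement u, add motion noise."""
    # Moving can only make us less certain, so the variance grows
    return mu + u, var + motion_var

# Start at 3.0 with variance 0.5, move 2.0 with motion noise variance 0.3
mu, var = predict(3.0, 0.5, u=2.0, motion_var=0.3)
```

This growth in variance is the "flattening" of the density visible in Maybeck's propagation figures.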

Propagation of the conditional density

[Figure: measurement z3 arrives, with measured position 3 and uncertainty 3 (σx(t3))]

Updating the conditional density after the third measurement

[Figure: updated density combining the prediction with z3, giving the position estimate x(t3) and a reduced position uncertainty]

Some Questions

• What if we don’t know the start position of the robot?

• What if somebody moves the robot without the robot’s knowledge?

Robot Odometry Errors

Raw range data, position indexed by odometry

[Thrun, Burgard & Fox (2005)]

Resulting Occupancy Grid Map

[Thrun, Burgard & Fox (2005)]

Basic Idea Behind Particle Filters

[Figure: 1-D density approximated by weighted particles along the x axis]

In 2D it looks like this

[http://www.ite.uni-karlsruhe.de/METZGER/DIPLOMARBEITEN/dipl2.html]

Robot Pose

Odometry Motion Model

Sampling From the Odometry Model

Motion Model

Motion Model

Velocity model for different noise parameters

Sampling from the velocity model
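Sampling from the odometry motion model can be sketched in the style of Thrun, Burgard & Fox (2005): the odometry reading is decomposed into an initial rotation, a translation, and a final rotation, and each component is perturbed with state-dependent noise. The noise parameters (a1–a4) below are illustrative assumptions:

```python
import math
import random

random.seed(2)

# Odometry noise parameters (illustrative, not from the slides)
a1, a2, a3, a4 = 0.05, 0.05, 0.05, 0.05

def sample_motion_model_odometry(pose, odom):
    """Sample a successor pose given odometry decomposed as (rot1, trans, rot2)."""
    x, y, theta = pose
    rot1, trans, rot2 = odom
    # Perturb each odometry component with zero-mean Gaussian noise whose
    # magnitude depends on how far the robot rotated and translated
    r1 = rot1 - random.gauss(0.0, math.sqrt(a1 * rot1**2 + a2 * trans**2))
    t = trans - random.gauss(0.0, math.sqrt(a3 * trans**2 + a4 * (rot1**2 + rot2**2)))
    r2 = rot2 - random.gauss(0.0, math.sqrt(a1 * rot2**2 + a2 * trans**2))
    # Apply the noisy motion to the old pose
    return (x + t * math.cos(theta + r1),
            y + t * math.sin(theta + r1),
            theta + r1 + r2)

# Draw many successors of the same start pose for the same odometry reading
samples = [sample_motion_model_odometry((0.0, 0.0, 0.0), (0.1, 1.0, 0.1))
           for _ in range(1000)]
mean_x = sum(s[0] for s in samples) / len(samples)
```

Plotting the samples would reproduce the banana-shaped pose clouds shown for the different noise-parameter settings.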

In-Class Demo of Particle Filters

Example

[Thrun, Burgard & Fox (2005)]

Initially we don't know the location of the robot, so we have particles everywhere

Next, the robot senses that it is near a door

Since there are 3 identical doors, the robot could be next to any one of them

Therefore, we grow the balls (particles) that are next to doors and shrink all the others

Before we continue, we have to make all balls equal in size: we need to resample.
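The "grow the balls near doors" step is just reweighting the particles by the measurement likelihood. A minimal 1-D sketch of the door example (corridor length, door positions, and the sensor's spread are illustrative assumptions):

```python
import math
import random

random.seed(3)

# Three identical doors along a 10 m corridor (positions are illustrative)
doors = [2.0, 5.0, 8.0]

def door_likelihood(x, sigma=0.3):
    # P(z = "I see a door" | x): high near any door, nearly zero elsewhere
    return max(math.exp(-0.5 * ((x - d) / sigma) ** 2) for d in doors)

# Global localization: particles spread uniformly over the corridor
particles = [random.uniform(0.0, 10.0) for _ in range(5000)]

# The robot senses a door: weight each particle by the likelihood, normalize
weights = [door_likelihood(x) for x in particles]
total = sum(weights)
weights = [w / total for w in weights]

# After weighting, almost all probability mass sits near one of the doors
near = sum(w for x, w in zip(particles, weights)
           if min(abs(x - d) for d in doors) < 1.0)
```

The result is the tri-modal belief from the slides: three clumps of heavy particles, one at each door, with the rest of the corridor carrying almost no weight.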

Resampling Rules


Resampling

• Given: a set S of weighted samples.

• Wanted: a random sample, where the probability of drawing x_i is given by w_i.

• Typically done n times with replacement to generate the new sample set S'.

[From Thrun's book "Probabilistic Robotics"]


Roulette Wheel Resampling

[Figure: roulette wheel whose slots have widths proportional to the weights w_1, w_2, …, w_n]

• Roulette wheel
– Binary search: O(n log n) per sample set

• Stochastic universal sampling (systematic resampling)
– Linear time complexity
– Easy to implement, low variance

[From Thrun's book "Probabilistic Robotics"]
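The roulette-wheel variant can be sketched directly: build the cumulative distribution of the weights once, then binary-search it for each of the n draws, giving the O(n log n) cost mentioned above. A minimal sketch:

```python
import bisect
import random

random.seed(4)

def roulette_resample(samples, weights, n):
    """Draw n samples with replacement; P(samples[i]) is proportional to weights[i]."""
    # Build the cumulative distribution (the "wheel") once ...
    cdf = []
    c = 0.0
    for w in weights:
        c += w
        cdf.append(c)
    # ... then spin independently for each draw: a uniform number selects the
    # slot it lands in, found by binary search over the CDF
    return [samples[bisect.bisect_left(cdf, random.uniform(0.0, cdf[-1]))]
            for _ in range(n)]

# A sample with weight 0.8 should be drawn about 80% of the time
new = roulette_resample(['a', 'b', 'c'], [0.1, 0.8, 0.1], 10000)
frac_b = new.count('b') / len(new)
```

Because each draw is independent, this estimator has higher variance than systematic resampling, which is why the low-variance variant below is usually preferred.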

Resampling Algorithm

1. Algorithm systematic_resampling(S, n):
2.   S' = ∅, c_1 = w_1
3.   For i = 2 … n                      (generate CDF)
4.     c_i = c_{i-1} + w_i
5.   u_1 ~ U(0, 1/n], i = 1             (initialize threshold)
6.   For j = 1 … n                      (draw samples …)
7.     While (u_j > c_i)                (skip until next threshold reached)
8.       i = i + 1
9.     Insert <x_i, 1/n> into S'
10.    u_{j+1} = u_j + 1/n              (increment threshold)
11.  Return S'

Also called stochastic universal sampling

[From Thrun's book "Probabilistic Robotics"]
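The systematic resampling algorithm above translates almost line for line into Python. This is a sketch of the same procedure (one random offset, n evenly spaced thresholds, a single linear pass):

```python
import random

random.seed(5)

def systematic_resample(samples, weights):
    """Low-variance resampling: one random draw, n evenly spaced thresholds."""
    n = len(samples)
    # Generate the CDF of the normalized weights (steps 2-4)
    total = sum(weights)
    cdf, c = [], 0.0
    for w in weights:
        c += w / total
        cdf.append(c)
    # Initialize the threshold with a single random offset in (0, 1/n] (step 5)
    u = random.uniform(0.0, 1.0 / n)
    new, i = [], 0
    for j in range(n):                       # draw samples (step 6)
        while i < n - 1 and u > cdf[i]:      # skip until threshold reached (7-8)
            i += 1
        new.append(samples[i])               # insert (step 9)
        u += 1.0 / n                         # increment threshold (step 10)
    return new

# A sample holding 70% of the weight gets roughly 70% of the slots
new = systematic_resample([0, 1, 2, 3], [0.1, 0.1, 0.7, 0.1])
```

Because the thresholds are evenly spaced, each sample is copied either floor(n·w_i) or ceil(n·w_i) times, which is where the low variance comes from.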

Next, the robot moves to the right

… thus, we have to shift all balls (particles) to the right

… and add some position noise

Next, the robot senses that it is next to one of the three doors

Now we have to resample again

The robot moves again

… so we must move all balls (particles) to the right again

… and add some position noise

And so on …

Now Let’s Compare that With Some of the Other Methods

Grid Localization

[Thrun, Burgard & Fox (2005)]


Markov Localization

[Thrun, Burgard & Fox (2005)]

Kalman Filter

[Thrun, Burgard & Fox (2005)]

Particle Filter

[Thrun, Burgard & Fox (2005)]

Importance Sampling

• Ideally, the particles would represent samples drawn from the distribution p(x|z).
– In practice, we usually cannot get p(x|z) in closed form; in any case, it would usually be difficult to draw samples from p(x|z).

• We use importance sampling:
– Particles are drawn from an importance distribution.
– Particles are weighted by importance weights.

[ http://www.fulton.asu.edu/~morrell/581/ ]
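The two bullets above can be made concrete with a minimal importance-sampling sketch. The target and importance distributions here are illustrative choices of my own (a standard normal as the target p, a wider normal as the importance distribution q), not distributions from the lecture:

```python
import math
import random

random.seed(6)

# Target density p(x): standard normal (stands in for the hard-to-sample p(x|z))
def p(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

# Importance density q(x): N(0, 2^2) -- wider than p, and easy to sample
def q(x):
    return math.exp(-0.5 * (x / 2) ** 2) / (2 * math.sqrt(2 * math.pi))

# Particles are drawn from the importance distribution ...
xs = [random.gauss(0.0, 2.0) for _ in range(20000)]
# ... and weighted by importance weights w = p / q
ws = [p(x) / q(x) for x in xs]
total = sum(ws)

# Self-normalized estimate of E_p[x^2]; the true value is 1.0
est = sum(w * x * x for w, x in zip(ws, xs)) / total
```

The weights correct for the mismatch between where we sampled (q) and where we wanted to sample (p); in the particle filter, the measurement likelihood plays exactly this corrective role.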

Monte Carlo Samples (Particles)

• The posterior distribution p(x|z) may be difficult or impossible to compute in closed form.

• An alternative is to represent p(x|z) using Monte Carlo samples (particles):
– Each particle has a value and a weight.

[Figure: particles along the x axis, with heights indicating their weights]

[ http://www.fulton.asu.edu/~morrell/581/ ]

In 2D it looks like this

[http://www.ite.uni-karlsruhe.de/METZGER/DIPLOMARBEITEN/dipl2.html]

Objective: Find p(x_k | z_k, …, z_1)

• The objective of the particle filter is to compute the conditional distribution p(x_k | z_k, …, z_1).

• To do this analytically, we would use the Chapman-Kolmogorov equation and Bayes' theorem, along with Markov model assumptions.

• The particle filter gives us an approximate computational technique.

[ http://www.fulton.asu.edu/~morrell/581/ ]

Initial State Distribution

[Figure: initial particles drawn from the prior distribution over x0]

[ http://www.fulton.asu.edu/~morrell/581/ ]

State Update

x1 = f0(x0, w0)

[Figure: each initial particle x0 is propagated through the state update to give a particle x1]

[ http://www.fulton.asu.edu/~morrell/581/ ]

Compute Weights

[Figure: particle weights before and after applying the measurement likelihood p(z1|x1)]

[ http://www.fulton.asu.edu/~morrell/581/ ]

Resample

[Figure: particles before and after resampling]

[ http://www.fulton.asu.edu/~morrell/581/ ]

THE END
