Merging Laser Range Data Based on Odometry-Only Tracking

Lecture 14: Localization

CS 344R/393R: Robotics
Benjamin Kuipers

Thanks to Dieter Fox for some of his figures.

Localization: “Where am I?”

• The map-building method we studied assumes that the robot knows its location.
  – Precise (x, y, θ) coordinates in the same frame of reference as the occupancy grid map.
• This assumes that odometry is accurate, which is often false.
• We will need to relocalize at each step.

Odometry-Only Tracking: 6 times around a 2 m × 3 m area

Merging Laser Range Data Based on Odometry-Only Tracking

SLAM: Simultaneous Localization and Mapping

Alternate at each motion step (a sketch follows below):
1. Localization:
   • Assume an accurate map.
   • Match sensor readings against the map to update location after motion.
2. Mapping:
   • Assume a known location in the map.
   • Update the map from sensor readings.
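
A minimal sketch of this alternation in Python. The helpers localize and update_map are hypothetical placeholders, not specified in the lecture; any scan matcher and occupancy-grid update could stand behind them:

    # Hypothetical SLAM loop: alternate localization and mapping at each step.
    # `localize` and `update_map` are placeholders for a scan matcher and an
    # occupancy-grid update, respectively.
    def slam_step(pose, grid_map, odometry, scan):
        pose = localize(grid_map, pose, odometry, scan)  # 1. fix pose against the map
        grid_map = update_map(grid_map, pose, scan)      # 2. grow map from known pose
        return pose, grid_map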

Mapping Without Localization

Mapping With Localization

Modeling Action and Sensing

• Action model: P(x_t | x_{t-1}, u_{t-1})
• Sensor model: P(z_t | x_t)
• What we want to know is the Belief: the posterior probability distribution of x_t, given the past history of actions and sensor inputs.

[Figure: dynamic Bayesian network with states x_{t-1}, x_t, x_{t+1}, observations z_{t-1}, z_t, z_{t+1}, and actions u_{t-1}, u_t.]

Bel(x_t) = P(x_t | u_1, z_2, …, u_{t-1}, z_t)
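
To make the two models concrete, here is a minimal sketch in Python. The 1-D grid world, door locations, and noise values are assumptions invented for illustration, not part of the lecture:

    import numpy as np

    N = 10  # illustrative 1-D world of 10 cells

    def action_model(x_t, x_prev, u_prev):
        """P(x_t | x_{t-1}, u_{t-1}): chance of landing in cell x_t after
        commanding a move of u_prev cells from x_prev (assumed noise model)."""
        intended = (x_prev + u_prev) % N
        if x_t == intended:
            return 0.8                                   # assumed success rate
        if x_t in ((intended - 1) % N, (intended + 1) % N):
            return 0.1                                   # assumed over/undershoot
        return 0.0

    def sensor_model(z_t, x_t, doors=(1, 4, 7)):
        """P(z_t | x_t): chance of the door detector reading z_t in cell x_t."""
        at_door = x_t in doors
        if z_t == 1:                                     # "door seen"
            return 0.9 if at_door else 0.2               # assumed hit/false-alarm rates
        return 0.1 if at_door else 0.8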

The Markov Assumption

• Given the present, the future is independent of the past.
• Given the state x_t, the observation z_t is independent of the past.

[Figure: the same DBN, illustrating these independence assumptions.]

P(z_t | x_t) = P(z_t | x_t, u_1, z_2, …, u_{t-1})

Dynamic Bayesian Network

• The well-known DBN for local SLAM.

Law of Total Probability (marginalizing)

Discrete case:

  P(x) = Σ_y P(x, y)
  P(x) = Σ_y P(x | y) P(y)
  Σ_y P(y) = 1

Continuous case:

  p(x) = ∫ p(x, y) dy
  p(x) = ∫ p(x | y) p(y) dy
  ∫ p(y) dy = 1
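
A quick numeric check of the discrete case, reusing the illustrative door sensor above (the uniform prior is an assumption for the example):

    # P(z=1) = Σ_x P(z=1 | x) P(x): marginalize the door reading
    # over a uniform prior on the 10 cells.
    prior = np.full(N, 1.0 / N)
    p_door = sum(sensor_model(1, x) * prior[x] for x in range(N))
    print(p_door)  # 0.9 * (3/10) + 0.2 * (7/10) = 0.41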

Bayes Law

• We can treat the denominator in Bayes Law as a normalizing constant:

  P(x | y) = P(y | x) P(x) / P(y) = η P(y | x) P(x)

  η = P(y)⁻¹ = [ Σ_x P(y | x) P(x) ]⁻¹

• We will apply it in the following form:

  Bel(x_t) = P(x_t | u_1, z_2, …, u_{t-1}, z_t)
           = η P(z_t | x_t, u_1, z_2, …, u_{t-1}) P(x_t | u_1, z_2, …, u_{t-1})
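
As a numeric sanity check, one Bayes update with the illustrative models above: start from the uniform prior and observe a door (z = 1):

    # Posterior P(x | z=1) = η P(z=1 | x) P(x), with η normalizing the mass to 1.
    posterior = np.array([sensor_model(1, x) * prior[x] for x in range(N)])
    posterior /= posterior.sum()    # η = 1 / 0.41 here
    # Door cells (1, 4, 7) now hold 0.09/0.41 ≈ 0.22 each; the rest ≈ 0.05.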

Bayes Filter

  Bel(x_t) = P(x_t | u_1, z_2, …, u_{t-1}, z_t)

  (Bayes)        = η P(z_t | x_t, u_1, z_2, …, u_{t-1}) P(x_t | u_1, z_2, …, u_{t-1})

  (Markov)       = η P(z_t | x_t) P(x_t | u_1, z_2, …, u_{t-1})

  (Total prob.)  = η P(z_t | x_t) ∫ P(x_t | u_1, z_2, …, u_{t-1}, x_{t-1})
                                    P(x_{t-1} | u_1, z_2, …, u_{t-1}) dx_{t-1}

  (Markov)       = η P(z_t | x_t) ∫ P(x_t | u_{t-1}, x_{t-1}) P(x_{t-1} | u_1, z_2, …, z_{t-1}) dx_{t-1}

                 = η P(z_t | x_t) ∫ P(x_t | u_{t-1}, x_{t-1}) Bel(x_{t-1}) dx_{t-1}

Markov Localization

• Bel(x_{t-1}) and Bel(x_t) are the prior and posterior probabilities of location x.
• P(x_t | u_{t-1}, x_{t-1}) is the action model, giving the probability distribution over the result of u_{t-1} at x_{t-1}.
• P(z_t | x_t) is the sensor model, giving the probability distribution over sense images z_t at x_t.
• η is a normalization constant, ensuring that the total probability mass over x_t is 1.

  Bel(x_t) = η P(z_t | x_t) ∫ P(x_t | u_{t-1}, x_{t-1}) Bel(x_{t-1}) dx_{t-1}

Markov Localization

• Evaluate Bel(x_t) for every possible state x_t.
• Prediction phase:
  – Integrate over every possible state x_{t-1} to apply the probability that action u_{t-1} could reach x_t from there.

  Bel⁻(x_t) = ∫ P(x_t | u_{t-1}, x_{t-1}) Bel(x_{t-1}) dx_{t-1}

• Correction phase:
  – Weight each state x_t with the likelihood of observation z_t.

  Bel(x_t) = η P(z_t | x_t) Bel⁻(x_t)
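
A grid-based sketch of these two phases in Python, reusing the illustrative 1-D door world from above (the world and models are assumptions for the example, not from the lecture):

    def markov_localization_step(bel, u_prev, z_t):
        """One prediction + correction step of grid-based Markov localization."""
        # Prediction: Bel-(x_t) = Σ_{x_{t-1}} P(x_t | u_{t-1}, x_{t-1}) Bel(x_{t-1})
        bel_pred = np.array([
            sum(action_model(x_t, x_prev, u_prev) * bel[x_prev]
                for x_prev in range(N))
            for x_t in range(N)
        ])
        # Correction: Bel(x_t) = η P(z_t | x_t) Bel-(x_t)
        bel_new = np.array([sensor_model(z_t, x_t) * bel_pred[x_t]
                            for x_t in range(N)])
        return bel_new / bel_new.sum()    # η normalizes the mass to 1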

Uniform prior probability Bel⁻(x_0)

Sensor information P(z_0 | x_0):

  Bel(x_0) = η P(z_0 | x_0) Bel⁻(x_0)

Apply the action model P(x_1 | u_0, x_0):

  Bel⁻(x_1) = ∫ P(x_1 | u_0, x_0) Bel(x_0) dx_0

Combine with observation P(z_1 | x_1):

  Bel(x_1) = η P(z_1 | x_1) Bel⁻(x_1)

Action model again, P(x_2 | u_1, x_1):

  Bel⁻(x_2) = ∫ P(x_2 | u_1, x_1) Bel(x_1) dx_1

Local and Global Localization

• Most localization is local:
  – Incrementally correct belief in position after each action.
• Global localization is more dramatic.
  – Where in the entire environment am I?
• The “kidnapped robot problem”
  – Includes detecting that I am lost.

[Figure: global localization sequence: initial belief Bel(x_0), intermediate belief Bel(x_t), final belief Bel(x_t).]

Global Localization Movie

Future Attractions

• Sensor and action models
• Particle filtering (a minimal sketch follows below)
  – An elegant, simple algorithm
  – Monte Carlo simulation
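
As a preview, a minimal particle filter step for the same illustrative door world, built on the assumed models sketched earlier (the sample size and noise handling are arbitrary choices for the example):

    rng = np.random.default_rng(0)

    def particle_filter_step(particles, u_prev, z_t):
        """One Monte Carlo localization step over an array of particle cells."""
        # Predict: sample each particle through the (assumed) action model.
        noise = rng.choice([-1, 0, 1], size=len(particles), p=[0.1, 0.8, 0.1])
        moved = (particles + u_prev + noise) % N
        # Correct: weight by the sensor model, then resample in proportion.
        weights = np.array([sensor_model(z_t, x) for x in moved], dtype=float)
        return rng.choice(moved, size=len(moved), p=weights / weights.sum())

    particles = rng.integers(0, N, size=200)            # uniform prior over cells
    particles = particle_filter_step(particles, 0, 1)   # sense a door
    particles = particle_filter_step(particles, 3, 1)   # move 3, sense a door again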