Humanoid Robotics: Monte Carlo Localization (Maren Bennewitz, uni-bonn.de)


1

Humanoid Robotics

Monte Carlo Localization

Maren Bennewitz


2

Motivation
§ To perform useful service, a robot needs to know its global pose wrt. the environment
§ Motion commands are only executed inaccurately
§ Estimate the global pose of a robot in a given model of the environment
§ Based on its sensor data and odometry measurements
§ Sensor data: range image with depth measurements or laser range readings


3

Basic Probability Rules (1)

If x and y are independent: p(x, y) = p(x) p(y)

Bayes' rule: p(x | y) = p(y | x) p(x) / p(y)

Often written as p(x | y) = η p(y | x) p(x). The denominator p(y) is a normalizing constant that ensures that the posterior on the left-hand side sums up to 1 over all possible values of x.

In case of background knowledge z, Bayes' rule turns into p(x | y, z) = p(y | x, z) p(x | z) / p(y | z)
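As a small numeric sketch (not from the slides), Bayes' rule for a discrete variable can be computed directly; the helper name `bayes_posterior` and the example probabilities are illustrative assumptions:

```python
def bayes_posterior(prior, likelihood):
    """p(x | y) = eta * p(y | x) * p(x) for a discrete variable x.

    prior:      dict mapping x -> p(x)
    likelihood: dict mapping x -> p(y | x) for the observed y
    The normalizer eta = 1 / p(y), with p(y) obtained via the
    law of total probability (the sum in the denominator).
    """
    unnormalized = {x: likelihood[x] * prior[x] for x in prior}
    eta = 1.0 / sum(unnormalized.values())
    return {x: eta * p for x, p in unnormalized.items()}

# Example: two pose hypotheses; the observation is four times more
# likely under the first one
posterior = bayes_posterior({"room": 0.5, "hall": 0.5},
                            {"room": 0.8, "hall": 0.2})
```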


4

Basic Probability Rules (2)

Law of total probability:

discrete case: p(x) = Σ_y p(x | y) p(y)

continuous case: p(x) = ∫ p(x | y) p(y) dy


5

Markov Assumption

For a dynamical system with states x_t, observations z_t, and actions u_t, the Markov assumption states that the current observation depends only on the current state, p(z_t | x_{0:t}, z_{1:t-1}, u_{1:t}) = p(z_t | x_t), and that the current state depends only on the previous state and the last action, p(x_t | x_{0:t-1}, z_{1:t-1}, u_{1:t}) = p(x_t | x_{t-1}, u_t)


6

Task
§ Estimate the pose of a robot in a given model of the environment
§ Based on its sensor data and odometry measurements


7

State Estimation
§ Estimate the state x_t given observations z_{1:t} and odometry measurements/actions u_{1:t}
§ Goal: Calculate the distribution p(x_t | z_{1:t}, u_{1:t})


8

Recursive Bayes Filter 1

Definition of the belief: bel(x_t) = p(x_t | z_{1:t}, u_{1:t}), where z_{1:t} and u_{1:t} denote all data up to time t


9

Recursive Bayes Filter 2

Bayes' rule: bel(x_t) = η p(z_t | x_t, z_{1:t-1}, u_{1:t}) p(x_t | z_{1:t-1}, u_{1:t})


10

Recursive Bayes Filter 3

Markov assumption (z_t depends only on x_t): bel(x_t) = η p(z_t | x_t) p(x_t | z_{1:t-1}, u_{1:t})


11

Recursive Bayes Filter 4

Law of total probability over the previous state: bel(x_t) = η p(z_t | x_t) ∫ p(x_t | x_{t-1}, z_{1:t-1}, u_{1:t}) p(x_{t-1} | z_{1:t-1}, u_{1:t}) dx_{t-1}


12

Recursive Bayes Filter 5

Markov assumption (x_t depends only on x_{t-1} and u_t): bel(x_t) = η p(z_t | x_t) ∫ p(x_t | x_{t-1}, u_t) p(x_{t-1} | z_{1:t-1}, u_{1:t}) dx_{t-1}


13

Recursive Bayes Filter 6

Since x_{t-1} does not depend on the future action u_t: bel(x_t) = η p(z_t | x_t) ∫ p(x_t | x_{t-1}, u_t) p(x_{t-1} | z_{1:t-1}, u_{1:t-1}) dx_{t-1}


14

Recursive Bayes Filter 7

With the recursive term bel(x_{t-1}) = p(x_{t-1} | z_{1:t-1}, u_{1:t-1}):
bel(x_t) = η p(z_t | x_t) ∫ p(x_t | x_{t-1}, u_t) bel(x_{t-1}) dx_{t-1}


15

Recursive Bayes Filter 7

bel(x_t) = η p(z_t | x_t) ∫ p(x_t | x_{t-1}, u_t) bel(x_{t-1}) dx_{t-1}, where p(x_t | x_{t-1}, u_t) is the motion model and p(z_t | x_t) is the observation model
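The final recursion can be written down almost literally for a discrete state space. The following sketch (the function name and argument layout are my own, not from the slides) performs the prediction via the law of total probability and the correction via the observation model:

```python
def bayes_filter_step(belief, u, z, motion_model, observation_model, states):
    """One step of the recursive Bayes filter over a discrete state space.

    belief:                  dict x -> bel(x_{t-1})
    motion_model(x, xp, u):  p(x_t = x | x_{t-1} = xp, u_t = u)
    observation_model(z, x): p(z_t = z | x_t = x)
    """
    # Prediction: sum over the previous state (law of total probability)
    predicted = {x: sum(motion_model(x, xp, u) * belief[xp] for xp in states)
                 for x in states}
    # Correction: weight by the observation model and normalize (eta)
    unnormalized = {x: observation_model(z, x) * predicted[x] for x in states}
    eta = 1.0 / sum(unnormalized.values())
    return {x: eta * p for x, p in unnormalized.items()}
```

For a continuous pose space the sum becomes the integral of the slide, which is what the particle filter later approximates with samples.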


16

Probabilistic Motion Model
§ Robots execute motion commands only inaccurately
§ The motion model specifies the probability that action u_t moves the robot from x_{t-1} to x_t: p(x_t | x_{t-1}, u_t)
§ Defined individually for each type of robot
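For a planar pose, drawing from p(x_t | x_{t-1}, u_t) can be sketched as below; the additive Gaussian noise model and the parameter `alpha` are illustrative assumptions, not the slides' robot-specific model:

```python
import math
import random

def sample_motion_model(pose, u, alpha=0.1):
    """Draw x_t ~ p(x_t | x_{t-1}, u_t) for a planar pose (x, y, theta).

    u = (dx, dy, dtheta) is the odometry increment in the robot frame;
    alpha scales the Gaussian noise added to each component.
    """
    x, y, theta = pose
    dx, dy, dtheta = (v + random.gauss(0.0, alpha * (abs(v) + 1e-3)) for v in u)
    # Rotate the noisy local increment into the world frame
    nx = x + dx * math.cos(theta) - dy * math.sin(theta)
    ny = y + dx * math.sin(theta) + dy * math.cos(theta)
    return (nx, ny, theta + dtheta)
```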


17

Observation Model for Range Readings
§ Sensor data consists of K individual measurements: z_t = {z_t^1, ..., z_t^K}
§ The individual measurements are independent given the robot's pose: p(z_t | x_t, m) = Π_{k=1..K} p(z_t^k | x_t, m)
§ "How well can the distance measurements be explained given the pose (and the map m)?"


18

Observation Model: Simplest Ray-Cast Model
§ Consider the first obstacle along the ray
§ Compare the actually measured distance with the expected distance to the obstacle
§ Use a Gaussian to evaluate the difference

[Figure: measured distance vs. object in the map]
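The Gaussian evaluation of the difference can be sketched as follows; `sigma` is an assumed sensor-noise parameter, and the expected distance is taken as given (in practice it comes from ray-casting in the map):

```python
import math

def raycast_likelihood(z_measured, z_expected, sigma=0.2):
    """p(z | x, m): Gaussian on the difference between the measured
    distance and the expected distance to the first obstacle along
    the ray."""
    d = z_measured - z_expected
    return math.exp(-0.5 * (d / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
```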


19

Observation Model: Beam-Endpoint Model

Evaluate the distance of the hypothetical beam end point to the closest obstacle in the map with a Gaussian

Courtesy: Thrun, Burgard, Fox


20

Summary
§ The Bayes filter is a framework for state estimation
§ The motion and observation models are the central components of the Bayes filter


21

Particle Filter Monte Carlo Localization


22

Particle Filter
§ One implementation of the recursive Bayes filter
§ Non-parametric framework (not only Gaussian)
§ Arbitrary models can be used as motion and observation models


23

Motivation

Goal: Approach for dealing with arbitrary distributions


24

Key Idea: Samples

Use a set of weighted samples to represent arbitrary distributions

[Figure: a distribution approximated by samples]


25

Particle Set

Set of weighted samples: X = {⟨x^[j], w^[j]⟩}_{j=1,...,J}, where x^[j] is a state hypothesis and w^[j] its importance weight


26

Particle Filter
§ The set of weighted particles approximates the belief distribution about the robot's pose
§ Prediction: Sample from the motion model (propagate particles forward)
§ Correction: Weight the samples based on the observation model


27

Monte Carlo Localization
§ Each particle is a pose hypothesis
§ Prediction: For each particle, sample a new pose from the motion model
§ Correction: Weight the samples according to the observation model: w_t^[j] = p(z_t | x_t^[j])

[The weight has to be chosen in exactly this way if the samples are drawn from the motion model, see Cognitive Robotics lecture]


28

Monte Carlo Localization
§ Each particle is a pose hypothesis
§ Prediction: For each particle, sample a new pose from the motion model
§ Correction: Weight the samples according to the observation model
§ Resampling: Draw sample i with probability w_t^[i] and repeat J times (J = #particles)
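Putting the three steps together, one MCL update can be sketched generically. The function names are my own, and plain multinomial resampling via `random.choices` is used here for brevity (the slides present low-variance resampling as the preferred alternative):

```python
import random

def mcl_step(particles, u, z, sample_motion, likelihood):
    """One MCL update: prediction, correction, resampling.

    particles:           list of pose hypotheses
    sample_motion(x, u): draws a new pose from the motion model
    likelihood(z, x):    observation model p(z_t | x_t)
    """
    # Prediction: propagate each particle through the motion model
    moved = [sample_motion(x, u) for x in particles]
    # Correction: weight each sample by the observation model
    weights = [likelihood(z, x) for x in moved]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resampling: draw J samples with probability w^[i] (multinomial)
    return random.choices(moved, weights=weights, k=len(moved))
```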


29

MCL – Correction Step

Image Courtesy: S. Thrun, W. Burgard, D. Fox


30

MCL – Resampling & Prediction

Image Courtesy: S. Thrun, W. Burgard, D. Fox


31

MCL – Correction Step

Image Courtesy: S. Thrun, W. Burgard, D. Fox


32

MCL – Resampling & Prediction

Image Courtesy: S. Thrun, W. Burgard, D. Fox


33

Resampling
§ Draw sample i with probability w_t^[i]. Repeat J times (J = #particles)
§ Informally: "Replace unlikely samples by more likely ones"
§ Survival-of-the-fittest principle
§ "Trick" to avoid that many samples cover unlikely states
§ Needed since we have a limited number of samples


34

Resampling

["Roulette wheel" figure: wheel segments proportional to w_1, w_2, ..., w_n]

Stochastic resampling:
§ Draw randomly between 0 and 1
§ Binary search in the cumulative weight array
§ Repeat J times
§ O(J log J)

Low-variance resampling:
§ Systematic resampling: initial value between 0 and 1/J, step size 1/J
§ O(J)
§ Also called "stochastic universal resampling"

Assumption: normalized weights


35

Low Variance Resampling

[Algorithm listing: J = #particles; initialization with a random start in [0, 1/J); step size 1/J; walk along the cumulative sum of weights and decide whether or not to take particle i]
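The annotations above correspond to the standard low-variance resampler; a minimal sketch (the function name is assumed, and the weights are expected to be normalized):

```python
import random

def low_variance_resample(particles, weights):
    """Low-variance (systematic) resampling in O(J).

    Uses a single random number: start at an offset in [0, 1/J)
    and advance with step size 1/J along the cumulative sum of
    the normalized weights, keeping particle i whenever a sampling
    point falls into its weight interval.
    """
    J = len(particles)
    r = random.uniform(0.0, 1.0 / J)  # initialization
    c = weights[0]                    # cumulative sum of weights
    i = 0
    resampled = []
    for j in range(J):
        u = r + j / J                 # j-th equally spaced sampling point
        while u > c:                  # decide whether to take particle i:
            i += 1                    # advance until the cumulative sum
            c += weights[i]           # covers the sampling point
        resampled.append(particles[i])
    return resampled
```

Because the J sampling points are equally spaced, a particle with weight w^[i] is drawn either floor(J w^[i]) or ceil(J w^[i]) times, which is the low-variance property.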


36

initialization

Courtesy: Thrun, Burgard, Fox


37

observation

Courtesy: Thrun, Burgard, Fox


38

weight update, resampling, motion update

Courtesy: Thrun, Burgard, Fox


39

observation

Courtesy: Thrun, Burgard, Fox


40

weight update, resampling

Courtesy: Thrun, Burgard, Fox


41

motion update

Courtesy: Thrun, Burgard, Fox


42

observation

Courtesy: Thrun, Burgard, Fox


43

weight update, resampling

Courtesy: Thrun, Burgard, Fox


44

motion update

Courtesy: Thrun, Burgard, Fox


45

observation

Courtesy: Thrun, Burgard, Fox


46

weight update, resampling

Courtesy: Thrun, Burgard, Fox


47

motion update

Courtesy: Thrun, Burgard, Fox


48

observation

Courtesy: Thrun, Burgard, Fox


49

MCL in Action


50

Summary – Particle Filters
§ Particle filters are non-parametric, recursive Bayes filters
§ The belief about the state is represented by a set of weighted samples
§ The motion model is used to draw the samples for the next time step
§ The weights of the particles are computed based on the observation model
§ Resampling is carried out based on the weights
§ Called: Monte Carlo localization (MCL)


51

Literature
§ Book: Probabilistic Robotics, S. Thrun, W. Burgard, and D. Fox


52

6D Localization for Humanoid Robots


53

Localization for Humanoids

3D environments require a 6D pose estimate of the torso: 2D position, height, and the roll, pitch, and yaw angles


54

Localization for Humanoids
§ Recursively estimate the distribution about the robot's pose using MCL
§ Probability distribution represented by pose hypotheses (particles)
§ Needed: Motion model and observation model


55

Motion Model
§ The odometry estimate u_t corresponds to the incremental motion of the torso
§ u_t is computed from forward kinematics (FK) from the current stance foot while walking


56

Kinematic Walking Odometry

[Figure 3.1: coordinate frames F_rfoot, F_lfoot, F_torso, F_odom]

Figure 3.1: Coordinate frames for kinematic walking odometry of a biped robot under ideal conditions. Initially, the frame F_rfoot of the right stance foot is fixed in the world frame F_odom, and the torso frame F_torso can be computed with forward kinematics over the right leg. In the next step, the left swing leg touches the ground. While both feet remain on the ground, the left foot frame F_lfoot is computed with forward kinematics over both leg chains. Finally, for the next stepping phase the left leg becomes the stance leg and F_lfoot remains static with respect to F_odom, which creates a new reference to compute F_torso.

The particle filter can easily be parallelized and can directly benefit from more computational power by increasing the number of particles; the achievable estimation accuracy thus scales with the available computational resources.

3.2. Motion Model

In the prediction step of MCL, a new pose is drawn for each particle according to the motion model p(x_t | x_{t-1}, u_t). In our case, u_t corresponds to the incremental motion of the humanoid's torso in a local coordinate frame and is computed with forward kinematics from the current stance foot while walking. This kind of odometric estimate is usually referred to as kinematic odometry. Figure 3.1 shows a single stepping phase and the coordinate frames used to compute the kinematic odometry. As illustrated in Figure 3.2, u_t can then be computed from two consecutive torso poses x̃_{t-1} and x̃_t, which are 6D rigid-body transforms in the odometry coordinate frame F_odom:

    T(u_t) = (x̃_{t-1})^{-1} ⊕ x̃_t .   (3.7)

Here, T(u_t) denotes the 6D rigid-body transform of the odometry motion u_t. Conversely, we can think of the humanoid's odometric pose estimate as a concatenation of the incremental motions:

    x̃_t = x̃_{t-1} ⊕ T(u_t) .   (3.8)

Under ideal conditions the stance foot of the robot rests firmly and precisely on the ground, leading to an accurate 6D pose estimate with kinematic odometry. However, the feet of the robot typically slip on the ground due to inaccurate execution of motions. Additionally, gear backlash in individual joints can aggravate the problem.

frame of the current stance foot

frame of the torso, transform can be computed with FK over the right leg

Keep track of the transform to the current stance foot
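Equations (3.7) and (3.8) compose rigid-body transforms; for illustration, here is the planar (3x3 homogeneous matrix) analogue, a sketch with assumed helper names rather than the thesis' 6D implementation:

```python
import math

def mat(x, y, theta):
    """Homogeneous transform of a planar pose (analogue of the 6D
    rigid-body transforms in Eqs. (3.7)/(3.8))."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]]

def matmul(A, B):
    """Transform composition (the circled operator in the text)."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inverse(T):
    """Inverse of a rigid-body transform: rotation R^T, translation -R^T t."""
    c, s, x, y = T[0][0], T[1][0], T[0][2], T[1][2]
    return [[c, s, -(c * x + s * y)],
            [-s, c, s * x - c * y],
            [0.0, 0.0, 1.0]]

def odometry_increment(x_prev, x_curr):
    """Eq. (3.7): the odometry motion between two consecutive torso poses."""
    return matmul(inverse(x_prev), x_curr)
```

Composing the previous pose with the increment, matmul(x_prev, odometry_increment(x_prev, x_curr)), recovers the current pose, which is exactly the concatenation of Eq. (3.8).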


57

Kinematic Walking Odometry

[Figure 3.1 repeated; see the full caption on the previous slide]

Both feet remain on the ground; compute the transform to the frame of the left foot with FK


58

Kinematic Walking Odometry

3.2. Motion Model

Frfoot

Ftorso

Fodom

Frfoot

Ftorso

Fodom

Frfoot Flfoot

Ftorso

Fodom

Flfoot

Ftorso

Fodom

Figure 3.1.: Coordinate frames for kinematic walking odometry of a biped robot under ideal con-ditions. Initially, the frame Frfoot of the right stance foot is fixed in the world frame Fodom and thetorso frame Ftorso can be computed with forward kinematics over the right leg. In the next step, theleft swing leg touches the ground. While both feet remain on the ground, the left foot frame Flfootis computed with forward kinematics over both leg chains. Finally, for the next stepping phase theleft leg becomes the stance leg and Flfoot remains static with respect to Fodom, which creates a newreference to compute Ftorso.

filter can be easily parallelized and it can directly benefit from more computational powerby increasing the number of particles. Thus, the possible estimation accuracy directlyscales with the available computational resources.

3.2. Motion Model

In the prediction step of MCL, a new pose is drawn for each particle according to themotion model p(xt | xt�1,ut). In our case, ut corresponds to the incremental motion ofthe humanoid’s torso in a local coordinate frame and is computed with forward kinematicsfrom the current stance foot while walking. This kind of odometric estimate is usuallyreferred to as kinematic odometry. Figure 3.1 shows a single stepping phase and thecoordinate frames used to compute the kinematic odometry. As illustrated in Figure 3.2,ut can then be computed from two consecutive torso poses ext�1 and ext , which are 6D rigidbody transforms in the odometry coordinate frame Fodom:

T (ut) = (ext�1)�1 ⌦ext . (3.7)

Here, T (ut) denotes the 6D rigid body transform of the odometry motion ut . On the otherhand, we can think of the humanoid’s odometric pose estimates as a concatenation of theincremental motions

ext = ext�1 ⌦T (ut) . (3.8)

Under ideal conditions the stance foot of the robot rests firmly and precisely on theground, leading to an accurate 6D pose estimate with kinematic odometry. However, thefeet of the robot typically slip on the ground due to inaccurate execution of motions. Addi-tionally, gear backlash in individual joints can aggravate the problem. Hence, in practice,

45

3.2. Motion Model

Frfoot

Ftorso

Fodom

Frfoot

Ftorso

Fodom

Frfoot Flfoot

Ftorso

Fodom

Flfoot

Ftorso

Fodom

Figure 3.1.: Coordinate frames for kinematic walking odometry of a biped robot under ideal con-ditions. Initially, the frame Frfoot of the right stance foot is fixed in the world frame Fodom and thetorso frame Ftorso can be computed with forward kinematics over the right leg. In the next step, theleft swing leg touches the ground. While both feet remain on the ground, the left foot frame Flfootis computed with forward kinematics over both leg chains. Finally, for the next stepping phase theleft leg becomes the stance leg and Flfoot remains static with respect to Fodom, which creates a newreference to compute Ftorso.

filter can be easily parallelized and it can directly benefit from more computational powerby increasing the number of particles. Thus, the possible estimation accuracy directlyscales with the available computational resources.

3.2. Motion Model

In the prediction step of MCL, a new pose is drawn for each particle according to themotion model p(xt | xt�1,ut). In our case, ut corresponds to the incremental motion ofthe humanoid’s torso in a local coordinate frame and is computed with forward kinematicsfrom the current stance foot while walking. This kind of odometric estimate is usuallyreferred to as kinematic odometry. Figure 3.1 shows a single stepping phase and thecoordinate frames used to compute the kinematic odometry. As illustrated in Figure 3.2,ut can then be computed from two consecutive torso poses ext�1 and ext , which are 6D rigidbody transforms in the odometry coordinate frame Fodom:

T (ut) = (ext�1)�1 ⌦ext . (3.7)

Here, T (ut) denotes the 6D rigid body transform of the odometry motion ut . On the otherhand, we can think of the humanoid’s odometric pose estimates as a concatenation of theincremental motions

ext = ext�1 ⌦T (ut) . (3.8)

Under ideal conditions the stance foot of the robot rests firmly and precisely on theground, leading to an accurate 6D pose estimate with kinematic odometry. However, thefeet of the robot typically slip on the ground due to inaccurate execution of motions. Addi-tionally, gear backlash in individual joints can aggravate the problem. Hence, in practice,


The left leg becomes the stance leg and is the new reference to compute the transform to the torso frame


Kinematic Walking Odometry
§  Using FK, the poses of all joints and sensors can be computed relative to the stance foot at each time step
§  The transform from the world frame to the stance foot is updated whenever the swing foot becomes the new stance foot
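The bookkeeping described above can be sketched as follows. This is a planar toy model under stated assumptions: the forward-kinematic transforms are given directly as poses instead of being computed from joint angles, and the class and method names are ours, not from the slides.

```python
import numpy as np

def compose(a, b):
    """Pose composition for planar poses (x, y, theta)."""
    x, y, th = a
    bx, by, bth = b
    return np.array([x + np.cos(th) * bx - np.sin(th) * by,
                     y + np.sin(th) * bx + np.cos(th) * by,
                     th + bth])

class KinematicOdometry:
    """Tracks the torso pose in the odometry frame via the stance foot."""

    def __init__(self, stance_in_odom):
        # Frame of the current stance foot, fixed in the odometry frame.
        self.stance_in_odom = np.asarray(stance_in_odom, dtype=float)

    def torso_pose(self, torso_in_stance):
        # Torso pose = (odom -> stance foot) composed with (stance -> torso),
        # where the second transform comes from FK over the stance leg.
        return compose(self.stance_in_odom, torso_in_stance)

    def exchange_support(self, new_stance_in_old_stance):
        # When the swing foot becomes the new stance foot, anchor its frame
        # in the odometry frame; it stays fixed for the next stepping phase.
        self.stance_in_odom = compose(self.stance_in_odom,
                                      new_stance_in_old_stance)

odom = KinematicOdometry([0.0, 0.0, 0.0])   # right foot at the origin
torso = odom.torso_pose([0.0, 0.05, 0.0])   # FK over the right leg
odom.exchange_support([0.15, 0.10, 0.0])    # left foot becomes stance foot
```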


Odometry Estimate
§  Odometry estimate from two consecutive torso poses

Figure 3.2.: Odometry estimate u_t from two consecutive torso poses during the walking cycle.

only noisy estimates of the odometry are available while walking and a substantial amount of drift accumulates over time. Consequently, a particle filter has to account for that noise with a higher variance, requiring a higher number of particles and thus more computational power for successful pose estimation. By learning the motion model parameters instead, the localization performance can be increased, both in terms of computational load as well as accuracy.

Here, we consider the most general case of any kind of 3D positional and rotational displacement, for instance originating from an omnidirectional walking engine. We furthermore assume that systematic drift affects the motion reported by odometry in the 2D plane, i.e., only (x̃, ỹ, ψ̃) are affected from x̃ = (x̃, ỹ, z̃, φ̃, θ̃, ψ̃). This is not a strong restriction as long as the humanoid walks on a solid surface, since its motion is constrained by this surface and it cannot leave the ground. Even when climbing challenging terrain such as stairs, the drift of the motion occurs in the 2D plane of the stance leg as long as the robot does not fall or slide down a slope. General noise in the kinematic estimate of the humanoid's height above the ground does not lead to a systematic drift.

3.2.1. Motion Model Calibration

For odometry calibration, we will refer to the reduced state vectors containing 2D position and orientation as x′ = (x, y, ψ)⊤. Corresponding to Eq. (3.8), u′_t = (u_{x,t}, u_{y,t}, u_{ψ,t})⊤ estimates the displacement between two poses reported by odometry

x̃′_t = x̃′_{t−1} ⊕ u′_t .   (3.9)

To calibrate the drift of u′_t, we assume that a ground truth pose x′ is available in a prior learning phase, e.g., from an external motion capture system, scan matching, or visual odometry. Based on the deviations from the ground truth, values of a calibration matrix M ∈ R^{3×3} can be determined to correct the 2D drift of odometry, such that

x′_t = x′_{t−1} ⊕ M u′_t .   (3.10)
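Determining M from ground-truth data amounts to a linear least-squares fit. The following sketch uses synthetic data under simplifying assumptions: the ground-truth increments are taken to be expressed in the same local frame as the odometry increments, and `M_true` exists only to generate the demo data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: odometry increments u'_t and ground-truth increments d_t,
# related by an unknown drift matrix M_true (an assumption for this demo).
M_true = np.array([[1.05, 0.00, 0.0],
                   [0.02, 0.95, 0.0],
                   [0.00, 0.00, 1.1]])
U = rng.normal(size=(200, 3))   # one odometry increment per row
D = U @ M_true.T                # corresponding ground-truth increments

# Least-squares fit of the 3x3 calibration matrix:
# minimize sum_t || d_t - M u'_t ||^2, i.e. solve U M^T = D in the LS sense.
M_est, *_ = np.linalg.lstsq(U, D, rcond=None)
M_est = M_est.T
```

With noise-free synthetic data the fit recovers the drift matrix exactly; on real data the residual reflects the non-systematic part of the noise, which the Gaussian of the motion model has to absorb.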



Odometry Estimate
§  The incremental motion of the torso is computed from kinematics of the legs
§  Typical error sources: Slippage on the ground and backlash in the joints
§  Accordingly, we have only noisy odometry estimates while walking; the drift accumulates over time
§  The particle filter has to account for that noise within the motion model


Motion Model
§  Noise modelled as a Gaussian with systematic drift on the 2D plane
§  Prediction step samples a new pose for each particle according to the motion model: motion composition of the previous pose with the odometry, corrected by the calibration matrix and perturbed with noise from the covariance matrix
§  Calibration matrix and covariance matrix learned with least squares optimization using ground truth data (as in Ch. 2)


Sampling from the Motion Model
§  We need to draw samples from a Gaussian


How to Sample from a Gaussian
§  Drawing from a 1D Gaussian can be done in closed form
§  Example with 10^6 samples
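One closed-form option is the Box-Muller transform. The sketch below draws from N(0, 2²) and checks the empirical moments; the function name `sample_gaussian` is ours, and we use 10^5 samples instead of the slide's 10^6 only to keep the demo fast.

```python
import math
import random

def sample_gaussian(mu, sigma):
    """Draw one sample from N(mu, sigma^2) via the Box-Muller transform."""
    u1 = random.random()
    u2 = random.random()
    # 1 - u1 lies in (0, 1], so the logarithm is always defined
    z = math.sqrt(-2.0 * math.log(1.0 - u1)) * math.cos(2.0 * math.pi * u2)
    return mu + sigma * z

random.seed(1)
samples = [sample_gaussian(0.0, 2.0) for _ in range(10**5)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# mean should be close to 0 and var close to sigma^2 = 4
```

In practice one would simply call `random.gauss` or `numpy.random.Generator.normal`, which implement equivalent closed-form draws.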


How to Sample from a Gaussian
§  Drawing from a 1D Gaussian can be done in closed form
§  If we consider the individual dimensions of the odometry as independent, we can draw each dimension using a 1D Gaussian


Sampling from the Odometry
§  We sample an individual motion for each particle
§  We then incorporate the sampled odometry into the pose of particle i
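The two bullets above can be sketched as a prediction step. This is a planar (SE(2)) reduction under stated assumptions: the calibration matrix `M` and per-dimension noise levels `sigma` are illustrative values, each noise dimension is drawn independently with a 1D Gaussian, and `compose` stands in for the 6D motion composition of the slides.

```python
import numpy as np

rng = np.random.default_rng(42)

def compose(a, b):
    """Planar pose composition (x, y, theta)."""
    x, y, th = a
    bx, by, bth = b
    return np.array([x + np.cos(th) * bx - np.sin(th) * by,
                     y + np.sin(th) * bx + np.cos(th) * by,
                     th + bth])

# Illustrative values, not calibrated on a real robot
M = np.array([[1.05, 0.00, 0.0],
              [0.00, 0.95, 0.0],
              [0.00, 0.00, 1.0]])
sigma = np.array([0.01, 0.01, 0.02])   # independent per-dimension noise

def predict(particles, u):
    """Sample an individual motion for each particle and compose it in."""
    new_particles = []
    for x in particles:
        noise = rng.normal(0.0, sigma)   # one 1D Gaussian draw per dimension
        u_sampled = M @ u + noise        # drift-corrected, noisy odometry
        new_particles.append(compose(x, u_sampled))
    return new_particles

particles = [np.zeros(3) for _ in range(2000)]
u_t = np.array([0.05, 0.0, 0.0])        # reported odometry increment
particles = predict(particles, u_t)
```

Each particle ends up with its own sampled motion, so the particle cloud spreads according to the modelled odometry noise.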


Motion Model
§  Resulting particle distribution (2000 particles)
§  Nao robot walking straight for 40 cm
§  Figure shows the particle distribution according to the uncalibrated motion model, with the uncalibrated odometry and the ground truth for comparison


Motion Model
§  Figures compare the particle distribution according to the uncalibrated motion model with the distribution according to the calibrated motion model (each shown with the uncalibrated odometry and the ground truth)
§  The calibrated model properly captures the drift and is closer to the ground truth

Observation Model
§  Observation consists of independent range measurements, height (computed from the values of the joint encoders), and roll and pitch measurements (by IMU)
§  Observation model: product of the individual measurement likelihoods


Observation Model
§  Range data: Raycasting or endpoint model in 3D map


Recap: Simplest Ray-Cast Model
§  Consider the first obstacle along the ray
§  Compare the actually measured distance with the expected distance
§  Use a Gaussian to evaluate the difference
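The three steps above can be sketched directly. The 1D "map" and the noise level `sigma` are illustrative assumptions; a real system would cast rays through a 3D map to obtain the expected distance.

```python
import math

def raycast_likelihood(measured, expected, sigma=0.05):
    """Gaussian likelihood of a range reading given the expected distance
    to the first obstacle along the ray (sigma is an assumed noise level)."""
    return (1.0 / (math.sqrt(2.0 * math.pi) * sigma)
            * math.exp(-0.5 * ((measured - expected) / sigma) ** 2))

def expected_distance(origin, obstacles):
    """Toy 1D ray cast: distance to the first obstacle beyond the origin."""
    return min(o - origin for o in obstacles if o > origin)

exp_d = expected_distance(0.0, [1.2, 2.5])   # first obstacle at 1.2 m
good = raycast_likelihood(1.18, exp_d)       # close to expected -> high
bad = raycast_likelihood(1.60, exp_d)        # far off -> low
```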


Recap: Beam-Endpoint Model
§  Evaluate the distance of the hypothetical beam end point to the closest obstacle in the map with a Gaussian

Courtesy: Thrun, Burgard, Fox
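A minimal sketch of the beam-endpoint model, assuming a 2D map given as a list of obstacle points; the nearest obstacle is found by brute force here, whereas a real implementation would precompute a distance field over the map. The unnormalized Gaussian suffices because particle weights are normalized anyway.

```python
import math

def endpoint_likelihood(endpoint, obstacle_points, sigma=0.1):
    """Score a beam end point by its distance to the closest obstacle
    in the map, evaluated with an (unnormalized) Gaussian."""
    ex, ey = endpoint
    d_min = min(math.hypot(ex - ox, ey - oy) for ox, oy in obstacle_points)
    return math.exp(-0.5 * (d_min / sigma) ** 2)

obstacles = [(1.0, 0.0), (2.0, 1.0)]
hit = endpoint_likelihood((1.02, 0.01), obstacles)   # endpoint near a wall
miss = endpoint_likelihood((1.5, 0.5), obstacles)    # endpoint in free space
```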


Observation Model
§  Range data: Ray-casting or endpoint model in 3D map
§  Torso height: Compare measured value from kinematics to predicted height (according to motion model)
§  IMU data: Compare measured roll and pitch to the predicted angles
§  Use individual Gaussians to evaluate the difference


Likelihoods of Measurements
§  Each likelihood is a Gaussian distribution
§  Standard deviations corresponding to the noise characteristics of the joint encoders and the IMU
§  The height likelihood is computed from the height difference to the closest ground level in the map
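Combining these individual Gaussians into one particle weight factor can be sketched as follows. The standard deviations are illustrative assumptions (they would come from the encoder and IMU noise characteristics), and the function names are ours.

```python
import math

def gaussian(x, mu, sigma):
    """Gaussian density N(x; mu, sigma^2)."""
    return (1.0 / (math.sqrt(2.0 * math.pi) * sigma)
            * math.exp(-0.5 * ((x - mu) / sigma) ** 2))

# Assumed noise levels for height [m], roll and pitch [rad] (illustrative)
SIGMA_HEIGHT, SIGMA_ROLL, SIGMA_PITCH = 0.01, 0.03, 0.03

def pose_likelihood(measured, predicted):
    """Product of the individual Gaussian likelihoods for torso height,
    roll, and pitch, assuming the measurements are independent."""
    z, phi, theta = measured
    z_p, phi_p, theta_p = predicted
    return (gaussian(z, z_p, SIGMA_HEIGHT)
            * gaussian(phi, phi_p, SIGMA_ROLL)
            * gaussian(theta, theta_p, SIGMA_PITCH))

close = pose_likelihood((0.32, 0.01, -0.01), (0.32, 0.0, 0.0))
far = pose_likelihood((0.35, 0.10, 0.05), (0.32, 0.0, 0.0))
```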


Localization Evaluation
§  Trajectory of 5 m, 10 runs each
§  Ground truth from external motion capture system
§  Raycasting results in a significantly smaller error
§  Calibrated motion model requires fewer particles


Trajectory and Error Over Time: 2D Laser Range Finder


Trajectory and Error Over Time: Depth Data

14 Int. Journal of Humanoid Robotics (IJHR), to appear. Author's preprint.

Fig. 4. Experimental Environment II

At each sample point x_j^(i), the algorithm evaluates the observation likelihood p(r_t, C_t | m, x_j^(i)) based on the current range measurement r_t and the set of camera images C_t. We assume that the range measurement and the measurements from the individual images are conditionally independent and compute the range measurement observation p(r_t | m, x_j^(i)) according to the raycasting model (Sec. 3.3.2) and the vision observations p(c_t | m, x_j^(i)) according to chamfer matching (Sec. 4.1). Finally, the algorithm draws a new particle pose from a Gaussian fitted to the weighted samples and computes the corresponding importance weight according to

w_t^(i) ∝ w_{t−1}^(i) · η^(i) · p(z̃_t | m, x_t^(i)) · p(φ̃_t | x_t^(i)) · p(θ̃_t | x_t^(i))   (25)

with the observation models defined in Eq. (14)–(16).
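A multiplicative weight update of this kind can be sketched generically. This is a simplified illustration, not the preprint's implementation: the factor η^(i) and the individual observation likelihoods are abstracted into a single likelihood value per particle, and the weights are renormalized so the proportionality becomes an equality.

```python
def update_weights(weights, likelihoods):
    """Multiply each particle's previous weight by its observation
    likelihood (cf. the structure of Eq. (25)) and renormalize."""
    new = [w * l for w, l in zip(weights, likelihoods)]
    total = sum(new)
    if total == 0.0:
        # Degenerate case (all likelihoods zero): fall back to uniform
        return [1.0 / len(weights)] * len(weights)
    return [w / total for w in new]

w = update_weights([0.25, 0.25, 0.25, 0.25], [0.9, 0.1, 0.5, 0.5])
```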

5. Experiments

In a set of localization experiments, we evaluated various aspects of our localization approach with two different Nao humanoids. The experimental environments are shown in Fig. 1 and Fig. 4. A volumetric 3D map of these environments was manually constructed.

We used the weighted mean of the particle distribution as estimated localization pose of the robot. To provide ground truth data, we used a MotionAnalysis Raptor-E motion capture system in both environments. For the tracking experiments, we initialized the particle filter pose at the ground truth pose returned by the motion capture system.

5.1. Humanoid Robot Platform

As a robotic platform in our experiments, we used the Aldebaran Robotics Nao humanoid, shown in Fig. 5. The robot is 58 cm tall, weighs 4.8 kg and has 25 degrees of freedom.


Summary
§  Estimation of the 6D torso pose in a given 3D model
§  Odometry estimate from kinematic walking
§  Sample an individual motion for each particle using the motion model
§  Use individual Gaussians in the observation model to evaluate the differences between measured and expected values
§  The particle filter makes it possible to both locally track and globally estimate the robot's pose


Literature
§  Chapter 3, Humanoid Robot Navigation in Complex Indoor Environments, Armin Hornung, PhD thesis, Univ. Freiburg, 2014


Internal Evaluation
§  Please fill out the sheets
§  Provide helpful comments


Acknowledgment
§  Previous versions of parts of the slides have been created by Wolfram Burgard, Armin Hornung, and Cyrill Stachniss