1
Humanoid Robotics
Monte Carlo Localization
Maren Bennewitz
2
Motivation
§ To perform useful service, a robot needs to know its global pose w.r.t. the environment
§ Motion commands are only executed inaccurately
§ Estimate the global pose of a robot in a given model of the environment
§ Based on its sensor data and odometry measurements
§ Sensor data: range image with depth measurements or laser range readings
3
Basic Probability Rules (1)
If x and y are independent:
Bayes’ rule:
Often written with a normalizer: the denominator is a normalizing constant that ensures that the posterior on the left-hand side sums up to 1 over all possible values of x.
In case of background knowledge, Bayes' rule turns into
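The equation images on this slide are missing from the transcript; the standard forms of these rules can be written out as:

```latex
p(x, y) = p(x)\, p(y) \quad \text{(if $x$ and $y$ are independent)}

p(x \mid y) = \frac{p(y \mid x)\, p(x)}{p(y)} = \eta\, p(y \mid x)\, p(x)

p(x \mid y, z) = \frac{p(y \mid x, z)\, p(x \mid z)}{p(y \mid z)} \quad \text{(with background knowledge $z$)}
```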
4
Basic Probability Rules (2) Law of total probability:
continuous case
discrete case
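The formula images are missing here as well; the law of total probability in both cases reads:

```latex
p(x) = \sum_{y} p(x \mid y)\, p(y) \quad \text{(discrete case)}

p(x) = \int p(x \mid y)\, p(y)\, \mathrm{d}y \quad \text{(continuous case)}
```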
5
Markov Assumption
x_t: state of a dynamical system
z_t: observations
u_t: actions
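The Markov assumptions themselves (equation images lost in extraction) can be stated in the notation above:

```latex
p(z_t \mid x_{0:t}, z_{1:t-1}, u_{1:t}) = p(z_t \mid x_t)

p(x_t \mid x_{0:t-1}, z_{1:t-1}, u_{1:t}) = p(x_t \mid x_{t-1}, u_t)
```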
6
Task
§ Estimate the pose of a robot in a given model of the environment
§ Based on its sensor data and odometry measurements
7
State Estimation
§ Estimate the state given observations and odometry measurements/actions
§ Goal: Calculate the distribution p(x_t | z_{1:t}, u_{1:t})
8
Recursive Bayes Filter 1
Definition of the belief
all data up to time t
9
Recursive Bayes Filter 2
Bayes’ rule
10
Recursive Bayes Filter 3
Markov assumption
11
Recursive Bayes Filter 4
Law of total probability
12
Recursive Bayes Filter 5
Markov assumption
13
Recursive Bayes Filter 6
14
Recursive Bayes Filter 7
recursive term
15
Recursive Bayes Filter 7
motion model observation model
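The derivation shown across slides 8–15 lost its equation images in the transcript; the standard recursive Bayes filter derivation they depict is:

```latex
\begin{align*}
bel(x_t) &= p(x_t \mid z_{1:t}, u_{1:t}) \\
&= \eta\, p(z_t \mid x_t, z_{1:t-1}, u_{1:t})\, p(x_t \mid z_{1:t-1}, u_{1:t})
  && \text{Bayes' rule} \\
&= \eta\, p(z_t \mid x_t)\, p(x_t \mid z_{1:t-1}, u_{1:t})
  && \text{Markov assumption} \\
&= \eta\, p(z_t \mid x_t) \int p(x_t \mid x_{t-1}, z_{1:t-1}, u_{1:t})\,
     p(x_{t-1} \mid z_{1:t-1}, u_{1:t})\, \mathrm{d}x_{t-1}
  && \text{total probability} \\
&= \eta\, p(z_t \mid x_t) \int
     \underbrace{p(x_t \mid x_{t-1}, u_t)}_{\text{motion model}}\,
     \underbrace{bel(x_{t-1})}_{\text{recursive term}}\, \mathrm{d}x_{t-1}
  && \text{Markov assumption}
\end{align*}
```

Here p(z_t | x_t) is the observation model, so the belief is computed recursively from the previous belief, the motion model, and the observation model.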
16
Probabilistic Motion Model
§ Robots execute motion commands only inaccurately
§ The motion model specifies the probability that action u_t moves the robot from x_{t-1} to x_t: p(x_t | x_{t-1}, u_t)
§ Defined individually for each type of robot
17
Observation Model for Range Readings
§ Sensor data consists of measurements
§ The individual measurements are independent given the robot's pose
§ "How well can the distance measurements be explained given the pose (and the map)?"
18
Observation Model: Simplest Ray-Cast Model
§ Consider the first obstacle along the ray
§ Compare the actually measured distance with the expected distance to the obstacle
§ Use a Gaussian to evaluate the difference
[Figure: measured distance vs. object in map]
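A minimal sketch of this model: each beam's measured distance is compared to the expected distance from ray casting and evaluated with a Gaussian, and the beams are multiplied together since they are assumed independent given the pose. The function names and the sigma value are illustrative, not from the slides.

```python
import math

def gaussian(x, mu, sigma):
    """Evaluate a 1D Gaussian density at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def beam_likelihood(measured, expected, sigma=0.1):
    """Simple ray-cast model for one beam: Gaussian on the difference
    between the measured distance and the expected distance to the
    first obstacle along the ray."""
    return gaussian(measured, expected, sigma)

def range_scan_likelihood(measurements, expected_distances, sigma=0.1):
    """Beams are independent given the pose, so multiply their likelihoods."""
    p = 1.0
    for z, z_exp in zip(measurements, expected_distances):
        p *= beam_likelihood(z, z_exp, sigma)
    return p
```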
19
Observation Model: Beam-Endpoint Model Evaluate the distance of the hypothetical beam end point to the closest obstacle in the map with a Gaussian
Courtesy: Thrun, Burgard, Fox
20
Summary
§ The Bayes filter is a framework for state estimation
§ The motion and observation models are the central components of the Bayes filter
21
Particle Filter Monte Carlo Localization
22
Particle Filter
§ One implementation of the recursive Bayes filter
§ Non-parametric framework (not only Gaussian)
§ Arbitrary models can be used as motion and observation models
23
Motivation Goal: Approach for dealing with arbitrary distributions
24
Key Idea: Samples Use a set of weighted samples to represent arbitrary distributions
samples
25
Particle Set
Set of weighted samples, where each sample consists of a state hypothesis and an importance weight
26
Particle Filter
§ The set of weighted particles approximates the belief distribution about the robot's pose
§ Prediction: Sample from the motion model (propagate particles forward)
§ Correction: Weight the samples based on the observation model
27
Monte Carlo Localization
§ Each particle is a pose hypothesis
§ Prediction: For each particle, sample a new pose from the motion model
§ Correction: Weight samples according to the observation model
[The weight has to be chosen in exactly that way if the samples are drawn from the motion model, see Cognitive Robotics lecture]
28
Monte Carlo Localization
§ Each particle is a pose hypothesis
§ Prediction: For each particle, sample a new pose from the motion model
§ Correction: Weight samples according to the observation model
§ Resampling: Draw each sample with a probability proportional to its weight and repeat J times (J = #particles)
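The three steps above can be sketched as one MCL iteration. This is a minimal illustration: the motion and observation models are passed in as functions, and the resampling here uses simple weighted draws (the low-variance variant comes later in the lecture).

```python
import random

def mcl_step(particles, u, z, sample_motion, measurement_likelihood):
    """One MCL iteration over weighted particles [(pose, weight), ...].
    sample_motion(pose, u) draws a new pose from the motion model;
    measurement_likelihood(z, pose) evaluates the observation model."""
    # Prediction: sample a new pose for each particle from the motion model
    predicted = [sample_motion(pose, u) for pose, _ in particles]
    # Correction: weight each sample according to the observation model
    weights = [measurement_likelihood(z, pose) for pose in predicted]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resampling: draw J particles with probability proportional to weight
    J = len(particles)
    poses = random.choices(predicted, weights=weights, k=J)
    return [(pose, 1.0 / J) for pose in poses]
```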
29
MCL – Correction Step
Image Courtesy: S. Thrun, W. Burgard, D. Fox
30
MCL – Resampling & Prediction
Image Courtesy: S. Thrun, W. Burgard, D. Fox
31
MCL – Correction Step
Image Courtesy: S. Thrun, W. Burgard, D. Fox
32
MCL – Resampling & Prediction
Image Courtesy: S. Thrun, W. Burgard, D. Fox
33
Resampling
§ Draw each sample with a probability proportional to its weight; repeat J times (J = #particles)
§ Informally: "Replace unlikely samples by more likely ones"
§ Survival-of-the-fittest principle
§ "Trick" to avoid that many samples cover unlikely states
§ Needed since we have a limited number of samples
34
Resampling
["Roulette wheel": segments proportional to the weights w_1, …, w_n; assumption: normalized weights]
Roulette-wheel resampling:
§ Draw randomly between 0 and 1
§ Binary search in the cumulative weights
§ Repeat J times
§ O(J log J)
Systematic resampling:
§ Low-variance resampling
§ O(J)
§ Also called "stochastic universal resampling"
§ Step size 1/J, initial value between 0 and 1/J
35
Low Variance Resampling
[Algorithm sketch: J = #particles, step size = 1/J; initialization with a random start value between 0 and 1/J; walk through the cumulative sum of weights and decide whether or not to take particle i]
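The algorithm sketched on this slide can be written out as follows: one random start in [0, 1/J), then a fixed step of 1/J through the cumulative weights, which makes the procedure O(J).

```python
import random

def low_variance_resample(particles, weights):
    """Low-variance (systematic / "stochastic universal") resampling.
    Assumes the weights are normalized."""
    J = len(particles)
    step = 1.0 / J               # step size = 1/J
    r = random.uniform(0.0, step)  # initial value between 0 and 1/J
    c = weights[0]               # cumulative sum of weights
    i = 0
    resampled = []
    for j in range(J):
        u = r + j * step
        # Decide whether or not to take particle i (guard against
        # floating-point overshoot at the end of the cumulative sum)
        while u > c and i < J - 1:
            i += 1
            c += weights[i]
        resampled.append(particles[i])
    return resampled
```

Particles with large weights span several step positions and are drawn multiple times, while unlikely particles are skipped, matching the "survival of the fittest" intuition from the previous slide.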
36
initialization
Courtesy: Thrun, Burgard, Fox
37
observation
Courtesy: Thrun, Burgard, Fox
38
weight update, resampling,
motion update Courtesy: Thrun, Burgard, Fox
39
observation
Courtesy: Thrun, Burgard, Fox
40
weight update, resampling
Courtesy: Thrun, Burgard, Fox
41
motion update
Courtesy: Thrun, Burgard, Fox
42
observation
Courtesy: Thrun, Burgard, Fox
43
weight update, resampling
Courtesy: Thrun, Burgard, Fox
44
motion update
Courtesy: Thrun, Burgard, Fox
45
observation
Courtesy: Thrun, Burgard, Fox
46
weight update, resampling
Courtesy: Thrun, Burgard, Fox
47
motion update
Courtesy: Thrun, Burgard, Fox
48
observation
Courtesy: Thrun, Burgard, Fox
49
MCL in Action
50
Summary – Particle Filters
§ Particle filters are non-parametric, recursive Bayes filters
§ The belief about the state is represented by a set of weighted samples
§ The motion model is used to draw the samples for the next time step
§ The weights of the particles are computed based on the observation model
§ Resampling is carried out based on the weights
§ Applied to localization, this is called Monte Carlo localization (MCL)
51
Literature § Book: Probabilistic Robotics,
S. Thrun, W. Burgard, and D. Fox
52
6D Localization for Humanoid Robots
53
Localization for Humanoids
3D environments require a 6D pose estimate: 2D position, height, and roll, pitch, yaw
→ estimate the 6D torso pose
54
Localization for Humanoids
§ Recursively estimate the distribution about the robot's pose using MCL
§ Probability distribution represented by pose hypotheses (particles)
§ Needed: motion model and observation model
55
Motion Model
§ The odometry estimate u_t corresponds to the incremental motion of the torso
§ u_t is computed with forward kinematics (FK) from the current stance foot while walking
56
Kinematic Walking Odometry
3.2. Motion Model

[Figure 3.1: Coordinate frames F_rfoot, F_lfoot, F_torso, F_odom for kinematic walking odometry of a biped robot under ideal conditions. Initially, the frame F_rfoot of the right stance foot is fixed in the world frame F_odom and the torso frame F_torso can be computed with forward kinematics over the right leg. In the next step, the left swing leg touches the ground. While both feet remain on the ground, the left foot frame F_lfoot is computed with forward kinematics over both leg chains. Finally, for the next stepping phase the left leg becomes the stance leg and F_lfoot remains static with respect to F_odom, which creates a new reference to compute F_torso.]

In the prediction step of MCL, a new pose is drawn for each particle according to the motion model p(x_t | x_{t-1}, u_t). In our case, u_t corresponds to the incremental motion of the humanoid's torso in a local coordinate frame and is computed with forward kinematics from the current stance foot while walking. This kind of odometric estimate is usually referred to as kinematic odometry. Figure 3.1 shows a single stepping phase and the coordinate frames used to compute the kinematic odometry. As illustrated in Figure 3.2, u_t can then be computed from two consecutive torso poses x̃_{t-1} and x̃_t, which are 6D rigid body transforms in the odometry coordinate frame F_odom:

T(u_t) = (x̃_{t-1})^{-1} ⊕ x̃_t .  (3.7)

Here, T(u_t) denotes the 6D rigid body transform of the odometry motion u_t. On the other hand, we can think of the humanoid's odometric pose estimates as a concatenation of the incremental motions:

x̃_t = x̃_{t-1} ⊕ T(u_t) .  (3.8)

Under ideal conditions the stance foot of the robot rests firmly and precisely on the ground, leading to an accurate 6D pose estimate with kinematic odometry. However, the feet of the robot typically slip on the ground due to inaccurate execution of motions. Additionally, gear backlash in individual joints can aggravate the problem.
frame of the current stance foot
frame of the torso, transform can be computed with FK over the right leg
Keep track of the transform to the current stance foot
57
Kinematic Walking Odometry
Both feet remain on the ground; compute the transform to the frame of the left foot with FK
58
Kinematic Walking Odometry
The left leg becomes the stance leg and is the new reference to compute the transform to the torso frame
59
Kinematic Walking Odometry
§ Using FK, the poses of all joints and sensors can be computed relative to the stance foot at each time step
§ The transform from the world frame to the stance foot is updated whenever the swing foot becomes the new stance foot
60
Odometry Estimate
Odometry estimate u_t from two consecutive torso poses

[Figure 3.2: Odometry estimate u_t from two consecutive torso poses during the walking cycle. From Chapter 3, Monte Carlo Localization for Humanoid Robots.]

Only noisy estimates of the odometry are available while walking and a substantial amount of drift accumulates over time. Consequently, a particle filter has to account for that noise with a higher variance, requiring a higher number of particles and thus more computational power for successful pose estimation. By learning the motion model parameters instead, the localization performance can be increased, both in terms of computational load as well as accuracy.

Here, we consider the most general case of any kind of 3D positional and rotational displacement, for instance originating from an omnidirectional walking engine. We furthermore assume that systematic drift affects the motion reported by odometry in the 2D plane, i.e., only (x̃, ỹ, ψ̃) are affected from x̃ = (x̃, ỹ, z̃, φ̃, θ̃, ψ̃). This is not a strong restriction as long as the humanoid walks on a solid surface, since its motion is constrained by this surface and it cannot leave the ground. Even when climbing challenging terrain such as stairs, the drift of the motion occurs in the 2D plane of the stance leg as long as the robot does not fall or slide down a slope. General noise in the kinematic estimate of the humanoid's height above the ground does not lead to a systematic drift.

3.2.1. Motion Model Calibration
For odometry calibration, we will refer to the reduced state vectors containing 2D position and orientation as x' = (x, y, ψ)ᵀ. Corresponding to Eq. (3.8), u'_t = (u_{x,t}, u_{y,t}, u_{ψ,t})ᵀ estimates the displacement between two poses reported by odometry:

x̃'_t = x̃'_{t-1} + u'_t .  (3.9)

To calibrate the drift of u'_t, we assume that a ground truth pose x' is available in a prior learning phase, e.g., from an external motion capture system, scan matching, or visual odometry. Based on the deviations from the ground truth, the values of a calibration matrix M ∈ R^{3×3} can be determined to correct the 2D drift of odometry, such that

x'_t = x'_{t-1} + M u'_t .  (3.10)
61
Odometry Estimate
§ The incremental motion of the torso is computed from the kinematics of the legs
§ Typical error sources: slippage on the ground and backlash in the joints
§ Accordingly, we only have noisy odometry estimates while walking; the drift accumulates over time
§ The particle filter has to account for that noise within the motion model
62
Motion Model
§ Noise modelled as a Gaussian (covariance matrix Σ) with systematic drift on the 2D plane (calibration matrix M)
§ The prediction step samples a new pose for each particle according to the motion model, composing the particle's pose with the calibrated odometry (motion composition)
§ M is learned with least-squares optimization using ground truth data (as in Ch. 2)
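The sampling equation's image is missing here; a plausible reconstruction from the surrounding labels (covariance Σ, calibration matrix M, motion composition ⊕), with the caveat that in the calibrated model M acts on the reduced 2D components of u_t, is:

```latex
x_t^{[i]} \sim \mathcal{N}\!\left(x_{t-1}^{[i]} \oplus M u_t,\; \Sigma\right)
```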
63
Sampling from the Motion Model
We need to draw samples from a Gaussian
64
How to Sample from a Gaussian
§ Drawing from a 1D Gaussian can be done in closed form
[Example with 10^6 samples]
65
How to Sample from a Gaussian
§ Drawing from a 1D Gaussian can be done in closed form
§ If we consider the individual dimensions of the odometry as independent, we can draw each dimension using a 1D Gaussian
→ draw each of the dimensions independently
66
Sampling from the Odometry
§ We sample an individual motion for each particle
§ We then incorporate the sampled odometry into the pose of particle i
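A minimal 2D sketch of these two steps (the lecture's actual model is 6D): each odometry dimension is perturbed independently with a 1D Gaussian, and the sampled motion is then composed with the particle's pose. The sigma values are illustrative noise parameters, not calibrated ones.

```python
import math
import random

def sample_odometry(pose, u, sigmas):
    """Sample a noisy incremental motion for one particle and compose it
    with the particle's pose (x, y, theta). Each dimension of the
    odometry u = (dx, dy, dtheta) is drawn independently from a 1D
    Gaussian with the given (assumed) standard deviations."""
    dx, dy, dtheta = (random.gauss(mean, s) for mean, s in zip(u, sigmas))
    x, y, theta = pose
    # Rotate the sampled local motion into the world frame and add it
    xn = x + math.cos(theta) * dx - math.sin(theta) * dy
    yn = y + math.sin(theta) * dx + math.cos(theta) * dy
    return (xn, yn, theta + dtheta)
```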
68
Motion Model
[Plot: odometry (uncalibrated) vs. ground truth; particle distribution according to the uncalibrated motion model]
§ Resulting particle distribution (2000 particles)
§ Nao robot walking straight for 40 cm
69
Motion Model
[Plots: odometry (uncalibrated) vs. ground truth; particle distribution according to the uncalibrated motion model, and particle distribution according to the calibrated motion model]
The calibrated model properly captures the drift and is closer to the ground truth
Observation Model
§ The observation consists of independent
  § range measurements,
  § height (computed from the values of the joint encoders),
  § and roll and pitch measurements (from the IMU)
§ Observation model: product of the individual measurement likelihoods
Observation Model
Range data: Raycasting or endpoint model in 3D map
73
Recap: Simplest Ray-Cast Model
§ Consider the first obstacle along the ray
§ Compare the actually measured distance with the expected distance
§ Use a Gaussian to evaluate the difference
[Figure: measured distance vs. object in map]
74
Recap: Beam-Endpoint Model Evaluate the distance of the hypothetical beam end point to the closest obstacle in the map with a Gaussian
Courtesy: Thrun, Burgard, Fox
75
Observation Model
§ Range data: Ray-casting or endpoint model in 3D map
§ Torso height: Compare measured value from kinematics to predicted height (accord. to motion model)
§ IMU data: Compare measured roll and pitch to the predicted angles
§ Use individual Gaussians to evaluate the difference
76
Likelihoods of Measurements
§ Each likelihood is a Gaussian distribution
§ Standard deviations correspond to the noise characteristics of the joint encoders and IMU
§ The height likelihood is computed from the height difference to the closest ground level in the map
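The combination of the individual Gaussians can be sketched as follows. The standard deviations here are placeholder values, not the calibrated noise characteristics from the lecture, and the range likelihood (ray casting or endpoint model) is assumed to be computed separately and multiplied in the same way.

```python
import math

def gaussian(x, mu, sigma):
    """1D Gaussian density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def proprioceptive_likelihood(z_height, z_roll, z_pitch,
                              exp_height, exp_roll, exp_pitch,
                              s_height=0.02, s_angle=0.05):
    """Evaluate the height, roll, and pitch measurements with individual
    Gaussians and multiply them, since they are assumed conditionally
    independent given the pose."""
    return (gaussian(z_height, exp_height, s_height)
            * gaussian(z_roll, exp_roll, s_angle)
            * gaussian(z_pitch, exp_pitch, s_angle))
```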
78
Localization Evaluation
§ Trajectory of 5 m, 10 runs each
§ Ground truth from an external motion capture system
§ Raycasting results in a significantly smaller error
§ Calibrated motion model requires fewer particles
79
Trajectory and Error Over Time 2D Laser Range Finder
80
Trajectory and Error Over Time Depth Data
Int. Journal of Humanoid Robotics (IJHR), to appear. Author's preprint.

Fig. 4. Experimental Environment II

At each sample point x_j^(i), the algorithm evaluates the observation likelihood p(r_t, C_t | m, x_j^(i)) based on the current range measurement r_t and the set of camera images C_t. We assume that the range measurement and the measurements from the individual images are conditionally independent and compute the range measurement observation p(r_t | m, x_j^(i)) according to the raycasting model (Sec. 3.3.2) and the vision observations p(c_t | m, x_j^(i)) according to chamfer matching (Sec. 4.1). Finally, the algorithm draws a new particle pose from a Gaussian fitted to the weighted samples and computes the corresponding importance weight according to

w_t^(i) ∝ w_{t-1}^(i) · η^(i) · p(z̃_t | m, x_t^(i)) · p(φ̃_t | x_t^(i)) · p(θ̃_t | x_t^(i))  (25)

with the observation models defined in Eq. (14)–(16).

5. Experiments
In a set of localization experiments, we evaluated various aspects of our localization approach with two different Nao humanoids. The experimental environments are shown in Fig. 1 and Fig. 4. A volumetric 3D map of these environments was manually constructed.

We used the weighted mean of the particle distribution as the estimated localization pose of the robot. To provide ground truth data, we used a MotionAnalysis Raptor-E motion capture system in both environments. For the tracking experiments, we initialized the particle filter pose at the ground truth pose returned by the motion capture system.

5.1. Humanoid Robot Platform
As a robotic platform in our experiments, we used the Aldebaran Robotics Nao humanoid, shown in Fig. 5. The robot is 58 cm tall, weighs 4.8 kg and has 25 degrees of freedom.
81
Summary
§ Estimation of the 6D torso pose in a given 3D model
§ Odometry estimate from kinematic walking
§ Sample an individual motion for each particle using the motion model
§ Use individual Gaussians in the observation model to evaluate the differences between measured and expected values
§ The particle filter allows locally tracking and globally estimating the robot's pose
82
Literature § Chapter 3, Humanoid Robot Navigation in
Complex Indoor Environments, Armin Hornung, PhD thesis, Univ. Freiburg, 2014
83
Internal Evaluation
§ Please fill out the sheets § Provide helpful comments
84
Acknowledgment § Previous versions of parts of the slides have
been created by Wolfram Burgard, Armin Hornung, and Cyrill Stachniss