
1

Motion estimation from image and inertial measurements

Dennis Strelow and Sanjiv Singh

Carnegie Mellon University

2

Introduction (1)

micro air vehicle (MAV) navigation

AeroVironment Black Widow, AeroVironment Microbat

3

Introduction (2)

mars rover navigation

Mars Exploration Rovers (MER), Hyperion

4

Introduction (3)

robotic search and rescue

RHex; Center for Robot-Assisted Search and Rescue, U. of South Florida

5

Introduction (4)

NASA ISS personal satellite assistant

6

Introduction (5)

Each of these requires:

six degree of freedom motion

in unknown environments

without GPS or other absolute positioning

over the long term

…and in many cases…

small, light, and cheap sensors

7

Introduction (6)

If we adopt a camera as our sensor, estimate…

the vehicle (i.e., sensor) motion

…and as a by-product:

the sparse structure of the environment

8

Introduction (7)

One paradigm for estimating the motion and sparse structure uses two steps:

tracking: track image features through the image sequence

estimation: find 6 DOF camera positions and 3D point positions consistent with the image features

9

Introduction (8)


Lucas-Kanade

11

Introduction (9)

12

Introduction (10)


bundle adjustment, a.k.a. nonlinear shape-from-motion

14

Introduction (11)

This example sequence is benign:

small number of images

large number of easily tracked image features

all points visible in all frames

…and the estimated motion and scene structure are still ambiguous

15

Introduction (12)

Sequences from navigation applications are much harder:

long observation sequences

small number of features, which may be poorly tracked

each feature visible in a small section of the image stream

16

Introduction (13)

We have been working on both:

tracking

estimation

This talk:

mostly estimation

17

Introduction (14)

18

Introduction (15)

Image measurements only

often the only option

Image and inertial measurements

can disambiguate image-only estimates

more complex: requires more calibration, estimation of additional unknowns

19

Introduction (16)

Batch estimation:

uses all of the observations at once

all observations must be available before computation begins

Online estimation:

observations are incorporated as they arrive

suitable for long or “infinite” sequences

20

Introduction (17)

Conventional images:

images from standard cameras

model as pinhole projection with radial distortion

Omnidirectional images:

images that result from combining a conventional camera with a convex mirror

requires a more complex projection model

21

Introduction (18)

22

Introduction (19)

23

Outline

Background: image-only batch estimation

Image-and-inertial batch estimation

Online estimation

Experiments

Future work

24

Background: image-only batch estimation (1)


Tracking provides image location xij for each point j that appears in image i

26

Background: image-only batch estimation (2)

Suppose we also have estimates of:

the camera rotation ρi and translation ti at the time of each image

the three-dimensional point positions Xj of each tracked point

Then the reprojections are π(R(ρi)(Xj - ti)), where π is the camera projection model

27

Background: image-only batch estimation (3)

28

Background: image-only batch estimation (4)

29

Background: image-only batch estimation (5)

So, minimize:

with respect to all the ρi, ti, Xj

“Bundle adjustment”: use iterative nonlinear minimization on this error

Eimage = Σi,j D( π(R(ρi)(Xj - ti)), xij )
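To make the minimization concrete, here is a minimal bundle adjustment sketch built on SciPy's nonlinear least-squares solver. The ideal pinhole `project` function, the parameter packing, and all names are illustrative assumptions rather than the authors' implementation, and D is taken here to be the Euclidean image distance.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(rho, t, X):
    """Reproject world point X into the camera with rotation rho (a rotation
    vector) and translation t, using an ideal pinhole model."""
    Xc = Rotation.from_rotvec(rho).as_matrix() @ (X - t)   # R(rho)(X - t)
    return Xc[:2] / Xc[2]                                  # pi(.)

def image_residuals(params, observations, n_cams, n_pts):
    """Stack the reprojection errors pi(R(rho_i)(X_j - t_i)) - x_ij."""
    cams = params[:6 * n_cams].reshape(n_cams, 6)   # rows are [rho_i | t_i]
    pts = params[6 * n_cams:].reshape(n_pts, 3)     # rows are X_j
    res = [project(cams[i, :3], cams[i, 3:], pts[j]) - x_ij
           for i, j, x_ij in observations]          # x_ij: observed 2-vector
    return np.concatenate(res)

def bundle_adjust(cams0, pts0, observations):
    """Minimize E_image over all rho_i, t_i, X_j from an initial estimate."""
    x0 = np.concatenate([cams0.ravel(), pts0.ravel()])
    sol = least_squares(image_residuals, x0, method="lm",
                        args=(observations, len(cams0), len(pts0)))
    n = 6 * len(cams0)
    return sol.x[:n].reshape(-1, 6), sol.x[n:].reshape(-1, 3)
```

As the next slides note, this needs an initial estimate, and missing points are tolerated naturally because only the observed (i, j) pairs contribute residuals.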

30

Background: image-only batch estimation (6)

Bundle adjustment pros:

any projection model can be used

some missing points can be tolerated

can be extended

Bundle adjustment cons:

requires some initial estimate

slow

31

Background: image-only batch estimation (7)

Batch estimation cons:

all observations must be in hand before estimation begins

Image-only estimation cons:

sensitivity to mistracking

cannot recover global scale

32

Outline

Background: image-only batch estimation

Image-and-inertial batch estimation

Online estimation

Experiments

Future work

33

Image-and-inertial batch estimation (1)

34

Image-and-inertial batch estimation (2)

Image and inertial measurements are highly complementary


Inertial measurements can:

establish the global scale

provide robustness if:
• too few features
• features infinitely far away
• features in an “accidental” configuration

36

Image-and-inertial batch estimation (3)

Image measurements can:

reduce inertial integration drift

separate the effects of the following in inertial readings:
• rotation
• gravity
• acceleration
• bias

37

Image-and-inertial batch estimation (4)

Gyro measurements:

ω’, ω: measured and actual angular velocity

bω: gyro bias

n: gaussian noise

ω’ = ω + bω + n

38

Image-and-inertial batch estimation (5)

Accelerometer measurements:

a’, a: measured and actual acceleration

g: gravity vector

ba: accelerometer bias

n: gaussian noise

a’ = Rᵀ(a - g) + ba + n
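As a small illustration of the two sensor models above, the sketch below synthesizes gyro and accelerometer readings from a known trajectory. The names, the noise levels, and the treatment of R(ρ) as the camera-to-world rotation at the time of the reading are assumptions made for this sketch only.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def gyro_measurement(omega, b_omega, sigma_gyro=0.01):
    """omega' = omega + b_omega + n, with zero-mean gaussian noise n."""
    return omega + b_omega + sigma_gyro * np.random.randn(3)

def accel_measurement(rho, a, g, b_a, sigma_accel=0.05):
    """a' = R^T (a - g) + b_a + n: the world-frame acceleration, less gravity,
    is rotated into the sensor frame (R(rho) assumed camera-to-world here),
    then corrupted by bias and noise."""
    R = Rotation.from_rotvec(rho).as_matrix()
    return R.T @ (a - g) + b_a + sigma_accel * np.random.randn(3)
```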

39

Image-and-inertial batch estimation (6)

Minimizes a combined error:

Ecombined = Eimage + Einertial

40

Image-and-inertial batch estimation (7)

Image term Eimage is the same as before

41

Image-and-inertial batch estimation (8)

Inertial error term Einertial is:

Einertial = Erotation + Evelocity + Etranslation
          = Σi=1…f-1 Erotation,i + Σi=1…f-1 Evelocity,i + Σi=1…f-1 Etranslation,i


44

Image-and-inertial batch estimation (9)

Etranslation,i = D( It(τi-1, τi, …, ti-1), ti )

[Figure: translation vs. time. τi-1 is the time of image i - 1 and τi the time of image i; ti-1 = t(τi-1) and ti = t(τi) are the estimated translations at those times; It(τi-1, τi, …, ti-1) is the translation at τi predicted from ti-1 and the intervening inertial measurements.]

45

Image-and-inertial batch estimation (10)

[Figure: translation vs. time, marked with the image times τ0, τ1, τ2, …, τf-3, τf-2, τf-1.]

Etranslation = Σi=1…f-1 Etranslation,i

46

Image-and-inertial batch estimation (11)

Einertial = Erotation + Evelocity + Etranslation
          = Σi=1…f-1 Erotation,i + Σi=1…f-1 Evelocity,i + Σi=1…f-1 Etranslation,i

Erotation,i and Evelocity,i are defined analogously to Etranslation,i


48

Image-and-inertial batch estimation (12)

It(τi-1, τi, …, ti-1) depends on:

τi-1, τi (known)

all inertial measurements for times τi-1 < τ < τi (known)

ρi-1, ti-1

camera linear velocities: vi

g

bω, ba
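As a sketch of how It might be evaluated, the fragment below Euler-integrates the accelerometer readings between τi-1 and τi to predict ti from the quantities listed above, and compares the prediction with the estimated ti. The integration scheme, the choice of D as a squared Euclidean distance, and all names are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def integrate_translation(rho_prev, t_prev, v_prev, g, b_a, accel_meas, times):
    """Predict t_i by Euler-integrating accelerometer readings a' taken at
    `times` spanning [tau_{i-1}, tau_i].  Inverts the accelerometer model,
    a = R(rho)(a' - b_a) + g with R(rho) camera-to-world, and (as a
    simplification for this sketch) holds the rotation fixed at rho_{i-1}."""
    R = Rotation.from_rotvec(rho_prev).as_matrix()
    t, v = np.array(t_prev, float), np.array(v_prev, float)
    for k in range(len(times) - 1):
        dt = times[k + 1] - times[k]
        a_world = R @ (accel_meas[k] - b_a) + g       # world-frame acceleration
        t = t + v * dt + 0.5 * a_world * dt ** 2
        v = v + a_world * dt
    return t                                          # plays the role of I_t(...)

def translation_error_term(t_i, rho_prev, t_prev, v_prev, g, b_a, accel_meas, times):
    """E_translation,i = D(I_t(tau_{i-1}, tau_i, ..., t_{i-1}), t_i), with D
    taken here to be the squared Euclidean distance."""
    pred = integrate_translation(rho_prev, t_prev, v_prev, g, b_a, accel_meas, times)
    return float(np.sum((pred - t_i) ** 2))
```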

49

Image-and-inertial batch estimation (13)

The combined error function (image-and-inertial) is then minimized with respect to:

ρi, ti

Xj

50

Image-and-inertial batch estimation (14)

The combined error function (image and inertial) is then minimized with respect to:

ρi, ti

Xj

vi

g

bω, ba

51

Image-and-inertial batch estimation (15)

52

Image-and-inertial batch estimation (16)

accurate motion estimates even when the image-only and inertial-only estimates are poor

53

Image-and-inertial batch estimation (17)

In addition:

recovers good gravity values, even with poor initialization

global scale often recovered with less than 5% error

54

Outline

Background: image-only batch estimation

Image-and-inertial batch estimation

Online estimation

image measurements only

image and inertial measurements

Experiments

Future work

55

Online algorithms (1)

56

Online algorithms (2)

Batch estimation:

uses all of the observations at once

all observations must be available before computation begins

Online estimation:

observations are incorporated as they arrive

suitable for long or “infinite” sequences

57

Online algorithms (3)

Our online algorithms are iterated extended Kalman filters (IEKFs)

Some components of the IEKF when applied to vehicle motion:

the state estimate distribution is gaussian, represented by a mean and covariance

motion model

measurement model
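A schematic IEKF skeleton with exactly these components (a gaussian state, a motion model f, and a measurement model h) is sketched below. The numerically linearized Jacobians, the fixed iteration count, and all names are placeholders for illustration, not the filter used in this work.

```python
import numpy as np

def numerical_jacobian(fn, x, eps=1e-6):
    """Forward-difference Jacobian of fn at x."""
    y0 = np.atleast_1d(fn(x))
    J = np.zeros((y0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.atleast_1d(fn(x + dx)) - y0) / eps
    return J

class IEKF:
    """Minimal iterated extended Kalman filter: a gaussian state (mean mu,
    covariance P), a motion model f for time propagation, and a measurement
    model h for the measurement update."""

    def __init__(self, mu0, P0, f, h, Q, R):
        self.mu, self.P = mu0, P0      # current distribution on the unknowns
        self.f, self.h = f, h          # motion and measurement models
        self.Q, self.R = Q, R          # process and measurement noise covariances

    def propagate(self, dt):
        """Time propagation: push the mean through f and inflate the covariance."""
        F = numerical_jacobian(lambda x: self.f(x, dt), self.mu)
        self.mu = self.f(self.mu, dt)
        self.P = F @ self.P @ F.T + self.Q * dt

    def update(self, z, iterations=3):
        """Iterated measurement update: relinearize h about the current iterate."""
        mu0, P0 = self.mu.copy(), self.P.copy()
        x = mu0
        for _ in range(iterations):
            H = numerical_jacobian(self.h, x)
            K = P0 @ H.T @ np.linalg.inv(H @ P0 @ H.T + self.R)
            x = mu0 + K @ (z - self.h(x) - H @ (mu0 - x))
        self.mu = x
        self.P = (np.eye(len(x)) - K @ H) @ P0
```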

58

Online algorithms (4)

[Diagram: IEKF data flow. Initialization consumes timestamped observations 1, …, kinit and produces a prior distribution on the unknowns. Thereafter, for each timestamp/observation i (starting with kinit + 1), a time propagation step at timestamp i and a measurement update with observation i each take the current distribution on the unknowns and produce an updated one.]

59

Online estimation (5): image-only

For the image-only online algorithm, the unknowns (state) are:

ρ(τ), t(τ)

currently visible point positions Xj

60

Online estimation (6): image-only, cont.

Initialization: apply the batch image-only algorithm to observations 1, …, kinit to obtain the prior distribution on the unknowns.

61

Online estimation (7): image-only, cont.

Time propagation at timestamp i:

assume that ρ, t are perturbed by gaussian noise

assume that the Xj are unchanged

62

Online estimation (8): image-only, cont.

Measurement update with observation i: to update given an image measurement z, use the reprojection equation

z = [ x0, …, xp-1 ] = [ π(R(ρ)(X0 - t)), …, π(R(ρ)(Xp-1 - t)) ] + n

where X0, …, Xp-1 are the currently visible points
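A sketch of the corresponding measurement prediction h(state) that could be plugged into such a filter's update step is shown below; the state packing [ρ, t, X0, …, Xp-1] and the ideal pinhole projection are assumed layouts for illustration.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def predict_image_measurement(state, n_points):
    """h(state): stack the reprojections pi(R(rho)(Xj - t)) of the currently
    visible points, matching z = [x0, ..., xp-1] + n above.
    Assumed state layout: [rho (3), t (3), X0 (3), ..., Xp-1 (3)]."""
    rho, t = state[:3], state[3:6]
    R = Rotation.from_rotvec(rho).as_matrix()
    z_hat = []
    for j in range(n_points):
        Xj = state[6 + 3 * j : 9 + 3 * j]
        Xc = R @ (Xj - t)                # point in camera coordinates
        z_hat.append(Xc[:2] / Xc[2])     # ideal pinhole projection
    return np.concatenate(z_hat)
```

Passed as h to the IEKF sketch above, the iterated update then relinearizes this function about each new iterate.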

63

Online estimation (9): image-only, cont.

For points that become visible after online operation has begun…

…adapt Smith, Self, and Cheeseman’s method for SLAM
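A rough sketch of that kind of state augmentation follows: the new point's position is initialized as a function g of the current state and its first observation, and the covariance is grown using the corresponding Jacobians. The function names are hypothetical, and the `numerical_jacobian` helper is reused from the IEKF sketch above.

```python
import numpy as np

def augment_state(mu, P, g_fn, z, R_meas):
    """Smith-Self-Cheeseman-style state augmentation: a newly visible point
    X_new = g_fn(mu, z) (e.g. back-projection of its first observation at an
    assumed depth) is appended to the state, and the covariance is grown with
    the Jacobians of g_fn with respect to the state and the measurement."""
    X_new = np.atleast_1d(g_fn(mu, z))
    Gx = numerical_jacobian(lambda x: g_fn(x, z), mu)                 # d g / d state
    Gz = numerical_jacobian(lambda m: g_fn(mu, m), np.atleast_1d(z))  # d g / d measurement
    mu_aug = np.concatenate([mu, X_new])
    P_aug = np.block([[P,       P @ Gx.T],
                      [Gx @ P,  Gx @ P @ Gx.T + Gz @ R_meas @ Gz.T]])
    return mu_aug, P_aug
```

The off-diagonal blocks correlate the new point with the camera state, which is what lets later observations of the point improve the pose estimate.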

64

Online estimation (10): image-and-inertial

[Diagram: the same IEKF data flow as before (initialization from observations 1, …, kinit, then alternating time propagation and measurement update steps), now driven by the image and inertial observations.]


67

Online estimation (11): image-and-inertial, cont.

Augmented state includes:

ρ(τ), t(τ)

currently visible point positions Xj

v(τ)

g

bω, ba

camera system angular velocity: ω(τ)

world system linear acceleration: a(τ)

68

Online estimation (12): image-and-inertial, cont.

Initialization: apply the batch image-and-inertial algorithm to observations 1, …, kinit to obtain the prior distribution on the unknowns.

69

Online estimation (13): image-and-inertial, cont.

Time propagation at timestamp i:

assume that ω, a are perturbed by gaussian noise

assume that Xj, g, bω, ba are unchanged
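A sketch of the deterministic part of this propagation step is shown below: the pose and velocity are advanced using the ω and a currently held in the state, while the remaining unknowns are carried over. Treating ρ as a camera-to-world rotation vector and ω as a camera-frame angular velocity is a convention assumption made for this sketch.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def propagate_pose(rho, t, v, omega, a, dt):
    """Deterministic part of the time propagation: advance the pose and
    velocity using the angular velocity omega (camera frame) and linear
    acceleration a (world frame) held in the state.  With rho treated as a
    camera-to-world rotation vector, the increment composes on the right;
    Xj, g, b_omega, b_a are carried over unchanged by the filter."""
    R_new = Rotation.from_rotvec(rho) * Rotation.from_rotvec(omega * dt)
    t_new = t + v * dt + 0.5 * a * dt ** 2
    v_new = v + a * dt
    return R_new.as_rotvec(), t_new, v_new
```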

70

Online estimation (14): image-and-inertial, cont.

Measurement update with observation i: use the inertial sensor model we’ve seen:

z = [ ω’, a’ ] = [ ω + bω, Rᵀ(a - g) + ba ] + n

71

Outline

Background: image-only batch estimation

Image and inertial batch estimation

Online estimation

Experiments

Hyperion

CMU crane

Future work

72

Experiments (1): Hyperion

73

Experiments (2): Hyperion, cont.

74

Experiments (3): Hyperion, cont.

75

Experiments (4): Hyperion, cont.


77

Experiments (5): Hyperion, cont.

78

Experiments (6): Hyperion, cont.

79

Experiments (7): CMU crane

Crane capable of translating a platform…

…through x, y, z…

…through a workspace of about 10 x 10 x 5 m

80

Experiments (8): CMU crane, cont.

[Plot: (x, y) translation ground truth; x translation (meters) and y translation (meters), each spanning about -3.0 to 3.0.]

81

Experiments (9): CMU crane, cont.

[Plot: z translation ground truth; z (m) vs. time, ranging from about 3.5 to 4.5.]

No change in rotation

82

Experiments (10): CMU crane, cont.

83

Experiments (11): CMU crane, cont.

Hard sequence:

• Each image contains an average of 56.0 points

• Each point appears in an average of 62.3 images (4.4% of sequence)

• Image-and-inertial online algorithm applied

• 40 images used in batch initialization

84

Experiments (12): CMU crane, cont.

85

Experiments (13): CMU crane, cont.

Estimated z camera translations

86

Experiments (14): CMU crane, cont.

6 DOF errors, after scaled rigid alignment:

Rotation: 0.14 radians average

Translation: 31.5 cm average (0.9% of distance traveled)

Global scale error: -3.4%
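For reference, one way such numbers can be computed is sketched below: a scaled rigid (similarity) alignment of the estimated translations to ground truth in the style of Umeyama, followed by the average residual and the recovered scale factor. This is a generic evaluation sketch, not necessarily the procedure used for the figures above.

```python
import numpy as np

def scaled_rigid_align(est, gt):
    """Umeyama-style similarity alignment: find scale s, rotation R, and
    translation t minimizing sum ||s * R @ est_i + t - gt_i||^2.
    est and gt are (n, 3) arrays of corresponding translations."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    E, G = est - mu_e, gt - mu_g
    U, S, Vt = np.linalg.svd(G.T @ E / len(est))     # cross-covariance SVD
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))         # guard against reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / E.var(axis=0).sum()
    t = mu_g - s * R @ mu_e
    return s, R, t

def translation_error_stats(est, gt):
    """Average translation error after alignment, plus the recovered scale."""
    s, R, t = scaled_rigid_align(est, gt)
    aligned = (s * (R @ est.T)).T + t
    avg_err = np.mean(np.linalg.norm(aligned - gt, axis=1))
    return avg_err, s - 1.0      # average error and (one convention for) scale error
```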

87

Outline

Background: image-only batch estimation

Image and inertial batch estimation

Online estimation

Experiments

Future work

88

Future work (1)

For long sequences:

no single external point is always visible

Estimated positions will drift due to…

random and gross observation errors

modeling errors

suboptimality in online estimation

89

Future work (2)

90

Future work (3)

91

Future work (4)

Need to incorporate some advances from the SLAM community to deal with this issue…

…in particular, reacquiring revisited features

92

Future work (5)

Two issues:

(1) recognizing revisited features

(2) exploiting revisited features in the estimation

Lowe’s SIFT features are a good candidate for (1)
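As a sketch of step (1), SIFT descriptors from the current image can be matched against descriptors stored for previously mapped features, using OpenCV's SIFT and Lowe's ratio test; this is one plausible realization, not the approach settled on here.

```python
import cv2

def match_revisited_features(current_img, stored_descriptors, ratio=0.75):
    """Detect SIFT features in the current (grayscale) image and match them
    against descriptors of previously mapped features with Lowe's ratio test.
    Returns the keypoints and a list of (current_index, stored_index) pairs."""
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(current_img, None)
    if descriptors is None or stored_descriptors is None:
        return keypoints, []
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(descriptors, stored_descriptors, k=2)
    good = []
    for pair in knn:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append((pair[0].queryIdx, pair[0].trainIdx))
    return keypoints, good
```

Matched indices would then feed step (2), for example by reintroducing the revisited point's existing state into the measurement update rather than initializing it as a new point.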

93

94

Thanks!

Pointers to:

these slides

related work by others

related publications by our group

feature tracking movies

VRML models

at: http://www.cs.cmu.edu/~dstrelow/uw
