
Accurate and Linear Time Pose Estimation from Points and Lines
Alexander Vakhitov1, Jan Funke2 & Francesc Moreno-Noguer2

1 St. Petersburg State University, St. Petersburg, Russia; Skolkovo Institute of Science and Technology, Moscow, Russia, [email protected]
2 Institut de Robòtica i Informàtica Industrial, UPC-CSIC, Barcelona, Spain, {jfunke,fmoreno}@iri.upc.edu

Camera Pose From Lines and Points

Model:
• 3D points
• line segments (3D endpoints)

Observation:
• noisy projections of 3D points
• noisy projections of line endpoints, shifted along the line

Algebraic Line Error

Point projection: x̃ = π(θ, X)
• X - 3D point
• x̃ ∈ R³ - homogeneous coordinates of the point projection
• θ - pose parameters

Line projection: l̂_i = p̃_d^i × q̃_d^i,  l_i = l̂_i / |l̂_i| ∈ R³
• p̃_d^i, q̃_d^i - 2D segment endpoints
• l_i - normalized line coefficients

Point-to-line error:
E_pl(θ, X, l_i) = l_i^⊤ π(θ, X).  (1)

Using the 3D segment endpoints {P_i, Q_i} as X in (1), the algebraic line segment error (ALSE) is:

E_l(θ, P_i, Q_i, l_i) = E_pl²(θ, P_i, l_i) + E_pl²(θ, Q_i, l_i)

• linear in the 3D segment endpoints
• but: depends on the segment shift
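As a minimal illustration of equation (1) and the ALSE, here is a NumPy sketch; the helper names and the normalization conventions for π and l are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def project(K, R, t, X):
    """pi(theta, X): homogeneous image coordinates of a 3D point X under
    pose (R, t) and intrinsics K, scaled so the last entry is 1
    (assumed normalization convention)."""
    x = K @ (R @ X + t)
    return x / x[2]

def line_coeffs(p_d, q_d):
    """Normalized line coefficients: l_hat = p_d x q_d of the detected
    homogeneous endpoints, l = l_hat / |l_hat|."""
    l_hat = np.cross(p_d, q_d)
    return l_hat / np.linalg.norm(l_hat)

def point_to_line_error(K, R, t, X, l):
    """Point-to-line error (1): E_pl = l^T pi(theta, X)."""
    return float(l @ project(K, R, t, X))

def alse(K, R, t, P, Q, l):
    """Algebraic line segment error: squared point-to-line errors of the
    two 3D segment endpoints P and Q."""
    return (point_to_line_error(K, R, t, P, l) ** 2
            + point_to_line_error(K, R, t, Q, l) ** 2)
```

Because E_pl is linear in the projected point, stacking one such equation per 3D endpoint keeps the problem linear in the pose unknowns used by EPnPL and OPnPL.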

Segment Shift Problem

• p, q - projected 3D endpoints
• p_d, q_d - detected 2D endpoints
• d1 + d4 - ALSE
• d2 + d3 - optimal error

3-step method:
1. Initial estimate with E/OPnPL
2. Shift correction (see the sketch below)
3. Final estimate with E/OPnPL
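The shift correction in step 2 compensates for the detected endpoints sliding along the line. A plausible least-squares version (an assumption for illustration, not necessarily the exact rule used in the paper) shifts the detected segment along its own direction towards the projections of the 3D endpoints obtained with the initial pose:

```python
import numpy as np

def shift_segment(p_d, q_d, p_proj, q_proj):
    """Translate the detected 2D segment (p_d, q_d) along its own direction
    by a common offset s that minimizes
        |p_d + s*d - p_proj|^2 + |q_d + s*d - q_proj|^2,
    where (p_proj, q_proj) are the projections of the 3D endpoints under the
    initial pose estimate. All points are inhomogeneous 2D coordinates
    (assumption)."""
    d = q_d - p_d
    d = d / np.linalg.norm(d)                             # unit line direction
    s = 0.5 * (d @ (p_proj - p_d) + d @ (q_proj - q_d))   # closed-form optimum
    return p_d + s * d, q_d + s * d
```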

EPnPL

Unknowns are the 4 control points C_j; unknowns vector μ = [C_1^⊤, ..., C_4^⊤]^⊤

Point projection: π_EPnP(θ, X) = Σ_{j=1..4} α_j C_j^c

Points cost: ‖M_p μ‖² → min

Denote m_l^i(X_i) = ([α_1, ..., α_4] ⊗ l_i)^⊤

Point-to-line error: E_pl,EPnP = (m_l^i(P_i))^⊤ μ

Lines cost: E_lines = Σ_i [ ((m_l^i(P_i))^⊤ μ)² + ((m_l^i(Q_i))^⊤ μ)² ]

Total cost: ‖M_p μ‖² + E_lines = ‖M̄ μ‖² → min_μ
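A sketch of how the line constraints can be stacked in this control-point parametrization; it assumes calibrated (normalized) image coordinates, so that the projection of X reduces to the camera-frame point Σ_j α_j C_j^c, and the array `ctrl_w` and helper names are illustrative:

```python
import numpy as np

def barycentric(ctrl_w, X):
    """Barycentric coordinates alpha with X = sum_j alpha_j * C_j^w and
    sum(alpha) = 1; ctrl_w is a 4x3 array of world-frame control points."""
    A = np.vstack([ctrl_w.T, np.ones(4)])   # 4x4 linear system
    b = np.append(X, 1.0)
    return np.linalg.solve(A, b)

def line_row(ctrl_w, X, l):
    """Row m_l(X) = [alpha_1, ..., alpha_4] kron l, so that
    line_row(ctrl_w, X, l) @ mu is the point-to-line error of X,
    with mu = [C_1^c; ...; C_4^c] the stacked camera-frame control points."""
    return np.kron(barycentric(ctrl_w, X), l)    # length-12 row

def line_block(ctrl_w, P, Q, lines):
    """Two rows per segment, one for each 3D endpoint. Appending this block
    to the point matrix M_p gives the joint cost ||M_bar mu||^2 -> min."""
    rows = []
    for Pi, Qi, li in zip(P, Q, lines):
        rows.append(line_row(ctrl_w, Pi, li))
        rows.append(line_row(ctrl_w, Qi, li))
    return np.asarray(rows)
```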

OPnPL

Unknowns: quaternion parameters a, b, c, d
Camera pose: R, t
Vectorized pose: λR = r̂, λt = t̂
Point projection: π_OPnP(θ, X) = λ(R̂X + t̂)
Point-to-line error: E_pl,OPnP = l_i^⊤ λ(R̂X + t̂)
Knowns (point/line): G_p/l, H_p/l, k_p/l
Points/lines cost: E_p/l = ‖G_p/l r̂ + H_p/l t̂_12 + k_p/l‖²

Total cost: E = E_points + E_lines → min
Express t̂_12 from ∂E/∂t̂ = 0
Solve with a Gröbner basis solver: ∂E/∂a = ∂E/∂b = ∂E/∂c = ∂E/∂d = 0
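The Gröbner-basis step needs a dedicated polynomial solver, but the preceding elimination of the translation is plain linear algebra. A sketch under the assumption that the stacked residual has the form G r̂ + H t̂ + k:

```python
import numpy as np

def eliminate_translation(G, H, k):
    """With E = ||G r + H t + k||^2, setting dE/dt = 0 gives t = A r + b.
    Substituting back yields a cost in r only: ||(G + H A) r + (k + H b)||^2.
    The entries of r are quadratic in the quaternion (a, b, c, d), so the
    reduced cost is what the Groebner-basis solver then minimizes."""
    M = np.linalg.inv(H.T @ H) @ H.T
    A = -M @ G
    b = -M @ k
    return A, b, G + H @ A, k + H @ b
```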

Contributions

[Figure (panels: Points / Lines / Lines+Points): model contours with estimated pose (white), detected lines and points (blue)]

• algebraic line error as linear constraints on the line endpoints

• OPnP [1], EPnP [2] => OPnPL, EPnPL

Key features:

• camera pose accuracy increased
• negligible runtime overhead

Code publicly available at: github.com/alexander-vakhitov/pnpl

Synthetic Experiments

(a) Accuracy w.r.t. noise level, nonplanar: n_points = 6, n_lines = 10
[Four panels: Median/Mean Rotation Error (degrees) and Median/Mean Translation Error (%) vs. noise std. dev. (pix.), 0-4. Methods: Mirzaei, RPnL, Pluecker, EPnP_GN, OPnP, DLT, EPnPL, OPnPL, OPnP*]

(b) Accuracy w.r.t. feature number, nonplanar: σ_noise = 1
[Four panels: Median/Mean Rotation Error (degrees) and Median/Mean Translation Error (%) vs. n_p, n_l from 6 to 15. Methods: Mirzaei, RPnL, Pluecker, EPnP_GN, OPnP, DLT, EPnPL, OPnPL, OPnP*]

(c) Accuracy w.r.t. noise level, nonplanar, only lines

[Four panels: Median/Mean Rotation Error (degrees) and Median/Mean Translation Error (%) vs. noise std. dev. (pix.), 0-4. Methods: Mirzaei, RPnL, Pluecker, DLT, EPnPL, OPnPL, OPnP*]

(d) Running time of the algorithms and of the phases of OPnPL, EPnPL
[Three panels: Average Runtime (sec), Mean processing time (s), and Mean solving time (s) vs. n_p + n_l. Methods: Mirzaei, RPnL, Pluecker, EPnP_GN, OPnP, DLT, EPnPL, OPnPL]

See other experiments (planar case etc.) in the paper.

Real Experiments

OPnP (pt)        OPnPL (lin) / (pt+lin)       |  OPnP (pt)        OPnPL (lin) / (pt+lin)
(21.3, 100.0)    (9.3, 30.5) / (9.5, 28.2)    |  (61.5, 605.4)    (0.4, 6.0) / (0.3, 5.7)
(1.3, 33.8)      (0.6, 9.1) / (0.2, 2.6)      |  (118.2, 328.0)   (2.7, 11.5) / (2.7, 11.3)
(3.2, 100.0)     (0.8, 3.3) / (0.9, 5.6)      |  (9.5, 100.0)     (0.7, 3.0) / (0.2, 1.3)
(31.8, 162.2)    (1.3, 6.8) / (1.3, 6.8)      |  (1.3, 22.0)      (0.9, 10.1) / (0.2, 2.3)

Entries are (Rotation Error (deg), Translation Error (%)).

[1] Y. Zheng, Y. Kuang, S. Sugimoto, K. Astrom, and M. Okutomi. Revisiting the PnP problem: A fast, general and optimal solution. In IEEE International Conference on Computer Vision (ICCV), pages 2344-2351, 2013.

[2] V. Lepetit, F. Moreno-Noguer, and P. Fua. EPnP: An accurate O(n) solution to the PnP problem. International Journal of Computer Vision, 81(2):155-166, 2009.
