
The Conditional Lucas & Kanade Algorithm
Chen-Hsuan Lin, Rui Zhu, and Simon Lucey
{chenhsul, rz1}@andrew.cmu.edu, [email protected]
Poster: ci2cv.net/media/papers/ChenHsuan-ECCV_2016_poster.pdf

Contributions
• Establish theoretical connections between the Lucas & Kanade algorithm (LK) and the Supervised Descent Method (SDM)
• The Conditional LK algorithm: an efficient aligner that achieves comparable performance with little training data

LK
• Aligns a source image against a template image by minimizing their appearance error (inverse-compositional):

$$\min_{\Delta p} \; \big\lVert I(p) - T(\Delta p) \big\rVert_2^2$$

• 1st-order Taylor approximation:

$$T(\Delta p) \approx T(0) + \nabla T(0)\,\frac{\partial W(x;0)}{\partial p}\,\Delta p$$

• Solve:

$$\Delta p = (J^\top J)^{-1} J^\top \big[ I(p) - T(0) \big], \qquad J = \nabla T(0)\,\frac{\partial W(x;0)}{\partial p}$$
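Concretely, the closed-form update above can be sketched in a few lines of NumPy; the dense matrix shapes and the names grad_T and dWdp are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def ic_lk_regressor(grad_T, dWdp):
    """Precompute the IC-LK regressor (J^T J)^{-1} J^T.

    grad_T : (N, 2N) template gradients (x/y per pixel, block structure)
    dWdp   : (2N, P) warp Jacobian evaluated at the identity warp
    """
    J = grad_T @ dWdp                      # (N, P) steepest-descent images
    return np.linalg.solve(J.T @ J, J.T)   # (P, N)

def ic_lk_step(R, I_warped, T0):
    """One iteration: regress the warp update from the appearance error."""
    return R @ (I_warped - T0)             # Delta p, shape (P,)
```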

SDM
• Learns the appearance-geometry linear relationship from synthetically generated data
• Linear regressors are trained from independently sampled data per iteration
• Prediction of the geometric displacement is learned conditioned on the appearance
• Training (per iteration, over synthetic perturbations $\Delta p_n$):

$$\min_{R} \; \sum_{n} \big\lVert \Delta p_n - R \,\big[ I(p \circ \Delta p_n) - T(0) \big] \big\rVert_2^2$$

• Evaluation: apply the learned regressor of the current iteration,

$$\Delta p = R^{(i)} \big[ I(p) - T(0) \big],$$

then update the warp parameters p with $\Delta p$
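A minimal ridge-regression sketch of one iteration's training step; stacking the synthetic samples into matrices and the regularizer lam are assumptions for illustration:

```python
import numpy as np

def train_sdm_regressor(errors, deltas, lam=1e-3):
    """Fit one SDM iteration's regressor by (ridge) least squares.

    errors : (M, N) appearance errors I(p o Dp_n) - T(0), one row per sample
    deltas : (M, P) ground-truth warp perturbations Dp_n
    Returns R of shape (P, N) -- the full P*N degrees of freedom.
    """
    A = errors.T @ errors + lam * np.eye(errors.shape[1])  # (N, N) Gram matrix
    return np.linalg.solve(A, errors.T @ deltas).T         # (P, N)
```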

Similarity
• Both assume a linear relationship between appearance and geometry
• Both solve for geometric updates iteratively until convergence is reached
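That shared outer loop can be sketched as follows; the additive update and all function names are simplifying assumptions (IC-LK actually composes updates inverse-compositionally):

```python
import numpy as np

def align(warp_image, p0, regressor_for_iter, T0, n_iters=5, tol=1e-6):
    """Generic iterate-until-convergence loop shared by LK and SDM."""
    p = p0.copy()
    for i in range(n_iters):
        err = warp_image(p) - T0          # appearance error at the current warp
        dp = regressor_for_iter(i) @ err  # linear map: appearance -> geometry
        p = p + dp                        # simplified additive update
        if np.linalg.norm(dp) < tol:      # converged: negligible update
            break
    return p
```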

Difference

LK:
• N pixels, 2 gradient directions: 2N degrees of freedom in R
• Pixel independence assumption
• Generative appearance synthesis: fits by trying to synthesize the template appearance

SDM:
• N pixels, P warp parameters: PN degrees of freedom in R
• Full dependency across pixels
• Conditional learning objective: predicts the geometric displacement conditioned on the appearance

[Diagram: pixel intensities map to x/y image gradients through the N x 2N block-structured gradient matrix; the 2N x P warp Jacobian is predefined (regularization).]
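To make the 2N-degrees-of-freedom point concrete, here is a sketch of assembling LK's block-structured gradient matrix; the interleaved x/y storage order is an assumption:

```python
import numpy as np

def block_diagonal_gradients(gx, gy):
    """Build the (N, 2N) gradient matrix from per-pixel x/y gradients.

    Each row touches only its own pixel's two gradient values, so the whole
    matrix carries just 2N free parameters -- LK's pixel independence.
    """
    N = gx.size
    G = np.zeros((N, 2 * N))
    G[np.arange(N), 2 * np.arange(N)] = gx.ravel()       # x-gradient slots
    G[np.arange(N), 2 * np.arange(N) + 1] = gy.ravel()   # y-gradient slots
    return G
```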

Conditional LK
• Learn the image gradients conditioned on the appearance, keeping the LK form of the regressor; this is a non-linear least squares problem in the 2N gradient values:

$$\min_{\nabla T(0)} \; \sum_{n} \big\lVert \Delta p_n - R\big(\nabla T(0)\big) \big[ I(p \circ \Delta p_n) - T(0) \big] \big\rVert_2^2$$

• Final regressor:

$$R\big(\nabla T(0)\big) = (J^\top J)^{-1} J^\top, \qquad J = \nabla T(0)\,\frac{\partial W(x;0)}{\partial p}$$

where the warp Jacobian $\frac{\partial W(x;0)}{\partial p}$ is predefined and acts as regularization

• Inherits LK's pixel independence assumption and 2N degrees of freedom in R, together with SDM's conditional learning objective
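A sketch of that training problem, reusing block_diagonal_gradients from above; the use of scipy.optimize.least_squares and all names are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

def train_conditional_lk(gx0, gy0, dWdp, errors, deltas):
    """Learn template gradients whose induced LK regressor best predicts
    the synthetic warp perturbations (non-linear least squares).

    gx0, gy0 : (N,) finite-difference gradients as initialization
    dWdp     : (2N, P) predefined warp Jacobian (the regularizer)
    errors   : (M, N) appearance errors; deltas : (M, P) perturbations
    """
    N = gx0.size

    def residuals(g):
        G = block_diagonal_gradients(g[:N], g[N:])  # (N, 2N)
        J = G @ dWdp                                # (N, P)
        R = np.linalg.solve(J.T @ J, J.T)           # (P, N) LK-form regressor
        return (deltas - errors @ R.T).ravel()      # prediction residuals

    sol = least_squares(residuals, np.concatenate([gx0, gy0]))
    return sol.x[:N], sol.x[N:]                     # learned gx, gy
```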

• Warp swapping property: geometric warp functions can be swapped and combined with the learned conditional image gradients to form another series of linear regressors (see the sketch below)
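Under the same illustrative conventions as the sketches above, swapping in a different warp's Jacobian reuses the learned gradients unchanged:

```python
import numpy as np

def swap_warp(gx, gy, dWdp_new):
    """Warp swapping: pair the learned gradients with a new warp Jacobian."""
    G = block_diagonal_gradients(gx, gy)    # learned conditional gradients
    J = G @ dWdp_new                        # (N, P') steepest-descent images
    return np.linalg.solve(J.T @ J, J.T)    # (P', N) regressor for the new warp
```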

Experiments
• Visualization: learned image gradients over iterations 1 to 5, compared against gradients classically taken via finite differencing
• Convergence analysis: frequency of convergence and convergence rate
• Warp swapping: similar results for different warps, feature images, examples per iteration, and test perturbations
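For reference, the classical baseline gradients in the visualization can be computed by central finite differences; np.gradient's axis ordering is the only assumption here:

```python
import numpy as np

def finite_difference_gradients(T_img):
    """Classical baseline: template gradients via central finite differences."""
    gy, gx = np.gradient(T_img.astype(float))  # d/dy (rows), then d/dx (cols)
    return gx.ravel(), gy.ravel()
```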

Applications

• Low frame-rate tracking (BLUE: IC-LK, YELLOW: Conditional LK)