
Light Field Video Stabilization

Hailin Jin, Aseem Agarwala
Adobe Systems, Inc.

Brandon M. Smith, Li Zhang
Computer Sciences, UW-Madison

This work is supported in part by Adobe Systems Inc. & NSF IIS-0845916.

Motivation

Video Stabilization

Results

Summary of Contributions

State-of-the-art

Works well if structure from motion is successful

• Small dynamic targets
• Background has enough visual features

Liu et al., SIGGRAPH 2009

Videos shot with a hand-held conventional video camera often appear very shaky. This shakiness is frequently the most distracting aspect of amateur videos and easily distinguishes them from more professional work.

New Approach

Viewplus Profusion 25C

Existing applications
• New-view synthesis [Levoy & Hanrahan SIGGRAPH '96, Gortler et al. SIGGRAPH '96]
• Synthetic aperture [Wilburn et al., SIGGRAPH 2005]
• Noise removal [Zhang et al., CVPR 2009]

New application
• Video stabilization

Panasonic HD Stereo Camcorder

Why Does a Camera Array Help?
Stabilization as image-based rendering [Buehler et al. CVPR 2001]

More input views at each time instant
• Easier to work with dynamic scenes
• Better handling of parallax

Synthesize a video along a virtual smooth camera path.
[Figure: actual (shaky) camera path vs. smooth virtual camera path]

How to Avoid Structure from Motion?

Advantage: Do not need to compute 3D input camera path

Spacetime optimization: Maximize smoothness of the virtual video as a function of {R_f, t_f}_{f=1,…,F}

[Figure: actual camera path vs. virtual camera path]

Insight: Only Need Relative Transformation R_f, t_f

New View Synthesis
[Figure: the input images and depth maps, together with the relative transformation R, t, are fed to new view synthesis to produce the synthesized image.]
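
The poster does not give implementation details for the reprojection step; below is a minimal sketch, assuming a pinhole camera model with intrinsics K, of how a pixel with known depth could be warped from an input view into the virtual view using only the relative rotation R and translation t. The function name `reproject` mirrors the notation used here; everything else is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def reproject(p, Z, R, t, K):
    """Map pixel p = (u, v) with depth Z from an input view into the
    virtual view related by the relative rotation R (3x3) and
    translation t (3,), assuming a pinhole camera with intrinsics K."""
    # Back-project the pixel to a 3D point in the input camera frame.
    X = Z * (np.linalg.inv(K) @ np.array([p[0], p[1], 1.0]))
    # Move the point into the virtual camera frame.
    X_virtual = R @ X + t
    # Project onto the virtual image plane.
    x = K @ X_virtual
    return x[:2] / x[2]

# Example: a pixel at (320, 240) with depth 2.5 m and a small virtual translation.
# K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=float)
# q = reproject((320, 240), 2.5, np.eye(3), np.array([0.01, 0.0, 0.0]), K)
```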

Future Work

Professional Solutions

Use special hardware to avoid camera shake: steadicam, camera dolly, camera crane

• Expensive • Cumbersome

Consumer Solutions
2D-transformation-based methods

Burt & Anandan, Image Stabilization by Registration to a Reference Mosaic. DARPA Image Understanding Workshop, 1994

Hansen et al., Real-Time Scene Stabilization and Mosaic Construction. DARPA Image Understanding Workshop, 1994

Lee et al., Video Stabilization Using Robust Feature Trajectories. ICCV 2009

sensor stabilization

Vibration Reduction (VR)

optical stabilization

• Distant scenes • Limited DOF

• Small baseline
• Rotational camera motion

Matching Salient Features

Canny Edge Maps

Optical Flow [Bruhn et al. 2005]
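
As an illustration of the edge-matching step (Canny edges propagated with dense optical flow), here is a rough sketch using OpenCV; its Farneback flow stands in for the variational flow of Bruhn et al. 2005, and the thresholds are arbitrary example values rather than the authors' settings.

```python
import cv2
import numpy as np

def match_edges(img_prev, img_next, canny_lo=50, canny_hi=150):
    """Detect Canny edges in the previous frame and propagate them to the
    next frame with dense optical flow (Farneback used here only as a
    stand-in for the variational flow of Bruhn et al. 2005)."""
    gray_prev = cv2.cvtColor(img_prev, cv2.COLOR_BGR2GRAY)
    gray_next = cv2.cvtColor(img_next, cv2.COLOR_BGR2GRAY)

    edges = cv2.Canny(gray_prev, canny_lo, canny_hi)
    ys, xs = np.nonzero(edges)                      # edge pixel coordinates

    flow = cv2.calcOpticalFlowFarneback(
        gray_prev, gray_next, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    # Each edge pixel in frame f is matched to its flow-displaced location
    # in frame f+1.
    matched = np.stack([xs + flow[ys, xs, 0], ys + flow[ys, xs, 1]], axis=1)
    return np.stack([xs, ys], axis=1), matched
```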

How to Define the Smoothness of a Video?

[Figure: the original camera sees scene points p_{f-1}, p_f, p_{f+1} with depths Z_{f-1}, Z_f, Z_{f+1}; reprojecting them into the virtual camera gives q_{f-1}, q_f, q_{f+1}, where q_f = reproj(p_f, Z_f, R_f, t_f).]

Smoothness energy over the virtual camera parameters {R_f, t_f}:

E\bigl(\{R_f, t_f\}_{f=1,\dots,F}\bigr) = \sum_{f=2}^{F-1} \sum_{j \in f} w_{f,j} \Bigl\| q_{f,j} - \tfrac{1}{2}\bigl(q^{\mathrm{prev}}_{f-1,j} + q^{\mathrm{next}}_{f+1,j}\bigr) \Bigr\|^2 + E_{\mathrm{reg}}

where q_{f,j} = reproj(p_{f,j}, Z_f, R_f, t_f), q^{prev}_{f-1,j} = reproj(p_{f-1,j}, Z_{f-1}, R_{f-1}, t_{f-1}), and q^{next}_{f+1,j} = reproj(p_{f+1,j}, Z_{f+1}, R_{f+1}, t_{f+1}).
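
To make the objective concrete, the following sketch evaluates the data term of E for one interior frame in NumPy; the array layout for the matched edge points and the weights w_{f,j} is a placeholder assumption, and E_reg is left as an externally supplied scalar.

```python
import numpy as np

def frame_smoothness(q, q_prev, q_next, w):
    """Data term for one interior frame f.

    q      : (J, 2) reprojected edge points q_{f,j} in the virtual view
    q_prev : (J, 2) matched points q^{prev}_{f-1,j} reprojected from frame f-1
    q_next : (J, 2) matched points q^{next}_{f+1,j} reprojected from frame f+1
    w      : (J,)   per-point weights w_{f,j}

    Each point is penalized for deviating from the midpoint of its temporal
    neighbors, i.e. the || q_{f,j} - 0.5 (q_prev + q_next) ||^2 term above."""
    residual = q - 0.5 * (q_prev + q_next)
    return float(np.sum(w * np.sum(residual ** 2, axis=1)))

def total_energy(per_frame_terms, e_reg):
    """Sum the data term over f = 2, ..., F-1 and add the regularizer E_reg."""
    return sum(frame_smoothness(*args) for args in per_frame_terms) + e_reg
```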

Algorithm Outline

1. Compute depth map for each time instant [Smith et al. 2009]
2. Compute optical flow for each time instant [Bruhn et al. 2005]
3. Detect Canny edges, use flow to match edges over time
4. Run spacetime optimization to find {R_f, t_f}_{f=1,…,F} by minimizing the smoothness energy E defined above
5. New view synthesis
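
The poster does not say which solver is used for the spacetime optimization; purely as an illustration, the sketch below refines the per-frame rotations and translations with SciPy's generic nonlinear least-squares routine, parameterizing each rotation as an angle-axis vector and reusing the `reproject` sketch from above. The `frames` data layout is hypothetical, and the weights w_{f,j} and regularizer E_reg are omitted for brevity.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, frames, K):
    """Stack the residuals q_f - 0.5 (q_prev + q_next) over interior frames.

    params   : (F * 6,) angle-axis rotation (3) + translation (3) per frame
    frames[f]: hypothetical tuple (p_prev, Z_prev, p, Z, p_next, Z_next) of
               matched edge pixels and their depths for frame f"""
    F = len(frames)
    params = params.reshape(F, 6)

    def pose(f):
        return Rotation.from_rotvec(params[f, :3]).as_matrix(), params[f, 3:]

    res = []
    for f in range(1, F - 1):
        (Rm, tm), (Rf, tf), (Rp, tp) = pose(f - 1), pose(f), pose(f + 1)
        p_prev, Z_prev, p, Z, p_next, Z_next = frames[f]
        q      = np.array([reproject(u, z, Rf, tf, K) for u, z in zip(p, Z)])
        q_prev = np.array([reproject(u, z, Rm, tm, K) for u, z in zip(p_prev, Z_prev)])
        q_next = np.array([reproject(u, z, Rp, tp, K) for u, z in zip(p_next, Z_next)])
        res.append((q - 0.5 * (q_prev + q_next)).ravel())
    return np.concatenate(res)

# Initialize every virtual pose at the identity and refine:
# x0 = np.zeros(len(frames) * 6)
# sol = least_squares(residuals, x0, args=(frames, K))
```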

Future work:
• Use fewer cameras (two instead of five)
• Motion deblurring with camera arrays
• Better handle image periphery problems
• Evaluate a range of camera baselines
• Increase algorithm efficiency

Summary of contributions:
• Stabilization without structure from motion
• Use an array for stabilization
• Can handle challenging cases
  – Nearby, dynamic targets
  – Large scene depth variation
  – Violent camera shake

UW-Madison Graphics & Vision Group