
Point Distribution Models Active Appearance Models

Compilation based on: Dhruv Batra (ECE, CMU) and Tim Cootes (Manchester)

Essence of the Idea (cont.)

Explain a new example in terms of the model parameters

So what's a model?

Model = "Shape" + "texture"

Active Shape Models

[Figure: the training set of annotated examples]

Texture Models

[Figure: training images warped to the mean shape before texture sampling]

Intensity Normalisation

Allow for global lighting variations.

Common linear approach: shift and scale so that the mean of the elements is zero and the variance of the elements is 1.

Alternative non-linear approach: histogram equalisation, which transforms the intensities so that similar numbers of pixels take each grey-scale value.

g' = (g - \beta\,\mathbf{1}) / \alpha

\beta = \frac{1}{n} \sum_i g_i, \qquad \alpha^2 = \frac{1}{n} \sum_i (g_i - \beta)^2
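Below is a minimal numpy sketch of the two normalisations just described; the function names and the 256-level assumption for histogram equalisation are mine, not from the slides.

```python
import numpy as np

def normalise_linear(g):
    """Shift and scale a texture vector so its elements have zero mean
    and unit variance (the linear normalisation above)."""
    beta = g.mean()
    alpha = g.std()
    return (g - beta) / alpha

def histogram_equalise(g, levels=256):
    """Non-linear alternative: map intensities so that roughly equal
    numbers of pixels take each grey-scale value."""
    g = np.asarray(g, dtype=float)
    hist, bin_edges = np.histogram(g, bins=levels)
    cdf = hist.cumsum() / g.size                 # empirical CDF of the intensities
    bins = np.digitize(g, bin_edges[1:-1])       # bin index of each pixel
    return cdf[bins] * (levels - 1)              # spread values over the grey range

# Example: a raster-scanned texture sample
g = np.random.default_rng(0).integers(0, 256, size=1000).astype(float)
g_lin = normalise_linear(g)
g_heq = histogram_equalise(g)
```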

Shape: Review of Construction

Mark the face region on the training set

Sample region

Normalise

Statistical Analysis

[Figure: the normalised sample g' and the mean \bar{g}]

g = \bar{g} + P b

The Fun Step

Multivariate Statistical Analysis

Need to model the distribution of the normalised vectors in order to: generate plausible new examples, test whether a new region is similar to the training set, and classify regions.

Fitting a Gaussian

The mean and covariance matrix of the data define a Gaussian model.

[Figure: 2-D scatter of samples (g_1, g_2) with a Gaussian fitted about the mean \bar{g}]

Principal Component Analysis

Compute the eigenvectors of the covariance matrix S.

Eigenvectors: the main directions of variation. Eigenvalues: the variance along the corresponding eigenvector.
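A short numpy sketch of this step, fitting the mean and covariance of a set of training vectors and diagonalising the covariance; the function name is illustrative only.

```python
import numpy as np

def fit_gaussian_pca(X):
    """Fit the mean and covariance of the training vectors (rows of X)
    and diagonalise the covariance to get the principal directions."""
    x_bar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)              # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(S)     # symmetric eigen-decomposition
    order = np.argsort(eigvals)[::-1]        # largest variance first
    return x_bar, eigvals[order], eigvecs[:, order]

# Example with synthetic correlated 2-D data
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [1.5, 0.5]])
x_bar, lam, P = fit_gaussian_pca(X)
# Columns of P are the eigenvectors p_i; lam[i] is the variance along p_i
```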

[Figure: scatter of samples (g_1, g_2) with the principal directions p_1 (variance \lambda_1) and p_2 (variance \lambda_2) drawn through the mean]

Eigenvector Decomposition

If A is a square matrix, then an eigenvector of A is a vector p such that

A p = \lambda p

where \lambda is the associated eigenvalue. Usually p is scaled to have unit length, |p| = 1.

Eigenvector Decomposition

If K is an n x n covariance matrix, there exist n linearly independent eigenvectors, and all the corresponding eigenvalues are non-negative.

We can decompose K as

K = P D P^T

where P = (p_1 | p_2 | \dots | p_n) has the eigenvectors as its columns and D = \mathrm{diag}(\lambda_1, \lambda_2, \dots, \lambda_n) is the diagonal matrix of eigenvalues.

Eigenvector Decomposition

Recall that a normal pdf has

p(x) \propto \exp\left(-0.5\,(x - \bar{x})^T K^{-1} (x - \bar{x})\right)

The inverse of the covariance matrix is

K^{-1} = P D^{-1} P^T

since K = P D P^T and P^T P = P P^T = I, with

D^{-1} = \mathrm{diag}(1/\lambda_1, 1/\lambda_2, \dots, 1/\lambda_n)

Fun with Eigenvectors

The normal distribution has the form

p(x) = (2\pi)^{-n/2}\, |K|^{-1/2} \exp(\dots)

and the determinant factorises as

|K| = |P D P^T| = |P|\,|D|\,|P^T| = |D| = \prod_{i=1}^{n} \lambda_i

Fun with Eigenvectors

Consider the transformation

b = P^T (x - \bar{x})

[Figure: the original axes (x_1, x_2) are rotated onto the principal directions p_1, p_2, giving the new coordinates (b_1, b_2)]

Fun with Eigenvectors

The exponent of the distribution becomes

-0.5\,(x - \bar{x})^T K^{-1} (x - \bar{x}) = -0.5\,(x - \bar{x})^T P D^{-1} P^T (x - \bar{x}) = -0.5\, b^T D^{-1} b = -0.5 \sum_{i=1}^{n} \frac{b_i^2}{\lambda_i} = -0.5\,M

where M is the Mahalanobis distance from the mean.

Normal distribution

Thus, by applying the transformation

b = P^T (x - \bar{x})

the normal distribution simplifies to

p(x) = p(b) = k \exp(-0.5\,M)

M = \sum_{i=1}^{n} \frac{b_i^2}{\lambda_i}, \qquad k = (2\pi)^{-n/2} \left(\prod_{i=1}^{n} \lambda_i\right)^{-1/2}
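A small sketch of evaluating the density this way, assuming the eigen-decomposition (mean, eigenvalues, eigenvectors) has already been computed as above; the names are illustrative.

```python
import numpy as np

def gaussian_via_eigen(x, x_bar, lam, P):
    """Transform to b = P^T (x - x_bar), compute the Mahalanobis distance M,
    and evaluate p(x) = k * exp(-0.5 * M)."""
    b = P.T @ (x - x_bar)
    M = float(np.sum(b**2 / lam))                      # Mahalanobis distance
    n = x_bar.size
    k = (2 * np.pi) ** (-n / 2) / np.sqrt(np.prod(lam))
    return k * np.exp(-0.5 * M), M

# Tiny example: a 2-D Gaussian with variances 4 and 1 along its axes
x_bar = np.zeros(2)
lam = np.array([4.0, 1.0])
P = np.eye(2)                                          # eigenvectors as columns
p, M = gaussian_via_eigen(np.array([2.0, 0.0]), x_bar, lam, P)
```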

Dimensionality Reduction

Co-ordinates are often correlated: nearby points move together.

x \approx \bar{x} + b_1 p_1

[Figure: samples spread along a single direction p_1 through the mean \bar{x}, parameterised by b_1]

Dimensionality Reduction

The data lie in a subspace of reduced dimension:

x = \bar{x} + P b = \bar{x} + b_1 p_1 + \dots + b_n p_n

However, for some t,

b_j \approx 0 \quad \text{if } j > t

(the variance of b_j is \lambda_j).

Approximation

Each data vector can be written

x = \bar{x} + P_t b_t + r, \qquad P_t = (p_1 \,|\, \dots \,|\, p_t), \qquad b_t = P_t^T (x - \bar{x})

where r is the residual. The variance of the elements of r is

\sigma_r^2 = \frac{1}{n - t} \sum_{i=t+1}^{n} \lambda_i

and the approximation error is

|r|^2 = |x - \bar{x}|^2 - |b_t|^2

so x \approx \bar{x} + P_t b_t.
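The projection, reconstruction and residual can be sketched in a few lines of numpy, assuming the columns of P are ordered by decreasing eigenvalue; the names are illustrative.

```python
import numpy as np

def project_truncated(x, x_bar, P, t):
    """Approximate x using only the first t eigenvectors: compute the
    parameters b_t, the reconstruction, and the residual error |r|^2."""
    P_t = P[:, :t]                                   # first t eigenvectors as columns
    b_t = P_t.T @ (x - x_bar)
    x_approx = x_bar + P_t @ b_t
    r2 = np.sum((x - x_bar) ** 2) - np.sum(b_t ** 2) # |r|^2 = |x - x_bar|^2 - |b_t|^2
    return b_t, x_approx, r2

# Example: 3-D data approximated with t = 2 modes
x_bar = np.zeros(3)
P = np.eye(3)
b_t, x_approx, r2 = project_truncated(np.array([1.0, 2.0, 0.5]), x_bar, P, t=2)
# r2 == 0.25, the energy left in the discarded direction
```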

Normal PDF

p_t(x) = k_t \exp(-0.5\,M_t)

M_t = \sum_{i=1}^{t} \frac{b_i^2}{\lambda_i} + \frac{|r|^2}{\sigma_r^2}

k_t = (2\pi)^{-n/2}\, \sigma_r^{-(n-t)} \left(\prod_{i=1}^{t} \lambda_i\right)^{-1/2}

assuming variance \lambda_i along the directions p_i for i \le t, and variance \sigma_r^2 along all the other directions.
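A hedged sketch of evaluating this truncated density; taking sigma_r^2 to be the mean of the discarded eigenvalues follows the reconstruction above and is an assumption, not something stated explicitly in the transcript.

```python
import numpy as np

def truncated_log_density(x, x_bar, lam, P, t):
    """Evaluate M_t = sum_i b_i^2/lam_i + |r|^2/sigma_r^2 (for the first t modes)
    and return log p_t(x) = log k_t - 0.5 * M_t.  Assumes t < len(lam) and that
    lam is sorted in decreasing order."""
    n = x_bar.size
    P_t = P[:, :t]
    b_t = P_t.T @ (x - x_bar)
    r2 = np.sum((x - x_bar) ** 2) - np.sum(b_t ** 2)   # residual energy
    sigma_r2 = lam[t:].mean()                          # mean of discarded eigenvalues
    M_t = np.sum(b_t ** 2 / lam[:t]) + r2 / sigma_r2
    log_k_t = (-0.5 * n * np.log(2 * np.pi)
               - 0.5 * (n - t) * np.log(sigma_r2)
               - 0.5 * np.sum(np.log(lam[:t])))
    return log_k_t - 0.5 * M_t
```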

Useful Trick

If x is of high dimension, S is huge. If the number of samples N < dim(x), use

D = (x_1 - \bar{x}, \dots, x_N - \bar{x})

S = \frac{1}{N} D D^T, \qquad T = \frac{1}{N} D^T D \quad (\text{only } N \times N)

If \lambda_i is an eigenvalue of T with eigenvector u_i, then \lambda_i is an eigenvalue of S with eigenvector D u_i.
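A minimal numpy sketch of this trick; the thresholding of near-zero eigenvalues is my addition, to discard the null direction introduced by centring the data.

```python
import numpy as np

def eigen_small_sample(X):
    """When the number of samples N is much smaller than dim(x), diagonalise
    the small N x N matrix T = D^T D / N instead of the huge S = D D^T / N."""
    x_bar = X.mean(axis=0)
    D = (X - x_bar).T                          # columns are x_i - x_bar, shape (dim, N)
    N = X.shape[0]
    T = D.T @ D / N                            # only N x N
    lam, U = np.linalg.eigh(T)
    order = np.argsort(lam)[::-1]              # largest eigenvalue first
    lam, U = lam[order], U[:, order]
    keep = lam > lam.max() * 1e-10             # drop null directions from centring
    lam, U = lam[keep], U[:, keep]
    V = D @ U                                  # eigenvectors of S, up to scale
    V /= np.linalg.norm(V, axis=0)             # rescale to unit length
    return x_bar, lam, V

# Example: 10 samples of dimension 1000
X = np.random.default_rng(2).normal(size=(10, 1000))
x_bar, lam, V = eigen_small_sample(X)
```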

Building Eigen-Models

Given examples \{g_i\}, compute the mean \bar{g} and the eigenvectors of the covariance matrix. The model is then

g = \bar{g} + P b

P – the first t eigenvectors of the covariance matrix

b – the model parameters
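To generate plausible new examples from such a model, one can sample the parameters within a few standard deviations of the mean; limiting each b_i to roughly ±3·sqrt(lambda_i) is a common convention, assumed here rather than taken from the slides.

```python
import numpy as np

def synthesise(g_bar, P, lam, rng, limit=3.0):
    """Generate a plausible new example g = g_bar + P b, sampling each
    parameter b_i within +/- limit standard deviations sqrt(lam_i)."""
    b = rng.uniform(-limit, limit, size=lam.size) * np.sqrt(lam)
    return g_bar + P @ b

# Example with a toy 3-D model keeping t = 2 modes
rng = np.random.default_rng(3)
g_bar = np.array([10.0, 10.0, 10.0])
P = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])   # first two eigenvectors as columns
lam = np.array([4.0, 1.0])
g_new = synthesise(g_bar, P, lam, rng)
```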

Eigen-Face models

Model of variation in a region

[Figure: the effect of varying the first four parameters b_1, b_2, b_3, b_4 of the model g = \bar{g} + P b]

Applications: Locating objects

Scan a window over the target region. At each position: sample, normalise, and evaluate p(g). Select the position with the largest p(g).
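A brute-force sketch of this scan; model_log_p stands for any texture-model density (for example the truncated Gaussian above) and is a placeholder name, not part of the slides.

```python
import numpy as np

def locate_object(image, model_log_p, win):
    """Scan a window over the image; at each position sample the window,
    normalise it, evaluate log p(g), and keep the best-scoring position."""
    h, w = win
    best_pos, best_score = None, -np.inf
    for r in range(image.shape[0] - h + 1):
        for c in range(image.shape[1] - w + 1):
            g = image[r:r + h, c:c + w].ravel().astype(float)   # raster-scan sample
            g = (g - g.mean()) / (g.std() + 1e-12)              # linear normalisation
            score = model_log_p(g)
            if score > best_score:
                best_pos, best_score = (r, c), score
    return best_pos, best_score

# model_log_p would typically wrap the texture model density, e.g. a closure
# around truncated_log_density(...) with the trained mean and eigenvectors.
```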

Multi-Resolution Search

Train models at each level of a Gaussian pyramid (step size 2), using the same points but different local models. Start the search at coarse resolution, then refine at the finer resolutions.
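A rough coarse-to-fine sketch under these assumptions; the box-filter downsample is a crude stand-in for a proper Gaussian pyramid, and search is a placeholder for a single-level search such as the window scan above.

```python
import numpy as np

def downsample(image):
    """Crude pyramid step: 2x2 box filter followed by subsampling by 2."""
    h, w = (image.shape[0] // 2) * 2, (image.shape[1] // 2) * 2
    im = image[:h, :w]
    return 0.25 * (im[0::2, 0::2] + im[1::2, 0::2] + im[0::2, 1::2] + im[1::2, 1::2])

def coarse_to_fine(image, models, search):
    """Run the search with the coarsest-level model first, then refine the
    estimate at each finer level (positions double between levels)."""
    pyramid = [image]
    for _ in range(len(models) - 1):
        pyramid.append(downsample(pyramid[-1]))
    pos = None
    for level in reversed(range(len(models))):      # coarsest level first
        if pos is not None:
            pos = (pos[0] * 2, pos[1] * 2)          # map the estimate to the finer level
        pos = search(pyramid[level], models[level], start=pos)
    return pos
```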

Application: Object Detection

Scan the image to find the points with the largest p(g). If p(g) > p_min, the object is present. Strictly, a background model should be used, declaring a detection when

p_{model}(g)\, P(model) > p_{background}(g)\, P(background)

This only works if the PDFs are good approximations, which is often not the case.

Back (sadly) to Texture Models

[Figure: the region is sampled into a texture vector by raster scan]

Normalizations

PCA Galore

Reduce the dimension of the shape vector.

Reduce the dimension of the "texture" vector.

The resulting parameters are still correlated, so repeat the PCA on the combined parameter vector.

Object/Image to Parameters

[Figure: modelling maps an object/image to a compact parameter vector, here roughly 80 parameters]

Playing with the Parameters

[Figures: the first two modes of shape variation, the first two modes of grey-level variation, and the first four modes of combined appearance variation]

Active Appearance Model Search

Given: the full training model set, a new image to be interpreted, and a "reasonable" starting approximation.

Goal: find the model instance with the least approximation error.

This is a high-dimensional search: the curse of dimensionality strikes again.

Active Appearance Model Search

Trick: each optimisation is a similar problem, so the solution can be learnt.

Assumption: linearity.

Perturb the model parameters by known amounts.

Generate the perturbed image and sample the error.

Learn a multivariate regression from many such perturbations.
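A sketch of learning that regression; residual is a placeholder for a function that returns the normalised texture error at given model parameters, which the slides do not spell out.

```python
import numpy as np

def learn_update_matrix(c_true, residual, n_samples, scale, rng):
    """Learn R such that dc ≈ R @ dg: apply known random perturbations dc
    around the true parameters, record the induced texture error dg, and
    solve the multivariate least-squares problem."""
    dC, dG = [], []
    for _ in range(n_samples):
        dc = rng.normal(scale=scale, size=c_true.size)   # known perturbation
        dG.append(residual(c_true + dc))                 # induced error sample
        dC.append(dc)
    dC, dG = np.stack(dC), np.stack(dG)
    A, *_ = np.linalg.lstsq(dG, dC, rcond=None)          # solve dC ≈ dG @ A
    return A.T                                           # prediction: dc_hat = R @ dg
```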

Active Appearance Model Search

Algorithm: maintain a current estimate of the model parameters and the normalised image sample taken at that estimate.

Active Appearance Model Search

Slightly different modeling:

Error term:

Taylor expansion (with linear assumption)

Min (RMS sense) error:

Systematically perturb the parameters and estimate the derivatives by numerical differentiation.
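The equations this slide refers to do not survive the transcript; a standard reconstruction, following the usual AAM formulation, is roughly:

```latex
% Error term: difference between the image sample and the model texture at parameters c
\delta g(c) = g_s(c) - g_m(c)

% First-order Taylor expansion (linearity assumption)
\delta g(c + \delta c) \approx \delta g(c) + \frac{\partial\, \delta g}{\partial c}\,\delta c

% Minimising |\delta g(c + \delta c)|^2 in the RMS sense gives the update
\delta c = -R\,\delta g(c), \qquad
R = \left(\frac{\partial\, \delta g}{\partial c}^{T}\frac{\partial\, \delta g}{\partial c}\right)^{-1}
    \frac{\partial\, \delta g}{\partial c}^{T}
```

Here g_s is the image sample and g_m the model texture; the derivative matrix is estimated offline by the systematic perturbations mentioned above.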

Active Appearance Model Search (Results)

[Figure: search results on sub-cortical structures, showing the initial position and the converged fit]

Random Aside

Shape Vector provides alignment


Alexei (Alyosha) Efros, 15-463 (15-862): Computational Photography, http://graphics.cs.cmu.edu/courses/15-463/2005_fall/www/Lectures/faces.ppt

Random Aside

Alignment is the key

1. Warp to mean shape

2. Average pixels

Alexei (Alyosha) Efros, 15-463 (15-862): Computational Photography, http://graphics.cs.cmu.edu/courses/15-463/2005_fall/www/Lectures/faces.ppt

Random Aside

Enhancing Gender

[Figure: facial gender manipulation, ranging from "more same" through the original and androgynous versions to "more opposite"]

D. Rowland, D. Perrett. “Manipulating Facial Appearance through Shape and Color”, IEEE Computer Graphics and Applications, Vol. 15, No. 5: September 1995, pp. 70-76

Random Aside (can’t escape structure!)

Alexei (Alyosha) Efros, 15-463 (15-862): Computational Photography, http://graphics.cs.cmu.edu/courses/15-463/2005_fall/www/Lectures/faces.ppt

Antonio Torralba & Aude Oliva (2002)

Averages: hundreds of images containing a person are averaged to reveal regularities in the intensity patterns across all the images.

Random Aside (can’t escape structure!)

“100 Special Moments” by Jason Salavon

Jason Salavon, http://salavon.com/PlayboyDecades/PlayboyDecades.shtml

Random Aside (can’t escape structure!)

“Every Playboy Centerfold, The Decades (normalized)” by Jason Salavon

1960s, 1970s, 1980s

Jason Salavon, http://salavon.com/PlayboyDecades/PlayboyDecades.shtml
