
Page 1: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Machine Learning for Signal Processing

Data driven representations: 1. Eigenrepresentations

Class 7. 22 Sep 2016

Instructor: Bhiksha Raj

11-755/18-797 1

Page 2: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Linear Algebra Reminders: 1

• A matrix transforms a spheroid into an ellipsoid

• The eigenvectors of the matrix are the vectors that do not change direction under this transformation

11-755/18-797 2

𝐲 = 𝐀𝐱

Page 3: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Linear Algebra Reminders: 2

• A matrix transforms the orthogonal set of right singular vectors to the orthogonal set of left singular vectors

– These are the major axes of the ellipsoid obtained from the spheroid

– The scaling factors are the singular values

11-755/18-797 3

𝐲 = 𝐀𝐱

𝐀 = 𝐔𝐒𝐕𝑇

[Figure: right singular vectors 𝑽𝟏, 𝑽𝟐 are mapped to the left singular vectors 𝑼𝟏, 𝑼𝟐, scaled by 𝑠1, 𝑠2]

Page 4: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Linear Algebra Reminders: 3

• For a symmetric matrix, the left and right singular vectors are identical

– Orthogonal vectors which do not change direction under the transform

– These are the major axes of the ellipsoid obtained from the spheroid

• These are also the eigenvectors of the matrix – Since they do not change direction

11-755/18-797 5

𝐲 = 𝐀𝐱

𝐀 = 𝐀^T  ⟹  𝐔𝐒𝐕^T = 𝐕𝐒𝐔^T  ⟹  𝐀 = 𝐔𝐒𝐔^T  (𝐔 = 𝐕)

[Figure: for a symmetric matrix the axes 𝑼𝟏, 𝑼𝟐 are scaled by 𝑠1, 𝑠2 but do not change direction]

Page 5: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Linear Algebra Reminders: 4 –> SVD

• SVD decomposes a matrix into the sum of a sequence of “unit-energy” matrices, weighted by the corresponding singular values

• Retaining only the “high-singular-value” components retains most of the energy in the matrix

11-755/18-797 6

Page 6: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

SVD on data-container matrices

11-755/18-797 7

𝐗 = 𝐔𝐒𝐕^T

𝐗 = Σ_𝑖 𝑠_𝑖 𝑈_𝑖 𝑉_𝑖^T ,    with   ‖𝑈_𝑖 𝑉_𝑖^T‖_F^2 = 1

Page 7: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

SVD decomposes the data

• Each left singular vector and the corresponding right singular vector contribute one “basic” component to the data

• The “magnitude” of its contribution is the corresponding singular value

11-755/18-797 8

𝐗 = [𝑋_1 𝑋_2 ⋯ 𝑋_𝑁],    𝐗 = 𝐔𝐒𝐕^T

𝐗 = 𝑠_1𝑈_1𝑉_1^T + 𝑠_2𝑈_2𝑉_2^T + 𝑠_3𝑈_3𝑉_3^T + 𝑠_4𝑈_4𝑉_4^T + ...

Page 8: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Expanding the SVD

• Each left singular vector and the corresponding right singular vector contribute one “basic” component to the data

• The “magnitude” of its contribution is the corresponding singular value

• Low singular-value components contribute little, if anything

– Carry little information

– Are often just “noise” in the data 11-755/18-797 9

𝐗 = 𝑠_1𝑈_1𝑉_1^T + 𝑠_2𝑈_2𝑉_2^T + 𝑠_3𝑈_3𝑉_3^T + 𝑠_4𝑈_4𝑉_4^T + ...

Page 9: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Expanding the SVD

• Low singular-value components contribute little, if anything

– Carry little information

– Are often just “noise” in the data

• Data can be recomposed using only the “major” components with minimal change of value

– Minimum squared error between original data and recomposed data

– Sometimes eliminating the low-singular-value components will, in fact “clean” the data

11-755/18-797 10

𝐗 = 𝑠_1𝑈_1𝑉_1^T + 𝑠_2𝑈_2𝑉_2^T + 𝑠_3𝑈_3𝑉_3^T + 𝑠_4𝑈_4𝑉_4^T + ...

𝐗 ≈ 𝑠_1𝑈_1𝑉_1^T + 𝑠_2𝑈_2𝑉_2^T
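A minimal MATLAB-style sketch of this truncation (illustrative; the data matrix X and the number of retained components K are assumed to be given):

[U, S, V] = svd(X, 'econ');                % thin SVD of the data matrix
Xk = U(:,1:K) * S(1:K,1:K) * V(:,1:K)';    % keep only the K largest singular values
err = norm(X - Xk, 'fro')^2;               % squared error of the rank-K recomposition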

Page 10: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

• On with the topic for today…

11-755/18-797 11

Page 11: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Recall: Representing images

• The most common element in the image: background

– Or rather large regions of relatively featureless shading

– Uniform sequences of numbers

11-755/18-797 12

Page 12: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Adding more bases

• Checkerboards with different variations

11-755/18-797 13

PROJECTION = 𝐁𝐖,    𝐖 = pinv(𝐁) · Image

𝐁 = [𝐵_1 𝐵_2 𝐵_3 ⋯],    𝐖 = [𝑤_1 𝑤_2 𝑤_3 ⋯]^T

Image ≈ 𝑤_1𝐵_1 + 𝑤_2𝐵_2 + 𝑤_3𝐵_3 + ...

B1 B2 B3 B4 B5 B6

Getting closer at 625 bases!
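A small sketch of this projection step (illustrative; B is assumed to hold one vectorized checkerboard basis per column and img a vectorized image of matching length):

W = pinv(B) * img;      % weight for each basis
proj = B * W;           % projection of the image onto the span of the bases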

Page 13: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

“Bases”

• “Bases” are the “standard” units such that all instances can be expressed as weighted combinations of these units

• Ideal requirements: Bases must be orthogonal

• Checkerboards are one choice of bases – Orthogonal

– But not “smooth”

• Other choices of bases: Complex exponentials, Wavelets, etc..

11-755/18-797 14

B1 B2 B3 B4 B5 B6

image ≈ 𝑤_1𝐵_1 + 𝑤_2𝐵_2 + 𝑤_3𝐵_3 + ...

Page 14: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Data specific bases?

• Issue: The bases we have considered so far are data agnostic

– Checkerboards, Complex exponentials, Wavelets..

– We use the same bases regardless of the data we analyze

• Image of a face vs. image of a forest

• Segment of speech vs. seismic rumble

• How about data-specific bases?

– Bases that consider the underlying data

• E.g. is there something better than checkerboards to describe faces?

• Something better than complex exponentials to describe music?

11-755/18-797 15

Page 15: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Recall: Basis based representation

• The most important challenge in ML: Find the best set of bases for a given data set

11-755/18-797 16

[Figure: two candidate sets of basis directions, u1 u2 u3 and v1 v2 v3, for the same data]

Page 16: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

The Energy Compaction Property

• Define “better”?

• The description

• The ideal:

– If the description is terminated at any point, we should

still get most of the information about the data

• Error should be small

11-755/18-797 17

𝐗 = 𝑤_1𝐵_1 + 𝑤_2𝐵_2 + 𝑤_3𝐵_3 + ... + 𝑤_𝑁𝐵_𝑁

𝐗̂_𝑖 = 𝑤_1𝐵_1 + 𝑤_2𝐵_2 + ... + 𝑤_𝑖𝐵_𝑖

𝐸𝑟𝑟𝑜𝑟_𝑖 = ‖𝐗 − 𝐗̂_𝑖‖

𝐸𝑟𝑟𝑜𝑟_{𝑖+1} ≤ 𝐸𝑟𝑟𝑜𝑟_𝑖

Page 17: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

• A collection of images

– All normalized to 100x100 pixels

• What is common among all of them?

– Do we have a common descriptor?

11-755/18-797 18

Data-specific description of faces

Page 18: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

A typical face

• Assumption: There is a “typical” face that captures most of

what is common to all faces

– Every face can be represented by a scaled version of a typical face

– We will denote this face as V

• Approximate every face f as f = wf V

• Estimate V to minimize the squared error

– How? What is V?

11-755/18-797 19

The typical face

Page 19: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

A collection of least squares typical faces

• Assumption: There is a set of K “typical” faces that together capture most of what is common to all faces

• Approximate every face f as f = wf,1 V1 + wf,2 V2 + wf,3 V3 + .. + wf,k Vk

– V2 is used to “correct”, on average, the errors that result from using only V1

– V3 corrects errors remaining after correction with V2

– And so on..

– V = [V1 V2 V3 ...]

• Estimate V to minimize the squared error

– How? What is V?

11-755/18-797 20

‖ 𝑓 − 𝑤_{𝑓,1}𝑉_1 − 𝑤_{𝑓,2}𝑉_2 ‖^2 = ‖ (𝑓 − 𝑤_{𝑓,1}𝑉_1) − 𝑤_{𝑓,2}𝑉_2 ‖^2

‖ 𝑓 − 𝑤_{𝑓,1}𝑉_1 − 𝑤_{𝑓,2}𝑉_2 − 𝑤_{𝑓,3}𝑉_3 ‖^2 = ‖ (𝑓 − 𝑤_{𝑓,1}𝑉_1 − 𝑤_{𝑓,2}𝑉_2) − 𝑤_{𝑓,3}𝑉_3 ‖^2

Page 20: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

A recollection

11-755/18-797 21

M =

N =

S=Pinv(N) M

? U =

• Finding the best explanation of music M in terms of notes N

• Also finds the score S of M in terms of N

𝐒 = pinv(𝐍) 𝐌

𝐔 = 𝐍𝐒 ≈ 𝐌

Page 21: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

How about the other way?

11-755/18-797 22

N = M Pinv(S)

M =

N = ? ?

S =

U =

• Finding the notes N given music M and score S • Also finds best explanation of M in terms of S

𝐍 = 𝐌 pinv(𝐒)

𝐔 = 𝐍𝐒 ≈ 𝐌
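A compact sketch of these two least-squares sub-problems (illustrative; M, N and S are assumed to be given wherever they appear on the right-hand side):

S = pinv(N) * M;     % best score for the music M given the notes N
N = M * pinv(S);     % best notes given the music M and the score S
U = N * S;           % the resulting approximation to M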

Page 22: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Finding Everything

11-755/18-797 23

Find the four notes and their score that generate the closest approximation to M

M =

N = ? ?

S =

U =

?

𝐔 = 𝐍𝐒 ≈ 𝐌

Page 23: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

The Same Problem

• Here 𝑈, 𝑉 and 𝑊 are all unknown and must be estimated

– Such that the total squared error between 𝐹 and 𝑈 is minimized

• For each face 𝑓

– 𝑓 = 𝑤𝑓,1𝑉1 + 𝑤𝑓,2𝑉2 + ⋯+ 𝑤𝑓,𝐾𝑉𝐾

• For the collection of faces 𝐹 ≈ 𝑉𝑊

– 𝑉 is 𝐷 × 𝐾, 𝑊 is 𝐾 × 𝑁

• 𝐷 is the number of pixels, 𝑁 is the number of faces in the set

11-755/18-797 24

F =

U = Approximation

W

V

Typical faces

Page 24: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Abstracting the problem: Finding the FIRST typical face

• Each “point” represents a face in “pixel space”

11-755/18-797 25

Pixel 1

Pixel 2

Page 25: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Abstracting the problem: Finding the FIRST typical face

• Each “point” represents a face in “pixel space”

• Any “typical face” V is a vector in this space

11-755/18-797 26

Pixel 1

Pixel 2

V

Page 26: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Abstracting the problem: Finding the FIRST typical face

• Each “point” represents a face in “pixel space”

• The “typical face” V is a vector in this space

• The approximation wf V for any face f is the projection of f onto V

• The distance between f and its projection wfV is the projection error for f

11-755/18-797 27

Pixel 1

Pixel 2

V

f

error

Page 27: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Abstracting the problem: Finding the FIRST typical face

• Every face in our data will suffer error when approximated by its projection on V

• The total squared length of all error lines is the total squared projection error

11-755/18-797 28

Pixel 1

Pixel 2

V

Page 28: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Abstracting the problem: Finding the FIRST typical face

• The problem of finding the first typical face V1: Find the V for which the total projection error is minimum!

• This “minimum squared error” V is our “best” first typical face

• It is also the first Eigen face

11-755/18-797 29

Pixel 1

Pixel 2

V


Page 32: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Abstracting the problem: Finding the FIRST typical face

• The problem of finding the first typical face V1: Find the V for which the total projection error is minimum!

• This “minimum squared error” V is our “best” first typical face

• It is also the first Eigen face

11-755/18-797 33

Pixel 1

Pixel 2

V1

Page 33: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Formalizing the Problem: Error from approximating a single vector

• Consider: approximating x = wv

– E.g x is a face, and “v” is the “typical face”

• Finding an approximation wv which is closest to x

– In a Euclidean sense

– Basically projecting x onto v

11-755/18-797 34

x

y

v

x

Approximating: x = wv

Page 34: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Formalizing the Problem: Error from approximating a single vector

• Projection of a vector x on to a vector v

• Assuming v is of unit length:

11-755/18-797 35

x̂ = (𝐯^T𝐱 / 𝐯^T𝐯) 𝐯        (projection of 𝐱 onto 𝐯)

x̂ = (𝐯^T𝐱) 𝐯                (when 𝐯 has unit length)

error = 𝐱 − x̂ = 𝐱 − 𝐯𝐯^T𝐱

squared error = ‖𝐱 − (𝐯^T𝐱) 𝐯‖^2

Approximating: x = wv

Page 35: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Error from approximating a single vector

• Projection: x̂ = (𝐯^T𝐱) 𝐯

• Squared length of projection: ‖x̂‖^2 = (𝐯^T𝐱)^2 = (𝐯^T𝐱)^T(𝐯^T𝐱) = 𝐱^T𝐯𝐯^T𝐱

• Pythagoras' theorem: squared length of error 𝑒(𝐱) = ‖𝐱‖^2 − ‖x̂‖^2

𝑒(𝐱) = 𝐱^T𝐱 − 𝐱^T𝐯𝐯^T𝐱

11-755/18-797 36

x

y

v

x

(vTx)v

x-(vTx)v
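A tiny numeric sketch of these formulas (illustrative values chosen for clarity):

x = [3; 4];                          % the vector to be approximated
v = [1; 0];                          % a unit-length basis
xhat = (v' * x) * v;                 % projection (v'x)v = [3; 0]
e = x' * x - x' * (v * v') * x;      % squared error = 25 - 9 = 16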

Page 36: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Error for many vectors

• Error for one vector:

• Error for many vectors

• Goal: Estimate v to minimize this error!

11-755/18-797 37

𝑒(𝐱) = 𝐱^T𝐱 − 𝐱^T𝐯𝐯^T𝐱

𝐸 = Σ_𝑖 𝑒(𝐱_𝑖) = Σ_𝑖 (𝐱_𝑖^T𝐱_𝑖 − 𝐱_𝑖^T𝐯𝐯^T𝐱_𝑖) = Σ_𝑖 𝐱_𝑖^T𝐱_𝑖 − Σ_𝑖 𝐱_𝑖^T𝐯𝐯^T𝐱_𝑖

Page 37: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Error for many vectors

• Total error:

• Add constraint: vTv = 1

• Constrained objective to minimize:

11-755/18-797 38

𝐸 = Σ_𝑖 𝐱_𝑖^T𝐱_𝑖 − Σ_𝑖 𝐱_𝑖^T𝐯𝐯^T𝐱_𝑖

𝐸 = Σ_𝑖 𝐱_𝑖^T𝐱_𝑖 − Σ_𝑖 𝐱_𝑖^T𝐯𝐯^T𝐱_𝑖 + 𝜆(𝐯^T𝐯 − 1)

Page 38: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Two Matrix Identities

• Derivative w.r.t v

11-755/18-797 39

𝑑/𝑑𝐯 (𝐱^T𝐯𝐯^T𝐱) = 2𝐱𝐱^T𝐯

𝑑/𝑑𝐯 (𝐯^T𝐯) = 2𝐯

𝐸 = Σ_𝑖 𝐱_𝑖^T𝐱_𝑖 − Σ_𝑖 𝐱_𝑖^T𝐯𝐯^T𝐱_𝑖 + 𝜆(𝐯^T𝐯 − 1)

Page 39: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Minimizing error

• Differentiating w.r.t v and equating to 0

11-755/18-797 40

𝐸 = Σ_𝑖 𝐱_𝑖^T𝐱_𝑖 − Σ_𝑖 𝐱_𝑖^T𝐯𝐯^T𝐱_𝑖 + 𝜆(𝐯^T𝐯 − 1)

𝑑𝐸/𝑑𝐯 = −2 Σ_𝑖 𝐱_𝑖𝐱_𝑖^T 𝐯 + 2𝜆𝐯 = 0

⟹  (Σ_𝑖 𝐱_𝑖𝐱_𝑖^T) 𝐯 = 𝜆𝐯

Page 40: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Definition: The correlation matrix

• The encircled term is the correlation matrix

11-755/18-797 41

(Σ_𝑖 𝐱_𝑖𝐱_𝑖^T) 𝐯 = 𝜆𝐯

X = Data Matrix

XT =

Transposed Data Matrix

Correlation =

𝐗 = [𝐱_1 𝐱_2 ... 𝐱_𝑁],    Σ_𝑖 𝐱_𝑖𝐱_𝑖^T = 𝐗𝐗^T = 𝐑

Page 41: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

A detour: The correlation matrix

11-755/18-797 42

Page 42: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

A new definition: The correlation matrix

• For data-holder matrices: the product of a matrix and its transpose

– Also equal to the sum of the outer products of the columns of the matrix

– The correlation matrix is symmetric

– It quantifies the average dependence of individual components of the data on other components

11-755/18-797 43

X = Data Matrix

XT =

Transposed Data Matrix

Correlation =

𝐗 = [𝐱_1 𝐱_2 ... 𝐱_𝑁],    Σ_𝑖 𝐱_𝑖𝐱_𝑖^T = 𝐗𝐗^T = 𝐑

𝐑 = 𝐑^T

𝐱_𝑖 are 𝐷-dimensional, so 𝐑 is 𝐷 × 𝐷

Page 43: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Interpreting the correlation matrix

• Consider the effect of multiplying a unit vector

by 𝐑

11-755/18-797 44

𝐯

? 𝐲 = 𝐑𝐯

Page 44: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Interpreting the correlation matrix

• Consider the effect of multiplying a unit vector

by 𝐑

11-755/18-797 45

𝐲 = 𝐑𝐯,    𝐑𝐯 = 𝐗𝐗^T𝐯

𝐑𝐯 = Σ_𝑖 𝐱_𝑖 (𝐱_𝑖^T𝐯) = Σ_𝑖 (𝐱_𝑖^T𝐯) 𝐱_𝑖

Page 45: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

The inner product term

• Consider 𝐱𝑖𝑇v 𝐱𝑖

• This is the projection of unit vector v on 𝐱𝑖, scaled by the squared length of 𝐱𝑖

11-755/18-797 46

Understanding (𝐱_𝑖^T𝐯) 𝐱_𝑖

(𝐱_𝑖^T𝐯) 𝐱_𝑖 = ‖𝐱_𝑖‖^2 cos𝜃_𝑖 𝐱̂_𝑖     (𝜃_𝑖 is the angle between 𝐯 and 𝐱_𝑖, and 𝐱̂_𝑖 is the unit vector along 𝐱_𝑖)

Page 46: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

The inner product term

• Consider 𝐱𝑖𝑇v 𝐱𝑖

• This is the projection of unit vector v on 𝐱𝑖, scaled by the squared length of 𝐱𝑖

11-755/18-797 47

Understanding (𝐱_𝑖^T𝐯) 𝐱_𝑖

(𝐱_𝑖^T𝐯) 𝐱_𝑖 = ‖𝐱_𝑖‖^2 cos𝜃_𝑖 𝐱̂_𝑖

Page 47: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Interpreting the correlation matrix

• Consider the effect of multiplying a unit vector

by 𝐑

11-755/18-797 48

𝐲 = 𝐑𝐯,    𝐑𝐯 = Σ_𝑖 (𝐱_𝑖^T𝐯) 𝐱_𝑖

𝐑𝐯 = Σ_𝑖 ‖𝐱_𝑖‖^2 cos𝜃_𝑖 𝐱̂_𝑖

Page 48: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Interpreting the correlation matrix

• Each unit vector is transformed to the cosine and energy weighted sum of the individual vectors

– Dragged towards longer vectors in proportion to square of length

– Dragged toward closer (in angle) vectors

11-755/18-797 49

𝐯

Where will Rv be?

Page 49: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Interpreting the correlation matrix

• Each unit vector is transformed to the cosine and energy weighted sum of the individual vectors

– Dragged towards longer vectors in proportion to square of length

– Dragged toward closer (in angle) vectors

11-755/18-797 50

𝐯 R𝐯

Page 50: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Interpreting the correlation matrix

• The unit sphereoid is converted to an ellipsoid

– The major axes point to the directions of greatest energy

– These are the eigenvectors

– Their length is proportional to the square of the lengths of the data vectors

• Why? 11-755/18-797 51

“Centered” zero-mean data

Page 51: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

“Uncorrelated” data

• When the average number of data instances to one side of an axis is the same as the number to the other side, the axis is not rotated

– Positive and negative rotations cancel out

– The data are “uncorrelated” 11-755/18-797 52

“Centered” zero-mean data

𝐑𝐯 = Σ_𝑖 ‖𝐱_𝑖‖^2 cos𝜃_𝑖 𝐱̂_𝑖

Page 52: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Interpreting the correlation matrix

• For “uncentered” data..

11-755/18-797 53

“Uncentered”

Page 53: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Returning to the problem of finding the best basis

11-755/18-797 54

Page 54: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Minimizing error

• Differentiating w.r.t v and equating to 0

11-755/18-797 55

𝐸 = Σ_𝑖 𝐱_𝑖^T𝐱_𝑖 − Σ_𝑖 𝐱_𝑖^T𝐯𝐯^T𝐱_𝑖 + 𝜆(𝐯^T𝐯 − 1)

𝑑𝐸/𝑑𝐯 = −2 Σ_𝑖 𝐱_𝑖𝐱_𝑖^T 𝐯 + 2𝜆𝐯 = 0

⟹  (Σ_𝑖 𝐱_𝑖𝐱_𝑖^T) 𝐯 = 𝜆𝐯

Page 55: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

The best “basis”

• The minimum-error basis is found by solving

• v is an Eigen vector of the correlation matrix R

– 𝜆 is the corresponding Eigen value

11-755/18-797 56

𝐑𝐯 = 𝜆𝐯

Page 56: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

What about the total error?

• xTv = vTx (inner product)

11-755/18-797 57

𝐸 = Σ_𝑖 𝐱_𝑖^T𝐱_𝑖 − Σ_𝑖 𝐱_𝑖^T𝐯𝐯^T𝐱_𝑖

Σ_𝑖 𝐱_𝑖^T𝐯𝐯^T𝐱_𝑖 = 𝐯^T(Σ_𝑖 𝐱_𝑖𝐱_𝑖^T)𝐯 = 𝐯^T𝐑𝐯 = 𝜆𝐯^T𝐯 = 𝜆

𝐸 = Σ_𝑖 𝐱_𝑖^T𝐱_𝑖 − 𝜆

Page 57: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Minimizing the error

• The total error is

• We already know that the optimal basis is an Eigen vector

• The total error depends on the negative of the corresponding Eigen value

• To minimize the error, we must maximize 𝜆

• i.e. Select the Eigen vector with the largest Eigen value

11-755/18-797 58

𝐸 = Σ_𝑖 𝐱_𝑖^T𝐱_𝑖 − 𝜆

Page 58: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

The typical face

• Compute the correlation matrix for your data – Arrange them in matrix X and compute R = XXT

• Compute the principal Eigen vector of R – The Eigen vector with the largest Eigen value

• This is the typical face

11-755/18-797 59

The typical face
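A minimal MATLAB-style sketch of this recipe (illustrative; X is assumed to be a D x N matrix with one vectorized face per column):

R = X * X';                 % correlation matrix of the faces
[E, L] = eig(R);            % eigenvectors in the columns of E, eigenvalues on diag(L)
[~, i] = max(diag(L));
v1 = E(:, i);               % principal eigenvector = the "typical face"
w  = v1' * X;               % per-face scale factors; v1*w is the rank-1 approximation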

Page 59: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

The approximation with the first typical face

• The first typical face models some of the characteristics of the faces

• Simply by scaling its grey level

• But the approximation has error

• The second typical face must explain some of this error

11-755/18-797 60

Page 60: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

?

The second typical face

• Approximation with only the first typical face has error

• The second face must explain this error

• How do we find this face?

11-755/18-797 61

The first typical face

The second typical face?

Page 61: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Solution: Iterate

• Get the “error” faces by subtracting the first-level approximation from the original image

• Repeat the estimation on the “error” images

11-755/18-797 62



Page 63: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Abstracting the problem: Finding the second typical face

• Each “point” represents an error face in “pixel space”

• Find the vector V2 such that the projection of these error faces on V2 results in the least error

11-755/18-797 64

Pixel 1

Pixel 2

ERROR

FACES

Page 64: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Minimizing error

• Differentiating w.r.t v and equating to 0

11-755/18-797 65

𝐸 = Σ_𝑖 𝐞_𝑖^T𝐞_𝑖 − Σ_𝑖 𝐞_𝑖^T𝐯𝐯^T𝐞_𝑖 + 𝜆(𝐯^T𝐯 − 1)

𝑑𝐸/𝑑𝐯 = −2 Σ_𝑖 𝐞_𝑖𝐞_𝑖^T 𝐯 + 2𝜆𝐯 = 0   ⟹   (Σ_𝑖 𝐞_𝑖𝐞_𝑖^T) 𝐯 = 𝜆𝐯

Pixel 1

Pixel 2

ERROR

FACES

The same math applies, but now to the set of error data points

Page 65: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Minimizing error

11-755/18-797 66

Pixel 1

Pixel 2

ERROR

FACES

The same math applies, but now to the set of error data points

• The minimum-error basis is found by solving

• v2 is an Eigen vector of the correlation matrix Re

corresponding to the largest eigen value of Re

𝐑_𝑒 𝐯_2 = 𝜆 𝐯_2 ,    𝐑_𝑒 = Σ_𝑖 𝐞_𝑖𝐞_𝑖^T
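A short sketch of this second step (illustrative; it continues the earlier sketch, so X and the first typical face v1 are assumed to be available):

E1 = X - v1 * (v1' * X);    % error faces after removing the first typical face
Re = E1 * E1';              % correlation matrix of the error faces
[Ev, Lv] = eig(Re);
[~, i] = max(diag(Lv));
v2 = Ev(:, i);              % second typical face, orthogonal to v1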

Page 66: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Which gives us our second typical face

• But approximation with the two faces will still result in error

• So we need more typical faces to explain this error

• We can do this by subtracting the appropriately scaled version of the second “typical” face from the error images and repeating the process

11-755/18-797 67

Page 67: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Solution: Iterate

• Get the second-level “error” faces by subtracting the scaled second typical face from the first-level error

• Repeat the estimation on the second-level “error” images

11-755/18-797 68

-

-

-

-

=

=

=

=

Error face Second-level error

Page 68: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

An interesting property

• Each “typical face” will be orthogonal to all other typical faces

– Because each of them is learned to explain what the rest could not

– None of these faces can explain one another!

11-755/18-797 69

Page 69: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

To add more faces

• We can continue the process, refining the error each time

– This is an instance of a procedure called “Gram-Schmidt” orthogonalization

• OR… we can do it all at once

11-755/18-797 70

Page 70: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

With many typical faces

• Approximate every face f as f = wf,1 V1+ wf,2 V2 +... + wf,k Vk

• Here W, V and U are ALL unknown and must be determined

– Such that the squared error between U and M is minimum

11-755/18-797 71

M =

U = Approximation

W

V

Typical faces

Page 71: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

With multiple bases

• Assumption: all bases v1 v2 v3.. are unit length

• Assumption: all bases are orthogonal to one another: viTvj = 0 if i != j

– We are trying to find the optimal K-dimensional subspace to project the data

– Any set of basis vectors in this subspace will define the subspace

– Constraining them to be orthogonal does not change this

• I.e. if V = [v1 v2 v3 … ], VTV = I

– Pinv(V) = VT

• Projection matrix for V = VPinv(V) = VVT

11-755/18-797 72
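A small sketch of the projection described above (illustrative; V is assumed to be D x K with orthonormal columns, and x a D x 1 vector):

P    = V * V';                  % projection matrix onto the K-dimensional subspace (pinv(V) = V')
xhat = P * x;                   % projection of x
e    = x' * x - x' * P * x;     % squared projection error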

Page 72: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

With multiple bases

• Projection for a vector

• Error vector =

• Error length =

11-755/18-797 73

x̂ = 𝐕𝐕^T𝐱

𝐱 − x̂ = 𝐱 − 𝐕𝐕^T𝐱

𝑒(𝐱) = 𝐱^T𝐱 − 𝐱^T𝐕𝐕^T𝐱

x

y

V

x

VVTx

x-VVTx

Represents a

K-dimensional subspace

Page 73: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

With multiple bases

• Error for one vector:

• Error for many vectors

• Goal: Estimate V to minimize this error!

11-755/18-797 74

𝑒(𝐱) = 𝐱^T𝐱 − 𝐱^T𝐕𝐕^T𝐱

𝐸 = Σ_𝑖 𝐱_𝑖^T𝐱_𝑖 − Σ_𝑖 𝐱_𝑖^T𝐕𝐕^T𝐱_𝑖

Page 74: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Minimizing error

• With constraint VTV = I, objective to minimize

– Note: now L is a diagonal matrix

– The constraint simply ensures that vTv = 1 for every basis

• Differentiating w.r.t V and equating to 0

11-755/18-797 75

𝐸 = Σ_𝑖 𝐱_𝑖^T𝐱_𝑖 − Σ_𝑖 𝐱_𝑖^T𝐕𝐕^T𝐱_𝑖 + trace(𝐋(𝐕^T𝐕 − 𝐈))

𝑑𝐸/𝑑𝐕 = −2 Σ_𝑖 𝐱_𝑖𝐱_𝑖^T 𝐕 + 2𝐕𝐋 = 0   ⟹   𝐑𝐕 = 𝐕𝐋

Page 75: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Finding the optimal K bases

• Compute the Eigendecomposition of the correlation matrix

• Select K Eigen vectors

• But which K?

• Total error =  Σ_𝑖 𝐱_𝑖^T𝐱_𝑖 − Σ_{𝑗=1}^{𝐾} 𝜆_𝑗

• Select K eigen vectors corresponding to the K largest Eigen values

11-755/18-797 76

𝐑𝐕 = 𝐕𝐋

Page 76: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Eigen Faces!

• Arrange your input data into a matrix X

• Compute the correlation R = XXT

• Solve the Eigen decomposition: RV = LV

• The Eigen vectors corresponding to the K largest eigen values are our optimal bases

• We will refer to these as eigen faces.

11-755/18-797 77
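A MATLAB-style sketch of this recipe (illustrative; X is the D x N data matrix and K the number of eigenfaces, both assumed given):

R = X * X';                            % correlation matrix
[E, L] = eig(R);                       % eigendecomposition: R*E = E*L
[~, order] = sort(diag(L), 'descend');
V = E(:, order(1:K));                  % the K eigenfaces (largest eigenvalues first)
W = V' * X;                            % weights that represent each face
Xhat = V * W;                          % least-squares approximation of the faces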

Page 77: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

How many Eigen faces

• How to choose “K” (number of Eigen faces)

• Lay all faces side by side in vector form to form a matrix

– In my example: 300 faces. So the matrix is 10000 x 300

• Multiply the matrix by its transpose

– The correlation matrix is 10000 x 10000

11-755/18-797 78

M = Data Matrix

MT =

Transposed Data Matrix

Correlation =

10000x300

300x10000

10000x10000

Page 78: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Eigen faces

• Compute the eigen vectors

– Only 300 of the 10000 eigen values are non-zero

• Why?

• Retain eigen vectors with high eigen values (>0)

– Could use a higher threshold

11-755/18-797 79

[U,S] = eig(correlation)

𝐒 = diag(𝜆_1, 𝜆_2, ..., 𝜆_10000)    (only the largest 300 are non-zero)

𝐔 = [eigenface1  eigenface2  ⋯]     (one eigenface per column)

Page 79: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Eigen Faces

• The eigen vector with the highest eigen value is the first typical face

• The vector with the second highest eigen value is the second typical face.

• Etc.

11-755/18-797 80

𝐔 = [eigenface1  eigenface2  ⋯]

eigenface1 eigenface2

eigenface3

Page 80: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Representing a face

• The weights with which the eigen faces must be combined to compose the face are used to represent the face!

11-755/18-797 81

= w1 + w2 + w3

Representation = [w1 w2 w3 …. ]T
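A two-line sketch of this representation (illustrative; V holds the eigenfaces in its columns and f is a vectorized face):

w = V' * f;          % representation [w1 w2 w3 ...]' of the face
f_approx = V * w;    % face recomposed from its eigenface weights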

Page 81: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Energy Compaction Example

• One outcome of the “energy compaction principle”: the approximations are recognizable

• Approximating a face with one basis:

11-755/18-797 82

𝑓 ≈ 𝑤_1𝐯_1

Page 82: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Energy Compaction Example

• One outcome of the “energy compaction principle”: the approximations are recognizable

• Approximating a face with one Eigenface:

11-755/18-797 83

𝑓 ≈ 𝑤_1𝐯_1

Page 83: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Energy Compaction Example

• One outcome of the “energy compaction principle”: the approximations are recognizable

• Approximating a face with 10 eigenfaces:

11-755/18-797 84

𝑓 ≈ 𝑤_1𝐯_1 + 𝑤_2𝐯_2 + ... + 𝑤_10𝐯_10

Page 84: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Energy Compaction Example

• One outcome of the “energy compaction principle”: the approximations are recognizable

• Approximating a face with 30 eigenfaces:

11-755/18-797 85

𝑓 ≈ 𝑤_1𝐯_1 + 𝑤_2𝐯_2 + ... + 𝑤_10𝐯_10 + ... + 𝑤_30𝐯_30

Page 85: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Energy Compaction Example

• One outcome of the “energy compaction principle”: the approximations are recognizable

• Approximating a face with 60 eigenfaces:

11-755/18-797 86

𝑓 ≈ 𝑤_1𝐯_1 + 𝑤_2𝐯_2 + ... + 𝑤_10𝐯_10 + ... + 𝑤_30𝐯_30 + ... + 𝑤_60𝐯_60

Page 86: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

How did I do this?

11-755/18-797 87

Page 87: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

How did I do this?

• Hint: only changing weights assigned to Eigen faces..

11-755/18-797 88

Page 88: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Class specificity

• The Eigenimages (bases) are very specific to the class of data they are trained on

– Faces here

• They will not be useful for other classes

11-755/18-797 89

eigenface1 eigenface2

eigenface3

Page 89: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Class specificity

• Eigen bases are class specific

• Composing a fishbowl from Eigenfaces

11-755/18-797 90

Page 90: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Class specificity

• Eigen bases are class specific

• Composing a fishbowl from Eigenfaces

• With 1 basis

11-755/18-797 91

𝑓 ≈ 𝑤_1𝐯_1

Page 91: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Class specificity

• Eigen bases are class specific

• Composing a fishbowl from Eigenfaces

• With 10 bases

11-755/18-797 92

𝑓 ≈ 𝑤_1𝐯_1 + 𝑤_2𝐯_2 + ... + 𝑤_10𝐯_10

Page 92: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Class specificity

• Eigen bases are class specific

• Composing a fishbowl from Eigenfaces

• With 30 bases

11-755/18-797 93

𝑓 ≈ 𝑤_1𝐯_1 + 𝑤_2𝐯_2 + ... + 𝑤_10𝐯_10 + ... + 𝑤_30𝐯_30

Page 93: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Class specificity

• Eigen bases are class specific

• Composing a fishbowl from Eigenfaces

• With 100 bases

11-755/18-797 94

𝑓 ≈ 𝑤_1𝐯_1 + 𝑤_2𝐯_2 + ... + 𝑤_10𝐯_10 + ... + 𝑤_30𝐯_30 + ... + 𝑤_100𝐯_100

Page 94: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Universal bases

• Universal bases..

• End up looking a lot like discrete cosine transforms!!!!

• DCTs are the best “universal” bases

– If you don’t know what your data are, use the DCT

11-755/18-797 95

Page 95: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Relation of Eigen decomposition to SVD

• Eigen decomposition of the correlation matrix gives you left singular vectors of data matrix

11-755/18-797 96

𝐗𝐗𝑇 = 𝐑 = 𝐄𝐃𝐄𝑇

𝐗 = 𝐔𝐒𝐕𝑇

𝐗𝐗^T = (𝐔𝐒𝐕^T)(𝐕𝐒𝐔^T) = 𝐔𝐒^2𝐔^T

Eigen Decomposition of the Correlation Matrix

SVD of the Data Matrix

Comparing:  𝐄 = 𝐔,   𝐃 = 𝐒^2
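A quick sketch that checks this correspondence numerically (illustrative; X is any data matrix):

[U, S, ~] = svd(X, 'econ');     % left singular vectors and singular values of X
[E, D]    = eig(X * X');        % eigenvectors and eigenvalues of the correlation matrix
% Up to ordering and sign, the columns of E match those of U, and diag(D) matches diag(S).^2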

Page 96: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Dimensionality Reduction

• 𝐑 = 𝐄𝐃𝐄^T

– The columns of 𝐄 are our “Eigen” bases

• We can express any vector 𝑋 as a combination of these bases

𝑋 = 𝑤_1^𝑋 𝐸_1 + 𝑤_2^𝑋 𝐸_2 + ⋯ + 𝑤_𝐷^𝑋 𝐸_𝐷

• Using only the “top” 𝐾 bases

– Corresponding to the top 𝐾 Eigen values

𝑋 ≈ 𝑤_1^𝑋 𝐸_1 + 𝑤_2^𝑋 𝐸_2 + ⋯ + 𝑤_𝐾^𝑋 𝐸_𝐾

11-755/18-797 97

Page 97: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Dimensionality Reduction

• Using only the “top” 𝐾 bases

– Corresponding to the top 𝐾 Eigen values

𝑋 ≈ 𝑤_1^𝑋 𝐸_1 + 𝑤_2^𝑋 𝐸_2 + ⋯ + 𝑤_𝐾^𝑋 𝐸_𝐾

• In vector form: 𝑋 ≈ 𝑬_{1:𝐾} 𝐰_𝐾^𝑋

𝐰_𝐾^𝑋 = 𝑃𝑖𝑛𝑣(𝑬_{1:𝐾}) 𝑋 = 𝑬_{1:𝐾}^T 𝑋

𝐖_𝐾 = 𝑬_{1:𝐾}^T 𝐗

• If “𝑬” is agreed upon, knowing 𝐖_𝐾 is sufficient to reconstruct 𝐗

– Store only 𝐾 numbers per vector instead of 𝐷 without losing too much information

– Dimensionality Reduction

11-755/18-797 98

Page 98: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Let's give it a name

𝐖_𝐾 = 𝑬_{1:𝐾}^T 𝐗

• Retaining only the top K weights for every data vector

– Computed by multiplying the data matrix by the transpose

of the top K Eigen vectors of R

• This is called the Karhunen Loeve Transform

– Not PCA!

11-755/18-797 99

𝐑 = 𝐄𝐃𝐄𝑇

𝐄 are the “Eigen Bases”
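A two-line sketch of the transform (illustrative; E1K is assumed to hold the top-K eigenvectors of R in its columns):

WK   = E1K' * X;     % Karhunen-Loeve transform: K numbers per data vector
Xrec = E1K * WK;     % reconstruction of the data from the K weights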

Page 99: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

An audio example

• The spectrogram has 974 vectors of dimension 1025

• The covariance matrix is size 1025 x 1025

• There are 1025 eigenvectors

11-755/18-797 100

Page 100: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Eigenvalues and Eigenvectors

• Left panel: Matrix with 1025 eigen vectors

• Right panel: Corresponding eigen values

– Most Eigen values are close to zero

• The corresponding eigenvectors are “unimportant”

11-755/18-797 101

Page 101: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Eigenvalues and Eigenvectors

• The vectors in the spectrogram are linear combinations of all 1025 Eigen vectors

• The Eigen vectors with low Eigen values contribute very little

– The average value of ai is proportional to the square root of the Eigenvalue

– Ignoring these will not affect the composition of the spectrogram

11-755/18-797 102

Vec = a1 *eigenvec1 + a2 * eigenvec2 + a3 * eigenvec3 …

Page 102: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

An audio example

• The same spectrogram projected down to the 25 eigen vectors with the highest eigen values

– Only the 25-dimensional weights are shown

• The weights with which the 25 eigen vectors must be added to compose a least squares approximation to the spectrogram

11-755/18-797 103

𝐕_reduced = [𝑉_1 .. 𝑉_25]

𝐌_lowdim = 𝑃𝑖𝑛𝑣(𝐕_reduced) 𝐌

Page 103: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

An audio example

• The same spectrogram constructed from only the 25 Eigen vectors with the highest Eigen values

– Looks similar

• With 100 Eigenvectors, it would be indistinguishable from the original

– Sounds pretty close

– But now sufficient to store 25 numbers per vector (instead of 1024)

11-755/18-797 104

𝐌_reconstructed = 𝐕_reduced 𝐌_lowdim

Page 104: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

With only 5 eigenvectors

• The same spectrogram constructed from only the 5 Eigen vectors with the highest Eigen values

– Highly recognizable

11-755/18-797 105

Page 105: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

SVD instead of Eigen

• Do we need to compute a 10000 x 10000 correlation matrix and then perform Eigen analysis?

– Will take a very long time on your laptop

• SVD

– Only need to perform “Thin” SVD. Very fast

• U = 10000 x 300

– The columns of U are the eigen faces!

– The Us corresponding to the “zero” eigen values are not computed

• S = 300 x 300

• V = 300 x 300

11-755/18-797 106

M = Data Matrix

10000x300

U=10000x300 S=300x300 V=300x300

=

𝐔 = [eigenface1  eigenface2  ⋯]

Page 106: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Using SVD to compute Eigenbases

[U, S, V] = svd(X)

• U will have the Eigenvectors

• Thin SVD for 100 bases:

[U,S,V] = svds(X, 100)

• Much more efficient

11-755/18-797 107
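A short sketch of this route (illustrative; X is the D x N data matrix and K the number of bases to keep):

[U, S, V] = svds(X, K);    % thin SVD: only the K leading components are computed
eigenfaces = U;            % the columns of U are the eigenfaces
weights = S * V';          % equivalently U' * X: the per-face weights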

Page 107: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Eigen Decomposition of data

• Nothing magical about faces or sound – can be applied to any data.

– Eigen analysis is one of the key components of data compression and representation

– Represent N-dimensional data by the weights of the K leading Eigen vectors

• Reduces effective dimension of the data from N to K

• But requires knowledge of Eigen vectors

11-755/18-797 108

Page 108: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Eigen decomposition of what?

• Eigen decomposition of the correlation matrix

• Is there an alternate way?

11-755/18-797 109

Page 109: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Linear vs. Affine

• The model we saw

– Approximate every face f as f = wf,1 V1+ wf,2 V2 +... + wf,k Vk

– Linear combination of bases

• If you add a constant

f = wf,1 V1+ wf,2 V2 +... + wf,k Vk + m

– Affine combination of bases

11-755/18-797 110

Page 110: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Estimation with the constant

11-755/18-797 111

Page 111: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Estimation with the constant

11-755/18-797 112

Page 112: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Estimation the remaining

11-755/18-797 113

Page 113: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Estimating the Affine model

11-755/18-797 114

Page 114: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Estimating the Affine model

11-755/18-797 115

𝐂𝐕 = 𝐕𝐋

Page 115: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Properties of the affine model

• The bases V1, V2, …, Vk are all orthogonal to one another

– Eigen vectors of the symmetric Covariance matrix

• But they are not orthogonal to m

– Because m is an unscaled constant

• Essentially a sequential estimate

– Jointly estimating V1, V2, …, Vk and m by optimizing the following will give you indeterminate solutions

11-755/18-797 116

Σ_𝑓 ‖ 𝑓 − Σ_𝑖 𝑤_{𝑓,𝑖}𝑉_𝑖 − 𝑚 ‖^2 + trace(𝐋(𝐕^T𝐕 − 𝐈))

Page 116: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Linear vs. Affine

• The model we saw

– Approximate every face f as f = wf,1 V1+ wf,2 V2 +... + wf,k Vk

– The Karhunen Loeve Transform

– Retains maximum Energy for any order k

• If you add a constant

f = wf,1 V1+ wf,2 V2 +... + wf,k Vk + m

– Principal Component Analysis

– Retains maximum Variance for any order k

11-755/18-797 117
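A small sketch of the practical difference (illustrative; X is D x N; the only change for PCA is removing the mean before forming the matrix to be eigendecomposed):

m  = mean(X, 2);                        % D x 1 mean vector
Xc = X - repmat(m, 1, size(X, 2));      % centered data
C  = Xc * Xc';                          % covariance (up to a 1/N factor) -> PCA bases
R  = X * X';                            % correlation -> Karhunen-Loeve bases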

Page 117: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

How do they relate

• Relationship between correlation matrix and covariance matrix

R = C + mmT

• Karhunen Loeve bases are Eigen vectors of R

• PCA bases are Eigen vectors of C

• How do they relate

– Not easy to say..

11-755/18-797 118

Page 118: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

The Eigen vectors

• The Eigen vectors of C are the major axes of the ellipsoid Cv, where v are the vectors on the unit sphere

11-755/18-797 119

Page 119: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

The Eigen vectors

• The Eigen vectors of R are the major axes of the ellipsoid Cv + mmTv

• Note that mmT has rank 1 and mmTv is a line

11-755/18-797 120

mmT

Page 120: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

The Eigen vectors

• The principal Eigenvector of R lies between the principal Eigen vector of C and m

• Similarly the principal Eigen value

• Similar logic is not easily extendable to the other Eigenvectors, however

11-755/18-797 121

mmT

𝐞_𝑅 = (1 − 𝛼) 𝐞_𝐶 + 𝛼 𝑚/‖𝑚‖ ,    0 ≤ 𝛼 ≤ 1

𝜆_𝑅 = (1 − 𝛼) 𝜆_𝐶 + 𝛼 ‖𝑚‖^2

Page 121: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Eigenvectors

• Turns out: Eigenvectors of the correlation matrix represent the major and minor axes of an ellipse centered at the origin which encloses the data most compactly

• The SVD of data matrix X uncovers these vectors

• KLT

11-755/18-797 122

Pixel 1

Pixel 2

Page 122: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Eigenvectors

• Turns out: Eigenvectors of the covariance represent the major and minor axes of an ellipse centered at the mean which encloses the data most compactly

• PCA uncovers these vectors

• In practice, “Eigen faces” refers to PCA faces, and not KLT faces

11-755/18-797 123

Pixel 1

Pixel 2

Page 123: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

What about sound?

• Finding Eigen bases for speech signals:

• Look like DFT/DCT

• Or wavelets

• DFTs are pretty good most of the time

11-755/18-797 124

Page 124: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Eigen Analysis

• Can often find surprising features in your data

• Trends, relationships, more

• Commonly used in recommender systems

• An interesting example..

11-755/18-797 125

Page 125: Machine Learning for Signal Processingmlsp.cs.cmu.edu/courses/fall2016/slides/Lecture7.E... · Class 7. 22 Sep 2016 Instructor: Bhiksha Raj 11-755/18-797 1 . Linear Algebra Reminders:

Eigen Analysis

• Cheng Liu’s research on pipes..

• SVD automatically separates useful and uninformative features

11-755/18-797 126