
CSCE 666 Pattern Analysis | Ricardo Gutierrez-Osuna | CSE@TAMU

L10: Linear discriminant analysis

• Linear discriminant analysis, two classes

• Linear discriminant analysis, C classes

• LDA vs. PCA

• Limitations of LDA

• Variants of LDA

• Other dimensionality reduction methods


Linear discriminant analysis, two classes

• Objective

– LDA seeks to reduce dimensionality while preserving as much of the class discriminatory information as possible

– Assume we have a set of $D$-dimensional samples $\{x^{(1)}, x^{(2)}, \ldots, x^{(N)}\}$, $N_1$ of which belong to class $\omega_1$ and $N_2$ to class $\omega_2$

– We seek to obtain a scalar $y$ by projecting the samples $x$ onto a line

$y = w^T x$

– Of all the possible lines, we would like to select the one that maximizes the separability of the scalars

[Figure: samples from two classes in the (x1, x2) plane, projected onto two candidate lines]


– In order to find a good projection vector, we need to define a measure of separation

– The mean vector of each class in $x$-space and $y$-space is

$\mu_i = \frac{1}{N_i} \sum_{x \in \omega_i} x \qquad \text{and} \qquad \tilde{\mu}_i = \frac{1}{N_i} \sum_{y \in \omega_i} y = \frac{1}{N_i} \sum_{x \in \omega_i} w^T x = w^T \mu_i$

– We could then choose the distance between the projected means as our objective function

$J(w) = |\tilde{\mu}_1 - \tilde{\mu}_2| = |w^T (\mu_1 - \mu_2)|$

• However, the distance between projected means is not a good measure, since it does not account for the standard deviation within classes

[Figure: two projection axes for the same two-class data (ω1, ω2) in the (x1, x2) plane; one axis has a larger distance between the projected means, while the other yields better class separability]


• Fisher's solution

– Fisher suggested maximizing the difference between the means, normalized by a measure of the within-class scatter

– For each class we define the scatter, an equivalent of the variance, as

$\tilde{s}_i^2 = \sum_{y \in \omega_i} (y - \tilde{\mu}_i)^2$

• where the quantity $\tilde{s}_1^2 + \tilde{s}_2^2$ is called the within-class scatter of the projected examples

– The Fisher linear discriminant is defined as the linear function $w^T x$ that maximizes the criterion function

$J(w) = \frac{(\tilde{\mu}_1 - \tilde{\mu}_2)^2}{\tilde{s}_1^2 + \tilde{s}_2^2}$

– Therefore, we are looking for a projection where examples from the same class are projected very close to each other and, at the same time, the projected means are as far apart as possible

[Figure: the two classes (ω1, ω2) in the (x1, x2) plane and the projection sought by the Fisher criterion]


• To find the optimum $w^*$, we must express $J(w)$ as a function of $w$

– First, we define a measure of the scatter in feature space $x$

$S_i = \sum_{x \in \omega_i} (x - \mu_i)(x - \mu_i)^T$

$S_1 + S_2 = S_W$

• where $S_W$ is called the within-class scatter matrix

– The scatter of the projection $y$ can then be expressed as a function of the scatter matrix in feature space $x$

$\tilde{s}_i^2 = \sum_{y \in \omega_i} (y - \tilde{\mu}_i)^2 = \sum_{x \in \omega_i} (w^T x - w^T \mu_i)^2 = \sum_{x \in \omega_i} w^T (x - \mu_i)(x - \mu_i)^T w = w^T S_i w$

$\tilde{s}_1^2 + \tilde{s}_2^2 = w^T S_W w$

– Similarly, the difference between the projected means can be expressed in terms of the means in the original feature space

$(\tilde{\mu}_1 - \tilde{\mu}_2)^2 = (w^T \mu_1 - w^T \mu_2)^2 = w^T \underbrace{(\mu_1 - \mu_2)(\mu_1 - \mu_2)^T}_{S_B} w = w^T S_B w$

• The matrix $S_B$ is called the between-class scatter. Note that, since $S_B$ is the outer product of two vectors, its rank is at most one

– We can finally express the Fisher criterion in terms of $S_W$ and $S_B$ as

$J(w) = \frac{w^T S_B w}{w^T S_W w}$


– To find the maximum of $J(w)$, we differentiate and equate to zero

$\frac{d}{dw} J(w) = \frac{d}{dw} \left[ \frac{w^T S_B w}{w^T S_W w} \right] = 0 \;\Rightarrow\; (w^T S_W w) \frac{d(w^T S_B w)}{dw} - (w^T S_B w) \frac{d(w^T S_W w)}{dw} = 0 \;\Rightarrow\; (w^T S_W w) \, 2 S_B w - (w^T S_B w) \, 2 S_W w = 0$

– Dividing by $w^T S_W w$

$\frac{w^T S_W w}{w^T S_W w} S_B w - \frac{w^T S_B w}{w^T S_W w} S_W w = 0 \;\Rightarrow\; S_B w - J \, S_W w = 0 \;\Rightarrow\; S_W^{-1} S_B w - J \, w = 0$

– Solving the generalized eigenvalue problem ($S_W^{-1} S_B w = J w$) yields

$w^* = \arg\max_w \frac{w^T S_B w}{w^T S_W w} = S_W^{-1} (\mu_1 - \mu_2)$

– This is known as Fisher's linear discriminant (1936), although it is not a discriminant but rather a specific choice of direction for the projection of the data down to one dimension
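As a minimal sketch (mine, not from the slides), the closed-form solution above amounts to a few lines of NumPy; the returned direction is only defined up to scale and sign.

```python
import numpy as np

def fisher_lda_direction(X1, X2):
    """Fisher's linear discriminant direction for two classes.

    X1, X2 : arrays of shape (N1, D) and (N2, D) holding the samples of each class.
    Returns w* = S_W^{-1} (mu1 - mu2), unnormalized.
    """
    mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
    S_W = (X1 - mu1).T @ (X1 - mu1) + (X2 - mu2).T @ (X2 - mu2)  # within-class scatter
    # Solve S_W w = (mu1 - mu2) rather than forming the inverse explicitly
    return np.linalg.solve(S_W, mu1 - mu2)

# Usage: w = fisher_lda_direction(X1, X2); y = X @ w projects samples onto the line
```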


Example

• Compute the LDA projection for the following 2D dataset

$X_1 = \{(4,1), (2,4), (2,3), (3,6), (4,4)\}$

$X_2 = \{(9,10), (6,8), (9,5), (8,7), (10,8)\}$

• SOLUTION (by hand)

– The class statistics are (here the class scatter matrices are normalized by $N_i$, i.e. they are the class covariances; this scaling does not change the direction of the solution)

$S_1 = \begin{bmatrix} 0.8 & -0.4 \\ -0.4 & 2.64 \end{bmatrix} \qquad S_2 = \begin{bmatrix} 1.84 & -0.04 \\ -0.04 & 2.64 \end{bmatrix}$

$\mu_1 = [3.0 \;\; 3.6]^T; \qquad \mu_2 = [8.4 \;\; 7.6]^T$

– The within- and between-class scatter are

$S_B = \begin{bmatrix} 29.16 & 21.6 \\ 21.6 & 16.0 \end{bmatrix} \qquad S_W = \begin{bmatrix} 2.64 & -0.44 \\ -0.44 & 5.28 \end{bmatrix}$

– The LDA projection is then obtained as the solution of the generalized eigenvalue problem

$S_W^{-1} S_B v = \lambda v \;\Rightarrow\; |S_W^{-1} S_B - \lambda I| = 0 \;\Rightarrow\; \begin{vmatrix} 11.89 - \lambda & 8.81 \\ 5.08 & 3.76 - \lambda \end{vmatrix} = 0 \;\Rightarrow\; \lambda = 15.65$

$\begin{bmatrix} 11.89 & 8.81 \\ 5.08 & 3.76 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = 15.65 \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} \;\Rightarrow\; \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} 0.91 \\ 0.39 \end{bmatrix}$

– Or directly by

$w^* = S_W^{-1} (\mu_1 - \mu_2) = [-0.91 \;\; -0.39]^T$

(these values are checked numerically in the sketch after the figure)

[Figure: the two classes and the LDA projection direction w_LDA in the (x1, x2) plane, both axes ranging from 0 to 10]
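A quick NumPy/SciPy check of the hand computation above (my own sketch, not part of the slides). Note that the class statistics on this slide are normalized by N_i, which is why np.cov with bias=True reproduces them; the normalization only rescales the eigenvalue, not the direction.

```python
import numpy as np
from scipy.linalg import eigh

X1 = np.array([(4, 1), (2, 4), (2, 3), (3, 6), (4, 4)], dtype=float)
X2 = np.array([(9, 10), (6, 8), (9, 5), (8, 7), (10, 8)], dtype=float)

mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)      # [3.0, 3.6] and [8.4, 7.6]

S1 = np.cov(X1, rowvar=False, bias=True)         # matches the slide's S1
S2 = np.cov(X2, rowvar=False, bias=True)         # matches the slide's S2
S_W = S1 + S2
S_B = np.outer(mu1 - mu2, mu1 - mu2)

# Generalized eigenvalue problem  S_B v = lambda S_W v  (eigenvalues ascending)
lam, V = eigh(S_B, S_W)
v = V[:, -1] / np.linalg.norm(V[:, -1])          # leading eigenvector, unit length

# Closed-form solution, defined only up to scale and sign
w = np.linalg.solve(S_W, mu1 - mu2)
w /= np.linalg.norm(w)

print(lam[-1])     # ~15.65
print(v, w)        # both ~ +/-[0.91, 0.39]
```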


LDA, C classes

• Fisher's LDA generalizes gracefully for C-class problems

– Instead of one projection $y$, we will now seek $(C-1)$ projections $[y_1, y_2, \ldots, y_{C-1}]$ by means of $(C-1)$ projection vectors $w_i$, arranged by columns into a projection matrix $W = [w_1 | w_2 | \ldots | w_{C-1}]$:

$y_i = w_i^T x \;\Rightarrow\; y = W^T x$

• Derivation

– The within-class scatter generalizes as

$S_W = \sum_{i=1}^{C} S_i$

• where $S_i = \sum_{x \in \omega_i} (x - \mu_i)(x - \mu_i)^T$ and $\mu_i = \frac{1}{N_i} \sum_{x \in \omega_i} x$

– And the between-class scatter becomes

$S_B = \sum_{i=1}^{C} N_i (\mu_i - \mu)(\mu_i - \mu)^T$

• where $\mu = \frac{1}{N} \sum_{\forall x} x = \frac{1}{N} \sum_{i=1}^{C} N_i \mu_i$

– Matrix $S_T = S_B + S_W$ is called the total scatter (it is constructed numerically in the sketch after the figure)

[Figure: three classes (ω1, ω2, ω3) in the (x1, x2) plane, with their within-class scatters S_W1, S_W2, S_W3 and between-class scatters S_B1, S_B2, S_B3]
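A minimal NumPy sketch (mine, not from the slides) of how S_W, S_B and S_T would be assembled from a data matrix X and integer labels y; it also checks the identity S_T = S_B + S_W stated above.

```python
import numpy as np

def scatter_matrices(X, y):
    """Within-class, between-class and total scatter for a labeled dataset.

    X : array of shape (N, D); y : integer class labels of shape (N,).
    """
    mu = X.mean(axis=0)                               # overall mean
    D = X.shape[1]
    S_W = np.zeros((D, D))
    S_B = np.zeros((D, D))
    for c in np.unique(y):
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        S_W += (Xc - mu_c).T @ (Xc - mu_c)            # class scatter S_i
        S_B += len(Xc) * np.outer(mu_c - mu, mu_c - mu)
    S_T = (X - mu).T @ (X - mu)                       # total scatter
    assert np.allclose(S_T, S_B + S_W)                # S_T = S_B + S_W
    return S_W, S_B, S_T
```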


– Similarly, we define the mean vectors and scatter matrices for the projected samples as

$\tilde{\mu}_i = \frac{1}{N_i} \sum_{y \in \omega_i} y \qquad \tilde{S}_W = \sum_{i=1}^{C} \sum_{y \in \omega_i} (y - \tilde{\mu}_i)(y - \tilde{\mu}_i)^T$

$\tilde{\mu} = \frac{1}{N} \sum_{\forall y} y \qquad \tilde{S}_B = \sum_{i=1}^{C} N_i (\tilde{\mu}_i - \tilde{\mu})(\tilde{\mu}_i - \tilde{\mu})^T$

– From our derivation for the two-class problem, we can write

$\tilde{S}_W = W^T S_W W \qquad \tilde{S}_B = W^T S_B W$

– Recall that we are looking for a projection that maximizes the ratio of between-class to within-class scatter. Since the projection is no longer a scalar (it has $C-1$ dimensions), we use the determinant of the scatter matrices to obtain a scalar objective function

$J(W) = \frac{|\tilde{S}_B|}{|\tilde{S}_W|} = \frac{|W^T S_B W|}{|W^T S_W W|}$

– And we will seek the projection matrix $W^*$ that maximizes this ratio


– It can be shown that the optimal projection matrix $W^*$ is the one whose columns are the eigenvectors corresponding to the largest eigenvalues of the following generalized eigenvalue problem

$W^* = [w_1^* | w_2^* | \ldots | w_{C-1}^*] = \arg\max_W \frac{|W^T S_B W|}{|W^T S_W W|} \;\Rightarrow\; (S_B - \lambda_i S_W) w_i^* = 0$

• NOTES

– $S_B$ is the sum of $C$ matrices of rank $\le 1$, and the mean vectors are constrained by $\frac{1}{C} \sum_{i=1}^{C} \mu_i = \mu$

• Therefore, $S_B$ will be of rank $(C-1)$ or less

• This means that only $(C-1)$ of the eigenvalues $\lambda_i$ will be non-zero

– The projections with maximum class separability information are the eigenvectors corresponding to the largest eigenvalues of $S_W^{-1} S_B$

– LDA can be derived as the Maximum Likelihood method for the case of normal class-conditional densities with equal covariance matrices
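Putting the last three slides together, a hedged sketch of the C-class solution (mine, not from the slides). It reuses the scatter_matrices helper sketched after the previous figure, assumes S_W is non-singular (more samples than dimensions), and keeps at most C - 1 generalized eigenvectors.

```python
import numpy as np
from scipy.linalg import eigh

def lda_projection(X, y, n_components=None):
    """Return the D x m LDA projection matrix W, with m <= C - 1."""
    S_W, S_B, _ = scatter_matrices(X, y)              # helper sketched above
    C = len(np.unique(y))
    m = C - 1 if n_components is None else min(n_components, C - 1)
    # Generalized eigenproblem  S_B w = lambda S_W w  (eigenvalues ascending)
    evals, evecs = eigh(S_B, S_W)
    return evecs[:, ::-1][:, :m]                      # top-m eigenvectors as columns

# Usage: Y = X @ lda_projection(X, y) projects the data onto the LDA axes
```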


LDA vs. PCA

• This example illustrates the performance of PCA and LDA on an odor recognition problem

– Five types of coffee beans were presented to an array of gas sensors

– For each coffee type, 45 "sniffs" were performed, and the response of the gas sensor array was processed to obtain a 60-dimensional feature vector

• Results

– From the 3D scatter plots it is clear that LDA outperforms PCA in terms of class discrimination

– This is one example where the discriminatory information is not aligned with the direction of maximum variance (a scikit-learn sketch of this kind of comparison follows the figure)

[Figure: (top) normalized gas-sensor responses for the five coffee types (Sulawesi, Kenya, Arabian, Sumatra, Colombia); (bottom) 3D scatter plots of the samples projected onto the first three PCA axes and onto the first three LDA axes, labeled by coffee type]
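The coffee data themselves are not included here, but this kind of comparison is easy to reproduce with scikit-learn. A sketch under that assumption, using synthetic data as a stand-in for the 60-dimensional sensor features (so the resulting plots will not match the figure above):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Stand-in for the coffee data: 5 classes, 45 samples each, 60-dimensional features
X, y = make_classification(n_samples=5 * 45, n_features=60, n_informative=10,
                           n_classes=5, n_clusters_per_class=1, random_state=0)

# Unsupervised: directions of maximum variance (labels are ignored)
X_pca = PCA(n_components=3).fit_transform(X)

# Supervised: at most C - 1 = 4 discriminant axes for 5 classes; keep 3 for plotting
X_lda = LinearDiscriminantAnalysis(n_components=3).fit_transform(X, y)

# X_pca and X_lda can then be shown as 3D scatter plots, colored by class,
# to compare how well each projection separates the classes
```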


Limitations of LDA

• LDA produces at most $C-1$ feature projections

– If the classification error estimates establish that more features are needed, some other method must be employed to provide those additional features

• LDA is a parametric method (it assumes unimodal Gaussian likelihoods)

– If the distributions are significantly non-Gaussian, the LDA projections may not preserve the complex structure in the data needed for classification

• LDA will also fail if the discriminatory information is not in the mean but in the variance of the data (this case is illustrated in the sketch after the figure)

[Figure: two two-class examples in the (x1, x2) plane with equal class means (μ1 = μ2), showing the LDA and PCA projection directions]
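A small sketch (mine) of the last limitation: two classes with identical means but very different spread. Since mu1 is approximately equal to mu2, the between-class scatter S_B is essentially zero, so the Fisher criterion J(w) is close to zero for every w and the LDA direction carries no discriminatory information, even though the classes are separable by their variance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Same mean, very different spread: the class information is in the variance
X1 = rng.normal(0.0, 0.2, size=(200, 2))    # tight cluster around the origin
X2 = rng.normal(0.0, 3.0, size=(200, 2))    # wide cloud around the same mean

mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
S_B = np.outer(mu1 - mu2, mu1 - mu2)        # ~0 because the class means coincide

print(np.linalg.norm(S_B))                  # near zero: J(w) ~ 0 for every w
```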


Variants of LDA

• Non-parametric LDA (Fukunaga)

– NPLDA relaxes the unimodal Gaussian assumption by computing $S_B$ using local information and the kNN rule. As a result of this

• The matrix $S_B$ is full-rank, allowing us to extract more than $(C-1)$ features

• The projections are able to preserve the structure of the data more closely

• Orthonormal LDA (Okada and Tomita)

– OLDA computes projections that maximize the Fisher criterion and, at the same time, are pairwise orthonormal

• The method used in OLDA combines the eigenvalue solution of $S_W^{-1} S_B$ and the Gram-Schmidt orthonormalization procedure

• OLDA sequentially finds axes that maximize the Fisher criterion in the subspace orthogonal to all features already extracted

• OLDA is also capable of finding more than $(C-1)$ features

• Generalized LDA (Lowe)

– GLDA generalizes the Fisher criterion by incorporating a cost function similar to the one we used to compute the Bayes risk

• As a result, LDA can produce projections that are biased by the cost function, i.e., classes with a higher cost $C_{ij}$ will be placed further apart in the low-dimensional projection

• Multilayer perceptrons (Webb and Lowe)

– It has been shown that the hidden layers of multilayer perceptrons perform non-linear discriminant analysis by maximizing $\mathrm{Tr}[S_B S_T^{\dagger}]$, where the scatter matrices are measured at the output of the last hidden layer


Other dimensionality reduction methods

• Exploratory Projection Pursuit (Friedman and Tukey)

– EPP seeks an M-dimensional (typically M = 2, 3) linear projection of the data that maximizes a measure of "interestingness"

– Interestingness is measured as departure from multivariate normality

• This measure is not the variance and is commonly scale-free. In most implementations it is also affine invariant, so it does not depend on correlations between features [Ripley, 1996]

– In other words, EPP seeks projections that separate clusters as much as possible while keeping these clusters compact, a criterion similar to Fisher's, but EPP does NOT use class labels

– Once an interesting projection is found, it is important to remove the structure it reveals so that other interesting views can be found more easily

[Figure: two projections of the same data in the (x1, x2) plane, labeled "Interesting" and "Uninteresting"]


• Sammon's non-linear mapping (Sammon)

– This method seeks a mapping onto an M-dimensional space that preserves the inter-point distances of the original N-dimensional space

– This is accomplished by minimizing the following objective function (a gradient-based sketch follows the figure)

$E(d, d') = \sum_{i \neq j} \frac{\left[ d(P_i, P_j) - d(P_i', P_j') \right]^2}{d(P_i, P_j)}$

• The original method did not obtain an explicit mapping, but only a lookup table for the elements in the training set

– Newer implementations based on neural networks do provide an explicit mapping for test data and also consider cost functions (e.g., Neuroscale)

• Sammon's mapping is closely related to Multidimensional Scaling (MDS), a family of multivariate statistical methods commonly used in the social sciences

– We will review MDS techniques when we cover manifold learning

[Figure: points P_i, P_j in the original (x1, x2, x3) space and their images P'_i, P'_j in the projected (x1, x2) plane, with d(P_i, P_j) = d(P'_i, P'_j) ∀ i, j]
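As a sketch (mine, not from the slides), the objective above can be minimized directly over the coordinates of the projected points with a general-purpose optimizer. This implements the slide's form of the stress (without the usual normalizing constant) for a mapping to M dimensions, and it assumes all pairwise distances in X are nonzero. Sammon's original paper used a specialized pseudo-Newton update; a generic optimizer is used here only to keep the sketch short.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist, squareform

def sammon_stress(flat_Y, D_high, M):
    """E(d, d') from the slide, for projected coordinates packed into flat_Y."""
    Y = flat_Y.reshape(-1, M)
    D_low = squareform(pdist(Y))                 # distances d(P'_i, P'_j)
    mask = ~np.eye(len(Y), dtype=bool)           # exclude the i == j terms
    return np.sum((D_high[mask] - D_low[mask]) ** 2 / D_high[mask])

def sammon_map(X, M=2, random_state=0):
    """Map X (N x D) to M dimensions by minimizing the Sammon-style stress."""
    rng = np.random.default_rng(random_state)
    D_high = squareform(pdist(X))                # original distances d(P_i, P_j)
    Y0 = rng.normal(size=(len(X), M))            # random initial configuration
    res = minimize(sammon_stress, Y0.ravel(), args=(D_high, M), method="L-BFGS-B")
    return res.x.reshape(-1, M)
```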