
Page 1: Computational Intelligence:  Methods and Applications

Computational Intelligence: Methods and Applications

Lecture 6 Principal Component Analysis.

Włodzisław Duch

Dept. of Informatics, UMK

Google: W Duch

Page 2: Computational Intelligence:  Methods and Applications

Linear transformations – example

2D vectors X uniformly distributed in a unit circle with mean (1,1);

Y = AX, A = 2x2 matrix

The shape is elongated, rotated and the mean is shifted.

$$\begin{pmatrix} Y_1 \\ Y_2 \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} X_1 \\ X_2 \end{pmatrix}$$
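A minimal NumPy sketch of this example (the sample size, random seed and variable names are illustrative assumptions, not part of the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

# Rejection-sample points uniformly inside the unit circle, then shift to mean (1,1).
pts = rng.uniform(-1, 1, size=(10000, 2))
pts = pts[np.sum(pts**2, axis=1) <= 1.0] + np.array([1.0, 1.0])

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
Y = pts @ A.T             # Y = A X for each vector (rows hold the vectors here)

print(pts.mean(axis=0))   # approximately (1, 1)
print(Y.mean(axis=0))     # approximately A (1,1)^T = (3, 2): the mean is shifted
print(np.cov(Y.T))        # elongated, rotated distribution: non-diagonal covariance
```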

Page 3: Computational Intelligence:  Methods and Applications

Invariant distances

Euclidean distance is not invariant to general linear transformations

This is invariant only for orthogonal matrices ($\mathbf{A}^T\mathbf{A} = \mathbf{I}$), which perform rigid rotations without stretching or shrinking distances.

Idea: standardize the data in some way to create invariant distances.

$$\mathbf{Y} = \mathbf{A}\mathbf{X}$$

$$\|\mathbf{Y}^{(1)} - \mathbf{Y}^{(2)}\|^2 = \left(\mathbf{Y}^{(1)} - \mathbf{Y}^{(2)}\right)^T\left(\mathbf{Y}^{(1)} - \mathbf{Y}^{(2)}\right) = \left(\mathbf{X}^{(1)} - \mathbf{X}^{(2)}\right)^T\mathbf{A}^T\mathbf{A}\left(\mathbf{X}^{(1)} - \mathbf{X}^{(2)}\right)$$
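A small numeric check of this point (NumPy assumed; the two sample vectors and the rotation angle are arbitrary choices): the Euclidean distance changes under a general A but is preserved by an orthogonal rotation.

```python
import numpy as np

x1, x2 = np.array([1.0, 2.0]), np.array([3.0, -1.0])

A = np.array([[2.0, 1.0], [1.0, 1.0]])        # general linear map
t = np.pi / 6
R = np.array([[np.cos(t), -np.sin(t)],        # rotation: orthogonal, R^T R = I
              [np.sin(t),  np.cos(t)]])

d  = np.linalg.norm(x1 - x2)
dA = np.linalg.norm(A @ x1 - A @ x2)          # differs from d
dR = np.linalg.norm(R @ x1 - R @ x2)          # equals d (up to rounding)
print(d, dA, dR)
```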

Page 4: Computational Intelligence:  Methods and Applications

Data standardization

For each vector $\mathbf{X}^{(j)T} = (X_1^{(j)}, \ldots, X_d^{(j)})$, $j = 1 \ldots n$,

calculate mean and std: n – number of vectors, d – their dimension

$$\bar{X}_i = \frac{1}{n}\sum_{j=1}^{n} X_i^{(j)}; \qquad \bar{\mathbf{X}} = \frac{1}{n}\sum_{j=1}^{n} \mathbf{X}^{(j)}$$

Vector of mean feature values; averages over rows of the data matrix.

$$\mathbf{X} = \begin{pmatrix} X_1^{(1)} & X_1^{(2)} & \cdots & X_1^{(n)} \\ X_2^{(1)} & X_2^{(2)} & \cdots & X_2^{(n)} \\ \vdots & \vdots & & \vdots \\ X_d^{(1)} & X_d^{(2)} & \cdots & X_d^{(n)} \end{pmatrix}$$

Page 5: Computational Intelligence:  Methods and Applications

Standard deviation

Calculate standard deviation:

Transform X => Z, standardized data vectors

$$\bar{X}_i = \frac{1}{n}\sum_{j=1}^{n} X_i^{(j)}; \qquad \sigma_i^2 = \frac{1}{n-1}\sum_{j=1}^{n}\left(X_i^{(j)} - \bar{X}_i\right)^2$$

Vector of mean feature values.

Variance = square of the standard deviation (std), a normalized sum of squared deviations from the mean value.

$$Z_i^{(j)} = \left(X_i^{(j)} - \bar{X}_i\right)/\sigma_i$$

Page 6: Computational Intelligence:  Methods and Applications

Std data

Std data: zero mean and unit variance.

Standardize the data after making the data transformation.

Effect: data is invariant to scaling only (diagonal transformation).

Distances are invariant, data distribution is the same.

How to make data invariant to any linear transformations?

$$\bar{Z}_i = \frac{1}{n}\sum_{j=1}^{n} Z_i^{(j)} = \frac{1}{n}\sum_{j=1}^{n}\frac{X_i^{(j)} - \bar{X}_i}{\sigma_i} = 0$$

$$\sigma^2_{Z_i} = \frac{1}{n-1}\sum_{j=1}^{n}\left(Z_i^{(j)}\right)^2 = \frac{1}{n-1}\sum_{j=1}^{n}\frac{\left(X_i^{(j)} - \bar{X}_i\right)^2}{\sigma_i^2} = 1$$
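A sketch of the standardization step (NumPy assumed; the data are synthetic and the d x n layout follows the convention of these slides, columns = vectors):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(loc=5.0, scale=3.0, size=(4, 200))   # d=4 features, n=200 vectors

mean = X.mean(axis=1, keepdims=True)                # averages over rows
std  = X.std(axis=1, ddof=1, keepdims=True)         # 1/(n-1) normalization

Z = (X - mean) / std                                # Z_i^(j) = (X_i^(j) - mean_i) / sigma_i
print(Z.mean(axis=1))                               # ~0 for every feature
print(Z.var(axis=1, ddof=1))                        # ~1 for every feature
```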

Page 7: Computational Intelligence:  Methods and Applications

Data standardization example

In the example from slide 2, Y = AX; assume all X means = 1 and variances = 1.

Transformation:

$$\begin{pmatrix} Y_1 \\ Y_2 \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} X_1 \\ X_2 \end{pmatrix}$$

Vector of mean feature values:

$$\bar{\mathbf{X}} = \begin{pmatrix} 1 \\ 1 \end{pmatrix} \;\Rightarrow\; \bar{\mathbf{Y}} = \mathbf{A}\bar{\mathbf{X}} = \begin{pmatrix} 3 \\ 2 \end{pmatrix}$$

Variance (check it!):

$$\boldsymbol{\sigma}_Y^2 = \mathrm{Diag}\!\left(\mathbf{A}\mathbf{A}^T\right) = \begin{pmatrix} 5 \\ 2 \end{pmatrix}$$

$$\|\mathbf{Y}^{(1)} - \mathbf{Y}^{(2)}\|^2 = \left(\mathbf{X}^{(1)} - \mathbf{X}^{(2)}\right)^T\mathbf{A}^T\mathbf{A}\left(\mathbf{X}^{(1)} - \mathbf{X}^{(2)}\right)$$

How to make this invariant?
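A numeric "check it!" sketch (NumPy assumed; the sample size and seed are arbitrary): with unit feature variances and uncorrelated X, the variances of Y = AX are the diagonal of A A^T.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

rng = np.random.default_rng(2)
X = rng.normal(loc=1.0, scale=1.0, size=(2, 100000))   # means = 1, variances = 1

Y = A @ X
print(Y.mean(axis=1))            # approximately (3, 2) = A (1,1)^T
print(Y.var(axis=1, ddof=1))     # approximately (5, 2) = diag(A A^T)
print(np.diag(A @ A.T))          # exactly [5. 2.]
```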

Page 8: Computational Intelligence:  Methods and Applications

Covariance matrix

Variance (spread around mean value) + correlation between features.

where X is d x n dimensional matrix of vectors shifted to their means.

The covariance matrix is symmetric, $C_{ij} = C_{ji}$, and positive definite.

Diagonal elements are the variances (squares of the std): $\sigma_i^2 = C_{ii}$.

$$C_{ij} = \frac{1}{n-1}\sum_{k=1}^{n}\left(X_i^{(k)} - \bar{X}_i\right)\left(X_j^{(k)} - \bar{X}_j\right); \qquad i, j = 1 \ldots d$$

$$\mathbf{C}_X = \frac{1}{n-1}\sum_{k=1}^{n}\left(\mathbf{X}^{(k)} - \bar{\mathbf{X}}\right)\left(\mathbf{X}^{(k)} - \bar{\mathbf{X}}\right)^T = \frac{1}{n-1}\mathbf{X}\mathbf{X}^T$$

Pearson correlation coefficient:

$$r_{ij} = C_{ij}/\sigma_i\sigma_j \in [-1, +1]$$

A spherical distribution of data has $\mathbf{C}_X = \mathbf{I}$ (unit matrix).

Elongated ellipsoids: large off-diagonal elements, strong correlations between features.

$\mathbf{C}_X$ is $d \times d$.
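A sketch of the covariance formula above (NumPy assumed; the 3-feature synthetic data set is an illustrative assumption): X holds column vectors shifted to their means, so $\mathbf{C}_X = \mathbf{X}\mathbf{X}^T/(n-1)$ is d x d.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.multivariate_normal([0, 0, 0],
                            [[4, 2, 0], [2, 3, 1], [0, 1, 2]], size=500).T  # d=3, n=500

Xc = X - X.mean(axis=1, keepdims=True)              # shift each feature to zero mean
C = Xc @ Xc.T / (X.shape[1] - 1)                    # d x d covariance matrix

print(np.allclose(C, C.T))                          # symmetric
print(np.allclose(C, np.cov(X)))                    # matches NumPy's estimator
r = C / np.sqrt(np.outer(np.diag(C), np.diag(C)))   # Pearson correlations r_ij
print(r)
```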

Page 9: Computational Intelligence:  Methods and Applications

Correlation

The correlation coefficient measures only linear dependence and may be misleading …

Page 10: Computational Intelligence:  Methods and Applications

Mahalanobis distance

Linear combinations of features lead to rotations and scaling of the data.

The Mahalanobis distance is defined as:

$$\|\mathbf{X}^{(1)} - \mathbf{X}^{(2)}\|^2_{\mathbf{C}_X} = \left(\mathbf{X}^{(1)} - \mathbf{X}^{(2)}\right)^T\mathbf{C}_X^{-1}\left(\mathbf{X}^{(1)} - \mathbf{X}^{(2)}\right)$$

It is invariant to linear transformations:

$$\bar{\mathbf{Y}} = \mathbf{A}\bar{\mathbf{X}}; \qquad \mathbf{Y} = \mathbf{A}\mathbf{X}; \qquad \mathbf{C}_Y = \mathbf{A}\mathbf{C}_X\mathbf{A}^T$$

$$\|\mathbf{Y}^{(1)} - \mathbf{Y}^{(2)}\|^2_{\mathbf{C}_Y} = \left(\mathbf{X}^{(1)} - \mathbf{X}^{(2)}\right)^T\mathbf{A}^T\left(\mathbf{A}\mathbf{C}_X\mathbf{A}^T\right)^{-1}\mathbf{A}\left(\mathbf{X}^{(1)} - \mathbf{X}^{(2)}\right) = \|\mathbf{X}^{(1)} - \mathbf{X}^{(2)}\|^2_{\mathbf{C}_X}$$
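A sketch of this invariance (NumPy assumed; the data, the matrix A and the two sample points are arbitrary choices): the Mahalanobis distance computed in the Y = AX coordinates with $\mathbf{C}_Y = \mathbf{A}\mathbf{C}_X\mathbf{A}^T$ equals the one computed in the original X coordinates.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.multivariate_normal([0, 0], [[3, 1], [1, 2]], size=2000).T
A = np.array([[2.0, 1.0], [1.0, 1.0]])
Y = A @ X

CX, CY = np.cov(X), np.cov(Y)                  # CY equals A CX A^T for this sample

def mahalanobis2(u, v, C):
    """Squared Mahalanobis distance (u - v)^T C^{-1} (u - v)."""
    d = u - v
    return d @ np.linalg.solve(C, d)

x1, x2 = X[:, 0], X[:, 1]
print(mahalanobis2(x1, x2, CX))                # same value ...
print(mahalanobis2(A @ x1, A @ x2, CY))        # ... after the linear transformation
```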

Page 11: Computational Intelligence:  Methods and Applications

Principal components

How to avoid correlated features?

Correlations ⇒ the covariance matrix is non-diagonal!

Solution: diagonalize it, then use the transformation that makes it diagonal to de-correlate the features.

C – a symmetric, positive definite matrix: $\mathbf{X}^T\mathbf{C}\mathbf{X} > 0$ for $\|\mathbf{X}\| > 0$;

its eigenvectors are orthonormal: $\mathbf{Z}^{(i)T}\mathbf{Z}^{(j)} = \delta_{ij}$,

its eigenvalues are all non-negative, $\lambda_i \ge 0$.

Z – matrix of orthonormal eigenvectors (because CX is real+symmetric),

transforms X into Y, with diagonal CY, i.e. decorrelated.

$$\mathbf{C}_X\mathbf{Z}^{(i)} = \lambda_i\mathbf{Z}^{(i)}; \qquad \mathbf{Y} = \mathbf{Z}^T\mathbf{X}; \qquad \mathbf{C}_Y = \mathbf{Z}^T\mathbf{C}_X\mathbf{Z} = \mathbf{Z}^T\mathbf{Z}\boldsymbol{\Lambda} = \boldsymbol{\Lambda}$$

In matrix form: $\mathbf{X}$, $\mathbf{Y}$ are $d \times n$; $\mathbf{Z}$, $\mathbf{C}_X$, $\mathbf{C}_Y$ are $d \times d$.
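A sketch of this diagonalization (NumPy assumed; the 3-dimensional synthetic data are an illustrative assumption): after $\mathbf{Y} = \mathbf{Z}^T\mathbf{X}$ the covariance matrix is diagonal, i.e. the features are decorrelated.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.multivariate_normal([0, 0, 0],
                            [[5, 2, 1], [2, 3, 1], [1, 1, 2]], size=5000).T

CX = np.cov(X)
lam, Z = np.linalg.eigh(CX)              # eigenvalues (ascending) and orthonormal eigenvectors
print(np.allclose(Z.T @ Z, np.eye(3)))   # Z^T Z = I

Y = Z.T @ X                              # principal components of the data
CY = np.cov(Y)
print(np.round(CY, 3))                   # approximately diag(lambda_1, ..., lambda_d)
print(np.round(lam, 3))
```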

Page 12: Computational Intelligence:  Methods and Applications

Matrix form

Eigenproblem for the C matrix in matrix form: $\mathbf{C}_X\mathbf{Z} = \mathbf{Z}\boldsymbol{\Lambda}$

$$\begin{pmatrix} C_{11} & C_{12} & \cdots & C_{1d} \\ C_{21} & C_{22} & \cdots & C_{2d} \\ \vdots & \vdots & & \vdots \\ C_{d1} & C_{d2} & \cdots & C_{dd} \end{pmatrix}
\begin{pmatrix} Z_{11} & Z_{12} & \cdots & Z_{1d} \\ Z_{21} & Z_{22} & \cdots & Z_{2d} \\ \vdots & \vdots & & \vdots \\ Z_{d1} & Z_{d2} & \cdots & Z_{dd} \end{pmatrix}
=
\begin{pmatrix} Z_{11} & Z_{12} & \cdots & Z_{1d} \\ Z_{21} & Z_{22} & \cdots & Z_{2d} \\ \vdots & \vdots & & \vdots \\ Z_{d1} & Z_{d2} & \cdots & Z_{dd} \end{pmatrix}
\begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_d \end{pmatrix}$$

Page 13: Computational Intelligence:  Methods and Applications

Principal components

PCA is an old idea: C. Pearson (1901), H. Hotelling (1933).

Result: PC are linear combinations of all features, providing new uncorrelated features, with diagonal covariance matrix = eigenvalues.

$$\mathbf{Y} = \mathbf{Z}^T\mathbf{X}; \qquad \mathbf{C}_Y = \mathbf{Z}^T\mathbf{C}_X\mathbf{Z} = \boldsymbol{\Lambda}; \qquad \mathbf{C}_X = \mathbf{Z}\boldsymbol{\Lambda}\mathbf{Z}^T$$

Small $\lambda_i$ ⇒ small variance ⇒ the data changes little in the direction $Y_i$.

PCA minimizes the errors of reconstructing the C matrix: the $\mathbf{Z}^{(i)}$ vectors for large $\lambda_i$ are sufficient to approximate $\mathbf{C}_X = \mathbf{Z}\boldsymbol{\Lambda}\mathbf{Z}^T$ well, because vectors corresponding to small eigenvalues have a very small contribution to the covariance matrix.

Y – principal components, or vectors X transformed using eigenvectors of CX

Covariance matrix of transformed vectors is diagonal => ellipsoidal distribution of data.
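A sketch of this reconstruction idea (NumPy assumed; the 5-dimensional synthetic data and the choice k = 2 are illustrative assumptions): keep only the eigenvectors with the largest eigenvalues, reconstruct X = ZY with the rejected components set to zero, and measure the error.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.multivariate_normal(np.zeros(5),
                            np.diag([10.0, 5.0, 1.0, 0.1, 0.01]), size=2000).T

Xc = X - X.mean(axis=1, keepdims=True)
lam, Z = np.linalg.eigh(np.cov(Xc))
order = np.argsort(lam)[::-1]                 # sort by decreasing variance
lam, Z = lam[order], Z[:, order]

k = 2
Y = Z.T @ Xc                                  # all principal components
Y[k:, :] = 0.0                                # reject small-variance components
X_rec = Z @ Y                                 # reconstruction from k components

err = np.linalg.norm(Xc - X_rec) / np.linalg.norm(Xc)
print(f"relative reconstruction error with k={k}: {err:.3f}")
```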

Page 14: Computational Intelligence:  Methods and Applications

Two components for visualization

New coordinate system: axes ordered according to variance = size of the eigenvalue.

The first k dimensions account for the fraction

$$V_k = \frac{\sum_{i=1}^{k}\lambda_i}{\sum_{i=1}^{d}\lambda_i}$$

of all the variance (note that the $\lambda_i$ are variances); frequently 80-90% is sufficient for a rough description.

Diagonalization methods: see Numerical Recipes, www.nr.com
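A sketch of the $V_k$ fraction defined above (NumPy assumed; the eigenvalues are made-up example numbers): the cumulative sum of eigenvalues divided by their total.

```python
import numpy as np

lam = np.array([4.2, 2.1, 0.9, 0.5, 0.2, 0.1])   # example eigenvalues (variances)
V = np.cumsum(lam) / lam.sum()                   # V_k for k = 1, ..., d
print(np.round(V, 3))                            # pick the smallest k with V_k >= 0.8-0.9
k = int(np.searchsorted(V, 0.9) + 1)
print("components needed for 90% of the variance:", k)
```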

Page 15: Computational Intelligence:  Methods and Applications

PCA properties

PC Analysis (PCA) may be achieved by:

• transformation making covariance matrix diagonal

• projecting the data onto a line for which the sum of squared distances from the original points to their projections is minimal;

• an orthogonal transformation to new variables that have stationary variances Y(W): around the maximum, the change of variance is minimal.

True covariance matrices are usually not known, they have to be estimated from data.

This works well on single-cluster data;

more complex structure may require local PCA: the PCA transformation should then be done separately for each cluster or neighborhood of a query vector X.

Page 16: Computational Intelligence:  Methods and Applications

Some remarks on PCASome remarks on PCA

PCA results obviously depend on the initial scaling of the features; therefore one should standardize the data first to make the results independent of scaling or measurement units. Example: Heart data.

Assume that the data matrix X has been standardized and show that (see the formula below): the mean stays at zero and the variance of the principal components is equal to the eigenvalues. Therefore rejecting the $Y_i$ components with small variance leads to small errors in the reconstruction of $\mathbf{X} = \mathbf{Z}\mathbf{Y}$, where the rejected components are replaced by zero values.

PCA is useful for:

• finding new, more informative, uncorrelated features;

• reducing dimensionality: rejecting low-variance features;

• reconstructing the original data from lower-dimensional projections.

$$\bar{Y}_i = 0; \qquad \sigma^2(Y_i) = \lambda_i$$

Page 17: Computational Intelligence:  Methods and Applications

PCA Wisconsin example

Wisconsin Breast Cancer data:

• Collected at the University of Wisconsin Hospitals, USA.

• 699 cases, 458 (65.5%) benign (red), 241 malignant (green).

• 9 features, quantized 1, 2, …, 10, describing cell properties, e.g.:

Clump Thickness, Uniformity of Cell Size, Shape, Marginal Adhesion, Single Epithelial Cell Size, Bare Nuclei,

Bland Chromatin, Normal Nucleoli, Mitoses.

2D scatterograms do not show any structure no matter which subspaces are taken!

Page 18: Computational Intelligence:  Methods and Applications

Example cont.

PCA gives useful information already in 2D.

Taking first PCA component of the standardized data:

Thresholding the first component at Y1 = 1.41 (benign on one side of the threshold, malignant on the other) gives 18 errors/699 cases = 97.4% accuracy.

Transformed vectors are not standardized; their std's are shown on the slide.

Eigenvalues decrease to zero slowly, but classes are well separated.
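A hedged sketch only (scikit-learn and NumPy assumed): scikit-learn's load_breast_cancer() ships the 30-feature WDBC data, not the original 9-feature Wisconsin set used on this slide, so the threshold 1.41 and the 18-error result above will not be reproduced exactly; the sketch merely illustrates a one-dimensional threshold rule on the first principal component.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer

data = load_breast_cancer()
X, y = data.data, data.target                     # rows = cases here; 1 = benign

Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # standardize the features
lam, V = np.linalg.eigh(np.cov(Z.T))
pc1 = Z @ V[:, np.argmax(lam)]                    # first principal component

# Try a simple threshold rule on PC1, in the spirit of the slide.
accs = []
for t in np.linspace(pc1.min(), pc1.max(), 200):
    acc = np.mean((pc1 > t) == y)
    accs.append(max(acc, 1 - acc))                # sign of PC1 is arbitrary
print(f"best accuracy with a single PC1 threshold: {max(accs):.3f}")
```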

Page 19: Computational Intelligence:  Methods and Applications

PCA disadvantages

Useful for dimensionality reduction, but:

• Largest variance determines which components are used, but does not guarantee interesting viewpoint for clustering data.

• The meaning of features is lost when linear combinations are formed.

Analysis of coefficients in Z1 and other important eigenvectors may show which original features are given much weight.

PCA may also be done in an efficient way by performing a singular value decomposition of the standardized data matrix.
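A sketch of this equivalence (NumPy assumed; the synthetic data are an illustrative assumption): diagonalizing the covariance matrix and taking the SVD of the centered data matrix give the same variances, since $\mathbf{C}_X = \mathbf{U}\mathbf{S}^2\mathbf{U}^T/(n-1)$.

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.multivariate_normal([0, 0, 0], [[4, 2, 0], [2, 3, 1], [0, 1, 2]], size=1000).T

Xc = X - X.mean(axis=1, keepdims=True)      # d x n, centered
n = X.shape[1]

# Eigendecomposition of the covariance matrix ...
lam, Z = np.linalg.eigh(np.cov(Xc))

# ... versus SVD of the data matrix: C_X = U S^2 U^T / (n-1).
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
print(np.round(np.sort(lam)[::-1], 3))
print(np.round(S**2 / (n - 1), 3))          # same variances, sorted in decreasing order
```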

PCA is also called the Karhunen-Loève transformation.

Many variants of PCA are described in A. Webb, Statistical pattern recognition, J. Wiley 2002.

Page 20: Computational Intelligence:  Methods and Applications

2 skewed distributions

PCA transformation for 2D data:

The first component will be chosen along the line of largest variance; both clusters will strongly overlap, and no interesting structure will be visible.

In fact, the projection onto the axis orthogonal to the first PCA component has much more discriminating power.

Discriminant coordinates should be used to reveal class structure.