Dimensionality Reduction, Lecture 23. David Sontag, New York University. Slides adapted from Carlos Guestrin and Luke Zettlemoyer.


Page 1 (source: people.csail.mit.edu/.../courses/ml16/slides/lecture23.pdf)

Dimensionality Reduction
Lecture 23

David Sontag, New York University

Slides adapted from Carlos Guestrin and Luke Zettlemoyer

Page 2

Dimensionality reduction

• Input data may have thousands or millions of dimensions!
  – e.g., text data has ???, images have ???

• Dimensionality reduction: represent data with fewer dimensions
  – easier learning – fewer parameters
  – visualization – show high dimensional data in 2D
  – discover "intrinsic dimensionality" of data

    • high dimensional data that is truly lower dimensional
    • noise reduction

Page 3

Dimension reduction

• Assumption: data (approximately) lies on a lower dimensional space

• Examples:

Slide from Yi Zhang

n = 2 k = 1

n = 3 k = 2

Page 4

Example (from Bishop)

• Suppose we have a dataset of digits ("3") perturbed in various ways:

• What operations did I perform? What is the data's intrinsic dimensionality?

• Here the underlying manifold is nonlinear

Page 5

Lower dimensional projections

• Obtain new feature vector by transforming the original features x_1, …, x_n

• New features are linear combinations of old ones
• Reduces dimension when k < n
• This is typically done in an unsupervised setting
  – just X, but no Y

z_1 = w_0^(1) + ∑_i w_i^(1) x_i

z_1 = w_0^(1) + ∑_i w_i^(1) x_i
…
z_k = w_0^(k) + ∑_i w_i^(k) x_i

In general will not be invertible – cannot go from z back to x

Page 6

Which projection is better?

example of this is if each data point represented a grayscale image, and each x_j^(i) took a value in {0, 1, …, 255} corresponding to the intensity value of pixel j in image i.

Now, having carried out the normalization, how do we compute the "major axis of variation" u—that is, the direction on which the data approximately lies? One way to pose this problem is as finding the unit vector u so that when the data is projected onto the direction corresponding to u, the variance of the projected data is maximized. Intuitively, the data starts off with some amount of variance/information in it. We would like to choose a direction u so that if we were to approximate the data as lying in the direction/subspace corresponding to u, as much as possible of this variance is still retained.

Consider the following dataset, on which we have already carried out the normalization steps:

Now, suppose we pick u to correspond to the direction shown in the figure below. The circles denote the projections of the original data onto this line.

We see that the projected data still has a fairly large variance, and the points tend to be far from zero. In contrast, suppose we had instead picked the following direction:

Here, the projections have a significantly smaller variance, and are much closer to the origin.

We would like to automatically select the direction u corresponding to the first of the two figures shown above. To formalize this, note that given a


From notes by Andrew Ng

Page 7

Reminder: Vector Projections

• Basic definitions:
  – A · B = |A||B| cos θ

• Assume |B| = 1 (unit vector)
  – A · B = |A| cos θ
  – So, dot product is length of projection!
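A quick numeric check (my own example, not from the slides): with A = (3, 4) and B = (1, 0), A · B = 3·1 + 4·0 = 3. Since |A| = 5 and cos θ = 3/5, |A| cos θ = 3 as well, so the dot product equals the length of A's projection onto the unit vector B.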

Page 8

Using a new basis for the data

• Project a point into a (lower dimensional) space:
  – point: x = (x_1, …, x_n)
  – select a basis – set of unit (length 1) basis vectors (u_1, …, u_k)
    • we consider an orthonormal basis:
      – u_j · u_j = 1, and u_j · u_l = 0 for j ≠ l
  – select a center – x̄, defines offset of space
  – best coordinates in lower dimensional space defined by dot products: (z_1, …, z_k), z_j^i = (x^i − x̄) · u_j
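A minimal numpy sketch of this projection step (the data, basis, and variable names below are illustrative, not from the slides):

import numpy as np

# Illustrative sketch: project points onto an orthonormal basis u_1, ..., u_k
# after subtracting the center x_bar, i.e. z_j = (x - x_bar) . u_j.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                  # m = 100 points, n = 5 dimensions
x_bar = X.mean(axis=0)                         # the center

k = 2
U, _ = np.linalg.qr(rng.normal(size=(5, k)))   # n-by-k matrix with orthonormal columns
assert np.allclose(U.T @ U, np.eye(k))         # u_j . u_j = 1, u_j . u_l = 0 for j != l

Z = (X - x_bar) @ U                            # shape (m, k): coordinates z_1, ..., z_k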

Page 9

Maximize variance of projection

unit vector u and a point x, the length of the projection of x onto u is given by x^T u. I.e., if x^(i) is a point in our dataset (one of the crosses in the plot), then its projection onto u (the corresponding circle in the figure) is distance x^T u from the origin. Hence, to maximize the variance of the projections, we would like to choose a unit-length u so as to maximize:

(1/m) ∑_{i=1}^m (x^(i)T u)^2 = (1/m) ∑_{i=1}^m u^T x^(i) x^(i)T u = u^T ( (1/m) ∑_{i=1}^m x^(i) x^(i)T ) u.

We easily recognize that maximizing this subject to ||u||^2 = 1 gives the principal eigenvector of Σ = (1/m) ∑_{i=1}^m x^(i) x^(i)T, which is just the empirical covariance matrix of the data (assuming it has zero mean).[1]

To summarize, we have found that if we wish to find a 1-dimensional subspace with which to approximate the data, we should choose u to be the principal eigenvector of Σ. More generally, if we wish to project our data into a k-dimensional subspace (k < n), we should choose u_1, …, u_k to be the top k eigenvectors of Σ. The u_i's now form a new, orthogonal basis for the data.[2]

Then, to represent x^(i) in this basis, we need only compute the corresponding vector

y^(i) = [ u_1^T x^(i) ;  u_2^T x^(i) ;  … ;  u_k^T x^(i) ] ∈ R^k.

Thus, whereas x^(i) ∈ R^n, the vector y^(i) now gives a lower, k-dimensional, approximation/representation for x^(i). PCA is therefore also referred to as a dimensionality reduction algorithm. The vectors u_1, …, u_k are called the first k principal components of the data.

Remark. Although we have shown it formally only for the case of k = 1, using well-known properties of eigenvectors it is straightforward to show that

[1] If you haven't seen this before, try using the method of Lagrange multipliers to maximize u^T Σ u subject to u^T u = 1. You should be able to show that Σu = λu, for some λ, which implies u is an eigenvector of Σ, with eigenvalue λ.

[2] Because Σ is symmetric, the u_i's will (or always can be chosen to be) orthogonal to each other.

Let x^(i) be the i-th data point minus the mean.

Choose unit-length u to maximize:

Let ||u|| = 1 and maximize. Using the method of Lagrange multipliers, we can show that the solution is given by the principal eigenvector of the covariance matrix! (shown on board)

Covariance matrix Σ
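The following numpy snippet is a sketch of that claim on synthetic data (the dataset and variable names are mine): the projected variance achieved by the top eigenvector of the covariance matrix equals its largest eigenvalue and is at least as large as that of any other unit direction.

import numpy as np

# Sketch: the unit vector maximizing the variance of the projections is the
# principal eigenvector of the empirical covariance matrix.
rng = np.random.default_rng(1)
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=[[3.0, 1.0], [1.0, 1.0]], size=500)
Xc = X - X.mean(axis=0)                        # x^(i): data point minus the mean
m = Xc.shape[0]

Sigma = (Xc.T @ Xc) / m                        # covariance matrix (1/m) sum x x^T
eigvals, eigvecs = np.linalg.eigh(Sigma)       # eigenvalues in ascending order
u_star = eigvecs[:, -1]                        # principal eigenvector

def proj_var(u):
    # variance of the projections: (1/m) sum_i (x^(i)T u)^2
    return np.mean((Xc @ u) ** 2)

random_dirs = rng.normal(size=(1000, 2))
random_dirs /= np.linalg.norm(random_dirs, axis=1, keepdims=True)
assert proj_var(u_star) >= max(proj_var(u) for u in random_dirs)
assert np.isclose(proj_var(u_star), eigvals[-1])   # maximal variance = top eigenvalue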

Page 10

Basic PCA algorithm

• Start from m by n data matrix X
• Recenter: subtract mean from each row of X
  – X_c ← X − X̄
• Compute covariance matrix:
  – Σ ← (1/m) X_c^T X_c
• Find eigenvectors and values of Σ
• Principal components: k eigenvectors with highest eigenvalues

[Pearson 1901; Hotelling 1933]
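A compact numpy sketch of these steps (my own code and toy data, following the recipe above):

import numpy as np

def pca(X, k):
    # Basic PCA: recenter, covariance matrix, top-k eigenvectors.
    m = X.shape[0]
    Xc = X - X.mean(axis=0)                    # recenter: subtract mean from each row
    Sigma = (Xc.T @ Xc) / m                    # covariance matrix, n by n
    eigvals, eigvecs = np.linalg.eigh(Sigma)   # eigh since Sigma is symmetric
    order = np.argsort(eigvals)[::-1]          # sort eigenvalues, largest first
    U_k = eigvecs[:, order[:k]]                # k eigenvectors with highest eigenvalues
    Z = Xc @ U_k                               # coordinates of each point in the new basis
    return U_k, eigvals[order[:k]], Z

X = np.random.default_rng(2).normal(size=(200, 10))   # toy m-by-n data matrix
U_k, top_eigvals, Z = pca(X, k=3)
print(Z.shape)                                 # (200, 3)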

Page 11

PCA example

Data: Projection: Reconstruction:

Page 12

Dimensionality reduction with PCA

In high-dimensional problems, data usually lies near a linear subspace, as noise introduces small variability

Only keep data projections onto principal components with large eigenvalues

Can ignore the components of lesser significance.

You might lose some information, but if the eigenvalues are small, you don't lose much

[Bar chart: variance (%) captured by each of the first ten principal components, PC1 through PC10]

Slide from Aarti Singh

Percentage of total variance captured by dimension z_j for j = 1 to 10:

var(z_j) = (1/m) ∑_{i=1}^m (z_j^i)^2 = (1/m) ∑_{i=1}^m (x^i · u_j)^2 = λ_j

so the fraction of total variance captured by z_j is λ_j / ∑_{l=1}^n λ_l.
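As a sketch (toy data and variable names of my choosing), the same quantity can be read off the eigenvalue spectrum of the covariance matrix:

import numpy as np

# Fraction of total variance captured by each component: lambda_j / sum_l lambda_l.
rng = np.random.default_rng(3)
X = rng.normal(size=(500, 10)) * np.array([5, 4, 3, 2, 1, 0.5, 0.4, 0.3, 0.2, 0.1])
Xc = X - X.mean(axis=0)
Sigma = (Xc.T @ Xc) / Xc.shape[0]

eigvals = np.linalg.eigvalsh(Sigma)[::-1]      # eigenvalues, descending
variance_pct = 100 * eigvals / eigvals.sum()
print(np.round(variance_pct, 1))               # analogous to the PC1..PC10 bar chart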

Page 13

Eigenfaces [Turk, Pentland '91]

• Input images:
• Principal components:

Page 14

Eigenfaces reconstruction

• Each image corresponds to adding together (weighted versions of) the principal components:
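A small numpy sketch of this reconstruction idea (synthetic "images" and my own variable names, not the Turk/Pentland data):

import numpy as np

# Reconstruct an image as the mean plus a weighted sum of principal components.
rng = np.random.default_rng(4)
X = rng.normal(size=(50, 64))                  # pretend each row is a flattened image
x_bar = X.mean(axis=0)
Xc = X - x_bar

eigvals, eigvecs = np.linalg.eigh((Xc.T @ Xc) / X.shape[0])
k = 10
U_k = eigvecs[:, np.argsort(eigvals)[::-1][:k]]   # top-k "eigenfaces", 64 by k

z = Xc[0] @ U_k                                # weights of image 0 on each component
x_hat = x_bar + U_k @ z                        # weighted sum of the principal components
print(np.linalg.norm(X[0] - x_hat))            # reconstruction error for image 0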

Page 15

Scaling up

• Covariance matrix can be really big!
  – Σ is n by n
  – 10,000 features can be common!
  – finding eigenvectors is very slow…
• Use singular value decomposition (SVD)
  – Finds k eigenvectors
  – great implementations available, e.g., Matlab svd

Page 16

SVD

• Write X = Z S U^T
  – X ← data matrix, one row per data point
  – S ← singular value matrix, diagonal matrix with entries σ_i
    • Relationship between singular values of X and eigenvalues of Σ given by λ_i = σ_i^2 / m
  – Z ← weight matrix, one row per data point
    • Z times S gives coordinate of x_i in eigenspace
  – U^T ← singular vector matrix
    • In our setting, each row is eigenvector u_j

Page 17

PCA using SVD algorithm

• Start from m by n data matrix X
• Recenter: subtract mean from each row of X
  – X_c ← X − X̄
• Call SVD algorithm on X_c – ask for k singular vectors
• Principal components: k singular vectors with highest singular values (rows of U^T)
  – Coefficients: project each point onto the new vectors
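A numpy sketch of this SVD route (synthetic data; variable names are mine), including a check of the λ_i = σ_i^2 / m relationship from the previous slide:

import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 20))                 # toy m-by-n data matrix
m = X.shape[0]
Xc = X - X.mean(axis=0)                        # recenter

# Reduced SVD of X_c: rows of Vt are the singular vectors u_j.
Z, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 4
components = Vt[:k]                            # k singular vectors with highest singular values
coeffs = Xc @ components.T                     # project each point onto the new vectors
# Equivalently: coeffs = Z[:, :k] * s[:k]      # Z times S gives the coordinates

# Singular values of X_c vs. eigenvalues of the covariance matrix: lambda_i = sigma_i^2 / m
Sigma = (Xc.T @ Xc) / m
eigvals = np.sort(np.linalg.eigvalsh(Sigma))[::-1]
assert np.allclose(eigvals[:k], s[:k] ** 2 / m)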

Page 18

Non-linear methods

Combinations of observed features provide more efficient representation, and capture underlying relations that govern the data.

E.g., Ego, personality and intelligence are hidden attributes that characterize human behavior instead of survey questions; topics (sports, science, news, etc.) instead of documents.

Often may not have physical meaning.

• Linear methods: Principal Component Analysis (PCA), Factor Analysis, Independent Component Analysis (ICA)
• Nonlinear methods: ISOMAP, Local Linear Embedding (LLE)

Slide from Aarti Singh

Page 19

Isomap

converts distances to inner products (17), which uniquely characterize the geometry of the data in a form that supports efficient optimization. The global minimum of Eq. 1 is achieved by setting the coordinates y_i to the top d eigenvectors of the matrix τ(D_G) (13).

As with PCA or MDS, the true dimensionality of the data can be estimated from the decrease in error as the dimensionality of Y is increased. For the Swiss roll, where classical methods fail, the residual variance of Isomap correctly bottoms out at d = 2 (Fig. 2B).

Just as PCA and MDS are guaranteed, given sufficient data, to recover the true structure of linear manifolds, Isomap is guaranteed asymptotically to recover the true dimensionality and geometric structure of a strictly larger class of nonlinear manifolds. Like the Swiss roll, these are manifolds whose intrinsic geometry is that of a convex region of Euclidean space, but whose ambient geometry in the high-dimensional input space may be highly folded, twisted, or curved. For non-Euclidean manifolds, such as a hemisphere or the surface of a doughnut, Isomap still produces a globally optimal low-dimensional Euclidean representation, as measured by Eq. 1.

These guarantees of asymptotic convergence rest on a proof that as the number of data points increases, the graph distances d_G(i, j) provide increasingly better approximations to the intrinsic geodesic distances d_M(i, j), becoming arbitrarily accurate in the limit of infinite data (18, 19). How quickly d_G(i, j) converges to d_M(i, j) depends on certain parameters of the manifold as it lies within the high-dimensional space (radius of curvature and branch separation) and on the density of points. To the extent that a data set presents extreme values of these parameters or deviates from a uniform density, asymptotic convergence still holds in general, but the sample size required to estimate geodesic distance accurately may be impractically large.

Isomap's global coordinates provide a simple way to analyze and manipulate high-dimensional observations in terms of their intrinsic nonlinear degrees of freedom. For a set of synthetic face images, known to have three degrees of freedom, Isomap correctly detects the dimensionality (Fig. 2A) and separates out the true underlying factors (Fig. 1A). The algorithm also recovers the known low-dimensional structure of a set of noisy real images, generated by a human hand varying in finger extension and wrist rotation (Fig. 2C) (20). Given a more complex data set of handwritten digits, which does not have a clear manifold geometry, Isomap still finds globally meaningful coordinates (Fig. 1B) and nonlinear structure that PCA or MDS do not detect (Fig. 2D). For all three data sets, the natural appearance of linear interpolations between distant points in the low-dimensional coordinate space confirms that Isomap has captured the data's perceptually relevant structure (Fig. 4).

Previous attempts to extend PCA and MDS to nonlinear data sets fall into two broad classes, each of which suffers from limitations overcome by our approach. Local linear techniques (21–23) are not designed to represent the global structure of a data set within a single coordinate system, as we do in Fig. 1. Nonlinear techniques based on greedy optimization procedures (24–30) attempt to discover global structure, but lack the crucial algorithmic features that Isomap inherits from PCA and MDS: a noniterative, polynomial time procedure with a guarantee of global optimality; for intrinsically Euclidean man-

Fig. 2. The residual variance of PCA (open triangles), MDS [open triangles in (A) through (C); open circles in (D)], and Isomap (filled circles) on four data sets (42). (A) Face images varying in pose and illumination (Fig. 1A). (B) Swiss roll data (Fig. 3). (C) Hand images varying in finger extension and wrist rotation (20). (D) Handwritten "2"s (Fig. 1B). In all cases, residual variance decreases as the dimensionality d is increased. The intrinsic dimensionality of the data can be estimated by looking for the "elbow" at which this curve ceases to decrease significantly with added dimensions. Arrows mark the true or approximate dimensionality, when known. Note the tendency of PCA and MDS to overestimate the dimensionality, in contrast to Isomap.

Fig. 3. The "Swiss roll" data set, illustrating how Isomap exploits geodesic paths for nonlinear dimensionality reduction. (A) For two arbitrary points (circled) on a nonlinear manifold, their Euclidean distance in the high-dimensional input space (length of dashed line) may not accurately reflect their intrinsic similarity, as measured by geodesic distance along the low-dimensional manifold (length of solid curve). (B) The neighborhood graph G constructed in step one of Isomap (with K = 7 and N = 1000 data points) allows an approximation (red segments) to the true geodesic path to be computed efficiently in step two, as the shortest path in G. (C) The two-dimensional embedding recovered by Isomap in step three, which best preserves the shortest path distances in the neighborhood graph (overlaid). Straight lines in the embedding (blue) now represent simpler and cleaner approximations to the true geodesic paths than do the corresponding graph paths (red).


Goal: use geodesic distance between points (with respect to manifold)

Estimate manifold using graph. Distance between points given by distance of shortest path

Embed onto 2D plane so that Euclidean distance approximates graph distance

[Tenenbaum, Silva, Langford. Science 2000]
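A rough end-to-end sketch of those three steps in numpy/scipy (toy Swiss-roll-like data, a K-nearest-neighbor graph, and classical MDS; the parameter values and names here are illustrative, not the paper's code):

import numpy as np
from scipy.sparse.csgraph import shortest_path

def isomap(X, n_neighbors=7, d=2):
    m = X.shape[0]
    # Step 1: neighborhood graph with edge lengths d_X(i, j).
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    G = np.full((m, m), np.inf)                # inf = no edge
    nn = np.argsort(D, axis=1)[:, 1:n_neighbors + 1]
    for i in range(m):
        G[i, nn[i]] = D[i, nn[i]]
        G[nn[i], i] = D[i, nn[i]]              # symmetrize the neighbor relation
    # Step 2: shortest-path distances approximate geodesic distances.
    # (Points disconnected from the main component would have infinite distances
    # and should be removed first; see note 19 of the paper.)
    DG = shortest_path(G, method="FW", directed=False)
    # Step 3: classical MDS on DG (double-center the squared distances).
    H = np.eye(m) - np.ones((m, m)) / m
    tau = -0.5 * H @ (DG ** 2) @ H
    eigvals, eigvecs = np.linalg.eigh(tau)
    idx = np.argsort(eigvals)[::-1][:d]
    return eigvecs[:, idx] * np.sqrt(np.maximum(eigvals[idx], 0))

rng = np.random.default_rng(6)
t = 1.5 * np.pi * (1 + 2 * rng.random(400))    # toy "Swiss roll"
X = np.column_stack([t * np.cos(t), 20 * rng.random(400), t * np.sin(t)])
Y = isomap(X, n_neighbors=7, d=2)
print(Y.shape)                                 # (400, 2)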

Page 20

Isomap

ifolds, a guarantee of asymptotic convergence to the true structure; and the ability to discover manifolds of arbitrary dimensionality, rather than requiring a fixed d initialized from the beginning or computational resources that increase exponentially in d.

Here we have demonstrated Isomap's performance on data sets chosen for their visually compelling structures, but the technique may be applied wherever nonlinear geometry complicates the use of PCA or MDS. Isomap complements, and may be combined with, linear extensions of PCA based on higher order statistics, such as independent component analysis (31, 32). It may also lead to a better understanding of how the brain comes to represent the dynamic appearance of objects, where psychophysical studies of apparent motion (33, 34) suggest a central role for geodesic transformations on nonlinear manifolds (35) much like those studied here.

References and Notes
1. M. P. Young, S. Yamane, Science 256, 1327 (1992).
2. R. N. Shepard, Science 210, 390 (1980).
3. M. Turk, A. Pentland, J. Cogn. Neurosci. 3, 71 (1991).
4. H. Murase, S. K. Nayar, Int. J. Comp. Vision 14, 5 (1995).
5. J. W. McClurkin, L. M. Optican, B. J. Richmond, T. J. Gawne, Science 253, 675 (1991).
6. J. L. Elman, D. Zipser, J. Acoust. Soc. Am. 83, 1615 (1988).
7. W. Klein, R. Plomp, L. C. W. Pols, J. Acoust. Soc. Am. 48, 999 (1970).
8. E. Bizzi, F. A. Mussa-Ivaldi, S. Giszter, Science 253, 287 (1991).
9. T. D. Sanger, Adv. Neural Info. Proc. Syst. 7, 1023 (1995).
10. J. W. Hurrell, Science 269, 676 (1995).
11. C. A. L. Bailer-Jones, M. Irwin, T. von Hippel, Mon. Not. R. Astron. Soc. 298, 361 (1997).
12. P. Menozzi, A. Piazza, L. Cavalli-Sforza, Science 201, 786 (1978).
13. K. V. Mardia, J. T. Kent, J. M. Bibby, Multivariate Analysis (Academic Press, London, 1979).
14. A. H. Monahan, J. Clim., in press.
15. The scale-invariant K parameter is typically easier to set than ε, but may yield misleading results when the local dimensionality varies across the data set. When available, additional constraints such as the temporal ordering of observations may also help to determine neighbors. In earlier work (36) we explored a more complex method (37), which required an order of magnitude more data and did not support the theoretical performance guarantees we provide here for ε- and K-Isomap.
16. This procedure, known as Floyd's algorithm, requires O(N^3) operations. More efficient algorithms exploiting the sparse structure of the neighborhood graph can be found in (38).
17. The operator τ is defined by τ(D) = −HSH/2, where S is the matrix of squared distances {S_ij = D_ij^2}, and H is the "centering matrix" {H_ij = δ_ij − 1/N} (13).

18. Our proof works by showing that for a sufficiently high density (α) of data points, we can always choose a neighborhood size (ε or K) large enough that the graph will (with high probability) have a path not much longer than the true geodesic, but small enough to prevent edges that "short circuit" the true geometry of the manifold. More precisely, given arbitrarily small values of λ_1, λ_2, and μ, we can guarantee that with probability at least 1 − μ, estimates of the form

(1 − λ_1) d_M(i, j) ≤ d_G(i, j) ≤ (1 + λ_2) d_M(i, j)

will hold uniformly over all pairs of data points i, j. For ε-Isomap, we require

ε < (2/π) r_0 √(24 λ_1),    ε < s_0,
α > [log(V / (μ η_d (λ_2 ε / 16)^d))] / η_d (λ_2 ε / 8)^d

where r_0 is the minimal radius of curvature of the manifold M as embedded in the input space X, s_0 is the minimal branch separation of M in X, V is the (d-dimensional) volume of M, and (ignoring boundary effects) η_d is the volume of the unit ball in Euclidean d-space. For K-Isomap, we let ε be as above and fix the ratio (K + 1)/α = η_d (ε/2)^d / 2. We then require

e^(−(K+1)/4) ≤ μ η_d (ε/4)^d / (4V),
(e/4)^((K+1)/2) ≤ μ η_d (ε/8)^d / (16V),
α > [4 log(8V / (μ η_d (λ_2 ε / 32π)^d))] / η_d (λ_2 ε / 16π)^d

The exact content of these conditions—but not their general form—depends on the particular technical assumptions we adopt. For details and extensions to nonuniform densities, intrinsic curvature, and boundary effects, see http://isomap.stanford.edu.

19. In practice, for finite data sets, d_G(i, j) may fail to approximate d_M(i, j) for a small fraction of points that are disconnected from the giant component of the neighborhood graph G. These outliers are easily detected as having infinite graph distances from the majority of other points and can be deleted from further analysis.
20. The Isomap embedding of the hand images is available at Science Online at www.sciencemag.org/cgi/content/full/290/5500/2319/DC1. For additional material and computer code, see http://isomap.stanford.edu.
21. R. Basri, D. Roth, D. Jacobs, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (1998), pp. 414–420.
22. C. Bregler, S. M. Omohundro, Adv. Neural Info. Proc. Syst. 7, 973 (1995).
23. G. E. Hinton, M. Revow, P. Dayan, Adv. Neural Info. Proc. Syst. 7, 1015 (1995).
24. R. Durbin, D. Willshaw, Nature 326, 689 (1987).
25. T. Kohonen, Self-Organisation and Associative Memory (Springer-Verlag, Berlin, ed. 2, 1988), pp. 119–157.
26. T. Hastie, W. Stuetzle, J. Am. Stat. Assoc. 84, 502 (1989).
27. M. A. Kramer, AIChE J. 37, 233 (1991).
28. D. DeMers, G. Cottrell, Adv. Neural Info. Proc. Syst. 5, 580 (1993).
29. R. Hecht-Nielsen, Science 269, 1860 (1995).
30. C. M. Bishop, M. Svensen, C. K. I. Williams, Neural Comp. 10, 215 (1998).
31. P. Comon, Signal Proc. 36, 287 (1994).
32. A. J. Bell, T. J. Sejnowski, Neural Comp. 7, 1129 (1995).
33. R. N. Shepard, S. A. Judd, Science 191, 952 (1976).
34. M. Shiffrar, J. J. Freyd, Psychol. Science 1, 257 (1990).

Table 1. The Isomap algorithm takes as input the distances d_X(i, j) between all pairs i, j from N data points in the high-dimensional input space X, measured either in the standard Euclidean metric (as in Fig. 1A) or in some domain-specific metric (as in Fig. 1B). The algorithm outputs coordinate vectors y_i in a d-dimensional Euclidean space Y that (according to Eq. 1) best represent the intrinsic geometry of the data. The only free parameter (ε or K) appears in Step 1.

Step 1. Construct neighborhood graph: Define the graph G over all data points by connecting points i and j if [as measured by d_X(i, j)] they are closer than ε (ε-Isomap), or if i is one of the K nearest neighbors of j (K-Isomap). Set edge lengths equal to d_X(i, j).

Step 2. Compute shortest paths: Initialize d_G(i, j) = d_X(i, j) if i, j are linked by an edge; d_G(i, j) = ∞ otherwise. Then for each value of k = 1, 2, …, N in turn, replace all entries d_G(i, j) by min{d_G(i, j), d_G(i, k) + d_G(k, j)}. The matrix of final values D_G = {d_G(i, j)} will contain the shortest path distances between all pairs of points in G (16, 19).

Step 3. Construct d-dimensional embedding: Let λ_p be the p-th eigenvalue (in decreasing order) of the matrix τ(D_G) (17), and v_p^i be the i-th component of the p-th eigenvector. Then set the p-th component of the d-dimensional coordinate vector y_i equal to √λ_p · v_p^i.

Fig. 4. Interpolations along straight lines in the Isomap coordinate space (analogous to the blue line in Fig. 3C) implement perceptually natural but highly nonlinear "morphs" of the corresponding high-dimensional observations (43) by transforming them approximately along geodesic paths (analogous to the solid curve in Fig. 3A). (A) Interpolations in a three-dimensional embedding of face images (Fig. 1A). (B) Interpolations in a four-dimensional embedding of hand images (20) appear as natural hand movements when viewed in quick succession, even though no such motions occurred in the observed data. (C) Interpolations in a six-dimensional embedding of handwritten "2"s (Fig. 1B) preserve continuity not only in the visual features of loop and arch articulation, but also in the implied pen trajectories, which are the true degrees of freedom underlying those appearances.


Page 21

Isomap

[Tenenbaum, Silva, Langford. Science 2000]

tion to geodesic distance. For faraway points, geodesic distance can be approximated by adding up a sequence of "short hops" between neighboring points. These approximations are computed efficiently by finding shortest paths in a graph with edges connecting neighboring data points.

The complete isometric feature mapping, or Isomap, algorithm has three steps, which are detailed in Table 1. The first step determines which points are neighbors on the manifold M, based on the distances d_X(i, j) between pairs of points i, j in the input space X. Two simple methods are to connect each point to all points within some fixed radius ε, or to all of its K nearest neighbors (15). These neighborhood relations are represented as a weighted graph G over the data points, with edges of weight d_X(i, j) between neighboring points (Fig. 3B).

In its second step, Isomap estimates the geodesic distances d_M(i, j) between all pairs of points on the manifold M by computing their shortest path distances d_G(i, j) in the graph G. One simple algorithm (16) for finding shortest paths is given in Table 1.

The final step applies classical MDS to the matrix of graph distances D_G = {d_G(i, j)}, constructing an embedding of the data in a d-dimensional Euclidean space Y that best preserves the manifold's estimated intrinsic geometry (Fig. 3C). The coordinate vectors y_i for points in Y are chosen to minimize the cost function

E = || τ(D_G) − τ(D_Y) ||_L2     (1)

where D_Y denotes the matrix of Euclidean distances {d_Y(i, j) = ||y_i − y_j||} and ||A||_L2 the L2 matrix norm √(∑_{i,j} A_ij^2). The τ operator

Fig. 1. (A) A canonical dimensionality reduction problem from visual perception. The input consists of a sequence of 4096-dimensional vectors, representing the brightness values of 64 pixel by 64 pixel images of a face rendered with different poses and lighting directions. Applied to N = 698 raw images, Isomap (K = 6) learns a three-dimensional embedding of the data's intrinsic geometric structure. A two-dimensional projection is shown, with a sample of the original input images (red circles) superimposed on all the data points (blue) and horizontal sliders (under the images) representing the third dimension. Each coordinate axis of the embedding correlates highly with one degree of freedom underlying the original data: left-right pose (x axis, R = 0.99), up-down pose (y axis, R = 0.90), and lighting direction (slider position, R = 0.92). The input-space distances d_X(i, j) given to Isomap were Euclidean distances between the 4096-dimensional image vectors. (B) Isomap applied to N = 1000 handwritten "2"s from the MNIST database (40). The two most significant dimensions in the Isomap embedding, shown here, articulate the major features of the "2": bottom loop (x axis) and top arch (y axis). Input-space distances d_X(i, j) were measured by tangent distance, a metric designed to capture the invariances relevant in handwriting recognition (41). Here we used ε-Isomap (with ε = 4.2) because we did not expect a constant dimensionality to hold over the whole data set; consistent with this, Isomap finds several tendrils projecting from the higher dimensional mass of data and representing successive exaggerations of an extra stroke or ornament in the digit.


Page 22

Isomap

[Tenenbaum, Silva, Langford. Science 2000]


Page 23

Isomap


[Plot from Fig. 2: residual variance (y axis) vs. number of dimensions (x axis), for the face images and Swiss roll data sets, comparing PCA and Isomap]

Page 24

What you need to know

• Dimensionality reduction
  – why and when it's important
• Principal component analysis
  – minimizing reconstruction error
  – relationship to covariance matrix and eigenvectors
  – using SVD
• Non-linear dimensionality reduction