Cluster Analysis 2


  • Clustering II

    *

  • Hierarchical Clustering: produces a set of nested clusters organized as a hierarchical tree

    Can be visualized as a dendrogram, a tree-like diagram that records the sequence of merges or splits

    *

  • Strengths of Hierarchical Clustering: no assumption on the number of clusters; any desired number of clusters can be obtained by cutting the dendrogram at the proper level

    Hierarchical clusterings may correspond to meaningful taxonomies, e.g., in the biological sciences (phylogeny reconstruction) or on the web (product catalogs)

    *

  • Hierarchical Clustering: two main types

    Agglomerative: start with the points as individual clusters; at each step, merge the closest pair of clusters until only one cluster (or k clusters) is left

    Divisive: start with one, all-inclusive cluster; at each step, split a cluster until each cluster contains a single point (or there are k clusters)

    Traditional hierarchical algorithms use a similarity or distance matrix and merge or split one cluster at a time

    *

  • Complexity of hierarchical clustering: the distance matrix is used for deciding which clusters to merge/split

    At least quadratic in the number of data points

    Not usable for large datasets

  • Agglomerative clustering algorithm: the most popular hierarchical clustering technique

    Basic algorithm (see the sketch after this slide):

    Compute the distance matrix between the input data points
    Let each data point be a cluster
    Repeat: merge the two closest clusters, then update the distance matrix
    Until only a single cluster remains

    The key operation is the computation of the distance between two clusters; different definitions of this distance lead to different algorithms

    *
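    A minimal from-scratch sketch of the basic algorithm above, assuming NumPy and Euclidean distances; the function name, the single/complete-link choice via the linkage argument, and the toy data are illustrative, not from the slides:

```python
import numpy as np

def agglomerative(points, k=1, linkage=min):
    """Naive agglomerative clustering driven by a precomputed distance matrix.

    points:  (n, d) array of input points
    k:       stop when this many clusters remain
    linkage: min -> single link, max -> complete link
    """
    n = len(points)
    # Compute the distance matrix between the input data points.
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    # Let each data point be a cluster.
    clusters = [[i] for i in range(n)]
    # Repeat: merge the two closest clusters until k clusters remain.
    while len(clusters) > k:
        best = (np.inf, None, None)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Cluster-to-cluster distance = linkage over all point pairs.
                d = linkage(dist[i, j] for i in clusters[a] for j in clusters[b])
                if d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        # Merge the two closest clusters; cluster-level distances are simply
        # recomputed on the next pass instead of updating a cluster-level matrix.
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return clusters

# Two well-separated groups of 2-D points.
X = np.array([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]], float)
print(agglomerative(X, k=2))   # -> [[0, 1, 2], [3, 4, 5]]
```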

  • Input / initial setting: start with clusters of individual points and a distance/proximity matrix

    [Figure: points p1, ..., p12 and the initial distance/proximity matrix]

    *

  • Intermediate state: after some merging steps, we have some clusters

    [Figure: clusters C1, ..., C5 and the current distance/proximity matrix over points p1, ..., p12]

    *

  • Intermediate state: merge the two closest clusters (C2 and C5) and update the distance matrix

    [Figure: clusters C1, ..., C5, with C2 and C5 about to be merged; distance/proximity matrix over points p1, ..., p12]

    *

  • After merging: how do we update the distance matrix?

    [Figure: clusters C1, C3, C4 and the merged cluster C2 U C5; the matrix entries involving C2 U C5 are marked "?" because they must be recomputed]

    *

  • Distance between two clusters: each cluster is a set of points

    How do we define the distance between two sets of points? There are lots of alternatives; it is not an easy task

  • Distance between two clusters: the single-link distance between clusters Ci and Cj is the minimum distance between any object in Ci and any object in Cj

    The distance is defined by the two most similar objects
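    In symbols, the definition above reads:

        D_{single}(C_i, C_j) = \min_{x \in C_i,\; y \in C_j} d(x, y)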

  • Single-link clustering: example. The merge is determined by one pair of points, i.e., by one link in the proximity graph.

    Similarity matrix for the five items I1-I5:

          I1    I2    I3    I4    I5
    I1   1.00  0.90  0.10  0.65  0.20
    I2   0.90  1.00  0.70  0.60  0.50
    I3   0.10  0.70  1.00  0.40  0.30
    I4   0.65  0.60  0.40  1.00  0.80
    I5   0.20  0.50  0.30  0.80  1.00

    *
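    A short sketch of running this example in code, assuming SciPy; since the slide gives similarities, they are first converted to distances with d = 1 - s (a common but not the only choice):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Similarity matrix from the slide (items I1..I5).
S = np.array([
    [1.00, 0.90, 0.10, 0.65, 0.20],
    [0.90, 1.00, 0.70, 0.60, 0.50],
    [0.10, 0.70, 1.00, 0.40, 0.30],
    [0.65, 0.60, 0.40, 1.00, 0.80],
    [0.20, 0.50, 0.30, 0.80, 1.00],
])

D = 1.0 - S                                    # similarities -> distances
Z = linkage(squareform(D), method='single')    # single-link (MIN) merges
print(Z)                                       # each row: the two merged clusters and their distance
print(fcluster(Z, t=2, criterion='maxclust'))  # cut the dendrogram into 2 clusters
```

    The same call with method='complete' or method='average' corresponds to the complete-link and average-link examples on the later slides.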

  • Single-link clustering: example

    [Figure: nested clusters and the corresponding dendrogram]

    *

  • Strengths of single-link clustering: can handle non-elliptical shapes

    [Figure: original points and the resulting single-link clusters]

    *

  • Limitations of single-link clustering: sensitive to noise and outliers; it produces long, elongated clusters

    [Figure: original points and the resulting single-link clusters]

    *

  • Distance between two clusters: the complete-link distance between clusters Ci and Cj is the maximum distance between any object in Ci and any object in Cj

    The distance is defined by the two most dissimilar objects
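    In symbols:

        D_{complete}(C_i, C_j) = \max_{x \in C_i,\; y \in C_j} d(x, y)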

  • Complete-link clustering: example. The distance between clusters is determined by the two most distant points in the different clusters.

    (Same similarity matrix over I1-I5 as in the single-link example.)

    *

  • Complete-link clustering: example

    [Figure: nested clusters and the corresponding dendrogram]

    *

  • Strengths of complete-link clustering: more balanced clusters (with equal diameter); less susceptible to noise

    [Figure: original points and the resulting complete-link clusters]

    *

  • Limitations of complete-link clustering: tends to break large clusters; all clusters tend to have the same diameter, so small clusters are merged with larger ones

    [Figure: original points and the resulting complete-link clusters]

    *

  • Distance between two clusters: the group average distance between clusters Ci and Cj is the average distance between any object in Ci and any object in Cj
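    In symbols:

        D_{avg}(C_i, C_j) = \frac{1}{|C_i|\,|C_j|} \sum_{x \in C_i} \sum_{y \in C_j} d(x, y)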

  • Average-link clustering: example. The proximity of two clusters is the average of the pairwise proximities between points in the two clusters.

    (Same similarity matrix over I1-I5 as in the single-link example.)

    *

  • Average-link clustering: example

    [Figure: nested clusters and the corresponding dendrogram]

    *

  • Average-link clustering: discussion. A compromise between single and complete link.

    Strengths: less susceptible to noise and outliers

    Limitations: biased towards globular clusters

    *

  • Distance between two clusters: the centroid distance between clusters Ci and Cj is the distance between the centroid ri of Ci and the centroid rj of Cj
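    In symbols, with r_i denoting the centroid (mean) of C_i:

        D_{centroid}(C_i, C_j) = d(r_i, r_j), \qquad r_i = \frac{1}{|C_i|} \sum_{x \in C_i} x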

  • Distance between two clusters: Ward's distance between clusters Ci and Cj is the difference between the total within-cluster sum of squares for the two clusters separately and the within-cluster sum of squares resulting from merging the two clusters into cluster Cij

    ri: centroid of Ci; rj: centroid of Cj; rij: centroid of Cij
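    In symbols (using squared Euclidean distances, the usual convention for Ward's criterion):

        D_{Ward}(C_i, C_j) = \sum_{x \in C_{ij}} \lVert x - r_{ij} \rVert^2 \;-\; \Big( \sum_{x \in C_i} \lVert x - r_i \rVert^2 + \sum_{x \in C_j} \lVert x - r_j \rVert^2 \Big)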

  • Ward's distance for clusters: similar to group average and centroid distance

    Less susceptible to noise and outliers

    Biased towards globular clusters

    Hierarchical analogue of k-means; can be used to initialize k-means

    *

  • Hierarchical Clustering: Comparison

    [Figure: clusterings of the same points produced by MIN (single link), MAX (complete link), group average, and Ward's method]

    *

  • Hierarchical Clustering: time and space requirements. For a dataset X consisting of n points:

    O(n^2) space: it requires storing the distance matrix

    O(n^3) time in most cases: there are n steps, and at each step the size-n^2 distance matrix must be updated and searched; complexity can be reduced to O(n^2 log n) time for some approaches by using appropriate data structures

    *

  • Divisive hierarchical clustering: start with a single cluster composed of all data points

    Split this into components

    Continue recursively

    Monothetic divisive methods split clusters using one variable/dimension at a time

    Polythetic divisive methods make splits on the basis of all variables together (see the sketch after this slide)

    Any intercluster distance measure can be used

    Computationally intensive, less widely used than agglomerative methods
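    A minimal sketch of one polythetic divisive approach (repeatedly bisecting a cluster with 2-means), assuming scikit-learn; the slides do not prescribe a particular splitting procedure, so this is only an illustrative choice:

```python
import numpy as np
from sklearn.cluster import KMeans

def divisive(X, k):
    """Recursively split clusters with 2-means until k clusters remain."""
    clusters = [np.arange(len(X))]   # start with a single, all-inclusive cluster
    while len(clusters) < k:
        # Simple selection rule: split the largest remaining cluster.
        idx = max(range(len(clusters)), key=lambda i: len(clusters[i]))
        members = clusters.pop(idx)
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X[members])
        clusters.append(members[labels == 0])
        clusters.append(members[labels == 1])
    return clusters

X = np.array([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]], float)
print(divisive(X, k=2))
```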

  • Model-based clustering: assume the data are generated from k probability distributions

    Goal: find the distribution parameters

    Algorithm: Expectation Maximization (EM)

    Output: distribution parameters and a soft assignment of points to clusters

    *

  • Model-based clustering: assume k probability distributions with parameters (θ1, ..., θk)

    Given data X, compute (θ1, ..., θk) such that Pr(X | θ1, ..., θk) [likelihood] or ln Pr(X | θ1, ..., θk) [log-likelihood] is maximized

    Every point x ∈ X need not be generated by a single distribution; it can be generated by multiple distributions with some probability [soft clustering]

    *
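    Written out, assuming a standard mixture formulation with mixing weights \pi_j (the weights are not shown explicitly on the slide):

        \ln \Pr(X \mid \theta_1, \ldots, \theta_k) = \sum_{x \in X} \ln \sum_{j=1}^{k} \pi_j \, p(x \mid \theta_j)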

  • EM Algorithm: initialize k distribution parameters (θ1, ..., θk); each distribution parameter corresponds to a cluster center

    Iterate between two steps

    Expectation step: (probabilistically) assign points to clusters

    Maximization step: estimate the model parameters that maximize the likelihood for the given assignment of points

    *

  • EM Algorithm: initialize k cluster centers, then iterate between two steps

    Expectation step: assign points to clusters

    Maximization step: estimate model parameters

    *
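    A minimal sketch of these two steps for a mixture of unit-variance spherical Gaussians with equal mixing weights (simplifying assumptions, not from the slides), using NumPy only:

```python
import numpy as np

def em_gaussian(X, k, n_iter=100):
    """Tiny EM loop: soft assignments (E-step) and center re-estimation (M-step)."""
    rng = np.random.default_rng(0)
    # Initialize the k distribution parameters; here each parameter is a cluster center.
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Expectation step: (probabilistically) assign points to clusters.
        sq_dist = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)   # (n, k)
        resp = np.exp(-0.5 * sq_dist)
        resp /= resp.sum(axis=1, keepdims=True)        # soft assignment of each point
        # Maximization step: re-estimate the centers that maximize the likelihood
        # for the given (soft) assignment of points.
        centers = (resp.T @ X) / resp.sum(axis=0)[:, None]
    return centers, resp

X = np.array([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]], float)
centers, resp = em_gaussian(X, k=2)
print(centers)         # estimated cluster centers
print(resp.round(2))   # soft cluster memberships (one row per point)
```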
