Cluster Analysis
CS240B Lecture Notes, based on those by
© Tan, Steinbach, Kumar, Introduction to Data Mining, 4/18/2004
What is Cluster Analysis?
Finding groups of objects such that the objects in a group are similar (or related) to one another and different from (or unrelated to) the objects in other groups. A good clustering maximizes inter-cluster distances while minimizing intra-cluster distances.
Cluster Analysis: Many & Diverse Applications
Understanding
– Group related documents for browsing, group genes and proteins that have similar functionality, or group stocks with similar price fluctuations
Summarization
– Reduce the size of large data sets
Discovered Clusters → Industry Group
1. Applied-Matl-DOWN, Bay-Network-DOWN, 3-COM-DOWN, Cabletron-Sys-DOWN, CISCO-DOWN, HP-DOWN, DSC-Comm-DOWN, INTEL-DOWN, LSI-Logic-DOWN, Micron-Tech-DOWN, Texas-Inst-DOWN, Tellabs-Inc-DOWN, Natl-Semiconduct-DOWN, Oracl-DOWN, SGI-DOWN, Sun-DOWN → Technology1-DOWN
2. Apple-Comp-DOWN, Autodesk-DOWN, DEC-DOWN, ADV-Micro-Device-DOWN, Andrew-Corp-DOWN, Computer-Assoc-DOWN, Circuit-City-DOWN, Compaq-DOWN, EMC-Corp-DOWN, Gen-Inst-DOWN, Motorola-DOWN, Microsoft-DOWN, Scientific-Atl-DOWN → Technology2-DOWN
3. Fannie-Mae-DOWN, Fed-Home-Loan-DOWN, MBNA-Corp-DOWN, Morgan-Stanley-DOWN → Financial-DOWN
4. Baker-Hughes-UP, Dresser-Inds-UP, Halliburton-HLD-UP, Louisiana-Land-UP, Phillips-Petro-UP, Unocal-UP, Schlumberger-UP → Oil-UP
[Figure: Clustering precipitation in Australia]
Types of Clusterings
A clustering is a set of clusters
Important distinction between hierarchical and partitional sets of clusters
Partitional Clustering
– A division of data objects into non-overlapping subsets (clusters) such that each data object is in exactly one subset
Hierarchical Clustering
– A set of nested clusters organized as a hierarchical tree
Partitional Clustering
[Figure: Original Points vs. A Partitional Clustering]
Hierarchical Clustering
[Figure: Traditional Hierarchical Clustering with its Traditional Dendrogram, and Non-traditional Hierarchical Clustering with its Non-traditional Dendrogram, over points p1–p4]
Other Distinctions Between Sets of Clusters
Exclusive versus non-exclusive
– In non-exclusive clusterings, points may belong to multiple clusters
– Can represent multiple classes or 'border' points
Fuzzy versus non-fuzzy
– In fuzzy clustering, a point belongs to every cluster with some weight between 0 and 1
– The weights for each point must sum to 1
– Probabilistic clustering has similar characteristics
Partial versus complete
– In some cases, we only want to cluster some of the data
Heterogeneous versus homogeneous
– Clusters of widely different sizes, shapes, and densities
Clustering Algorithms
K-means and its variants
Hierarchical clustering
Density-based clustering
K-means Clustering
Partitional clustering approach
Each cluster is associated with a centroid (center point)
Each point is assigned to the cluster with the closest centroid
The number of clusters, K, must be specified
The basic algorithm is very simple; a minimal sketch follows
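As a concrete illustration, here is a minimal NumPy sketch of the basic algorithm; the random initialization, the empty-cluster handling, and the convergence test are implementation assumptions, not prescribed by the slide:

```python
import numpy as np

def kmeans(X, K, max_iters=100, seed=0):
    """Basic K-means: assign each point to the nearest centroid, then
    recompute each centroid as the mean of its points, until convergence."""
    rng = np.random.default_rng(seed)
    # Initial centroids chosen randomly from the data points
    centroids = X[rng.choice(len(X), size=K, replace=False)]
    for _ in range(max_iters):
        # Assignment step: nearest centroid by Euclidean distance
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: centroid = mean of the points assigned to it
        new_centroids = np.array([
            X[labels == k].mean(axis=0) if np.any(labels == k) else centroids[k]
            for k in range(K)
        ])
        if np.allclose(new_centroids, centroids):  # stop when centroids no longer move
            break
        centroids = new_centroids
    return centroids, labels
```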
K-means Clustering – Details
Initial centroids are often chosen randomly.
– Clusters produced vary from one run to another.
The centroid is (typically) the mean of the points in the cluster.
'Closeness' is measured by Euclidean distance, cosine similarity, correlation, etc.
K-means will converge for the common similarity measures mentioned above; most of the convergence happens in the first few iterations.
– Often the stopping condition is relaxed to 'until relatively few points change clusters'
Complexity is O(n * K * I * d)
– n = number of points, K = number of clusters, I = number of iterations, d = number of attributes
Two different K-means Clusterings
[Figure: Original Points, plus a Sub-optimal Clustering and an Optimal Clustering of the same data (x–y scatter plots)]
K-means Clustering – Example
Initial centroids: Case 1
[Figure: x–y scatter plot of the data with the Case 1 initial centroids]
Importance of Choosing Initial Centroids
[Figure: Iterations 1–6 of K-means from the Case 1 initial centroids (x–y scatter plots)]
K-means Clusterings: Initial Centers
Initial centers: Case 2
[Figure: x–y scatter plot of the data with a second, different set of initial centroids]
Importance of Choosing Initial Centroids …
[Figure: Iterations 1–5 of K-means from the second set of initial centroids (x–y scatter plots)]
Problems with Selecting Initial Points
If there are 2 'real' clusters, then (almost) any two initial centroids produce an optimal clustering, i.e., each centroid ends up representing one cluster. But if there are K > 2 'real' clusters, the chance of selecting one initial centroid from each of them becomes small as K grows.
– For example, if K = 10, then the probability is 10!/10^10 ≈ 0.00036 (a quick numeric check follows)
– Sometimes the initial centroids will readjust themselves in the 'right' way, and sometimes they don't
– An obvious limitation of random initial assignments (also, we normally do not know K)
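A quick numeric check of that figure, under the stated assumption of K equal-sized clusters with each initial centroid drawn from a uniformly random cluster:

```python
from math import factorial

def prob_one_centroid_per_cluster(K):
    """P(K random initial centroids land in K distinct clusters) = K!/K^K,
    assuming K equal-sized clusters."""
    return factorial(K) / K**K

print(prob_one_centroid_per_cluster(10))  # ~0.00036288
```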
Evaluating K-means Clusters
The most common measure is the Sum of Squared Errors (SSE)
– For each point, the error is its distance to the nearest cluster centroid
– To get the SSE, we square these errors and sum them:

$\mathrm{SSE} = \sum_{i=1}^{K} \sum_{x \in C_i} \mathrm{dist}^2(m_i, x)$

– Here x is a data point in cluster C_i and m_i is the representative point (centroid) of cluster C_i
– One easy way to reduce the SSE is to increase K, the number of clusters
– But in general, the fewer the clusters the better: a good clustering with smaller K can have a lower SSE than a poor clustering with higher K
How do we minimize both K and SSE? For a given K, we can try different sets of initial random points.
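A direct NumPy translation of the formula (a sketch; `X`, `labels`, and `centroids` are as in the earlier K-means sketch):

```python
import numpy as np

def sse(X, labels, centroids):
    """Sum of squared Euclidean distances from each point to its cluster centroid."""
    diffs = X - centroids[labels]   # offset of each point from its own centroid
    return np.sum(diffs ** 2)       # squared errors, summed over all points
```

Running K-means several times and keeping the clustering with the lowest SSE implements the 'different sets of initial random points' idea from the last bullet.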
10 Clusters Example (5 pairs)
Starting with two initial centroids in one cluster of each pair of clusters
[Figure: Iterations 1–4 of K-means on the 10-cluster data (x–y scatter plots)]
10 Clusters Example
Starting with some pairs of clusters having three initial centroids, while others have only one
[Figure: Iterations 1–4 of K-means on the 10-cluster data (x–y scatter plots)]
Solutions to Initial Centroids Problem
Multiple runs
– Helps, but probability is not on your side (see the sketch after this list)
Start with more than K initial centroids and then select K centroids from the most widely separated resulting clusters
Use hierarchical clustering on a small sample of the data to determine the initial centroids
Bisecting K-means
– Not as susceptible to initialization issues
Postprocessing
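For the multiple-runs strategy, scikit-learn's KMeans is a convenient shortcut: its n_init parameter reruns the algorithm from different random centroids and keeps the result with the lowest SSE (which it calls inertia). The dataset below is synthetic, for illustration only:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=10, random_state=42)  # illustrative data

# n_init=20: run K-means 20 times from different random initial centroids
# and keep the clustering with the lowest SSE
km = KMeans(n_clusters=10, init="random", n_init=20, random_state=0).fit(X)
print(km.inertia_)  # SSE of the best of the 20 runs
```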
Improvements
Eliminate outliers (preprocessing) and very small clusters that may represent outliers
Outliers are defined as points that are far from every centroid (they are bad because they exert an excessive, quadratic, effect on the SSE)
On the other hand, many outliers in the same vicinity could indicate the need to generate a new cluster
Split 'loose' clusters, i.e., clusters with relatively high SSE
Merge clusters that are 'close' and that have relatively low SSE
– Some clusters could turn out empty.
These steps can also be used during the clustering process; a per-cluster SSE sketch follows
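A minimal sketch of the per-cluster bookkeeping these heuristics rely on; the splitting threshold is an illustrative assumption:

```python
import numpy as np

def per_cluster_sse(X, labels, centroids):
    """SSE of each cluster separately: high-SSE clusters are 'loose'
    (candidates for splitting), low-SSE ones are candidates for merging."""
    return np.array([
        np.sum((X[labels == k] - centroids[k]) ** 2)
        for k in range(len(centroids))
    ])

# Illustrative use: flag clusters whose SSE is well above the average
# sse_k = per_cluster_sse(X, labels, centroids)
# loose = np.where(sse_k > 2 * sse_k.mean())[0]   # candidates to split
```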
Bisecting K-means
Bisecting K-means algorithm
– Variant of K-means that can produce a partitional or a hierarchical clustering; a sketch follows
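The slide does not spell the algorithm out, so the following is a hedged sketch of the standard formulation: repeatedly pick a cluster (here, the one with the highest SSE) and split it in two with basic K-means. It reuses the `kmeans` and `sse` sketches from above:

```python
import numpy as np

def bisecting_kmeans(X, K, trials=5):
    """Grow from one all-inclusive cluster to K clusters by repeatedly
    bisecting the cluster with the highest SSE (assumed to have >= 2 points)."""
    clusters = [X]
    while len(clusters) < K:
        # Pick the cluster with the highest SSE around its own mean
        worst = max(range(len(clusters)),
                    key=lambda i: np.sum((clusters[i] - clusters[i].mean(axis=0)) ** 2))
        target = clusters.pop(worst)
        # Try several 2-means bisections and keep the split with the lowest SSE
        best = None
        for seed in range(trials):
            cents, labels = kmeans(target, 2, seed=seed)
            split_sse = sse(target, labels, cents)
            if best is None or split_sse < best[0]:
                best = (split_sse, labels)
        labels = best[1]
        clusters += [target[labels == 0], target[labels == 1]]
    return clusters
```

Recording the sequence of splits, rather than just the final clusters, is what yields the hierarchical clustering the slide mentions.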
Bisecting K-means Example
[Figure: example run of bisecting K-means]
Limitations of K-means
K-means has problems when clusters have differing
– Sizes
– Densities
– Non-globular shapes
K-means also has problems when the data contains outliers.
Limitations of K-means: Differing Sizes
[Figure: Original Points vs. K-means (3 Clusters)]
Limitations of K-means: Differing Density
[Figure: Original Points vs. K-means (3 Clusters)]
Limitations of K-means: Non-globular Shapes
[Figure: Original Points vs. K-means (2 Clusters)]
Overcoming K-means Limitations
One solution is to use many clusters: this finds pieces of the natural clusters, but they must then be put back together.
[Figure: Original Points vs. K-means Clusters]
Overcoming K-means Limitations
[Figure: Original Points vs. K-means Clusters for the remaining limitation cases]
Hierarchical Clustering
Produces a set of nested clusters organized as a hierarchical tree
Can be visualized as a dendrogram
– A tree-like diagram that records the sequences of merges or splits
[Figure: a nested clustering of points 1–6 and the corresponding dendrogram (merge heights from 0 to 0.2)]
Strengths of Hierarchical Clustering
Do not have to assume any particular number of clusters
– Any desired number of clusters can be obtained by 'cutting' the dendrogram at the proper level
They may correspond to meaningful taxonomies
– Examples in the biological sciences (e.g., animal kingdom, phylogeny reconstruction, …)
Hierarchical Clustering
Two main types of hierarchical clustering
– Agglomerative: start with the points as individual clusters; at each step, merge the closest pair of clusters until only one cluster (or k clusters) is left
– Divisive: start with one, all-inclusive cluster; at each step, split a cluster until each cluster contains a single point (or there are k clusters)
Traditional hierarchical algorithms use a similarity or distance matrix
– Merge or split one cluster at a time
Agglomerative Clustering Algorithm
The more popular hierarchical clustering technique
Basic algorithm is straightforward:
1. Compute the proximity matrix
2. Let each data point be a cluster
3. Repeat
4.   Merge the two closest clusters
5.   Update the proximity matrix
6. Until only a single cluster remains
Key operation is the computation of the proximity of two clusters
– Different approaches to defining the distance between clusters distinguish the different algorithms; see the sketch below
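In practice this loop is a one-liner with SciPy; the method argument selects the inter-cluster proximity definition discussed on the next slide ('single' = MIN, 'complete' = MAX, 'average' = group average, 'ward' = Ward's method). The data here is illustrative:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster, dendrogram

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))        # illustrative 2-D data

# linkage() repeatedly merges the two closest clusters and records
# each merge (cluster ids, distance, size) in an (n-1) x 4 matrix
Z = linkage(X, method="average")    # or 'single', 'complete', 'ward', ...

labels = fcluster(Z, t=3, criterion="maxclust")  # cut the tree into 3 clusters
# dendrogram(Z) draws the tree-like diagram of merges
```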
How to Define Inter-Cluster Similarity
[Figure: two clusters of points p1–p5 and their proximity matrix, asking 'Similarity?']
Possible definitions:
– MIN
– MAX
– Group Average
– Distance Between Centroids
– An objective function, e.g., Ward's Method
Cluster Similarity: Ward's Method
Similarity of two clusters is based on the increase in squared error when the two clusters are merged
– Similar to group average if the distance between points is the distance squared
Less susceptible to noise and outliers
Biased towards globular clusters
Hierarchical analogue of K-means
– Can be used to initialize K-means
Recent Hierarchical Clustering Methods
Major weaknesses of agglomerative clustering methods
– They do not scale well: time complexity of at least O(n²), where n is the total number of objects
– They can never undo what was done previously
Integration of hierarchical with distance-based clustering
– BIRCH (1996): uses a CF-tree and incrementally adjusts the quality of sub-clusters
– ROCK (1999): clusters categorical data by neighbor and link analysis
– CHAMELEON (1999): hierarchical clustering using dynamic modeling
Cluster Validity
For supervised classification we have a variety of measures to evaluate how good our model is
– Accuracy, precision, recall
For cluster analysis, the analogous question is: how do we evaluate the 'goodness' of the resulting clusters?
But 'clusters are in the eye of the beholder'!
Using Similarity Matrix for Cluster Validation
Order the similarity matrix with respect to cluster labels and inspect it visually.
[Figure: well-separated x–y data and its points-by-points similarity matrix (scale 0–1), showing crisp diagonal blocks]
Using Similarity Matrix for Cluster Validation
Clusters in random data are not so crisp
[Figure: K-means on random x–y data and the resulting similarity matrix (scale 0–1); the diagonal blocks are much fuzzier]
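A sketch of this visual check, assuming a matplotlib display and taking 1 minus the normalized Euclidean distance as the similarity:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import pdist, squareform

def plot_sorted_similarity(X, labels):
    """Order points by cluster label and show the similarity matrix as a heatmap."""
    order = np.argsort(labels)            # group points cluster by cluster
    D = squareform(pdist(X[order]))       # pairwise Euclidean distances
    S = 1 - D / D.max()                   # similarities in [0, 1]
    plt.imshow(S)                         # crisp diagonal blocks = good clusters
    plt.colorbar(label="Similarity")
    plt.xlabel("Points"); plt.ylabel("Points")
    plt.show()
```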
K-means on Data Streams
The stream is partitioned into windows
Initially: for each new point P, a new centroid is created with probability d/D, where D is proportional to the size of the domain and d is the actual distance of P to the closest centroid [Callaghan '02]
Later windows can use the centroids of the previous window, improved by
– Eliminating empty clusters
– Splitting large, sparse clusters
– Merging large clusters
Large changes in cluster position/population denote concept changes.
Maintenance over sliding windows, or window panes, is harder. A sketch of the centroid-creation rule follows.
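A minimal sketch of the probabilistic centroid-creation rule above, for one window of the stream; the scale constant D and its tuning are assumptions, since the slide only says D is proportional to the size of the domain:

```python
import numpy as np

def process_window(points, centroids, D, rng=None):
    """For each new point P, create a new centroid with probability d/D,
    where d is P's distance to the closest existing centroid."""
    rng = rng or np.random.default_rng(0)
    for p in points:
        if not centroids:
            centroids.append(p)           # the first point seeds a centroid
            continue
        d = min(np.linalg.norm(p - c) for c in centroids)
        if rng.random() < d / D:          # distant points tend to start new clusters
            centroids.append(p)
    return centroids
```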