Cluster analysis

Transcript
Page 1: Cluster analysis

Lecture-41

What is Cluster Analysis?

Page 2: Cluster analysis

General Applications of Clustering

Pattern Recognition

Spatial Data Analysis: create thematic maps in GIS by clustering feature spaces; detect spatial clusters and explain them in spatial data mining

Image Processing

Economic Science (market research)

WWW: document classification; clustering Weblog data to discover groups of similar access patterns

Page 3: Cluster analysis

Examples of Clustering Applications

Marketing: help marketers discover distinct groups in their customer bases, and then use this knowledge to develop targeted marketing programs

Land use: identification of areas of similar land use in an earth observation database

Insurance: identifying groups of motor insurance policy holders with a high average claim cost

City planning: identifying groups of houses according to their house type, value, and geographical location

Earthquake studies: observed earthquake epicenters should be clustered along continent faults

Page 4: Cluster analysis

What Is Good Clustering?

A good clustering method will produce high-quality clusters with high intra-class similarity and low inter-class similarity.

The quality of a clustering result depends on both the similarity measure used by the method and its implementation.

The quality of a clustering method is also measured by its ability to discover some or all of the hidden patterns.

Page 5: Cluster analysis

Requirements of Clustering in Data Mining

Scalability

Ability to deal with different types of attributes

Discovery of clusters with arbitrary shape

Minimal requirements for domain knowledge to determine input parameters

Able to deal with noise and outliers

Insensitive to order of input records

High dimensionality

Incorporation of user-specified constraints

Interpretability and usability

Page 6: Cluster analysis

Lecture-42

Types of Data in Cluster Analysis

Page 7: Cluster analysis

Data Structures

Data matrix (two modes):

$$
\begin{bmatrix}
x_{11} & \cdots & x_{1f} & \cdots & x_{1p} \\
\vdots &        & \vdots &        & \vdots \\
x_{i1} & \cdots & x_{if} & \cdots & x_{ip} \\
\vdots &        & \vdots &        & \vdots \\
x_{n1} & \cdots & x_{nf} & \cdots & x_{np}
\end{bmatrix}
$$

Dissimilarity matrix (one mode):

$$
\begin{bmatrix}
0 \\
d(2,1) & 0 \\
d(3,1) & d(3,2) & 0 \\
\vdots & \vdots & \vdots & \ddots \\
d(n,1) & d(n,2) & \cdots & \cdots & 0
\end{bmatrix}
$$

Page 8: Cluster analysis

Measure the Quality of Clustering

Dissimilarity/similarity metric: similarity is expressed in terms of a distance function d(i, j), which is typically a metric.

There is a separate "quality" function that measures the "goodness" of a cluster.

The definitions of distance functions are usually very different for interval-scaled, boolean, categorical, ordinal and ratio variables.

Weights should be associated with different variables based on applications and data semantics.

It is hard to define "similar enough" or "good enough"; the answer is typically highly subjective.

Page 9: Cluster analysis

Types of data in clustering analysis

Interval-scaled variables

Binary variables

Categorical, ordinal, and ratio-scaled variables

Variables of mixed types

Page 10: Cluster analysis

Interval-valued variables

Standardize the data:

Calculate the mean absolute deviation:

$$s_f = \frac{1}{n}\left(|x_{1f} - m_f| + |x_{2f} - m_f| + \cdots + |x_{nf} - m_f|\right)$$

where

$$m_f = \frac{1}{n}\left(x_{1f} + x_{2f} + \cdots + x_{nf}\right)$$

Calculate the standardized measurement (z-score):

$$z_{if} = \frac{x_{if} - m_f}{s_f}$$

Using the mean absolute deviation is more robust than using the standard deviation.
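A minimal Python sketch of this standardization for one feature column (the function name and example values are illustrative, not from the lecture):

```python
def standardize(values):
    """Standardize one feature using the mean absolute deviation."""
    n = len(values)
    m_f = sum(values) / n                          # mean of the feature f
    s_f = sum(abs(x - m_f) for x in values) / n    # mean absolute deviation
    return [(x - m_f) / s_f for x in values]       # z-scores

print(standardize([2.0, 4.0, 4.0, 6.0]))  # [-2.0, 0.0, 0.0, 2.0]
```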


Page 11: Cluster analysis

Similarity and Dissimilarity Between Objects

Distances are normally used to measure the similarity or dissimilarity between two data objects.

Some popular ones include the Minkowski distance:

$$d(i,j) = \left(|x_{i1} - x_{j1}|^q + |x_{i2} - x_{j2}|^q + \cdots + |x_{ip} - x_{jp}|^q\right)^{1/q}$$

where i = (x_{i1}, x_{i2}, ..., x_{ip}) and j = (x_{j1}, x_{j2}, ..., x_{jp}) are two p-dimensional data objects, and q is a positive integer.

If q = 1, d is the Manhattan distance:

$$d(i,j) = |x_{i1} - x_{j1}| + |x_{i2} - x_{j2}| + \cdots + |x_{ip} - x_{jp}|$$
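A small Python sketch of the Minkowski distance (q = 1 gives the Manhattan distance, q = 2 the Euclidean distance); the function name and sample points are illustrative:

```python
def minkowski(x, y, q=2):
    """Minkowski distance between two p-dimensional points x and y."""
    return sum(abs(a - b) ** q for a, b in zip(x, y)) ** (1.0 / q)

i = (1.0, 2.0, 3.0)
j = (4.0, 6.0, 3.0)
print(minkowski(i, j, q=1))  # Manhattan distance: 7.0
print(minkowski(i, j, q=2))  # Euclidean distance: 5.0
```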


Page 12: Cluster analysis

Similarity and Dissimilarity Between Objects

If q = 2, d is the Euclidean distance:

$$d(i,j) = \sqrt{|x_{i1} - x_{j1}|^2 + |x_{i2} - x_{j2}|^2 + \cdots + |x_{ip} - x_{jp}|^2}$$

Properties:

d(i,j) >= 0

d(i,i) = 0

d(i,j) = d(j,i)

d(i,j) <= d(i,k) + d(k,j)

One can also use a weighted distance, the parametric Pearson product-moment correlation, or other dissimilarity measures.

Page 13: Cluster analysis

Binary Variables

A contingency table for binary data (rows: object i, columns: object j):

              1      0     sum
        1     a      b     a+b
        0     c      d     c+d
      sum    a+c    b+d     p

Simple matching coefficient (invariant if the binary variable is symmetric):

$$d(i,j) = \frac{b + c}{a + b + c + d}$$

Jaccard coefficient (noninvariant if the binary variable is asymmetric):

$$d(i,j) = \frac{b + c}{a + b + c}$$
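A brief Python sketch of both coefficients for two 0/1 vectors (the function name is illustrative):

```python
def binary_dissimilarity(x, y, symmetric=True):
    """Simple matching (symmetric) or Jaccard (asymmetric) dissimilarity for 0/1 vectors."""
    a = sum(1 for u, v in zip(x, y) if u == 1 and v == 1)  # 1-1 matches
    b = sum(1 for u, v in zip(x, y) if u == 1 and v == 0)
    c = sum(1 for u, v in zip(x, y) if u == 0 and v == 1)
    d = sum(1 for u, v in zip(x, y) if u == 0 and v == 0)  # 0-0 matches
    if symmetric:
        return (b + c) / (a + b + c + d)   # simple matching coefficient
    return (b + c) / (a + b + c)           # Jaccard coefficient
```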


Page 14: Cluster analysis

Dissimilarity Between Binary Variables

Example:

Gender is a symmetric attribute; the remaining attributes are asymmetric binary.

Let the values Y and P be set to 1, and the value N be set to 0.

Name  Gender  Fever  Cough  Test-1  Test-2  Test-3  Test-4
Jack    M       Y      N      P       N       N       N
Mary    F       Y      N      P       N       P       N
Jim     M       Y      P      N       N       N       N

d(jack, mary) = (0 + 1) / (2 + 0 + 1) = 0.33

d(jack, jim) = (1 + 1) / (1 + 1 + 1) = 0.67

d(jim, mary) = (1 + 2) / (1 + 1 + 2) = 0.75
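Reusing the binary_dissimilarity sketch above on the asymmetric attributes (Y and P coded as 1, N as 0) reproduces these values:

```python
jack = [1, 0, 1, 0, 0, 0]   # Fever, Cough, Test-1 ... Test-4
mary = [1, 0, 1, 0, 1, 0]
jim  = [1, 1, 0, 0, 0, 0]

for name_a, a, name_b, b in [("jack", jack, "mary", mary),
                             ("jack", jack, "jim", jim),
                             ("jim", jim, "mary", mary)]:
    print(name_a, name_b, round(binary_dissimilarity(a, b, symmetric=False), 2))
# jack mary 0.33,  jack jim 0.67,  jim mary 0.75
```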


Page 15: Cluster analysis

Categorical Variables

A generalization of the binary variable in that it can take more than two states, e.g., red, yellow, blue, green.

Method 1: simple matching. With m the number of matches and p the total number of variables:

$$d(i,j) = \frac{p - m}{p}$$

Method 2: use a large number of binary variables, creating a new binary variable for each of the M nominal states.
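A quick Python sketch of the simple-matching dissimilarity for nominal attributes (names and values are illustrative):

```python
def nominal_dissimilarity(x, y):
    """d(i, j) = (p - m) / p for two tuples of categorical values."""
    p = len(x)                                   # total number of variables
    m = sum(1 for a, b in zip(x, y) if a == b)   # number of matches
    return (p - m) / p

print(nominal_dissimilarity(("red", "small", "round"),
                            ("red", "large", "round")))  # 0.333...
```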


Page 16: Cluster analysis

Ordinal Variables

An ordinal variable can be discrete or continuous; order is important, e.g., rank.

Can be treated like interval-scaled variables:

Replace x_{if} by its rank r_{if} in {1, ..., M_f}.

Map the range of each variable onto [0, 1] by replacing the i-th object in the f-th variable by

$$z_{if} = \frac{r_{if} - 1}{M_f - 1}$$

Compute the dissimilarity using methods for interval-scaled variables.
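A short Python sketch of this rank-based mapping onto [0, 1], assuming the ordered states are given from lowest to highest (names and states are illustrative):

```python
def ordinal_to_interval(value, ordered_states):
    """Map an ordinal value to [0, 1] via z_if = (r_if - 1) / (M_f - 1)."""
    r_if = ordered_states.index(value) + 1   # rank, 1-based
    M_f = len(ordered_states)                # number of ordered states
    return (r_if - 1) / (M_f - 1)

states = ["fair", "good", "excellent"]
print([ordinal_to_interval(v, states) for v in states])  # [0.0, 0.5, 1.0]
```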


Page 17: Cluster analysis

Ratio-Scaled Variables

Ratio-scaled variable: a positive measurement on a nonlinear scale, approximately at exponential scale, such as Ae^{Bt} or Ae^{-Bt}.

Methods:

Treat them like interval-scaled variables.

Apply a logarithmic transformation: y_{if} = log(x_{if}).

Treat them as continuous ordinal data and treat their rank as interval-scaled.

Page 18: Cluster analysis

Variables of Mixed Types

A database may contain all six types of variables: symmetric binary, asymmetric binary, nominal, ordinal, interval and ratio.

One may use a weighted formula to combine their effects:

$$d(i,j) = \frac{\sum_{f=1}^{p} \delta_{ij}^{(f)}\, d_{ij}^{(f)}}{\sum_{f=1}^{p} \delta_{ij}^{(f)}}$$

If f is binary or nominal: d_{ij}^{(f)} = 0 if x_{if} = x_{jf}, otherwise d_{ij}^{(f)} = 1.

If f is interval-based: use the normalized distance.

If f is ordinal or ratio-scaled: compute the ranks r_{if}, set

$$z_{if} = \frac{r_{if} - 1}{M_f - 1}$$

and treat z_{if} as interval-scaled.
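A compact Python sketch of this weighted combination, assuming the per-variable dissimilarities d_ij^(f) and indicators delta_ij^(f) have already been computed (names and values are illustrative):

```python
def mixed_dissimilarity(d_f, delta_f):
    """d(i, j) = sum(delta_f * d_f) / sum(delta_f) over all variables f."""
    num = sum(delta * d for delta, d in zip(delta_f, d_f))
    den = sum(delta_f)
    return num / den if den else 0.0

# e.g. three variables: nominal mismatch (1), normalized interval distance, ordinal distance
print(mixed_dissimilarity(d_f=[1.0, 0.4, 0.5], delta_f=[1, 1, 1]))  # 0.633...
```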


Page 19: Cluster analysis

Lecture-43

A Categorization of Major Clustering Methods

Page 20: Cluster analysis

Major Clustering Approaches

Partitioning algorithms: construct various partitions and then evaluate them by some criterion.

Hierarchical algorithms: create a hierarchical decomposition of the set of data (or objects) using some criterion.

Density-based: based on connectivity and density functions.

Grid-based: based on a multiple-level granularity structure.

Model-based: a model is hypothesized for each of the clusters, and the idea is to find the best fit of the data to the given model.

Page 21: Cluster analysis

Lecture-44

Partitioning Methods

Page 22: Cluster analysis

Partitioning Algorithms: Basic Concept

Partitioning method: construct a partition of a database D of n objects into a set of k clusters.

Given a k, find a partition of k clusters that optimizes the chosen partitioning criterion:

Global optimum: exhaustively enumerate all partitions.

Heuristic methods: the k-means and k-medoids algorithms.

k-means: each cluster is represented by the center of the cluster.

k-medoids or PAM (Partition Around Medoids): each cluster is represented by one of the objects in the cluster.

Page 23: Cluster analysis

The K-Means Clustering Method

Given k, the k-means algorithm is implemented in four steps:

1. Partition objects into k nonempty subsets.
2. Compute seed points as the centroids of the clusters of the current partition. The centroid is the center (mean point) of the cluster.
3. Assign each object to the cluster with the nearest seed point.
4. Go back to step 2; stop when there are no more new assignments.
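A minimal, illustrative Python sketch of these steps on 2-D points (it seeds directly with k random objects rather than a random initial partition; point values are made up):

```python
import math
import random

def kmeans(points, k, max_iter=100):
    """Basic k-means: returns final centroids and cluster assignments."""
    centroids = random.sample(points, k)            # initial seed points
    assignment = None
    for _ in range(max_iter):
        # assign each object to the cluster with the nearest seed point
        new_assignment = [min(range(k), key=lambda c: math.dist(p, centroids[c]))
                          for p in points]
        if new_assignment == assignment:             # stop when nothing changes
            break
        assignment = new_assignment
        # recompute each centroid as the mean point of its cluster
        for c in range(k):
            members = [p for p, a in zip(points, assignment) if a == c]
            if members:
                centroids[c] = tuple(sum(v) / len(members) for v in zip(*members))
    return centroids, assignment

pts = [(1, 1), (1.5, 2), (3, 4), (5, 7), (3.5, 5), (4.5, 5), (3.5, 4.5)]
print(kmeans(pts, k=2))
```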


Page 24: Cluster analysis

The K-Means Clustering Method

Example (figure: scatter plots on 0-10 axes illustrating successive iterations of k-means)

Page 25: Cluster analysis

The K-Means Method

Strength:

Relatively efficient: O(tkn), where n is the number of objects, k is the number of clusters, and t is the number of iterations. Normally k, t << n.

Often terminates at a local optimum. The global optimum may be found using techniques such as deterministic annealing and genetic algorithms.

Weakness:

Applicable only when the mean is defined; what about categorical data?

Need to specify k, the number of clusters, in advance.

Unable to handle noisy data and outliers.

Not suitable for discovering clusters with non-convex shapes.

Page 26: Cluster analysis

Variations of the K-Means Method

A few variants of k-means differ in:

Selection of the initial k means

Dissimilarity calculations

Strategies to calculate cluster means

Handling categorical data: k-modes

Replacing means of clusters with modes

Using new dissimilarity measures to deal with categorical objects

Using a frequency-based method to update modes of clusters

A mixture of categorical and numerical data: the k-prototype method

Page 27: Cluster analysis

The K-Medoids Clustering Method

Find representative objects, called medoids, in clusters.

PAM (Partitioning Around Medoids, 1987): starts from an initial set of medoids and iteratively replaces one of the medoids by one of the non-medoids if it improves the total distance of the resulting clustering.

PAM works effectively for small data sets, but does not scale well for large data sets.

CLARA (Kaufmann & Rousseeuw, 1990)

CLARANS (Ng & Han, 1994): randomized sampling

Focusing + spatial data structure (Ester et al., 1995)

Page 28: Cluster analysis

PAM (Partitioning Around Medoids) (1987)

PAM (Kaufman and Rousseeuw, 1987), built into S-Plus.

Uses real objects to represent the clusters:

1. Select k representative objects arbitrarily.
2. For each pair of a non-selected object h and a selected object i, calculate the total swapping cost TC_ih.
3. For each pair of i and h, if TC_ih < 0, i is replaced by h; then assign each non-selected object to the most similar representative object.
4. Repeat steps 2-3 until there is no change.

Page 29: Cluster analysis

PAM Clustering: Total Swapping Cost TC_ih = Σ_j C_jih

(figure: four scatter plots on 0-10 axes showing an object j, the current medoid i, the candidate medoid h, and another medoid t, one per case)

Case 1: C_jih = 0
Case 2: C_jih = d(j, h) - d(j, i)
Case 3: C_jih = d(j, t) - d(j, i)
Case 4: C_jih = d(j, h) - d(j, t)
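A small Python sketch of the swapping cost for one candidate swap. It is an illustrative reconstruction, not the exact PAM pseudocode: each non-selected object j contributes the change in its distance to its closest medoid, which is exactly its C_jih in the cases above.

```python
import math

def total_swap_cost(points, medoids, i, h, d=math.dist):
    """TC_ih: change in total cost if medoid i is replaced by non-medoid h."""
    new_medoids = [h if m == i else m for m in medoids]
    cost = 0.0
    for j in points:
        if j in medoids or j == h:                    # j ranges over non-selected objects
            continue
        before = min(d(j, m) for m in medoids)        # distance to closest current medoid
        after = min(d(j, m) for m in new_medoids)     # distance after the swap
        cost += after - before                        # this is C_jih for object j
    return cost
```

If total_swap_cost(...) < 0, the swap improves the clustering and i is replaced by h.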

Page 30: Cluster analysis

CLARA (Clustering Large Applications) (1990)

CLARA (Kaufmann and Rousseeuw, 1990)

Built into statistical analysis packages, such as S+.

It draws multiple samples of the data set, applies PAM on each sample, and gives the best clustering as the output.

Strength: deals with larger data sets than PAM.

Weakness:

Efficiency depends on the sample size.

A good clustering based on samples will not necessarily represent a good clustering of the whole data set if the sample is biased.

Page 31: Cluster analysis

CLARANS ("Randomized" CLARA) (1994)

CLARANS (A Clustering Algorithm based on Randomized Search) draws a sample of neighbors dynamically.

The clustering process can be presented as searching a graph where every node is a potential solution, that is, a set of k medoids.

If a local optimum is found, CLARANS starts with a new randomly selected node in search of a new local optimum.

It is more efficient and scalable than both PAM and CLARA.

Focusing techniques and spatial access structures may further improve its performance.

Page 32: Cluster analysis

Lecture-45

Hierarchical Methods

Page 33: Cluster analysis

Hierarchical Clustering

Uses a distance matrix as the clustering criterion. This method does not require the number of clusters k as an input, but needs a termination condition.

(figure: objects a, b, c, d, e merged step by step; agglomerative clustering (AGNES) proceeds from step 0 to step 4, divisive clustering (DIANA) proceeds in the reverse direction from step 4 to step 0)

Page 34: Cluster analysis

AGNES (Agglomerative Nesting)

Introduced in Kaufmann and Rousseeuw (1990).

Implemented in statistical analysis packages, e.g., S-Plus.

Uses the single-link method and the dissimilarity matrix.

Merges nodes that have the least dissimilarity.

Goes on in a non-descending fashion.

Eventually all nodes belong to the same cluster.

(figure: three scatter plots on 0-10 axes showing clusters being merged step by step)

Page 35: Cluster analysis

A Dendrogram Shows How the Clusters are Merged Hierarchically

Decompose data objects into several levels of nested partitioning (a tree of clusters), called a dendrogram.

A clustering of the data objects is obtained by cutting the dendrogram at the desired level; each connected component then forms a cluster.

Page 36: Cluster analysis

DIANA (Divisive Analysis)

Introduced in Kaufmann and Rousseeuw (1990).

Implemented in statistical analysis packages, e.g., S-Plus.

Inverse order of AGNES.

Eventually each node forms a cluster on its own.

(figure: three scatter plots on 0-10 axes showing a cluster being split step by step)

Page 37: Cluster analysis

More on Hierarchical Clustering Methods

Major weaknesses of agglomerative clustering methods:

They do not scale well: time complexity of at least O(n^2), where n is the number of total objects.

They can never undo what was done previously.

Integration of hierarchical with distance-based clustering:

BIRCH (1996): uses a CF-tree and incrementally adjusts the quality of sub-clusters.

CURE (1998): selects well-scattered points from the cluster and then shrinks them towards the center of the cluster by a specified fraction.

CHAMELEON (1999): hierarchical clustering using dynamic modeling.

Page 38: Cluster analysis

BIRCH (1996)

BIRCH: Balanced Iterative Reducing and Clustering using Hierarchies, by Zhang, Ramakrishnan, Livny (SIGMOD'96).

Incrementally constructs a CF (Clustering Feature) tree, a hierarchical data structure for multiphase clustering:

Phase 1: scan the DB to build an initial in-memory CF tree (a multi-level compression of the data that tries to preserve the inherent clustering structure of the data).

Phase 2: use an arbitrary clustering algorithm to cluster the leaf nodes of the CF-tree.

Scales linearly: finds a good clustering with a single scan and improves the quality with a few additional scans.

Weakness: handles only numeric data, and is sensitive to the order of the data records.

Page 39: Cluster analysis

Clustering Feature Vector

Clustering Feature: CF = (N, LS, SS)

N: number of data points

LS: $\sum_{i=1}^{N} X_i$

SS: $\sum_{i=1}^{N} X_i^2$

Example: the points (3,4), (2,6), (4,5), (4,7), (3,8) give CF = (5, (16, 30), (54, 190)).

(figure: the five points plotted on 0-10 axes)
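A tiny Python sketch that computes this CF triple for 2-D points (the helper name is illustrative):

```python
def clustering_feature(points):
    """CF = (N, LS, SS) for a list of 2-D points."""
    n = len(points)
    ls = tuple(sum(p[d] for p in points) for d in range(2))        # linear sum
    ss = tuple(sum(p[d] ** 2 for p in points) for d in range(2))   # square sum
    return n, ls, ss

print(clustering_feature([(3, 4), (2, 6), (4, 5), (4, 7), (3, 8)]))
# (5, (16, 30), (54, 190))
```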


Page 40: Cluster analysis

CF Tree

(figure: a CF tree with branching factor B = 7 and leaf capacity L = 6; the root and non-leaf nodes hold entries CF_1, CF_2, ..., each with a pointer child_i to a subtree, and the leaf nodes hold CF entries chained together by prev/next pointers)

Page 41: Cluster analysis

CURE (Clustering Using REpresentatives)

CURE: proposed by Guha, Rastogi & Shim, 1998.

Stops the creation of a cluster hierarchy if a level consists of k clusters.

Uses multiple representative points to evaluate the distance between clusters; adjusts well to arbitrarily shaped clusters and avoids the single-link effect.

Page 42: Cluster analysis

Drawbacks of Distance-Based Methods

Drawbacks of square-error based clustering methods:

They consider only one point as representative of a cluster.

They are good only for convex-shaped clusters of similar size and density, and only if k can be reasonably estimated.

Page 43: Cluster analysis

CURE: The Algorithm

1. Draw a random sample s.
2. Partition the sample into p partitions of size s/p.
3. Partially cluster the partitions into s/pq clusters.
4. Eliminate outliers: by random sampling; if a cluster grows too slowly, eliminate it.
5. Cluster the partial clusters.
6. Label the data on disk.

Page 44: Cluster analysis

Data Partitioning and Clustering

s = 50, p = 2, s/p = 25, s/pq = 5

(figure: the sample partitioned into two halves, each partially clustered into 5 clusters)

Page 45: Cluster analysis

CURE: Shrinking Representative Points

Shrink the multiple representative points towards the gravity center by a specified fraction α.

Multiple representatives capture the shape of the cluster.

(figure: representative points of a cluster before and after shrinking towards the centroid)

Page 46: Cluster analysis

Clustering Categorical Data: ROCK

ROCK: RObust Clustering using linKs, by S. Guha, R. Rastogi, K. Shim (ICDE'99).

Uses links to measure similarity/proximity; not distance based.

Computational complexity: $O(n^2 + n\,m_m m_a + n^2 \log n)$

Basic ideas:

Similarity function and neighbors:

$$\mathrm{Sim}(T_1, T_2) = \frac{|T_1 \cap T_2|}{|T_1 \cup T_2|}$$

Let T1 = {1, 2, 3} and T2 = {3, 4, 5}. Then

$$\mathrm{Sim}(T_1, T_2) = \frac{|\{3\}|}{|\{1,2,3,4,5\}|} = \frac{1}{5} = 0.2$$
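A one-line Python check of this set-based similarity (the function name is illustrative):

```python
def rock_sim(t1, t2):
    """Set similarity used by ROCK: |T1 ∩ T2| / |T1 ∪ T2|."""
    return len(t1 & t2) / len(t1 | t2)

print(rock_sim({1, 2, 3}, {3, 4, 5}))  # 0.2
```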


Page 47: Cluster analysis

ROCK: Algorithm

Links: the number of common neighbours of the two points.

Algorithm:

1. Draw a random sample.
2. Cluster with links.
3. Label the data on disk.

Example:

{1,2,3}, {1,2,4}, {1,2,5}, {1,3,4}, {1,3,5}, {1,4,5}, {2,3,4}, {2,3,5}, {2,4,5}, {3,4,5}

link({1,2,3}, {1,2,4}) = 3

Page 48: Cluster analysis

CHAMELEON

CHAMELEON: hierarchical clustering using dynamic modeling, by G. Karypis, E. H. Han and V. Kumar, 1999.

Measures the similarity based on a dynamic model.

Two clusters are merged only if the interconnectivity and closeness (proximity) between the two clusters are high relative to the internal interconnectivity of the clusters and the closeness of items within the clusters.

A two-phase algorithm:

1. Use a graph-partitioning algorithm: cluster objects into a large number of relatively small sub-clusters.
2. Use an agglomerative hierarchical clustering algorithm: find the genuine clusters by repeatedly combining these sub-clusters.

Page 49: Cluster analysis

Overall Framework of CHAMELEON

Data Set → Construct Sparse Graph → Partition the Graph → Merge Partitions → Final Clusters

Page 50: Cluster analysis

Lecture-46

Density-Based Methods

Page 51: Cluster analysis

Density-Based Clustering Methods

Clustering based on density (a local cluster criterion), such as density-connected points.

Major features:

Discover clusters of arbitrary shape

Handle noise

One scan

Need density parameters as termination condition

Several methods: DBSCAN, OPTICS, DENCLUE, CLIQUE

Page 52: Cluster analysis

Density-Based Clustering

Two parameters:

Eps: maximum radius of the neighbourhood

MinPts: minimum number of points in an Eps-neighbourhood of that point

N_Eps(p) = {q belongs to D | dist(p, q) <= Eps}

Directly density-reachable: a point p is directly density-reachable from a point q wrt. Eps, MinPts if
1) p belongs to N_Eps(q)
2) core point condition: |N_Eps(q)| >= MinPts

(figure: p in the Eps-neighbourhood of q; MinPts = 5, Eps = 1 cm)

Page 53: Cluster analysis

Density-Based Clustering

Density-reachable: a point p is density-reachable from a point q wrt. Eps, MinPts if there is a chain of points p1, ..., pn, with p1 = q and pn = p, such that p_{i+1} is directly density-reachable from p_i.

Density-connected: a point p is density-connected to a point q wrt. Eps, MinPts if there is a point o such that both p and q are density-reachable from o wrt. Eps and MinPts.

(figure: a chain of points from q to p illustrating density-reachability, and a point o from which both p and q are density-reachable illustrating density-connectivity)

Page 54: Cluster analysis

DBSCAN: Density-Based Spatial Clustering of Applications with Noise

Relies on a density-based notion of cluster: a cluster is defined as a maximal set of density-connected points.

Discovers clusters of arbitrary shape in spatial databases with noise.

(figure: core, border, and outlier points; Eps = 1 cm, MinPts = 5)

Page 55: Cluster analysis

DBSCAN: The Algorithm

1. Arbitrarily select a point p.
2. Retrieve all points density-reachable from p wrt Eps and MinPts.
3. If p is a core point, a cluster is formed.
4. If p is a border point, no points are density-reachable from p, and DBSCAN visits the next point of the database.
5. Continue the process until all of the points have been processed.
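A compact, illustrative Python sketch of these steps (it uses a brute-force neighbourhood query rather than a spatial index, and labels noise as -1):

```python
import math

def dbscan(points, eps, min_pts):
    """Label each point with a cluster id, or -1 for noise."""
    labels = {p: None for p in points}
    cluster_id = 0
    def neighbours(p):
        return [q for q in points if math.dist(p, q) <= eps]
    for p in points:
        if labels[p] is not None:
            continue
        seeds = neighbours(p)
        if len(seeds) < min_pts:          # p is not a core point
            labels[p] = -1                # tentatively noise (may become a border point)
            continue
        labels[p] = cluster_id            # p is a core point: start a new cluster
        queue = [q for q in seeds if q != p]
        while queue:
            q = queue.pop()
            if labels[q] in (None, -1):
                labels[q] = cluster_id
                q_neighbours = neighbours(q)
                if len(q_neighbours) >= min_pts:   # expand only from core points
                    queue.extend(r for r in q_neighbours if labels[r] in (None, -1))
        cluster_id += 1
    return labels
```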


Page 56: Cluster analysis

OPTICS: A Cluster-Ordering Method

OPTICS: Ordering Points To Identify the Clustering Structure; Ankerst, Breunig, Kriegel, and Sander (SIGMOD'99).

Produces a special order of the database wrt its density-based clustering structure.

This cluster ordering contains information equivalent to the density-based clusterings corresponding to a broad range of parameter settings.

Good for both automatic and interactive cluster analysis, including finding the intrinsic clustering structure.

Can be represented graphically or using visualization techniques.

Page 57: Cluster analysis

OPTICS: Some Extensions from DBSCAN

Index-based: k = number of dimensions, N = 20, p = 75%, M = N(1 - p) = 5.

Complexity: O(kN^2)

Core distance and reachability distance:

reachability-distance(p, o) = max(core-distance(o), d(o, p))

(figure: MinPts = 5, ε = 3 cm; r(p1, o) = 2.8 cm, r(p2, o) = 4 cm)

Page 58: Cluster analysis

(figure: reachability plot, with the reachability-distance on the vertical axis against the cluster order of the objects; some values are undefined)

Page 59: Cluster analysis

DENCLUE: Using Density Functions

DENsity-based CLUstEring by Hinneburg & Keim (KDD'98).

Major features:

Solid mathematical foundation

Good for data sets with large amounts of noise

Allows a compact mathematical description of arbitrarily shaped clusters in high-dimensional data sets

Significantly faster than existing algorithms (faster than DBSCAN by a factor of up to 45)

But needs a large number of parameters

Page 60: Cluster analysis

DENCLUE: Technical Essence

Uses grid cells, but only keeps information about grid cells that actually contain data points, and manages these cells in a tree-based access structure.

Influence function: describes the impact of a data point within its neighborhood.

The overall density of the data space can be calculated as the sum of the influence functions of all data points.

Clusters can be determined mathematically by identifying density attractors.

Density attractors are local maxima of the overall density function.

Page 61: Cluster analysis

Gradient: The Steepness of a Slope

Example (Gaussian influence, density, and gradient functions):

$$f_{\mathrm{Gaussian}}(x, y) = e^{-\frac{d(x, y)^2}{2\sigma^2}}$$

$$f^{D}_{\mathrm{Gaussian}}(x) = \sum_{i=1}^{N} e^{-\frac{d(x, x_i)^2}{2\sigma^2}}$$

$$\nabla f^{D}_{\mathrm{Gaussian}}(x, x_i) = \sum_{i=1}^{N} (x_i - x)\, e^{-\frac{d(x, x_i)^2}{2\sigma^2}}$$
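A short Python sketch of the Gaussian influence and overall density functions (illustrative names; sigma is the smoothing parameter):

```python
import math

def gaussian_influence(x, y, sigma):
    """Influence of data point y at location x."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))   # squared Euclidean distance
    return math.exp(-d2 / (2 * sigma ** 2))

def gaussian_density(x, data, sigma):
    """Overall density at x: sum of the influences of all data points."""
    return sum(gaussian_influence(x, xi, sigma) for xi in data)

data = [(1.0, 1.0), (1.2, 0.9), (5.0, 5.0)]
print(gaussian_density((1.1, 1.0), data, sigma=0.5))
```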


Page 62: Cluster analysis

Density Attractor

Page 63: Cluster analysis

Center-Defined and Arbitrary

Page 64: Cluster analysis

Lecture-47

Grid-Based Methods

Page 65: Cluster analysis

Grid-Based Clustering Method

Uses a multi-resolution grid data structure.

Several interesting methods:

STING (a STatistical INformation Grid approach)

WaveCluster: a multi-resolution clustering approach using the wavelet method

CLIQUE

Page 66: Cluster analysis

STING: A Statistical Information Grid Approach

Wang, Yang and Muntz (VLDB'97).

The spatial area is divided into rectangular cells.

There are several levels of cells corresponding to different levels of resolution.

Page 67: Cluster analysis

STING: A Statistical Information Grid Approach

Each cell at a high level is partitioned into a number of smaller cells in the next lower level.

Statistical information of each cell is calculated and stored beforehand and is used to answer queries.

Parameters of higher-level cells can be easily calculated from the parameters of lower-level cells: count, mean, s, min, max, and the type of distribution (normal, uniform, etc.).

Uses a top-down approach to answer spatial data queries:

Start from a pre-selected layer, typically with a small number of cells.

For each cell in the current level compute the confidence interval.

Page 68: Cluster analysis

STING: A Statistical Information Grid Approach

Remove the irrelevant cells from further consideration.

When finished examining the current layer, proceed to the next lower level.

Repeat this process until the bottom layer is reached.

Advantages:

Query-independent, easy to parallelize, incremental update.

O(K), where K is the number of grid cells at the lowest level.

Disadvantages:

All the cluster boundaries are either horizontal or vertical, and no diagonal boundary is detected.

Page 69: Cluster analysis

WaveCluster

Sheikholeslami, Chatterjee, and Zhang (VLDB'98).

A multi-resolution clustering approach which applies a wavelet transform to the feature space.

A wavelet transform is a signal processing technique that decomposes a signal into different frequency sub-bands.

Both grid-based and density-based.

Input parameters:

Number of grid cells for each dimension

The wavelet, and the number of applications of the wavelet transform

Page 70: Cluster analysis

WaveCluster

How to apply the wavelet transform to find clusters:

Summarize the data by imposing a multidimensional grid structure onto the data space.

These multidimensional spatial data objects are represented in an n-dimensional feature space.

Apply the wavelet transform on the feature space to find the dense regions in the feature space.

Apply the wavelet transform multiple times, which results in clusters at different scales from fine to coarse.

Page 71: Cluster analysis

What Is a Wavelet?

Page 72: Cluster analysis

Quantization

Page 73: Cluster analysis

Transformation

Page 74: Cluster analysis

WaveCluster

Why the wavelet transformation is useful for clustering:

Unsupervised clustering

It uses hat-shaped filters to emphasize regions where points cluster, while simultaneously suppressing weaker information on their boundary

Effective removal of outliers

Multi-resolution

Cost efficiency

Major features:

Complexity O(N)

Detects arbitrarily shaped clusters at different scales

Not sensitive to noise, not sensitive to input order

Only applicable to low-dimensional data

Page 75: Cluster analysis

CLIQUE (Clustering In QUEst)

Agrawal, Gehrke, Gunopulos, Raghavan (SIGMOD'98).

Automatically identifies subspaces of a high-dimensional data space that allow better clustering than the original space.

CLIQUE can be considered as both density-based and grid-based:

It partitions each dimension into the same number of equal-length intervals.

It partitions an m-dimensional data space into non-overlapping rectangular units.

A unit is dense if the fraction of total data points contained in the unit exceeds the input model parameter.

A cluster is a maximal set of connected dense units within a subspace.

Page 76: Cluster analysis

CLIQUE: The Major Steps

1. Partition the data space and find the number of points that lie inside each cell of the partition.
2. Identify the subspaces that contain clusters using the Apriori principle.
3. Identify clusters: determine dense units in all subspaces of interest; determine connected dense units in all subspaces of interest.
4. Generate a minimal description for the clusters: determine maximal regions that cover a cluster of connected dense units for each cluster; determine the minimal cover for each cluster.
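A tiny Python sketch of the first step for one dimension, counting points per equal-length interval and flagging dense units against a threshold (all names and values are illustrative):

```python
def dense_units_1d(values, low, high, n_intervals, threshold):
    """Count points per interval and return the indices of dense units."""
    width = (high - low) / n_intervals
    counts = [0] * n_intervals
    for v in values:
        idx = min(int((v - low) / width), n_intervals - 1)   # clamp the upper edge
        counts[idx] += 1
    return [i for i, c in enumerate(counts) if c / len(values) > threshold]

ages = [22, 25, 27, 33, 35, 36, 38, 41, 59]
print(dense_units_1d(ages, low=20, high=60, n_intervals=4, threshold=0.25))  # [0, 1]
```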


Page 77: Cluster analysis

(figure: CLIQUE example over the dimensions age (20-60), salary (in units of 10,000), and vacation (weeks), with a density threshold of 3)

Page 78: Cluster analysis

Strengths and Weaknesses of CLIQUE

Strengths:

It automatically finds subspaces of the highest dimensionality such that high-density clusters exist in those subspaces.

It is insensitive to the order of records in the input and does not presume some canonical data distribution.

It scales linearly with the size of the input and has good scalability as the number of dimensions in the data increases.

Weakness:

The accuracy of the clustering result may be degraded at the expense of the simplicity of the method.

Page 79: Cluster analysis

Lecture-48

Model-Based Clustering Methods

Page 80: Cluster analysis

Model-Based Clustering Methods

Attempt to optimize the fit between the data and some mathematical model.

Statistical and AI approaches.

Conceptual clustering:

A form of clustering in machine learning

Produces a classification scheme for a set of unlabeled objects

Finds a characteristic description for each concept (class)

COBWEB:

A popular and simple method of incremental conceptual learning

Creates a hierarchical clustering in the form of a classification tree

Each node refers to a concept and contains a probabilistic description of that concept

Page 81: Cluster analysis

COBWEB Clustering Method

A classification tree

Page 82: Cluster analysis

More on Statistical-Based Clustering

Limitations of COBWEB:

The assumption that the attributes are independent of each other is often too strong, because correlation may exist.

Not suitable for clustering large database data: skewed tree and expensive probability distributions.

CLASSIT:

An extension of COBWEB for incremental clustering of continuous data

Suffers from problems similar to those of COBWEB

AutoClass (Cheeseman and Stutz, 1996):

Uses Bayesian statistical analysis to estimate the number of clusters

Popular in industry

Page 83: Cluster analysis

Other Model-Based Clustering Methods

Neural network approaches:

Represent each cluster as an exemplar, acting as a "prototype" of the cluster.

New objects are distributed to the cluster whose exemplar is the most similar according to some distance measure.

Competitive learning:

Involves a hierarchical architecture of several units (neurons).

Neurons compete in a "winner-takes-all" fashion for the object currently being presented.

Page 84: Cluster analysis

Model-Based Clustering Methods

Page 85: Cluster analysis

Self-Organizing Feature Maps (SOMs)

Clustering is also performed by having several units competing for the current object.

The unit whose weight vector is closest to the current object wins.

The winner and its neighbors learn by having their weights adjusted.

SOMs are believed to resemble processing that can occur in the brain.

Useful for visualizing high-dimensional data in 2- or 3-D space.

Page 86: Cluster analysis

Lecture-49

Outlier Analysis

Page 87: Cluster analysis

What Is Outlier Discovery?

What are outliers? A set of objects that are considerably dissimilar from the remainder of the data.

Example: sports figures such as Michael Jordan, Wayne Gretzky, ...

Problem: find the top n outlier points.

Applications:

Credit card fraud detection

Telecom fraud detection

Customer segmentation

Medical analysis

Page 88: Cluster analysis

Outlier Discovery: Statistical Approaches

Assume a model of the underlying distribution that generates the data set (e.g., a normal distribution).

Use discordancy tests, depending on:

the data distribution

the distribution parameters (e.g., mean, variance)

the number of expected outliers

Drawbacks:

Most tests are for a single attribute.

In many cases, the data distribution may not be known.

Page 89: Cluster analysis

Outlier Discovery: Distance-Based Approach

Introduced to counter the main limitations imposed by statistical methods: we need multi-dimensional analysis without knowing the data distribution.

Distance-based outlier: a DB(p, D)-outlier is an object O in a dataset T such that at least a fraction p of the objects in T lies at a distance greater than D from O.

Algorithms for mining distance-based outliers:

Index-based algorithm

Nested-loop algorithm

Cell-based algorithm
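A brute-force (nested-loop style) Python sketch of this definition (illustrative; a real implementation would use an index-based or cell-based algorithm for large data, and the exact treatment of the denominator is a detail not fixed by the slide):

```python
import math

def db_outliers(points, p, D):
    """Return objects O with at least a fraction p of the other points farther than D."""
    n = len(points)
    outliers = []
    for o in points:
        far = sum(1 for q in points if q != o and math.dist(o, q) > D)
        if far / (n - 1) >= p:
            outliers.append(o)
    return outliers

pts = [(1, 1), (1.2, 0.8), (0.9, 1.1), (1.1, 1.0), (8, 9)]
print(db_outliers(pts, p=0.9, D=3.0))   # [(8, 9)]
```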


Page 90: Cluster analysis

Outlier Discovery: Deviation-Based Approach

Identifies outliers by examining the main characteristics of the objects in a group.

Objects that "deviate" from this description are considered outliers.

Sequential exception technique: simulates the way in which humans can distinguish unusual objects from among a series of supposedly like objects.

OLAP data cube technique: uses data cubes to identify regions of anomalies in large multidimensional data.

