
Page 1

Juhan Nam

GCT634/AI613: Musical Applications of Machine Learning (Fall 2020)

Traditional Machine Learning: Unsupervised Learning

Page 2

Traditional Machine Learning Pipeline in Classification Tasks

● A set of hand-designed audio features is selected for a given task and the features are concatenated (see the sketch below) ○ The majority of them are extracted at the frame level: MFCC, chroma, spectral statistics ○ The concatenated features are complementary to each other

[Figure: frame-level features (MFCC, spectral statistics, chroma, ...) are concatenated and fed to a classifier that outputs "Class #1", "Class #2", or "Class #3".]
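A minimal sketch of this pipeline, assuming librosa is available and using a hypothetical input file `example.wav`; the exact feature set is only an illustration, not a prescribed one.

```python
# Extract frame-level MFCC, chroma, and spectral statistics with librosa,
# then concatenate them along the feature axis.
import librosa
import numpy as np

y, sr = librosa.load("example.wav", sr=22050)              # hypothetical input file

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)          # (13, num_frames)
chroma = librosa.feature.chroma_stft(y=y, sr=sr)            # (12, num_frames)
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)    # (1, num_frames)
rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)      # (1, num_frames)

# Concatenate all frame-level features into one (D, num_frames) matrix.
features = np.concatenate([mfcc, chroma, centroid, rolloff], axis=0)
print(features.shape)
```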

Page 3

Issues: Redundancy and Dimensionality

● The information in the concatenated feature vectors can be repetitive

● Adding more features increases the dimensionality of the feature vectors, which makes the classification more demanding.

[Figure: the same pipeline as above; the ever-growing concatenated feature vector fed to the classifier is marked with a "?".]

Page 4

Issues: Temporal Summarization

● Taking all the frames as a single vector is too much for classifiers ○ 10–100 frames per second is typical in frame-level processing

● Temporal order is important, so taking multiple features that capture the local temporal order is acceptable ○ MFCC: concatenated with its frame-wise differences (delta and double-delta)

● However, extracting long-term temporal dependencies is hard! ○ Averaging is OK but too simple ○ A DCT over time for each feature dimension is an option (see the sketch below)

[Figure: the same pipeline; the temporal summarization step between the frame-level features and the classifier is marked with "??".]
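A minimal sketch of these ideas, assuming librosa/scipy and reusing the `mfcc` matrix from the previous sketch: delta features for local temporal order, then mean and DCT-over-time for long-term summarization.

```python
import numpy as np
import librosa
from scipy.fft import dct

mfcc_delta = librosa.feature.delta(mfcc)              # frame-wise differences
mfcc_delta2 = librosa.feature.delta(mfcc, order=2)    # double-delta
frames = np.concatenate([mfcc, mfcc_delta, mfcc_delta2], axis=0)   # (39, T)

# Long-term summarization over the time axis:
mean_summary = frames.mean(axis=1)                            # simple averaging
dct_summary = dct(frames, axis=1, norm="ortho")[:, :10]       # first 10 DCT coefficients per dimension
song_level_feature = np.concatenate([mean_summary, dct_summary.ravel()])
print(song_level_feature.shape)
```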

Page 5

Unsupervised Learning

● Principal Component Analysis (PCA) ○ Learn a linear transform so that the transformed features are de-correlated ○ Dimensionality reduction: e.g., 2D for visualization

● K-means ○ Learn K cluster centers and determine the membership of each data point ○ Map each data point to one of a fixed set of learned vectors (cluster centers): vector quantization and one-hot sparse feature representation

● Gaussian Mixture Models (GMM) ○ Learn K Gaussian distribution parameters and the soft memberships ○ Density estimation (likelihood estimation): can be used for classification when estimated for each class

Page 6

Principal Component Analysis

● Correlation and Redundancy ○ We can measure the redundancy between two elements of a feature vector by computing their correlation ○ If some of the elements have high correlations, we can remove the redundant elements (see the sketch below)

Pearson correlation coefficient between two elements $x_i$ and $x_j$ of the feature vector $\mathbf{x} = [x_1, x_2, \dots, x_D]^T$:

$\rho_{ij} = \dfrac{\sum_n (x_i^{(n)} - \bar{x}_i)(x_j^{(n)} - \bar{x}_j)}{\sqrt{\sum_n (x_i^{(n)} - \bar{x}_i)^2}\,\sqrt{\sum_n (x_j^{(n)} - \bar{x}_j)^2}}$
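A minimal sketch, reusing the `features` matrix of shape (D, num_frames) from the earlier sketch, of measuring redundancy via pairwise correlations:

```python
import numpy as np

corr = np.corrcoef(features)            # (D, D) correlation matrix (rows = feature dimensions)
# Flag strongly correlated (redundant) feature pairs, ignoring the diagonal.
i, j = np.where(np.triu(np.abs(corr) > 0.9, k=1))
print(list(zip(i, j)))
```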

Page 7

Principal Component Analysis

● Transform the input space ($X$) into a latent space ($Z$) such that the latent space is de-correlated (i.e., each dimension is orthogonal to the others) ○ A linear transform designed to maximize the variance of the first principal component and minimize the variance of the last principal component

[Figure: the data in the input space $X$ and in the transformed latent space $Z$; the orthogonal vectors (principal components) are shown as the new axes.]

Page 8

Principal Component Analysis

● Transform the input space ($X$) into a latent space ($Z$) such that the latent space is de-correlated (i.e., each dimension is orthogonal to the others) ○ A linear transform designed to maximize the variance of the first principal component and minimize the variance of the last principal component

$Z = WX, \qquad ZZ^T = N\Lambda = N\begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_D \end{bmatrix}$

The diagonal elements correspond to the variances of the transformed data points on each dimension; the off-diagonal elements (the covariances) are zero.

Page 9

Principal Component Analysis: Eigenvalue Decomposition

● To derive 𝑊

● Eigenvalue Decomposition (𝑄: eigenvectors, Λ: eigenvalue matrix)

● 𝑊 is obtained from the eigenvectors of Cov(𝑋)

$ZZ^T = N\Lambda \;\Rightarrow\; (WX)(WX)^T = N\Lambda \;\Rightarrow\; W X X^T W^T = N\Lambda \;\Rightarrow\; W\,\mathrm{Cov}(X)\,W^T = \Lambda \quad \left(\mathrm{Cov}(X) = \tfrac{1}{N} X X^T\right)$

Eigenvalue decomposition: $A x_k = \lambda_k x_k$, $\;Q = [x_1\, x_2 \cdots x_D]$, $\;\Lambda = \mathrm{diag}(\lambda_k)$, so $AQ = Q\Lambda$ and $Q^{-1} A Q = Q^T A Q = \Lambda$ (if $A$ is symmetric)

Setting $A = \mathrm{Cov}(X)$ gives $W = Q^T$
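A minimal sketch (numpy only, with a random zero-mean data matrix of shape (D, N), one column per data point) of deriving $W$ from the eigenvectors of the covariance matrix and checking that $Z = WX$ is de-correlated:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 1000))
X = X - X.mean(axis=1, keepdims=True)       # zero-mean (see the "In Practice" slide)

cov = X @ X.T / X.shape[1]                  # Cov(X) = (1/N) X X^T
eigvals, Q = np.linalg.eigh(cov)            # eigh: the covariance is symmetric
order = np.argsort(eigvals)[::-1]           # sort by descending variance
eigvals, Q = eigvals[order], Q[:, order]

W = Q.T                                     # PCA transform
Z = W @ X
print(np.round(Z @ Z.T / X.shape[1], 3))    # ~ diag(lambda_1, ..., lambda_D)
```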

Page 10

Principal Component Analysis: Eigenvalue Decomposition

● In addition, we can normalize the latent space

● $\tilde{W}$ contains a set of orthogonal vectors, scaled so that each latent dimension has unit variance

$\tilde{W}\,\mathrm{Cov}(X)\,\tilde{W}^T = \mathbf{I}$, i.e., $\Lambda^{-1/2}\, Q^T A Q\, \Lambda^{-1/2} = \mathbf{I}$ (using $Q^T A Q = \Lambda$)

$\tilde{\Lambda} = \Lambda^{-1/2} = \mathrm{diag}\!\left(\tfrac{1}{\sqrt{\lambda_1}}, \tfrac{1}{\sqrt{\lambda_2}}, \dots, \tfrac{1}{\sqrt{\lambda_D}}\right)$

$\tilde{Z} = \tilde{\Lambda} Z, \qquad \tilde{W} = \Lambda^{-1/2} Q^T = \Lambda^{-1/2} W = \tilde{\Lambda} W$

Page 11

Principal Component Analysis In Practice

● In practice, $X$ is a huge matrix where each column is a data point ○ Computing the covariance matrix is a bottleneck, so we often subsample the input data

$\mathrm{Cov}(X) = \tfrac{1}{N} X X^T$

[Figure: the $D \times N$ data matrix $X$ multiplied by its transpose $X^T$ to form the $D \times D$ covariance matrix.]

Page 12

Principal Component Analysis In Practice

● Shift the distribution to have zero mean

● The normalization (scaling) step is optional; when included, the procedure is called PCA whitening (see the sketch below)

[Figure: the PCA pipeline in practice. Shifting: $X' = X - \mathrm{mean}(X)$. Rotation: $WX'$. Normalization (scaling): $\tilde{\Lambda}\,WX'$.]
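A minimal sketch (numpy, reusing `eigvals`, `Q`, and the centered `X` from the previous sketch) of the full pipeline: shift, rotate, and optionally scale (PCA whitening) so that the latent covariance becomes the identity.

```python
import numpy as np

W = Q.T                                          # rotation
Lambda_inv_sqrt = np.diag(1.0 / np.sqrt(eigvals))
W_tilde = Lambda_inv_sqrt @ W                    # whitening transform

Z_tilde = W_tilde @ X                            # shifted, rotated, and scaled data
print(np.round(Z_tilde @ Z_tilde.T / X.shape[1], 3))   # ~ identity matrix
```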

Page 13

Dimensionality Reduction Using PCA

● We can remove the principal components with small variances ○ Sort the variances in the latent space (the eigenvalues) in descending order and remove the tail ○ A common strategy is to accumulate the variances from the first principal component; when the sum reaches 90% or 95% of the total variance, remove the remaining dimensions. This can significantly reduce the dimensionality.

● Note that you can reconstruct the original data with some loss ○ You can use PCA as a data compression method (see the sketch below)

[Figure: the sorted variances (eigenvalues), with the cut-off where the cumulative sum reaches 95%.]
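A minimal sketch, assuming scikit-learn and a placeholder data matrix of shape (num_samples, D) (one row per data point, as sklearn expects), of keeping only enough principal components to explain 95% of the variance:

```python
import numpy as np
from sklearn.decomposition import PCA

data = np.random.randn(1000, 40)                 # placeholder feature matrix

pca = PCA(n_components=0.95)                     # a fraction keeps 95% of the variance
reduced = pca.fit_transform(data)                # (1000, d) with d <= 40
reconstructed = pca.inverse_transform(reduced)   # lossy reconstruction
print(reduced.shape, pca.explained_variance_ratio_.sum())
```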

Page 14

Visualization Using PCA

● Take only the first two or three principal components for 2D or 3D visualization ○ A popular feature-visualization method, along with t-SNE, for analyzing the latent feature space of a trained deep neural network (see the sketch below)

source:https://jakevdp.github.io/PythonDataScienceHandbook/05.09-principal-component-analysis.html
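A minimal sketch (scikit-learn/matplotlib, reusing the `data` matrix from the previous sketch with hypothetical integer labels used only for coloring) of a 2D projection:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

labels = np.random.randint(0, 3, size=len(data))    # placeholder class labels

points_2d = PCA(n_components=2).fit_transform(data)
plt.scatter(points_2d[:, 0], points_2d[:, 1], c=labels, s=5, cmap="tab10")
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.show()
```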

Page 15

K-Means Clustering

● Grouping the data points into K clusters ○ Each point has membership in one of the clusters ○ Each cluster has a cluster center (not necessarily one of the data points) ○ The membership is determined by choosing the nearest cluster center ○ The cluster center is the mean of the data points that belong to the cluster

This is a dilemma!

Page 16

K-Means: Definition

● The loss function to minimize is defined as:

$L = \sum_{n=1}^{N}\sum_{k=1}^{K} r_k^{(n)} \left\| x^{(n)} - \mu_k \right\|^2, \qquad r_k^{(n)} = \begin{cases} 1 & \text{if } k = \underset{j}{\arg\min}\, \| x^{(n)} - \mu_j \|^2 \\ 0 & \text{otherwise} \end{cases}$

○ Regarded as a problem that learns the cluster centers ($\mu_k$) that minimize the loss ○ $r_k^{(n)}$ is the binary indicator of the membership of each data point

● Taking the derivative of the loss $L$ w.r.t. the cluster center $\mu_k$ and setting it to zero:

$\dfrac{dL}{d\mu_k} = -\sum_{n=1}^{N} 2\, r_k^{(n)} \left( x^{(n)} - \mu_k \right) = 0 \quad\Rightarrow\quad \mu_k = \dfrac{\sum_{n=1}^{N} r_k^{(n)}\, x^{(n)}}{\sum_{n=1}^{N} r_k^{(n)}}$

Again, we should know the cluster centers (to determine the memberships) before computing the cluster centers (see the sketch of the resulting iterative algorithm below).
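A minimal sketch (numpy only, on toy 2-D data) of the iterative K-means updates derived above; K and the number of iterations are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))                     # (N, D) data points
K = 3

# Initialize the cluster centers with K randomly chosen data points.
mu = X[rng.choice(len(X), size=K, replace=False)]

for _ in range(20):
    # Assignment step: r_k^(n) = 1 for the nearest cluster center.
    dist = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)   # (N, K)
    assign = dist.argmin(axis=1)
    # Update step: each center becomes the mean of its assigned points
    # (keep the old center if a cluster happens to become empty).
    mu = np.array([X[assign == k].mean(axis=0) if np.any(assign == k) else mu[k]
                   for k in range(K)])

loss = ((X - mu[assign]) ** 2).sum()                  # the loss L above
print(mu, loss)
```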

Page 17

Learning Algorithm

● Iterative learning ○ Initialize the cluster centers with random values (a) ○ Compute the memberships of each data point given the cluster centers (b) ○ Update the cluster centers by averaging the data points that belong to them (c) ○ Repeat the two steps above until convergence (d, e, f)

[Figure 9.1 (PRML): Illustration of the K-means algorithm using the re-scaled Old Faithful data set. (a) Green points denote the data set in a two-dimensional Euclidean space; the initial choices for the centres µ1 and µ2 are shown by the red and blue crosses. (b) In the initial E step, each data point is assigned to the red or the blue cluster, according to which cluster centre is nearer. (c) In the subsequent M step, each cluster centre is re-computed as the mean of the points assigned to it. (d)-(i) show successive E and M steps through to final convergence of the algorithm.]

(The PRML book)

[Figure 9.2 (PRML): Plot of the cost function J after each E step (blue points) and M step (red points) of the K-means algorithm for the example shown in Figure 9.1; the algorithm has converged after the third M step, and the final EM cycle produces no changes in either the assignments or the prototype vectors.]


The loss monotonically decreases every iteration

Page 18

Data Compression Using K-means

● Vector Quantization ○ The set of cluster centers is called the "codebook" ○ Encoding maps a sample vector to a single scalar value, the "codebook index" (membership index) ○ The compressed data can be reconstructed using the codebook (see the sketch below) ○ Example: speech codecs (CELP) ■ A component of the speech signal is vector-quantized and the codebook index is transmitted in speech communication

[Figure: encoding a sequence of vectors $x^{(1)}, x^{(2)}, \dots$ into codebook indices (e.g., 3, 5, ...) and decoding them back into the code vectors $\mu_3, \mu_5, \dots$; example of a codebook for a 2D Gaussian with 16 code vectors.]

source:https://wiki.aalto.fi/pages/viewpage.action?pageId=149883153
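A minimal sketch (assuming scikit-learn, with placeholder frame vectors) of vector quantization with a K-means codebook: encode each vector as the index of its nearest code vector and decode by looking the index up in the codebook.

```python
import numpy as np
from sklearn.cluster import KMeans

frames = np.random.randn(2000, 13)                # placeholder frame-level vectors
codebook_size = 16

kmeans = KMeans(n_clusters=codebook_size, n_init=10, random_state=0).fit(frames)
codebook = kmeans.cluster_centers_                # (16, 13) code vectors

indices = kmeans.predict(frames)                  # encoding: one integer per frame
reconstructed = codebook[indices]                 # decoding: lossy reconstruction
print(indices[:10], np.mean((frames - reconstructed) ** 2))
```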

Page 19

Codebook-based Feature Summarization

● Compute the histogram of codebook indices ○ Represent each codebook index as a one-hot vector ■ If K is a large number, this is regarded as a sparse representation of the features ○ Useful for summarizing a long sequence of frame-level features (see the sketch below) ■ Often called a "bag of features" (computer vision) or a "bag of words" (NLP)

[Figure: a sequence $x^{(1)}, x^{(2)}, \dots$ is encoded into one-hot vectors and summarized into a K-dimensional histogram (a bag of features).]

source:https://towardsdatascience.com/bag-of-visual-words-in-a-nutshell-9ceea97ce0fb

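A minimal sketch (numpy, reusing `kmeans` and `codebook_size` from the previous sketch) of summarizing a whole track as the normalized histogram of its frame-level codebook indices, i.e., a bag-of-features vector:

```python
import numpy as np

track_frames = np.random.randn(500, 13)           # placeholder frames of one track
indices = kmeans.predict(track_frames)

histogram = np.bincount(indices, minlength=codebook_size).astype(float)
bag_of_features = histogram / histogram.sum()     # K-dimensional summary vector
print(bag_of_features)
```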

Page 20

Gaussian Mixture Model (GMM)

● Fit a set of multivariate Gaussian distributions to the data ○ Similar to K-means clustering, but it learns not only the cluster centers (means) but also the covariances of the clusters ○ The membership is a soft assignment given by a multinomial distribution ■ The multinomial distribution can be regarded as a mapping onto a latent space

[Figure: the same data clustered by K-means (hard assignment) and by a GMM (soft assignment).]

Page 21

Gaussian Mixture Model (GMM)

● Replace the hard assignment with a multinomial distribution

● Replace a single cluster with a multivariate Gaussian distribution ○ Mean and covariance

Hard assignment (K-means): $r_k \in \{0, 1\}$, with distance $d(x, \mu_k) = \| x - \mu_k \|^2$

Soft assignment (GMM): $\pi_k = P(z_k \mid x)$, with $\sum_{k=1}^{K} \pi_k = 1$

Gaussian distribution per cluster: $P(x \mid z_k) = \dfrac{1}{(2\pi)^{D/2}\, |\Sigma_k|^{1/2}} \exp\!\left( -\tfrac{1}{2} (x - \mu_k)^T \Sigma_k^{-1} (x - \mu_k) \right)$

[Figure: the soft membership as a multinomial distribution over the clusters 1, 2, 3, 4, ..., K.]
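A minimal sketch (scipy/numpy, with arbitrary toy parameters) of the soft assignment: evaluate $P(x \mid z_k)$ for each component and normalize with the mixture weights $\pi_k$ to get a multinomial membership for one data point.

```python
import numpy as np
from scipy.stats import multivariate_normal

means = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]    # toy mu_k
covs = [np.eye(2), 2.0 * np.eye(2)]                     # toy Sigma_k
pi = np.array([0.4, 0.6])                               # toy mixture weights

x = np.array([1.0, 1.0])
likelihoods = np.array([multivariate_normal.pdf(x, mean=m, cov=c)
                        for m, c in zip(means, covs)])
membership = pi * likelihoods / np.sum(pi * likelihoods)   # soft assignment, sums to 1
print(membership)
```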

Page 22

Gaussian Mixture Model (GMM)

● The likelihood of a data point can be computed as a mixture of Gaussians

● Fit this model to data by maximum likelihood estimation ○ Equivalent to minimizing the negative log-likelihood (this is the loss function) ○ This model fitting is called density estimation

● GMM is also called a latent variable model ○ $z$ is a latent variable: regarded as the hidden cause of the data distribution

$p(x) = \sum_{z} p(x, z) = \sum_{z} p(z)\, p(x \mid z) = \sum_{k=1}^{K} \pi_k\, \mathcal{N}(x \mid \mu_k, \Sigma_k)$
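A minimal sketch (scikit-learn, with placeholder data and an arbitrary K=4) of fitting a GMM by maximum likelihood and using it as a density estimator:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

data = np.random.randn(1000, 13)                      # placeholder feature vectors

gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
gmm.fit(data)                                         # EM maximizes the likelihood

log_likelihood = gmm.score_samples(data)              # log p(x) for each data point
negative_log_likelihood = -log_likelihood.sum()       # the loss being minimized
print(negative_log_likelihood)
```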

Page 23

Learning Algorithm: K-Means

● Iterative learning ○ Initialize the cluster centers with random values (a) ○ Compute the memberships of each data point given the cluster centers (b) ○ Update the cluster centers by averaging the data points that belong to them (c) ○ Repeat the two steps above until convergence (d, e, f)

Page 24

Learning Algorithm: GMM

● Iterative learning ○ Initialize the parameters (now the Gaussian distribution parameters, not just the cluster centers) with random values (a) ○ Expectation (E step): compute the soft membership of each data point given the current Gaussians (b) ○ Maximization (M step): update the clusters by maximizing the likelihood given the memberships, the counterpart of minimizing the K-means loss (c) ○ Repeat the two steps above until convergence (d, e, f)

Page 25

Learning Algorithm

● Initialize the parameters

● E-step ○ Evaluate the "soft" membership of samples given the Gaussian distributions

● M-step ○ Update the parameters that maximize the log-likelihood

E-step (soft membership): $\gamma_k^{(n)} = \dfrac{\pi_k\, \mathcal{N}(x^{(n)} \mid \mu_k, \Sigma_k)}{\sum_{j} \pi_j\, \mathcal{N}(x^{(n)} \mid \mu_j, \Sigma_j)}$, with parameters $\theta \in \{\pi_k, \mu_k, \Sigma_k\}$

M-step: $N_k = \sum_{n} \gamma_k^{(n)}$ (effective number of members), $\qquad \pi_k = \dfrac{N_k}{N}$ (multinomial distribution),

$\mu_k = \dfrac{1}{N_k} \sum_{n} \gamma_k^{(n)} x^{(n)}, \qquad \Sigma_k = \dfrac{1}{N_k} \sum_{n} \gamma_k^{(n)} (x^{(n)} - \mu_k)(x^{(n)} - \mu_k)^T$ (Gaussian distribution: mean and covariance of each cluster)
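A minimal sketch (numpy/scipy, with toy data, an arbitrary K, and naive initialization) of one EM iteration implementing the update equations above:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))                        # (N, D) data points
N, D, K = X.shape[0], X.shape[1], 3

pi = np.full(K, 1.0 / K)                                 # initial mixture weights
mu = X[rng.choice(N, size=K, replace=False)]             # initial means
Sigma = np.array([np.eye(D) for _ in range(K)])          # initial covariances

# E-step: responsibilities gamma_k^(n), shape (N, K).
gamma = np.column_stack([pi[k] * multivariate_normal.pdf(X, mean=mu[k], cov=Sigma[k])
                         for k in range(K)])
gamma /= gamma.sum(axis=1, keepdims=True)

# M-step: re-estimate pi_k, mu_k, Sigma_k from the responsibilities.
Nk = gamma.sum(axis=0)                                   # effective number of members
pi = Nk / N
mu = (gamma.T @ X) / Nk[:, None]
Sigma = np.array([((gamma[:, k, None] * (X - mu[k])).T @ (X - mu[k])) / Nk[k]
                  for k in range(K)])
```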

Page 26

Classification Using GMM

● Training: fit one GMM model to each class of data

● Test: use Bayes’ rule for classification

Per-class likelihoods: $P(x \mid y = c_1, \theta_{c_1}),\; P(x \mid y = c_2, \theta_{c_2}),\; P(x \mid y = c_3, \theta_{c_3}),\; \dots$

$\hat{y} = \underset{y}{\operatorname{argmax}}\; P(y \mid x = x^{(n)}) = \underset{y}{\operatorname{argmax}}\; \dfrac{P(x = x^{(n)} \mid y)\, p(y)}{p(x = x^{(n)})}$

$\hat{y} = \underset{y}{\operatorname{argmax}}\; P(x = x^{(n)} \mid y)\, p(y)$, where $p(y)$ is the prior distribution of each class

If you don't have any information about the prior, you can ignore $p(y)$ by assuming that all classes are equally likely.

[Figure: one GMM fit to each class $c_1$, $c_2$, $c_3$.]
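A minimal sketch (scikit-learn/numpy, with toy per-class data) of GMM-based classification: fit one GaussianMixture per class, then pick the class whose model gives the highest log-likelihood plus log-prior for a test point.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
train = {c: rng.standard_normal((200, 13)) + 3 * c for c in range(3)}   # toy classes

models, log_priors = {}, {}
total = sum(len(x) for x in train.values())
for c, Xc in train.items():
    models[c] = GaussianMixture(n_components=2, random_state=0).fit(Xc)
    log_priors[c] = np.log(len(Xc) / total)           # p(y) from class frequencies

x_test = rng.standard_normal((1, 13)) + 3             # one hypothetical test point
scores = {c: models[c].score_samples(x_test)[0] + log_priors[c] for c in models}
y_hat = max(scores, key=scores.get)                   # argmax_y  log P(x|y) + log p(y)
print(y_hat)
```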