
Page 1

Juhan Nam

GCT634/AI613: Musical Applications of Machine Learning (Fall 2020)

Traditional Machine Learning: Supervised Learning

Page 2

Classification Models

● Generative models
○ Fit a probability distribution of each class: $P(x|y)$
○ Use Bayes' rule for classification:
■ $P(y|x) = P(x|y)P(y)/P(x)$
○ Algorithms
■ Gaussian mixture models
■ Naïve Bayes

● Discriminative models
○ Find the boundary for each class
○ Algorithms
■ K-nearest neighbor
■ Logistic regression, support vector machine
■ Multi-layer perceptron
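A minimal scikit-learn sketch contrasting the two families on synthetic data (the features and labels below are placeholders, not the course dataset): Gaussian Naïve Bayes fits $P(x|y)$ per class, while logistic regression learns the decision boundary directly.

```python
# Sketch: generative vs. discriminative classifiers on toy features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB              # generative: fits P(x|y) per class
from sklearn.linear_model import LogisticRegression     # discriminative: fits the boundary

X, y = make_classification(n_samples=300, n_features=10, n_classes=3,
                           n_informative=5, random_state=0)

generative = GaussianNB().fit(X, y)
discriminative = LogisticRegression(max_iter=1000).fit(X, y)

print("GaussianNB accuracy:", generative.score(X, y))
print("LogisticRegression accuracy:", discriminative.score(X, y))
```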

Page 3

K-Nearest Neighbor (K-NN)

● Predict the label of a test example from the nearest training examples
○ Uses only the distance from training examples and has no model parameters
○ The choice of distance: Euclidean (L2), Manhattan (L1), or cosine

[Figure: feature space $(x_1, x_2)$ with labeled training examples ("Metal", "Jazz", "Classical") and unlabeled test examples]

Page 4

K-Nearest Neighbor (K-NN)

● Find the "K" nearest training examples and do majority voting
○ K is typically an odd number (to avoid ties)

[Figure: the same feature space $(x_1, x_2)$; the K nearest neighbors of each test example vote on its label ("Metal", "Jazz", "Classical")]

Page 5

K-Nearest Neighbor (K-NN)

● Decision boundaries
○ Result in the Voronoi diagram

[Figure: Voronoi diagram; source: http://cgm.cs.mcgill.ca/~godfried/teaching/projects.pr.98/sergei/project.html]

[Figure: decision boundaries for 1-NN vs 5-NN; source: https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm]

Page 6

K-Nearest Neighbor (K-NN)

● K-NN is a simple and reliable classifier
○ No model parameters: K and the distance metric are the only options
○ Finds non-linear boundaries (the Voronoi diagram)

● However, the distance from the test example must be computed against the entire set of training examples
○ Slow computation in the "test phase": O(1) to train but O(N) per test example
○ Requires a large memory to store the training data
○ Fast K-NN (usually using a hash table) is a separate research topic aimed at speeding up the distance computation
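A brute-force NumPy sketch of the procedure above, with made-up feature vectors and genre labels: each prediction computes the distance to all N training examples (the O(N) test-time cost) and then takes a majority vote over the K nearest labels.

```python
import numpy as np

def knn_predict(X_train, y_train, x_test, k=5):
    # O(N) per test example: Euclidean (L2) distance to every training point
    dists = np.linalg.norm(X_train - x_test, axis=1)
    nearest = np.argsort(dists)[:k]                       # indices of the K nearest neighbors
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]                      # majority vote

# toy usage with made-up 2-D features and genre labels
X_train = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9], [0.5, 0.9]])
y_train = np.array(["Jazz", "Jazz", "Metal", "Metal", "Classical"])
print(knn_predict(X_train, y_train, np.array([0.15, 0.15]), k=3))   # -> "Jazz"
```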

Page 7

Model-based Approach for Classification

● A learning model $f$ is defined in terms of parameters $\theta$ and predicts the class from $f(x|\theta)$ given the input $x$
○ The model does not memorize the training data

● The model parameters $\theta$ are learned by minimizing the average loss over the training data, given as input data $x$ and output labels $y$ (supervised learning)
○ There are many choices of loss functions that measure the difference between the prediction and the ground truth $y$

$\min_\theta \dfrac{1}{N}\sum_{i=1}^{N} l(f(x_i|\theta),\, y_i)$

Page 8

Linear Model

● Defined as $\hat{y} = f(x|\theta) = Wx + b$

● Each row of $W$ and $b$ ($w_k$, $b_k$) is a "class template" or "class prototype"
○ It separates the examples of the corresponding class from the others

● In the test phase, $f(x; W, b)$ works as a score function
○ Predict the class by choosing the class template that has the maximum correlation with the input: $\hat{y} = \arg\max_k (w_k x + b_k)$

[Figure: $Wx + b = f(x; W, b)$ produces one score per class ("Metal", "Jazz", "Classical"); in the feature space $(x_1, x_2)$, each class boundary $w_k x + b_k = 0$ separates the region $w_k x + b_k > 0$ from $w_k x + b_k < 0$]
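A small NumPy sketch of this scoring step (the weights, bias, and input below are random placeholders): each row of $W$ with its bias acts as a class template, and the predicted class is the one with the maximum score.

```python
import numpy as np

classes = ["Metal", "Jazz", "Classical"]
W = np.random.randn(3, 4)     # one row (class template) per class, 4-D input features
b = np.random.randn(3)

x = np.random.randn(4)        # a single input feature vector
scores = W @ x + b            # f(x; W, b): one score per class
y_hat = classes[np.argmax(scores)]   # pick the template with the maximum score
print(scores, "->", y_hat)
```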

Page 9

Loss functions

● Determine how much penalty is assigned to each example given $W$ and $b$
○ Whether the example is correctly classified or not
○ How far the example is from the boundary
○ The penalty should be continuous at the boundary

[Figure: feature space $(x_1, x_2)$ with positive examples ($y = 1$) on the side $wx + b > 0$ and negative examples ($y = -1$) on the side $wx + b < 0$; each example is marked "Loss: ?", asking how much penalty it should receive relative to the boundary $wx + b = 0$]

Page 10

Loss functions

● Hinge loss: $l(x, y|\theta) = \max(0,\, 1 - y \cdot (wx + b))$

● Logistic loss: $l(x, y|\theta) = \log(1 + e^{-y(wx + b)})$

[Figure: hinge and logistic loss plotted against the margin $y \cdot (wx + b)$; the boundary at 0 separates incorrectly classified examples (left) from correctly classified ones (right)]
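A NumPy sketch of both losses as a function of the margin $m = y(wx + b)$ (the x-axis of the figure above); both assign large penalties to misclassified examples and are continuous at the boundary.

```python
import numpy as np

def hinge_loss(margin):            # margin = y * (w @ x + b)
    return np.maximum(0.0, 1.0 - margin)

def logistic_loss(margin):
    return np.log1p(np.exp(-margin))   # log(1 + e^{-margin})

margins = np.array([-2.0, 0.0, 0.5, 1.0, 3.0])
print(hinge_loss(margins))     # [3.  1.  0.5 0.  0. ]
print(logistic_loss(margins))  # decreases smoothly, never exactly zero
```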

Page 11

Support Vector Machine

● Use the hinge loss function
○ The examples with non-zero loss are called "support vectors". The number of support vectors is much smaller than the size of the training set
○ The boundary is determined to have the maximum margin between the support vectors ("maximum margin classifier")

[Figure: feature space $(x_1, x_2)$ with positive examples ($y = 1$) and negative examples ($y = -1$); the support vectors lie on the margins $wx + b = 1$ and $wx + b = -1$ around the boundary, while examples beyond the margin have zero loss]

Page 12

Support Vector Machine and Kernel Method

● The weight vector is determined by the support vectors

● Kernel method
○ Assume that the input space is mapped into a non-linear space
○ But, instead of computing the non-linear mapping explicitly, we use the inner product of the non-linearly mapped vectors directly ("kernel trick")

$w^T x + b = \left(\sum_{i=1}^{m} \alpha_i\, y^{(i)} x^{(i)}\right)^{T} x + b = \sum_{i=1}^{m} \alpha_i\, y^{(i)} \langle x^{(i)}, x\rangle + b$

$\langle x^{(i)}, x\rangle$ is the inner product between support vector $x^{(i)}$ and input $x$. The kernel replaces it with $K(x^{(i)}, x) = \langle \phi(x^{(i)}), \phi(x)\rangle = \phi(x^{(i)})^{T} \phi(x)$, where $\phi$ is the non-linear mapping of $x$.

Page 13

Support Vector Machine and Kernel Method

● An example of kernel (polynomial kernel)

● Types of kernels
○ Polynomial: $K(x, z) = (x^T z + c)^d$ $(c > 0)$
○ Radial basis function (RBF): $K(x, z) = e^{-\|x - z\|^2 / (2\sigma^2)}$

$K(x, z) = (x^T z)^2 = \left(\sum_i x_i z_i\right)\left(\sum_j x_j z_j\right) = \sum_i \sum_j (x_i x_j)(z_i z_j)$

For a 3-dimensional input example $x = (x_1, x_2, x_3)$ (and likewise a support vector $z$), this equals $\phi(x)^T \phi(z)$ with the non-linear mapping $\phi(x) = (x_1 x_1,\, x_1 x_2,\, x_1 x_3,\, x_2 x_1,\, x_2 x_2,\, x_2 x_3,\, x_3 x_1,\, x_3 x_2,\, x_3 x_3)$. Simply taking the inner product and its square is equivalent to this non-linear mapping.
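A quick NumPy check of this identity (the vectors are made up): $(x^T z)^2$ computed with the kernel trick matches the inner product of the explicit quadratic feature maps $\phi(x)$ and $\phi(z)$.

```python
import numpy as np

def phi(v):
    # explicit quadratic feature map: all pairwise products v_i * v_j
    return np.outer(v, v).ravel()

x = np.array([1.0, 2.0, 3.0])
z = np.array([0.5, -1.0, 2.0])

kernel = (x @ z) ** 2          # kernel trick: inner product, then square
explicit = phi(x) @ phi(z)     # explicit non-linear mapping, then inner product
print(kernel, explicit)        # both give the same value (20.25)
```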

Page 14

Support Vector Machine and Kernel Method

● Transform the linear boundary into a non-linear boundary

source: https://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html#sphx-glr-auto-examples-classification-plot-classifier-comparison-py

Page 15

Multi-Class SVM

● The hinge loss function is generalized to multi-class classification

$l(x, y = k|\theta) = \sum_{j \neq k} \max(0,\, 1 - (w_k x + b_k) + (w_j x + b_j))$

The sum collects the loss contributions from the other classes $j \neq k$.

[Figure: three-class example ("Metal", "Jazz", "Classical") in the feature space $(x_1, x_2)$; each class $k$ has its own boundary $w_k x + b_k = 0$ separating $w_k x + b_k > 0$ from $w_k x + b_k < 0$]
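A NumPy sketch of this multi-class hinge loss; the score vector and the true-class index below are placeholders.

```python
import numpy as np

def multiclass_hinge(scores, k):
    # scores[j] = w_j @ x + b_j ; k is the index of the true class
    margins = np.maximum(0.0, 1.0 - scores[k] + scores)
    margins[k] = 0.0                  # no contribution from the true class itself
    return margins.sum()

scores = np.array([2.0, 0.5, 1.8])    # e.g. scores for Metal, Jazz, Classical
print(multiclass_hinge(scores, k=0))  # 0.8: the true class barely beats class 2
```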

Page 16

Logistic Regression

● Define the prediction from the conditional probability $P(y|x)$
○ For binary classification, we use the Bernoulli distribution: $P(y = 1|x) = g$, $P(y = -1|x) = 1 - g$
○ Parameterize $g$ with the linear function of the input and the sigmoid function

$P(y = 1|x) = g(z) = g(wx + b) = \dfrac{1}{1 + e^{-z}} = \dfrac{1}{1 + e^{-(wx + b)}}$

[Figure: the sigmoid function $g(z)$; the boundary $wx + b = 0$ corresponds to $g(z) = 0.5$, with $g(z) > 0.5$ where $wx + b > 0$ and $g(z) < 0.5$ where $wx + b < 0$]

Page 17

Logistic Regression

● The logistic loss is defined as the cross-entropy between the probability distribution and the ground truth (or true distribution)
○ This also corresponds to the negative log-likelihood

$l(x, y|\theta) = \sum_k q_k(x)\,[-\log p_k(x)]$

where the true distribution is $q(x) = [1, 0]$ for positive examples and $q(x) = [0, 1]$ for negative examples. This gives $-\log g(wx + b)$ for positive examples and $-\log(1 - g(wx + b))$ for negative examples, both of which can be written as $\log(1 + e^{-y(wx + b)})$.
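A NumPy check (with made-up $w$, $b$, $x$, $y$) that the cross-entropy form and the compact form $\log(1 + e^{-y(wx + b)})$ give the same value for labels $y \in \{+1, -1\}$.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = np.array([0.8, -0.4]), 0.1
x, y = np.array([1.5, 2.0]), -1          # y in {+1, -1}

z = w @ x + b
# cross-entropy form: -log g(z) for positives, -log(1 - g(z)) for negatives
ce = -np.log(sigmoid(z)) if y == 1 else -np.log(1.0 - sigmoid(z))
# compact form
compact = np.log1p(np.exp(-y * z))
print(ce, compact)                        # same value
```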

Page 18

Multi-Class Logistic Regression

● Softmax: multi-class extension of the logistic regression
○ From the Bernoulli distribution to the multinomial distribution

● Again, the loss function is defined as the cross-entropy between the multinomial distribution and the ground truth
○ Also, it is the same as the negative log-likelihood of the training data

$P(y = k|x) = \dfrac{e^{w_k x + b_k}}{\sum_j e^{w_j x + b_j}}$

$l(x, y = k|\theta) = -(w_k x + b_k) + \log\left(\sum_j e^{w_j x + b_j}\right)$

[Figure: the "logits" $w_k x + b_k$ for each class are exponentiated and normalized, giving the softmax of the logits $e^{w_k x + b_k} / \sum_j e^{w_j x + b_j}$]
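A NumPy sketch matching the two formulas above: softmax over made-up logits, and the negative log-likelihood of the true class computed both ways.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())     # shift for numerical stability
    return e / e.sum()

logits = np.array([2.0, 0.5, 1.0])        # w_k x + b_k for each class
k = 0                                     # index of the true class

probs = softmax(logits)
loss = -np.log(probs[k])                  # cross-entropy / negative log-likelihood
# equivalently: -(w_k x + b_k) + log(sum_j e^{w_j x + b_j})
loss_direct = -logits[k] + np.log(np.exp(logits).sum())
print(probs, loss, loss_direct)
```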

Page 19

Multi-Layer Perceptron (MLP)

● A stack of linear transforms and non-linear functions
○ $g_i(x)$ is a non-linear function such as sigmoid, tanh, or the rectified linear unit (ReLU)
○ This is a neural network!

● Learn non-linear boundaries like the kernel methods in SVM

● We will talk more about MLP in the deep learning part!

$f_1(x) = W_1 x + b_1, \quad z_1 = g_1(f_1(x))$
$f_2(x) = W_2 z_1 + b_2, \quad z_2 = g_2(f_2(x))$
$f_3(x) = W_3 z_2 + b_3$

source: https://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html#sphx-glr-auto-examples-classification-plot-classifier-comparison-py
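A NumPy sketch of this three-layer forward pass with ReLU as the non-linearity; the layer sizes and random weights are arbitrary placeholders.

```python
import numpy as np

def relu(v):
    return np.maximum(0.0, v)

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)   # layer 1: 4 -> 8
W2, b2 = rng.standard_normal((8, 8)), np.zeros(8)   # layer 2: 8 -> 8
W3, b3 = rng.standard_normal((3, 8)), np.zeros(3)   # layer 3: 8 -> 3 class scores

x = rng.standard_normal(4)
z1 = relu(W1 @ x + b1)        # z1 = g1(f1(x))
z2 = relu(W2 @ z1 + b2)       # z2 = g2(f2(x))
scores = W3 @ z2 + b3         # f3(x): one score (logit) per class
print(scores)
```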

Page 20

Regularization

● More complicated classifiers with many parameters can learn non-linear boundaries better, but they can also overfit to the training data
○ Use fewer parameters or add a regularization term
○ L2 or L1 regularization is a popular choice

● We will talk more about regularization in the deep learning part!

$\min_\theta \dfrac{1}{N}\sum_{i=1}^{N} l(f(x_i|\theta),\, y_i) + \lambda \cdot R(\theta)$

L2 regularization: $R(\theta) = \sum_k W_k^2$; L1 regularization: $R(\theta) = \sum_k |W_k|$

[Figure: "Jazz" and "Classical" examples in the feature space $(x_1, x_2)$; an overly wiggly boundary that follows every training example illustrates over-fitting]
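A NumPy sketch of the regularized objective above: the average per-example loss plus an L2 penalty on the weights (all values, including $\lambda$, are placeholders).

```python
import numpy as np

def l2_penalty(W):
    return np.sum(W ** 2)                 # R(theta) = sum_k W_k^2

def regularized_loss(per_example_losses, W, lam=0.1):
    # average data loss + lambda * R(theta)
    return per_example_losses.mean() + lam * l2_penalty(W)

losses = np.array([0.3, 1.2, 0.0, 0.5])   # l(f(x_i|theta), y_i) for each example
W = np.random.randn(3, 4)
print(regularized_loss(losses, W, lam=0.1))
```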

Page 21

Learning: Gradient descent

● Update rule for learning (or optimization)

○ Step 1: initialize the parameters ($\theta_0$) with random values
○ Step 2: compute the gradient at the current parameter point using the training set
○ Step 3: update the parameters with the gradient descent rule
○ Step 4: repeat 2 and 3 while monitoring the loss on the training set and validation set
○ Step 5: stop the iteration based on stopping criteria (e.g., the validation loss increases for 5 consecutive epochs)

$\theta_t = \theta_{t-1} - \mu \left.\dfrac{\partial l(\theta)}{\partial \theta}\right|_{\theta = \theta_{t-1}}$

where $\mu$ is the learning rate and $\partial l(\theta)/\partial \theta$ is the gradient of the loss function, with $l(\theta) = \sum_{i=1}^{N} l(f(x_i|\theta),\, y_i)$.
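A NumPy sketch of these steps for binary logistic regression on toy data (the dataset, learning rate, and iteration count are made up; the bias term is omitted to keep the sketch short).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)     # labels in {+1, -1}

theta = rng.standard_normal(2)     # Step 1: random initialization
mu = 0.1                           # learning rate

for step in range(200):
    margins = y * (X @ theta)
    loss = np.log1p(np.exp(-margins)).mean()        # average logistic loss
    # Step 2: gradient of the average logistic loss w.r.t. theta
    grad = -(X.T @ (y / (1.0 + np.exp(margins)))) / len(y)
    theta = theta - mu * grad                        # Step 3: gradient descent update
    if step % 50 == 0:
        print(step, loss)                            # Step 4: monitor the loss
```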

Page 22

Learning Parameters

● As the iterations go on, the parameters move towards a local minimum

[Figure: loss contours over the parameters $(w_1, w_2)$; the iterates $\theta_0, \theta_1, \theta_2, \ldots$ move toward the optimum $w^*$]

Page 23

Learning Parameters

● Training data size used per update
○ Stochastic gradient descent (SGD): $N = 1$
○ Mini-batch: $N$ = a small number of sampled data (e.g., 32, 64, or 128 samples)
○ Batch update: use the entire training data

● The loss function changes depending on the data!

$l(\theta) = \sum_{i=1}^{N} l(f(x_i|\theta),\, y_i)$
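The three variants differ only in how many examples contribute to each gradient step; the sampler below is an illustrative sketch, not a specific library API.

```python
import numpy as np

def sample_batch(X, y, batch_size, rng):
    # SGD: batch_size = 1; mini-batch: e.g. 32/64/128; batch update: batch_size = len(X)
    idx = rng.choice(len(X), size=batch_size, replace=False)
    return X[idx], y[idx]

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 8))
y = rng.integers(0, 3, size=1000)

X_b, y_b = sample_batch(X, y, batch_size=64, rng=rng)   # one mini-batch
print(X_b.shape, y_b.shape)    # (64, 8) (64,)
```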

Page 24

Learning Parameters

● As the training data size per update increases, the loss fluctuates less

[Figure: parameter trajectories toward $w^*$ over the loss contours in $(w_1, w_2)$; stochastic gradient descent (left) follows a noisier path than the mini-batch update (right)]