
Page 1: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

1 1

Kenji Fukumizu

The Institute of Statistical Mathematics / Graduate University for Advanced Studies

September 6-7, 2012 Machine Learning Summer School 2012, Kyoto

Version 2012.09.04

The latest version of slides is downloadable at http://www.ism.ac.jp/~fukumizu/MLSS2012/

Kernel Methods for Statistical Learning

Page 2: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Lecture Plan I. Introduction to kernel methods

II. Various kernel methods kernel PCA, kernel CCA, kernel ridge regression, etc

III. Support vector machine A brief introduction to SVM

IV. Theoretical backgrounds of kernel methods Mathematical aspects of positive definite kernels

V. Nonparametric inference with positive definite kernels Recent advances of kernel methods

2

Page 3: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

General references (More detailed lists are given at the end of each section)

– Schölkopf, B. and A. Smola. Learning with Kernels. MIT Press. 2002.

– Lecture slides (more detailed than this course) This page contains Japanese information, but the slides are written in English. Slides: 1, 2, 3, 4, 5, 6, 7, 8

– For Japanese only (Sorry!):

• Fukumizu, "Introduction to Kernel Methods: Data Analysis with Positive Definite Kernels" (in Japanese), Asakura Shoten (2010).

• Akaho, "Kernel Multivariate Analysis: New Developments in Nonlinear Data Analysis" (in Japanese), Iwanami Shoten (2008). 3

Page 4: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

1

Kenji Fukumizu

The Institute of Statistical Mathematics / Graduate University for Advanced Studies

September 6-7Machine Learning Summer School 2012, Kyoto

I. Introduction to Kernel Methods

I-1

Page 5: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

2

Outline

1. Linear and nonlinear data analysis

2. Principles of kernel methods

3. Positive definite kernels and feature spaces

I-2

Page 6: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

3

Linear and Nonlinear Data Analysis

I-3

Page 7: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

What is data analysis?– Analysis of data is a process of inspecting, cleaning, transforming,

and modeling data with the goal of highlighting useful information, suggesting conclusions, and supporting decision making.

–Wikipedia

I-4

Page 8: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Linear data analysis
– 'Table' of numbers → matrix expression:

X = ( X_1^{(1)} ⋯ X_m^{(1)} ; X_1^{(2)} ⋯ X_m^{(2)} ; ⋯ ; X_1^{(N)} ⋯ X_m^{(N)} )   (m-dimensional, N data)

– Linear algebra is used for the methods of analysis:
• Correlation,
• Linear regression analysis,
• Principal component analysis,
• Canonical correlation analysis, etc.

I-5

Page 9: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Example 1: Principal component analysis (PCA)
PCA: project data onto the subspace with largest variance.

1st direction = argmax_{‖a‖=1} Var[a^T X],

Var[a^T X] = (1/N) Σ_{i=1}^N ( a^T X^{(i)} − (1/N) Σ_{j=1}^N a^T X^{(j)} )² = a^T V_{XX} a,

where V_{XX} = (1/N) Σ_{i=1}^N ( X^{(i)} − (1/N) Σ_j X^{(j)} )( X^{(i)} − (1/N) Σ_j X^{(j)} )^T is the (empirical) covariance matrix of X.

I-6

Page 10: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

PCA ⇒ eigenproblem of the covariance matrix:

– 1st principal direction = argmax_{‖a‖=1} a^T V_{XX} a = u_1, the unit eigenvector w.r.t. the largest eigenvalue of V_{XX}.

– p-th principal direction = unit eigenvector w.r.t. the p-th largest eigenvalue of V_{XX}.

I-7
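For concreteness, the eigenproblem above can be written out in a few lines of NumPy. This is an illustrative sketch only; the function and variable names are not from the slides.

```python
import numpy as np

def linear_pca(X, n_components=2):
    """PCA as the eigenproblem of the empirical covariance matrix V_XX.

    X: (N, m) array of N data points in R^m.
    Returns the projections onto the leading principal directions u_1, ..., u_p.
    """
    Xc = X - X.mean(axis=0)               # center the data
    V = Xc.T @ Xc / X.shape[0]            # empirical covariance matrix V_XX
    eigvals, eigvecs = np.linalg.eigh(V)  # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]     # largest eigenvalues first
    U = eigvecs[:, order[:n_components]]  # u_1, ..., u_p
    return Xc @ U                         # principal component scores
```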

Page 11: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Example 2: Linear classification
– Binary classification

Input data: X = (X^{(1)}, …, X^{(N)}), X^{(i)} ∈ R^m; class labels: Y = (Y^{(1)}, …, Y^{(N)}), Y^{(i)} ∈ {±1}.

Find a linear classifier h(x) = sgn(a^T x + b) so that h(X^{(i)}) = Y^{(i)} for all (or most) i.

– Examples: Fisher's linear discriminant analysis, linear SVM, etc.

I-8

Page 12: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Are linear methods enough?

[Figure: two-class data in (x1, x2), linearly inseparable; after the transform, plotted in (z1, z2, z3), the same data become linearly separable.]

transform: (z1, z2, z3) = (x1², x2², √2 x1 x2)

Watch the following movie! http://jp.youtube.com/watch?v=3liCbRZPrZA

I-9

Page 13: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Another example: correlation

ρ_XY = Cov[X, Y] / √( Var[X] Var[Y] ) = E[(X − E[X])(Y − E[Y])] / √( E[(X − E[X])²] E[(Y − E[Y])²] )

[Figure: scatter plot of (X, Y) with ρ = 0.94.]

I-10

Page 14: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

[Figure: scatter plots of (X, Y) and of (X², Y).]

ρ(X, Y) = 0.17,   ρ(X², Y) = 0.96

transform: (X, Y) → (X², Y)

I-11

Page 15: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Nonlinear transform helps!

Analysis of data is a process of inspecting, cleaning, transforming,and modeling data with the goal of highlighting useful information, suggesting conclusions, and supporting decision making.

– Wikipedia.

Kernel method = a systematic way of transforming data into a high-dimensional feature space to extract nonlinearity or higher-order moments of data.

I-12

Page 16: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Principles of kernel methods

I-13

Page 17: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Kernel method: Big picture
– Idea of the kernel method:

[Figure: feature map Φ from the space of original data to the feature space H_k; data x_i, x_j are mapped to Φ(x_i), Φ(x_j), and the inner product ⟨Φ(x_i), Φ(x_j)⟩ is used. Do linear analysis in the feature space! (e.g. SVM)]

– What kind of space is appropriate as a feature space?
• It should incorporate various nonlinear information of the original data.
• The inner product should be computable; it is essential for many linear methods.

I-14

Page 18: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Computational issue
– For example, how about using a power series expansion?

(X, Y, Z) → (X, Y, Z, X², Y², Z², XY, YZ, ZX, …)

– But many recent data are high-dimensional (e.g. microarrays, images, etc.), and the above expansion is intractable!

e.g. up to 2nd-order moments of 10,000-dimensional data:
dim. of feature space = C(10000, 1) + C(10000, 2) = 50,005,000 (!)

– A cleverer way is needed ⇒ kernel method.

I-15
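The count above is easy to verify; a one-line check that simply reproduces the arithmetic on the slide:

```python
from math import comb

d = 10_000
# first- and second-order features counted as on the slide: C(10000, 1) + C(10000, 2)
print(comb(d, 1) + comb(d, 2))  # 50005000
```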

Page 19: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Feature space by positive definite kernel
– Feature map from the original space Ω to the feature space H:

Φ: Ω → H,   X_1, …, X_n ↦ Φ(X_1), …, Φ(X_n)

– With a special choice of the feature space, there is a function k(x, y) (positive definite kernel) such that

⟨Φ(X_i), Φ(X_j)⟩ = k(X_i, X_j)   ("kernel trick")

– Many linear methods use only the inner products of the data and do not need the explicit form of the vector Φ (e.g. PCA).

I-16

Page 20: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Positive definite kernel
Definition. Ω: set. k: Ω × Ω → R is a positive definite kernel if
1) (symmetry) k(x, y) = k(y, x);
2) (positivity) for arbitrary x_1, …, x_n ∈ Ω, the Gram matrix

( k(x_i, x_j) )_{i,j=1,…,n}

is positive semidefinite, i.e., Σ_{i,j=1}^n c_i c_j k(x_i, x_j) ≥ 0 for any c_1, …, c_n ∈ R.

I-17

Page 21: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Examples: positive definite kernels on R^m (proofs are given in Section IV)

• Euclidean inner product: k(x, y) = x^T y
• Gaussian RBF kernel: k_G(x, y) = exp( −‖x − y‖² / (2σ²) )   (σ > 0)
• Laplacian kernel: k_L(x, y) = exp( −α Σ_{i=1}^m |x_i − y_i| )   (α > 0)
• Polynomial kernel: k_P(x, y) = (x^T y + c)^d   (c ≥ 0, d ∈ N)

I-18
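These kernels are straightforward to evaluate on data. A minimal NumPy sketch computing Gram matrices; the function names and parameter defaults are illustrative, not from the slides.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gram matrix of k_G(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq / (2 * sigma**2))

def laplacian_kernel(X, Y, alpha=1.0):
    """Gram matrix of k_L(x, y) = exp(-alpha * sum_i |x_i - y_i|)."""
    dist = np.abs(X[:, None, :] - Y[None, :, :]).sum(axis=2)
    return np.exp(-alpha * dist)

def polynomial_kernel(X, Y, c=1.0, d=3):
    """Gram matrix of k_P(x, y) = (x^T y + c)^d."""
    return (X @ Y.T + c) ** d
```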

Page 22: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Proposition 1.1
Let V be a vector space with inner product ⟨·,·⟩ and Φ: Ω → V be a map (feature map). If k: Ω × Ω → R is defined by

k(x, y) = ⟨Φ(x), Φ(y)⟩,

then k(x, y) is necessarily positive definite.

– Positive definiteness is necessary.
– Proof) Σ_{i,j} c_i c_j k(x_i, x_j) = ⟨ Σ_i c_i Φ(x_i), Σ_j c_j Φ(x_j) ⟩ = ‖ Σ_i c_i Φ(x_i) ‖² ≥ 0.

I-19

Page 23: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

– Positive definiteness is also sufficient.

Theorem 1.2 (Moore–Aronszajn)
For a positive definite kernel k on Ω, there is a Hilbert space H_k (reproducing kernel Hilbert space, RKHS) that consists of functions on Ω such that
1) k(·, x) ∈ H_k for any x ∈ Ω;
2) span{ k(·, x) : x ∈ Ω } is dense in H_k;
3) (reproducing property) ⟨f, k(·, x)⟩ = f(x) for any f ∈ H_k, x ∈ Ω.

*Hilbert space: vector space with an inner product such that the induced topology is complete.

I-20

Page 24: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Feature map by positive definite kernel

– Feature space = RKHS. Feature map:

Φ: Ω → H_k,  x ↦ k(·, x);   X_1, …, X_n ↦ k(·, X_1), …, k(·, X_n)

– Kernel trick: by the reproducing property,

⟨Φ(x), Φ(y)⟩ = ⟨k(·, x), k(·, y)⟩ = k(x, y).

– Prepare only a positive definite kernel: we do not need an explicit form of the feature vector or feature space. All we need for kernel methods are the kernel values k(x, y).

I-21

Page 25: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

1

Kenji Fukumizu

The Institute of Statistical Mathematics / Graduate University for Advanced Studies

September 6-7 Machine Learning Summer School 2012, Kyoto

II. Various Kernel Methods

1

Page 26: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Outline 1. Kernel PCA

2. Kernel CCA

3. Kernel ridge regression

4. Some topics on kernel methods

II-2

Page 27: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Kernel Principal Component Analysis

II-3

Page 28: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Principal Component Analysis PCA (review)
– Linear method for dimension reduction of data.
– Project the data in the directions of large variance.

1st principal axis = argmax_{‖a‖=1} Var[a^T X],

Var[a^T X] = (1/n) Σ_{i=1}^n ( a^T X_i − (1/n) Σ_{j=1}^n a^T X_j )² = a^T V_{XX} a,

where V_{XX} = (1/n) Σ_{i=1}^n ( X_i − (1/n) Σ_j X_j )( X_i − (1/n) Σ_j X_j )^T.

II-4

Page 29: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

From PCA to Kernel PCA
– Kernel PCA: nonlinear dimension reduction of data (Schölkopf et al. 1998).
– Do PCA in the feature space:

max_{‖a‖=1} Var[a^T X]   ⟶   max_{‖f‖_H=1} Var[⟨f, Φ(X)⟩],

where
Var[a^T X] = (1/n) Σ_{i=1}^n ( a^T X_i − (1/n) Σ_{s=1}^n a^T X_s )²,
Var[⟨f, Φ(X)⟩] = (1/n) Σ_{i=1}^n ( ⟨f, Φ(X_i)⟩ − (1/n) Σ_{s=1}^n ⟨f, Φ(X_s)⟩ )².

II-5

Page 30: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

It is sufficient to assume

f = Σ_{i=1}^n c_i ( Φ(X_i) − (1/n) Σ_{s=1}^n Φ(X_s) ).

Orthogonal directions to the data can be neglected: for f = Σ_{i=1}^n c_i ( Φ(X_i) − (1/n) Σ_{s=1}^n Φ(X_s) ) + f_⊥, where f_⊥ is orthogonal to span{ Φ(X_i) − (1/n) Σ_{s=1}^n Φ(X_s) }_{i=1}^n, the objective function of kernel PCA does not depend on f_⊥.

Then,

Var[⟨f, Φ̃(X)⟩] = (1/n) c^T K̃_X² c,   ‖f‖_H² = c^T K̃_X c,   [Exercise]

where Φ̃(X_i) := Φ(X_i) − (1/n) Σ_{s=1}^n Φ(X_s) (centered feature vector) and (K̃_X)_{ij} := ⟨Φ̃(X_i), Φ̃(X_j)⟩ (centered Gram matrix).

II-6

Page 31: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Objective function of kernel PCA

max_c c^T K̃_X² c   subject to   c^T K̃_X c = 1,

where
(K̃_X)_{ij} = k(X_i, X_j) − (1/n) Σ_{s=1}^n k(X_i, X_s) − (1/n) Σ_{t=1}^n k(X_t, X_j) + (1/n²) Σ_{s,t=1}^n k(X_s, X_t).

The centered Gram matrix K̃_X is expressed with the Gram matrix K_X = ( k(X_i, X_j) )_{ij} as

K̃_X = ( I_n − (1/n) 1_n 1_n^T ) K_X ( I_n − (1/n) 1_n 1_n^T ),   [Exercise]

where 1_n = (1, …, 1)^T ∈ R^n and I_n is the unit matrix.

II-7
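In matrix form, the centering above is one line of NumPy; an illustrative sketch:

```python
import numpy as np

def center_gram(K):
    """Centered Gram matrix K~ = (I_n - 1_n 1_n^T / n) K (I_n - 1_n 1_n^T / n)."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return H @ K @ H
```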

Page 32: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

– Kernel PCA can be solved by eigendecomposition.

– Kernel PCA algorithm
• Compute the centered Gram matrix K̃_X.
• Eigendecomposition: K̃_X = Σ_{i=1}^N λ_i u_i u_i^T, with eigenvalues λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_N ≥ 0 and unit eigenvectors u_1, …, u_N.
• p-th principal component of X_i: √λ_p (u_p)_i.

II-8
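The three steps of the algorithm translate directly into NumPy. A minimal sketch, assuming the Gram matrix K = (k(X_i, X_j))_ij has already been computed (e.g. with the gaussian_kernel sketch from Section I); names are illustrative.

```python
import numpy as np

def kernel_pca(K, n_components=2):
    """Kernel PCA scores from a Gram matrix K = (k(X_i, X_j))_ij.

    Returns an (n, n_components) array whose p-th column is sqrt(lambda_p) * u_p.
    """
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H                          # centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(Kc)   # ascending order
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    eigvals = np.clip(eigvals, 0.0, None)   # guard against tiny negative values
    return eigvecs[:, :n_components] * np.sqrt(eigvals[:n_components])
```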

Page 33: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Derivation of kernel methods in general
– Consider feature vectors given by the kernel.
– A linear method is applied to the feature vectors (kernelization).
– Typically, only the inner products

⟨Φ(X_i), Φ(X_j)⟩ = k(X_i, X_j)   and   ⟨f, Φ(X_i)⟩

are used to express the objective function of the method.
– The solution is given in the form

f = Σ_{i=1}^n c_i Φ(X_i)

(representer theorem, see Sec. IV), and thus everything is written in terms of Gram matrices.

These steps are common to the derivation of any kernel method.

II-9

Page 34: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Example of Kernel PCA: wine data (from UCI repository)
13-dimensional chemical measurements for three types of wine; 178 data. Class labels are NOT used in PCA, but are shown in the figures.

[Figure: first two principal components, linear PCA vs. kernel PCA with Gaussian kernel (σ = 3).]

II-10

Page 35: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

[Figure: kernel PCA (Gaussian kernel, k_G(x, y) = exp( −‖x − y‖² / (2σ²) )) of the wine data for σ = 2, 4, 5.]

II-11

Page 36: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Noise Reduction with Kernel PCA
– PCA can be used for noise reduction (principal directions represent signal, other directions noise).
– Apply kernel PCA for noise reduction:
• Compute the d-dimensional subspace V_d spanned by the d principal directions.
• For a new data point x, let Gx be the projection of Φ(x) onto V_d: the noise-reduced feature vector.
• Compute a preimage in the data space for the noise-reduced feature vector Gx:

x̂ = argmin_{x'} ‖ Φ(x') − Gx ‖².

[Figure: Φ(x), its projection Gx onto V_d, and the preimage x.]

Note: Gx is not necessarily given as an image of Φ.

II-12

Page 37: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

USPS data

Noise reduced images(linear PCA)

Noisy images

Noise reduced images (kernel PCA, Gaussian kernel)

Original data (NOT used for PCA)

Generated by Matlab stprtool (by V. Franc)

II-13

Page 38: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Properties of Kernel PCA

– Nonlinear features can be extracted.

– Can be used for a preprocessing of other analysis like classification. (dimension reduction / feature extraction)

– The results depend on the choice of kernel and kernel parameters. Interpreting the results may not be straightforward.

– How to choose a kernel and kernel parameter? • Cross-validation is not straightforward unlike SVM. • If it is a preprocessing, the performance of the final analysis

should be maximized.

II-14

Page 39: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Kernel Canonical Correlation Analysis

II-15

Page 40: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Canonical Correlation Analysis
– Canonical correlation analysis (CCA, Hotelling 1936): linear dependence of two multivariate random vectors.
• Data (X_1, Y_1), …, (X_N, Y_N); X_i: m-dimensional, Y_i: ℓ-dimensional.

Find the directions a and b so that the correlation of a^T X and b^T Y is maximized:

ρ = max_{a,b} Corr[a^T X, b^T Y] = max_{a,b} Cov[a^T X, b^T Y] / √( Var[a^T X] Var[b^T Y] ).

[Figure: X projected by a onto a^T X, Y projected by b onto b^T Y.]

II-16

Page 41: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Solution of CCA

max_{a,b} a^T V_{XY} b   subject to   a^T V_{XX} a = b^T V_{YY} b = 1.

– Rewritten as a generalized eigenproblem:

( O      V_{XY} ) (a)        ( V_{XX}   O    ) (a)
( V_{YX}   O    ) (b)  =  ρ  (   O    V_{YY} ) (b)

[Exercise: derive this. (Hint: use the Lagrange multiplier method.)]

– Solution:

a = V_{XX}^{−1/2} u_1,   b = V_{YY}^{−1/2} v_1,

where u_1 (resp. v_1) is the first left (resp. right) singular vector of V_{XX}^{−1/2} V_{XY} V_{YY}^{−1/2}.

II-17

Page 42: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Kernel CCA
– Kernel CCA (Akaho 2000, Melzer et al. 2001, Bach & Jordan 2002)
• Dependence (not only correlation) of two random variables.
• Data: (X_1, Y_1), …, (X_N, Y_N), arbitrary variables.
• Consider CCA for the feature vectors given by kernels k_X and k_Y:

X_1, …, X_N ↦ Φ_X(X_1), …, Φ_X(X_N) ∈ H_X,   Y_1, …, Y_N ↦ Φ_Y(Y_1), …, Φ_Y(Y_N) ∈ H_Y.

max_{f∈H_X, g∈H_Y} Cov[f(X), g(Y)] / √( Var[f(X)] Var[g(Y)] )
 = max_{f∈H_X, g∈H_Y} Σ_{i=1}^N ⟨f, Φ_X(X_i)⟩ ⟨Φ_Y(Y_i), g⟩ / √( Σ_{i=1}^N ⟨f, Φ_X(X_i)⟩² · Σ_{i=1}^N ⟨g, Φ_Y(Y_i)⟩² )

[Figure: X ↦ Φ_X(X) ↦ f(X), Y ↦ Φ_Y(Y) ↦ g(Y).]

II-18

Page 43: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

– We can assume f = Σ_{i=1}^N α_i Φ_X(X_i) and g = Σ_{i=1}^N β_i Φ_Y(Y_i):

max_{α∈R^N, β∈R^N} α^T K_X K_Y β / √( α^T K_X² α · β^T K_Y² β ),

where K_X and K_Y are the centered Gram matrices.

– Regularization: to avoid trivial solutions,

max_{f∈H_X, g∈H_Y} Σ_{i=1}^N ⟨f, Φ_X(X_i)⟩ ⟨Φ_Y(Y_i), g⟩ / √( ( Σ_{i=1}^N ⟨f, Φ_X(X_i)⟩² + ε_N ‖f‖²_{H_X} ) ( Σ_{i=1}^N ⟨g, Φ_Y(Y_i)⟩² + ε_N ‖g‖²_{H_Y} ) ).

– Solution: generalized eigenproblem (solved as in kernel PCA):

( O        K_X K_Y ) (α)        ( K_X² + ε_N K_X        O        ) (α)
( K_Y K_X     O    ) (β)  =  ρ  (       O         K_Y² + ε_N K_Y ) (β)

II-19

Page 44: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Application of KCCA
– Application to image retrieval (Hardoon et al. 2004).
• X_i: image, Y_i: corresponding text (extracted from the same web pages).
• Idea: use the d eigenvectors f_1, …, f_d and g_1, …, g_d as feature spaces which contain the dependence between X and Y.
• Given a new word Y_new, compute its feature vector ( ⟨g_1, Φ_Y(Y_new)⟩, …, ⟨g_d, Φ_Y(Y_new)⟩ ), and find the image X_i whose feature vector ( ⟨f_1, Φ_X(X_i)⟩, …, ⟨f_d, Φ_X(X_i)⟩ ) has the highest inner product with it.

[Figure: images X_i and a text Y mapped by Φ_X and Φ_Y; example text: "at phoenix sky harbor on july 6, 1997. 757-2s7, n907wa, ...".]

II-20

Page 45: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

– Example: • Gaussian kernel for images. • Bag-of-words kernel (frequency of words) for texts.

Text -- “height: 6-11, weight: 235 lbs, position: forward, born: september 18, 1968, split, croatia college: none” Extracted images

II-21

Page 46: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Kernel Ridge Regression

II-22

Page 47: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Ridge regression
(X_1, Y_1), …, (X_n, Y_n): data (X_i ∈ R^m, Y_i ∈ R).
Ridge regression: linear regression with an L² penalty,

min_a Σ_{i=1}^n |Y_i − a^T X_i|² + λ‖a‖².

(The constant term is omitted for simplicity.)

– Solution (minimizer of a quadratic function):

â = ( X^T X + λ I_m )^{−1} X^T Y,

where X = ( X_1^T; …; X_n^T ) ∈ R^{n×m}, Y = ( Y_1, …, Y_n )^T ∈ R^n, and V_{XX} = (1/n) X^T X.

– Ridge regression is preferred when V_{XX} is (close to) singular.

II-23

Page 48: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Kernel ridge regression
– (X_1, Y_1), …, (X_n, Y_n): X arbitrary data, Y ∈ R.
– Kernelization of ridge regression with a positive definite kernel k for X (ridge regression on H):

min_{f∈H} Σ_{i=1}^n |Y_i − ⟨f, Φ(X_i)⟩_H|² + λ‖f‖²_H,

or equivalently (nonlinear ridge regression),

min_{f∈H} Σ_{i=1}^n |Y_i − f(X_i)|² + λ‖f‖²_H.

II-24

Page 49: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

– The solution has the form f = Σ_{j=1}^n c_j Φ(X_j):

Let f = Σ_{i=1}^n c_i Φ(X_i) + f_⊥ = f_Φ + f_⊥ (f_Φ ∈ span{Φ(X_i)}_{i=1}^n, f_⊥ in the orthogonal complement). Then

objective = Σ_{i=1}^n ( Y_i − ⟨f_Φ + f_⊥, Φ(X_i)⟩ )² + λ‖f_Φ + f_⊥‖²
          = Σ_{i=1}^n ( Y_i − ⟨f_Φ, Φ(X_i)⟩ )² + λ( ‖f_Φ‖² + ‖f_⊥‖² ).

The 1st term does not depend on f_⊥, and the 2nd term is minimized when f_⊥ = 0.

– Objective function in terms of c:

‖Y − K_X c‖² + λ c^T K_X c.

– Solution:

ĉ = ( K_X + λ I_n )^{−1} Y,   f̂(x) = Y^T ( K_X + λ I_n )^{−1} k(x),

where k(x) = ( k(x, X_1), …, k(x, X_n) )^T.

II-25
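A minimal NumPy sketch of the solution above. The function names are illustrative; the Gram matrix K and the vector k(x) are assumed to come from a kernel such as the gaussian_kernel sketch of Section I.

```python
import numpy as np

def kernel_ridge_fit(K, y, lam):
    """Coefficients c = (K + lam I_n)^(-1) y of kernel ridge regression."""
    n = K.shape[0]
    return np.linalg.solve(K + lam * np.eye(n), y)

def kernel_ridge_predict(c, k_x):
    """Prediction f(x) = sum_j c_j k(x, X_j), given k_x = (k(x, X_1), ..., k(x, X_n))."""
    return k_x @ c
```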

Page 50: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Regularization – The minimization

min_f Σ_{i=1}^n ( Y_i − f(X_i) )²

may be attained with zero errors. But the function may not be unique.

– Regularization

min 𝑓∈𝐻

∑ |𝑌𝑖 − 𝑓(𝑋𝑖)|2𝑛𝑖=1 + 𝜆 𝑓 𝐻

2

• Regularization with smoothness

penalty is preferred for uniqueness and smoothness.

• Link with some RKHS norm and smoothness is discussed in Sec. IV.

II-26

Page 51: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Comparison: kernel ridge regression vs. local linear regression

Y = 1 / (1.5 + ‖X‖²) + Z,   X ~ N(0, I_d),   Z ~ N(0, 0.1²),   n = 100, 500 runs.
Kernel ridge regression with Gaussian kernel; local linear regression with Epanechnikov kernel ('locfit' in R is used). Bandwidth parameters are chosen by CV.

[Figure: mean square errors vs. dimension of X (1, 10, 100) for the kernel method and local linear regression.]

II-27

Page 52: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Local linear regression (e.g., Fan and Gijbels 1996)

– K: smoothing kernel (K(x) ≥ 0, ∫ K(x) dx = 1, not necessarily positive definite).
– Local linear regression: E[Y | X = x_0] is estimated by the â solving

min_{a,b} Σ_{i=1}^n ( Y_i − a − b^T(X_i − x_0) )² K_h(X_i − x_0),   K_h(x) = h^{−d} K(x/h).

• For each x_0, this minimization can be solved by linear algebra (weighted least squares).
• The statistical properties of this estimator are well studied.
• For one-dimensional X, it works nicely with some theoretical optimality.
• But it is weak for high-dimensional data.

II-28

Page 53: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Some topics on kernel methods

II-29

• Representer theorem • Structured data • Kernel choice • Low rank approximation

Page 54: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Representer theorem
(X_1, Y_1), …, (X_n, Y_n): data. k: positive definite kernel for X, H: corresponding RKHS. Ψ: monotonically increasing function on R_+.

Theorem 2.1 (representer theorem, Kimeldorf & Wahba 1970)
The solution to the minimization problem

min_{f∈H} F( (X_1, Y_1, f(X_1)), …, (X_n, Y_n, f(X_n)) ) + Ψ( ‖f‖_H )

is attained by f = Σ_{i=1}^n c_i Φ(X_i) with some c_1, …, c_n ∈ R.

The proof is essentially the same as the one for kernel ridge regression. [Exercise: complete the proof.]

II-30

Page 55: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Structured Data
– Structured data: non-vectorial data with some structure.
• Sequence data (variable length): DNA sequences, proteins (sequences of amino acids), text (sequences of words).
• Graph data (Koji's lecture): chemical compounds, etc.
• Tree data: parse trees.
• Histograms / probability measures.

– Many kernels use counts of substructures (Haussler 1999).

[Figure: parse tree of "The cat chased the mouse." with nodes S, NP, VP, Det, N, V.]

II-31

Page 56: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Spectrum kernel
– p-spectrum kernel (Leslie et al. 2002): positive definite kernel for strings.

k_p(s, t) = number of occurrences of common substrings of length p.

– Example: s = "statistics", t = "pastapistan". 3-spectrum:
s: sta, tat, ati, tis, ist, sti, tic, ics
t: pas, ast, sta, tap, api, pis, ist, sta, tan

        sta tat ati tis ist sti tic ics pas ast tap api pis tan
Φ(s)     1   1   1   1   1   1   1   1   0   0   0   0   0   0
Φ(t)     2   0   0   0   1   0   0   0   1   1   1   1   1   1

k_3(s, t) = 1·2 + 1·1 = 3

– A linear-time ( O(|s| + |t|) ) algorithm using suffix trees is known (Vishwanathan & Smola 2003).

II-32
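The example is easy to check with a few lines of Python; this is a naive counting implementation, not the suffix-tree algorithm mentioned above.

```python
from collections import Counter

def p_spectrum_kernel(s, t, p=3):
    """k_p(s, t) = sum over length-p substrings of (count in s) * (count in t)."""
    cs = Counter(s[i:i + p] for i in range(len(s) - p + 1))
    ct = Counter(t[i:i + p] for i in range(len(t) - p + 1))
    return sum(cs[w] * ct[w] for w in cs)

print(p_spectrum_kernel("statistics", "pastapistan"))  # 3, as in the example above
```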

Page 57: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

– Application: kernel PCA of 'words' with the 3-spectrum kernel.

[Figure: first two kernel principal components of the words mathematics, engineering, pioneering, cybernetics, physics, psychology, metaphysics, methodology, biology, biostatistics, statistics, informatics, bioinformatics.]

II-33

Page 58: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Choice of kernel
– Choice of the kernel family (e.g., polynomial or Gaussian).
– Choice of parameters (e.g., the bandwidth parameter σ in the Gaussian kernel).

General principles
– Reflect the structure of the data (e.g., kernels for structured data).
– For supervised learning (e.g., SVM): cross-validation.
– For unsupervised learning (e.g., kernel PCA):
• No general methods exist.
• Guideline: make or use a relevant supervised problem, and use CV there.
– Learning a kernel: multiple kernel learning (MKL),

k(x, y) = Σ_{i=1}^M c_i k_i(x, y),   with the coefficients c_i optimized.

II-34

Page 59: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Low rank approximation
– The Gram matrix is n × n, where n is the sample size. Large n causes computational problems; e.g., inversion or eigendecomposition costs O(n³) time.
– Low-rank approximation:

K ≈ R R^T,   where R is an n × r matrix (r < n).

– The decay of the eigenvalues of a Gram matrix is often quite fast (see Widom 1963, 1964; Bach & Jordan 2002).

[Figure: the n × n matrix K approximated by the product of an n × r matrix R and its transpose R^T.]

II-35

Page 60: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

– Two major methods
• Incomplete Cholesky factorization (Fine & Scheinberg 2001): O(nr²) time and O(nr) space.
• Nyström approximation (Williams & Seeger 2001): random sampling + eigendecomposition.

– Example: kernel ridge regression
Y^T ( K_X + λ I_n )^{−1} k(x):   O(n³) time.
With the low-rank approximation K_X ≈ R R^T and the Woodbury formula*,

Y^T ( K_X + λ I_n )^{−1} k(x) ≈ Y^T ( R R^T + λ I_n )^{−1} k(x) = (1/λ) [ Y^T k(x) − Y^T R ( R^T R + λ I_r )^{−1} R^T k(x) ]:   O(nr² + r³) time.

* Woodbury (Sherman–Morrison–Woodbury) formula: ( A + UV )^{−1} = A^{−1} − A^{−1} U ( I + V A^{−1} U )^{−1} V A^{−1}.

II-36
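A sketch of the low-rank computation above in NumPy, assuming a factor R with K ≈ R R^T has already been obtained (e.g. by incomplete Cholesky or Nyström); the names are illustrative.

```python
import numpy as np

def kernel_ridge_predict_lowrank(R, y, k_x, lam):
    """Approximate Y^T (K + lam I_n)^(-1) k(x) via K ~= R R^T and the Woodbury formula.

    R: (n, r) low-rank factor, y: (n,) targets, k_x: (n,) vector of k(x, X_i).
    Cost is O(n r^2 + r^3) instead of O(n^3).
    """
    r = R.shape[1]
    A = R.T @ R + lam * np.eye(r)                   # small (r, r) system
    correction = R @ np.linalg.solve(A, R.T @ k_x)  # R (R^T R + lam I_r)^(-1) R^T k(x)
    return (y @ k_x - y @ correction) / lam
```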

Page 61: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Other kernel methods – Kernel Fisher discriminant analysis (kernel FDA) (Mika et al. 1999)

– Kernel logistic regression (Roth 2001, Zhu&Hastie 2005)

– Kernel partial least square(kernel PLS)(Rosipal&Trejo 2001)

– Kernel K-means clustering(Dhillon et al 2004)

etc, etc, ...

– Variants of SVM Section III.

II-37

Page 62: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Summary: properties of kernel methods
– Various classical linear methods can be kernelized ⇒ linear algorithms on the RKHS.
– The solution typically has the form

f = Σ_{i=1}^n c_i Φ(X_i)   (representer theorem).

– The problem is reduced to manipulation of Gram matrices of size n (sample size).
• Advantage for high-dimensional data.
• For a large number of data, low-rank approximation is used effectively.
– Structured data:
• A kernel can be defined on any set.
• Kernel methods can be applied to any type of data.

II-38

Page 63: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

References Akaho. (2000) Kernel Canonical Correlation Analysis. Proc. 3rd Workshop on

Induction-based Information Sciences (IBIS2000). (in Japanese) Bach, F.R. and M.I. Jordan. (2002) Kernel independent component analysis. Journal of

Machine Learning Research, 3:1–48. Dhillon, I. S., Y. Guan, and B. Kulis. (2004) Kernel k-means, spectral clustering and

normalized cuts. Proc. 10th ACM SIGKDD Intern. Conf. Knowledge Discovery and Data Mining (KDD), 551–556. Fan, J. and I. Gijbels. Local Polynomial Modelling and Its Applications. Chapman

Hall/CRC, 1996. Fine, S. and K. Scheinberg. (2001) Efficient SVM Training Using Low-Rank Kernel

Representations. Journal of Machine Learning Research, 2:243-264. Gökhan, B., T. Hofmann, B. Schölkopf, A.J. Smola, B. Taskar, S.V.N. Vishwanathan.

(2007) Predicting Structured Data. MIT Press. Hardoon, D.R., S. Szedmak, and J. Shawe-Taylor. (2004) Canonical correlation analysis:

An overview with application to learning methods. Neural Computation, 16:2639–2664.

Haussler, D. Convolution kernels on discrete structures. Tech Report UCSC-CRL-99-10, Department of Computer Science, University of California at Santa Cruz, 1999.

II-39

Page 64: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Leslie, C., E. Eskin, and W.S. Noble. (2002) The spectrum kernel: A string kernel for SVM protein classification, in Proc. Pacific Symposium on Biocomputing, 564–575.

Melzer, T., M. Reiter, and H. Bischof. (2001) Nonlinear feature extraction using generalized canonical correlation analysis. Proc. Intern. Conf. .Artificial Neural Networks (ICANN 2001), 353–360.

Mika, S., G. Rätsch, J. Weston, B. Schölkopf, and K.-R. Müller. (1999) Fisher discriminant analysis with kernels. In Y.-H. Hu, J. Larsen, E. Wilson, and S. Douglas, edits, Neural Networks for Signal Processing, volume IX, 41–48. IEEE.

Rosipal, R. and L.J. Trejo. (2001) Kernel partial least squares regression in reproducing kernel Hilbert space. Journal of Machine Learning Research, 2: 97–123.

Roth, V. (2001) Probabilistic discriminative kernel classifiers for multi-class problems. In Pattern Recognition: Proc. 23rd DAGM Symposium, 246–253. Springer.

Schölkopf, B., A. Smola, and K-R. Müller. (1998) Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299–1319.

Schölkopf, B. and A. Smola. Learning with Kernels. MIT Press. 2002. Vishwanathan, S. V. N. and A.J. Smola. (2003) Fast kernels for string and tree matching.

Advances in Neural Information Processing Systems 15, 569–576. MIT Press. Williams, C. K. I. and M. Seeger. (2001) Using the Nyström method to speed up kernel

machines. Advances in Neural Information Processing Systems, 13:682–688.

II-40

Page 65: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Widom, H. (1963) Asymptotic behavior of the eigenvalues of certain integral equations. Transactions of the American Mathematical Society, 109:278–295.

Widom, H. (1964) Asymptotic behavior of the eigenvalues of certain integral equations II. Archive for Rational Mechanics and Analysis, 17:215–229.

II-41

Page 66: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Appendix

II-42

Page 67: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Exercise for kernel PCA –

II-43

cKcf XT

H~|||| 2 =

𝑓 𝐻2 = 𝑐𝑖Φ 𝑋𝑖

𝑛

𝑖=1

,𝑐𝑗Φ 𝑋𝑗

𝑛

𝑗=1

= 𝑐𝑖𝑐𝑗 Φ 𝑋𝑖 ,Φ 𝑋𝑗

𝑛

𝑖

= 𝑐𝑇𝐾𝑋𝑐.

[ ] cKcXf XT 2~)(, =ΦVar

Var 𝑓,Φ 𝑋 =1𝑛 𝑐𝑖Φ 𝑋𝑖

𝑛

𝑖=1

,Φ 𝑋𝑗

2𝑛

𝑗=1

=1𝑛 𝑐𝑖Φ 𝑋𝑖

𝑛

𝑖=1

,Φ 𝑋𝑗 𝑐ℎΦ 𝑋ℎ

𝑛

ℎ=1

,Φ 𝑋𝑗

𝑛

𝑗=1

=1𝑛𝑐𝑖𝐾𝑖𝑗

𝑛

𝑖

𝑐ℎ𝐾ℎ𝑗

𝑛

ℎℎ

𝑛

𝑗=1

=1𝑛 𝑐

𝑇𝐾𝑋2𝑐 ∎

Page 68: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

1

Kenji Fukumizu

The Institute of Statistical Mathematics / Graduate University for Advanced Studies

September 6-7 Machine Learning Summer School 2012, Kyoto

III. Support Vector Machines A Brief Introduction

1

Page 69: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Large margin classifier
– (X_1, Y_1), …, (X_n, Y_n): training data
• X_i: input (m-dimensional)
• Y_i ∈ {±1}: binary teaching data.
– Linear classifier

f_w(x) = w^T x + b,   h(x) = sgn( f_w(x) ).

We wish to construct f_w(x) from the training data so that a new data point x is correctly classified.

[Figure: separating hyperplane y = f_w(x); f_w(x) ≥ 0 on one side, f_w(x) < 0 on the other.]

III-2

Page 70: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Large margin criterion
Assumption: the data are linearly separable. Among the infinitely many separating hyperplanes, choose the one that gives the largest margin.

– Margin = distance between the two classes measured along the direction of w.
– Support vector machine: the hyperplane that gives the largest margin.
– The classifier is the middle of the margin.
– "Supporting points" on the two boundaries are called support vectors.

[Figure: two-class data, the maximum-margin hyperplane, its normal vector w, the margin, and the support vectors.]

III-3

Page 71: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

– Fix the scale (rescaling of (w, b) does not change the hyperplane):

min_{i: Y_i = 1} ( w^T X_i + b ) = 1,   max_{i: Y_i = −1} ( w^T X_i + b ) = −1.

Then

margin = 2 / ‖w‖.

[Exercise] Prove this.

III-4

Page 72: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Support vector machine (linear, hard margin) (Boser, Guyon, Vapnik 1992)

Objective function:

max_{w,b} 1/‖w‖   subject to   w^T X_i + b ≥ 1 if Y_i = 1,   w^T X_i + b ≤ −1 if Y_i = −1,

or equivalently,

SVM (hard margin):   min_{w,b} ‖w‖²   subject to   Y_i( w^T X_i + b ) ≥ 1   (∀i).

– Quadratic program (QP):
• Minimization of a quadratic function with linear constraints.
• Convex, no local optima (Vandenberghe's lecture).
• Many solvers available (Chih-Jen Lin's lecture).

III-5

Page 73: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Soft-margin SVM
– "Linear separability" is too strong an assumption. Relax it:

Hard margin: Y_i( w^T X_i + b ) ≥ 1   →   Soft margin: Y_i( w^T X_i + b ) ≥ 1 − ξ_i,  ξ_i ≥ 0.

SVM (soft margin):   min_{w,b,ξ} ‖w‖² + C Σ_{i=1}^n ξ_i   subject to   Y_i( w^T X_i + b ) ≥ 1 − ξ_i,  ξ_i ≥ 0   (∀i).

– This is also a QP.
– The coefficient C must be given; cross-validation is often used.

III-6

Page 74: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

[Figure: soft-margin SVM. The hyperplanes w^T x + b = 1 and w^T x + b = −1 bound the margin; points violating the margin satisfy w^T X_i + b = 1 − ξ_i or w^T X_j + b = −1 + ξ_j.]

III-7

Page 75: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

SVM and regularization
– Soft-margin SVM is equivalent to the regularization problem

min_{w,b} Σ_{i=1}^n [ 1 − Y_i( w^T X_i + b ) ]_+ + λ‖w‖²   (loss + regularization term),

where [z]_+ = max{0, z}.

• Loss function: hinge loss ℓ( f(x), y ) = [ 1 − y f(x) ]_+.
• c.f. ridge regression (squared error): min_{w,b} Σ_{i=1}^n ( Y_i − ( w^T X_i + b ) )² + λ‖w‖².

[Exercise] Confirm the above equivalence.

III-8

Page 76: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Kernelization: nonlinear SVM – 𝑋1,𝑌1 , … , (𝑋𝑛,𝑌𝑛): training data

• 𝑋𝑖: input on arbitrary space Ω • 𝑌𝑖 ∈ ±1: binary teaching data,

– Kernelization: Positive definite kernel 𝑘 on Ω (RKHS 𝐻),

Feature vectors Φ 𝑋1 , … ,Φ 𝑋𝑛

– Linear classifier on 𝐻 nonlinear classifier on Ω

ℎ 𝑥 = sgn 𝑓,Φ 𝑥 + 𝑏 = sgn 𝑓 𝑥 + 𝑏 , 𝑓 ∈ 𝐻.

III-9

Page 77: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Nonlinear SVM

min_{f,b} Σ_{i=1}^n [ 1 − Y_i( f(X_i) + b ) ]_+ + λ‖f‖²_H,

or equivalently,

min_{f,b,ξ} ‖f‖² + C Σ_{i=1}^n ξ_i   subject to   Y_i( ⟨f, Φ(X_i)⟩ + b ) ≥ 1 − ξ_i,  ξ_i ≥ 0   (∀i).

By the representer theorem, f = Σ_{j=1}^n w_j Φ(X_j), so

Nonlinear SVM (soft margin):   min_{w,b,ξ} w^T K w + C Σ_{i=1}^n ξ_i   subject to   Y_i( (Kw)_i + b ) ≥ 1 − ξ_i,  ξ_i ≥ 0   (∀i).

• This is again a QP.

III-10

Page 78: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Dual problem

Primal:   min_{w,b,ξ} w^T K w + C Σ_{i=1}^n ξ_i   subject to   Y_i( (Kw)_i + b ) ≥ 1 − ξ_i,  ξ_i ≥ 0   (∀i).

– By the Lagrange multiplier method, the dual problem is

SVM (dual problem):   max_α Σ_{i=1}^n α_i − Σ_{i,j=1}^n α_i α_j Y_i Y_j K_ij   subject to   0 ≤ α_i ≤ C (∀i),  Σ_{i=1}^n Y_i α_i = 0.

• The dual problem is often preferred.
• The classifier is expressed by

f*(x) + b* = Σ_{i=1}^n α_i* Y_i k(x, X_i) + b*.

– Sparse expression: only the data with 0 < α_i* ≤ C appear in the summation: the support vectors.

III-11

Page 79: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

KKT condition
Theorem
The solutions of the primal and dual problems of SVM are characterized by the following conditions:
(1) 1 − Y_i( f*(X_i) + b* ) − ξ_i* ≤ 0   (∀i)   [primal constraint]
(2) ξ_i* ≥ 0   (∀i)   [primal constraint]
(3) 0 ≤ α_i* ≤ C   (∀i)   [dual constraint]
(4) α_i*( 1 − Y_i( f*(X_i) + b* ) − ξ_i* ) = 0   (∀i)   [complementary slackness]
(6) ξ_i*( C − α_i* ) = 0   (∀i)   [complementary slackness]
(7) Σ_{j=1}^n K_ij w_j* − Σ_{j=1}^n α_j* Y_j K_ij = 0   (∀i)
(8) Σ_{j=1}^n α_j* Y_j = 0.

III-12

Page 80: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Sparse expression
– Two types of support vectors:
• support vectors with 0 < α_i < C: on the margin boundary, Y_i φ*(X_i) = 1;
• support vectors with α_i = C: inside the margin (or misclassified), Y_i φ*(X_i) ≤ 1.

φ*(x) = f*(x) + b* = Σ_{X_i: support vectors} α_i* Y_i k(x, X_i) + b*.

[Figure: separating hyperplane with the two types of support vectors marked.]

III-13

Page 81: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Summary of SVM – One of the kernel methods:

• kernelization of linear large margin classifier. • Computation depends on Gram matrices of size 𝑛.

– Quadratic program: • No local optimum. • Many solvers are available. • Further efficient optimization methods are available

(e.g. SMO, Platt 1999)

– Sparse representation • The solution is written by a small number of support vectors.

– Regularization • The objective function can be regarded as regularization with

hinge loss function.

III-14

Page 82: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

– NOT discussed on SVM in this lecture are • Many successful applications • Multi-class extension

– Combination of binary classifiers (1-vs-1, 1-vs-rest) – Generalization of large margin criterion

Crammer & Singer (2001), Mangasarian & Musicant (2001), Lee, Lin, & Wahba (2004), etc

• Other extensions – Support vector regression (Vapnik 1995)

– 𝜈-SVM (Schölkopf et al 2000)

– Structured output (Collins & Duffy 2001, Taskar et al 2004, Altun et al 2003, etc)

– One-class SVM (Schölkopf et al 2001)

• Optimization – Solving primal problem

• Implementation (Chih-Jen Lin’s lecture)

III-15

Page 83: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

References Altun, Y., I. Tsochantaridis, and T. Hofmann. Hidden Markov support vector machines.

In Proc. 20th Intern. Conf. Machine Learning, 2003. Boser, B.E., I.M. Guyon, and V.N. Vapnik. A training algorithm for optimal margin

classifiers. In D. Haussler, editor, Fifth Annual ACM Workshop on Computational Learning Theory, pages 144–152, Pittsburgh, PA, 1992. ACM Press.

Crammer, K. and Y. Singer. On the algorithmic implementation of multiclass kernel-based vector machines. Journal of Machine Learning Research, 2:265–292, 2001.

Collins, M. and N. Duffy. Convolution kernels for natural language. In Advances in Neural Information Processing Systems 14, pages 625–632. MIT Press, 2001.

Mangasarian, O. L. and David R. Musicant. Lagrangian support vector machines. Journal of Machine Learning Research, 1:161–177, 2001

Lee, Y., Y. Lin, and G. Wahba. Multicategory support vector machines, theory, and application to the classification of microarray data and satellite radiance data. Journal of the American Statistical Association, 99: 67–81, 2004.

Schölkopf, B., A. Smola, R.C. Williamson, and P.L. Bartlett. (2000) New support vector algorithms. Neural Computation, 12(5):1207–1245.

III-16

Page 84: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Schölkopf, B., J.C. Platt, J. Shawe-Taylor, R.C. Williamson, and A.J.Smola. (2001) Estimating the support of a high-dimensional distribution. Neural Computation, 13(7):1443–1471.

Vapnik, V.N. The Nature of Statistical Learning Theory. Springer 1995. Platt, J. Fast training of support vector machines using sequential minimal optimization. In

B. Schölkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 185–208. MIT Press, 1999.

Books on Application domains: – Lee, S.-W., A. Verri (Eds.) Pattern Recognition with Support Vector Machines: First

International Workshop, SVM 2002, Niagara Falls, Canada, August 10, 2002. Proceedings. Lecture Notes in Computer Science 2388, Springer, 2002.

– Joachims, T. Learning to Classify Text Using Support Vector Machines: Methods, Theory and Algorithms. Springer, 2002.

– Schölkopf, B., K. Tsuda, J.-P. Vert (Eds.). Kernel Methods in Computational Biology. MIT Press, 2004.

III-17

Page 85: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

1

Kenji Fukumizu

The Institute of Statistical Mathematics / Graduate University for Advanced Studies

September 6-7 Machine Learning Summer School 2012, Kyoto

IV. Theoretical Backgrounds of Kernel Methods

1

Page 86: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

C-valued positive definite kernel
Definition.
Ω: set. k: Ω × Ω → C is a positive definite kernel if for arbitrary x_1, …, x_n ∈ Ω and c_1, …, c_n ∈ C,

Σ_{i,j=1}^n c_i \overline{c_j} k(x_i, x_j) ≥ 0.

Remark: from the above condition, the Gram matrix ( k(x_i, x_j) )_{ij} is necessarily Hermitian, i.e. k(y, x) = \overline{k(x, y)}. [Exercise]

IV-2

Page 87: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Operations that preserve positive definiteness

Proposition 4.1
If k_i: X × X → C (i = 1, 2, …) are positive definite kernels, then so are the following:
1. (positive combination) a k_1 + b k_2 (a, b ≥ 0);
2. (product) k_1 k_2: (x, y) ↦ k_1(x, y) k_2(x, y);
3. (limit) lim_{i→∞} k_i(x, y), assuming the limit exists.

Proof. 1 and 3 are trivial from the definition. For 2, it suffices to prove that Hadamard product (element-wise) of two positive semidefinite matrices is positive semidefinite. Remark: The set of positive definite kernels on 𝑋 is a closed cone, where the topology is defined by the point-wise convergence.

IV-3

Page 88: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Proposition 4.2
Let A and B be positive semidefinite Hermitian matrices. Then the Hadamard product K = A ∗ B (element-wise product) is positive semidefinite.

Proof) Eigendecomposition of A: A = U Λ U^*, i.e.,

A_ij = Σ_{p=1}^n λ_p U_ip \overline{U_jp}   (λ_p ≥ 0 by the positive semidefiniteness).

Then, for any c_1, …, c_n,

Σ_{i,j=1}^n c_i \overline{c_j} K_ij = Σ_{i,j=1}^n c_i \overline{c_j} A_ij B_ij = Σ_{p=1}^n λ_p Σ_{i,j=1}^n ( c_i U_ip ) \overline{( c_j U_jp )} B_ij ≥ 0,

since each inner sum is non-negative by the positive semidefiniteness of B. ∎

IV-4

Page 89: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Normalization
Proposition 4.3
Let k be a positive definite kernel on Ω, and f: Ω → C be an arbitrary function. Then

k̃(x, y) := f(x) k(x, y) \overline{f(y)}

is positive definite. In particular, f(x) \overline{f(y)} is a positive definite kernel.

– Proof) [Exercise]
– Example (normalization):

k̃(x, y) = k(x, y) / √( k(x, x) k(y, y) )

is positive definite, and ‖Φ̃(x)‖ = 1 for any x ∈ Ω.

IV-5

Page 90: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Proof of positive definiteness
– Euclidean inner product x^T y: easy (Prop. 1.1).

– Polynomial kernel (x^T y + c)^d (c ≥ 0):
(x^T y + c)^d = (x^T y)^d + a_1 (x^T y)^{d−1} + ⋯ + a_d with a_i ≥ 0:
a product and non-negative combination of positive definite kernels.

– Gaussian RBF kernel exp( −‖x − y‖²/(2σ²) ):

exp( −‖x − y‖²/(2σ²) ) = e^{−‖x‖²/(2σ²)} · e^{x^T y/σ²} · e^{−‖y‖²/(2σ²)}.

Note that
e^{x^T y/σ²} = 1 + (1/(1! σ²)) x^T y + (1/(2! σ⁴)) (x^T y)² + ⋯
is positive definite (Prop. 4.1). Proposition 4.3 then completes the proof.

– The Laplacian kernel is discussed later.

IV-6
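As a quick numerical illustration of the claim, one can check that the eigenvalues of a Gaussian Gram matrix on random points are non-negative up to rounding error; an illustrative sketch (point count and σ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))

# Gaussian Gram matrix on 50 random points (sigma = 1)
sq = (X**2).sum(1)[:, None] + (X**2).sum(1)[None, :] - 2 * X @ X.T
K = np.exp(-sq / 2.0)

print(np.linalg.eigvalsh(K).min())  # >= 0 up to numerical error
```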

Page 91: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Shift-invariant kernel
– A positive definite kernel k(x, y) on R^m is called shift-invariant if the kernel is of the form k(x, y) = ψ(x − y).

– Examples: Gaussian, Laplacian kernels.

– Fourier kernel (C-valued positive definite kernel): for each ω,

k_F(x, y) = exp( √−1 ω^T(x − y) ) = exp( √−1 ω^T x ) \overline{ exp( √−1 ω^T y ) }   (Prop. 4.3).

– If k(x, y) = ψ(x − y) is positive definite, the function ψ is called positive definite.

IV-7

Page 92: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Bochner's theorem
Theorem 4.4 (Bochner)
Let ψ be a continuous function on R^m. Then ψ is (C-valued) positive definite if and only if there is a finite non-negative Borel measure Λ on R^m such that

ψ(z) = ∫ exp( √−1 ω^T z ) dΛ(ω).

– Bochner's theorem characterizes all the continuous shift-invariant positive definite kernels: { exp( √−1 ω^T z ) | ω ∈ R^m } is the generator of the cone (see Prop. 4.1).
– Λ is the inverse Fourier (or Fourier–Stieltjes) transform of ψ.
– Roughly speaking, the shift-invariant positive definite functions are the class with non-negative Fourier transforms.
– Sufficiency is easy: Σ_{i,j} c_i \overline{c_j} ψ(z_i − z_j) = ∫ | Σ_i c_i e^{√−1 ω^T z_i} |² dΛ(ω) ≥ 0. Necessity is difficult.

IV-8

Page 93: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

RKHS in the frequency domain
Suppose a (shift-invariant) kernel k has the form

k(x, y) = ∫ exp( √−1 ω^T(x − y) ) ρ(ω) dω,   ρ(ω) > 0.

Then the RKHS H_k is given by

H_k = { f ∈ L²(R^m, dx) : ∫ |f̂(ω)|² / ρ(ω) dω < ∞ },
⟨f, g⟩ = ∫ f̂(ω) \overline{ĝ(ω)} / ρ(ω) dω,

where f̂ is the Fourier transform of f:

f̂(ω) = (1/(2π))^m ∫ f(x) exp( −√−1 ω^T x ) dx.

IV-9

Page 94: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Gaussian kernel:
k_G(x, y) = exp( −‖x − y‖²/(2σ²) ),   ρ_G(ω) ∝ exp( −σ²‖ω‖²/2 ),

H_{k_G} = { f ∈ L²(R^m, dx) : ∫ |f̂(ω)|² exp( σ²‖ω‖²/2 ) dω < ∞ },
⟨f, g⟩ ∝ ∫ f̂(ω) \overline{ĝ(ω)} exp( σ²‖ω‖²/2 ) dω.

Laplacian kernel (on R):
k_L(x, y) = exp( −β|x − y| ),   ρ_L(ω) ∝ 1/( ω² + β² ),

H_{k_L} = { f ∈ L²(R, dx) : ∫ |f̂(ω)|² ( ω² + β² ) dω < ∞ },
⟨f, g⟩ ∝ ∫ f̂(ω) \overline{ĝ(ω)} ( ω² + β² ) dω.

– The required decay of f̂ ∈ H at high frequencies differs between the Gaussian and Laplacian kernels.

IV-10

Page 95: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

RKHS of the polynomial kernel
– Polynomial kernel on R:

k_P(x, y) = ( x y + c )^d,   c ≥ 0, d ∈ N,

k_P(x, z_0) = z_0^d x^d + (d choose 1) c z_0^{d−1} x^{d−1} + (d choose 2) c² z_0^{d−2} x^{d−2} + ⋯ + c^d.

The span of these functions (over z_0) consists of polynomials of degree at most d.

Proposition 4.5
If c ≠ 0, the RKHS is the space of polynomials of degree at most d.

[Proof: exercise. Hint: find b_i satisfying Σ_{i=0}^d b_i k(x, z_i) = Σ_{i=0}^d a_i x^i as the solution of a linear equation.]

IV-11

Page 96: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Sum and product
(H_1, k_1), (H_2, k_2): two RKHSs and positive definite kernels on Ω.

Sum: the RKHS for k_1 + k_2 is

H_1 + H_2 = { f: Ω → R | ∃f_1 ∈ H_1, ∃f_2 ∈ H_2, f = f_1 + f_2 },
‖f‖² = min{ ‖f_1‖²_{H_1} + ‖f_2‖²_{H_2} : f = f_1 + f_2, f_1 ∈ H_1, f_2 ∈ H_2 }.

Product: the RKHS for k_1 k_2 is H_1 ⊗ H_2, the tensor product as a vector space:
{ f = Σ_{i=1}^n f_i g_i | f_i ∈ H_1, g_i ∈ H_2 } is dense in H_1 ⊗ H_2, with

⟨ Σ_{i=1}^n f_i^{(1)} g_i^{(1)}, Σ_{j=1}^m f_j^{(2)} g_j^{(2)} ⟩ = Σ_{i=1}^n Σ_{j=1}^m ⟨ f_i^{(1)}, f_j^{(2)} ⟩_{H_1} ⟨ g_i^{(1)}, g_j^{(2)} ⟩_{H_2}.

IV-12

Page 97: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Summary of Section IV
– Positive definiteness of kernels is preserved by
• non-negative combinations,
• products,
• point-wise limits,
• normalization.

– Bochner's theorem: characterization of the continuous shift-invariant kernels on R^m.

– Explicit forms of the RKHS:
• RKHSs of shift-invariant kernels have an explicit expression in the frequency domain.
• The polynomial kernel gives an RKHS of polynomials.
• The RKHSs for sums and products can be given explicitly.

IV-13

Page 98: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

References Aronszajn., N. Theory of reproducing kernels. Trans. American Mathematical

Society, 68(3):337–404, 1950. Saitoh., S. Integral transforms, reproducing kernels, and their applications.

Addison Wesley Longman, 1997.

IV-14

Page 99: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

Solution to Exercises
C-valued positive definiteness

Using the definition with one point, k(x, x) is real and non-negative for all x. For any x and y, applying the definition with coefficients (c, 1), where c ∈ C, we have

|c|² k(x, x) + c k(x, y) + \overline{c} k(y, x) + k(y, y) ≥ 0.

Since the left-hand side is real, its complex conjugate satisfies the same inequality:

|c|² k(x, x) + \overline{c} \overline{k(x, y)} + c \overline{k(y, x)} + k(y, y) ≥ 0.

The difference of the left-hand sides of the above two inequalities is real. Writing w = k(x, y) − \overline{k(y, x)}, this difference equals c w − \overline{c w}, which is purely imaginary for any complex number. A number that is both real and purely imaginary is 0, so Im(c w) = 0 for every c ∈ C, which forces w = 0, i.e., k(y, x) = \overline{k(x, y)}.

IV-15

Page 100: Kernel Methods for Statistical Learningyosinski.com/mlss12/media/slides/MLSS-2012-Fukumizu... · 2012. 9. 7. · Analysis of data is a process of inspecting, cleaning, transforming,

1

Kenji Fukumizu

The Institute of Statistical Mathematics / Graduate University for Advanced Studies

September 6-7, Machine Learning Summer School 2012, Kyoto

V. Nonparametric Inference with Positive Definite Kernels

1


Outline
1. Mean and Variance on RKHS

2. Statistical tests with kernels

3. Conditional probabilities and beyond.

V-2


Introduction
"Kernel methods" for statistical inference

– We have seen that kernelization of linear methods provides nonlinear methods, which capture 'nonlinearity' or 'higher-order moments' of the original data,
e.g. nonlinear SVM, kernel PCA, kernel CCA, etc.

– This section discusses more basic statistics on the RKHS: mean, covariance, and conditional covariance.

[Diagram: feature map Φ: Ω (original space) → H (RKHS), X ↦ Φ(X) = k(·, X).]

V-3


Mean and covariance on RKHS

V-4


Mean on RKHS
X: random variable taking values in a measurable space Ω. k: measurable positive definite kernel on Ω. H: RKHS.
We always assume a bounded kernel for simplicity: sup_{x∈Ω} k(x, x) < ∞.

Φ(X) = k(·, X): the feature vector, a random variable taking values in the RKHS.

– Definition. The kernel mean m_X ∈ H of X on H is defined by
m_X = E[Φ(X)] = E[k(·, X)].

– Reproducing expectation:
⟨f, m_X⟩ = E[f(X)]   for all f ∈ H.

* Notation: m_X depends on the kernel k as well, but we do not show it for simplicity.

V-5


Covariance operator
(X, Y): random variables taking values in Ω_X, Ω_Y, resp. (H_X, k_X), (H_Y, k_Y): RKHSs given by kernels on Ω_X and Ω_Y, resp.

Definition. Cross-covariance operator Σ_YX : H_X → H_Y,
Σ_YX ≡ E[Φ_Y(Y) ⊗ Φ_X(X)*] − E[Φ_Y(Y)] ⊗ E[Φ_X(X)]*,
where Φ_X(X)* denotes the linear functional ⟨Φ_X(X), ·⟩ : H_X → R, f ↦ ⟨Φ_X(X), f⟩.

– Simply, the covariance of the feature vectors.
c.f. Euclidean case: V_YX = E[YXᵀ] − E[Y]E[X]ᵀ, the covariance matrix.

– Reproducing covariance:
⟨g, Σ_YX f⟩ = E[f(X)g(Y)] − E[f(X)]E[g(Y)] = Cov[f(X), g(Y)]   for all f ∈ H_X, g ∈ H_Y.

V-6


– Standard identification: H* ≅ H, ⟨f, ·⟩ ↔ f.

– The operator
Σ_YX ≡ E[Φ_Y(Y) ⊗ Φ_X(X)*] − E[Φ_Y(Y)] ⊗ E[Φ_X(X)]*
can therefore be regarded as an element of H_Y ⊗ H_X, i.e.
Σ_YX ≅ m_(YX) − m_Y ⊗ m_X ∈ H_Y ⊗ H_X.

[Diagram: X on Ω_X and Y on Ω_Y are mapped by Φ_X, Φ_Y into H_X and H_Y; Σ_YX relates the two feature spaces.]

V-7


Characteristic kernel
(Fukumizu, Bach, Jordan 2004, 2009; Sriperumbudur et al. 2010)

– The kernel mean can capture higher-order moments of the variable.
Example: X: R-valued random variable; k: pos. def. kernel on R admitting a Taylor series expansion on R,
k(u, x) = c₀ + c₁(ux) + c₂(ux)² + ⋯   (c_i > 0),   e.g. k(u, x) = exp(ux).
The kernel mean m_X works as a moment generating function:
m_X(u) = E[k(u, X)] = c₀ + c₁ u E[X] + c₂ u² E[X²] + ⋯,
(1/c₁) d/du m_X(u) |_{u=0} = E[X].
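A quick numerical check of this moment-generating property (an illustrative sketch, not from the slides), using the exponential kernel k(u, x) = exp(ux), an assumed exponential distribution for X, and a finite-difference derivative at u = 0:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.exponential(scale=2.0, size=200000)   # E[X] = 2

def m_hat(u):
    # empirical kernel mean m_X(u) = E[k(u, X)] with k(u, x) = exp(u x)
    return np.mean(np.exp(u * X))

eps = 1e-4
deriv_at_0 = (m_hat(eps) - m_hat(-eps)) / (2 * eps)   # ~ E[X] since c_1 = 1
print(deriv_at_0, X.mean())
```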

V-8


P: family of all probabilities on a measurable space (Ω, B). H: RKHS on Ω with a bounded measurable kernel k. m_P: kernel mean on H for a variable with probability P.

Definition. A bounded measurable positive definite kernel k is called characteristic (w.r.t. P) if the mapping
P → H,   P ↦ m_P
is one-to-one, i.e. m_P = m_Q ⇔ P = Q.

– The kernel mean with a characteristic kernel uniquely determines a probability:
E_{X∼P}[f(X)] = E_{X∼Q}[f(X)]   (∀f ∈ H)   ⇔   P = Q.

V-9


– Analogy to the "characteristic function": with the Fourier kernel
F(x, y) = exp(√−1 xᵀy),
Ch.f.: φ_X(u) = E[F(u, X)].   Kernel mean: m_X(u) = E[k(u, X)].

• The characteristic function uniquely determines a Borel probability on R^m.
• The kernel mean with a characteristic kernel uniquely determines a probability on (Ω, B).
Note: Ω may not be Euclidean.

– The characteristic RKHS must be large enough!
Examples for R^m (proved later): Gaussian, Laplacian kernels.
Polynomial kernels are not characteristic.

V-10


Statistical inference with kernel means

– Statistical inference: inference on some properties of probabilities.

– With characteristic kernels, these can be cast as inference on kernel means:

Two-sample problem:  P = Q ?  ⇔  m_P = m_Q ?
Independence test:  P_XY = P_X ⊗ P_Y ?  ⇔  m_(P_XY) = m_(P_X ⊗ P_Y) ?

V-11


Empirical Estimation
Empirical kernel mean

– An advantage of the RKHS approach is easy empirical estimation (a numerical sketch is given below).

– X₁, ..., X_n: i.i.d. sample  →  Φ(X₁), ..., Φ(X_n): sample on the RKHS.

Empirical mean:
m̂_X^(n) = (1/n) ∑_{i=1}^n Φ(X_i) = (1/n) ∑_{i=1}^n k(·, X_i).

Theorem 5.1 (strong √n-consistency)
Assume E[k(X, X)] < ∞. Then
‖m̂_X^(n) − m_X‖ = O_p(1/√n)   (n → ∞).
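A minimal NumPy sketch (an illustration, not from the slides) of the empirical kernel mean under an assumed Gaussian kernel and an N(0,1) sample; the closed form used for comparison is the standard Gaussian convolution formula.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=n)                  # i.i.d. sample from N(0,1)
sigma = 1.0

def k(t, x):
    return np.exp(-(t - x)**2 / (2 * sigma**2))    # Gaussian kernel on R

def m_hat(t):
    # empirical kernel mean: m_hat(t) = (1/n) sum_i k(t, X_i)
    return np.mean(k(t, X))

# For X ~ N(0,1), the population kernel mean has the closed form
# m(t) = E[k(t, X)] = sigma / sqrt(sigma^2 + 1) * exp(-t^2 / (2 (sigma^2 + 1))).
t = 0.7
m_true = sigma / np.sqrt(sigma**2 + 1) * np.exp(-t**2 / (2 * (sigma**2 + 1)))
print(m_hat(t), m_true)   # close for large n (Theorem 5.1: O_p(1/sqrt(n)) error)
```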

V-12


Empirical covariance operator
(X₁, Y₁), ..., (X_n, Y_n): i.i.d. sample on Ω_X × Ω_Y.

An estimator of Σ_YX is defined by
Σ̂_YX^(n) = (1/n) ∑_{i=1}^n ( k_Y(·, Y_i) − m̂_Y ) ⊗ ( k_X(·, X_i) − m̂_X ).

Theorem 5.2
‖Σ̂_YX^(n) − Σ_YX‖_HS = O_p(1/√n)   (n → ∞).

– Hilbert-Schmidt norm: the analogue of the Frobenius norm of a matrix (sum of squares of the entries in matrix expression), used also for infinite-dimensional spaces:
‖T‖²_HS = ∑_i ∑_j ⟨ψ_i, T φ_j⟩²   for orthonormal bases {φ_j} of H_X and {ψ_i} of H_Y.

V-13


Statistical test with kernels

V-14


Two-sample problem
– Two-sample homogeneity test
Two i.i.d. samples are given:
X₁, ..., X_n ~ P   and   Y₁, ..., Y_ℓ ~ Q.
Q: Are they sampled from the same distribution?
Null hypothesis H0: P = Q.   Alternative H1: P ≠ Q.

– Practically important: we often wish to distinguish two things.
• Are the experimental results of treatment and control significantly different?
• Were the plays "Henry VI" and "Henry II" written by the same author?

– If the means of X and Y under P and Q differ, we can use the difference for the test. If they are identical, we need to look at higher-order information.

V-15


– If the mean and variance are the same, it is a very difficult problem.
– Example: do they have the same distribution?

[Figure: six scatter plots of two-dimensional samples (about 100 points each); it is hard to judge by eye which ones share the same distribution.]

V-16


Kernel method for the two-sample problem
(Gretton et al. NIPS 2007, 2010; JMLR 2012)

Kernel approach
– Comparison of P and Q  ⇔  comparison of m_X and m_Y.

Maximum Mean Discrepancy
– In population:
MMD² = ‖m_X − m_Y‖²_H = E[k(X, X̃)] + E[k(Y, Ỹ)] − 2E[k(X, Y)]
(X̃, Ỹ: independent copies of X, Y), and
MMD = sup_{‖f‖≤1} ⟨f, m_X − m_Y⟩ = sup_{‖f‖≤1} ( ∫ f(x) dP(x) − ∫ f(x) dQ(x) ),
hence the name "maximum mean discrepancy".

– With a characteristic kernel, MMD = 0 if and only if P = Q.

V-17


– Test statistic: the empirical estimator (see the sketch below)
MMD²_emp = ‖m̂_X − m̂_Y‖²_H
         = (1/n²) ∑_{i,j=1}^n k(X_i, X_j) − (2/(nℓ)) ∑_{i=1}^n ∑_{a=1}^ℓ k(X_i, Y_a) + (1/ℓ²) ∑_{a,b=1}^ℓ k(Y_a, Y_b).

– Asymptotic distributions under H0 and H1 are known (see Appendix):
• Null distribution: N·MMD²_emp converges to an infinite mixture of χ²-type variables.
• Alternative: √N (MMD²_emp − MMD²) converges to a normal distribution.

– Approximation of the null distribution:
• approximation of the mixture coefficients,
• fitting with a Pearson curve by moment matching,
• bootstrap (Arcones & Gine 1992).

V-18
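A hedged NumPy sketch of the empirical MMD² above (the plug-in/V-statistic version) with a Gaussian kernel; the bandwidth, sample sizes, and toy distributions are illustrative assumptions:

```python
import numpy as np

def gauss_gram(A, B, sigma=1.0):
    # Gram matrix k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 sigma^2))
    sq = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return np.exp(-sq / (2 * sigma**2))

def mmd2_emp(X, Y, sigma=1.0):
    # MMD^2_emp = mean k(X,X') + mean k(Y,Y') - 2 mean k(X,Y)
    return (gauss_gram(X, X, sigma).mean() + gauss_gram(Y, Y, sigma).mean()
            - 2 * gauss_gram(X, Y, sigma).mean())

rng = np.random.default_rng(0)
X = rng.normal(0, 1, size=(500, 1))
Y_same = rng.normal(0, 1, size=(500, 1))
Y_diff = rng.exponential(1.0, size=(500, 1)) - 1.0   # same mean/variance, different shape
print(mmd2_emp(X, Y_same), mmd2_emp(X, Y_diff))      # the latter is typically noticeably larger
```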


Experiments

Percentage of accepting H0 (significance level α = 0.05); comparison of two databases (Gretton et al. JMLR 2012).

Data set   Attr.       MMD-B   WW     KS
Neural I   Same        96.5    97.0   95.0
           Different    0.0     0.0   10.0
Neural II  Same        94.6    95.0   94.5
           Different    3.3     0.8   31.8
Health     Same        95.5    94.7   96.1
           Different    1.0     2.8   44.0
Subtype    Same        99.1    94.6   97.3
           Different    0.0     0.0   28.4

Data size / Dim: Neural I: 4000 / 63, Neural II: 1000 / 100, Health: 25 / 12600, Subtype: 25 / 2118.

WW: Wald-Wolfowitz test, KS: Kolmogorov-Smirnov test (classical methods; see Appendix).

V-19


Experiments for mixtures

[Figure: average values of MMD over 100 samples, plotted against a mixture coefficient c, comparing N(0,1) with mixture distributions (contaminated with a uniform component) and with N(0,1) itself, for sample sizes N_X = N_Y = 100, 200, 500.]

V-20


Independence test
Independence
– X and Y are independent if and only if
• Probability density function: p_XY(x, y) = p_X(x) p_Y(y).
• Cumulative distribution function: F_XY(x, y) = F_X(x) F_Y(y).
• Characteristic function: E[e^{√−1(uX + vY)}] = E[e^{√−1 uX}] E[e^{√−1 vY}].

Independence test
Given an i.i.d. sample (X₁, Y₁), ..., (X_n, Y_n), are X and Y independent?
– Null hypothesis H0: P_XY = P_X ⊗ P_Y (independent)
– Alternative H1: P_XY ≠ P_X ⊗ P_Y (not independent)

V-21


V-22

[Figure: two scatter plots of (X, Y) samples; left: an independent sample, right: a dependent sample.]


Independence with kernels
Theorem 5.3 (Fukumizu, Bach, Jordan, JMLR 2004)
If the product kernel k_X k_Y is characteristic, then
X ⫫ Y   ⇔   Σ_YX = O.

Recall Σ_YX ≅ m_(YX) − m_Y ⊗ m_X ∈ H_Y ⊗ H_X: a comparison between P_XY (kernel mean m_(YX)) and P_X ⊗ P_Y (kernel mean m_Y ⊗ m_X).

– Dependence measure: Hilbert-Schmidt independence criterion,
HSIC(X, Y) := ‖Σ_YX‖²_HS
  = E_{XY} E_{X̃Ỹ}[k_X(X, X̃) k_Y(Y, Ỹ)] + E_X E_{X̃}[k_X(X, X̃)] E_Y E_{Ỹ}[k_Y(Y, Ỹ)]
    − 2 E_{XY}[ E_{X̃}[k_X(X, X̃)] E_{Ỹ}[k_Y(Y, Ỹ)] ]
((X̃, Ỹ): independent copy of (X, Y)).   [Exercise]

V-23


Independence test with kernels
(Gretton, Fukumizu, Teo, Song, Schölkopf, Smola. NIPS 2008)

– Test statistic:
HSIC_emp(X, Y) = (1/n²) ∑_{i,j} k_X(X_i, X_j) k_Y(Y_i, Y_j)
  + (1/n⁴) ∑_{i,j} k_X(X_i, X_j) ∑_{k,l} k_Y(Y_k, Y_l)
  − (2/n³) ∑_i ( ∑_j k_X(X_i, X_j) ) ( ∑_k k_Y(Y_i, Y_k) ).

Or equivalently (see the sketch below),
HSIC_emp(X, Y) := ‖Σ̂_YX^(n)‖²_HS = (1/n²) Tr[ G̃_X G̃_Y ],
where G̃_X, G̃_Y are the centered Gram matrices.

– This is a special case of MMD comparing P_XY and P_X ⊗ P_Y with the product kernel k_X k_Y.

– Asymptotic distributions are given similarly to MMD (Gretton et al. 2009).

V-24
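A short NumPy sketch of HSIC_emp via the centered Gram matrices, HSIC = (1/n²) Tr[G̃_X G̃_Y]; the Gaussian kernels and the toy data are illustrative assumptions:

```python
import numpy as np

def gauss_gram(A, sigma=1.0):
    sq = ((A[:, None, :] - A[None, :, :])**2).sum(-1)
    return np.exp(-sq / (2 * sigma**2))

def hsic_emp(X, Y, sigma_x=1.0, sigma_y=1.0):
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    Kx = H @ gauss_gram(X, sigma_x) @ H          # centered Gram matrices
    Ky = H @ gauss_gram(Y, sigma_y) @ H
    return np.trace(Kx @ Ky) / n**2

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 1))
Y_indep = rng.normal(size=(300, 1))
Y_dep = X**2 + 0.1 * rng.normal(size=(300, 1))   # dependent but uncorrelated with X
print(hsic_emp(X, Y_indep), hsic_emp(X, Y_dep))  # the latter is typically much larger
```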


Comparison with an existing method
Distance covariance (Székely et al., 2007; Székely & Rizzo, 2009)
– Distance covariance / distance correlation has gained popularity in the statistics community as a dependence measure beyond Pearson correlation.

Definition. X, Y: m- and ℓ-dimensional random vectors.
Distance covariance:
V²(X, Y) := E_{XY} E_{X̃Ỹ}[ ‖X − X̃‖ ‖Y − Ỹ‖ ] + E_X E_{X̃}[‖X − X̃‖] E_Y E_{Ỹ}[‖Y − Ỹ‖]
            − 2 E_{XY}[ E_{X̃}[‖X − X̃‖] E_{Ỹ}[‖Y − Ỹ‖] ],
where (X, Y) and (X̃, Ỹ) are i.i.d. ~ P_XY.

Distance correlation:
R²(X, Y) := V²(X, Y) / √( V²(X, X) V²(Y, Y) ).

V-25


Relation between distance covariance and MMD
Proposition 5.4 (Sejdinovic, Gretton, Sriperumbudur, F. ICML 2012)
On Euclidean spaces, the kernel
k(x, x̃) = ‖x‖ + ‖x̃‖ − ‖x − x̃‖
is positive definite, and HSIC(X, Y) computed with kernels of this form for X and Y coincides (up to a constant factor) with the distance covariance V²(X, Y).

– Distance covariance is a specific instance of MMD/HSIC.
– The positive definite kernel approach is more general in the choice of kernels, and thus may perform better.

– Extension:
k(x, x̃) = ‖x‖^α + ‖x̃‖^α − ‖x − x̃‖^α
is positive definite for 0 < α ≤ 2. We can define V²_α(X, Y) := HSIC_α(X, Y).

V-26


Experiments

[Figure: ratio of accepting independence as a function of the dependence strength, comparing the original distance covariance with HSIC using a Gaussian kernel.]

V-27


Choice of kernel for MMD
– Heuristic for the Gaussian kernel (a sketch is given below):
σ = median{ ‖X_i − X_j‖ : i ≠ j, i, j = 1, ..., n }.
– Using the performance of the statistical test: minimize the Type II error of the test statistic (Gretton et al. NIPS 2012).

Challenging open questions remain.
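A minimal sketch of the median heuristic (the data below is an assumed placeholder):

```python
import numpy as np

def median_heuristic(X):
    # sigma = median of pairwise distances ||X_i - X_j||, i < j
    d = np.sqrt(((X[:, None, :] - X[None, :, :])**2).sum(-1))
    return np.median(d[np.triu_indices_from(d, k=1)])

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
sigma = median_heuristic(X)
print(sigma)   # use as the Gaussian bandwidth: k(x, y) = exp(-||x - y||^2 / (2 sigma^2))
```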

V-28


Conditional probabilities and beyond

V-29


Conditional probability
– Conditional probabilities appear in many machine learning problems:
• Regression / classification: direct inference of p(Y|X) or E[Y|X] (already seen in Section II).
• Bayesian inference.
• Conditional independence / dependence:
– Graphical modeling: conditional independence relations among variables.
– Causality.
– Dimension reduction / feature extraction.

V-30

[Diagram: a graphical model over variables X1, ..., X5.]


Conditional kernel mean

Definition.
m_{Y|x} := E[Φ_Y(Y) | X = x] = E[k_Y(·, Y) | X = x].

– Simply, the kernel mean of P_{Y|X=x}.
– It determines the conditional probability when the kernel is characteristic.
– Again, inference problems on conditional probabilities can be solved as inference on conditional kernel means.
– But, how can we estimate it?

V-31


Covariance operator revisited
(Uncentered) cross-covariance operator
X, Y: random variables on Ω_X, Ω_Y. k_X, k_Y: positive definite kernels on Ω_X, Ω_Y with E[k_X(X, X)] < ∞, E[k_Y(Y, Y)] < ∞.

Definition. Uncentered cross-covariance operator:
C_YX ≡ E[Φ_Y(Y) ⊗ Φ_X(X)*] : H_X → H_Y
(or C_YX ∈ H_Y ⊗ H_X* ≅ H_Y ⊗ H_X).

– Reproducing property:
⟨g, C_YX f⟩ = E[f(X) g(Y)]   ∀f ∈ H_X, g ∈ H_Y,
⟨C_YX, g ⊗ f⟩_{H_Y ⊗ H_X} = E[f(X) g(Y)]   ∀(f, g) ∈ H_X × H_Y.

V-32


Conditional probability by regression
– Recall for zero-mean Gaussian random variables (X, Y),
E[Y | X = x] = V_YX V_XX⁻¹ x.
• Given by the solution to the least mean square problem min_A E‖Y − A X‖².

– For the feature vectors,
E[Φ_Y(Y) | X = x] ≈ C_YX C_XX⁻¹ Φ_X(x).
• Given by the solution to the least mean square problem min_F E‖Φ_Y(Y) − F Φ_X(X)‖²_{H_Y}.

C_XX⁻¹ is not well defined in infinite-dimensional cases, but the regularized estimator can be justified.

V-33


Estimator for the conditional kernel mean
– Empirical estimation: given (X₁, Y₁), ..., (X_n, Y_n),
m̂_{Y|x} = Ĉ_YX ( Ĉ_XX + ε_n I )⁻¹ Φ_X(x).

In Gram matrix expression (see the sketch below),
m̂_{Y|x} = ∑_{i=1}^n w_i k_Y(·, Y_i),   w = ( G_X + n ε_n I )⁻¹ k_X(x),
k_X(x) = ( k_X(X₁, x), ..., k_X(X_n, x) )ᵀ,   G_X = ( k_X(X_i, X_j) )_{ij}.

Proposition 5.5 (Consistency)
If k_Y is characteristic, a regularity condition on the joint distribution holds (roughly, the relevant quantity lies in H_Y ⊗ H_X), and ε_n → 0, n ε_n → ∞ as n → ∞, then for every x,
m̂_{Y|x} → E[ k_Y(·, Y) | X = x ]   in H_Y, in probability.
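A hedged NumPy sketch of the Gram-matrix estimator above: the weights w = (G_X + nε_n I)⁻¹ k_X(x) define m̂_{Y|x} = Σ_i w_i k_Y(·, Y_i). The Gaussian kernels, the regularization constant, and the toy regression data are assumptions for illustration, not the authors' reference implementation.

```python
import numpy as np

def gauss_gram(A, B, sigma):
    sq = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return np.exp(-sq / (2 * sigma**2))

rng = np.random.default_rng(0)
n = 300
X = rng.uniform(-2, 2, size=(n, 1))
Y = np.sin(2 * X) + 0.1 * rng.normal(size=(n, 1))    # Y depends on X

eps = 0.001                                          # regularization epsilon_n
Gx = gauss_gram(X, X, sigma=0.5)

def cond_mean_weights(x):
    # w = (G_X + n*eps*I)^{-1} k_X(x);  m_hat_{Y|x} = sum_i w_i k_Y(., Y_i)
    kx = gauss_gram(np.atleast_2d(x), X, sigma=0.5).ravel()
    return np.linalg.solve(Gx + n * eps * np.eye(n), kx)

# With the linear function f(y) = y (an element of the linear-kernel RKHS),
# <f, m_hat_{Y|x}> = sum_i w_i Y_i gives a regression-type prediction (illustration only).
x0 = np.array([0.5])
w = cond_mean_weights(x0)
print((w * Y.ravel()).sum(), np.sin(2 * 0.5))        # roughly close
```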

V-34


Conditional covariance
Review: Gaussian variables
Conditional covariance matrix: V_{YY|X} := V_YY − V_YX V_XX⁻¹ V_XY.
Fact: V_{YY|X} = Cov[Y, Y | X = x] for any x.

Conditional cross-covariance operator
Definition: X, Y, Z: random variables on Ω_X, Ω_Y, Ω_Z. k_X, k_Y, k_Z: positive definite kernels on Ω_X, Ω_Y, Ω_Z.
Σ_{YX|Z} ≡ Σ_YX − Σ_YZ Σ_ZZ⁻¹ Σ_ZX.

– Reproducing averaged conditional covariance
Proposition 5.6. If k_Z is characteristic, then for ∀f ∈ H_X, g ∈ H_Y,
⟨g, Σ_{YX|Z} f⟩ = E_Z[ Cov[ f(X), g(Y) | Z ] ].

V-35


– An interpretation: compare the conditional kernel means of P_{XY|Z=z} and P_{X|Z=z} ⊗ P_{Y|Z=z},
E[ Φ_Y(Y) ⊗ Φ_X(X) | Z = z ] − E[ Φ_Y(Y) | Z = z ] ⊗ E[ Φ_X(X) | Z = z ].
The dependence on z is not easy to handle → average it out:
E_Z[ E[ Φ_Y(Y) ⊗ Φ_X(X) | Z ] − E[ Φ_Y(Y) | Z ] ⊗ E[ Φ_X(X) | Z ] ].

– Empirical estimator:
Σ̂_{YX|Z}^(n) ≡ Σ̂_YX − Σ̂_YZ ( Σ̂_ZZ + ε_n I )⁻¹ Σ̂_ZX.

V-36


Conditional independence
– Recall: for Gaussian random variables,
X ⫫ Y | Z   ⇔   V_{YX|Z} = O.

– Because of the average over Z, Σ_{YX|Z} = O does not imply conditional independence, which requires Cov[f(X), g(Y) | Z = z] = 0 for (almost) every z.

– Trick: consider
Σ_{Y Ẍ | Z},   where Ẍ := (X, Z) and the product kernel k_X k_Z is used for Ẍ.

Theorem 5.7 (Fukumizu et al. JMLR 2004)
Assume k_X, k_Y, k_Z are characteristic. Then
X ⫫ Y | Z   ⇔   Σ_{Y Ẍ | Z} = O.

– Σ_{Ẍ Y | Z} and the version with Ÿ := (Y, Z) can be used similarly.

V-37


Applications of conditional dependence measures

Conditional independence test (Fukumizu et al. NIPS2007)

– The squared HS-norm ‖Σ̂_{Y Ẍ | Z}‖²_HS can be used as a statistic for a conditional independence test.

– Unlike the independence test, the asymptotic null distribution is not available; a permutation test is needed.

– Background: conditional independence testing with continuous, non-Gaussian variables is not easy and remains a challenging open problem.

V-38


Causal inference
– A directed acyclic graph (DAG) is used for representing the causal structure among variables.
– The structure can be learned by conditional independence tests; the above kernel test can be applied (Sun, Janzing, Schölkopf, F. ICML 2007).

Feature extraction / dimension reduction for supervised learning: see next.

V-39

[Diagram: candidate causal structures over X, Y, Z, distinguished by whether X ⫫ Y and whether X ⫫ Y | Z.]


Dimension reduction and conditional independence

Dimension reduction for supervised learning
Input: X = (X₁, ..., X_m), Output: Y (either continuous or discrete).
Goal: find an effective dimension reduction space (EDR space) spanned by an m × d matrix B such that
p(Y | X) = p̃(Y | BᵀX),   where BᵀX = (b₁ᵀX, ..., b_dᵀX) is the linear feature vector.
No further assumptions are made on the conditional p.d.f. p̃.

Conditional independence:
Y ⫫ X | BᵀX.

[Diagram: X is split into U = BᵀX and V; B spans the effective subspace, and only U influences Y.]

V-40


Gradient-based method
(Samarov 1993; Hristache et al. 2001)

Average Derivative Estimation (ADE)
– Assumptions:
• Y is one-dimensional.
• EDR space: p(y | x) = p̃(y | Bᵀx).

– The gradient of the regression function lies in the EDR space at each x:
∂/∂x E[Y | X = x] = ∂/∂x ∫ y p̃(y | Bᵀx) dy = B ∫ y ∂p̃(y | z)/∂z |_{z=Bᵀx} dy ∈ span(B).

→ B̂ = average or PCA of the estimated gradients at many points x.

V-41


Kernel Helps!
– Weaknesses of ADE:
• Difficulty of estimating gradients in high-dimensional spaces: ADE uses local polynomial regression, which is sensitive to the bandwidth.
• It may find only a subspace of the effective subspace,
e.g. Y = f(X₁) + Z, Z ~ N(0, σ(X₂)²): the gradient of E[Y | X = x] contains no X₂ direction, although X ⫫ Y | BᵀX requires both X₁ and X₂.

– Kernel method:
• Can handle the conditional probability in regression form, E[Φ_Y(Y) | X = x] ≈ C_YX C_XX⁻¹ Φ_X(x).
• Characterizes conditional independence.

V-42


Derivative with kernel
– Reproducing the derivative (e.g. Steinwart & Christmann, Chap. 4):
Assume k(·, x) is differentiable and ∂k(·, x)/∂x ∈ H_X. Then
⟨ f, ∂k(·, x)/∂x ⟩ = ∂f(x)/∂x   for any f ∈ H_X.

– Combining this with the estimator of the conditional kernel mean,
M̂(x)_{ij} = ⟨ ∂Ê[Φ_Y(Y) | X = x]/∂x_i , ∂Ê[Φ_Y(Y) | X = x]/∂x_j ⟩_{H_Y},
where
∂Ê[Φ_Y(Y) | X = x]/∂x_i = Ĉ_YX ( Ĉ_XX + ε_n I )⁻¹ ∂k_X(·, x)/∂x_i.

The top eigenvectors of M̂ estimate the EDR space.

V-43


Gradient-based kernel dimension reduction (gKDR)
(Fukumizu & Leng, NIPS 2012)

– Method (a sketch in code is given below):
• Compute
M̃_n = (1/n) ∑_{i=1}^n ∇k_X(X_i)ᵀ ( G_X + n ε_n I )⁻¹ G_Y ( G_X + n ε_n I )⁻¹ ∇k_X(X_i),
where ∇k_X(x) = ( ∂k_X(X₁, x)/∂x, ..., ∂k_X(X_n, x)/∂x )ᵀ (an n × m matrix), G_X = ( k_X(X_i, X_j) ), G_Y = ( k_Y(Y_i, Y_j) ).
• Compute the top d eigenvectors of M̃_n → estimator B̂.

– gKDR estimates the subspace that realizes the conditional independence.

– Choice of kernel: cross-validation with some regressor/classifier, e.g. the kNN method.
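A rough NumPy sketch of the gKDR computation above; the Gaussian kernels, bandwidths, regularization constant, and synthetic data are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def gauss_gram(A, B, sigma):
    sq = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return np.exp(-sq / (2 * sigma**2))

def gkdr(X, Y, d, sigma_x=1.5, sigma_y=1.0, eps=1e-3):
    n, m = X.shape
    Gx = gauss_gram(X, X, sigma_x)
    Gy = gauss_gram(Y, Y, sigma_y)
    R = np.linalg.inv(Gx + n * eps * np.eye(n))      # (G_X + n*eps*I)^{-1}
    F = R @ Gy @ R                                   # middle factor of M_tilde
    M = np.zeros((m, m))
    for i in range(n):
        # d k_X(X_j, x)/dx at x = X_i for the Gaussian kernel: k(X_j, X_i) (X_j - X_i) / sigma^2
        D = Gx[:, [i]] * (X - X[i]) / sigma_x**2     # n x m gradient matrix
        M += D.T @ F @ D
    M /= n
    w, V = np.linalg.eigh(M)
    return V[:, ::-1][:, :d]                         # top-d eigenvectors: estimate of B

rng = np.random.default_rng(0)
n, m = 400, 5
X = rng.normal(size=(n, m))
Y = np.sin(X[:, [0]]) + 0.1 * rng.normal(size=(n, 1))    # only the first coordinate matters
B_hat = gkdr(X, Y, d=1, sigma_x=1.5, sigma_y=0.5)
print(np.round(np.abs(B_hat.ravel()), 2))   # typically concentrates on the first coordinate
```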

V-44


Experiment: ISOLET
– Speech signals of 26 alphabets
– 617 dim. (continuous)
– 6238 training data / 1559 test data (data from UCI repository)

– Results: classification errors for test data by SVM (%)

Dim      10      15      20      25      30      35      40      45      50
gKDR     14.43    7.50    5.00    4.75     -       -       -       -       -
gKDR-v   16.87    7.57    4.75    4.30    3.85    3.85    3.59    3.53    3.08
CCA      13.09    8.66    6.54    6.09     -       -       -       -       -

c.f. C4.5 + ECOC: 6.61%, Neural Networks (best): 3.27%

V-45


Experiment: Amazon Commerce Reviews
– Author identification for Amazon commerce reviews.
– Dim = 10000 (linguistic style: e.g. usage of digits, punctuation, word and sentence lengths, frequency of words, etc.)
– Data size = #authors × 30 (data from UCI repository).

V-46


– [Figure: example of 2-dimensional plots for 3 authors.]

– gKDR (dim = #authors) vs. correlation-based variable selection (dim = 500 / 2000)

10-fold cross-validation errors (%)

#Authors   gKDR+5NN   gKDR+SVM   Corr(500)+SVM   Corr(2000)+SVM
10            9.3       12.0         15.7              8.3
20           16.2       16.2         30.2             18.0
30           20.1       18.0         29.2             24.0
40           22.8       21.8         35.4             25.0
50           22.7       19.5         41.1             29.0

V-47


Bayesian inference with kernels
(Fukumizu, Song, Gretton NIPS 2011)

Bayes' rule:
q(x | y) = p(y | x) π(x) / ∫ p(y | x̃) π(x̃) dx̃.

Kernel Bayes' rule
– m_π: kernel mean of the prior, represented as a weighted sum m̂_π = ∑_{j=1}^ℓ γ_j k_X(·, U_j).
– p(y | x): the relation between X and Y is represented through an i.i.d. sample (X₁, Y₁), ..., (X_n, Y_n) ~ P_XY.
– Goal: compute the kernel mean of the posterior q(x | y), in the form
m̂_{X|y} = ∑_{i=1}^n ŵ_i(y) k_X(·, X_i),
where the weights ŵ(y) are obtained by Gram-matrix operations involving G_X, G_Y, a diagonal matrix Λ built from the prior weights, and regularization (see Fukumizu, Song, Gretton 2011 for the explicit formula).

V-48


Bayesian inference using kernel Bayes' rule

– NO PARAMETRIC MODELS, BUT SAMPLES!

– When is it useful?
• The explicit form of the cond. p.d.f. p(y|x) or the prior π(x) is unavailable, but sampling is easy (Approximate Bayesian Computation (ABC), process priors).
• The cond. p.d.f. p(y|x) is unknown, but a sample from P_XY is given in the training phase (example: nonparametric HMM, shown later).
• If both p(y|x) and π(x) are known, there are many good sampling methods, such as MCMC, SMC, etc., but they may take a long time; KBR uses matrix operations.

V-49


Application: nonparametric HMM
Model: p(X, Y) = π(X₀) ∏_t p(Y_t | X_t) ∏_t q(X_{t+1} | X_t).
– Assume: p(Y_t | X_t) and/or q(X_{t+1} | X_t) is not known, but data (X_t, Y_t) are available in the training phase.
Examples:
• Measurement of the hidden states is expensive.
• Hidden states are measured with a time delay.

– Testing phase (e.g. filtering): given y₀, ..., y_t, estimate the hidden state x_t.
– Sequential filtering/prediction uses Bayes' rule → KBR applied.

[Diagram: hidden Markov model X₀ → X₁ → ⋯ → X_T with observations Y₀, Y₁, ..., Y_T.]

V-50


Camera angles
– Hidden X: angles of a video camera located at a corner of a room.
– Observed Y: movie frame of the room + additive Gaussian noise.
– Data: 3600 downsampled frames of 20 × 20 RGB pixels (1200 dim.).
– The first 1800 frames for training, the second half for testing.

Average MSE for camera angles (10 runs)

noise          KBR (Trace)      Kalman filter (Q)
σ² = 10⁻⁴      0.15 ± 0.01      0.56 ± 0.02
σ² = 10⁻³      0.21 ± 0.01      0.54 ± 0.02

To represent the SO(3) model, Tr[AB⁻¹] is used for KBR, and the quaternion expression for the Kalman filter.

V-51


Summary of Part V
Kernel mean embedding of probabilities
– The kernel mean gives a representation of a probability distribution.
– Inference on probabilities can be cast as inference on kernel means, e.g. two-sample test, independence test.

Conditional probabilities
– Conditional probabilities can be handled with kernel means and covariances:
• conditional independence test
• graphical modeling
• causal inference
• dimension reduction
• Bayesian inference

V-52


References
Fukumizu, K., Bach, F.R., and Jordan, M.I. (2004) Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces. Journal of Machine Learning Research, 5:73-99.
Fukumizu, K., F.R. Bach and M. Jordan. (2009) Kernel dimension reduction in regression. Annals of Statistics, 37(4):1871-1905.
Fukumizu, K., L. Song, A. Gretton. (2011) Kernel Bayes' Rule. Advances in Neural Information Processing Systems 24 (NIPS 2011), 1737-1745.
Gretton, A., K.M. Borgwardt, M. Rasch, B. Schölkopf, A.J. Smola. (2007) A Kernel Method for the Two-Sample-Problem. Advances in Neural Information Processing Systems 19, 513-520.
Gretton, A., Z. Harchaoui, K. Fukumizu, B. Sriperumbudur. (2010) A Fast, Consistent Kernel Two-Sample Test. Advances in Neural Information Processing Systems 22, 673-681.
Gretton, A., K. Fukumizu, C.-H. Teo, L. Song, B. Schölkopf, A. Smola. (2008) A Kernel Statistical Test of Independence. Advances in Neural Information Processing Systems 20, 585-592. V-53


Hristache, M., A. Juditsky, J. Polzehl, and V. Spokoiny. (2001) Structure Adaptive Approach for Dimension Reduction. Annals of Statistics, 29(6):1537-1566.

Samarov, A. (1993). Exploring regression structure using nonparametric functional estimation. Journal of American Statistical Association. 88, 836-847.

Sejdinovic, D., A. Gretton, B. Sriperumbudur, K. Fukumizu. (2012) Hypothesis testing using pairwise distances and associated kernels. Proc. 29th International Conference on Machine Learning (ICML2012).

Sriperumbudur, B.K., A. Gretton, K. Fukumizu, B. Schölkopf, G.R.G. Lanckriet. (2010) Hilbert Space Embeddings and Metrics on Probability Measures. Journal of Machine Learning Research. 11:1517-1561.

Sun, X., D. Janzing, B. Schölkopf and K. Fukumizu. (2007) A kernel-based causal learning algorithm. Proc. 24th Annual International Conference on Machine Learning (ICML2007), 855-862.

Székely, G.J., M.L. Rizzo, and N.K. Bakirov. (2007) Measuring and testing dependence by correlation of distances. Annals of Statistics, 35(6): 2769-2794.

Székely, G.J. and M.L. Rizzo. (2009) Brownian distance covariance. Annals of Applied Statistics, 3(4):1236-1265.

V-54


Appendices

V-55


Statistical Test: quick introduction
How should we set the threshold?
Example) Based on MMD, we wish to decide whether two variables have the same distribution.
Simple-minded idea: set a small value like t = 0.001,
MMD(X, Y) < t → perhaps the same;   MMD(X, Y) ≥ t → different.
But the threshold should depend on the properties of X and Y.

Statistical hypothesis test
– A statistical way of deciding whether a hypothesis is true or not.
– The decision is based on a sample → we cannot be 100% certain.

V-56


Procedure of hypothesis test
• Null hypothesis H0 = hypothesis assumed to be true: "X and Y have the same distribution".
• Prepare a test statistic T_N, e.g. T_N = MMD²_emp.
• Null distribution: distribution of T_N under H0.
• Set the significance level α: typically α = 0.05 or 0.01.
• Compute the critical region: α = Pr(T_N > t under H0).
• Reject the null hypothesis if T_N > t: the probability that MMD²_emp > t under H0 is very small.
Otherwise, accept H0 (negatively). V-57
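In practice the null distribution of a statistic such as MMD²_emp is often approximated by resampling. The following sketch (an illustration, not from the slides) calibrates the threshold by a permutation approximation of the null distribution; the kernel, bandwidth, significance level, and data are assumed placeholders.

```python
import numpy as np

def mmd2_emp(X, Y, sigma=1.0):
    def gram(A, B):
        sq = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
        return np.exp(-sq / (2 * sigma**2))
    return gram(X, X).mean() + gram(Y, Y).mean() - 2 * gram(X, Y).mean()

def permutation_test(X, Y, alpha=0.05, n_perm=500, seed=0):
    rng = np.random.default_rng(seed)
    T = mmd2_emp(X, Y)
    Z = np.vstack([X, Y])
    n = len(X)
    null = []
    for _ in range(n_perm):
        idx = rng.permutation(len(Z))            # re-split the pooled sample at random
        null.append(mmd2_emp(Z[idx[:n]], Z[idx[n:]]))
    threshold = np.quantile(null, 1 - alpha)     # critical value at level alpha
    p_value = np.mean(np.array(null) >= T)
    return T > threshold, p_value

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1, size=(100, 1))
Y = rng.normal(0.5, 1, size=(100, 1))
print(permutation_test(X, Y))    # with these settings H0 is typically rejected
```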


[Figure: p.d.f. of the null distribution of T_N (one-sided test). The critical region is the area α (5%, 1%, etc.) to the right of the threshold t; the p-value is the area to the right of the observed T_N, so T_N > t ⇔ p-value < α (significance level).]

- If H0 is the truth, the value of T_N should follow the null distribution.
- If H1 is the truth, the value of T_N should be very large.
- Set the threshold with risk α.
- The threshold depends on the distribution of the data.

V-58


Type I and Type II error
– Type I error = false positive ("reject H0" = positive)
– Type II error = false negative

                          TRUTH
TEST RESULT       H0                            Alternative
Reject H0         False positive (Type I)       True positive
Accept H0         True negative                 False negative (Type II)

• The significance level α controls the Type I error.
• Under a fixed Type I error, a good test statistic should give a small Type II error.

V-59


MMD: Asymptotic distribution
Under H0
X₁, ..., X_n ~ P, Y₁, ..., Y_ℓ ~ Q: i.i.d. Let N = n + ℓ. Assume n/N → ρ, ℓ/N → 1 − ρ (0 < ρ < 1) as N → ∞.

Under the null hypothesis P = Q,
N · MMD²_emp → ∑_{i=1}^∞ λ_i ( Z_i² − 1/(ρ(1−ρ)) )   in distribution (N → ∞),
where Z₁, Z₂, ... are i.i.d. with law N(0, 1/(ρ(1−ρ))), and λ_i are the eigenvalues of the integral operator on L²(P),
f ↦ ∫ k̃(·, x) f(x) dP(x),
with the centered kernel
k̃(x, y) = k(x, y) − E_{X'}[k(x, X')] − E_{X'}[k(X', y)] + E_{X', X''}[k(X', X'')]
(X', X'': independent copies of X). V-60


Under H1
Under the alternative P ≠ Q,
√N ( MMD²_emp − MMD² ) → N(0, σ²)   in distribution (N → ∞),
where σ² = 4 Var_{(X,Y)}[ E_{(X',Y')}[ h((X,Y), (X',Y')) ] ] with
h((x, y), (x', y')) = k(x, x') + k(y, y') − k(x, y') − k(x', y)
(precise constants as in Gretton et al. JMLR 2012).

– The asymptotic distributions are derived from the general theory of U-statistics (e.g. see van der Vaart 1998, Chapter 12).

– Estimation of the null distribution:
• estimation of the eigenvalues λ_i,
• approximation by a Pearson curve with moment matching,
• bootstrap MMD (Arcones & Gine 1992).

V-61


Conventional methods for the two-sample problem

Kolmogorov-Smirnov (K-S) test for two samples
One-dimensional variables
– Empirical distribution function:
F_N(t) = (1/N) ∑_{i=1}^N I(X_i ≤ t).
– KS test statistic:
D_N = sup_t | F_N1(t) − F_N2(t) |.
– The asymptotic null distribution is known (not shown here).

[Figure: two empirical distribution functions F_N1(t), F_N2(t) on R, with D_N their maximum vertical gap.]
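For reference, the two-sample KS test is available in SciPy; a short usage sketch with placeholder data (assuming SciPy is installed):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(0, 1, 500)
y = rng.standard_t(df=3, size=500) / np.sqrt(3)   # same mean/variance, heavier tails
stat, p = stats.ks_2samp(x, y)                    # D_N and its p-value
print(stat, p)
```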

V-62


Wald-Wolfowitz run test
One-dimensional samples
– Combine the two samples and sort the points in ascending order.
– Label the points according to the original two groups.
– Count the number of "runs", i.e. consecutive sequences of the same label (R = number of runs).
– Test statistic:
T_N = ( R − E[R] ) / √(Var[R]) → N(0, 1).
– In the one-dimensional case, less powerful than the KS test.

Multidimensional extensions of the KS and WW tests
– A minimum spanning tree is used (Friedman & Rafsky 1979).

[Example in figure: a labeled sequence with R = 10 runs.]

V-63