Support Vector Machines (SVMs)

The distance between a hyperplane and a point in R^d:
proving the formula

CMU, 2010 fall, Ziv Bar-Joseph, HW4, pr. 1.1

What is the perpendicular distance of a point x0 to a hyperplane defined by w⊤x + b = 0?

Let H be the hyperplane w⊤x + b = 0 (which is the same as w · x + b = 0). We want to find the projection x∗ of x0 on H. Given that the normal direction vector of H is w, it follows that x∗ = x0 + t w, for some real value t. Furthermore, w⊤x∗ + b = 0. Solving for t, we get:

t = −(b + w⊤x0) / (w⊤w).

Finally, the perpendicular distance of x0 to H is the distance between x0 and x∗:

d(x0, x∗) = √((x0 − x∗)⊤(x0 − x∗)) = √(w⊤w t²) = |w · x0 + b| / √(w⊤w) = |w · x0 + b| / ‖w‖₂.
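As a quick numerical check of the derivation above, here is a minimal sketch (assuming NumPy is available; the hyperplane and the point below are arbitrary choices) that computes the projection x∗ and compares the closed-form distance with the directly measured one.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
w = rng.normal(size=d)                        # normal vector of the hyperplane w·x + b = 0
b = rng.normal()
x0 = rng.normal(size=d)                       # an arbitrary point

t = -(b + w @ x0) / (w @ w)                   # solve w·(x0 + t w) + b = 0 for t
x_star = x0 + t * w                           # projection of x0 onto the hyperplane

dist_formula = abs(w @ x0 + b) / np.linalg.norm(w)
dist_direct = np.linalg.norm(x0 - x_star)

print(abs(w @ x_star + b) < 1e-12)            # x* lies on the hyperplane -> True
print(np.isclose(dist_formula, dist_direct))  # the two distances agree  -> True
```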

From the maximum margin principle to the
primal form of the SVM optimization problem

CMU, 2016 fall, N. Balcan, M. Gormley, HW4, pr. 1.1

In this problem, you will derive the SVM objective from the large margin principle.

Suppose we have the following data D = (X, y), where X ∈ R^{d×m}, the i-th column xi holds the features of the i-th training sample, and yi is the label of the i-th training sample, with y ∈ {−1, 1}^m.

We use the linear discriminant function

f(x) = w · x.

For classification, we classify x into class −1 if f(x) < 0 and into class 1 otherwise. If the data is linearly separable and f is the target function, then y f(x) > 0. This fact allows us to define

γ = y f(x) / ‖w‖₂

as the distance from x to the decision boundary, i.e., the margin.

[Figure: a training point xi at geometric margin γ from the separating hyperplane w · x = 0.]

We would like to make this margin γ as large as possible, i.e., maximize the perpendicular distance to the closest point. Thus our objective function becomes

max_w min_{i=1,...,m} yi f(xi) / ‖w‖₂.   (1)

Show that (1) is equivalent to the following problem:

min_w (1/2) ‖w‖₂²   (2)
such that yi (w · xi) ≥ 1, for i = 1, . . . , m.

Solution

We will prove the equivalence between the optimization problems (1) and (2) by constructing a sequence of equivalent optimization problems, obtained through successive transformations, starting from the first of the two given problems and arriving in the end at the second one.

Consider, therefore, the following optimization problems:

max_w min_{i=1,...,m} yi w · xi / ‖w‖   (3)
s.t. yi (w · xi) > 0, for i = 1, . . . , m.

max_w (1/‖w‖) min_{i=1,...,m} yi w · xi   (4)
s.t. yi (w · xi) > 0, for i = 1, . . . , m.

max_w (1/‖w‖) min_{i=1,...,m} yi w · xi   (5)
s.t. yi (w · xi) ≥ 1, for i = 1, . . . , m.

max_w 1/‖w‖   (6)
s.t. yi (w · xi) ≥ 1, for i = 1, . . . , m.

• Observe that optimization problem (3) differs from the initial problem (1) only through the introduction of some linear constraints. Moreover, these constraints correspond to the hyperplanes that are linear separators of the given training set (which is indeed linearly separable). Therefore, these constraints do not change the solution of the initially given problem at all.^a

• Optimization problem (4) was obtained from problem (3) by modifying only the objective function: the factor 1/‖w‖ was pulled out in front of the operator min_{i=1,...,m} because it does not depend on i. (Note that the expression following this operator does depend on w, but there w is fixed; only xi varies.)

^a For any linear separator of the training set, the expression min_{i=1,...,m} yi f(xi)/‖w‖ has a strictly positive value. For the hyperplanes that do not separate the training set, this expression has a negative or zero value.

• Optimization problem (5) was obtained from problem (4) by replacing the relation > 0 with ≥ 1 in the constraints. In itself, this modification restricts the set of values of w over which the operator max_w is applied.

Consider, however, an arbitrary w′ that is eliminated when passing from form (4) to form (5). Denote γ′ = min_{i=1,...,m} yi w′ · xi (the minimum of the values yi f(xi)); we know that γ′ ∈ (0, 1).

It follows that

yi (w′ · xi) / γ′ = yi (w′/γ′) · xi ≥ 1, for i = 1, . . . , m.

Therefore w″ := w′/γ′ satisfies the constraints of problem (5).

In addition,

1/‖w″‖ = γ′/‖w′‖ and (1/‖w″‖) min_{i=1,...,m} yi w″ · xi = (1/‖w′‖) min_{i=1,...,m} yi w′ · xi,

where the first minimum equals 1 and the second equals γ′.

[Figure: a separator w′ whose minimum margin satisfies 0 < γ′ < 1, together with its rescaled version w″.]

Consequently, we can say that the role played by w′ in problem (4) is taken over by w″ in problem (5). Hence the optimum stays the same when passing from one form to the other.^a

^a From a geometric point of view, the passage from (4) to (5) corresponds to multiplying by a positive constant (greater than one) the equations of those linear separators [of the training set] for which the minimum of the geometric distances to the training instances is strictly smaller than 1, so that after the multiplication this minimum distance becomes 1.

• Optimization problem (6) was obtained from problem (5) by dropping the factor min_{i=1,...,m} yi w · xi from the objective function. This in fact amounts to discarding those values of w for which min_{i=1,...,m} yi w · xi is strictly greater than 1. (We saw above that there exist values of w — namely, w″ — for which this minimum is exactly 1.)

Consider a w‴ with the property γ‴ := min_{i=1,...,m} yi w‴ · xi > 1.

It follows that

min_{i=1,...,m} yi (w‴/γ‴) · xi = 1

and, denoting w^iv = w‴/γ‴, we will have:

(1/‖w^iv‖) min_{i=1,...,m} yi w^iv · xi = 1/‖w^iv‖ = γ‴/‖w‴‖ = (1/‖w‴‖) min_{i=1,...,m} yi w‴ · xi,

where the minimum on the left-hand side equals 1 and the one on the right-hand side equals γ‴.

[Figure: a separator w‴ whose minimum margin satisfies γ‴ > 1, together with its rescaled version w^iv.]

Therefore, [as above,] we can say that the role played by w‴ in problem (5) is taken over by w^iv in problem (6). Consequently, the optimum does not change if these w‴ are discarded as well.^a

^a From a geometric point of view, the passage from (5) to (6) corresponds to multiplying by a positive constant (smaller than one) the equations of those linear separators [of the training set] for which the minimum of the geometric distances to the training instances is strictly greater than 1, so that after the multiplication this minimum distance becomes 1.

• It is immediate that problem (6) is equivalent to the SVM optimization problem (2) given in the statement.

The form of the SVM optimization problem tells us that the optimal separator is chosen among those separators w · x of the training set that attain exactly the value 1 for min_{i=1,...,m} yi w · xi; the choice is made by maximizing 1/‖w‖, i.e., the geometric distance from the separator to the closest training instances. Intuitively, this maximization implies that the distances from the optimal separator to all the closest training instances (that is, the support vectors, both positive and negative) will be 1/‖w‖. (In the accompanying figure, which illustrates this fact, the support vectors are circled.)

[Figure: the optimal separator w · x = 0, with the support vectors circled at distance 1/‖w‖ from it.]
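A small numerical illustration of this characterization (a sketch only, assuming scikit-learn and NumPy are installed, with an arbitrary separable toy set): a soft-margin SVM with a very large C approximates the hard-margin problem, and the closest training points then sit at functional margin ≈ 1, i.e., at geometric distance ≈ 1/‖w‖ from the separator.

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.0, 1.0], [0.0, 2.0], [2.0, 0.0], [3.0, 0.5]])
y = np.array([-1, -1, 1, 1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)   # very large C ~ hard margin
w, b = clf.coef_[0], clf.intercept_[0]

margins = y * (X @ w + b)                     # functional margins y_i (w·x_i + b)
print(margins.min())                          # ≈ 1, attained by the support vectors
print(1.0 / np.linalg.norm(w))                # the geometric margin 1/||w||
```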

The SVM optimization problem – the primal form;
getting the dual form,
the relationship between the primal and the dual solutions,
and the prediction rule

CMU, 2010 fall, Ziv Bar-Joseph, HW4, pr. 1.3-5
[Solution augmented by Liviu Ciortuz]

[Figure: the optimal separating hyperplane D(x) = w · x + w0 = 0, with the level sets D(x) = −1 and D(x) = 1 passing through the support vectors, and the regions D(x) < −1 and D(x) > 1 beyond them. The maximal (geometric) margin is 1/‖w‖, and the geometric margin of an instance xi is D(xi)/‖w‖.]

Consider the instances x1, . . . , xm ∈ R^d and the corresponding labels y1, . . . , ym ∈ {−1, 1}. The hard-margin SVM problem — the term designates the case in which the instances (x1, y1), . . . , (xm, ym) are assumed to be linearly separable — is a convex optimization problem, expressed in primal form as follows:

min_{w,w0} (1/2) ‖w‖²
s.t. (w · xi + w0) yi ≥ 1, for i = 1, . . . , m,
(P)

where w ∈ R^d and w0 ∈ R.

Solving this problem yields a linear model of the form y(x) = w · x + w0, which will subsequently be used for classification, according to the decision function sign(y(x)).

In the lecture we presented the method of Lagrange multipliers, which under certain conditions allows us to solve convex optimization problems with likewise convex constraints — note that (P) is such a problem — by transposing them into a more convenient form called the dual form.

To begin with, we define another function, called the generalized Lagrangian, which combines the objective function of (P) with the expressions appearing on the left-hand side of the constraints:

LP(w, w0, α) := (1/2) ‖w‖² − ∑_{i=1}^{m} αi ((w · xi + w0) yi − 1),

where αi ≥ 0 for i = 1, . . . , m are the so-called Lagrange multipliers, or dual variables. The primal variables are w and w0.

Remark (1)

It is immediate that problem (P) satisfies the so-called Slater condition (see Learning with Kernels, B. Scholkopf, A. Smola, MIT Press, 2002, p. 167):

∃ w ∈ R^d and w0 ∈ R such that (w · xi + w0) yi − 1 > 0, for i = 1, . . . , m.   (7)

Indeed, the fact that the instances (x1, y1), . . . , (xm, ym) are linearly separable immediately implies that the above condition is satisfied.

Consequently, the primal form (P) and the dual form (D) of the SVM problem — the latter will be obtained when solving part b — will have the same optimal value for their respective objective functions. In the standard notation of Lagrange's method, this fact is written d∗ = p∗. This equality is called the strong duality property.

a. By computing the partial derivatives of the function LP with respect to the primal variables w and w0, show that the values of w, w0 and α for which these partial derivatives vanish satisfy the relations:

w = ∑_{i=1}^{m} αi xi yi,   (8)

∑_{i=1}^{m} αi yi = 0.   (9)

Also, show that from the Karush-Kuhn-Tucker (KKT) complementarity condition

αi ((w · xi + w0) yi − 1) = 0, for i = 1, . . . , m,   (10)

one can deduce the relation:

w0 = yi − w · xi, for any i such that αi > 0.   (11)

We first compute the partial derivatives of the function LP with respect to w and, respectively, w0:^a

∂LP/∂w (w, w0, α) = w − ∑_{i=1}^{m} αi xi yi

∂LP/∂w0 (w, w0, α) = −∑_{i=1}^{m} αi yi.

When the optimum of LP is reached (considering its argument α fixed), these partial derivatives become equal to 0. From ∂LP/∂w (w, w0, α) = 0 it follows that

w = ∑_{i=1}^{m} αi xi yi.

Similarly, ∂LP/∂w0 (w, w0, α) = 0 implies the relation ∑_{i=1}^{m} αi yi = 0.

^a In matrix notation, considering w and xi for i = 1, . . . , m as column vectors, we can write the Lagrangian LP as follows:

LP(w, w0, α) = (1/2) w⊤w − ∑_{i=1}^{m} αi ((w⊤xi + w0) yi − 1).

For differentiating LP with respect to the vector w, one uses rules (5a) and (5b) from the document Matrix Identities by Sam Roweis (New York University, June 1999).
Returning to the initial formula, which uses vector notation and the dot product in R^d, one can see that in such a (simple!) case the differentiation rules are similar to those in R.

Then, from the KKT complementarity condition, which is written in the form

αi ((w · xi + w0) yi − 1) = 0, for i = 1, . . . , m,

it follows that for any i ∈ {1, . . . , m} with αi > 0 we have (w · xi + w0) yi − 1 = 0. This relation is equivalent to w · xi + w0 = yi, because yi ∈ {−1, 1}. From this last equality we get

w0 = yi − w · xi.

Remark:

If αi = 0 for all i = 1, . . . , m, then the relation w = ∑_{i=1}^{m} αi yi xi yields w = 0. Consequently, the function f(x) = w · x + w0, which gives the equation of the optimal separator (f(x) = 0), will have no power to discriminate between the positive and the negative instances (regardless of the value assigned to w0, which is a constant).

In fact, when the SVM problem was introduced as a problem of optimizing the margin / the distance 1/‖w‖, it was implicitly assumed that we look for solutions with w ≠ 0.

Therefore, if after solving the primal problem (or the dual one — followed, in the latter case, by applying the relations linking the two types / sets of solutions) we obtain w = 0, then searching for w0 (the optimal value of w0) simply makes no sense.

b. Compute the function LD(α) obtained from the expression of the generalized Lagrangian LP(w, w0, α) by substituting the variable w with ∑_{i=1}^{m} αi xi yi and using the equality ∑_{i=1}^{m} αi yi = 0, according to relations (8) and (9) from the previous part.

Substituting w = ∑_{i=1}^{m} αi xi yi in the expression of LP and taking into account that ∑_{i=1}^{m} αi yi = 0, we obtain:

LD(α) = (1/2) ∑_{i,j} αi αj yi yj xi · xj − ∑_{i=1}^{m} αi ((∑_{j=1}^{m} αj yj xj · xi + w0) yi − 1)

      = ∑_{i=1}^{m} αi − (1/2) ∑_{i,j} αi αj yi yj xi · xj   (12)

Remark (2)

It is now evident that problem (P) can be associated with the following form (called the "dual" form):

max_α LD(α)
s.t. αi ≥ 0, for i = 1, . . . , m
     ∑_{i=1}^{m} αi yi = 0
(D)

in which the constraints are much simpler than they were in the primal form (P).

It should be kept in mind that relations (8) and (11) from part a constitute the link between the solution(s) of problem (D) and the solution of problem (P).

c. Given a new (test) instance xnew, how will you decide its class?

First we compute

f(xnew) = w · xnew + w0 = (∑_{i=1}^{m} αi yi xi) · xnew + yk − w · xk,   (13)

where α is the solution of the dual problem (D), while w and w0, the solutions of the primal problem (P), are computed according to relations (8) and (11) from part a (with k any index such that αk > 0).

Then, if f(xnew) ≥ 0, xnew is classified as positive, otherwise it is classified as negative.
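The prediction rule above can be checked numerically. The sketch below (assuming scikit-learn and NumPy; the toy data are an arbitrary choice) reconstructs w via relation (8) and w0 via relation (11) from the dual solution stored by sklearn (dual_coef_ holds the products αi yi for the support vectors), and classifies a new instance with relation (13).

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.0, 1.0], [0.0, 2.0], [2.0, 0.0], [3.0, 0.5]])
y = np.array([-1, -1, 1, 1])
clf = SVC(kernel="linear", C=1e6).fit(X, y)   # near hard-margin on separable data

sv = clf.support_vectors_                     # the x_i with alpha_i > 0
alpha_y = clf.dual_coef_[0]                   # the products alpha_i * y_i
w = alpha_y @ sv                              # relation (8): w = sum_i alpha_i y_i x_i
k = clf.support_[0]                           # index of one support vector (alpha_k > 0)
w0 = y[k] - w @ X[k]                          # relation (11): w0 = y_k - w · x_k

x_new = np.array([1.0, 1.0])
f_new = w @ x_new + w0                        # relation (13)
print(np.sign(f_new), clf.predict([x_new])[0])   # same predicted class
print(w0, clf.intercept_[0])                     # should agree up to numerical tolerance
```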

Remark (3)

Note that both in the objective function of the SVM optimization problem in dual form (D) and in the function f that serves to classify new instances, the only operations performed on the instances are dot products: xi · xj and, respectively, xi · xnew.

This is what makes it possible to use kernel functions in the SVM framework, which is convenient both from the point of view of (possibly) achieving separability and from the point of view of carrying out the computations efficiently.

The C-SVM optimization problem – the primal form;
getting the dual form,
the relationship between the primal and the dual solutions,
and the prediction rule

CMU, 2012 spring, Ziv Bar-Joseph, HW3, pr. 3.2
[Solution augmented by Liviu Ciortuz]

We train an SVM on a set of input data {xi}, i = 1, . . . , m, considered together with the associated set of labels {yi}, where yi ∈ {−1, 1}. These data are assumed not to be linearly separable.

The objective we pursue here is to maximize the margin while at the same time allowing some (in general, few) of the training data to end up misclassified after training.

Therefore, we consider the optimization problem

min_{w,w0,ξ} ((1/2) ‖w‖² + C ∑_{i=1}^{m} ξi)
s.t. (w · xi + w0) yi ≥ 1 − ξi, for i = 1, . . . , m
     ξi ≥ 0, for i = 1, . . . , m,
(P′)

where ξ := (ξ1, . . . , ξm), and C > 0 is a parameter controlling the trade-off we aim to strike between the size of the margin on one side^a and the slack penalties represented by the variables ξi ≥ 0 on the other side.

^a By margin we understand here the distance between the optimal separating hyperplane defined by the solution (w, w0) of problem (P′) and any of the instances xi for which yi (w · xi + w0) = 1. This distance equals 1/‖w‖.

a. Verify that problem (P′) satisfies the Slater condition.

The Slater condition for problem (P′) reads as follows:

∃ w ∈ R^d, w0 ∈ R and ξ ∈ R^m such that (w · xi + w0) yi − 1 + ξi > 0 and ξi > 0, for i = 1, . . . , m.

Taking w = 0, w0 = 0 and ξi = 2 (in fact, it suffices to take ξi > 1) for i = 1, . . . , m, one sees immediately that this condition holds.

Consequently, the optimum of the primal problem (P′) will coincide with the optimum of the dual problem (see part d).

b. Using dual variables (that is, Lagrange multipliers), write the expression of the generalized Lagrangian corresponding to problem (P′). For each multiplier, specify which constraint of (P′) it corresponds to.

The generalized Lagrangian corresponding to the optimization problem (P′) is:

LP(w, w0, ξ, α, β) = (1/2) ‖w‖² + C ∑_{i=1}^{m} ξi − ∑_{i=1}^{m} αi ((w · xi + w0) yi − 1 + ξi) − ∑_{i=1}^{m} βi ξi

The multipliers αi ≥ 0 correspond to the constraints (w · xi + w0) yi ≥ 1 − ξi, while the multipliers βi ≥ 0 correspond to the constraints ξi ≥ 0.

c. Write the KKT (Karush-Kuhn-Tucker) complementarity conditions corresponding to problem (P′).

αi ((w · xi + w0) yi − 1 + ξi) = 0 and βi ξi = 0, for i = 1, . . . , m.

d. By computing the partial derivatives of the generalized Lagrangian LP(w, w0, ξ, α, β) with respect to the variables w, w0 and ξ, show that the dual form of problem (P′) is:

max_α (∑_{i=1}^{m} αi − (1/2) ∑_{i,j} αi αj yi yj xi · xj)
s.t. 0 ≤ αi ≤ C, for i = 1, . . . , m
     ∑_{i=1}^{m} αi yi = 0.
(D′)

Analyze the differences between (D′) and the dual form of the hard-margin SVM problem (see the definition of (D) at CMU, 2010 fall, Ziv Bar-Joseph, HW4, pr. 1.3-5).

Computing the partial derivatives indicated in the statement, we get:

∂LP/∂w (w, w0, ξ, α, β) = 0 ⇔ w − ∑_{i=1}^{m} αi yi xi = 0 ⇔ w = ∑_{i=1}^{m} αi yi xi

∂LP/∂w0 (w, w0, ξ, α, β) = 0 ⇔ −∑_{i=1}^{m} αi yi = 0 ⇔ ∑_{i=1}^{m} αi yi = 0

∂LP/∂ξi (w, w0, ξ, α, β) = 0 ⇔ C − αi − βi = 0 ⇔ αi + βi = C, for i = 1, . . . , m.

Substituting the first two of these results in the defining expression of LP, we obtain:

LD(α, β) := (1/2) ∑_{i,j} αi αj yi yj xi · xj + C ∑_{i=1}^{m} ξi − ∑_{i=1}^{m} αi (((∑_{j=1}^{m} αj yj xj) · xi + w0) yi − 1 + ξi) − ∑_{i=1}^{m} βi ξi

= (1/2) ∑_{i,j} αi αj yi yj xi · xj + C ∑_{i=1}^{m} ξi − ∑_{i,j} αi αj yi yj xi · xj − w0 ∑_{i=1}^{m} αi yi + ∑_{i=1}^{m} αi − ∑_{i=1}^{m} αi ξi − ∑_{i=1}^{m} βi ξi
   (the term w0 ∑_{i} αi yi vanishes, since ∑_{i} αi yi = 0)

= −(1/2) ∑_{i,j} αi αj yi yj xi · xj + C ∑_{i=1}^{m} ξi + ∑_{i=1}^{m} αi − ∑_{i=1}^{m} (αi + βi) ξi
   (and αi + βi = C, so the terms in ξi cancel)

= ∑_{i=1}^{m} αi − (1/2) ∑_{i,j} αi αj yi yj xi · xj =: LD(α).

Note that when writing the arguments of the Lagrangian LD we have dropped β in the end, since the βi (for i = 1, . . . , m) were eliminated in the course of the computation.

Taking into account that βi = C − αi ≥ 0 implies αi ≤ C, it follows immediately that the dual form associated with problem (P′) is the one indicated in the statement.

Note that the dual form (D′) of the C-SVM (soft-margin) problem differs from the dual form (D) of the hard-margin SVM problem only through the additional constraint αi ≤ C. This means that the "importance" given to any example (xi, yi) in determining the optimal separator becomes bounded.

We mention here that the link between the solutions of problems (P′) and (D′) is given by the relations obtained at parts c and d: first

w = ∑_{i=1}^{m} αi yi xi,   (14)

and then w0 is obtained from the relation (w · xi + w0) yi = 1 − ξi for some i ∈ {1, . . . , m} such that αi > 0, hence

w0 = −w · xi + yi (1 − ξi), with ξi = 0 if in addition αi < C.   (15)

This last equality is implied by the relation αi + βi = C deduced above and the KKT complementarity condition βi ξi = 0.

Remark

If αi = 0 for all i = 1, . . . , m, we obtain w = 0, which is an inadmissible solution.

It remains to treat the case in which ∃ αi = C and for every other αj we have either αj = 0 or αj = C. Note first that from the relation ∑_{i=1}^{m} αi yi = 0 it follows that half of the support vectors are positive instances, while the rest (the other half) are negative instances. Next, αj = C > 0 implies yj (w · xj + w0) = 1 − ξj and βj = 0, hence ξj ≥ 0. On the other hand, αj = 0 implies βj = C and hence ξj = 0, and yj (w · xj + w0) ≥ 1.

Consequently (using ∑_{i:αi=C} yi = 0 to cancel the w0 terms),

∑_i ξi = ∑_{i:αi=C} ξi = ∑_{i:αi=C} (1 − yi (w · xi + w0)) = ∑_{i:αi=C} (1 − yi w · xi)

= |{i : αi = C}| − ∑_{i:αi=C} yi w · xi = |{i : αi = C}| − w · ∑_{i:αi=C} yi xi

= |{i : αi = C}| − (1/C) w · ∑_{i:αi=C} αi yi xi = |{i : αi = C}| − (1/C) ‖w‖².

This shows that the optimum of the primal problem (P′) depends only on w (and not on w0).

Denoting generically by x+ the positive instances (and by ξ+ the value of the corresponding slack variable), and by x− the negative instances (and, correspondingly, ξ−), the constraints to be satisfied are w · x+ + w0 ≥ 1 − ξ+ and w · x− + w0 ≤ −1 + ξ−. Taking into account that ξ+ ≥ 0 and ξ− ≥ 0, we consider that from a practical point of view one can take for w0 the value (1/2)(max_{x+}{w · x+} − min_{x−}{w · x−}). Note that in such a situation the margin will represent the distance between the optimal separator and the farthest correctly classified positive / negative instances. Obviously, this is a (rather) extreme situation with respect to the idea from which the formalization of maximum-margin classification started.

e. Given a solution of the dual problem (D′), how do we identify the support vectors?

Let α be a solution of problem (D′), and let w and w0 be determined as above (see relations (14) and (15)). The support vectors are the instances xi for which αi > 0.

Above (see the end of the solution of part d) we showed that αi ∈ (0, C) ⇒ ξi = 0, hence yi (w · xi + w0) = 1. Alternatively, for αi = C it follows that βi = C − αi = 0, hence yi (w · xi + w0) ≤ 1.^a Similarly, αi = 0 ⇒ βi = C ⇒ ξi = 0, hence yi (w · xi + w0) ≥ 1.

Remark: The three relations deduced above, rewritten in compact form as

αi ∈ (0, C) ⇒ yi (w · xi + w0) = 1
αi = C     ⇒ yi (w · xi + w0) ≤ 1
αi = 0     ⇒ yi (w · xi + w0) ≥ 1

are used as a stopping condition for the SMO algorithm, which in practice solves the soft-margin optimization problem (referred to here and in what follows as C-SVM).

^a If 0 < ξi ≤ 1, then the instance xi lies inside the margin, while if ξi > 1, the instance xi is misclassified.
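The three conditions above translate directly into a stopping test. Below is a small sketch (NumPy assumed; the function name and tolerance are illustrative choices, not part of the original statement) that flags the training points violating their condition, given the dual variables, the functional margins yi (w · xi + w0), and C.

```python
import numpy as np

def kkt_violations(alpha, y_f, C, tol=1e-3):
    """Boolean mask of points violating the soft-margin KKT conditions."""
    alpha = np.asarray(alpha, dtype=float)
    y_f = np.asarray(y_f, dtype=float)
    viol = np.zeros_like(alpha, dtype=bool)
    free = (alpha > tol) & (alpha < C - tol)   # 0 < alpha_i < C  =>  y_i f(x_i) = 1
    viol |= free & (np.abs(y_f - 1.0) > tol)
    at_C = alpha >= C - tol                    # alpha_i = C      =>  y_i f(x_i) <= 1
    viol |= at_C & (y_f > 1.0 + tol)
    at_0 = alpha <= tol                        # alpha_i = 0      =>  y_i f(x_i) >= 1
    viol |= at_0 & (y_f < 1.0 - tol)
    return viol

# Example: only the second point (alpha_i = C but margin > 1) violates its condition.
print(kkt_violations(alpha=[0.0, 5.0, 2.5], y_f=[1.2, 1.4, 1.0], C=5.0))
```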

f. How will a new instance x′ be classified?

Keeping the notations above, the test instance x′ will be classified as positive if w · x′ + w0 ≥ 0, and as negative otherwise.

SVM without constraints,
but using instead the hinge loss function

CMU, 2008 fall, Eric Xing, HW2, pr. 1.2
[Solution augmented by Liviu Ciortuz]

Consider m training instances {(xi, yi)}_{i=1}^{m}. Recall that the soft-margin SVM problem with slack parameter C > 0 can be formulated as a (quadratic) constrained optimization problem:

min_{w,w0,ξ} ((1/2) ‖w‖² + C ∑_{i=1}^{m} ξi)   (16)
s.t. (w · xi + w0) yi ≥ 1 − ξi, for i = 1, . . . , m
     ξi ≥ 0, for i = 1, . . . , m

a. Prove that the formulation above is equivalent to an unconstrained (and likewise quadratic) optimization problem of the form:

min_{w,w0} (‖w‖² + λ ∑_{i=1}^{m} max(1 − yi (w · xi + w0), 0)),   (17)

where λ is a fixed positive real parameter.

Solution

a. From the constraints in the primal form of the C-SVM problem, we have

ξi ≥ 1 − yi (w · xi + w0) and ξi ≥ 0,

which is equivalent to ξi ≥ max(1 − yi (w · xi + w0), 0), for i = 1, . . . , m.

The min operator in the objective function of the C-SVM problem (16) implies that, in the optimal solution (w∗, w∗0, ξ∗) of this problem, ξ∗i will be set exactly to the value max(1 − yi (w∗ · xi + w∗0), 0).

We will show by contradiction that for any optimal solution w∗, w∗0, ξ∗ of the C-SVM problem (16), the pair w∗, w∗0 is also an optimal solution of the new optimization problem (17), if we take λ = 2C.

Indeed, if there existed an optimal solution w̃, w̃0 of problem (17) better than w∗, w∗0,^a then w̃, w̃0 together with ξ̃i := max(1 − yi (w̃ · xi + w̃0), 0) would be, for the C-SVM problem (16), a better solution than the optimal solution w∗, w∗0, ξ∗, which is absurd.

^a "Better" translates as: the value of the objective function [computed for that tuple w, w0] is smaller than...

We will now show that, taking λ = 2C, any optimal solution of the new problem (17) is — through a natural extension, as you will see — an optimal solution of the C-SVM problem (16).

Denoting ξi = max(1 − yi (w · xi + w0), 0), it follows immediately that (i) ξi ≥ 0 and (ii) the objective function ‖w‖² + λ ∑_{i=1}^{m} max(1 − yi (w · xi + w0), 0) can be rewritten as ‖w‖² + λ ∑_{i=1}^{m} ξi.

Further, from (i) we can have either ξi = 0 or ξi > 0.

In the first case, ξi = 0, we have 1 − yi (w · xi + w0) ≤ 0, that is, zi := yi (w · xi + w0) ≥ 1.

In the second case, ξi > 0, or equivalently zi < 1, from ξi = 1 − yi (w · xi + w0) it follows that yi (w · xi + w0) = 1 − ξi.

Therefore, in both situations we have yi (w · xi + w0) ≥ 1 − ξi.

We have thus shown that any optimal solution w̃, w̃0 of problem (17), augmented with ξ̃i := max(1 − yi (w̃ · xi + w̃0), 0), satisfies the constraints of problem (16).

As above, one can show by contradiction that w̃, w̃0, ξ̃ is in fact an optimal solution of problem (16).

b. Express the value of the new parameter λ as a function of the slack parameter C.

λ = 2C, according to the solution of part a.

c. How do you assess, from a "qualitative" point of view, this new formulation of the C-SVM optimization problem?

The newly obtained form of the C-SVM optimization problem is characterized by (i) the absence of constraints on the variables and (ii) the realization of a trade-off between

− simplicity,^a reflected by the term ‖w‖²;
− a good prediction / generalization capacity when the training data are not linearly separable, thanks to the term λ ∑_{i=1}^{m} max(1 − yi (w · xi + w0), 0).

^a Compared with other classifiers, for instance artificial neural networks.

Important remark

It is immediate that the optimization problem (17) is equivalent to the problem

min_{w,w0} (θ ‖w‖² + ∑_{i=1}^{m} max(1 − yi (w · xi + w0), 0)),   (18)

if one considers the (fixed) parameter θ = 1/λ > 0.

Comparing the optimization problem (18) with the optimization problem defining logistic regression with an L2 regularization term, we can observe a great similarity.

The difference consists (only!) in the cost/loss function used: the SVM uses the hinge loss, defined by f(z) = max(1 − z, 0), whereas logistic regression uses the logistic loss ln(1 + e^{−z}).^a

For a unified framework presenting these (very different!) learning methods, based on the minimization of costs/losses, we recommend reading the Supplemental Lecture Notes by John Duchi, Stanford University.

^a Similarly, linear regression uses as loss function the sum of squared errors, while the AdaBoost algorithm uses the [negative-]exponential loss e^{−z}.
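For a concrete feel of this comparison, here is a tiny sketch (NumPy assumed) that evaluates the three loss functions mentioned above — hinge (SVM), logistic (logistic regression) and exponential (AdaBoost) — on a few values of the margin z = y (w · x + w0).

```python
import numpy as np

z = np.linspace(-2.0, 3.0, 6)
hinge = np.maximum(1.0 - z, 0.0)          # SVM: max(1 - z, 0)
logistic = np.log1p(np.exp(-z))           # logistic regression: ln(1 + e^{-z})
exponential = np.exp(-z)                  # AdaBoost: e^{-z}

for zi, h, l, e in zip(z, hinge, logistic, exponential):
    print(f"z={zi:5.1f}  hinge={h:5.2f}  logistic={l:5.2f}  exp={e:5.2f}")
```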

The Sequential Minimal Optimization (SMO) algorithm:
proofs for the update rules

Nello Cristianini, John Shawe-Taylor,
An Introduction to Support Vector Machines,
Cambridge University Press, 2000, pp. 139-140
[adapted by Liviu Ciortuz]

a. Using the standard notations for the C-SVM optimization problem (the primal and dual forms), knowing that the SMO algorithm uses the coordinate ascent optimization strategy, and assuming that the free variables at the current iteration are α1 and α2, prove that

α2^{new, unclipped} = α2 + y2 (E2 − E1) / η

α1^{new, unclipped} = α1 + y1 y2 (α2 − α2^{new, unclipped}),

where

Ek = w · xk + w0 − yk   (with f(xk) := w · xk + w0), for k ∈ {1, 2},

w = ∑_{i=1}^{m} yi αi xi,

η = −‖x1 − x2‖².

Solution

We know that the dual form of the C-SVM optimization problem has the objective function LD(α) = ∑_{i=1}^{m} αi − (1/2) ∑_{i=1}^{m} ∑_{j=1}^{m} yi yj αi αj xi · xj and the constraints ∑_{i=1}^{m} yi αi = 0 and 0 ≤ αi ≤ C for i = 1, . . . , m.

By denoting vi = (∑_{j=3}^{m} yj αj xj) · xi = (w − (y1 α1 x1 + y2 α2 x2)) · xi for i = 1, 2, it follows that

LD(α1, α2) = α1 + α2 − (1/2) α1² x1² − (1/2) α2² x2² − s α1 α2 x1 · x2 − y1 α1 v1 − y2 α2 v2 + const1,

where s := y1 y2.

Since y1 α1 + y2 α2 = −∑_{j>2} yj αj, by multiplying this relation with y1 — which belongs to {−1, 1} — we get

α1 + s α2 = −y1 ∑_{j>2} yj αj =: const2.

Therefore,

α1^old + s α2^old = const2 =: γ = α1^{new, unclipped} + s α2^{new, unclipped}.   (19)

By substituting α1 = γ − s α2 into LD(α1, α2) we get:

LD(α2) = γ − s α2 + α2 − (1/2)(γ − s α2)² x1² − (1/2) α2² x2² − s (γ − s α2) α2 x1 · x2 − y1 (γ − s α2) v1 − y2 α2 v2 + const1.

In order to maximize LD(α2) we first compute its derivative w.r.t. α2, and then equate it to 0:

∂LD(α2)/∂α2 = −s + 1 + (1/2) · 2 s (γ − s α2) x1² − (1/2) · 2 α2 x2² − s γ x1 · x2 + 2 α2 x1 · x2 + y1 s v1 − y2 v2

= −s + 1 + (s γ − α2) x1² − α2 x2² − s γ x1 · x2 + 2 α2 x1 · x2 + y1 s v1 − y2 v2

= −α2 (x1² + x2² − 2 x1 · x2) − s + 1 + s γ x1² − s γ x1 · x2 + y2 v1 − y2 v2   (using y1 s = y1² y2 = y2)

= −α2 (x1 − x2)² − s + 1 + s γ x1² − s γ x1 · x2 + y2 v1 − y2 v2 = 0.

Therefore

α2^{new, unclipped} = (−s + 1 + s γ (x1² − x1 · x2) + y2 (v1 − v2)) / (x1² + x2² − 2 x1 · x2)

= y2 (−y1 + y2 + y1 γ (x1² − x1 · x2) + v1 − v2) / ‖x1 − x2‖²

(∗) = (y2 (f(x1) − y1 − f(x2) + y2) + α2 ‖x1 − x2‖²) / ‖x1 − x2‖²

= α2 + y2 (E1 − E2) / ‖x1 − x2‖² = α2 + y2 (E2 − E1) / (−‖x1 − x2‖²).

Note

The equality (∗) on the previous slide holds because, by denoting f(x) = w · x + w0 = ∑_{j=1}^{m} yj αj xj · x + w0, we can write

v1 − v2 = f(x1) − y1 α1 x1² − y2 α2 x1 · x2 − (f(x2) − y1 α1 x1 · x2 − y2 α2 x2²)

= f(x1) − y1 (γ − s α2) x1² − y2 α2 x1 · x2 − f(x2) + y1 (γ − s α2) x1 · x2 + y2 α2 x2²   (substituting α1 = γ − s α2)

= f(x1) − f(x2) − y1 γ x1² + y2 α2 x1² − y2 α2 x1 · x2 + y1 γ x1 · x2 − y2 α2 x1 · x2 + y2 α2 x2²   (using s y1 = y2)

= f(x1) − f(x2) − y1 γ (x1² − x1 · x2) + y2 α2 (x1² − 2 x1 · x2 + x2²)

= f(x1) − f(x2) − y1 γ (x1² − x1 · x2) + y2 α2 ‖x1 − x2‖².

LC: This could be reformulated as the following Lemma:

Prove that f(x1) − f(x2) = v1 − v2 + y1 γ x1 · (x1 − x2) − y2 α2 ‖x1 − x2‖².

Finally, in order to deduce α1^{new, unclipped}, we use the equality

α1^old + s α2^old = γ = α1^{new, unclipped} + s α2^{new, unclipped}.

So,

α1^{new, unclipped} = γ − s α2^{new, unclipped} = α1^old + s α2^old − s α2^{new, unclipped}

= α1^old + y1 y2 (α2^old − α2^{new, unclipped}).

b. Remember that 0 ≤ αj ≤ C for j ∈ {1, 2}. At each iteration of the SMO algorithm you have to calculate two bounds, L and H, so as to further constrain α2: 0 ≤ L ≤ α2 ≤ H ≤ C.

Prove that (in the context of our problem) these bounds are expressed by the following "clipped" relations:

• If y1 ≠ y2: L = max(0, α2 − α1), H = min(C, C + α2 − α1)

• If y1 = y2: L = max(0, α1 + α2 − C), H = min(α1 + α2, C)

Note: Therefore, the update rules will be:

α2^{new, clipped} =
   H                    if α2^{new, unclipped} > H
   α2^{new, unclipped}  if L ≤ α2^{new, unclipped} ≤ H
   L                    if α2^{new, unclipped} < L
(20)

α1^{new, clipped} = γ − s α2^{new, clipped} = α1^old + s α2^old − s α2^{new, clipped}

= α1^old + y1 y2 (α2^old − α2^{new, clipped}).
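Putting parts a and b together, the per-pair SMO update can be written as a short function. The sketch below (NumPy assumed; the function name and signature are our own choices, not Platt's reference implementation) computes the unclipped value of α2, clips it to [L, H], and then updates α1 so that α1 + s α2 stays constant.

```python
import numpy as np

def smo_pair_update(a1, a2, y1, y2, E1, E2, x1, x2, C):
    eta = -float(np.dot(x1 - x2, x1 - x2))        # eta = -||x1 - x2||^2
    if eta == 0.0:                                 # degenerate pair: no progress possible
        return a1, a2
    a2_unclipped = a2 + y2 * (E2 - E1) / eta       # part a: unclipped update of alpha_2
    if y1 != y2:                                   # part b: feasible interval [L, H]
        L, H = max(0.0, a2 - a1), min(C, C + a2 - a1)
    else:
        L, H = max(0.0, a1 + a2 - C), min(a1 + a2, C)
    a2_new = min(max(a2_unclipped, L), H)          # clip alpha_2 to [L, H]
    a1_new = a1 + y1 * y2 * (a2 - a2_new)          # keep alpha_1 + s*alpha_2 constant
    return a1_new, a2_new
```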

Proof:

y1 ≠ y2 ⇒ α1 − α2 = γ ⇒ α2 = α1 − γ, with the two cases −C ≤ γ < 0 and 0 ≤ γ ≤ C.

[Figure: for y1 ≠ y2, the line α2 = α1 − γ intersected with the box [0, C] × [0, C]; α2 ranges between max(0, −γ) and min(C, C − γ), i.e., between max(0, α2 − α1) and min(C, C + α2 − α1).]

y1 = y2 ⇒ α1 + α2 = γ ⇒ α2 = −α1 + γ, with the two cases 0 ≤ γ < C and C ≤ γ ≤ 2C.

[Figure: for y1 = y2, the line α2 = −α1 + γ intersected with the box [0, C] × [0, C]; α2 ranges between max(0, γ − C) and min(C, γ), i.e., between max(0, α1 + α2 − C) and min(α1 + α2, C).]

This leads to the stated constraints (written in blue on the previous slide).

c. Prove the following complementarity (KKT) constraints for [stopping] the SMO algorithm:

NOT [(αi < C AND yi Ei < −tol) OR (αi > 0 AND yi Ei > tol)], for i = 1, . . . , m,

where tol is a (small) positive constant.

Proof:

We will make use of the following equality: yi Ei = yi (f(xi) − yi) = yi f(xi) − 1.

αi < C :
   αi = 0       ⇒ yi (w · xi + w0) ≥ 1, i.e., yi f(xi) ≥ 1 ⇒ yi Ei ≥ 0
   αi ∈ (0, C)  ⇒ yi f(xi) = 1 ⇒ yi Ei = 0
   ⇒ yi Ei ≥ 0

αi > 0 :
   αi ∈ (0, C)  ⇒ yi f(xi) = 1 ⇒ yi Ei = 0
   αi = C       ⇒ yi f(xi) ≤ 1 ⇒ yi Ei ≤ 0
   ⇒ yi Ei ≤ 0.

Taking p1 := (αi < C), q1 := (yi Ei ≥ 0), p2 := (αi > 0), and q2 := (yi Ei ≤ 0), it follows that

(p1 → q1) ∧ (p2 → q2) ≡ ¬¬[(p1 → q1) ∧ (p2 → q2)] ≡ ¬[¬(p1 → q1) ∨ ¬(p2 → q2)] ≡ ¬[¬(¬p1 ∨ q1) ∨ ¬(¬p2 ∨ q2)] ≡ ¬[(p1 ∧ ¬q1) ∨ (p2 ∧ ¬q2)],

which leads to the stated constraint (up to the tolerance tol).
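Stated as code, the stopping test is a one-liner per training example (a sketch; the function name and the tolerance value are illustrative):

```python
def kkt_satisfied(alpha_i, yE_i, C, tol=1e-3):
    # NOT [(alpha_i < C AND y_i E_i < -tol) OR (alpha_i > 0 AND y_i E_i > tol)]
    return not ((alpha_i < C and yE_i < -tol) or (alpha_i > 0 and yE_i > tol))

# Example: a point with 0 < alpha_i < C must have y_i E_i ≈ 0.
print(kkt_satisfied(alpha_i=2.5, yE_i=0.0004, C=5.0))   # True
print(kkt_satisfied(alpha_i=2.5, yE_i=0.2, C=5.0))      # False (alpha_i > 0 but y_i E_i > tol)
```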

Exemplifying the application of
the Sequential Minimal Optimization (SMO) algorithm

CMU, 2008 fall, Eric Xing, HW2, pr. 1.3
[Solution proposed by Liviu Ciortuz]

Suppose we are given 4 data points in 2-d space: x1 = (0, 1), y1 = −1; x2 = (2, 0), y2 = +1; x3 = (1, 0), y3 = +1; and x4 = (0, 2), y4 = −1. We will use these 4 data points to train a soft-margin linear SVM.

Let α1, α2, α3, α4 be the Lagrange multipliers for x1, x2, x3, x4 respectively. Also let the regularization parameter C be 100.

a. Write down the dual optimization formulation for this problem.

b. Suppose we initialize α1 = 5, α2 = 4, α3 = 8, α4 = 7, and we want to update α1 and α4 (keeping α2 and α3 fixed) in the first iteration. Derive the update equations for α1 and α4 (in terms of α2 and α3). What are the values of α1 and α4 after the update?

c. Now fix α1 and α4, and derive the update equations for α2 and α3 (in terms of α1 and α4). What are the values of α2 and α3 after the update?

Solution

a. Specializing the dual form of the C-SVM optimization problem (see CMU, 2012 spring, Ziv Bar-Joseph, HW3, pr. 3.2) for this data set, after computing all the dot products xi · xj we obtain:^a

max_{α1,α2,α3,α4} α1 + α2 + α3 + α4 − (1/2)(α1² + 4 α2² + α3² + 4 α4² + 4 α1 α4 + 4 α2 α3)
s.t. 0 ≤ αi ≤ 100, for i = 1, . . . , 4
     −α1 + α2 + α3 − α4 = 0

^a Concretely, x1² = x3² = 1, x2² = x4² = 4, x1 · x4 = x2 · x3 = 2, while the remaining products xi · xj with i ≠ j have the value 0.

b. We note that, at this point, the exercise asks us to execute two iterations of the SMO algorithm on the given data, without applying the free-variable selection criterion and the stopping conditions formulated by John Platt, the author of this algorithm.

In what follows we give two solutions.

The first is the simpler one. It follows the idea of the SMO algorithm — which applies an optimization method called coordinate ascent — without actually resorting to the formulas established by John Platt. Instead, we directly optimize the objective functions determined by the choice (imposed by the statement) of the two specified pairs of dual variables.

In the second solution, we apply directly the general formulas for "updating" the free variables in the SMO algorithm. We note that this solution will also be useful to the reader, because it highlights certain details / ways of computing that are not quite "immediate" for someone not yet accustomed to the SMO algorithm.

First solution

i. At the first iteration we initially have α1 = 5, α2 = 4, α3 = 8, α4 = 7, and then the variables α1 and α4 are left "free". Because of the constraint ∑_{i=1}^{4} yi αi = 0 in the dual form of the C-SVM optimization problem (see (D′) at CMU, 2012 spring, Ziv Bar-Joseph, HW3, pr. 3.2), the values of the free variables are linked by the relation: α1^new + α4^new = α2 + α3 = 12.

From the constraint αi ≥ 0 — but also because y1 and y4 have the same sign — it follows that the possible ("feasible") values for α1 and α4 are limited to the interval [0, 12], which is included in the interval [0, C] = [0, 100].

Substituting α2 = 4, α3 = 8 and α4 = 12 − α1 into the objective function of the optimization problem from part a, we obtain the expression of the function we have to maximize at this iteration:

LD,1(α1) := 24 − (1/2)(α1² + 4 · 16 + 8² + 4 (12 − α1)² + 4 · 4 · 8 + 4 α1 (12 − α1))

          = 24 − (1/2)(α1² − 48 α1 + 832)

The value of α1 at which the optimum of this function is attained is denoted α1^{new, unclipped} and is obviously 48/2 = 24. This value lies outside the interval [0, 12]. It is immediate that the maximum of LD,1(α1) on the interval [0, 12] is attained at α1^{new, clipped} = 12. Consequently, α4 receives the value α4^new = 12 − α1^{new, clipped} = 0.

[Plot: the downward parabola LD,1(α1), with its unconstrained maximum at α1 = 24, outside the feasible interval [0, 12].]

ii. At the second iteration, α1 = 12 and α4 = 0 are fixed, while α2 = 4 and α3 = 8 are free. From the relation ∑_{i=1}^{4} yi αi = 0 it follows that

α2^new + α3^new = α1 + α4 = 12.

As above, the interval in which the new values of the variables α2 and α3 are allowed is [0, 12]. The objective function of the convex optimization problem becomes:

LD,2(α2) := 24 − (1/2)(12² + 4 α2² + (12 − α2)² + 4 · 0² + 4 · 12 · 0 + 4 α2 (12 − α2))

          = −(1/2) α2² − 12 α2 − 120 = −(1/2)(α2² + 24 α2 + 240)

The global maximum of this function is attained at α2^{new, unclipped} = −24/2 = −12. This point lies outside the feasibility interval [0, 12]. The maximum of LD,2(α2) on the interval [0, 12] is attained at α2^{new, clipped} = 0. Consequently, α3^new = 12 − α2^{new, clipped} = 12.

[Plot: the downward parabola LD,2(α2), with its unconstrained maximum at α2 = −12, outside the feasible interval [0, 12].]

Second solution

The formulas given by John Platt for updating the free variable αi (paired with αj, at an arbitrary iteration of the SMO algorithm) are:

αi^{new, unclipped} = αi + yi (Ei − Ej) / η

αi^{new, clipped} =
   H                    if αi^{new, unclipped} > H
   αi^{new, unclipped}  if L ≤ αi^{new, unclipped} ≤ H
   L                    if αi^{new, unclipped} < L

where

Ek = w · xk + w0 − yk

w = ∑_{i=1}^{4} yi αi xi,

η = −‖xi − xj‖²

L = max(0, αi − αj) and H = min(C, C + αi − αj), if yi ≠ yj

L = max(0, αi + αj − C) and H = min(C, αi + αj), if yi = yj.

i. At the first iteration, α1 = 5, α2 = 4, α3 = 8, α4 = 7, and η = −‖x1 − x4‖² = −1.
Without doing the computations yet, we write out the errors E1 = f(x1) − y1 = w · x1 + w0 − y1 and E4 = f(x4) − y4 = w · x4 + w0 − y4, so E1 − E4 = w · (x1 − x4).

From the relation w = ∑_{i=1}^{4} yi αi xi, we get:

w = −α1 x1 + α2 x2 + α3 x3 − α4 x4 = −5 (0, 1) + 4 (2, 0) + 8 (1, 0) − 7 (0, 2) = (16, −19),

from which E1 − E4 = w · (x1 − x4) = (16, −19) · (0, −1) = 19, and we can compute

α1^{new, unclipped} = α1 + y1 (E1 − E4) / η = 5 + 19 = 24.

Now we check whether α1^{new, unclipped} lies in the "feasibility" interval: since y1 = y4, we have L = max(0, α1 + α4 − 100) = 0, because α1 + α4 = 12. Similarly, H = min(100, 12) = 12.

Since α1^{new, unclipped} > H = 12, we get α1^{new, clipped} = H = 12 and, consequently, α4^{new, clipped} = 12 − α1^{new, clipped} = 0.

ii. At the second iteration, α1 = 12, α2 = 4, α3 = 8, α4 = 0 and α2^new + α3^new = α2 + α3 = α1 + α4 = 12. Also, η = −‖x2 − x3‖² = −1 and E2 = f(x2) − y2 = w · x2 + w0 − y2, while E3 = f(x3) − y3 = w · x3 + w0 − y3, so E2 − E3 = w · (x2 − x3). We compute w as follows:

w = −α1 x1 + α2 x2 + α3 x3 − α4 x4 = −12 (0, 1) + 4 (2, 0) + 8 (1, 0) − 0 (0, 2) = (16, −12),

so E2 − E3 = (16, −12) · (1, 0) = 16. Therefore,

α2^{new, unclipped} = α2 + y2 (E2 − E3) / η = 4 − 16 = −12.

Since y2 = y3, we have L = max(0, α2 + α3 − C) = 0 and H = min(C, α2 + α3) = 12. Consequently, α2^{new, unclipped} = −12 < L = 0, so in the end we get α2^{new, clipped} = L = 0 and α3^{new, clipped} = 12 − α2^{new, clipped} = 12.

Note that we have recovered the results obtained with the first solution.

Remark

Computing the value of the objective function LD(α), first with the initial values of the parameters αi and then with the values resulting after each of the two iterations, we obtain the values: −284.5, −176, −120. As expected, these numbers are in increasing order.

Had we also computed the values of the parameter w0 appearing in the primal form of the given optimization problem, as well as the values of the slack variables ξi, we could also have computed the values of the objective function (1/2) ‖w‖² + C ∑_i ξi. These values must be in decreasing order and greater than the values determined above for the function LD(α).
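The two iterations above can be replayed numerically. The sketch below (NumPy assumed) evaluates the dual objective LD(α) at the initial values and after each of the two updates, reproducing the increasing sequence quoted in the remark.

```python
import numpy as np

X = np.array([[0, 1], [2, 0], [1, 0], [0, 2]], dtype=float)
y = np.array([-1, 1, 1, -1], dtype=float)
K = X @ X.T                                        # Gram matrix of dot products x_i · x_j

def L_D(alpha):
    z = alpha * y
    return alpha.sum() - 0.5 * z @ K @ z           # sum_i a_i - 1/2 sum_ij a_i a_j y_i y_j x_i·x_j

alphas = [np.array([5.0, 4.0, 8.0, 7.0]),          # initial values
          np.array([12.0, 4.0, 8.0, 0.0]),         # after updating (alpha_1, alpha_4)
          np.array([12.0, 0.0, 12.0, 0.0])]        # after updating (alpha_2, alpha_3)
for a in alphas:
    print(L_D(a))                                  # increasing: -284.5, -176.0, -120.0
```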

SVM with RBF kernel:
sufficient conditions for getting 0 training error

Stanford, 2007 fall, Andrew Ng, HW2, pr. 3

Consider a support vector machine that uses the Gaussian (RBF) kernel K(x, z) = exp(−‖x − z‖²/τ²), where exp() denotes the exponential function. The parameter τ determines the width of the Gaussian "bell".

In parts a and b below we will guide you step by step to prove the following property:

Under the assumption that no two instances in the training data set coincide, one can fix a value of the parameter τ such that the training error produced by this SVM is zero.

In part c it will be shown that this property does not also hold when the C-SVM classifier is used.

a. Suppose the training data set {(x1, y1), . . . , (xm, ym)} consists of points that are separated from one another by a distance of at least ε, that is, ‖xj − xi‖ ≥ ε for any i ≠ j.^a

Recall that the decision function learned by an SVM with kernel function K can be written in the form:

f(x) = ∑_{i=1}^{m} αi yi K(xi, x) + w0,   (21)

where α1, . . . , αm ∈ R+, w0 is the free term / bias from the primal form of the [C-]SVM optimization problem, and K(xi, x) := Φ(xi) · Φ(x), where Φ is the feature map corresponding to the kernel function K.

Find values for α1, . . . , αm, w0, and for the Gaussian parameter τ, such that all points xi are correctly classified by the classifier sign(f(x)).

^a Obviously, if xi ≠ xj for any i ≠ j and, of course, if m is finite, then we can take ε = min_{i≠j} ‖xj − xi‖.

Important remark

In its original context, this formula assumed that the αi are the solutions of the SVM problem in dual form. Here, however, we do not work under this assumption.

In practice, we will choose αi, w0 and τ here so that f (of this form) yields linear separability in the "feature" space containing Φ(x1), . . . , Φ(xm), and hence also (plain, in general non-linear) separability in the original space (containing the instances x1, . . . , xm).

Later (at part b), using the same f, we will show that the SVM problem admits at least one "feasible" solution — hence, by the Slater condition, also an optimal solution — for the primal form (and, in fact, for the dual one too, but this is simply not relevant here) and, consequently, the optimal solution will also satisfy this separability property, which guarantees that the training error is 0.

Suggestions:

1. Verify that, working with yi ∈ {−1, +1}, the prediction made for xi by sign(f(x)) will be correct if |f(xi) − yi| < 1. In other words, verify that the implication |f(xi) − yi| < 1 ⇒ yi f(xi) > 0 holds.^a

2. Fixing αi = 1 for i = 1, . . . , m and w0 = 0, find a value of τ for which the inequality |f(xi) − yi| < 1 is satisfied for i = 1, . . . , m.

Answer

The following equivalences are immediate:

|f(xi) − yi| < 1 ⇔ −1 < f(xi) − yi < 1 ⇔ −1 + yi < f(xi) < 1 + yi.

For yi = −1, the right-hand side of the last double inequality above becomes f(xi) < 0. For yi = 1, the left-hand side of the same double inequality becomes f(xi) > 0. Therefore, if the inequality |f(xi) − yi| < 1 is true, then the instance xi is correctly classified by the function sign(f(x)).

Following the suggestion in the statement, we take αi = 1 for i = 1, . . . , m and w0 = 0.

^a Here f may be thought of in the general case (f : R^d → R), not only as the separator produced by the SVM classifier with RBF kernel.

For an arbitrary training example (xi, yi), we have:

|f(xi) − yi| = |∑_{j=1}^{m} yj K(xj, xi) − yi| = |∑_{j=1}^{m} yj exp(−‖xj − xi‖²/τ²) − yi|

= |yi + ∑_{j≠i} yj exp(−‖xj − xi‖²/τ²) − yi|

= |∑_{j≠i} yj exp(−‖xj − xi‖²/τ²)|

≤ ∑_{j≠i} |yj exp(−‖xj − xi‖²/τ²)| = ∑_{j≠i} |yj| exp(−‖xj − xi‖²/τ²)

= ∑_{j≠i} exp(−‖xj − xi‖²/τ²)

≤ ∑_{j≠i} exp(−ε²/τ²) = (m − 1) exp(−ε²/τ²).

The first of the inequalities above is due to the repeated application of the triangle inequality (|a + b| ≤ |a| + |b|), and the second inequality follows from the assumption that ‖xj − xi‖ ≥ ε for any i ≠ j.

Therefore, in order to have |f(xi) − yi| < 1 for i = 1, . . . , m, it suffices to choose τ such that

(m − 1) exp(−ε²/τ²) < 1,

or, equivalently,^a

τ < ε / √(log(m − 1)).

For example, we can take τ = ε / √(log m).

Summing up, we have shown so far that there is an instantiation of the dual variables (αi = 1) and of the primal variable w0 (namely, w0 = 0) for which the function f(x) = ∑_{i=1}^{m} αi yi K(xi, x) + w0 perfectly separates the given training examples.

Parts b and c below will analyze whether the optimal solutions produced by the SVM and, respectively, the C-SVM on the same training set will also have this property.

We remind you that the optimal solutions of the [C-]SVM optimization problem exist, both for the primal form and for the dual form — and they are linked by the relation w = ∑_{i=1}^{m} αi yi Φ(xi) — if, for instance, the Slater condition is satisfied.

^a Assuming m > 2; for m ≤ 2 the inequality (m − 1) exp(−ε²/τ²) < 1 holds for any τ > 0.
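A numerical sketch of this argument (NumPy assumed; the data set below is an arbitrary choice of distinct points with arbitrary labels): taking αi = 1, w0 = 0 and τ = ε/√(log m), every training point satisfies |f(xi) − yi| < 1 and is therefore classified correctly.

```python
import numpy as np

rng = np.random.default_rng(1)
m, d = 20, 3
X = rng.normal(size=(m, d))                        # distinct points (with probability 1)
y = np.where(rng.random(m) < 0.5, -1.0, 1.0)       # arbitrary labels

dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
eps = dists[dists > 0].min()                       # minimal pairwise distance epsilon
tau = eps / np.sqrt(np.log(m))

K = np.exp(-(dists ** 2) / tau ** 2)               # RBF kernel matrix K(x_i, x_j)
f = K @ y                                          # f(x_i) = sum_j y_j K(x_j, x_i), alpha_j = 1, w0 = 0

print(np.max(np.abs(f - y)) < 1.0)                 # |f(x_i) - y_i| < 1 for all i -> True
print(np.all(np.sign(f) == y))                     # zero training error          -> True
```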

Page 67: Support Vector Machines (SVMs)ciortuz/ML.ex-book/SLIDES/ML... · 2020-02-20 · CMU, 2016 fall, N. Balcan, M. Gormley, HW4, ... the i-th column xi are the features of the i-th training

b. Presupunem ca rulam o SVM fara variabile de ,,destindere“ (engl., slackvariables) folosind pentru parametrul τ valoarea pe care ati gasit-o la punctulprecedent.Va obtine oare acest clasificator (ın mod necesar) eroare de antrenare zero?De ce da, sau de ce nu?

Raspuns

Datorita faptului ca f(x) =∑m

i=1 αiyiK(xi, x) + w0 =∑m

i=1 αiyiΦ(xi) · Φ(x) + w0,rationamentul prin care am facut alegerea valorii parametrului τ de la punctula este ın sine o demonstratie a faptului ca multimea {Φ(xi)}mi=1, unde Φ estemaparea corespunzatoare nucleului RBF este liniar separabila.

We will show that the function f underlying the linear-separability property obtained in the feature space (the space containing Φ(x1), . . . , Φ(xm)) can be put in correspondence with a pair w, w0 that satisfies Slater's condition in that feature space.

Slater's condition for the SVM optimization problem is the following: there exists a strictly 'feasible' solution, that is, an assignment for w′ and w′0 such that the constraints of the primal SVM problem are satisfied with strict inequality: yi(w′ · Φ(xi) + w′0) > 1 for i = 1, . . . , m.

66.

Let i ∈ {1, . . . , m} be fixed. Taking α1 = 1, . . . , αm = 1, w0 = 0, f(x) = ∑_{i=1}^m αi yi K(xi, x) + w0 and τ as at part a, we have |f(xi) − yi| < 1, hence yi f(xi) > 0. Denote w = ∑_{j=1}^m αj yj Φ(xj). Consequently,

yi(w · Φ(xi) + w0) = yi w · Φ(xi) = yi ∑_{j=1}^m αj yj Φ(xj) · Φ(xi) = yi ∑_{j=1}^m αj yj K(xj, xi) = yi f(xi) > 0.

Therefore,

yi(w · Φ(xi) + w0) = yi ∑_{j=1}^m αj yj K(xj, xi) > 0 for i = 1, . . . , m.

Obviously, we can multiply all αi by a positive constant so that the previous relation becomes

yi ∑_{j=1}^m αj yj K(xj, xi) > 1 for i = 1, . . . , m.

Thus we have found a strictly 'feasible' solution (w′, w′0) for our optimization problem. Consequently, Slater's condition is fulfilled.

67.

In conclusion, the optimal solution (w, w0) of this problem will indeed be found by the SVM with RBF kernel, while the linear separability of the set {Φ(xi)}_{i=1}^m in the feature space guarantees that this optimal solution yields zero training error.

c. Suppose we train a C-SVM (i.e., an SVM with slack variables) on the data specified above, using for the parameter τ the value chosen at part a, and for the parameter C a value that is fixed arbitrarily but not known in advance.

Will this classifier (necessarily) achieve zero training error? Why, or why not?

68.

Answer

The C-SVM classifier with RBF kernel will not necessarily achieve zero training error on the data set under consideration, even if we assign to the parameter τ the value found at part a.

[In general, for the convex C-SVM optimization problem, the optimal solution (w, w0, ξ) obtained for a fixed value of the slack parameter C does not necessarily yield zero training error, even if the training data are linearly separable. There may exist another triple (w′, w′0, ξ′) for which the resulting training error is 0, but for which (1/2)‖w′‖² + C ∑_i ξ′i > (1/2)‖w‖² + C ∑_i ξi.]

For example, consider the extreme case C = 0. In this case the objective function is (1/2)‖w‖² and, obviously, w = 0 is an [optimal] solution of the C-SVM optimization problem, regardless of the choice of the parameter τ. However, under these conditions we have no guarantee that the training error is 0.
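A small illustration (a hedged sketch assuming scikit-learn, whose SVC uses the parameterization K(x, x′) = exp(−γ‖x − x′‖²), i.e., γ = 1/τ²; data are synthetic): with the same RBF width, a very small C typically leaves training errors, while a very large C (approximating the hard margin) drives the training error to zero.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    X = rng.normal(size=(40, 2))
    y = rng.choice([-1, 1], size=40)

    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    tau = D[D > 0].min() / np.sqrt(np.log(len(X)))   # the tau from part a
    gamma = 1.0 / tau ** 2                           # K(x, x') = exp(-||x - x'||^2 / tau^2)

    for C in (1e-3, 1e6):
        clf = SVC(C=C, kernel="rbf", gamma=gamma).fit(X, y)
        print(f"C = {C}: training error = {np.mean(clf.predict(X) != y):.2f}")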

69.


The RBF (Gaussian) Kernel:

Some interesting properties

MIT, 2009 fall, Tommi Jaakkola, HW2, pr. 1

70.


We can write the radial basis kernel in the following form:

K(x, x′) = exp( −(1/(2σ²)) ‖x − x′‖² ),

where x and x′ belong to R^d, and σ is a width parameter specifying how quickly the kernel vanishes as the points move further away from each other.

We will show that this kernel has some remarkable properties: it can perfectly separate any finite set of distinct training points. Moreover, this result holds for any positive finite value of σ.

[However, while the kernel width does not affect whether we’ll be able to per-fectly separate the training points, it does affect generalization performance.]

71.


Let’s proceed in stages.

We’ll first show that the optimisation problem

minimize (1/2)‖w‖²   subject to   yi w · φ(xi) = 1, i = 1, . . . , n

has a solution regardless of how we set the ±1 training labels yi. Here φ(xi) is the feature vector (function actually) corresponding to the radial basis kernel K.

Note: Our formulation here is a bit non-standard for two reasons:

1. We try to find a solution where all the points are support vectors.
2. We also omit the bias term since it is not needed for the result.

72.

a. Introduce Lagrange multipliers for the constraints [similarly to finding the SVM solution] and show the form that the solution w∗ has to take, i.e. express w as a function of the Lagrange multipliers. (This should not involve lengthy calculations.)

Notes:

i. The Lagrange multipliers here are no longer constrained to be positive. (Since you are trying to satisfy equality constraints, the Lagrange multipliers can take any real value.)
ii. You can assume that w and φ(xi) are finite vectors for the purposes of these calculations.

b. Put the resulting solution back into the classification (margin) constraints and express the result in terms of a linear combination of the radial basis kernels.

73.

Solution

a. The Lagrangian for this optimization problem is:

L(w, α) = (1/2)‖w‖² − ∑_{i=1}^n αi (yi w · φ(xi) − 1) = (1/2)‖w‖² − w · ( ∑_{i=1}^n αi yi φ(xi) ) + ∑_{i=1}^n αi.

Here each αi is unconstrained, because we have equality constraints rather than inequality constraints. As usual, the dual optimization problem is

max_α g(α) = max_α min_w L(w, α),   where g(α) := min_w L(w, α).

For a fixed α, the expression L(w, α) is positively quadratic in w. We can obtain the optimal w∗ from the first-order condition ∂L(w, α)/∂w = 0:

w∗ = ∑_{j=1}^n α∗_j yj φ(xj).

For convenience, we will use the short-hand w∗ = Φ[y • α∗]. Here • represents an element-wise product and Φ is an m × n matrix whose ith column is φ(xi). (Of course, m = ∞ for the RBF kernel.)

74.


b. Since yi ∈ {−1, +1} (so 1/yi = yi), our constraints yi w · φ(xi) = 1 are equivalent to:

φ(xi)⊤w = yi, i = 1, . . . , n.

Using matrix short-hand notation and substituting w∗ = Φ[y • α∗], we obtain:

Φ⊤w∗ = y

Φ⊤Φ[y • α∗] = y

K[y • α∗] = y,

where K := Φ⊤Φ denotes the kernel matrix (or Gram matrix).

75.

c. Indicate briefly how we can use the following Michelli theorem to show that any n × n RBF kernel matrix Kij = K(xi, xj), i, j = 1, . . . , n, is invertible.

Theorem (Michelli): If ρ(t) is a monotonic function in t ∈ [0, ∞), then the matrix ρij = ρ(‖xi − xj‖) is invertible for any distinct set of points xi, i = 1, . . . , n.

d. Based on the above results, put together the argument to show that we can indeed find a solution where all the points are support vectors.

76.

c. Note that ρ(t) = exp(−(1/(2σ²)) t²) is a monotonic function in t ∈ [0, ∞). Using the Michelli theorem, for any distinct set of points xi, i = 1, . . . , n, the matrix K, with entries Kij = exp(−(1/(2σ²)) ‖xi − xj‖²), is invertible.

d. As we have a distinct set of points, K is invertible. Then the linear system K[y • α∗] = y is feasible, and has a unique solution given by α∗ = y • (K⁻¹y). Therefore, w∗ = Φ[y • α∗] = ΦK⁻¹y.
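A minimal numerical sketch of this construction (assuming numpy; not part of the original solution): build the RBF Gram matrix for distinct points, solve K(y • α) = y, and check that every point satisfies its equality constraint yi (w∗ · φ(xi)) = 1, i.e., all points are support vectors.

    import numpy as np

    rng = np.random.default_rng(0)
    n, sigma = 10, 0.7
    X = rng.normal(size=(n, 2))
    y = rng.choice([-1.0, 1.0], size=n)

    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-D2 / (2 * sigma ** 2))      # K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))

    v = np.linalg.solve(K, y)               # v = y • alpha* = K^{-1} y
    print(np.allclose(y * (K @ v), 1.0))    # True: y_i (w*·phi(x_i)) = 1 for every i
    print("geometric margin 1/||w*|| =", 1 / np.sqrt(v @ K @ v))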

77.

e. Of course, the fact that we can in principle separate any set of training examples does not mean that our classifier does well (on the contrary). So, why do we use the radial basis kernel? The reason has to do with the margin that we can attain by varying σ. Note that the effect of varying σ on the margin is not simple rescaling of the feature vectors. Indeed, for the radial basis kernel we have

‖φ(x)‖² = φ(x) · φ(x) = K(x, x) = 1.

Let’s begin by setting σ to a very small positive value. What is the margin that we attain in response to any n distinct training points?

f. Provide a 1-dimensional example to show how the margin can be larger than the answer to part e. You are free to set σ and the points so as to highlight how they might “contribute to each other’s margin”.

78.

e. As σ → 0, the points become very far apart with respect to σ, and our kernel matrix K → I, the identity matrix. Because our constraints dictate that K[y • α∗] = y, then α∗ → 1, the all-ones vector. Therefore, ‖w∗‖² = [y • α∗]⊤K[y • α∗] → y⊤Iy = n, and we obtain a margin of 1/√n in the limit.

f. The simplest example to create is a set of 2 distinct points x and x′, both labeled +1. Denote k = K(x, x′) = exp(−(1/(2σ²)) ‖x − x′‖²). The kernel matrix can be written as:

K = [ 1  k ]
    [ k  1 ]

Solving the system K[1 • α] = 1 yields the solution α∗ = [1/(k + 1), 1/(k + 1)]⊤ and ‖w∗‖² = α∗⊤Kα∗ = 2/(k + 1). Therefore, the margin is 1/‖w∗‖ = √((k + 1)/2). As long as x and x′ are distinct, we have that k > 0, so the margin is always greater than √(1/2).
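A quick numerical check of this two-point example (assuming numpy; the point positions and σ below are hypothetical values chosen for illustration):

    import numpy as np

    x, xp, sigma = 0.0, 0.5, 1.0                     # two 1-d points, both labeled +1
    k = np.exp(-(x - xp) ** 2 / (2 * sigma ** 2))
    K = np.array([[1.0, k], [k, 1.0]])

    alpha = np.linalg.solve(K, np.ones(2))           # solves K alpha = 1
    print(np.allclose(alpha, 1 / (k + 1)))           # True
    print("margin =", 1 / np.sqrt(alpha @ K @ alpha),
          " formula sqrt((k+1)/2) =", np.sqrt((k + 1) / 2))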

79.

Remark

As we take the 2 points arbitrarily close together (or, alternatively, imagine that σ → +∞), then k → 1, and we obtain a margin of 1.

Is 1 always the largest possible margin that we can obtain?

For the RBF kernel, one can think of the corresponding infinite-dimensional feature vectors φ(xi) as lying on the unit ball, as they are unit-normalized: ‖φ(xi)‖² = K(xi, xi) = 1. So yes, the largest possible margin must be 1.

Intuitively, as σ → +∞, kernels centered at distinct points gradually become indistinguishable. In effect, all the feature vectors collapse onto each other (to a single point) on the unit ball. As they do, the margin goes to 1 in the limit.

80.


One-class (Max Margin) SVM,

the hard margin version

Stanford, 2007 fall, Andrew Ng, practice midterm, pr. 4

81.

Given a set of unlabeled instances x1, . . . , xm ∈ R^d, a one-class SVM algorithm seeks to identify (if possible) a direction w ∈ R^d that maximally separates the data from the origin of the coordinate system, as in the adjacent figure.

More precisely, this algorithm solves the following optimization problem (given in primal form):^1

min_w (1/2)‖w‖²
s.t.  w · xi ≥ 1, for i = 1, . . . , m.

A new (test) instance will be labeled + if w · x ≥ 1, and − otherwise.

Credit: Tommi Jaakkola, MIT, ML course, 2009 fall, lecture notes 5.

^1 At MIT, 2009 fall, Tommi Jaakkola, ML course, lecture notes 5, another variant of the one-class SVM problem is given, defined by means of the minimum enclosing ball (MEB). In order to distinguish the two versions from each other more easily, we will call the variant considered here Max Margin.

82.

Remark: Such an algorithm is useful for anomaly detection. In such situations, we are first given a set of data considered 'normal'. Then we are asked to decide, for other instances, whether or not they are 'anomalies' (outliers).

a. Write the dual form corresponding to the one-class SVM optimization problem above. Simplify the answer as much as possible; w must not appear in the final result. Check whether the primal form satisfies Slater's condition.

We can check from the outset that the optimization problem given in primal form in the statement satisfies Slater's condition. If there exists a hyperplane passing through the origin of the coordinate system that 'leaves' all the instances xi, i = 1, . . . , m, on one and the same side, this means that there exists w ∈ R^d such that w · xi > 0 for i = 1, . . . , m. Since m is finite, multiplying this w by a suitable positive constant yields w · xi > 1 for i = 1, . . . , m, so Slater's condition is satisfied.

83.

The generalized Lagrangian corresponding to the given primal problem is:

L_P(w, α) = (1/2) w · w + ∑_{i=1}^m αi (1 − w · xi),

with αi ≥ 0 for i = 1, . . . , m.

Setting the partial derivative of L_P with respect to w equal to 0, we obtain w = ∑_{i=1}^m αi xi. Substituting this equality into the expression of L_P, we get:

L_D(α) = (1/2) ( ∑_{i=1}^m αi xi ) · ( ∑_{j=1}^m αj xj ) + ∑_{i=1}^m αi ( 1 − ( ∑_{j=1}^m αj xj ) · xi )

       = ∑_{i=1}^m αi − (1/2) ∑_{i=1}^m ∑_{j=1}^m αi αj xi · xj.

Thus the form of L_D coincides with that of the hard-margin SVM problem, but in the present case the dual problem is simpler: max_{α ≥ 0} L_D(α).

84.

b. Could we 'kernelize' the one-class SVM algorithm both at training and at test time? That is: given a kernel function K, is it possible that, after the corresponding mapping, the variables x appear

− both in the expression of the Lagrangian (L_D) representing the objective function of the dual form of the one-class SVM problem (the Max Margin variant),

− and in the decision / classification function for a new instance x′,

only as arguments of the kernel function K?

Regarding kernelization at training time, the answer is immediately affirmative, because in the expression of the Lagrangian L_D derived at the previous part, xi and xj appear only as [pairs of] factors in inner products. Concretely, after 'mapping' with the function Φ corresponding to the kernel K, we have:

L_D(α) = ∑_{i=1}^m αi − (1/2) ∑_{i=1}^m ∑_{j=1}^m αi αj Φ(xi) · Φ(xj) = ∑_{i=1}^m αi − (1/2) ∑_{i=1}^m ∑_{j=1}^m αi αj K(xi, xj).

For testing, the answer is also affirmative: given a test instance x′, it will be classified according to the value of the expression

• w · x′ = ( ∑_{i=1}^m αi xi ) · x′ = ∑_{i=1}^m αi xi · x′, when no mapping is used;

• w · x′ = ∑_{i=1}^m αi Φ(xi) · Φ(x′) = ∑_{i=1}^m αi K(xi, x′), when the mapping is used.

85.

Remark

The images below illustrate the result of using two RBF kernel functions in conjunction with the one-class SVM (Max Margin variant) on the data set from the statement. For the result illustrated on the right-hand side, a smaller value was used for the parameter σ² in the definition of the RBF kernel.

Credit: Tommi Jaakkola, MIT, ML course, 2009 fall, lecture notes 5.

86.

c. Devise an SMO-type algorithm^a that solves the dual problem obtained at part a. The basic idea of such an algorithm is, at each step, to produce the optimal solution over the smallest of all possible subsets of variables. Give closed-form formulas for updating the variables in this subset. You must justify / explain why it is sufficient to consider that many variables at a time at each step.

Since the formulation given in the statement for the one-class SVM problem does not use the bias term w0, the dual form of the optimization problem has no constraint of the type ∑_{i=1}^n yi αi = 0 (as was the case at CMU, 2008 fall, Eric Xing, HW2, pr. 1.3). Therefore, we will still use coordinate ascent, as in the classical SMO algorithm, but at each iteration we will select only one Lagrange variable (αi) at a time.

From the expression of the dual Lagrangian L_D(α) computed at part a, we obtain the function to be optimized at the current iteration:

L(αi) = αi + ∑_{j≠i} αj − ∑_{j≠i} αi αj xi · xj − (1/2) αi² xi · xi + const.

^a See CMU, 2008 fall, Eric Xing, HW2, pr. 1.3 and (especially!) the bibliographic references indicated there.

87.

The maximum point is given by the zero of the first-order derivative of this function:

∂L(αi)/∂αi = 0 ⇔ 1 − ∑_{j≠i} αj xj · xi − αi xi · xi = 0 ⇔ αi^{new, unclipped} = (1 − ∑_{j≠i} αj xj · xi) / (xi · xi).

Taking into account the constraint αi ≥ 0, the new value assigned to the chosen variable is

αi^{new, clipped} = max{ 0, (1 − ∑_{j≠i} αj xj · xi) / (xi · xi) }.

It remains to specify:

− the criterion for selecting the 'free' variable; preferably, this is done so as to obtain the largest possible increase of the objective function value from one iteration to the next;

− the stopping criterion of the algorithm; for instance, once the increase of the objective function from one iteration to the next becomes insignificant, it is pointless to run further iterations. A minimal sketch of this coordinate-ascent scheme is given below.
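The following sketch (assuming numpy; not part of the original solution) implements the clipped closed-form update above, with cyclic variable selection and the simple stopping criterion of negligible change:

    import numpy as np

    def one_class_svm_coordinate_ascent(X, n_epochs=100, tol=1e-8):
        m = len(X)
        K = X @ X.T                      # Gram matrix; use a kernel matrix here to kernelize
        alpha = np.zeros(m)
        for _ in range(n_epochs):
            alpha_old = alpha.copy()
            for i in range(m):
                rest = K[i] @ alpha - K[i, i] * alpha[i]      # sum_{j != i} alpha_j x_j . x_i
                alpha[i] = max(0.0, (1.0 - rest) / K[i, i])   # clipped closed-form update
            if np.abs(alpha - alpha_old).max() < tol:         # stop when progress is negligible
                break
        return alpha                     # then w = sum_i alpha_i x_i = alpha @ X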

88.


ν-SVM

B. Schoelkopf, A. Smola, 2002

89.

Comment: The slack / regularization parameter C > 0 from the soft-margin SVM formulation (introduced by Cortes and Vapnik in Support Vector Networks, 1995)^a allows a trade-off between two conflicting objectives: maximizing the margin^b and minimizing the training error. Large values of C lead to a low level of the sum of margin errors (∑_{i=1}^m ξi, in the usual notation). Conversely, small values of C lead to a high level of the sum of errors. However, the meaning of the parameter C is not very intuitive, and its value cannot be determined a priori.

For this reason, instead of this soft-margin SVM variant, B. Schölkopf, A. Smola, R. Williamson and P. Bartlett proposed, in the article New Support Vector Algorithms,^c a different approach, which uses another numerical parameter, ν, such that if m denotes the number of training instances, then νm will upper-bound the number of errors produced at training time.^d

It can be shown that νm is at the same time a lower bound on the number of support vectors.^e

This variant is known as ν-SVM.

^a See CMU, 2012 spring, Ziv Bar-Joseph, HW3, pr. 3.2.
^b Defined as the distance from the optimal separating hyperplane w · x + w0 = 0 to the support vectors xi for which (w · xi + w0) yi = 1.
^c Published in the journal Neural Computation, 12:1207–1245, 2000.
^d See the explanation on the next slide.
^e See the solution of part a below.

90.

[LC:]

At CMU, 2017 fall, Nina Balcan, HW4, pr. 4.Q12 it was shown that, in the context of the C-SVM optimization problem, the number of errors committed at training time is upper-bounded by the sum of the slack variables (∑_i ξi).

Similarly, for problem (P′′) below it can be shown immediately, by analyzing only(!) the form of the constraints, that the number of errors committed at training time is here upper-bounded by (1/ρ) ∑_i ξi.

Consequently, if we impose the condition of having at most νm errors at training time, this corresponds to

νm ≤ (1/ρ) ∑_i ξi ⇔ νρ ≤ (1/m) ∑_i ξi.

This explains the form of the objective function in problem (P′′) below.

91.

Credit: B. Schölkopf, A. Smola, Learning with Kernels, MIT Press, 2002, p. 207.

92.

The ν-SVM optimization problem has the following primal form:

min_{w, w0, ξ, ρ} ( (1/2)‖w‖² − νρ + (1/m) ∑_{i=1}^m ξi )
s.t.  yi(w · xi + w0) ≥ ρ − ξi, for i = 1, . . . , m
      ξi ≥ 0 for i = 1, . . . , m
      ρ ≥ 0.                                          (P′′)

Note the presence of the additional variable ρ, which must be subjected to the optimization process just like the variables w, w0 and ξ := (ξ1, . . . , ξm).^a

a. Derive the dual form corresponding to the ν-SVM problem. Simplify the result as much as possible.

^a From the form of the linear constraints it follows that the distance from the optimal separator to the support vectors that are not erroneously classified with respect to the margin — i.e., those for which the corresponding Lagrange multiplier αi belongs to the interval (0, 1/m), according to the dual problem (D′′) below — will be ρ/‖w‖.

93.

We will follow the 'methodological' lines used at CMU, 2010 fall, Ziv Bar-Joseph, HW4, pr. 1.3-5 and CMU, 2012 spring, Ziv Bar-Joseph, HW3, pr. 3.2. First, it is easy to check that the optimization problem (P′′) satisfies Slater's condition: taking w = 0, w0 = 0, ρ = 0 and ξi = 1, the constraints yi(w · xi + w0) > ρ − ξi are satisfied for i = 1, . . . , m. Consequently, instead of solving problem (P′′), we may choose to solve its dual.

Next, the generalized Lagrangian for problem (P′′) is

L_P(w, w0, ξ, ρ, α, β, δ) = (1/2)‖w‖² − νρ + (1/m) ∑_{i=1}^m ξi − ∑_{i=1}^m αi (yi(w · xi + w0) − ρ + ξi) − ∑_{i=1}^m βi ξi − δρ,

where αi, βi and δ are Lagrange multipliers.

Computing the partial derivatives of L_P with respect to the primal variables w, w0, ξ and ρ, and then setting them equal to 0, we immediately obtain:

w = ∑_{i=1}^m αi yi xi
∑_{i=1}^m αi yi = 0
αi + βi = 1/m for i = 1, . . . , m
∑_{i=1}^m αi − δ = ν.

94.

From the relation αi + βi = 1/m, taking into account that αi ≥ 0 and βi ≥ 0, it follows that αi ∈ [0, 1/m] and βi ∈ [0, 1/m] for i = 1, . . . , m. Also, from the relation ∑_{i=1}^m αi − δ = ν it follows that ∑_{i=1}^m αi ≥ ν.

The KKT complementarity conditions are: αi(yi(w · xi + w0) − ρ + ξi) = 0, βi ξi = 0 and δρ = 0.

Substituting w = ∑_{i=1}^m αi yi xi into the expression of L_P and then making the various possible simplifications, it follows that the dual form corresponding to problem (P′′) is:

max_α ( −(1/2) ∑_{i,j} αi αj yi yj xi · xj )
s.t.  0 ≤ αi ≤ 1/m for i = 1, . . . , m
      ∑_{i=1}^m αi yi = 0
      ∑_{i=1}^m αi ≥ ν.                               (D′′)

Note that

− the Lagrange multipliers βi and δ do not appear in (D′′);

− the objective function of the dual problem (D′′) does not contain the term ∑_{i=1}^m αi, which was present in the objective function of the dual problem both for the hard-margin SVM and for the C-SVM;

− instead, the constraints contain the additional condition ∑_{i=1}^m αi ≥ ν.

95.

b. Establish the relations between w, w0, ρ — by which we denote the solutions of problem (P′′) — and the solutions of the dual problem derived at part a.

Concerning the link between the solutions of the primal form (P′′) and those of the dual form (D′′), we first have w = ∑_{i=1}^m αi yi xi. Then, from the KKT complementarity conditions we deduce that if there exists an αi such that 0 < αi < 1/m, then βi > 0 and hence ξi = 0. Also, αi > 0 implies yi(w · xi + w0) − ρ + ξi = 0, whence yi(w · xi + w0) = ρ.

Using this last relation, the optimal values w0 and ρ are determined as follows: if there exist a positive instance x+ and a negative instance x− whose corresponding Lagrange multipliers lie in the interval (0, 1/m),^a then w · x+ + w0 = ρ and w · x− + w0 = −ρ. These last two equations form a system from which w0 and ρ are obtained immediately:

w0 = −(1/2) w · (x+ + x−)        ρ = (1/2) w · (x+ − x−)

Remark: In the article on ν-SVM cited in the statement, the authors extend this procedure for computing the values w0 and ρ to several pairs of positive and, respectively, negative instances, in order to obtain a more robust result.

^a Note that if one of the two instances exists, then the other one necessarily exists too, because ∑_i αi yi = 0.

96.

c. What is the classification rule for an arbitrary test instance x′ in this model?

Given a new (test) instance x′, it will be classified according to the expression

sign( ∑_{i=1}^m αi yi xi · x′ + w0 ).

Remark:

At MIT, 2004 fall, Tommi Jaakkola, HW3, pr. 3.3, the correlation between different (increasing) values of ν and the training and test errors obtained by ν-SVM (using RBF with σ = 1) is illustrated on a [LC: some] data set:

ν                0.01    0.1     0.3     0.5     0.7
training error   0       0.01    0.05    0.07    0.18
test error       0.013   0.014   0.073   0.105   0.195

which shows that the parameter ν indeed has the desired effect, in the sense that the number of training errors [in general] increases with the value of the parameter ν.
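A hedged usage sketch (assuming scikit-learn, whose NuSVC solves the ν-SVM dual; the data set below is synthetic, for illustration only): increasing nu typically increases the training error and the number of support vectors.

    import numpy as np
    from sklearn.svm import NuSVC

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-1, 1, size=(100, 2)), rng.normal(1, 1, size=(100, 2))])
    y = np.array([-1] * 100 + [1] * 100)

    for nu in (0.01, 0.1, 0.3, 0.5, 0.7):
        clf = NuSVC(nu=nu, kernel="rbf", gamma=0.5).fit(X, y)   # gamma = 1/(2 sigma^2), sigma = 1
        err = np.mean(clf.predict(X) != y)
        print(f"nu = {nu}: training error = {err:.3f}, #support vectors = {len(clf.support_)}")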

97.


The one-class SVM (Max Margin) with soft margin

solved using [ the approach/idea from ] ν−SVM

MIT, 2009 fall, Tommi Jaakkola, Lecture notes 5

[ adapted by Liviu Ciortuz ]

98.

Here we continue working on the one-class SVM optimization problem, the Max Margin variant (see Stanford, 2007 fall, Andrew Ng, practice midterm exam, pr. 4).

First, as far as the hard-margin version is concerned, compared to the primal form considered in the problem mentioned above, we will now work with a more general version:^a

min_{w, ρ} ( (1/2)‖w‖² − ρ )
s.t.  w · xi ≥ ρ, for i = 1, . . . , m.

This form is more convenient for the goal we set ourselves here, namely to work out the soft-margin case of the one-class SVM problem (Max Margin variant) following the ν-SVM approach.

Recall that ν-SVM uses a numerical parameter (ν) that will act as an upper bound on the proportion of training errors out of the total number of training instances.

^a The distance from the optimal separating hyperplane to the support vectors that produce no margin errors will be ρ/‖w‖.
From the point of view of the geometric 'margin', we would like ρ to be maximized [as well]. This is equivalent to minimizing −ρ. This (intuitively) justifies the expression of the objective function in the optimization problem we are about to formulate.

99.

The primal form we will consider here for the one-class ν-SVM problem (Max Margin variant) is also slightly changed with respect to the original formulation of the ν-SVM problem (B. Schölkopf, A. Smola, R. Williamson and P. Bartlett, New Support Vector Algorithms, 2000):

min_{w, ξ, ρ} ( (1/2)‖w‖² − ρ + (1/(νm)) ∑_{i=1}^m ξi )
s.t.  w · xi ≥ ρ − ξi, for i = 1, . . . , m
      ξi ≥ 0 for i = 1, . . . , m.                    (P′′′)

Note that in both optimization problems given above no constraints were imposed on the variable ρ.

a. Prove that the dual form corresponding to the primal form (P′′′) of the one-class ν-SVM problem (Max Margin) is the following:

max_α ( −(1/2) ∑_{i,j} αi αj xi · xj )
s.t.  0 ≤ αi ≤ 1/(νm) for i = 1, . . . , m
      ∑_{i=1}^m αi = 1.                               (D′′′)

100.

The expression of the generalized Lagrangian is:

L_P(w, ξ, ρ, α, β) = (1/2)‖w‖² − ρ + (1/(νm)) ∑_i ξi − ∑_i αi (w · xi − ρ + ξi) − ∑_i βi ξi.

Computing the partial derivatives of L_P with respect to the primal variables w, ξ and ρ respectively, and then setting them equal to 0, we have:

∂_w L_P(w, ξ, ρ, α, β) = 0 ⇒ w − ∑_i αi xi = 0 ⇒ w = ∑_i αi xi

∂_{ξi} L_P(w, ξ, ρ, α, β) = 0 ⇒ 1/(νm) − αi − βi = 0 ⇒ αi + βi = 1/(νm)

Remark: Since αi ≥ 0 and βi ≥ 0, it follows that

αi ∈ [0, 1/(νm)] and βi ∈ [0, 1/(νm)] for i = 1, . . . , m   (⋆)

∂_ρ L_P(w, ξ, ρ, α, β) = 0 ⇒ −1 + ∑_i αi = 0 ⇒ ∑_i αi = 1.   (⋆⋆)

101.

Using these relations, the expression of the Lagrangian above becomes:

(1/2) ∑_i ∑_j αi αj xi · xj − ρ + (1/(νm)) ∑_i ξi − ∑_i αi ( ∑_j αj xj ) · xi + ρ ∑_i αi [= 1] − ∑_i (αi + βi) [= 1/(νm)] ξi

= −(1/2) ∑_i ∑_j αi αj xi · xj =: L_D(α)

Consequently, taking into account the constraints (⋆) and (⋆⋆), the dual of the one-class ν-SVM problem (Max Margin) has exactly the form (D′′′) indicated in the statement.

102.

b. Write the KKT complementarity conditions for the primal problem (P′′′), then show how ρ — the value obtained for the variable ρ when solving the dual problem (D′′′) — can be computed in terms of the values of the other variables (w, αi, ξi, etc.).

Credit: Tommi Jaakkola, MIT, ML course, 2009 fall, lecture notes 5.

103.

The KKT complementarity conditions for the primal problem (P′′′) are:

αi (w · xi − ρ + ξi) = 0 and βi ξi = 0 for i = 1, . . . , m.

The values obtained for the Lagrange multipliers αi when solving problem (D′′′) lead to the following cases:

• αi ∈ (0, 1/(νm)) ⇒ αi > 0 and βi > 0 ⇒(KKT) w · xi − ρ + ξi = 0 and ξi = 0 ⇒ w · xi = ρ

• αi = 0 ⇒ βi = 1/(νm) ⇒(KKT) ξi = 0 ⇒ w · xi ≥ ρ

• αi = 1/(νm) ⇒ βi = 0 ⇒(KKT) ξi ≥ 0 and w · xi ≥ ρ − ξi

Page 106: Support Vector Machines (SVMs)ciortuz/ML.ex-book/SLIDES/ML... · 2020-02-20 · CMU, 2016 fall, N. Balcan, M. Gormley, HW4, ... the i-th column xi are the features of the i-th training

Asadar,

− daca ın urma rezolvarii problemei duale (D′′′) exista un αi ∈(

0,1

νm

)

, atunci

valoarea lui ρ va fi w · xi;

− altfel, adica daca pentru orice i ∈ {1, . . . ,m} avem fie αi =1

νmfie αi = 0,

atunci putem alege pentru ρ valoarea valuarea maxima a lui ρ astfel ıncat∑

{i : αi=1/(νm)}(ρ− w · xi) ≤ νmρ.

Observatie (1):

Este imposibil ca αi = 0 pentru toti i = 1, . . . ,m, ıntrucat∑m

i=1 αi = 1.a Nici

cazul αi =1

νmpentru toti i = 1, . . . ,m nu este posibil ıntrucat aceasta ar

implica ν = 1 (tot datorita relatiei∑m

i=1 αi = 1 care a fost folosita si mai sus),ceea ce contravine faptului ca pentru ν−SVM avem ıntotdeauna ν ∈ (0, 1).

aLC: Multumesc lui Sebastian Ciobanu pentru aceasta observatie.

105.

Remark (2): Prof. Tommi Jaakkola from MIT showed that problem (P′′′) is equivalent to the following problem (without constraints, but using the hinge cost function):

min_{w, ρ} (1/2)‖w‖² − ρ + (1/(νm)) ∑_{i=1}^m max{0, ρ − w · xi}

and, consequently, the solution ρ can be chosen as the largest value of ρ such that

∑_j max{0, ρ − w · xj} = ∑_j max{0, ρ − ∑_i αi xi · xj} ≤ νmρ.

Credit: Tommi Jaakkola, MIT, ML course, 2009 fall, lecture notes 5.
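A hedged usage sketch (assuming scikit-learn, whose OneClassSVM implements the ν-parameterized one-class SVM with an RBF kernel; synthetic data for illustration): nu upper-bounds the fraction of training points treated as outliers and lower-bounds the fraction of support vectors.

    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))                      # data regarded as 'normal'

    oc = OneClassSVM(nu=0.1, kernel="rbf", gamma=0.5).fit(X)
    print("fraction flagged as outliers:", np.mean(oc.predict(X) == -1))   # roughly <= nu
    print("fraction of support vectors: ", len(oc.support_) / len(X))      # roughly >= nu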

106.


The Minimum Enclosing Ball (MEB) problem

solved using ν−SVM

MIT, 2009 fall, Tommi Jaakkola, Lecture notes 5

[ adapted by Liviu Ciortuz ]

107.

Given the instances x1, . . . , xm ∈ R^d, we want to find a sphere that includes all the points xi and has the smallest possible radius. To this end, we formulate the following convex optimization problem:

min_{R, w} R²   such that   ‖w − xi‖² ≤ R² for i = 1, . . . , m.

Note that the constraints in the formulation of this problem are quadratic (unlike the classical SVM problem, in which the constraints are linear). At the end of solving this problem,

− the value found for w will represent the center of the sphere;

− the support vectors will be the points xi on the surface of the sphere; the constraints corresponding to them will be satisfied with equality (‖w − xi‖² = R²).

As in the case of the ν-SVM problem, we can impose the condition that at most νm points (out of the m in total) be left outside the sphere. Correspondingly, the ν-SVM-type convex optimization problem will be:

min_{R, w, ξ} ( R² + (1/(νm)) ∑_{i=1}^m ξi )
such that  ‖w − xi‖² ≤ R² + ξi for i = 1, . . . , m
           ξi ≥ 0 for i = 1, . . . , m.               (Piv)

108.

Remark

This is a different variant of the one-class problem from the Max Margin variant.

Tommi Jaakkola (MIT, 2009 fall, HW2, pr. 2.a) showed that, in a feature space in which ‖φ(xi)‖ = c for i = 1, . . . , m, where c is an arbitrary constant, the two variants of the one-class problem are equivalent.

Note that for the RBF kernel this condition is fulfilled, since ‖φ(x)‖² = K(x, x) = exp( −‖x − x‖²/(2σ²) ) = 1, for every instance x.

109.

a. Prove that the dual form of the ν-SVM-type problem above is:

max_α ( −∑_i ∑_j αi αj xi · xj + ∑_{i=1}^m αi xi · xi )
s.t.  0 ≤ αi ≤ 1/(νm) for i = 1, . . . , m
      ∑_{i=1}^m αi = 1                                (Div)

where αi is the Lagrange multiplier corresponding to constraint i ∈ {1, . . . , m}.

The generalized Lagrangian for the ν-SVM-type problem given in the statement in primal form (Piv) is:

L_P(R, w, ξ, α, β) = R² + (1/(νm)) ∑_{i=1}^m ξi + ∑_{i=1}^m αi [ (w − xi)² − R² − ξi ] − ∑_{i=1}^m βi ξi,

where βi are the Lagrange multipliers corresponding to the constraints ξi ≥ 0.

110.

The KKT optimality conditions for L_P are obtained by setting its partial derivatives with respect to R, w and ξi, respectively, equal to 0:

∂_R L_P(R, w, ξ, α, β) = 0 ⇒ 2R − 2R ∑_{i=1}^m αi = 0 ⇒ ∑_{i=1}^m αi = 1

∂_w L_P(R, w, ξ, α, β) = 0 ⇒ 2 ∑_{i=1}^m αi (w − xi) = 0 ⇒ w = (1 / ∑_{i=1}^m αi) ∑_{i=1}^m αi xi = ∑_{i=1}^m αi xi

∂_{ξi} L_P(R, w, ξ, α, β) = 0 ⇒ 1/(νm) − αi − βi = 0 ⇒ αi + βi = 1/(νm).

From the last relation obtained above, taking into account that αi ≥ 0 and βi ≥ 0, it follows that αi ∈ [0, 1/(νm)] and βi ∈ [0, 1/(νm)] for i = 1, . . . , m.

111.

Replacing w by ∑_{i=1}^m αi xi in the expression of L_P and taking into account the other two equalities derived above, we have:

R² + (1/(νm)) ∑_{i=1}^m ξi + ∑_{i=1}^m αi [ (w − xi)² − R² − ξi ] − ∑_{i=1}^m βi ξi

= R² + (1/(νm)) ∑_{i=1}^m ξi + w² ∑_{i=1}^m αi [= 1] − 2w · ∑_{i=1}^m αi xi [= w] + ∑_{i=1}^m αi xi² − R² ∑_{i=1}^m αi [= 1] − ∑_{i=1}^m (αi + βi) [= 1/(νm)] ξi

= −w² + ∑_{i=1}^m αi xi² = −∑_{i=1}^m ∑_{j=1}^m αi αj xi · xj + ∑_{i=1}^m αi xi²  =: L_D(α)

In conclusion, we recover exactly the dual form of the ν-SVM problem given in the statement.

112.

b. Indicate the relation between the solution w of the primal problem and the solution α of the dual problem. (Note the geometric meaning of the result!)

According to the result from part a, we have w = ∑_{i=1}^m αi xi. This means that the center of the minimum enclosing ball is a weighted average of the instances xi, the weights being precisely the Lagrange multipliers αi. (A simple numerical sketch for the hard-margin case is given below.)

c. How can the optimal value R (for the variable R in the primal form (Piv)) be computed starting from the solution of the dual problem?

Hint: To this end, it is useful to state the KKT complementarity conditions for the ν-SVM-type problem (Div) above.

113.

The KKT complementarity conditions are: αi [ (w − xi)² − R² − ξi ] = 0 and βi ξi = 0, for i = 1, . . . , m.

The solution for R is obtained from these two conditions, namely:

− if there exists an i such that αi > 0, then the first KKT complementarity condition yields R² = (w − xi)² − ξi;

− if, in addition, αi < 1/(νm), then the equality αi + βi = 1/(νm) yields βi > 0; further, by the second KKT complementarity condition, ξi = 0. Therefore, R² = (w − xi)².

Remarks:

1. Obviously, in the solution of the dual problem there will be at least one αi > 0, since ∑_i αi = 1 and αi ≥ 0 for i = 1, . . . , m.

2. If in the solution of the dual problem there is no αi such that 0 < αi < 1/(νm), it means that every αi ≠ 0 satisfies αi = 1/(νm). Then, by the relation αi + βi = 1/(νm), every αi ≠ 0 implies βi = 0, and hence possibly ξi > 0. In this situation, we can take R² = max_{i : αi = 0} (w − xi)².

114.


Linear kernel, ν = 0 Linear kernel, ν = 0.1

115.


Quadratic kernel, ν = 0 Quadratic kernel, ν = 0.1

116.


Gaussian kernel, ν = 0

Gaussian kernel, ν = 0.1

117.


SVR — Support Vector Regression,

the hard margin version

Stanford, 2014 fall, Andrew Ng, midterm, pr. 4

118.

Until now, we saw how the SVM can be used for classification. In this problem, we will develop a modified algorithm, called the Support Vector Regression (SVR) algorithm, which can instead be used for regression, with continuous valued labels y ∈ R.

Suppose we are given a training set {(x1, y1), . . . , (xm, ym)}, where xi ∈ R^{n+1} and yi ∈ R. We would like to find a hypothesis of the form h_{w,b}(x) = w · x + b with a small value of ‖w‖. Our (convex) optimization problem is:

min_{w,b} (1/2)‖w‖²
s.t.  yi − (w · xi + b) ≤ ε   (i = 1, . . . , m)      (22)
      (w · xi + b) − yi ≤ ε   (i = 1, . . . , m),     (23)

where ε > 0 is a given, fixed value. Notice how the original functional margin constraint has been modified to now represent the distance between the continuous y and our hypothesis’ output.

119.

a. Write down the Lagrangian for the optimization problem above. We suggest that you use two sets of Lagrange multipliers αi and α∗i, corresponding to the two inequality constraints (labeled (22) and (23) above), so that the Lagrangian would be written L(w, b, α, α∗).

Let αi, α∗i ≥ 0 (i = 1, . . . , m) be the Lagrange multipliers for (22) and (23), respectively. Then, the Lagrangian can be written as:

L(w, b, α, α∗) = (1/2)‖w‖² + ∑_{i=1}^m αi (yi − w · xi − b − ε) + ∑_{i=1}^m α∗i (−yi + w · xi + b − ε).

120.

b. Derive the dual optimization problem. You will have to take derivatives of the Lagrangian with respect to w and b.

First, the dual objective function can be written as:

L_D(α, α∗) = min_{w,b} L(w, b, α, α∗).

Now, taking the derivatives of the Lagrangian with respect to all primal variables, and equating them to 0, we have:

∂L/∂w = w − ∑_{i=1}^m (αi − α∗i) xi = 0  ⇒  w = ∑_{i=1}^m (αi − α∗i) xi

∂L/∂b = ∑_{i=1}^m (α∗i − αi) = 0.

121.

Substituting the above two relations back into L(w, b, α, α∗), we have:

L_D(α, α∗) = (1/2)‖w‖² + ∑_{i=1}^m αi (yi − w · xi − b − ε) + ∑_{i=1}^m α∗i (−yi + w · xi + b − ε)

= (1/2)‖w‖² − ε ∑_{i=1}^m (αi + α∗i) + ∑_{i=1}^m yi (αi − α∗i) − ∑_{i=1}^m (αi − α∗i) w · xi − b ∑_{i=1}^m (αi − α∗i) [= 0]

= (1/2)‖ ∑_{i=1}^m (αi − α∗i) xi ‖² − ε ∑_{i=1}^m (αi + α∗i) + ∑_{i=1}^m yi (αi − α∗i) − ∑_{i=1}^m (αi − α∗i) ∑_{j=1}^m (αj − α∗j) xj · xi

= −(1/2) ∑_{i=1}^m ∑_{j=1}^m (αi − α∗i)(αj − α∗j) xi · xj − ε ∑_{i=1}^m (αi + α∗i) + ∑_{i=1}^m yi (αi − α∗i).

122.

Now the dual problem can be formulated as:

max_{α, α∗} ( −(1/2) ∑_{i=1}^m ∑_{j=1}^m (αi − α∗i)(αj − α∗j) xi · xj − ε ∑_{i=1}^m (αi + α∗i) + ∑_{i=1}^m yi (αi − α∗i) )
s.t.  ∑_{i=1}^m (αi − α∗i) = 0
      αi, α∗i ≥ 0   (i = 1, . . . , m).
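A hedged sketch (assuming numpy only; the function and variable names are illustrative, not part of the original solution) of how this dual can be assembled as a standard QP in the stacked variable z = [α; α∗], which an off-the-shelf QP solver could then handle:

    import numpy as np

    def svr_dual_qp_matrices(X, y, eps):
        # maximize  -1/2 z^T P z + q^T z   s.t.  a^T z = 0,  z >= 0,  with z = [alpha; alpha*]
        K = X @ X.T                                   # linear-kernel Gram matrix x_i . x_j
        P = np.block([[K, -K], [-K, K]])              # since alpha - alpha* = [I, -I] z
        q = np.concatenate([y - eps, -y - eps])       # linear term: y_i(a_i - a*_i) - eps(a_i + a*_i)
        a = np.concatenate([np.ones(len(y)), -np.ones(len(y))])   # encodes sum_i (a_i - a*_i) = 0
        return P, q, a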

c. Show that this algorithm can be kernelized. For this, you have to show that (i) the dual optimization objective can be written in terms of inner products of training examples, and (ii) at test time, given a new x, the hypothesis h_{w,b}(x) can also be computed in terms of inner products.

This algorithm can be kernelized because when making a prediction at x, we have:

f(w, x) = w · x + b = ∑_{i=1}^m (αi − α∗i) xi · x + b = ∑_{i=1}^m (αi − α∗i) k(xi, x) + b.

This shows that the prediction function can be written in kernel form.

123.

SVR — Support Vector Regression,
using the ε-sensitive loss function

CMU, 2014 spring, B. Poczos, A. Singh, HW2, pr. 1

CMU, 2015 fall, Z. Bar-Joseph, E. Xing, HW3, pr. 2.1-4

124.

In this exercise you have to derive the dual form of the support vector regression (SVR) optimization problem, using the epsilon sensitive loss function

Lε(x, y, f) = |y − f(x)|ε := max(0, |y − f(x)| − ε).        (24)

Here x is the input, y is the output, and f is the function used for predicting the label.

Your training data is (x1, y1), . . . , (xn, yn), where xi ∈ R^m, yi ∈ R.

[Note that the hinge loss that we used at CMU, 2008 fall, Eric Xing, HW2, pr. 1.2 is only designed for classification; we cannot use that for regression.]

Using this notation, the SVR objective function is defined as

(1/2)‖w‖² + C ∑_{i=1}^n Lε(xi, yi, f),

where f(x) = w · x, and C, ε > 0 are parameters.

125.

a. Introduce appropriate slack variables, and rewrite this problem as a quadratic problem (i.e. quadratic objective with linear constraints). This form is called the primal form of the SVR optimization problem.

In the above loss function, ε defines the region inside which errors are ignored. Note that the ε-sensitive loss function (24) is non-differentiable due to the absolute value in the loss function. We can introduce slack variables ξ and ξ∗ to account for errors in points that lie outside the tube (similarly to the way slack variables are used in classification).^a

yi − w · xi − ε ≤ ξi                                        (25)
w · xi − yi − ε ≤ ξ∗i                                       (26)
ξi, ξ∗i ≥ 0 for i = 1, . . . , n                            (27)

Thus, we can rewrite the primal form as

min_{w ∈ R^m, ξ ∈ R^n, ξ∗ ∈ R^n} ( (1/2)‖w‖² + C ∑_{i=1}^n (ξi + ξ∗i) )

s.t. equations (25)–(27) are satisfied.

^a Note that the relationships (25) and (26) are equivalent to the following ones:

w · xi ≥ yi − ε − ξi
w · xi ≤ yi + ε + ξ∗i

126.

b. Write down the Lagrangian function for the above primal form.

Having the above objective and constraints, the Lagrangian function can be written as follows:

L := L(w, ξ, ξ∗, αi, α∗i, βi, β∗i)                          (28)

  = (1/2)‖w‖² + C ∑_{i=1}^n (ξi + ξ∗i) − ∑_{i=1}^n (βi ξi + β∗i ξ∗i)
    − ∑_{i=1}^n αi (ε + ξi − yi + w · xi) − ∑_{i=1}^n α∗i (ε + ξ∗i + yi − w · xi),

where the Lagrange multipliers have to satisfy the positivity constraints

αi, α∗i, βi, β∗i ≥ 0, for i = 1, . . . , n.

127.

c. Using the Karush-Kuhn-Tucker conditions, derive the dual form.

We need to solve the following min-max problem:

(w, ξ, ξ∗, α, α∗, β, β∗) = arg min_{w, ξ, ξ∗} max_{α, α∗, β, β∗} L(w, ξ, ξ∗, α, α∗, β, β∗)
                         = arg max_{α, α∗, β, β∗} min_{w, ξ, ξ∗} L(w, ξ, ξ∗, α, α∗, β, β∗)

[The max and min can be switched because the so-called strong duality holds for SVR problems similarly to SVM problems (see CMU, 2010 fall, Ziv Bar-Joseph, HW4, pr. 1.3-5 and CMU, 2012 spring, Ziv Bar-Joseph, HW3, pr. 3.2).]

Taking the derivative of L w.r.t. the primal variables w, ξi and ξ∗i, and then equating them to 0, we get

∂L/∂w = w − ∑_{i=1}^n (αi − α∗i) xi = 0

∂L/∂ξi = C − αi − βi = 0   and   ∂L/∂ξ∗i = C − α∗i − β∗i = 0, for i = 1, . . . , n.

From the last two equations we have that

0 ≤ βi = C − αi   and   0 ≤ β∗i = C − α∗i   for i = 1, . . . , n.

128.

Substituting the results back into the Lagrangian (28), we get

L_D (by (28)) = (1/2)‖w‖² + C ∑_{i=1}^n (ξi + ξ∗i) − ∑_{i=1}^n (βi ξi + β∗i ξ∗i)
                − ∑_{i=1}^n αi (ε + ξi − yi + w · xi) − ∑_{i=1}^n α∗i (ε + ξ∗i + yi − w · xi)

= (1/2)‖ ∑_{i=1}^n (αi − α∗i) xi ‖² + ∑_{i=1}^n ξi (C − βi − αi) [= 0] + ∑_{i=1}^n ξ∗i (C − β∗i − α∗i) [= 0]
  − ε ∑_{i=1}^n (αi + α∗i) + ∑_{i=1}^n yi (αi − α∗i) − ∑_{i=1}^n (αi − α∗i) w · xi   [with w · xi = ( ∑_{j=1}^n (αj − α∗j) xj ) · xi]

= −(1/2) ∑_{i=1}^n ∑_{j=1}^n (αi − α∗i)(αj − α∗j) xi · xj − ε ∑_{i=1}^n (αi + α∗i) + ∑_{i=1}^n yi (αi − α∗i)

129.

Therefore, the dual problem is

max_{α, α∗} ( −(1/2) ∑_{i=1}^n ∑_{j=1}^n (αi − α∗i)(αj − α∗j) xi · xj − ε ∑_{i=1}^n (αi + α∗i) + ∑_{i=1}^n yi (αi − α∗i) )
s.t.  αi, α∗i ∈ [0, C] for i = 1, . . . , n.

[Note that if you use w · x + b instead of w · x, then you have an extra constraint: ∑_{i=1}^n (αi − α∗i) = 0.]

130.

d. Can we use quadratic optimization solvers to solve the dual problem?

The problem has a quadratic objective with linear constraints, therefore it can be solved by a Quadratic Programming solver.

e. How would you define support vectors in this problem?

The KKT complementary slackness conditions are as follows. In the optimal solutions of the primal and dual problems above we have that

αi (ε + ξi − yi + w · xi) = 0                               (29)
α∗i (ε + ξ∗i + yi − w · xi) = 0
βi ξi = 0
β∗i ξ∗i = 0

for all i = 1, . . . , n.

Equation (29) implies that if αi > 0, then ε + ξi − yi + w · xi = 0.

Now, if ξi = 0 (which implies βi ∈ [0, C] and therefore αi ∈ [0, C] too), then it means that xi is on the border of the ε-tube, therefore xi is a margin support vector. If ξi > 0 (which implies βi = 0 and therefore αi = C), then it means that we are outside of the ε-tube. These xi vectors are the non-margin support vectors. Similar reasoning holds for ξ∗i and α∗i.

131.

f. Write down the equation that can be used for predicting the label of an unseen sample x.

Since for prediction we use f(x) = w · x, and w = ∑_{i=1}^n (αi − α∗i) xi, therefore

f(x) = ∑_{i=1}^n (αi − α∗i) xi · x.

g. Is it possible to kernelize this algorithm?

Yes, we can write the above equation in kernel form:

f(x) = ∑_{i=1}^n (αi − α∗i) k(xi, x).

h. Give one reason why we usually solve the dual problem of SVR and SVM instead of the primal.

Because we can introduce kernel functions here [LC: so we can (hopefully) get better hypotheses in the feature space, and also because kernel values can be efficiently computed].

132.

i. What happens if we change ε?

ε plays the opposite role of C. The smaller the value of ε, the harder SVR tries to fit even small errors around the learned regression function, which leads to a more complex model. A smaller ε also leads to a less sparse solution (more support vectors).
Small ε – more complex model: low bias, high variance.
Large ε – less complex model: high bias, low variance.

j. What happens if we change C?

C plays a similar role as it did in classification. It is a measure of how strongly we penalize errors, and it should be tuned for the bias vs. variance trade-off with model selection. The higher the value of C, the larger the tendency of the SVM to penalize errors and overfit the data. The lower the value of C, the larger its tendency to ignore errors and underfit the data.
Large C – more complex model: low bias, high variance.
Small C – less complex model: high bias, low variance.
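A hedged usage sketch (assuming scikit-learn, whose SVR implements this ε-insensitive formulation; synthetic data for illustration): increasing epsilon widens the tube and reduces the number of support vectors, while C controls how strongly errors outside the tube are penalized.

    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    X = np.sort(rng.uniform(-3, 3, size=(80, 1)), axis=0)
    y = np.sin(X).ravel() + 0.1 * rng.normal(size=80)

    for eps in (0.01, 0.1, 0.5):
        for C in (0.1, 10.0):
            model = SVR(kernel="rbf", C=C, epsilon=eps).fit(X, y)
            print(f"epsilon = {eps}, C = {C}: #support vectors = {len(model.support_)}")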

133.


Exemplification

Credit: CMU, 2014 fall, Eric Xing, Barnabas Poczos, HW2, pr. 1.2.4

134.