
CS260: Machine Learning Algorithms
Lecture 3: Optimization

Cho-Jui Hsieh
UCLA

Jan 14, 2019


Optimization

Goal: find the minimizer of a function

min_w f(w)

For now we assume f is twice differentiable

Machine learning algorithm: find the hypothesis that minimizes the training error


Convex function

A function f : R^n → R is convex

⇔ f lies below any line segment between two points on its graph:

∀x1, x2, ∀t ∈ [0, 1]: f(t x1 + (1−t) x2) ≤ t f(x1) + (1−t) f(x2)


Strictly convex: f(t x1 + (1−t) x2) < t f(x1) + (1−t) f(x2) for all x1 ≠ x2 and t ∈ (0, 1)


Convex function

Another equivalent definition for differentiable functions:

f is convex if and only if f(x) ≥ f(x0) + ∇f(x0)^T (x − x0), ∀x, x0


Convex function

Convex function:
  (for differentiable f) ∇f(w*) = 0 ⇔ w* is a global minimum
  If f is twice differentiable ⇒ f is convex if and only if ∇²f(w) is positive semi-definite
  Example: linear regression, logistic regression, ···
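
As a quick numerical sanity check (not from the slides; the random data and weights below are assumptions for illustration), the Hessian of the average logistic loss, (1/N) Σ_n σ_n(1−σ_n) x_n x_n^T, is positive semi-definite, consistent with convexity:

import numpy as np

rng = np.random.default_rng(0)
N, d = 200, 5
X = rng.normal(size=(N, d))          # toy features
w = rng.normal(size=d)               # arbitrary weight vector

s = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid(w^T x_n); these Hessian weights do not depend on y_n
D = s * (1.0 - s)                    # per-example curvature, each in (0, 1/4]
H = (X.T * D) @ X / N                # Hessian of the average logistic loss at w

print(np.linalg.eigvalsh(H).min())   # >= 0 up to floating-point error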


Convex function

Strictly convex function:
  ∇f(w*) = 0 ⇔ w* is the unique global minimum
  (most algorithms only converge to points with gradient = 0)
  Example: linear regression when X^T X is invertible


Convex vs Nonconvex

Convex function:
  ∇f(w*) = 0 ⇔ w* is a global minimum
  Example: linear regression, logistic regression, ···

Non-convex function:
  ∇f(w*) = 0 ⇔ w* is a global minimum, a local minimum, or a saddle point (also called stationary points)
  Most algorithms only converge to stationary points
  Example: neural networks, ···


Gradient descent


Gradient Descent

Gradient descent: repeatedly do

w^{t+1} ← w^t − α ∇f(w^t)

where α > 0 is the step size

Generates the sequence w^1, w^2, ···, which converges to stationary points (lim_{t→∞} ‖∇f(w^t)‖ = 0)

Step size too large ⇒ divergence; too small ⇒ slow convergence
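
A minimal sketch of this update rule in NumPy (the toy objective and step size are illustrative assumptions, not from the slides):

import numpy as np

def gradient_descent(grad_f, w0, alpha=0.1, num_iters=200):
    # Repeatedly apply w^{t+1} <- w^t - alpha * grad f(w^t)
    w = np.asarray(w0, dtype=float)
    for _ in range(num_iters):
        w = w - alpha * grad_f(w)
    return w

# Example: f(w) = 0.5 * ||w||^2 has grad f(w) = w and minimizer w = 0
print(gradient_descent(lambda w: w, w0=[3.0, -2.0]))   # close to [0, 0]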



Why gradient descent?

Successive approximation view

At each iteration, form an approximation function of f(·):

f(w^t + d) ≈ g(d) := f(w^t) + ∇f(w^t)^T d + (1/(2α)) ‖d‖²

Update the solution by w^{t+1} ← w^t + d*, where d* = argmin_d g(d)

∇g(d*) = 0 ⇒ ∇f(w^t) + (1/α) d* = 0 ⇒ d* = −α ∇f(w^t)

d* will decrease f(·) if α (the step size) is sufficiently small
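
A small check (assumed quadratic f, not from the slides) that d* = −α∇f(w^t) really minimizes the local model g:

import numpy as np

# Assumed toy objective: f(w) = 0.5 * w^T A w - b^T w, so grad f(w) = A w - b
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
grad_f = lambda w: A @ w - b

w_t = np.array([2.0, 2.0])
alpha = 0.1
g = lambda d: grad_f(w_t) @ d + (1.0 / (2 * alpha)) * (d @ d)   # g(d) minus the constant f(w^t)

d_star = -alpha * grad_f(w_t)                                   # closed-form minimizer
rng = np.random.default_rng(0)
nearby = min(g(d_star + 0.01 * rng.normal(size=2)) for _ in range(1000))
print(g(d_star) <= nearby)                                      # True: no nearby d does better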


Illustration of gradient descent

Form a quadratic approximation:

f(w^t + d) ≈ g(d) = f(w^t) + ∇f(w^t)^T d + (1/(2α)) ‖d‖²

Minimize g(d):

∇g(d*) = 0 ⇒ ∇f(w^t) + (1/α) d* = 0 ⇒ d* = −α ∇f(w^t)

Update: w^{t+1} = w^t + d* = w^t − α ∇f(w^t)

Form another quadratic approximation around w^{t+1}:

f(w^{t+1} + d) ≈ g(d) = f(w^{t+1}) + ∇f(w^{t+1})^T d + (1/(2α)) ‖d‖², giving d* = −α ∇f(w^{t+1})

Update: w^{t+2} = w^{t+1} + d* = w^{t+1} − α ∇f(w^{t+1})

When will it diverge?

Can diverge (f(w^t) < f(w^{t+1})) if g is not an upper bound of f

When will it converge?

Always converges (f(w^t) > f(w^{t+1})) when g is an upper bound of f


Convergence

Let L be the Lipschitz constant of ∇f (i.e., ∇²f(x) ⪯ LI for all x)

Theorem: gradient descent converges if α < 1/L

Why? When α < 1/L, for any d ≠ 0,

g(d) = f(w^t) + ∇f(w^t)^T d + (1/(2α)) ‖d‖²
     > f(w^t) + ∇f(w^t)^T d + (L/2) ‖d‖²
     ≥ f(w^t + d)

So f(w^t + d*) < g(d*) ≤ g(0) = f(w^t)

In a formal proof, one needs to show that f(w^t + d*) is sufficiently smaller than f(w^t)
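
A one-dimensional illustration (a toy quadratic, assumed for the example): for f(w) = (L/2)w², gradient descent with α < 1/L decreases f every step, while α > 2/L makes it diverge.

import numpy as np

L_const = 10.0                       # grad f is L-Lipschitz for f(w) = (L/2) w^2
f = lambda w: 0.5 * L_const * w**2
grad = lambda w: L_const * w

def run(alpha, w=1.0, iters=30):
    for _ in range(iters):
        w = w - alpha * grad(w)      # gradient descent step
    return f(w)

print(run(alpha=0.5 / L_const))      # ~0: converges when alpha < 1/L
print(run(alpha=2.5 / L_const))      # huge: diverges when alpha > 2/L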



Applying to Logistic regression

gradient descent for logistic regression

Initialize the weights w^0
For t = 1, 2, ···
  Compute the gradient: ∇f(w) = −(1/N) Σ_{n=1}^N y_n x_n / (1 + e^{y_n w^T x_n})
  Update the weights: w ← w − α ∇f(w)
Return the final weights w

When to stop?
  Fixed number of iterations, or
  Stop when ‖∇f(w)‖ < ε
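
A NumPy sketch of this loop (data shapes, step size, and the stopping tolerance are assumptions, not specified on the slides):

import numpy as np

def logistic_gd(X, y, alpha=0.1, max_iters=1000, eps=1e-6):
    # X: (N, d) features, y: (N,) labels in {-1, +1}
    N, d = X.shape
    w = np.zeros(d)
    for _ in range(max_iters):
        # grad f(w) = -(1/N) * sum_n y_n x_n / (1 + exp(y_n w^T x_n))
        coef = -y / (1.0 + np.exp(y * (X @ w)))
        g = X.T @ coef / N
        if np.linalg.norm(g) < eps:          # stop when ||grad f(w)|| < eps
            break
        w = w - alpha * g                    # w <- w - alpha * grad f(w)
    return w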



Line Search

In practice, we do not know L ···, so we need to tune the step size when running gradient descent

Line Search: select the step size automatically (for gradient descent)



Line Search

The back-tracking line search:

Start from some large α0

Try α = α0, α0/2, α0/4, ···
Stop when α satisfies some sufficient decrease condition

A simple condition: f(w + αd) < f(w)
(often works in practice but doesn't work in theory)

A (provable) sufficient decrease condition:

f(w + αd) ≤ f(w) + σ α ∇f(w)^T d

for a constant σ ∈ (0, 1)



Line Search

gradient descent with backtracking line search

Initialize the weights w^0
For t = 1, 2, ···
  Compute the gradient: d = −∇f(w)
  For α = α0, α0/2, α0/4, ···
    Break if f(w + αd) ≤ f(w) + σ α ∇f(w)^T d
  Update w ← w + αd

Return the final solution w
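
A sketch of this procedure (α0 = 1 and σ = 0.01 are assumed defaults; the slides leave them unspecified):

import numpy as np

def gd_backtracking(f, grad_f, w0, alpha0=1.0, sigma=0.01, max_iters=100):
    w = np.asarray(w0, dtype=float)
    for _ in range(max_iters):
        g = grad_f(w)
        d = -g                                               # descent direction
        alpha = alpha0
        # Halve alpha until f(w + alpha*d) <= f(w) + sigma * alpha * grad_f(w)^T d
        while f(w + alpha * d) > f(w) + sigma * alpha * (g @ d) and alpha > 1e-12:
            alpha *= 0.5
        w = w + alpha * d
    return w

# Example usage on f(w) = ||w||^2
print(gd_backtracking(lambda w: w @ w, lambda w: 2 * w, w0=[5.0, -3.0]))   # close to [0, 0]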


Stochastic Gradient descent


Large-scale Problems

Machine learning: usually minimizing the training loss

min_w { (1/N) Σ_{n=1}^N ℓ(w^T x_n, y_n) } := f(w)   (linear model)

min_w { (1/N) Σ_{n=1}^N ℓ(h_w(x_n), y_n) } := f(w)   (general hypothesis)

ℓ: loss function (e.g., ℓ(a, b) = (a − b)²)

Gradient descent: w ← w − η ∇f(w), where computing ∇f(w) is the main computation

In general, f(w) = (1/N) Σ_{n=1}^N f_n(w), where each f_n(w) only depends on (x_n, y_n)



Stochastic gradient

Gradient:

∇f(w) = (1/N) Σ_{n=1}^N ∇f_n(w)

Each gradient computation needs to go through all training samples
  slow when there are millions of samples

Faster way to compute an "approximate gradient"?

Use stochastic sampling:
  Sample a small subset B ⊆ {1, ···, N}
  Estimated gradient: ∇f(w) ≈ (1/|B|) Σ_{n∈B} ∇f_n(w)
  |B|: batch size
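
A concrete sketch of the estimate (a toy squared-loss f_n is assumed here just to have something to differentiate):

import numpy as np

rng = np.random.default_rng(0)
N, d = 1000, 10
X = rng.normal(size=(N, d))              # toy data
y = rng.normal(size=N)
w = rng.normal(size=d)

def avg_grad(idx):
    # average of grad f_n(w) over idx, for f_n(w) = 0.5 * (w^T x_n - y_n)^2
    r = X[idx] @ w - y[idx]
    return X[idx].T @ r / len(idx)

full_grad = avg_grad(np.arange(N))               # (1/N) sum_n grad f_n(w): one pass over all data
B = rng.choice(N, size=32, replace=False)        # sample a small batch, |B| = 32
batch_grad = avg_grad(B)                         # (1/|B|) sum_{n in B} grad f_n(w): much cheaper
print(np.linalg.norm(batch_grad - full_grad))    # small but nonzero estimation error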



Stochastic gradient descent

Stochastic Gradient Descent (SGD)

Input: training data {x_n, y_n}_{n=1}^N
Initialize w (zero or random)
For t = 1, 2, ···
  Sample a small batch B ⊆ {1, ···, N}
  Update parameter: w ← w − η_t (1/|B|) Σ_{n∈B} ∇f_n(w)

Extreme case: |B| = 1 ⇒ sample one training example at a time
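
The same pseudocode as a generic NumPy loop (a sketch; the 1/√t step-size schedule is one common choice, assumed here):

import numpy as np

def sgd(avg_grad, w0, N, batch_size=32, eta0=0.1, num_iters=1000, seed=0):
    # avg_grad(w, idx) should return the average of grad f_n(w) over the indices idx
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float)
    for t in range(1, num_iters + 1):
        B = rng.choice(N, size=batch_size, replace=False)   # sample a small batch B
        eta_t = eta0 / np.sqrt(t)                            # decaying step size
        w = w - eta_t * avg_grad(w, B)                       # w <- w - eta_t * (1/|B|) sum grad f_n(w)
    return w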



Logistic Regression by SGD

Logistic regression:

min_w (1/N) Σ_{n=1}^N log(1 + e^{−y_n w^T x_n}), where f_n(w) := log(1 + e^{−y_n w^T x_n})

SGD for Logistic Regression

Input: training data {x_n, y_n}_{n=1}^N
Initialize w (zero or random)
For t = 1, 2, ···
  Sample a batch B ⊆ {1, ···, N}
  Update parameter: w ← w − η_t (1/|B|) Σ_{n∈B} ∇f_n(w), where ∇f_n(w) = −y_n x_n / (1 + e^{y_n w^T x_n})
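
Plugging the per-example gradient above into an SGD loop (a sketch; batch size and step-size schedule are assumptions):

import numpy as np

def logistic_sgd(X, y, batch_size=32, eta0=0.5, num_iters=2000, seed=0):
    # X: (N, d), y: (N,) in {-1, +1}; f_n(w) = log(1 + exp(-y_n w^T x_n))
    N, d = X.shape
    rng = np.random.default_rng(seed)
    w = np.zeros(d)
    for t in range(1, num_iters + 1):
        B = rng.choice(N, size=batch_size, replace=False)
        # grad f_n(w) = -y_n x_n / (1 + exp(y_n w^T x_n)), averaged over the batch
        coef = -y[B] / (1.0 + np.exp(y[B] * (X[B] @ w)))
        g = X[B].T @ coef / batch_size
        w = w - (eta0 / np.sqrt(t)) * g                      # decaying step size eta_t
    return w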


Why SGD works?

Stochastic gradient is an unbiased estimator of the full gradient:

E[ (1/|B|) Σ_{n∈B} ∇f_n(w) ] = (1/N) Σ_{n=1}^N ∇f_n(w) = ∇f(w)

Each iteration is an update by gradient + zero-mean noise
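
A quick numerical illustration of this identity (toy squared-loss f_n assumed): averaging many independent minibatch gradients approaches the full gradient.

import numpy as np

rng = np.random.default_rng(1)
N, d, batch = 500, 8, 16
X = rng.normal(size=(N, d))
y = rng.normal(size=N)
w = rng.normal(size=d)

def avg_grad(idx):
    # average of grad f_n(w) over idx, with f_n(w) = 0.5 * (w^T x_n - y_n)^2
    r = X[idx] @ w - y[idx]
    return X[idx].T @ r / len(idx)

full = avg_grad(np.arange(N))
draws = [avg_grad(rng.choice(N, size=batch, replace=False)) for _ in range(20000)]
print(np.linalg.norm(np.mean(draws, axis=0) - full))   # -> ~0 as the number of draws grows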



Stochastic gradient descent

In gradient descent, η (step size) is a fixed constant

Can we use a fixed step size for SGD?

SGD with a fixed step size cannot converge to global/local minimizers:

If w* is the minimizer, ∇f(w*) = (1/N) Σ_{n=1}^N ∇f_n(w*) = 0,
but (1/|B|) Σ_{n∈B} ∇f_n(w*) ≠ 0 in general when B is a subset

(Even if we reach the minimizer, SGD will move away from it)



Stochastic gradient descent, step size

To make SGD converge:

Step size should decrease to 0: η_t → 0

Usually with a polynomial rate: η_t ≈ t^{−a} for some constant a > 0


Stochastic gradient descent vs Gradient descent

Stochastic gradient descent:

pros: cheaper computation per iteration; faster convergence in the beginning

cons: less stable; slower final convergence; hard to tune the step size

(Figure from https://medium.com/@ImadPhd/gradient-descent-algorithm-and-its-variants-10f652806a3)


Revisit perceptron Learning Algorithm

Given a classification dataset {x_n, y_n}_{n=1}^N

Learning a linear model:

min_w (1/N) Σ_{n=1}^N ℓ(w^T x_n, y_n)

Consider the loss:

ℓ(w^T x_n, y_n) = max(0, −y_n w^T x_n)

What’s the gradient?


Revisit perceptron Learning Algorithm

ℓ(w^T x_n, y_n) = max(0, −y_n w^T x_n)

Consider two cases:

Case I: y_n w^T x_n > 0 (prediction correct)
  ℓ(w^T x_n, y_n) = 0
  ∂/∂w ℓ(w^T x_n, y_n) = 0

Case II: y_n w^T x_n < 0 (prediction wrong)
  ℓ(w^T x_n, y_n) = −y_n w^T x_n
  ∂/∂w ℓ(w^T x_n, y_n) = −y_n x_n

SGD update rule: sample an index n

w^{t+1} ← w^t                  if y_n (w^t)^T x_n ≥ 0 (prediction correct)
w^{t+1} ← w^t + η_t y_n x_n    if y_n (w^t)^T x_n < 0 (prediction wrong)

Equivalent to the Perceptron Learning Algorithm when η_t = 1
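
The update rule as code (a sketch; here the check uses ≤ 0 rather than < 0 so that the all-zero starting point still triggers updates, a small practical tweak to the rule above):

import numpy as np

def perceptron_sgd(X, y, eta=1.0, num_iters=1000, seed=0):
    # SGD with |B| = 1 on the loss max(0, -y_n w^T x_n); eta = 1 recovers the
    # classical Perceptron Learning Algorithm update
    N, d = X.shape
    rng = np.random.default_rng(seed)
    w = np.zeros(d)
    for _ in range(num_iters):
        n = rng.integers(N)                  # sample one training example
        if y[n] * (X[n] @ w) <= 0:           # prediction wrong (or on the boundary)
            w = w + eta * y[n] * X[n]        # w <- w + eta_t * y_n * x_n
        # otherwise keep w unchanged
    return w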



Conclusions

Gradient descent

Stochastic gradient descent

Questions?