Logistic Regression (wcohen/10-601/logreg.pdf) • Learning as optimization • Logistic regression via gradient ascent • Overfitting and regularization • Discriminative vs generative


Page 1: Logistic Regression (wcohen/10-601/logreg.pdf) • Learning as optimization • Logistic regression via gradient ascent • Overfitting and regularization • Discriminative vs generative

Logistic Regression

William Cohen

Page 2:

Outline

•  Quick review
   –  classification, naïve Bayes, perceptrons
   –  new result for naïve Bayes
•  Learning as optimization
•  Logistic regression via gradient ascent
•  Overfitting and regularization
•  Discriminative vs generative classifiers

Page 3:

NAÏVE BAYES IS LINEAR

Page 4:

Question: what does the boundary between positive and negative look like for Naïve Bayes?

Page 5:
Page 6:

Naïve Bayes is Linear

•  Convenient assumptions:
   –  binary X's
   –  binary Y's

Page 7:

$$
\begin{aligned}
\arg\max_y \; \prod_i P(X_i = x_i \mid Y = y)\, P(Y = y)
&= \arg\max_y \; \sum_i \log P(X_i = x_i \mid Y = y) + \log P(Y = y) \\
&= \arg\max_{y \in \{+1,-1\}} \; \sum_i \log P(x_i \mid y) + \log P(y)
&& \text{(two classes only)} \\
&= \operatorname{sign}\Big[ \sum_i \log P(x_i \mid y_{+1}) - \sum_i \log P(x_i \mid y_{-1})
   + \log P(y_{+1}) - \log P(y_{-1}) \Big] \\
&= \operatorname{sign}\Big[ \sum_i \log \frac{P(x_i \mid y_{+1})}{P(x_i \mid y_{-1})}
   + \log \frac{P(y_{+1})}{P(y_{-1})} \Big]
&& \text{(rearrange terms)}
\end{aligned}
$$

Page 8:

$$
\arg\max_y \; \prod_i P(X_i = x_i \mid Y = y)\, P(Y = y)
= \operatorname{sign}\Big[ \sum_i \log \frac{P(x_i \mid y_{+1})}{P(x_i \mid y_{-1})}
  + \log \frac{P(y_{+1})}{P(y_{-1})} \Big]
$$

If each $x_i$ is 1 or 0, the $i$-th ratio is
$\frac{P(X_i = 1 \mid Y = +1)}{P(X_i = 1 \mid Y = -1)}$ if $x_i = 1$, and
$\frac{P(X_i = 0 \mid Y = +1)}{P(X_i = 0 \mid Y = -1)}$ if $x_i = 0$.

Useful identity for binary $x_i$: $x_i(a - b) + b = [a \text{ if } x_i = 1 \text{ else } b]$.

Page 9:

If each $x_i$ is 1 or 0:

$$
\begin{aligned}
\arg\max_y \; \prod_i P(X_i = x_i \mid Y = y)\, P(Y = y)
&= \operatorname{sign}\Big[ \sum_i \log \frac{P(x_i \mid y_{+1})}{P(x_i \mid y_{-1})}
   + \log \frac{P(y_{+1})}{P(y_{-1})} \Big] \\
&= \operatorname{sign}\Big[ \sum_i x_i \Big( \log \frac{P(X_i = 1 \mid y_{+1})}{P(X_i = 1 \mid y_{-1})}
   - \log \frac{P(X_i = 0 \mid y_{+1})}{P(X_i = 0 \mid y_{-1})} \Big)
   + \sum_i \log \frac{P(X_i = 0 \mid y_{+1})}{P(X_i = 0 \mid y_{-1})}
   + \log \frac{P(y_{+1})}{P(y_{-1})} \Big] \\
&= \operatorname{sign}\Big[ \sum_i x_i u_i + u_0 \Big] \\
&= \operatorname{sign}\Big[ \sum_i x_i u_i + x_0 u_0 \Big]
   \qquad \text{where } x_0 = 1 \text{ for every } \mathbf{x} \text{ (bias term)} \\
&= \operatorname{sign}(\mathbf{x} \cdot \mathbf{u})
\end{aligned}
$$

with

$$
u_i = \log \frac{P(X_i = 1 \mid y_{+1})}{P(X_i = 1 \mid y_{-1})}
    - \log \frac{P(X_i = 0 \mid y_{+1})}{P(X_i = 0 \mid y_{-1})},
\qquad
u_0 = \sum_i \log \frac{P(X_i = 0 \mid y_{+1})}{P(X_i = 0 \mid y_{-1})}
    + \log \frac{P(y_{+1})}{P(y_{-1})}.
$$
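To make this concrete, here is a small runnable sketch (not from the slides; the data, smoothing constant, and helper names are made up for illustration) that fits a Bernoulli naïve Bayes model, forms the weights u_i and bias u_0 defined above, and checks that sign(x · u + u_0) reproduces the naïve Bayes prediction:

```python
# Minimal sketch (illustrative): Bernoulli naive Bayes with binary features
# is a linear classifier, i.e. its argmax prediction equals sign(x . u + u0).
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
y = rng.integers(0, 2, size=n) * 2 - 1                 # labels in {-1, +1}
theta = np.where(y[:, None] == 1, 0.7, 0.3)             # P(X_i = 1 | y) per example
X = (rng.random((n, d)) < theta).astype(int)            # binary feature matrix

def fit_bernoulli_nb(X, y, alpha=1.0):
    """Estimate P(X_i=1 | y) and P(y) with Laplace smoothing."""
    params = {}
    for c in (-1, +1):
        Xc = X[y == c]
        p1 = (Xc.sum(axis=0) + alpha) / (len(Xc) + 2 * alpha)   # P(X_i=1 | y=c)
        prior = (len(Xc) + alpha) / (len(y) + 2 * alpha)        # P(y=c)
        params[c] = (p1, prior)
    return params

params = fit_bernoulli_nb(X, y)
p1_pos, prior_pos = params[+1]
p1_neg, prior_neg = params[-1]

# Linear weights from the derivation above.
u = np.log(p1_pos / p1_neg) - np.log((1 - p1_pos) / (1 - p1_neg))
u0 = np.log((1 - p1_pos) / (1 - p1_neg)).sum() + np.log(prior_pos / prior_neg)

def nb_predict(x):
    """Direct naive Bayes argmax over y in {-1, +1}, computed on the log scale."""
    ll_pos = np.log(np.where(x == 1, p1_pos, 1 - p1_pos)).sum() + np.log(prior_pos)
    ll_neg = np.log(np.where(x == 1, p1_neg, 1 - p1_neg)).sum() + np.log(prior_neg)
    return 1 if ll_pos > ll_neg else -1

linear_pred = np.where(X @ u + u0 > 0, 1, -1)
nb_pred = np.array([nb_predict(x) for x in X])
print("agreement:", (linear_pred == nb_pred).mean())    # 1.0
```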

Page 10:

Predict "pos" on (x1, x2) iff a·x1 + b·x2 + c > 0, i.e., predict sign(a·x1 + b·x2 + c).

Page 11:
Page 12:
Page 13:

Naïve Bayes is Linear

•  Summary:
   –  Assume conditional independence of attributes given the class
   –  Derive a classifier: argmax_y Pr(Y=y|X=x)
   –  Turns out to be linear…?
•  Comments
   –  Linear classifiers are important and interesting
•  Alternative story:
   –  Assume a "linear model" of Pr(Y=y|X=x)
   –  Find the linear model that fits the data "best"

Page 14:

REVIEW

Page 15:

Quick Review: Probabilistic Learning Methods

•  Probabilities are useful for modeling uncertainty and non-deterministic processes
•  A classifier takes as input feature values x = x1,…,xn and produces a predicted class label y
•  Classification algorithms can be based on probability:
   –  model the joint distribution of <X1,…,Xn,Y> and predict argmax_y Pr(y|x1,…,xn)
•  Model Pr(X1,…,Xn,Y) with a big table (brute-force Bayes)
•  Model Pr(X1,…,Xn,Y) with Bayes' rule and conditional independence assumptions (naïve Bayes)
•  Naïve Bayes has a linear decision boundary: i.e., the classifier is of the form sign(x · w)

Page 16:

LEARNING AS OPTIMIZATION

Page 17:

Learning as optimization

•  The main idea today:
   –  Make two assumptions:
      •  the classifier is linear, like naïve Bayes, i.e. roughly of the form f_w(x) = sign(x · w)
      •  we want to find the classifier w that "fits the data best"
   –  Formalize this as an optimization problem:
      •  Pick a loss function J(w,D), and find argmin_w J(w,D)
      •  OR: pick a "goodness" function f_D(w), often Pr(D|w), and find argmax_w f_D(w)

Page 18:

Learning as optimization: warmup

Goal: find the optimal (MLE) parameter θ of a binomial.
Dataset: D = {x1,…,xn}, each xi is 0 or 1, and k of them are 1.

$$
P(D \mid \theta) = \text{const} \cdot \prod_i \theta^{x_i} (1-\theta)^{1-x_i}
                 = \theta^{k} (1-\theta)^{n-k}
$$

$$
\begin{aligned}
\frac{d}{d\theta} P(D \mid \theta)
&= \frac{d}{d\theta}\Big( \theta^{k} (1-\theta)^{n-k} \Big) \\
&= \Big( \frac{d}{d\theta}\theta^{k} \Big)(1-\theta)^{n-k}
   + \theta^{k} \frac{d}{d\theta}(1-\theta)^{n-k} \\
&= k\,\theta^{k-1}(1-\theta)^{n-k} + \theta^{k}(n-k)(1-\theta)^{n-k-1}(-1) \\
&= \theta^{k-1}(1-\theta)^{n-k-1}\big( k(1-\theta) - \theta(n-k) \big)
\end{aligned}
$$

Page 19:

Learning as optimization: warmup

Goal: find the best parameter θ of a binomial.
Dataset: D = {x1,…,xn}, each xi is 0 or 1, and k of them are 1.

Setting the derivative to zero gives θ = 0, θ = 1, or

$$
k(1-\theta) - \theta(n-k) = 0
\;\Rightarrow\; k - k\theta - n\theta + k\theta = 0
\;\Rightarrow\; n\theta = k
\;\Rightarrow\; \theta = k/n
$$
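As a quick sanity check on this result, here is a tiny sketch (illustrative; the dataset and grid are made up) that evaluates the binomial log likelihood on a grid and confirms the maximizer is k/n:

```python
# Numerical check (illustrative): theta^k * (1-theta)^(n-k) is maximized at theta = k/n.
import numpy as np

x = np.array([1, 0, 1, 1, 0, 1, 0, 1])   # made-up binary dataset
n, k = len(x), int(x.sum())

thetas = np.linspace(1e-6, 1 - 1e-6, 100001)
log_lik = k * np.log(thetas) + (n - k) * np.log(1 - thetas)

print("grid argmax:", thetas[np.argmax(log_lik)])   # ~0.625
print("k/n        :", k / n)                        # 0.625
```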

Page 20:

Learning as optimization with gradient ascent

•  Goal: learn the parameter w of …
•  Dataset: D = {(x1,y1),…,(xn,yn)}
•  Use your model to define
   –  Pr(D|w) = ….
•  Set w to maximize the likelihood
   –  Usually we use numeric methods to find the optimum
   –  i.e., gradient ascent: repeatedly take a small step in the direction of the gradient

Difference between ascent and descent is only the sign: I’m going to be sloppy here about that.

Page 21:

Gradient ascent

To find argmax_x f(x):
•  Start with x0
•  For t = 1,…:
   •  x_{t+1} = x_t + λ f′(x_t), where λ is small
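A minimal runnable sketch of this loop (illustrative; the objective, step size, and iteration count are made up):

```python
# Gradient-ascent sketch (illustrative): maximize f(x) = -(x - 3)^2,
# whose gradient is f'(x) = -2(x - 3). The maximizer is x = 3.
def gradient_ascent(grad, x0, lam=0.1, steps=100):
    x = x0
    for _ in range(steps):
        x = x + lam * grad(x)   # step in the direction of the gradient
    return x

x_star = gradient_ascent(lambda x: -2.0 * (x - 3.0), x0=0.0)
print(round(x_star, 4))   # ~3.0
```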

Page 22:

Gradient descent

Likelihood: ascent; loss: descent.

Page 23:

Pros and cons of gradient descent

•  Simple and often quite effective on ML tasks
•  Often very scalable
•  Only applies to smooth (differentiable) functions
•  Might find a local minimum rather than a global one

Page 24:

Pros and cons of gradient descent

There is only one local optimum if the function is convex.

Page 25:

LEARNING LINEAR CLASSIFIERS WITH GRADIENT ASCENT

PART 1: THE IDEA

Page 26:
Page 27:

Using gradient ascent for linear classifiers

1.  Replace sign(x · w) with something differentiable, e.g. logistic(x · w):

$$
\operatorname{logistic}(u) \equiv \frac{1}{1 + e^{-u}},
\qquad
P(Y = \mathrm{pos} \mid X = \mathbf{x}) \equiv \frac{1}{1 + e^{-\mathbf{x} \cdot \mathbf{w}}}
$$

[Figure: the step function sign(x) next to the smooth logistic curve.]
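To make the contrast concrete, here is a tiny illustrative sketch (not from the slides) comparing the hard sign function with the smooth logistic function:

```python
# Illustrative: the logistic function as a smooth, differentiable stand-in
# for the hard sign/threshold function.
import numpy as np

def logistic(u):
    return 1.0 / (1.0 + np.exp(-u))

for u in (-5.0, -1.0, 0.0, 1.0, 5.0):
    print(f"u={u:+.1f}  sign={np.sign(u):+.0f}  logistic={logistic(u):.3f}")
# logistic(u) moves smoothly from ~0 to ~1 as u crosses 0, instead of jumping.
```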

Page 28:

Using gradient ascent for linear classifiers

1.  Replace sign(x · w) with something differentiable, e.g. logistic(x · w).
2.  Define a function on the data that formalizes "how well w predicts the data" (loss function).
    •  Assume y = 0 or y = 1.
    •  Our optimization target is the log of the conditional likelihood:

$$
P(Y = 1 \mid X = \mathbf{x}) \equiv \frac{1}{1 + e^{-\mathbf{x} \cdot \mathbf{w}}}
$$

$$
\mathrm{LCL}_D(\mathbf{w}) \equiv \sum_i \log P(y_i \mid \mathbf{x}_i, \mathbf{w})
$$

$$
P(y_i \mid \mathbf{x}_i, \mathbf{w}) \equiv
\begin{cases}
\dfrac{1}{1 + \exp(-\mathbf{x}_i \cdot \mathbf{w})} & \text{if } y_i = 1 \\[2ex]
1 - \dfrac{1}{1 + \exp(-\mathbf{x}_i \cdot \mathbf{w})} & \text{if } y_i = 0
\end{cases}
$$
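A minimal sketch of this objective in code (illustrative names and a tiny made-up dataset):

```python
# Compute the log conditional likelihood LCL_D(w) defined above for y in {0, 1}.
import numpy as np

def logistic(u):
    return 1.0 / (1.0 + np.exp(-u))

def log_conditional_likelihood(w, X, y):
    """LCL_D(w) = sum_i log P(y_i | x_i, w)."""
    p = logistic(X @ w)                      # P(Y=1 | x_i, w) for each example
    eps = 1e-12                              # guard against log(0)
    return np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

X = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])    # made-up features
y = np.array([0, 1, 1])                                # made-up labels
print(log_conditional_likelihood(np.zeros(2), X, y))   # 3 * log(0.5) ~ -2.079
```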

Page 29:

Using gradient ascent for linear classifiers

1.  Replace sign(x · w) with something differentiable, e.g. logistic(x · w).
2.  Define a function on the data that formalizes "how well w predicts the data" (the log conditional likelihood):

$$
\mathrm{LCL}_D(\mathbf{w}) \equiv \sum_i \log P(y_i \mid \mathbf{x}_i, \mathbf{w})
$$

3.  Differentiate the likelihood function and use gradient ascent:
    –  Start with w0
    –  For t = 1,…:
       •  w_{t+1} = w_t + λ ∇LCL_D(w_t), where λ is small

Warning: this is not a toy. This is logistic regression.

Page 30:

Stochastic Gradient Descent (SGD)

•  Goal: learn the parameter θ of …
•  Dataset: D = {x1,…,xn}, or maybe D = {(x1,y1),…,(xn,yn)}
•  Write down Pr(D|θ) as a function of θ
•  For large datasets this is expensive: we don't want to load all the data D into memory, and the gradient depends on all the data
•  An alternative:
   –  pick a small subset of examples B << D
   –  approximate the gradient using them ("on average" this is the right direction)
   –  take a step in that direction
   –  repeat….
•  B = one example is a very popular choice


Difference between ascent and descent is only the sign: we’ll be sloppy and use the more common “SGD”

Page 31:

Using SGD for logistic regression

1.  P(y|x) = logistic(x · w)
2.  Define the likelihood:

$$
\mathrm{LCL}_D(\mathbf{w}) \equiv \sum_i \log P(y_i \mid \mathbf{x}_i, \mathbf{w})
$$

3.  Differentiate the LCL function and use stochastic gradient ascent (the "SGD" of the previous slide):
    –  Start with w0
    –  For t = 1,…,T until convergence:
       •  For each example x, y in D:
          •  w_{t+1} = w_t + λ ∇L_{x,y}(w_t), where λ is small

More steps and a noisier path toward the optimum, but each step is cheaper.

Page 32:

LEARNING LINEAR CLASSIFIERS WITH GRADIENT DESCENT

PART 2: THE GRADIENT

I will start by deriving the gradient for one example (e.g., SGD) and then move to "batch" gradient descent.

Page 33:

We’re going to dive into this thing here: d/dw(p)

Likelihood on one example is:


Page 34:

$( f^n )' = n f^{\,n-1} \cdot f'$

Page 35:
Page 36:
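The derivation on slides 33-36 was embedded as images and did not survive extraction. As a hedged reconstruction (it uses the chain rule above and agrees with the update rule that appears on the next slide), the gradient of the log likelihood of one example is:

$$
\begin{aligned}
\log P(y \mid \mathbf{x}, \mathbf{w}) &= y \log p + (1 - y)\log(1 - p),
\qquad p = \frac{1}{1 + \exp(-\mathbf{x} \cdot \mathbf{w})},
\qquad \frac{\partial p}{\partial \mathbf{w}} = p(1-p)\,\mathbf{x} \\
\frac{\partial}{\partial \mathbf{w}} \log P(y \mid \mathbf{x}, \mathbf{w})
&= \frac{y}{p}\, p(1-p)\,\mathbf{x} - \frac{1-y}{1-p}\, p(1-p)\,\mathbf{x}
 = \big( y(1-p) - (1-y)\,p \big)\,\mathbf{x}
 = (y - p)\,\mathbf{x}
\end{aligned}
$$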

Page 37:

Breaking it down: SGD for logistic regression

1.  P(y|x) = logistic(x · w)
2.  Define a function:

$$
\mathrm{LCL}_D(\mathbf{w}) \equiv \sum_i \log P(y_i \mid \mathbf{x}_i, \mathbf{w})
$$

3.  Differentiate the function and use SGD:
    –  Start with w0
    –  For t = 1,…,T until convergence:
       •  For each example x, y in D:
          •  w_{t+1} = w_t + λ ∇L_{x,y}(w_t), where λ is small
          •  i.e., w_{t+1} = w_t + λ(y − p_i)x, with p_i = (1 + exp(−x · w_t))^(−1)
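Putting steps 1-3 together, here is a compact runnable sketch (not code from the course; the data, learning rate, and epoch count are made up) of SGD training for logistic regression with the update w ← w + λ(y − p)x:

```python
# SGD logistic regression sketch (illustrative, not from the slides).
import numpy as np

rng = np.random.default_rng(0)

def logistic(u):
    return 1.0 / (1.0 + np.exp(-u))

def sgd_logreg(X, y, lam=0.1, epochs=50):
    X = np.hstack([X, np.ones((len(X), 1))])   # append x0 = 1 (bias term)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(X)):      # visit examples in random order
            p = logistic(X[i] @ w)
            w += lam * (y[i] - p) * X[i]       # per-example gradient (y - p) * x
    return w

# Tiny made-up dataset: label is 1 when x1 + x2 > 1.
X = rng.random((200, 2))
y = (X[:, 0] + X[:, 1] > 1.0).astype(float)

w = sgd_logreg(X, y)
X1 = np.hstack([X, np.ones((len(X), 1))])
acc = np.mean((logistic(X1 @ w) > 0.5) == (y == 1))
print("training accuracy:", acc)   # typically high on this separable data
```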

Page 38:

Details: Picking learning rate

•  Use grid search in log space over small values on a tuning set:
   –  e.g., 0.01, 0.001, …
•  Sometimes, decrease the rate after each pass over the data:
   –  e.g., by a factor of 1/(1 + d·t), where t is the epoch
   –  sometimes 1/t²
•  Fancier techniques I won't talk about:
   –  Adaptive gradients: scale the gradient differently for each dimension (Adagrad, ADAM, …)
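A small sketch of the two decay schedules mentioned above (the constants are made-up values):

```python
# Illustrative learning-rate schedules (constants are made up).
def lr_inverse(t, lam0=0.1, d=0.01):
    """Decay by a factor of 1/(1 + d*t), with t the epoch index."""
    return lam0 / (1.0 + d * t)

def lr_inverse_square(t, lam0=0.1):
    """The more aggressive 1/t^2 schedule (t starts at 1)."""
    return lam0 / (t * t)

print([round(lr_inverse(t), 4) for t in range(5)])          # slowly shrinking rates
print([round(lr_inverse_square(t), 4) for t in (1, 2, 3)])  # [0.1, 0.025, 0.0111]
```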

Page 39:

Details: Debugging

•  Check that the gradient is indeed a locally good approximation to the likelihood
   –  the "finite difference test"
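A sketch of that test (illustrative; it reuses the LCL and the per-example gradient (y − p)x from earlier slides), comparing the analytic gradient against a centered finite difference:

```python
# Finite-difference gradient check (illustrative sketch).
import numpy as np

def logistic(u):
    return 1.0 / (1.0 + np.exp(-u))

def lcl_one(w, x, y):
    """Log conditional likelihood of one example, y in {0, 1}."""
    p = logistic(x @ w)
    return y * np.log(p) + (1 - y) * np.log(1 - p)

def analytic_grad(w, x, y):
    return (y - logistic(x @ w)) * x

x = np.array([1.0, -2.0, 0.5])
y = 1.0
w = np.array([0.3, 0.1, -0.4])

eps = 1e-6
numeric = np.array([
    (lcl_one(w + eps * e, x, y) - lcl_one(w - eps * e, x, y)) / (2 * eps)
    for e in np.eye(len(w))
])
print(np.max(np.abs(numeric - analytic_grad(w, x, y))))   # should be tiny (~1e-9)
```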

Page 40:

LOGISTIC REGRESSION: OBSERVATIONS

Page 41:

Convexity and logistic regression

This LCL function is convex: there is only one local minimum, so gradient descent will give the global minimum.

Page 42:

Non-stochastic gradient descent

•  In batch gradient descent, average the gradient over all the examples D = {(x1,y1),…,(xn,yn)}
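The gradient formula on this slide was an image; as a hedged reconstruction consistent with the per-example gradient (y − p)x derived above, the batch gradient (before averaging over the n examples) is:

$$
\frac{\partial}{\partial w_j} \mathrm{LCL}_D(\mathbf{w}) = \sum_{i=1}^{n} (y_i - p_i)\, x_{ij},
\qquad p_i = \frac{1}{1 + \exp(-\mathbf{x}_i \cdot \mathbf{w})}.
$$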

Page 43:

Non-stochastic gradient descent

•  This can be interpreted as the difference between the expected value of y given xj = 1 in the data and the expected value of y given xj = 1 as predicted by the model
•  Gradient ascent tries to make those two equal

Page 44:

This LCL function "overfits"

•  This can be interpreted as the difference between the expected value of y given xj = 1 in the data and the expected value of y given xj = 1 as predicted by the model
•  Gradient ascent tries to make those equal
•  That's impossible for some wj!
   •  e.g., if xj = 1 only in positive examples, the gradient is always positive

Page 45:

This LCL function "overfits"

•  This can be interpreted as the difference between the expected value of y given xj = 1 in the data and the expected value of y given xj = 1 as predicted by the model
•  Gradient ascent tries to make those equal
•  That's impossible for some wj: e.g., if xj = 1 only in positive examples, the gradient is always positive
•  Using this LCL function for text: in practice, it's important to discard rare features to get good results

Page 46:

This LCL function "overfits"

•  Overfitting is often a problem in supervised learning.
•  When you fit the data (maximize LCL), are you fitting "real structure" in the data or "noise" in the data?
•  Will the patterns you see appear in a test set or not?

[Figure: error/LCL on the training set D and on an unseen test set Dtest, plotted as the number of features grows.]

Page 47:

REGULARIZED LOGISTIC REGRESSION

Page 48:

Regularized logistic regression as a MAP

•  Our objective so far maximizes the log conditional likelihood of the data (LCL):

$$
\mathrm{LCL}_D(\mathbf{w}) \equiv \sum_i \log P(y_i \mid \mathbf{x}_i, \mathbf{w})
$$

   –  like MLE: max_w Pr(D|w)
   –  … but focusing on just a part of D
•  Another possibility: introduce a prior over w
   –  maximize something like a MAP: Pr(w|D) = 1/Z · Pr(D|w) Pr(w)
•  If the prior on wj is zero-mean Gaussian, then Pr(wj) = 1/Z · exp(−wj² / 2σ²)

Page 49:

Regularized logistic regression

•  Replace LCL (the log conditional likelihood, our LCL function)
•  with LCL plus a penalty for large weights
•  The update for wj changes accordingly (a reconstruction of the penalized objective and update follows below)
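The penalty and update formulas on this slide were images and did not survive extraction. The following is a hedged reconstruction, consistent with the per-weight update shown on the next slide, of an L2 (zero-mean Gaussian prior) penalty and the resulting gradient-ascent update:

$$
\mathrm{LCL}^{\text{reg}}_D(\mathbf{w}) = \sum_i \log P(y_i \mid \mathbf{x}_i, \mathbf{w}) - \mu \lVert \mathbf{w} \rVert_2^2,
\qquad
w_j \leftarrow w_j + \lambda \Big( \sum_i (y_i - p_i)\, x_{ij} - 2\mu\, w_j \Big).
$$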

Page 50:

Breaking it down: regularized SGD for logistic regression

1.  P(y|x) = logistic(x · w)
2.  Define a function:

$$
\mathrm{LCL}_D(\mathbf{w}) \equiv \sum_i \log P(y_i \mid \mathbf{x}_i, \mathbf{w}) - \mu \lVert \mathbf{w} \rVert_2^2
$$

3.  Differentiate the function and use SGD:
    –  Start with w0
    –  For t = 1,…,T until convergence:
       •  For each example x, y in D:
          •  w_{t+1} = w_t + λ ∇L_{x,y}(w_t), where λ is small
          •  i.e., w_{t+1} = w_t + λ(y − p_i)x − 2λμ w_t, with p_i = (1 + exp(−x · w_t))^(−1)

The −2λμ w_t term pushes w toward the all-zeros vector ("weight decay").
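For comparison with the earlier unregularized sketch, the only change in code is the extra weight-decay term in the update (again illustrative; μ is a made-up value):

```python
# Regularized (L2 / weight-decay) SGD step for logistic regression (sketch).
import numpy as np

def logistic(u):
    return 1.0 / (1.0 + np.exp(-u))

def sgd_step_regularized(w, x, y, lam=0.1, mu=0.01):
    """One update: w <- w + lam*(y - p)*x - 2*lam*mu*w, with p = logistic(x . w)."""
    p = logistic(x @ w)
    return w + lam * (y - p) * x - 2.0 * lam * mu * w

w = np.array([0.5, -0.2, 0.1])
w = sgd_step_regularized(w, np.array([1.0, 0.5, 1.0]), 1.0)
print(w)   # the -2*lam*mu*w term shrinks the weights slightly toward zero each step
```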

Page 51:

DISCRIMINATIVE AND GENERATIVE CLASSIFIERS

Page 52:

Discriminative vs Generative

•  Generative classifiers are like naïve Bayes:
   –  Find θ = argmax_θ Π_i Pr(yi, xi | θ)
   –  Different assumptions about the generative process for the data: Pr(X,Y), priors on θ, …
•  Discriminative classifiers are like logistic regression:
   –  Find θ = argmax_θ Π_i Pr(yi | xi, θ)
   –  Different assumptions about the conditional probability: Pr(Y|X), priors on θ, …

Page 53:

Discriminative vs Generative

•  With enough data:
   –  naïve Bayes is the best classifier when the features are truly independent
      •  why? picking argmax_y Pr(y|x) is optimal
•  Question:
   –  will logistic regression converge to the same classifier as naïve Bayes when the features are actually independent?

Page 54:

[Figure: solid curves: naïve Bayes (NB); dashed curves: logistic regression (LR).]

Page 55:

Naïve Bayes makes stronger assumptions about the data but needs fewer examples to estimate its parameters.

"On Discriminative vs Generative Classifiers: …", Andrew Ng and Michael Jordan, NIPS 2001.