CSC 411: Lecture 02: Linear Regression

Class based on Raquel Urtasun & Rich Zemel’s lectures

Sanja Fidler

University of Toronto

Jan 13, 2016

(Most plots in this lecture are from Bishop’s book)

Problems for Today

What should I watch this Friday?

Goal: Predict movie rating automatically!

Goal: How many followers will I get?

Goal: Predict the price of the house

Regression

What do all these problems have in common?

- Continuous outputs; we'll call these t
  (e.g., a rating: a real number between 0 and 10, # of followers, house price)

Predicting continuous outputs is called regression

What do I need in order to predict these outputs?

- Features (inputs); we'll call these x (or a vector x)
- Training examples: many x^(i) for which t^(i) is known (e.g., many movies for which we know the rating)
- A model: a function that represents the relationship between x and t
- A loss (or cost, or objective) function, which tells us how well our model approximates the training examples
- Optimization: a way of finding the parameters of our model that minimize the loss function

Today: Linear Regression

Linear regression

- continuous outputs
- simple model (linear)

Introduce key concepts:

- loss functions
- generalization
- optimization
- model complexity
- regularization

Simple 1-D regression

Circles are data points (i.e., training examples) that are given to us

The data points are uniform in x, but may be displaced in y:

t(x) = f(x) + ε

with ε some noise

In green is the "true" curve that we don't know

Goal: We want to fit a curve to these points

Simple 1-D regression

Key Questions:

- How do we parametrize the model?
- What loss (objective) function should we use to judge the fit?
- How do we optimize fit to unseen test data (generalization)?

Example: Boston Housing data

Estimate the median house price in a neighborhood based on neighborhood statistics

Look at the first possible attribute (feature): per capita crime rate

Use this to predict house prices in other neighborhoods

Is this a good input (attribute) to predict house prices?

Represent the Data

Data is described as pairs D = {(x^(1), t^(1)), ..., (x^(N), t^(N))}

- x ∈ R is the input feature (per capita crime rate)
- t ∈ R is the target output (median house price)
- (i) simply indexes the training examples (we have N in this case)

Here t is continuous, so this is a regression problem

The model outputs y, an estimate of t:

y(x) = w0 + w1 x

What type of model did we choose?

Divide the dataset into training and testing examples

- Use the training examples to construct a hypothesis, or function approximator, that maps x to predicted y
- Evaluate the hypothesis on the test set
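To make this setup concrete, here is a minimal NumPy sketch (not from the slides; all numbers and variable names are hypothetical): a toy 1-D dataset of (x, t) pairs, a train/test split, and the linear model y(x) = w0 + w1 x evaluated as a hypothesis on the held-out examples.

    import numpy as np

    # Toy 1-D dataset (hypothetical values): x = per capita crime rate,
    # t = median house price, stored as parallel arrays of N pairs
    x = np.array([0.1, 0.5, 1.2, 2.3, 3.1, 4.0, 5.5, 7.2])
    t = np.array([34.0, 30.5, 28.0, 25.0, 22.5, 21.0, 18.0, 15.5])

    # Divide the dataset into training and testing examples
    x_train, t_train = x[:6], t[:6]
    x_test,  t_test  = x[6:], t[6:]

    def predict(x, w0, w1):
        """Linear model y(x) = w0 + w1 * x."""
        return w0 + w1 * x

    # A hypothesis is a particular choice of (w0, w1); evaluate it on the test set
    w0, w1 = 33.0, -2.5                    # hypothetical weights, not yet learned
    print(predict(x_test, w0, w1), t_test)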

Noise

A simple model typically does not fit the data exactly; the lack of fit can be considered noise

Sources of noise:

- Imprecision in the data attributes (input noise, e.g. noise in the per-capita crime rate)
- Errors in the data targets (mislabeling, e.g. noise in house prices)
- Additional attributes, not taken into account by the data attributes, that affect the target values (latent variables). In the example, what else could affect house prices?
- The model may be too simple to account for the data targets

Least-Squares Regression

Define a model

Linear: y(x) = w0 + w1 x

The standard loss/cost/objective function measures the squared error between y and the true value t

Linear model: ℓ(w) = Σ_{n=1}^N [t^(n) − (w0 + w1 x^(n))]²

For a particular hypothesis (y(x) defined by a choice of w, drawn in red), what does the loss represent geometrically?

The loss for the red hypothesis is the sum of the squared vertical errors (the squared lengths of the green vertical lines)

How do we obtain the weights w = (w0, w1)? Find the w that minimizes the loss ℓ(w)

For the linear model, what kind of function is ℓ(w)?
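As a sketch (again with made-up data and names), the squared-error loss of the linear model is just the sum of squared vertical errors between the targets and the predictions:

    import numpy as np

    def loss(w, x, t):
        """Sum-of-squares loss l(w) = sum_n [t^(n) - (w0 + w1 x^(n))]^2."""
        w0, w1 = w
        y = w0 + w1 * x                # predictions of the linear hypothesis
        return np.sum((t - y) ** 2)    # sum of squared vertical errors

    x = np.array([0.0, 1.0, 2.0, 3.0])         # toy inputs
    t = np.array([1.1, 2.9, 5.2, 6.8])         # toy targets
    print(loss(np.array([1.0, 2.0]), x, t))    # loss of the hypothesis w = (1, 2)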

Optimizing the Objective

One straightforward method: gradient descent

- initialize w (e.g., randomly)
- repeatedly update w based on the gradient:

w ← w − λ ∂ℓ/∂w

λ is the learning rate

For a single training case, this gives the LMS update rule:

w ← w + 2λ (t^(n) − y(x^(n))) x^(n)

where the term (t^(n) − y(x^(n))) is the error

Note: as the error approaches zero, so does the update (w stops changing)
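A minimal sketch of one LMS step for the linear model (hypothetical names and numbers; the input is augmented with a constant 1 so that w0 and w1 are updated by the same rule):

    import numpy as np

    def lms_update(w, x_n, t_n, lam=0.01):
        """One LMS step w <- w + 2*lam*(t_n - y(x_n))*x_n for w = (w0, w1)."""
        x_aug = np.array([1.0, x_n])          # [1, x] so w0 receives the bias update
        error = t_n - w @ x_aug               # t^(n) - y(x^(n))
        return w + 2 * lam * error * x_aug    # the update vanishes as error -> 0

    w = np.zeros(2)                           # initialize w (here: zeros)
    w = lms_update(w, x_n=2.0, t_n=5.0)       # one step on a single training case
    print(w)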

Optimizing Across Training Set

Two ways to generalize this to all examples in the training set:

1. Batch updates: sum or average the updates across every example n, then change the parameter values

   w ← w + 2λ Σ_{n=1}^N (t^(n) − y(x^(n))) x^(n)

2. Stochastic/online updates: update the parameters for each training case in turn, according to its own gradients

   Algorithm 1 Stochastic gradient descent
   1: Randomly shuffle the examples in the training set
   2: for i = 1 to N do
   3:    Update: w ← w + 2λ (t^(i) − y(x^(i))) x^(i)   (update for a linear model)
   4: end for

- Underlying assumption: the sample is independent and identically distributed (i.i.d.)
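The sketch below spells out Algorithm 1 for the linear model (toy data, hypothetical names); a leading column of ones in X folds the bias w0 into w so that one update rule covers both weights:

    import numpy as np

    rng = np.random.default_rng(0)

    def sgd_epoch(w, X, t, lam=0.01):
        """One pass of Algorithm 1: shuffle, then one update per training case."""
        for i in rng.permutation(len(t)):       # randomly shuffle the examples
            error = t[i] - X[i] @ w             # t^(i) - y(x^(i))
            w = w + 2 * lam * error * X[i]      # stochastic/online update
        return w

    # Toy data: column of ones (bias) plus one input feature
    X = np.column_stack([np.ones(4), np.array([0.0, 1.0, 2.0, 3.0])])
    t = np.array([1.1, 2.9, 5.2, 6.8])
    w = np.zeros(2)
    for epoch in range(200):                    # repeated passes over the data
        w = sgd_epoch(w, X, t)
    print(w)                                    # hovers near the least-squares fit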

Analytical Solution?

For some objectives we can also find the optimal solution analytically

This is the case for linear least-squares regression

How? Compute the derivatives of the objective w.r.t. w and equate them with 0

Define:

t = [t^(1), t^(2), ..., t^(N)]^T

X = [ 1, x^(1)
      1, x^(2)
      ...
      1, x^(N) ]

Then:

w = (X^T X)^{-1} X^T t

(work it out!)
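A sketch of the analytical solution with NumPy (toy data; np.linalg.solve applies the inverse in a numerically safer way than forming it explicitly):

    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0])          # toy inputs
    t = np.array([1.1, 2.9, 5.2, 6.8])          # toy targets

    # Design matrix X with rows [1, x^(n)], as defined above
    X = np.column_stack([np.ones_like(x), x])

    # w = (X^T X)^{-1} X^T t
    w = np.linalg.solve(X.T @ X, X.T @ t)
    print(w)                                     # [w0, w1]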

Multi-dimensional Inputs

One method of extending the model is to consider other input dimensions:

y(x) = w0 + w1 x1 + w2 x2

In the Boston housing example, we can look at the number of rooms

Linear Regression with Multi-dimensional Inputs

Imagine now we want to predict the median house price from these multi-dimensional observations

Each house is a data point n, with observations indexed by j:

x^(n) = (x_1^(n), ..., x_j^(n), ..., x_d^(n))

We can incorporate the bias w0 into w by using x0 = 1; then

y(x) = w0 + Σ_{j=1}^d wj xj = w^T x

We can then solve for w = (w0, w1, ..., wd). How?

We can use gradient descent to solve for each coefficient, or compute w analytically (how does the solution change?)
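A short sketch of the same recipe with d-dimensional inputs (hypothetical data): prepending x0 = 1 to every example absorbs the bias, and the analytic solution keeps exactly the same form, only with d + 1 columns in X:

    import numpy as np

    # Hypothetical observations: N = 5 houses, d = 2 features
    # (say, per capita crime rate and number of rooms)
    X_raw = np.array([[0.1, 6.5],
                      [0.5, 6.0],
                      [1.2, 5.8],
                      [2.3, 5.5],
                      [3.1, 5.0]])
    t = np.array([34.0, 30.5, 28.0, 25.0, 22.5])

    # Absorb the bias: x0 = 1 for every example, so y(x) = w^T x
    X = np.column_stack([np.ones(len(t)), X_raw])

    w = np.linalg.solve(X.T @ X, X.T @ t)    # same normal-equation solution
    print(w, X @ w)                          # weights and predictions w^T x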

Fitting a Polynomial

What if our linear model is not good? How can we create a more complicated model?

We can create a more complicated model by defining input variables that are combinations of the components of x

Example: an M-th order polynomial function of a one-dimensional feature x:

y(x, w) = w0 + Σ_{j=1}^M wj x^j

where x^j is the j-th power of x

We can use the same approach to optimize for the weights w

How do we do that?
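One way to do that, sketched below with made-up data: build the polynomial features [1, x, x^2, ..., x^M] explicitly and reuse the same least-squares machinery on the expanded inputs.

    import numpy as np

    def poly_features(x, M):
        """Map a 1-D feature x to the M-th order polynomial features [1, x, ..., x^M]."""
        return np.column_stack([x ** j for j in range(M + 1)])

    x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])     # toy inputs
    t = np.array([0.1, 1.0, 0.0, -0.9, 0.2])      # toy targets

    M = 3
    Phi = poly_features(x, M)                      # expanded inputs
    w = np.linalg.solve(Phi.T @ Phi, Phi.T @ t)    # same least-squares solution
    print(w)                                       # [w0, w1, ..., wM]
    print(Phi @ w)                                 # fitted values y(x, w)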

Which Fit is Best?

(figure from Bishop)

Generalization

Generalization = the model's ability to predict held-out data

What is happening?

Our model with M = 9 overfits the data (it also models the noise)

This is not a problem if we have lots of training examples

Let's look at the estimated weights for various M in the case of fewer examples

The weights become huge to compensate for the noise

One way of dealing with this is to encourage the weights to be small (this way no input dimension will have too much influence on the prediction). This is called regularization.

Regularized Least Squares

Increasing the input features this way can complicate the model considerably

Goal: select the appropriate model complexity automatically

Standard approach: regularization

ℓ̃(w) = Σ_{n=1}^N [t^(n) − (w0 + w1 x^(n))]² + α w^T w

Intuition: since we are minimizing the loss, the second term will encourage smaller values in w

The penalty on the squared weights is known as ridge regression in statistics

This leads to a modified update rule for gradient descent:

w ← w + 2λ [ Σ_{n=1}^N (t^(n) − y(x^(n))) x^(n) − α w ]

It also has an analytical solution: w = (X^T X + α I)^{-1} X^T t (verify!)
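A sketch of both forms of regularized least squares (toy data, hypothetical names); note that, as written on the slide, the penalty is applied to all of w, including w0:

    import numpy as np

    def ridge_fit(X, t, alpha):
        """Analytical solution w = (X^T X + alpha*I)^(-1) X^T t."""
        return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ t)

    def ridge_grad_step(w, X, t, lam, alpha):
        """Modified update w <- w + 2*lam*( sum_n (t^(n) - y(x^(n))) x^(n) - alpha*w )."""
        errors = t - X @ w
        return w + 2 * lam * (X.T @ errors - alpha * w)

    # Toy data: bias column plus one feature
    X = np.column_stack([np.ones(4), np.array([0.0, 1.0, 2.0, 3.0])])
    t = np.array([1.1, 2.9, 5.2, 6.8])
    print(ridge_fit(X, t, alpha=0.1))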

Regularized least squares

Better generalization

Choose α carefully
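One common way to "choose α carefully" is to pick the value with the lowest error on held-out data; a minimal sketch under that assumption (made-up split and numbers):

    import numpy as np

    def ridge_fit(X, t, alpha):
        return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ t)

    # Hypothetical training / held-out split, with a bias column in X
    X_tr = np.column_stack([np.ones(6), np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])])
    t_tr = np.array([1.0, 2.1, 2.9, 4.2, 5.1, 5.9])
    X_va = np.column_stack([np.ones(2), np.array([3.0, 3.5])])
    t_va = np.array([7.2, 7.9])

    # Sweep a grid of alphas and keep the one with the lowest held-out squared error
    best = min(
        (np.sum((t_va - X_va @ ridge_fit(X_tr, t_tr, a)) ** 2), a)
        for a in [0.0, 0.01, 0.1, 1.0, 10.0]
    )
    print("held-out loss %.3f at alpha = %g" % best)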

1-D regression illustrates key concepts

Data fits: is a linear model best (model selection)?

- Simple models may not capture all the important variations (signal) in the data: they underfit
- More complex models may overfit the training data (fit not only the signal but also the noise in the data), especially if there is not enough data to constrain the model

One method of assessing fit: test generalization = the model's ability to predict held-out data

Optimization is essential: stochastic and batch iterative approaches; analytic when available

So...

Which movie will you watch?