Machine Learning Crash Course: Computer Vision (James Hays)

Page 1: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Machine Learning Crash Course

Computer Vision, James Hays

Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem

Photo: CMU Machine Learning Department protests G20

Page 2: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Recap: Multiple Views and Motion

• Epipolar geometry
  – Relates cameras in two positions
  – Fundamental matrix maps from a point in one image to a line (its epipolar line) in the other
  – Can solve for F given corresponding points (e.g., interest points)

• Stereo depth estimation
  – Estimate disparity by finding corresponding points along epipolar lines
  – Depth is inverse to disparity

• Motion estimation
  – By assuming brightness constancy, a truncated Taylor expansion leads to simple and fast patch matching across frames
  – Assume local motion is coherent
  – "Aperture problem" is resolved by a coarse-to-fine approach

$\nabla I \cdot [u\ v]^T + I_t = 0$

Page 3: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Structure from motion (or SLAM)

• Given a set of corresponding points in two or more images, compute the camera parameters and the 3D point coordinates

(Figure: three cameras with unknown poses R1,t1, R2,t2, R3,t3 observing unknown 3D points, all marked "?".)

Slide credit: Noah Snavely

Page 4: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Structure from motion ambiguity

• If we scale the entire scene by some factor k and, at the same time, scale the camera matrices by a factor of 1/k, the projections of the scene points in the images remain exactly the same:

It is impossible to recover the absolute scale of the scene!

$x = PX = \left(\frac{1}{k}P\right)(kX)$

Page 5: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

How do we know the scale of image content?

Page 6: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department
Page 7: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department
Page 8: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Bundle adjustment

• Non-linear method for refining structure and motion
• Minimizes reprojection error:

$E(P, X) = \sum_{i=1}^{m} \sum_{j=1}^{n} D\left(x_{ij},\, P_i X_j\right)^2$

(Figure: a 3D point Xj is observed by cameras P1, P2, P3 at image points x1j, x2j, x3j; the reprojections P1Xj, P2Xj, P3Xj are compared against those observations.)
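As a rough illustration of the objective above, here is a minimal NumPy/SciPy sketch (not the lecture's implementation) that minimizes reprojection error with scipy.optimize.least_squares. The parameterization (raw 3x4 camera matrices) and the toy data are illustrative assumptions; real systems parameterize rotations, translations, and intrinsics separately.

# Sketch: minimize E(P, X) = sum of squared reprojection errors over all observations.
import numpy as np
from scipy.optimize import least_squares

def project(P, X):
    """Project homogeneous 3D points X (Nx4) with a 3x4 camera P into Nx2 pixels."""
    x = (P @ X.T).T
    return x[:, :2] / x[:, 2:3]

def residuals(params, n_cams, n_pts, observations):
    """observations: list of (camera index, point index, observed 2D location)."""
    Ps = params[:n_cams * 12].reshape(n_cams, 3, 4)
    Xs = params[n_cams * 12:].reshape(n_pts, 3)
    Xh = np.hstack([Xs, np.ones((n_pts, 1))])            # homogeneous 3D points
    return np.concatenate([project(Ps[ci], Xh[pi:pi + 1])[0] - xy
                           for ci, pi, xy in observations])

# Toy problem: 2 cameras, 4 points, noisy initial guess (all values made up).
rng = np.random.default_rng(0)
P_true = np.stack([np.hstack([np.eye(3), np.array([[0.], [0.], [5.]])]),
                   np.hstack([np.eye(3), np.array([[1.], [0.], [5.]])])])
X_true = rng.normal(0, 1, (4, 3))
obs = [(c, p, project(P_true[c], np.append(X_true[p], 1.0)[None])[0])
       for c in range(2) for p in range(4)]
x0 = np.concatenate([P_true.ravel(), X_true.ravel()]) + rng.normal(0, 0.01, 36)
result = least_squares(residuals, x0, args=(2, 4, obs))   # jointly refine cameras and points
print("final reprojection error:", result.cost)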

Page 9: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Photosynth

Noah Snavely, Steven M. Seitz, Richard Szeliski, "Photo tourism: Exploring photo collections in 3D," SIGGRAPH 2006

http://photosynth.net/

Page 10: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Quiz 1 on Wednesday

• ~20 multiple choice or short answer questions
• In class, full period
• Only covers material from lecture, with a bias towards topics not covered by projects
• Study strategy: review the slides and consult the textbook to clarify confusing parts.

Page 11: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Machine Learning Crash Course

Computer Vision, James Hays

Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem

Photo: CMU Machine Learning Department protests G20

Page 12: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Machine learning: Overview

• Core of ML: making predictions or decisions from data.

• This overview will not go into depth about the statistical underpinnings of learning methods. We're looking at ML as a tool.

• Take a machine learning course if you want to know more!

Page 13: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Impact of Machine Learning

• Machine Learning is arguably the greatest export from computing to other scientific fields.

Page 14: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Machine Learning Applications

Slide: Isabelle Guyon

Page 15: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department
Page 16: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Dimensionality Reduction

• PCA, ICA, LLE, Isomap

• PCA is the most important technique to know. It takes advantage of correlations in data dimensions to produce the best possible lower dimensional representation, according to reconstruction error.

• PCA should be used for dimensionality reduction, not for discovering patterns or making predictions. Don't try to assign semantic meaning to the bases.
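Since PCA is called out as the key technique, here is a minimal NumPy sketch of PCA via the SVD: project toy correlated data onto the top-k directions of maximum variance and measure reconstruction error. The data and the choice of k are made up for illustration.

import numpy as np

def pca(X, k):
    """X: (n_samples, n_features). Returns (projection, components, mean)."""
    mu = X.mean(axis=0)
    Xc = X - mu
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                       # top-k principal directions
    return Xc @ components.T, components, mu

X = np.random.randn(200, 10) @ np.random.randn(10, 10)   # correlated toy data
Z, W, mu = pca(X, k=3)
X_hat = Z @ W + mu                            # best rank-3 reconstruction (in squared error)
print("reconstruction error:", np.mean((X - X_hat) ** 2))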

Page 17: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department
Page 18: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department
Page 19: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

• http://fakeisthenewreal.org/reform/

Page 20: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

• http://fakeisthenewreal.org/reform/

Page 21: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Clustering example: image segmentation

Goal: Break up the image into meaningful or perceptually similar regions

Page 22: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Segmentation for feature support or efficiency

[Felzenszwalb and Huttenlocher 2004]

[Hoiem et al. 2005, Mori 2005] [Shi and Malik 2001]

Slide: Derek Hoiem

(Figure: 50x50 patch examples.)

Page 23: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Segmentation as a result

Rother et al. 2004

Page 24: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Types of segmentations

Oversegmentation Undersegmentation

Multiple Segmentations

Page 25: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Clustering: group together similar points and represent them with a single token

Key Challenges:
1) What makes two points/images/patches similar?
2) How do we compute an overall grouping from pairwise similarities?

Slide: Derek Hoiem

Page 26: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Why do we cluster?

• Summarizing data
  – Look at large amounts of data
  – Patch-based compression or denoising
  – Represent a large continuous vector with the cluster number

• Counting
  – Histograms of texture, color, SIFT vectors

• Segmentation
  – Separate the image into different regions

• Prediction
  – Images in the same cluster may have the same labels

Slide: Derek Hoiem

Page 27: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

How do we cluster?

• K-means
  – Iteratively re-assign points to the nearest cluster center

• Agglomerative clustering
  – Start with each point as its own cluster and iteratively merge the closest clusters

• Mean-shift clustering
  – Estimate modes of the pdf

• Spectral clustering
  – Split the nodes in a graph based on assigned links with similarity weights

Page 28: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Clustering for Summarization

Goal: cluster to minimize variance in the data given clusters
– Preserve information

$\mathbf{c}^*, \boldsymbol{\delta}^* = \arg\min_{\mathbf{c},\,\boldsymbol{\delta}} \frac{1}{N} \sum_{j}^{N} \sum_{i}^{K} \delta_{ij} \left(\mathbf{c}_i - \mathbf{x}_j\right)^2$

where $\delta_{ij}$ indicates whether $\mathbf{x}_j$ is assigned to cluster center $\mathbf{c}_i$, the $\mathbf{c}_i$ are cluster centers, and the $\mathbf{x}_j$ are the data.

Slide: Derek Hoiem

Page 29: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

K-means algorithm

Illustration: http://en.wikipedia.org/wiki/K-means_clustering

1. Randomly select K centers

2. Assign each point to nearest center

3. Compute new center (mean) for each cluster

Page 30: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

K-means algorithm

Illustration: http://en.wikipedia.org/wiki/K-means_clustering

1. Randomly select K centers

2. Assign each point to nearest center

3. Compute new center (mean) for each cluster

Back to 2

Page 31: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

K-means

1. Initialize cluster centers: c^0; t = 0

2. Assign each point to the closest center

3. Update cluster centers as the mean of the points

4. Repeat 2-3 until no points are re-assigned (t=t+1)

Step 2 (assignments): $\boldsymbol{\delta}^{t} = \arg\min_{\boldsymbol{\delta}} \frac{1}{N} \sum_{j}^{N} \sum_{i}^{K} \delta_{ij} \left(\mathbf{c}_i^{t-1} - \mathbf{x}_j\right)^2$

Step 3 (centers): $\mathbf{c}^{t} = \arg\min_{\mathbf{c}} \frac{1}{N} \sum_{j}^{N} \sum_{i}^{K} \delta_{ij}^{t} \left(\mathbf{c}_i - \mathbf{x}_j\right)^2$

Slide: Derek Hoiem
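A minimal NumPy sketch of the algorithm above (illustrative, not the course's reference code): random initialization, hard assignments to the nearest center, mean updates, and a stop when no assignments change.

import numpy as np

def kmeans(X, K, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), K, replace=False)].copy()   # 1. random init
    assign = None
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # N x K squared distances
        new_assign = d.argmin(axis=1)                       # 2. assign to nearest center
        if assign is not None and np.array_equal(new_assign, assign):
            break                                           # 4. no re-assignments: converged
        assign = new_assign
        for k in range(K):                                  # 3. recompute cluster means
            if np.any(assign == k):
                centers[k] = X[assign == k].mean(axis=0)
    return centers, assign

# Toy usage: three well-separated Gaussian blobs.
X = np.vstack([np.random.randn(100, 2) + m for m in ([0, 0], [5, 5], [0, 5])])
centers, labels = kmeans(X, K=3)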

Page 32: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

K-means converges to a local minimum

Page 33: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

K-means: design choices

• Initialization
  – Randomly select K points as initial cluster centers
  – Or greedily choose K points to minimize residual

• Distance measures
  – Traditionally Euclidean; could be others

• Optimization
  – Will converge to a local minimum
  – May want to perform multiple restarts

Page 34: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

K-means clustering using intensity or color

(Figure panels: input image, clusters on intensity, clusters on color.)

Page 35: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

How to evaluate clusters?

• Generative
  – How well are points reconstructed from the clusters?

• Discriminative
  – How well do the clusters correspond to labels?
  – Purity
  – Note: unsupervised clustering does not aim to be discriminative

Slide: Derek Hoiem

Page 36: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

How to choose the number of clusters?

• Validation set
  – Try different numbers of clusters and look at performance

• When building dictionaries (discussed later), more clusters typically work better

Slide: Derek Hoiem

Page 37: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

K-means pros and cons

• Pros
  – Finds cluster centers that minimize conditional variance (good representation of data)
  – Simple and fast*
  – Easy to implement

• Cons
  – Need to choose K
  – Sensitive to outliers
  – Prone to local minima
  – All clusters have the same parameters (e.g., distance measure is non-adaptive)
  – *Can be slow: each iteration is O(KNd) for N d-dimensional points

• Usage
  – Rarely used for pixel segmentation

Page 38: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Building Visual Dictionaries

1. Sample patches from a database
   – E.g., 128-dimensional SIFT vectors

2. Cluster the patches
   – Cluster centers are the dictionary

3. Assign a codeword (number) to each new patch, according to the nearest cluster
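A sketch of these three steps using SciPy's vector-quantization routines. The random 128-D descriptors stand in for SIFT features extracted elsewhere, and the dictionary size K = 200 is an arbitrary choice.

import numpy as np
from scipy.cluster.vq import kmeans, vq

train_descriptors = np.random.rand(5000, 128).astype(np.float32)  # 1. sampled patch descriptors (stand-in data)
image_descriptors = np.random.rand(300, 128).astype(np.float32)   # descriptors from one new image

K = 200
codebook, _ = kmeans(train_descriptors, K)         # 2. cluster centers = the visual dictionary
codewords, _ = vq(image_descriptors, codebook)     # 3. nearest-center codeword for each patch
bow_histogram = np.bincount(codewords, minlength=K).astype(float)
bow_histogram /= bow_histogram.sum()               # normalized bag-of-words representation of the image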

Page 39: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Examples of learned codewords

Sivic et al., ICCV 2005
http://www.robots.ox.ac.uk/~vgg/publications/papers/sivic05b.pdf

Most likely codewords for 4 learned "topics" (EM with a multinomial, problem 3, is used to learn the topics)

Page 40: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Agglomerative clustering

Page 41: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Agglomerative clustering

Page 42: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Agglomerative clustering

Page 43: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Agglomerative clustering

Page 44: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Agglomerative clustering

Page 45: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Agglomerative clustering

How to define cluster similarity?
- Average distance between points, maximum distance, minimum distance
- Distance between means or medoids

How many clusters?
- Clustering creates a dendrogram (a tree)
- Threshold based on max number of clusters, or based on distance between merges

(Figure: dendrogram with merge distance on the vertical axis.)
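A short SciPy sketch of the choices above: build the dendrogram with a chosen linkage (average, complete, or single correspond to the similarity definitions listed), then cut it either at a maximum number of clusters or at a merge-distance threshold. Data and thresholds are toy values.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.random.rand(100, 2)                    # toy 2D points
Z = linkage(X, method='average')              # sequence of pairwise merges (the dendrogram)

labels_by_k = fcluster(Z, t=4, criterion='maxclust')     # cut: at most 4 clusters
labels_by_d = fcluster(Z, t=0.3, criterion='distance')   # cut: stop merging above distance 0.3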

Page 46: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Conclusions: Agglomerative Clustering

Good
• Simple to implement, widespread application
• Clusters have adaptive shapes
• Provides a hierarchy of clusters

Bad
• May have imbalanced clusters
• Still have to choose the number of clusters or a threshold
• Need to use an "ultrametric" to get a meaningful hierarchy

Page 47: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Mean shift segmentation

• Versatile technique for clustering-based segmentation

D. Comaniciu and P. Meer, Mean Shift: A Robust Approach toward Feature Space Analysis, PAMI 2002.

Page 48: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Mean shift algorithm

• Try to find the modes of this non-parametric density

Page 49: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Kernel density estimation

Kernel density estimation function

Gaussian kernel

Page 50: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Mean shift

Region of interest

Center of mass

Mean shift vector

Slide by Y. Ukrainitz & B. Sarel

Pages 51-56: the same mean shift figure repeated over successive iterations (region of interest, center of mass, mean shift vector). Slides by Y. Ukrainitz & B. Sarel.

Page 57: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Computing the Mean Shift

Simple mean shift procedure:
• Compute the mean shift vector:

$\mathbf{m}(\mathbf{x}) = \frac{\sum_{i=1}^{n} \mathbf{x}_i \, g\left(\left\|\frac{\mathbf{x}-\mathbf{x}_i}{h}\right\|^2\right)}{\sum_{i=1}^{n} g\left(\left\|\frac{\mathbf{x}-\mathbf{x}_i}{h}\right\|^2\right)} - \mathbf{x}, \qquad g(s) = -k'(s)$

• Translate the kernel window by m(x)

Slide by Y. Ukrainitz & B. Sarel

Page 58: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

• Attraction basin: the region for which all trajectories lead to the same mode

• Cluster: all data points in the attraction basin of a mode

Slide by Y. Ukrainitz & B. Sarel

Attraction basin

Page 59: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Attraction basin

Page 60: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Mean shift clustering

• The mean shift algorithm seeks modes of the given set of points

1. Choose a kernel and bandwidth
2. For each point:
   a) Center a window on that point
   b) Compute the mean of the data in the search window
   c) Center the search window at the new mean location
   d) Repeat (b, c) until convergence
3. Assign points that lead to nearby modes to the same cluster
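A minimal NumPy sketch of this procedure, assuming a flat (uniform) kernel of bandwidth h and merging modes that land within h/2 of each other; both choices are illustrative simplifications.

import numpy as np

def mean_shift(X, h=1.0, iters=50, tol=1e-3):
    modes = X.copy().astype(float)
    for _ in range(iters):
        moved = 0.0
        for i, x in enumerate(modes):
            in_window = X[np.linalg.norm(X - x, axis=1) < h]   # points inside the search window
            new_x = in_window.mean(axis=0)                     # center of mass = shifted window
            moved = max(moved, np.linalg.norm(new_x - x))
            modes[i] = new_x
        if moved < tol:
            break                                              # all windows have converged
    # Merge modes that ended up close together into clusters.
    labels, centers = np.full(len(X), -1), []
    for i, m in enumerate(modes):
        for c, ctr in enumerate(centers):
            if np.linalg.norm(m - ctr) < h / 2:
                labels[i] = c
                break
        else:
            centers.append(m)
            labels[i] = len(centers) - 1
    return np.array(centers), labels

# Toy usage: two blobs should give two modes.
X = np.vstack([np.random.randn(80, 2), np.random.randn(80, 2) + 6])
centers, labels = mean_shift(X, h=2.0)
print(len(centers), "modes found")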

Page 61: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Segmentation by Mean Shift

• Compute features for each pixel (color, gradients, texture, etc.)
• Set kernel sizes for the features (Kf) and position (Ks)
• Initialize windows at individual pixel locations
• Perform mean shift for each window until convergence
• Merge windows that are within width Kf and Ks of each other

Page 62: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Mean shift segmentation results

Comaniciu and Meer 2002

Page 63: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Comaniciu and Meer 2002

Page 64: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Mean shift pros and cons

• Pros
  – Good general-purpose segmentation
  – Flexible in number and shape of regions
  – Robust to outliers

• Cons
  – Have to choose the kernel size in advance
  – Not suitable for high-dimensional features

• When to use it
  – Oversegmentation
  – Multiple segmentations
  – Tracking, clustering, filtering applications

Page 65: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Spectral clustering

Group points based on links in a graph

(Figure: a graph of points partitioned into two groups, A and B.)

Page 66: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Cuts in a graph

(Figure: a graph cut separating groups A and B.)

Normalized Cut
• A cut penalizes large segments
• Fix by normalizing for the size of the segments
• volume(A) = sum of costs of all edges that touch A

Source: Seitz

Page 67: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Normalized cuts for segmentation

Page 68: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Which algorithm to use?

• Quantization/Summarization: K-means
  – Aims to preserve the variance of the original data
  – Can easily assign a new point to a cluster

Quantization for computing histograms

Summary of 20,000 photos of Rome using “greedy k-means”

http://grail.cs.washington.edu/projects/canonview/

Page 69: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Which algorithm to use?

• Image segmentation: agglomerative clustering
  – More flexible with distance measures (e.g., can be based on boundary prediction)
  – Adapts better to specific data
  – Hierarchy can be useful

http://www.cs.berkeley.edu/~arbelaez/UCM.html

Page 70: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Clustering

Key algorithm: K-means

Page 71: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department
Page 72: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

The machine learning framework

• Apply a prediction function to a feature representation of the image to get the desired output:

f( ) = “apple”

f( ) = “tomato”

f( ) = "cow"

Slide credit: L. Lazebnik

Page 73: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

The machine learning framework

y = f(x)

• Training: given a training set of labeled examples {(x1,y1), …, (xN,yN)}, estimate the prediction function f by minimizing the prediction error on the training set

• Testing: apply f to a never before seen test example x and output the predicted value y = f(x)

(In y = f(x): y is the output, f is the prediction function, and x is the image feature.)

Slide credit: L. Lazebnik
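To make the training/testing split concrete, here is a tiny NumPy sketch of the y = f(x) framework with a nearest-class-mean classifier on synthetic features; the data and the choice of classifier are stand-ins, not the course's method.

import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])  # image features (synthetic)
y = np.array([0] * 50 + [1] * 50)                                      # labels
idx = rng.permutation(100)
train, test = idx[:80], idx[80:]

# Training: estimate f (here, one mean per class) from labeled examples {(x_i, y_i)}.
means = np.stack([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])

# Testing: apply f to never-before-seen examples, y_hat = f(x).
d = np.linalg.norm(X[test][:, None, :] - means[None, :, :], axis=2)
y_hat = d.argmin(axis=1)
print("test accuracy:", (y_hat == y[test]).mean())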

Page 74: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Learning a classifier

Given some set of features with corresponding labels, learn a function to predict the labels from the features

(Figure: 2D feature space (x1, x2) with two classes of points, x and o, separated by a learned decision boundary.)

Page 75: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Steps

(Diagram: training images plus training labels are turned into image features and used to train a learned model; at test time, a test image is turned into image features and passed through the learned model to produce a prediction.)

Slide credit: D. Hoiem and L. Lazebnik

Page 76: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Features

• Raw pixels

• Histograms

• GIST descriptors

• …

Slide credit: L. Lazebnik

Page 77: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

One way to think about it…

• Training labels dictate whether two examples are, in some sense, the same or different

• Features and distance measures define visual similarity

• Classifiers try to learn weights or parameters for features and distance measures so that visual similarity predicts label similarity

Page 78: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Many classifiers to choose from

• SVM
• Neural networks
• Naïve Bayes
• Bayesian network
• Logistic regression
• Randomized Forests
• Boosted Decision Trees
• K-nearest neighbor
• RBMs
• Etc.

Which is the best one?

Page 79: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Claim:

The decision to use machine learning is more important than the choice of a particular learning method.

Page 80: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Classifiers: Nearest neighbor

f(x) = label of the training example nearest to x

• All we need is a distance function for our inputs
• No training required!

(Figure: a test example surrounded by training examples from class 1 and class 2.)

Slide credit: L. Lazebnik

Page 81: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Classifiers: Linear

• Find a linear function to separate the classes:

f(x) = sgn(w · x + b)

Slide credit: L. Lazebnik

Page 82: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Many classifiers to choose from

• SVM
• Neural networks
• Naïve Bayes
• Bayesian network
• Logistic regression
• Randomized Forests
• Boosted Decision Trees
• K-nearest neighbor
• RBMs
• Etc.

Which is the best one?

Slide credit: D. Hoiem

Page 83: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

• Images in the training set must be annotated with the “correct answer” that the model is expected to produce

Contains a motorbike

Recognition task and supervision

Slide credit: L. Lazebnik

Page 84: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Unsupervised “Weakly” supervised Fully supervised

Definition depends on task

Slide credit: L. Lazebnik

Page 85: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Generalization

• How well does a learned model generalize from the data it was trained on to a new test set?

Training set (labels known) Test set (labels unknown)

Slide credit: L. Lazebnik

Page 86: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Generalization

• Components of generalization error
  – Bias: how much the average model over all training sets differs from the true model
    • Error due to inaccurate assumptions/simplifications made by the model
  – Variance: how much models estimated from different training sets differ from each other

• Underfitting: model is too "simple" to represent all the relevant class characteristics
  – High bias (few degrees of freedom) and low variance
  – High training error and high test error

• Overfitting: model is too "complex" and fits irrelevant characteristics (noise) in the data
  – Low bias (many degrees of freedom) and high variance
  – Low training error and high test error

Slide credit: L. Lazebnik

Page 87: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Bias-Variance Trade-off

• Models with too few parameters are inaccurate because of a large bias (not enough flexibility).

• Models with too many parameters are inaccurate because of a large variance (too much sensitivity to the sample).

Slide credit: D. Hoiem

Page 88: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Bias-Variance Trade-off

E(MSE) = noise² + bias² + variance

(noise²: unavoidable error; bias²: error due to incorrect assumptions; variance: error due to variance of the training samples)

See the following for explanations of bias-variance (also Bishop's "Neural Networks" book):
• http://www.inf.ed.ac.uk/teaching/courses/mlsc/Notes/Lecture4/BiasVariance.pdf

Slide credit: D. Hoiem
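A small Monte Carlo sketch of this decomposition, using an intentionally biased constant-prediction model on a toy regression problem; the setup (f(x) = x, the noise level, the test point) is made up purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
noise_sd, x_test, f_true = 0.5, 0.8, lambda x: x
preds = []
for _ in range(5000):                          # many independent training sets
    x = rng.uniform(0, 1, 20)
    y = f_true(x) + rng.normal(0, noise_sd, 20)
    preds.append(y.mean())                     # constant model: predict the sample mean
preds = np.array(preds)

bias2 = (preds.mean() - f_true(x_test)) ** 2
variance = preds.var()
print("noise^2 + bias^2 + variance =", noise_sd ** 2 + bias2 + variance)

# Empirical check: average squared error against fresh noisy test labels.
y_test = f_true(x_test) + rng.normal(0, noise_sd, preds.size)
print("empirical E(MSE):", np.mean((y_test - preds) ** 2))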

Page 89: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Bias-variance tradeoff

(Plot: error vs. model complexity. The underfitting regime at low complexity has high bias and low variance; the overfitting regime at high complexity has low bias and high variance. Training error keeps decreasing with complexity, while test error eventually rises.)

Slide credit: D. Hoiem

Page 90: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Bias-variance tradeoff

(Plot: test error vs. model complexity, comparing many training examples with few training examples; the complexity axis again runs from high bias / low variance to low bias / high variance.)

Slide credit: D. Hoiem

Page 91: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Effect of Training Size

(Plot: for a fixed prediction model, error vs. number of training examples. The testing and training error curves approach each other as the training set grows; the gap between them is the generalization error.)

Slide credit: D. Hoiem

Page 92: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Remember…

• No classifier is inherently better than any other: you need to make assumptions to generalize

• Three kinds of error
  – Inherent: unavoidable
  – Bias: due to over-simplifications
  – Variance: due to inability to perfectly estimate parameters from limited data

Slide credit: D. Hoiem

Page 93: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

How to reduce variance?

• Choose a simpler classifier

• Regularize the parameters

• Get more training data

Slide credit: D. Hoiem

Page 94: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Very brief tour of some classifiers

• K-nearest neighbor
• SVM
• Boosted Decision Trees
• Neural networks
• Naïve Bayes
• Bayesian network
• Logistic regression
• Randomized Forests
• RBMs
• Etc.

Page 95: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Generative vs. Discriminative Classifiers

Generative Models
• Represent both the data and the labels
• Often make use of conditional independence and priors
• Examples
  – Naïve Bayes classifier
  – Bayesian network
• Models of data may apply to future prediction problems

Discriminative Models
• Learn to directly predict the labels from the data
• Often assume a simple boundary (e.g., linear)
• Examples
  – Logistic regression
  – SVM
  – Boosted decision trees
• Often easier to predict a label from the data than to model the data

Slide credit: D. Hoiem

Page 96: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Classification

• Assign an input vector to one of two or more classes
• Any decision rule divides the input space into decision regions separated by decision boundaries

Slide credit: L. Lazebnik

Page 97: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Nearest Neighbor Classifier

• Assign label of nearest training data point to each test data point

Voronoi partitioning of feature space for two-category 2D and 3D data

from Duda et al.

Source: D. Lowe

Page 98: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

K-nearest neighbor

(Figure: two query points, marked +, among x and o training points in the (x1, x2) feature space.)

Page 99: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

1-nearest neighbor

(Figure: the same two query points, each assigned the label of its single nearest neighbor.)

Page 100: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

3-nearest neighbor

(Figure: the same two query points, each classified by a vote among its 3 nearest neighbors.)

Page 101: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

5-nearest neighbor

(Figure: the same two query points, each classified by a vote among its 5 nearest neighbors.)

Page 102: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Using K-NN

• Simple, a good one to try first

• With infinite examples, 1-NN provably has error that is at most twice Bayes optimal error
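A minimal NumPy k-NN sketch along these lines; it assumes non-negative integer class labels and Euclidean distance, both of which could be swapped out.

import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)       # distance to every training point
        nearest = y_train[np.argsort(d)[:k]]          # labels of the k nearest neighbors
        preds.append(np.bincount(nearest).argmax())   # majority vote
    return np.array(preds)

# Toy usage: labels determined by the sign of the first feature.
X_train = np.random.randn(100, 2)
y_train = (X_train[:, 0] > 0).astype(int)
X_test = np.random.randn(5, 2)
print(knn_predict(X_train, y_train, X_test, k=5))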

Page 103: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Classifiers: Linear SVM

(Figure: x and o points in the (x1, x2) feature space.)

• Find a linear function to separate the classes:

f(x) = sgn(w · x + b)

Page 104: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Classifiers: Linear SVM

(Figure: x and o points in the (x1, x2) feature space.)

• Find a linear function to separate the classes:

f(x) = sgn(w · x + b)

Page 105: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Classifiers: Linear SVM

(Figure: x and o points in the (x1, x2) feature space.)

• Find a linear function to separate the classes:

f(x) = sgn(w · x + b)

Page 106: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

• Datasets that are linearly separable work out great:

• But what if the dataset is just too hard?

• We can map it to a higher-dimensional space:

(Figure: 1D data points along the x axis; mapping each point x to (x, x²) makes the hard case linearly separable.)

Nonlinear SVMs

Slide credit: Andrew Moore

Page 107: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Nonlinear SVMs

• General idea: the original input space can always be mapped to some higher-dimensional feature space where the training set is separable:

Φ: x → φ(x)

Slide credit: Andrew Moore

Page 108: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Nonlinear SVMs

• The kernel trick: instead of explicitly computing the lifting transformation φ(x), define a kernel function K such that

K(xi, xj) = φ(xi) · φ(xj)

(to be valid, the kernel function must satisfy Mercer's condition)

• This gives a nonlinear decision boundary in the original feature space:

$\sum_i \alpha_i y_i \, \varphi(\mathbf{x}_i) \cdot \varphi(\mathbf{x}) + b = \sum_i \alpha_i y_i \, K(\mathbf{x}_i, \mathbf{x}) + b$

C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998

Page 109: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Nonlinear kernel: Example

• Consider the mapping $\varphi(x) = (x, x^2)$

$\varphi(x) \cdot \varphi(y) = (x, x^2) \cdot (y, y^2) = xy + x^2 y^2$

$K(x, y) = xy + x^2 y^2$
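A quick numeric check of this example: with phi(x) = (x, x²), the kernel value equals the dot product of the mapped points.

import numpy as np

def phi(x):
    return np.array([x, x ** 2])

def K(x, y):
    return x * y + (x ** 2) * (y ** 2)

x, y = 1.7, -0.4
print(phi(x) @ phi(y), K(x, y))   # both print the same value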

Page 110: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Kernels for bags of features

• Histogram intersection kernel:

$I(h_1, h_2) = \sum_{i=1}^{N} \min\left(h_1(i), h_2(i)\right)$

• Generalized Gaussian kernel:

$K(h_1, h_2) = \exp\left(-\frac{1}{A} D(h_1, h_2)^2\right)$

• D can be (inverse) L1 distance, Euclidean distance, χ² distance, etc.

J. Zhang, M. Marszalek, S. Lazebnik, and C. Schmid, Local Features and Kernels for Classification of Texture and Object Categories: A Comprehensive Study, IJCV 2007
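Sketches of both kernels in NumPy; the distance D defaults to Euclidean here, and the χ² or L1 alternatives from the slide could be passed in instead.

import numpy as np

def histogram_intersection(h1, h2):
    # I(h1, h2) = sum_i min(h1(i), h2(i))
    return np.minimum(h1, h2).sum()

def generalized_gaussian_kernel(h1, h2, A=1.0, dist=lambda a, b: np.linalg.norm(a - b)):
    # K(h1, h2) = exp(-(1/A) * D(h1, h2)^2), with D pluggable
    return np.exp(-dist(h1, h2) ** 2 / A)

h1 = np.random.dirichlet(np.ones(50))   # toy normalized histograms
h2 = np.random.dirichlet(np.ones(50))
print(histogram_intersection(h1, h2), generalized_gaussian_kernel(h1, h2))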

Page 111: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Summary: SVMs for image classification

1. Pick an image representation (in our case, bag of features)

2. Pick a kernel function for that representation

3. Compute the matrix of kernel values between every pair of training examples

4. Feed the kernel matrix into your favorite SVM solver to obtain support vectors and weights

5. At test time: compute kernel values for your test example and each support vector, and combine them with the learned weights to get the value of the decision function

Slide credit: L. Lazebnik
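A compact sketch of the five steps using scikit-learn's precomputed-kernel SVM. The random Dirichlet histograms stand in for real bag-of-features vectors, and the histogram intersection kernel is just one possible choice.

import numpy as np
from sklearn.svm import SVC

def hist_intersection_matrix(A, B):
    # kernel value between every row of A and every row of B
    return np.array([[np.minimum(a, b).sum() for b in B] for a in A])

X_train = np.random.dirichlet(np.ones(200), size=60)   # 1-2. bag-of-features histograms + kernel choice
y_train = np.repeat([0, 1], 30)
X_test = np.random.dirichlet(np.ones(200), size=10)

K_train = hist_intersection_matrix(X_train, X_train)   # 3. kernel matrix between training examples
clf = SVC(kernel='precomputed').fit(K_train, y_train)  # 4. feed it to an SVM solver

K_test = hist_intersection_matrix(X_test, X_train)     # 5. kernel values vs. the training examples
print(clf.predict(K_test))                             # decision from support vectors + learned weights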

Page 112: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

What about multi-class SVMs?

• Unfortunately, there is no "definitive" multi-class SVM formulation
• In practice, we have to obtain a multi-class SVM by combining multiple two-class SVMs

• One vs. others
  – Training: learn an SVM for each class vs. the others
  – Testing: apply each SVM to the test example and assign it the class of the SVM that returns the highest decision value

• One vs. one
  – Training: learn an SVM for each pair of classes
  – Testing: each learned SVM "votes" for a class to assign to the test example

Slide credit: L. Lazebnik
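A sketch of the one-vs.-others scheme with scikit-learn's LinearSVC; the data are random and the helper names are illustrative (scikit-learn also provides this combination built in, e.g. via its multiclass utilities).

import numpy as np
from sklearn.svm import LinearSVC

def train_one_vs_rest(X, y, classes):
    # one binary SVM per class: that class vs. everything else
    return {c: LinearSVC().fit(X, (y == c).astype(int)) for c in classes}

def predict_one_vs_rest(models, X):
    # pick the class whose SVM returns the highest decision value
    scores = np.column_stack([models[c].decision_function(X) for c in sorted(models)])
    return np.array(sorted(models))[scores.argmax(axis=1)]

X = np.random.randn(150, 5)
y = np.random.randint(0, 3, 150)
models = train_one_vs_rest(X, y, classes=[0, 1, 2])
print(predict_one_vs_rest(models, X[:5]))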

Page 113: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

SVMs: Pros and cons

• Pros
  – Many publicly available SVM packages: http://www.kernel-machines.org/software
  – Kernel-based framework is very powerful and flexible
  – SVMs work very well in practice, even with very small training sample sizes

• Cons
  – No "direct" multi-class SVM; must combine two-class SVMs
  – Computation, memory
    – During training, must compute the matrix of kernel values for every pair of examples
    – Learning can take a very long time for large-scale problems

Page 114: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

What to remember about classifiers

• No free lunch: machine learning algorithms are tools, not dogmas

• Try simple classifiers first

• Better to have smart features and simple classifiers than simple features and smart classifiers

• Use increasingly powerful classifiers with more training data (bias-variance tradeoff)

Slide credit: D. Hoiem

Page 115: Machine Learning Crash Course Computer Vision James Hays Slides: Isabelle Guyon, Erik Sudderth, Mark Johnson, Derek Hoiem Photo: CMU Machine Learning Department

Making decisions about data

• 3 important design decisions:
  1) What data do I use?
  2) How do I represent my data (what feature)?
  3) What classifier / regressor / machine learning tool do I use?

• These are in decreasing order of importance
• Deep learning addresses 2 and 3 simultaneously (and blurs the boundary between them).
• You can take the representation from deep learning and use it with any classifier.