TRANSCRIPT
Machine Learning for Signal Processing
Neural Networks (continued)
Instructor: Bhiksha Raj
Slides by Najim Dehak
1 Dec 2016
So what are neural networks??
• What are these boxes?
– N.Net: voice signal → transcription
– N.Net: image → text caption
– N.Net: game state → next move
So what are neural networks??
• It began with this..
• Humans are very good at the tasks we just saw
• Can we model the human brain/ human intelligence?
– An old question, dating back to Plato and Aristotle
MLP - Recap
• MLPs are Boolean machines
– They represent Boolean functions over linear boundaries
– They can represent arbitrary boundaries
• Perceptrons are correlation filters
– They detect patterns in the input
• MLPs are Boolean formulae over patterns detected by perceptrons
– Higher-level perceptrons may also be viewed as feature detectors
• MLPs are universal approximators
– Can model any function to arbitrary precision
• Extra: MLPs in classification
– The network will fire if the combination of the detected basic features matches an “acceptable” pattern for a desired class of signal
– E.g. an appropriate combination of (nose, eyes, eyebrows, cheek, chin) → face
MLP - Recap
• MLPs are Boolean machines
– They represent arbitrary Boolean functions over arbitrary linear boundaries
• Perceptrons are pattern detectors
– MLPs are Boolean formulae over these patterns
• MLPs are universal approximators
– Can model any function to arbitrary precision
• MLPs are very hard to train
– Training data are generally many orders of magnitude too few
– Even with optimal architectures, we could get rubbish
– Depth helps greatly!
– Can learn functions that regular classifiers cannot
What is a deep network?
Deep Structures
• In any directed network of computational elements with input source nodes and output sink nodes, “depth” is the length of the longest path from a source to a sink
• Left: Depth = 2. Right: Depth = 3
Deep Structures
• Layered deep structure
• “Deep” = depth > 2
MLP as a continuous-valued regression
• MLPs can actually compose arbitrary functions to arbitrary precision
– Not just classification/Boolean functions
• 1D example
– Left: A net with a pair of units can create a pulse of any width at any location
– Right: A network of N such pairs approximates the function with N scaled pulses
[Figure: a pair of threshold units with thresholds T1 and T2, combined with weights +1 and −1, produces a pulse over (T1, T2); summing N scaled pulses approximates f(x)]
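The construction above can be checked numerically. Below is a minimal sketch (hard-threshold units, and a target function chosen purely for illustration) that builds each pulse from a pair of threshold units at T1 and T2 combined with weights +1 and −1, then sums N scaled pulses to approximate f(x).

```python
import numpy as np

def pulse(x, t1, t2):
    """A pair of threshold units combined with weights +1 and -1 forms a pulse on (t1, t2)."""
    step = lambda z: (z >= 0).astype(float)      # hard-threshold (perceptron) activation
    return step(x - t1) - step(x - t2)

def approximate(f, x, n_pulses=50):
    """Approximate f on [x.min(), x.max()] with n_pulses scaled, non-overlapping pulses."""
    edges = np.linspace(x.min(), x.max(), n_pulses + 1)
    out = np.zeros_like(x)
    for t1, t2 in zip(edges[:-1], edges[1:]):
        height = f((t1 + t2) / 2.0)              # scale each pulse by f at the pulse centre
        out += height * pulse(x, t1, t2)
    return out

x = np.linspace(0, 2 * np.pi, 2000)
approx = approximate(np.sin, x, n_pulses=100)    # np.sin is just an illustrative target
print("max abs error:", np.abs(approx - np.sin(x)).max())  # shrinks as n_pulses grows
```

Increasing the number of pulses drives the approximation error to zero, which is the sense in which the network models the function to arbitrary precision.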
MLP features
• The lowest layers of a network detect significant features in the signal
• The signal could be reconstructed using these features
– Will retain all the significant components of the signal
DIGIT OR NOT?
Making it explicit: an autoencoder
• A neural network can be trained to predict the input itself
• This is an autoencoder
• An encoder learns to detect all the most significant patterns in the signals
• A decoder recomposes the signal from the patterns
[Figure: X → Y = WX (encoder) → X̂ = WᵀY (decoder)]
Deep Autoencoder
[Figure: a multi-layer encoder followed by a multi-layer decoder]
What does the AE learn
• In the absence of an intermediate non-linearity, this is just PCA
Y = W X
X̂ = Wᵀ Y
E = ‖X − WᵀWX‖²
Find W to minimize Avg[E]
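This claim is easy to check numerically: with no non-linearity, the W that minimizes the average reconstruction error spans the same subspace as the top principal components. A minimal sketch (synthetic data and dimensions chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic zero-mean data: 1000 samples in 10 dimensions, mostly living in a 3-D subspace
n, d, k = 1000, 10, 3
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, d)) + 0.01 * rng.normal(size=(n, d))
X -= X.mean(axis=0)

def avg_error(W, X):
    """Avg[E] with E = ||x - W^T W x||^2, for rows x of X and a k x d encoder W."""
    R = X - (X @ W.T) @ W            # encode with W, decode with W^T
    return np.mean(np.sum(R ** 2, axis=1))

# PCA solution: rows of W = top-k principal directions (via SVD of the data matrix)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
W_pca = Vt[:k]

W_rand = rng.normal(size=(k, d))     # an arbitrary W for comparison
print("Avg[E], PCA W   :", avg_error(W_pca, X))   # close to the noise floor
print("Avg[E], random W:", avg_error(W_rand, X))  # much larger
```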
The AE
• With non-linearity
– “Non-linear” PCA
– Deeper networks can capture more complicated manifolds
The Decoder
• The decoder represents a source-specific generative dictionary
• Exciting it will produce typical signals from the source!
Cut the AE
• Train the autoencoder, then discard the encoder and keep only the decoder
• Example: exciting a sax decoder dictionary produces typical sax signals
• Example: exciting a clarinet decoder dictionary produces typical clarinet signals
NN for speech enhancement
Story so far
• MLPs are universal classifiers
– They can model any decision boundary
• Neural networks are universal approximators
– They can model any regression
• The decoder of an MLP autoencoder represents a non-linear constructive dictionary!
The need for shift invariance
• In many problems the location of a pattern is not important
– Only the presence of the pattern
• Conventional MLPs are sensitive to the location of the pattern
– Moving it by one component results in an entirely different input that the MLP won't recognize
• Requirement: Network must be shift invariant
History
Yann LeCun
Hubel and Wiesel: 1959 (biological model); Fukushima: 1980 (computational model); Atlas: 1988; LeCun: 1989 (backprop in convnets)
Kunihiko Fukushima
Convolutional Neural Networks
• A special kind of multi-layer neural network
• Implicitly extracts relevant features
• A feed-forward network that can extract topological properties from an image
• CNNs are also trained with a version of the back-propagation algorithm
Connectivity & weight sharing
• A convolution layer has a much smaller number of parameters, thanks to local connections and weight sharing
• Fully connected and locally connected layers use all-different weights; a convolutional layer uses shared weights
Fully Connected Layer (Ranzato)
• Example: 200x200 image, 40K hidden units -> ~2B parameters!
• Spatial correlation is local; full connectivity is a waste of resources, and we do not have enough training samples anyway
Locally Connected Layer (Ranzato)
• Example: 200x200 image, 40K hidden units, filter size 10x10 -> 4M parameters
• Note: this parameterization is good when the input image is registered (e.g., face recognition)
• STATIONARITY? Statistics are similar at different locations
Convolutional Layer (Ranzato)
• Share the same parameters across different locations (assuming the input is stationary): convolutions with learned kernels
Convolutional Layer (Ranzato)
• Learn multiple filters
• E.g.: 200x200 image, 100 filters, filter size 10x10 -> 10K parameters
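The parameter counts quoted on these slides follow from simple arithmetic (biases ignored, and the slides' figures are rounded). A small sketch reproducing them:

```python
# Parameter-count comparison for a 200x200 input image (per the slides above)
image_pixels = 200 * 200          # 40,000 input units
hidden_units = 40_000             # 40K hidden units
filter_size = 10 * 10             # 10x10 local receptive field
num_filters = 100

fully_connected = image_pixels * hidden_units     # every hidden unit sees every pixel
locally_connected = hidden_units * filter_size    # local, but weights not shared
convolutional = num_filters * filter_size         # local AND shared across locations

print(f"fully connected  : {fully_connected:,}")  # 1,600,000,000  (~2B on the slide)
print(f"locally connected: {locally_connected:,}")# 4,000,000      (4M)
print(f"convolutional    : {convolutional:,}")    # 10,000         (10K)
```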
[Figure: before, a fully connected hidden layer between input and output layers; now, a convolutional hidden layer]
Convolution Layer
• Input: a 32x32x3 image (width 32, height 32, depth 3)
• Filter: 5x5x3
• Convolve the filter with the image, i.e. “slide over the image spatially, computing dot products”
• Filters always extend the full depth of the input volume
• Each filter position yields 1 number: the result of taking a dot product between the filter and a small 5x5x3 chunk of the image (i.e. a 5*5*3 = 75-dimensional dot product, plus a bias)
• Convolving (sliding) the filter over all spatial locations produces a 28x28x1 activation map
• A second (green) filter produces a second activation map
• For example, if we had 6 different 5x5 filters, we’d get 6 separate activation maps
• We stack these up to get a “new image” of size 28x28x6!
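A minimal numpy sketch of the operation just described (a bank of 5x5x3 filters slid over a 32x32x3 image with stride 1 and no padding); purely illustrative, since real implementations use optimized convolution routines:

```python
import numpy as np

def conv_layer(image, filters, biases):
    """Naive convolution: image (H, W, D), filters (K, f, f, D) -> activation maps (H-f+1, W-f+1, K)."""
    H, W, D = image.shape
    K, f, _, _ = filters.shape
    out_h, out_w = H - f + 1, W - f + 1
    maps = np.zeros((out_h, out_w, K))
    for k in range(K):                        # one activation map per filter
        for i in range(out_h):
            for j in range(out_w):
                patch = image[i:i + f, j:j + f, :]                       # f x f x D chunk
                maps[i, j, k] = np.sum(patch * filters[k]) + biases[k]   # 75-dim dot product + bias
    return maps

rng = np.random.default_rng(0)
image = rng.normal(size=(32, 32, 3))          # 32x32x3 input
filters = rng.normal(size=(6, 5, 5, 3))       # six 5x5x3 filters
maps = conv_layer(image, filters, np.zeros(6))
print(maps.shape)                             # (28, 28, 6): the "new image" from the slide
```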
CNN
• Preview: a ConvNet is a sequence of convolutional layers, interspersed with activation functions
• E.g. 32x32x3 image -> CONV + ReLU (6 5x5x3 filters) -> 28x28x6 -> CONV + ReLU (10 5x5x6 filters) -> 24x24x10 -> ...
Pooling Layer (Ranzato)
• Let us assume the filter is an “eye” detector. Q: how can we make the detection robust to the exact location of the eye?
• By “pooling” (e.g., taking the max of) filter responses at different locations, we gain robustness to the exact spatial location of features
• Pooling makes the representations smaller and more manageable
• It operates over each activation map independently
Max Pooling example
Single depth slice (x across, y down):
1 1 2 4
5 6 7 8
3 2 1 0
1 2 3 4
Max pooling with 2x2 filters and stride 2:
6 8
3 4
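A small sketch reproducing the 2x2, stride-2 max pooling example above with numpy:

```python
import numpy as np

def max_pool(x, size=2, stride=2):
    """Max pooling over a single depth slice."""
    h, w = x.shape
    out = np.zeros(((h - size) // stride + 1, (w - size) // stride + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = x[i * stride:i * stride + size, j * stride:j * stride + size].max()
    return out

x = np.array([[1, 1, 2, 4],
              [5, 6, 7, 8],
              [3, 2, 1, 0],
              [1, 2, 3, 4]])
print(max_pool(x))   # [[6. 8.]
                     #  [3. 4.]]
```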
ConvNets: Typical Stage (courtesy of K. Kavukcuoglu, Ranzato)
• One stage = convolution followed by pooling
Digit classification
ImageNet
• 1.2 million high-resolution images from the ImageNet LSVRC-2010 contest
• 1000 different classes (softmax layer)
• NN configuration:
– 60 million parameters and 650,000 neurons
– 5 convolutional layers, some of which are followed by max-pooling layers
– 3 fully-connected layers
Krizhevsky, A., Sutskever, I. and Hinton, G. E. “ImageNet Classification with Deep Convolutional
Neural Networks” NIPS 2012: Neural Information Processing Systems, Lake Tahoe, Nevada
ImageNet
Figure 3 (from the paper): 96 convolutional kernels of size 11×11×3 learned by the first convolutional layer on the 224×224×3 input images. The top 48 kernels were learned on GPU 1 and the bottom 48 kernels on GPU 2. See Section 6.1 for details.
ImageNet
• Eight ILSVRC-2010 test images and the five labels considered most probable by the model. The correct label is written under each image, and the probability assigned to the correct label is also shown with a red bar (if it happens to be in the top 5).
• Five ILSVRC-2010 test images in the first column. The remaining columns show the six training images that produce feature vectors in the last hidden layer with the smallest Euclidean distance from the feature vector of the test image.
CNN for Automatic Speech Recognition
• Convolution over frequencies
• Convolution over time
CNN-Recap
• Neural network with specialized connectivity structure
• Feed-forward:
– Convolve input
– Non-linearity (rectified linear)
– Pooling (local max)
• Supervised training: train convolutional filters by back-propagating error
[Figure: input image -> convolution (learned) -> non-linearity -> pooling -> feature maps]
• Convolution over time
• Adding memory to the classical MLP network: the recurrent neural network
Recurrent Neural Networks (RNNs)
• Recurrent neural networks (RNNs) introduce cycles and a notion of time
• They are designed to process sequences of data x_1, …, x_n and can produce sequences of outputs y_1, …, y_m
[Figure: x_t -> h_t -> y_t, with h_{t−1} fed back through a one-step delay]
Elman Nets (1990) – Simple Recurrent Neural Networks
• Elman nets are feed-forward networks with partial recurrence
• Unlike feed-forward nets, Elman nets have a memory or sense of time
• Can also be viewed as a “Markovian” NN
(Vanilla) Recurrent Neural Network
• The state consists of a single “hidden” vector h
[Figure: x_t -> h_t -> y_t, with h_{t−1} fed back through a one-step delay]
Unrolling RNNs
• RNNs can be unrolled across multiple time steps
• This produces a DAG which supports backpropagation, but its size depends on the input sequence length
[Figure: the recurrent loop unrolled into a chain: (x_0, h_0, y_0), (x_1, h_1, y_1), (x_2, h_2, y_2)]
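A minimal sketch of unrolling a vanilla RNN forward in time. The tanh hidden non-linearity and the linear readout are assumptions made for illustration; the slides do not fix a particular choice.

```python
import numpy as np

def rnn_forward(xs, W_xh, W_hh, W_hy, h0):
    """Unroll a vanilla RNN: h_t = tanh(W_xh x_t + W_hh h_{t-1}),  y_t = W_hy h_t."""
    h = h0
    hs, ys = [], []
    for x in xs:                      # one step per input in the sequence
        h = np.tanh(W_xh @ x + W_hh @ h)
        hs.append(h)
        ys.append(W_hy @ h)
    return hs, ys

rng = np.random.default_rng(0)
d_in, d_h, d_out, T = 4, 8, 3, 5      # illustrative sizes
xs = [rng.normal(size=d_in) for _ in range(T)]
W_xh = rng.normal(size=(d_h, d_in)) * 0.1
W_hh = rng.normal(size=(d_h, d_h)) * 0.1
W_hy = rng.normal(size=(d_out, d_h)) * 0.1
hs, ys = rnn_forward(xs, W_xh, W_hh, W_hy, np.zeros(d_h))
print(len(ys), ys[0].shape)           # T outputs; the same weights are reused at every step
```

The unrolled graph is just a deep feed-forward network with shared weights, which is why ordinary backpropagation still applies to it.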
Learning time sequences
• Recurrent networks have one or more feedback loops
• There are many tasks that require learning a temporal sequence of events: speech, video, text, markets
• These problems can be broken into 3 distinct types of tasks:
1. Sequence recognition: produce a particular output pattern when a specific input sequence is seen. Application: speech recognition
2. Sequence reproduction: generate the rest of a sequence when the network sees only part of the sequence. Applications: time series prediction (stock market, sunspots, etc.)
3. Temporal association: produce a particular output sequence in response to a specific input sequence. Application: speech generation
RNN structure
• Often layers are stacked vertically (deep RNNs): higher levels compute higher-level feature abstractions
• The same parameters are shared within each level across time
[Figure: two stacked recurrent layers unrolled over time; the lower layer maps x_0, x_1, x_2 to hidden states h_00, h_01, h_02 and outputs y_00, y_01, y_02, which feed the upper layer]
Recurrent Neural Network
• Backprop still works: it is called Backpropagation Through Time (BPTT)
• Activations are computed forward in time through the unrolled network
• Gradients then flow backward through the same unrolled structure
The memory problem with RNNs
• An RNN models signal context
• If a very long context is needed, RNNs become unable to learn the context information (the gradients vanish over long time lags)
Standard RNNs to LSTM
[Figure: the standard recurrent cell is replaced by an LSTM cell]
LSTM illustrated: input and forming new memory
• The LSTM cell takes the following inputs (all vectors):
– the input x_t
– the past memory output h_{t−1}
– the past memory C_{t−1}
• The input gate admits new memory, the forget gate controls what is kept in the cell state
LSTM illustrated: output
• The output of the cell is formed by using the output gate
Overall picture:
LSTM Equations
• i = σ(x_t U^i + s_{t−1} W^i)
• f = σ(x_t U^f + s_{t−1} W^f)
• o = σ(x_t U^o + s_{t−1} W^o)
• g = tanh(x_t U^g + s_{t−1} W^g)
• c_t = c_{t−1} ∘ f + g ∘ i
• s_t = tanh(c_t) ∘ o
• y = softmax(V s_t)
• i: input gate, how much of the new information will be let through to the memory cell
• f: forget gate, decides what information should be thrown away from the memory cell
• o: output gate, how much of the information will be exposed to the next time step
• g: candidate update (“self-recurrent”), computed as in a standard RNN
• c_t: internal memory of the memory cell
• s_t: hidden state
• y: final output
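A direct numpy transcription of the equations above. The weight shapes and toy dimensions are assumptions for illustration; the slide's vector–matrix ordering (x_t U, s_{t−1} W) is kept.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, s_prev, c_prev, U, W, V):
    """One LSTM step, following the slide's equations.
    U[k]: input weights, W[k]: recurrent weights for gate k in {i, f, o, g}; V: output weights."""
    i = sigmoid(x_t @ U['i'] + s_prev @ W['i'])   # input gate
    f = sigmoid(x_t @ U['f'] + s_prev @ W['f'])   # forget gate
    o = sigmoid(x_t @ U['o'] + s_prev @ W['o'])   # output gate
    g = np.tanh(x_t @ U['g'] + s_prev @ W['g'])   # candidate ("self-recurrent") update
    c_t = c_prev * f + g * i                      # internal memory of the cell
    s_t = np.tanh(c_t) * o                        # hidden state
    logits = s_t @ V
    y = np.exp(logits - logits.max()); y /= y.sum()   # softmax output
    return s_t, c_t, y

rng = np.random.default_rng(0)
d_x, d_h, d_y = 5, 8, 3                           # illustrative sizes
U = {k: rng.normal(size=(d_x, d_h)) * 0.1 for k in 'ifog'}
W = {k: rng.normal(size=(d_h, d_h)) * 0.1 for k in 'ifog'}
V = rng.normal(size=(d_h, d_y)) * 0.1
s, c = np.zeros(d_h), np.zeros(d_h)
s, c, y = lstm_step(rng.normal(size=d_x), s, c, U, W, V)
print(y.sum())                                    # ~1.0: a distribution over outputs
```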
LSTM Memory Cell
LSTM output synchronization
(NLP) Applications of RNNs
• Section overview
– Language Model
– Sentiment analysis / text classification
– Machine translation and conversation modeling
– Sentence skip-thought vectors
RNN for sentiment analysis / text classification
• A quick example, to see the idea.
• Given text collections and their labels, predict labels for unseen texts
Translating Videos to Natural Language Using Deep Recurrent Neural Networks
Subhashini Venugopalan, Huijun Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, Kate Saenko. North American Chapter of the Association for Computational Linguistics, Denver, Colorado, June 2015.
Composing music with RNN
http://www.hexahedria.com/2015/08/03/composing-music-with-recurrent-neural-networks/
CNN-LSTM-DNN for speech recognition
• Ensembles of RNN/LSTM, DNN, & Conv Nets (CNN) give huge gains (state of the art):
• T. Sainath, O. Vinyals, A. Senior, H. Sak. “Convolutional, Long Short-Term Memory, Fully Connected Deep Neural Networks,” ICASSP 2015.
The Impact of deep learning in speech technologies
Cortana
Conclusions
• MLPs are Boolean machines
– They represent Boolean functions over linear boundaries
– They can represent arbitrary boundaries
• Perceptrons are correlation filters
– They detect patterns in the input
• MLPs are Boolean formulae over patterns detected by perceptrons
– Higher-level perceptrons may also be viewed as feature detectors
• MLPs are universal approximators
– Can model any function to arbitrary precision
– Autoencoders perform non-linear PCA
• Convolutional NNs (CNNs) can handle shift invariance
• Specialized NNs can model sequential data
– RNNs, LSTMs