Neural Networks, Chapter 9
Joost N. Kok
Universiteit Leiden
Unsupervised Competitive Learning
• Competitive learning
• Winner-take-all units
• Cluster/Categorize input data
• Feature mapping
Unsupervised Competitive Learning
[Figure: winner-take-all network: an n-dimensional input layer fully connected to a layer of output units, with the winning unit highlighted]
Simple Competitive Learning
• Winner: the output unit with the largest net input h_i = Σ_j w_ij ξ_j
• Implemented by lateral inhibition
• For normalized weight vectors, the winner i* is the unit whose weight vector lies closest to the input: |w_i* − ξ| ≤ |w_i − ξ| for all i
Simple Competitive Learning
• Update weights only for the winning neuron i*:
  Δw_i*j = η (ξ_j − w_i*j)
• This moves the weight vector w_i* towards the input ξ
Simple Competitive Learning
• Update rule for all neurons:
  Δw_ij = η O_i (ξ_j − w_ij)
  with O_i = 1 if i = i*
       O_i = 0 if i ≠ i*
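The winner search and winner-only update above can be sketched in NumPy as follows (the two-cluster toy data, the number of units, the learning rate, and the epoch count are illustrative choices, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_competitive(data, n_units=2, eta=0.1, epochs=20):
    # Initialize weights to random samples from the input
    # (one of the dead-unit remedies discussed later).
    w = data[rng.choice(len(data), size=n_units, replace=False)].astype(float)
    for _ in range(epochs):
        for x in data:
            # Winner i*: the unit whose weight vector is closest to the input.
            i_star = np.argmin(np.linalg.norm(w - x, axis=1))
            # Delta w_{i*j} = eta * (xi_j - w_{i*j}); losers are not updated.
            w[i_star] += eta * (x - w[i_star])
    return w

# Two well-separated point clouds; the units should settle near the clouds.
data = np.vstack([rng.normal(0.0, 0.1, (30, 2)),
                  rng.normal(5.0, 0.1, (30, 2))])
w = train_competitive(data)
```

Because each update drags the winner towards the current input, every prototype ends up inside the region of data it wins.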
Graph Bipartitioning
• Patterns: edges = dipole stimuli
• Two output units
Simple Competitive Learning
• Dead Unit Problem: solutions
– Initialize weights to samples from the input
– Leaky learning: also update the weights of the losers (but with a smaller learning rate η)
– Arrange neurons in a geometrical way: update also neighbors
– Turn on input patterns gradually
– Conscience mechanism
– Add noise to input patterns
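Of these remedies, leaky learning is the easiest to state precisely: losers also move towards the input, but with a much smaller rate, so no unit can starve. A minimal sketch (both rates are illustrative values):

```python
import numpy as np

def leaky_update(w, x, eta_win=0.1, eta_lose=0.001):
    # Winner gets the full learning rate; all losers get a tiny one.
    i_star = np.argmin(np.linalg.norm(w - x, axis=1))
    for i in range(len(w)):
        rate = eta_win if i == i_star else eta_lose
        w[i] += rate * (x - w[i])
    return w

w = np.array([[0.0, 0.0], [10.0, 10.0]])
w = leaky_update(w, np.array([1.0, 1.0]))
# Winner (unit 0) moves to (0.1, 0.1); loser creeps to (9.991, 9.991).
```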
Vector Quantization
• Classes are represented by prototype vectors
• Voronoi tessellation
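The Voronoi tessellation simply assigns every input to the cell of its nearest prototype vector, which is a one-line computation (the prototypes and query points below are toy values):

```python
import numpy as np

def voronoi_cell(prototypes, x):
    # Index of the nearest prototype = the Voronoi cell containing x.
    return int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))

prototypes = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
cells = [voronoi_cell(prototypes, p) for p in
         np.array([[0.5, 0.5], [3.5, 1.0], [1.0, 3.5]])]
# cells → [0, 1, 2]
```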
Learning Vector Quantization
• Labelled sample data
• Update rule depends on current classification
Δw_i*j = +η (ξ_j − w_i*j)  if the class of i* is correct
Δw_i*j = −η (ξ_j − w_i*j)  if the class of i* is incorrect
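This classification-dependent rule (LVQ1) can be sketched directly: the winning prototype is attracted to the sample when its label matches and repelled when it does not. The prototypes, labels, and rate below are toy values:

```python
import numpy as np

def lvq1_step(w, labels, x, x_label, eta=0.1):
    i_star = np.argmin(np.linalg.norm(w - x, axis=1))
    # +eta (xi - w) if the winner's class is correct, -eta (xi - w) otherwise.
    sign = 1.0 if labels[i_star] == x_label else -1.0
    w[i_star] += sign * eta * (x - w[i_star])
    return w

w = np.array([[0.0], [1.0]])
labels = ["a", "b"]
w = lvq1_step(w, labels, np.array([0.2]), "a")  # correct: w[0] pulled to 0.02
w = lvq1_step(w, labels, np.array([0.9]), "a")  # wrong winner: w[1] pushed to 1.01
```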
Adaptive Resonance Theory
• Stability-Plasticity Dilemma
• Supply of neurons, only use them if needed
• Notion of “sufficiently similar”
Adaptive Resonance Theory
• Start with all weights = 1
• Enable all output units
• Find winner among enabled units
• Test match
• Update weights
• Winner i* among enabled units: maximize (w_i · ξ) / (ε + Σ_j w_ij)
• Match test (vigilance parameter ρ): (w_i* · ξ) / Σ_j ξ_j ≥ ρ
• If the test fails, disable i* and search again; otherwise update: w_i*j := w_i*j ξ_j
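A rough sketch of this ART1-style procedure for binary inputs follows; the values of ε and ρ are illustrative, and ε serves only to break ties towards more specific prototypes:

```python
import numpy as np

def art1_present(w, x, rho=0.7, eps=0.5):
    enabled = list(range(len(w)))
    while enabled:
        # Winner among enabled units: maximize (w_i . x) / (eps + sum_j w_ij).
        scores = [w[i] @ x / (eps + w[i].sum()) for i in enabled]
        i_star = enabled[int(np.argmax(scores))]
        # Match test: fraction of active inputs covered by the prototype.
        if (w[i_star] @ x) / x.sum() >= rho:
            w[i_star] = w[i_star] * x          # w := w AND x for binary vectors
            return i_star
        enabled.remove(i_star)                 # resonance failed: disable unit
    return None                                # supply of neurons exhausted

w = np.ones((2, 4))                            # all weights start at 1
x1 = np.array([1, 1, 0, 0])
x2 = np.array([0, 0, 1, 1])
c1 = art1_present(w, x1)                       # claims a fresh unit
c2 = art1_present(w, x2)                       # dissimilar input takes another
```

Presenting x1 again now resonates with the same (stable) category, illustrating how ART balances plasticity with stability.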
Feature Mapping
• Geometrical arrangement of output units
• Nearby outputs correspond to nearby input patterns
• Feature Map
• Topology preserving map
Self Organizing Map
• Determine the winner (the neuron of which the weight vector has the smallest distance to the input vector)
• Move the weight vector w of the winning neuron towards the input i
[Figure: before learning, the weight vector w points away from the input i; after learning, w has moved towards i]
Self Organizing Map
• Impose a topological order onto the competitive neurons (e.g., rectangular map)
• Let neighbors of the winner share the “prize” (The “postcode lottery” principle)
• After learning, neurons with similar weights tend to cluster on the map
Self Organizing Map
• Input: uniformly randomly distributed points
• Output: map of 20 × 20 neurons
• Training: starting with a large learning rate and neighborhood size, both are gradually decreased to facilitate convergence
Feature Mapping
• Retinotopic Map
• Somatosensory Map
• Tonotopic Map
Kohonen’s Algorithm
Δw_ij = η Λ(i, i*) (ξ_j − w_ij)
Λ(i, i*) = exp(−|r_i − r_i*|² / 2σ²)
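One step of Kohonen's update on a one-dimensional chain of neurons can be sketched as below; the map size, η, and σ are illustrative (in practice both η and σ are decreased during training, as noted earlier):

```python
import numpy as np

def som_step(w, r, x, eta=0.5, sigma=1.0):
    # Winner i*: neuron whose weight vector is closest to the input.
    i_star = np.argmin(np.linalg.norm(w - x, axis=1))
    # Gaussian neighborhood Lambda(i, i*) over the map positions r.
    lam = np.exp(-((r - r[i_star]) ** 2) / (2 * sigma ** 2))
    # Delta w_ij = eta * Lambda(i, i*) * (xi_j - w_ij): neighbors share the prize.
    w += eta * lam[:, None] * (x - w)
    return w

r = np.arange(5, dtype=float)          # neuron positions on the map
w = np.zeros((5, 2))                   # all weight vectors at the origin
w = som_step(w, r, np.array([1.0, 1.0]))
```

The winner moves the most, and each neighbor moves by an amount that decays with its distance on the map, which is exactly what makes the map topology preserving.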
Travelling Salesman Problem
• Neurons are arranged on a ring; their weight vectors trace a tour through the cities (elastic net)
• Update: Δw_i = η (Λ(i) (ξ − w_i) + γ (w_i+1 + w_i−1 − 2 w_i))
• Λ(i) = exp(−|ξ − w_i|² / 2σ²) / Σ_j exp(−|ξ − w_j|² / 2σ²)
Hybrid Learning Schemes
[Figure: network with an unsupervised first layer feeding a supervised second layer]
Counterpropagation
• First layer uses standard competitive learning
• Second (output) layer is trained using delta rule
Δw_ij = η (ζ_i − O_i) V_j
Since only the winner has V_j = 1 (and then O_i = w_ij), this reduces to:
Δw_ij = η (ζ_i − w_ij) V_j
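A minimal counterpropagation sketch combining both layers (layer sizes, learning rates, and the toy input/target pair are illustrative choices):

```python
import numpy as np

def cp_step(w1, w2, x, target, eta1=0.1, eta2=0.2):
    # First layer: standard competitive learning, winner-take-all outputs V.
    j_star = np.argmin(np.linalg.norm(w1 - x, axis=1))
    v = np.zeros(len(w1))
    v[j_star] = 1.0
    w1[j_star] += eta1 * (x - w1[j_star])
    # Second layer: linear outputs trained with the delta rule
    # Delta w_ij = eta * (zeta_i - O_i) * V_j.
    o = w2 @ v
    w2 += eta2 * np.outer(target - o, v)
    return w1, w2

w1 = np.array([[0.0, 0.0], [1.0, 1.0]])       # competitive-layer weights
w2 = np.zeros((1, 2))                         # output-layer weights
for _ in range(50):
    w1, w2 = cp_step(w1, w2, np.array([0.9, 1.1]), np.array([3.0]))
```

After repeated presentations, the winning prototype converges to the input and its output weight converges to the target, so the network learns the input-to-target lookup.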
Radial Basis Functions
• First layer with normalized Gaussian activation functions
g_j(ξ) = exp(−|ξ − w_j|² / 2σ_j²) / Σ_k exp(−|ξ − w_k|² / 2σ_k²)
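The normalized Gaussian first layer can be sketched as below; the centers and widths are illustrative, though in practice they are often found by unsupervised (competitive) learning on the inputs:

```python
import numpy as np

def rbf_layer(x, centers, sigmas):
    # g_j = exp(-|x - w_j|^2 / 2 sigma_j^2), normalized over all units k.
    d2 = np.sum((centers - x) ** 2, axis=1)
    e = np.exp(-d2 / (2 * sigmas ** 2))
    return e / e.sum()                  # normalization couples the units

centers = np.array([[0.0, 0.0], [1.0, 1.0]])
sigmas = np.array([0.5, 0.5])
g = rbf_layer(np.array([0.0, 0.0]), centers, sigmas)
```

The normalization makes the activations sum to one, so the layer outputs a soft assignment of the input to the basis-function centers.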