
Page 1: Unsupervised and Representation Learning

Unsupervised and Representation Learning

Davide Bacciu

Dipartimento di Informatica, Università di Pisa
[email protected]

Applied Brain Science - Computational Neuroscience (CNS)

Page 2: Unsupervised and Representation Learning

Part I

Module Information

Page 3: Unsupervised and Representation Learning

Objectives

Train computational neuroscience and machine learning specialists capable of:

- understanding application challenges and choosing the right neural-network-based solutions
- designing computational models from biological memory mechanisms
- developing advanced applications using ML solutions

Page 4: Unsupervised and Representation Learning

Expected Outcome

Students completing the module are expected to:

- Gain in-depth knowledge of advanced hierarchical neural network architectures
- Learn state-of-the-art unsupervised and representation learning algorithms
- Understand their theory and applications
- Be able to individually read, understand and discuss research works in the field

The course is targeted at:

- Students specializing in
  - Machine learning and computational intelligence
  - Data mining, data science and information retrieval
  - Robotics, bionics, bioengineering
- Students seeking machine learning theses

Page 5: Unsupervised and Representation Learning

Topics

- Synaptic plasticity, memory and learning
  - Associative learning, competitive learning and inhibition
- Associative memory models
  - Hopfield networks
  - Boltzmann Machines
  - Adaptive Resonance Theory
- Representation learning and hierarchical models
  - Biological inspiration: sparse coding, pooling and information processing in the visual cortex
  - HMAX, CNN, Deep Learning

Page 6: Unsupervised and Representation Learning

Instructor

Davide Bacciu
Email: [email protected]
Tel: 050 2212749
Office: Room 367, Dipartimento di Informatica
Office hours: Monday 17-19 (email me!)

Page 7: Unsupervised and Representation Learning

Schedule

Lectures
1. Unsupervised and representation learning (3h)
2. Associative Memories I - Hopfield networks (2h)
3. Associative Memories II - Boltzmann Machines (2h)
4. Adaptive Resonance Theory (1h)
5. Representation learning and hierarchical models (2h)
6. Deep Learning (3h)

Laboratory activity
1. Hands-on Lab I (3h)
2. Hands-on Lab II (2h)

We will try to fit an additional hour into the Deep Learning lecture for some extra lab time.

Page 8: Unsupervised and Representation Learning

Homepage

Reference Webpage on Didawiki:

http://didawiki.di.unipi.it/doku.php/bionics-engineering/computational-neuroscience/start

Here you can find:
- Course information
- Lecture slides
- Articles and course materials

You can subscribe to get RSS feeds on page updates.

Page 9: Unsupervised and Representation Learning

Reference Books

A classical reference book for Computational Neuroscience courses:

P. Dayan and L.F. Abbott, Theoretical Neuroscience, The MIT Press (2001)

An alternative book covering similar topics, freely available online:

W. Gerstner, W.M. Kistler, R. Naud and L. Paninski, Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition, Cambridge University Press (2014)

Page 10: Unsupervised and Representation Learning

Part II

Unsupervised and Representation Learning

Page 11: Unsupervised and Representation Learning

The Big Picture

- Learning to encode complex/noisy input information in the activations of a neural network (representation learning)
- Requires a mechanism to reconfigure the synaptic response (plasticity)
- A computational approach through bio-inspiration

Page 12: Unsupervised and Representation Learning

Reference Model

- A simple computational neuron abstraction: synaptic inputs $u_j$ are integrated to determine the activation $v_i$
- Synaptic weights $w_{ij}$ and activation function $F$
- A neural network representing connected assemblies of computational neurons
- Distributed representation of stimuli

Page 13: Unsupervised and Representation Learning

Learning in the Brain

Introducing relatively permanent changes in neuron behaviour as a result of experience with stimuli.

Many different learning flavours:
- Perceptual
- Stimulus-response
- Motor
- Relational

Many adaptation mechanisms:
- Habituation
- Sensitization
- Priming
- Conditioning

A common underlying aspect ⇒ synaptic plasticity

Page 14: Unsupervised and Representation Learning

Synaptic Plasticity

- Synapses are characterized by their weight $w_{ij}$
  - It determines the response of a postsynaptic neuron $i$ to an action potential from presynaptic neuron $j$
  - Assumed fixed so far
- Electrophysiological experiments show that the response (amplitude) is not fixed but can change over time
- Changes of the synaptic strength are called synaptic plasticity

Page 15: Unsupervised and Representation Learning

Models of Synaptic Plasticity

Synaptic plasticity depends on a variety of factors:
- Different causes: co-activation, repetition, ...
- Different effects: enhancement, depression
- Different timescales: short-term, long-term, ...

Hebbian: plasticity depends on both presynaptic and postsynaptic activity. How do we define synaptic activity?
- Firing rates
- Spikes (action potentials)

Page 16: Unsupervised and Representation Learning

Learning Paradigms

How is synaptic plasticity used as part of a training process to change the neural response?

Unsupervised learning
- The network responds to a series of inputs during training solely on the basis of its intrinsic connections and dynamics
- Principal component analysis, density estimation, representation learning

Supervised learning
- A desired set of input-output relationships is imposed on the network by a teacher
- Regression, classification, imitation learning

Reinforcement learning
- The network response is adjusted through a reward/punishment signal assessing performance on the task

Page 17: Unsupervised and Representation Learning

Hebbian Learning

The Organization of Behavior, 1949 (Donald Hebb):

"When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."

"Neurons that fire together, wire together"

Self-organization of neuron assemblies

Page 18: Unsupervised and Representation Learning

Hebb, Pavlov and his Dog

What happens now every time a bell is rung?

Page 19: Unsupervised and Representation Learning

Hebb, Pavlov and his Dog

Page 20: Unsupervised and Representation Learning

Long Term Potentiation (LTP)

An enhancement of the synaptic response whose effects are long-lasting (e.g. lasting at least 10 minutes).

Page 21: Unsupervised and Representation Learning

Firing Rate Neuron (Refresher)

Relax the constraint of positive firing rates.

Models the steady-state output firing rate as

$$v_\infty = F(\mathbf{w} \cdot \mathbf{u})$$

With a time-dependent input current, the firing-rate dynamics is

$$\tau_r \frac{dv}{dt} = -v + F(\mathbf{w} \cdot \mathbf{u})$$

Refer to the simple linear model

$$\tau_r \frac{dv}{dt} = -v + \sum_{j=1}^{N_u} w_j u_j$$

with steady state $v = \mathbf{w} \cdot \mathbf{u}$.
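As a quick numerical illustration, here is a minimal sketch of Euler-integrating the linear firing-rate model; the time constant, step size and random weights are illustrative assumptions, not values from the course.

```python
import numpy as np

# Euler integration of the linear firing-rate model
#   tau_r dv/dt = -v + w . u
# All parameter values here are illustrative assumptions.
rng = np.random.default_rng(0)
tau_r, dt = 10.0, 0.1            # time constant and step size (ms, assumed)
w = rng.normal(size=5)           # fixed synaptic weights
u = rng.normal(size=5)           # constant presynaptic input
v = 0.0
for _ in range(2000):
    v += (dt / tau_r) * (-v + w @ u)
print(v, w @ u)                  # v has relaxed to the steady state v = w . u
```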

Page 22: Unsupervised and Representation Learning

Hebb Rule

The basic Hebb rule is

$$\tau_w \frac{d\mathbf{w}}{dt} = v\mathbf{u}$$

where the time constant $\tau_w$ controls the learning rate. In general, an averaged version is preferred,

$$\tau_w \frac{d\mathbf{w}}{dt} = \langle v\mathbf{u} \rangle$$

where $\langle \cdot \rangle$ denotes the average over input patterns. Inserting the linear firing rate $v = \mathbf{w} \cdot \mathbf{u}$ yields the correlation rule

$$\tau_w \frac{d\mathbf{w}}{dt} = Q\mathbf{w} \quad \text{with} \quad Q = \langle \mathbf{u}\mathbf{u}^T \rangle$$
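A minimal discrete-time sketch of the basic Hebb rule for a linear neuron; the learning rate and the Gaussian input patterns are assumptions for illustration.

```python
import numpy as np

# Discrete-time basic Hebb rule for a linear neuron, v = w . u
# The learning rate (eta ~ dt/tau_w) and input statistics are assumptions.
rng = np.random.default_rng(1)
U = rng.normal(size=(1000, 4))   # input patterns u, one per row
w = 0.01 * rng.normal(size=4)
eta = 0.01
for u in U:
    v = w @ u                    # linear firing rate
    w += eta * v * u             # basic Hebb update: dw ~ v u
# Averaged over patterns, the update follows the correlation rule
# tau_w dw/dt = Q w with Q = <u u^T>:
Q = (U.T @ U) / len(U)
print(np.linalg.norm(w))         # norm grows without bound under plain Hebb
```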

Page 23: Unsupervised and Representation Learning

Hebb Rule and Postsynaptic Depression

Basic Hebbian learning does not account for Long-Term Depression (LTD): synaptic strength should depress when presynaptic activity is paired with low postsynaptic activation.

$$\tau_w \frac{d\mathbf{w}}{dt} = (v - \theta_v)\,\mathbf{u} \qquad \text{(postsynaptic)}$$

$$\tau_w \frac{d\mathbf{w}}{dt} = v\,(\mathbf{u} - \boldsymbol{\theta}_u) \qquad \text{(presynaptic)}$$

where the thresholds $\theta_v$, $\boldsymbol{\theta}_u$ switch between LTD and LTP, e.g. $\theta_v = \langle v \rangle$. When averaged, both are equivalent to the covariance rule

$$\tau_w \frac{d\mathbf{w}}{dt} = C\mathbf{w} \quad \text{with} \quad C = \langle (\mathbf{u} - \langle\mathbf{u}\rangle)(\mathbf{u} - \langle\mathbf{u}\rangle)^T \rangle$$
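A sketch of the postsynaptic-thresholded variant, with a running estimate of $\langle v \rangle$ standing in for $\theta_v$; the rates and input statistics are illustrative assumptions.

```python
import numpy as np

# Postsynaptic-thresholded Hebb rule, dw ~ (v - theta_v) u, with a running
# estimate of <v> standing in for theta_v. Rates and inputs are assumptions.
rng = np.random.default_rng(2)
w = 0.01 * rng.normal(size=4)
eta, theta_v = 0.001, 0.0
for u in rng.normal(loc=1.0, size=(5000, 4)):
    v = w @ u
    theta_v += 0.05 * (v - theta_v)   # slow running average of <v>
    w += eta * (v - theta_v) * u      # LTP when v > theta_v, LTD otherwise
```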

Page 24: Unsupervised and Representation Learning

Basic Hebbian Learning is Unstable

Positive feedback loop in which activity and weights mutually reinforce each other.

- Learning is unstable: uncontrolled weight growth. Solution: weight saturation constraints
- Synapses are updated independently: poor selectivity to different inputs. Solution: synaptic competition

Page 25: Unsupervised and Representation Learning

Stabilization Strategies

(1) Synaptic normalization

(2) Nonlinear learning rules

Both strategies prevent unbounded weight growth as well as introduce synaptic competition.

Page 26: Unsupervised and Representation Learning

Synaptic normalization

Key Idea
- Add terms to the Hebb rule that depend explicitly on the weights
- Assume a neuron can support only a fixed total amount of synaptic weight

Question: What machine learning practice does this recall?

The normalization constraint can be imposed:
- Rigidly: at every time step (a minimal sketch follows below)
- Dynamically: satisfied only asymptotically at the end of training

Different normalization constraints and strategies can lead to (consistently) different training outcomes.
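A sketch of the rigid strategy: a plain Hebb step followed by explicit renormalization. The learning rate and inputs are illustrative assumptions.

```python
import numpy as np

# Rigid normalization: a basic Hebb step followed by renormalization,
# so the total synaptic resource ||w|| stays fixed at every time step.
# Learning rate and inputs are illustrative assumptions.
rng = np.random.default_rng(3)
w = rng.normal(size=4)
w /= np.linalg.norm(w)
eta = 0.01
for u in rng.normal(size=(1000, 4)):
    v = w @ u
    w += eta * v * u                 # basic Hebb update
    w /= np.linalg.norm(w)           # rigidly impose ||w|| = 1
```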

Page 27: Unsupervised and Representation Learning

Oja Rule (a.k.a. Multiplicative Normalization)

Add a sum-of-squares normalization term to the Hebb rule:

$$\tau_w \frac{d\mathbf{w}}{dt} = v\mathbf{u} - \alpha v^2 \mathbf{w}, \qquad \alpha > 0$$

Stability follows since the squared norm $\|\mathbf{w}\|_2^2$ relaxes over time to $1/\alpha$:

$$\tau_w \frac{d\|\mathbf{w}\|_2^2}{dt} = 2v^2\,(1 - \alpha\|\mathbf{w}\|_2^2)$$

Note that it also introduces synaptic competition (why?).

The Oja rule is local and dynamic, but not biologically plausible.
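A minimal simulation of the Oja rule, checking that $\|\mathbf{w}\|_2^2$ relaxes towards $1/\alpha$; the value of $\alpha$, the learning rate and the input distribution are illustrative.

```python
import numpy as np

# Minimal simulation of the Oja rule; alpha and the learning rate are
# illustrative. The squared norm of w should relax towards 1/alpha.
rng = np.random.default_rng(4)
alpha, eta = 1.0, 0.005
w = 0.1 * rng.normal(size=4)
for u in rng.normal(size=(20000, 4)):
    v = w @ u
    w += eta * (v * u - alpha * v**2 * w)
print(np.sum(w**2))                  # ~ 1/alpha
```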

Page 28: Unsupervised and Representation Learning

The BCM Rule (Bienenstock, Cooper and Munro, 1982)

A more biologically plausible synaptic update:

$$\tau_w \frac{d\mathbf{w}}{dt} = v\,\mathbf{u}\,(v - \theta_v)$$

A nonlinear learning rule introducing a sliding threshold:

$$\tau_\theta \frac{d\theta_v}{dt} = v^2 - \theta_v$$

- No unbounded weight growth
- Competition among stimuli
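An illustrative discrete-time sketch of the BCM rule; the learning rates are assumptions, with the threshold adapting faster than the weights.

```python
import numpy as np

# Illustrative discrete-time BCM rule with a sliding threshold. The rates
# are assumptions; the threshold adapts faster than the weights.
rng = np.random.default_rng(6)
w = 0.1 * rng.normal(size=4)
theta_v, eta_w, eta_theta = 0.0, 0.001, 0.05
for u in rng.normal(size=(20000, 4)):
    v = w @ u
    w += eta_w * v * u * (v - theta_v)       # LTP above, LTD below threshold
    theta_v += eta_theta * (v**2 - theta_v)  # sliding threshold tracks <v^2>
```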

Page 29: Unsupervised and Representation Learning

Unsupervised Learning

A computational view of the effects of Hebbian synaptic plasticity in artificial neural networks, focusing on training without a teacher signal:
- Learning to encode stimuli
- Cortical maps

Assess the effect of:
- Synaptic competition
- Neuron competition
- Inhibition

Page 30: Unsupervised and Representation Learning

What does the Oja Rule Learn?

When considering a linear neuron, the Oja rule is

$$\frac{d\mathbf{w}}{dt} = \nu\,(\mathbf{u}v - v^2\mathbf{w}) = \nu\,(\mathbf{u}\mathbf{u}^T\mathbf{w} - (\mathbf{w}^T\mathbf{u}\mathbf{u}^T\mathbf{w})\,\mathbf{w})$$

The expected change of $\mathbf{w}$ (averaged over inputs) is

$$\frac{d\langle\mathbf{w}\rangle}{dt} = \nu\,\langle \mathbf{u}\mathbf{u}^T\mathbf{w} - (\mathbf{w}^T\mathbf{u}\mathbf{u}^T\mathbf{w})\,\mathbf{w} \rangle = \nu\,(Q\mathbf{w} - (\mathbf{w}^T Q \mathbf{w})\,\mathbf{w})$$

At convergence

$$\frac{d\langle\mathbf{w}\rangle}{dt} = 0 = \nu\,(Q\mathbf{w} - (\mathbf{w}^T Q\mathbf{w})\,\mathbf{w}) \iff Q\mathbf{w} = (\mathbf{w}^T Q\mathbf{w})\,\mathbf{w} = \lambda\mathbf{w}$$

This is an eigenvalue problem ⇒ the eigenvectors of $Q$ are the potential solutions.
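A numerical check of this fixed point: train with the Oja rule on correlated inputs and compare against the principal eigenvector of $Q$ computed with `numpy.linalg.eigh`. Sizes, the learning rate and the mixing matrix are illustrative assumptions.

```python
import numpy as np

# Numerical check of the fixed point: train with the Oja rule on correlated
# inputs and compare against the principal eigenvector of Q = <u u^T>.
# Sizes, rates and the mixing matrix A are illustrative assumptions.
rng = np.random.default_rng(5)
A = rng.normal(size=(4, 4))
U = rng.normal(size=(50000, 4)) @ A.T        # inputs with correlations
Q = (U.T @ U) / len(U)
w = 0.1 * rng.normal(size=4)
for u in U:
    v = w @ u
    w += 0.002 * (v * u - v**2 * w)          # Oja rule with alpha = 1
top = np.linalg.eigh(Q)[1][:, -1]            # principal eigenvector of Q
print(abs(w @ top) / np.linalg.norm(w))      # ~ 1: w is aligned with it
```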

Page 31: Unsupervised and Representation Learning

Hebb, Oja and the PCA

The fixed point of the Oja rule (but also of the Hebb rule) is the largest eigenvector of $Q$.

- Hebbian learning rotates the weight vector to align with the principal eigenvector of the input correlation/covariance matrix
- The weight vector in the Hebb rule has unbounded norm, while the Oja rule learns a normalized weight vector

Page 32: Unsupervised and Representation Learning

Competitive Learning: Synaptic Competition

- Synaptic competition favours selectivity
- A Hebbian rule with constraints preventing unconstrained growth explains orientation selectivity in the primary visual cortex

Page 33: Unsupervised and Representation Learning

Competitive Learning: Principal Component Analysis

Hebbian learning orients the weight vector so that the neuron is responsive to the information in the principal component of the data covariance/correlation. Can we identify other principal components? How?

- Introduce competition between neurons
- Different neurons encode different stimuli
- Selectivity and diversity

Page 34: Unsupervised and Representation Learning

Competitive Learning with Multiple Neurons

Recurrent connections serve neuron differentiation. The output activity evolves as

$$\tau_v \frac{d\mathbf{v}}{dt} = -\mathbf{v} + W\mathbf{u} + M\mathbf{v}$$

with a stable fixed point at the steady-state output (provided $\rho(M) < 1$):

$$\mathbf{v} = W\mathbf{u} + M\mathbf{v} \quad\Rightarrow\quad \mathbf{v} = KW\mathbf{u}, \qquad K = (I - M)^{-1}$$
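A small sketch computing this steady state directly; the network sizes, the random weights and the zero self-connection choice are assumptions.

```python
import numpy as np

# Direct computation of the steady state v = (I - M)^{-1} W u. Network
# sizes, the random weights and zero self-connections are assumptions.
rng = np.random.default_rng(7)
n_out, n_in = 3, 5
W = rng.normal(size=(n_out, n_in))           # feedforward weights
M = 0.2 * rng.normal(size=(n_out, n_out))    # weak recurrent weights
np.fill_diagonal(M, 0.0)                     # no self-connections (assumed)
assert max(abs(np.linalg.eigvals(M))) < 1    # stability: rho(M) < 1
u = rng.normal(size=n_in)
K = np.linalg.inv(np.eye(n_out) - M)
v = K @ W @ u                                # fixed point of the dynamics
print(v)
```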

Page 35: Unsupervised and Representation Learning

Ocular Dominance

Fix the recurrent connections to a periodic function (e.g. cosine) of the neuron's position in the cortex.

Plastic feedforward, fixed recurrent connections

Page 36: Unsupervised and Representation Learning

Competitive Hebbian Learning

A two-step process (see the sketch below):

1. Long-range competition (feedforward)

$$z_i = \frac{\left(\sum_j W_{ij} u_j\right)^\delta}{\sum_k \left(\sum_j W_{kj} u_j\right)^\delta}$$

2. Short-range cooperation between neighbours (recurrent)

$$v_i = \sum_{k \in Ne(i)} M_{ik} z_k$$

Purely linear units produce little differentiation among neurons, hence the added nonlinearity $\delta$.
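A sketch of the two-step activation; the sizes, the exponent $\delta$ and the nearest-neighbour matrix $M$ are illustrative assumptions, and drives are rectified so the power nonlinearity stays well defined.

```python
import numpy as np

# Two-step competitive Hebbian activation. Sizes, delta and the
# neighbourhood matrix M are illustrative; drives are rectified so the
# power nonlinearity is well defined.
rng = np.random.default_rng(8)
W = rng.uniform(size=(10, 5))                # feedforward weights
M = np.eye(10) + np.eye(10, k=1) + np.eye(10, k=-1)  # nearest neighbours
u = rng.uniform(size=5)
delta = 4.0                                  # delta > 1 sharpens competition
a = np.maximum(W @ u, 0.0)                   # rectified feedforward drive
z = a**delta / np.sum(a**delta)              # step 1: long-range competition
v = M @ z                                    # step 2: local cooperation
print(v)
```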

Page 37: Unsupervised and Representation Learning

Competitive Feature-based Hebbian Models

- Presynaptic inputs encode different features of the stimuli
- Neuron activation measures how closely the input matches its preferred stimulus
- Competitive activation phase (sketched below)

$$z_i = \frac{\exp\left(-\sum_j (u_j - W_{ij})^2 / (2\sigma_j^2)\right)}{\sum_k \exp\left(-\sum_j (u_j - W_{kj})^2 / (2\sigma_j^2)\right)}$$

- Cooperative mechanism to ensure nearby neurons have the same selectivity
  - Self-organizing maps: $v_i = \sum_{k \in Ne(i)} M_{ik} z_k$
  - Elastic net: $v_i = z_i$
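A sketch of the Gaussian competitive activation; a single shared tuning width replaces the per-feature $\sigma_j^2$ for simplicity, and all sizes are illustrative assumptions.

```python
import numpy as np

# Feature-based competitive activation: a normalized Gaussian match between
# the input and each neuron's preferred stimulus. A single shared tuning
# width sigma2 replaces the per-feature sigma_j^2 for simplicity (assumed).
rng = np.random.default_rng(9)
W = rng.uniform(size=(10, 5))                # preferred stimuli, one per row
u = rng.uniform(size=5)
sigma2 = 0.5
d2 = np.sum((u - W)**2, axis=1)              # squared distance to each W_i
z = np.exp(-d2 / (2 * sigma2))
z /= z.sum()                                 # competitive normalization
# SOM: cooperate with neighbours, v = M z ; elastic net: v = z
```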

Page 38: Unsupervised and Representation Learning

Modeling Causality in Learning

Let's review Hebb's hypothesis:

"When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."

- The interpretation of "taking part" is the key
- The relative timing between presynaptic and postsynaptic activity plays a critical role
- This has been proved experimentally, and it is also at the root of Hebb's original interpretation
- Spike-timing-dependent plasticity (STDP)

We approximate it with a firing-rate model (in place of a spiking neuron model).

Page 39: Unsupervised and Representation Learning

STDP Reprise

- LTP when the postsynaptic spike occurs within 50 ms of the presynaptic spike
- LTD when the presynaptic spike occurs within 50 ms of the postsynaptic action potential

$H(\tau)$ ⇒ rate of synaptic modification when postsynaptic activity is separated from presynaptic activity by $\tau$ ms

$$\tau_w \frac{dw}{dt} = \int_0^\infty \Big( \underbrace{H(\tau)\,v(t)\,u(t-\tau)}_{\text{LTP}} + \underbrace{H(-\tau)\,v(t-\tau)\,u(t)}_{\text{LTD}} \Big)\, d\tau$$
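A discretized sketch of this update; the exponential window shapes and all constants are modelling assumptions not specified on the slide.

```python
import numpy as np

# Discretized version of the rate-based STDP update. The exponential
# window shapes and all constants are modelling assumptions.
dt = 1.0                                      # ms
lags = np.arange(0.0, 50.0, dt)               # integration lags tau
A_p, A_m, tau_p, tau_m = 0.01, 0.011, 20.0, 20.0
H_ltp = A_p * np.exp(-lags / tau_p)           # H(tau): post follows pre
H_ltd = -A_m * np.exp(-lags / tau_m)          # H(-tau): pre follows post

def dw(v, u, t):
    """Weight change at step t from rate histories v (post) and u (pre)."""
    k = np.arange(len(lags), dtype=int)
    k = k[t - k >= 0]                         # stay inside the history
    ltp = np.sum(H_ltp[k] * v[t] * u[t - k]) * dt
    ltd = np.sum(H_ltd[k] * v[t - k] * u[t]) * dt
    return ltp + ltd
```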

Page 40: Unsupervised and Representation Learning

Time-Dependent Plasticity with Multiple Neurons (I)

Keep W fixed and adapt the recurrent connections M using time-dependent plasticity.

Page 41: Unsupervised and Representation Learning

Time-Dependent Plasticity with Multiple Neurons (II)

- Strengthening synapses between neurons that fire in close sequence (LTP)
- The central neuron's tuning curve shifts, denoting an anticipatory response to the pattern

(Much) more on recurrent networks for time-dependent tasks in the third module.

Page 42: Unsupervised and Representation Learning

Take Home Messages

- Synaptic plasticity is the mechanism underlying all learning schemes
- Hebbian learning
  - Promotes synapses between co-activated neurons (LTP)
  - Depresses synapses responsible for asynchronous activations (LTD)
  - Leads to principal component analysis, self-organizing maps, visual filters, ...
- Competition is essential to ensure
  - Selectivity and stability at the synaptic level
  - Diversity between neurons
- Hebbian time-dependent plasticity allows learning sequential patterns

Page 43: Unsupervised and Representation Learning

Things We Haven’t Seen

- Anti-Hebbian learning: reducing synaptic strength as a result of co-activation
- Timing-based plasticity and the spiking model
- Supervised Hebbian learning
  - Perceptron
  - Delta rule

Page 44: Unsupervised and Representation Learning

Next Lecture

- Associative memories
  - Red hammers and priming
  - Learning and recalling associations between stimuli/concepts
- Hopfield networks
  - An associative memory
  - A recurrent neural network
  - An energy-based model