
Tartu University

Institute of Computer Science

Computational Neuroscience Lab

MTAT.03.291 Introduction to Computational Neuroscience (Sissejuhatus arvutuslikku neuroteadusesse)

PROJECT

Synaptic failure: benefit or inefficiency?

Project duration: 22.04.2015 – 25.05.2015

Authors: Oliver Härmson, Rao Pärnpuu

Supervisor: Ardi Tampuu

Tartu 2015


Table of Contents

1. Introduction
2. Overview of the literature
3. The Model
4. Aims
5. Results
6. Concluding remarks
References

   


1. Introduction

The aim of our project was to study the effect of synaptic failure on information transmission. Synaptic failure is a neurobiological phenomenon: in the brain, up to 70% of pre-synaptic signals do not elicit post-synaptic signals. The exact mechanism and reason for this effect have not been conclusively established.

In this paper we focus on two parts related to this question. In the first part we review the scientific literature connected to this phenomenon, to identify the possible biological or informational reasons for this type of failure and the exact mechanisms in the neuron that initiate it. In the second part we use a biologically realistic neuronal model to simulate different experiments. The aim of these experiments is to see whether synaptic failure can benefit the efficiency of information processing. More precisely, we use machine learning to test whether synaptic failure helps to differentiate between the inputs of different pre-synaptic neurons. This could help explain the positive aspects of synaptic failure: how it can increase the amount of information transferred while also reducing the number of action potentials, thereby preserving energy.

2. Overview of the literature

When taking into account the number of nerve cells in the mammalian nervous system and the number of their connections, one must question whether their communication is always effective. In fact, not all action potentials evoke neurotransmitter release, and not all vesicle release events evoke action potentials (APs) in the postsynaptic cell. The rate of successful spike transmission varies between 0.1 and 0.9 (Guo, Li, 2012). Also, the rate of synaptic transmission and the neurotransmitter concentration in the synapse following a release event have been shown to decrease over time during high-frequency stimulation. There are reports to suggest that these phenomena are not simply "failures" or results of the overburdening of neural networks. In fact, synaptic failure and differences in neural firing might serve to enhance the encoding of stimuli and the transmission of information. In addition, these phenomena help to lower energy consumption and heat production. The mechanisms giving rise to synaptic failure and the potential benefits it could bring about are discussed below.


It appears that electrical conduction itself is relatively constant. Instead, information is lost where chemical transmission meets the electrical. In the neocortex, experimental evidence indicates that axonal conduction is, essentially, information lossless. But there are three information-losing transformations. Firstly, there is quantal failure. An impulse that has faithfully travelled down the axon can fail to evoke the release of a maximum number of neurotransmitter molecules in that synapse, otherwise known as a quantum (Levy, Baxter, 2002). For example, hippocampal and neocortical excitatory synapses are able to transmit at most ~10^4 neurotransmitter molecules. However, the probability of evoking the release of a quantum is reported to be 0.25 to 0.5 (Thomson, 2000). Secondly, information can be lost due to quantal amplitude variation. Thirdly, information is approximated as a result of dendrosomatic summation. The three aforementioned transformations are depicted in figure 1.

Figure 1. The three levels of information approximation for a single neuron that has three inputs and one output. The presynaptic axonal input to the postsynaptic neuron is a vector of binary values X = [X1, X2, ..., Xn]. Each input Xi is subject to quantal failures, the result of which is denoted by ϕ(Xi). When this expression is scaled by the quantal amplitude, Qi, the output from one axon becomes ϕ(Xi)Qi. This is integrated in the postsynaptic neuron across all the corresponding inputs. The output of the spike generator (soma) is a binary variable, Z, which is transmitted down the axon as Z'. As axonal conduction is nearly lossless, I(Z; Z') ≈ H(Z) (the amount of computational information transmitted in the axon roughly equals the Shannon entropy of the information encoded).

There is a considerable amount of experimental data to support the hypothesis that synaptic failure reduces energy expenditure. Whereas AP creation in the cell body places a relatively low burden on the brain's energy demand, ~47% of the total energy consumption is associated with AP creation in axons and ~34% with dendritic excitation. Embedded in those numbers are smaller energy costs attributable to, for example, the recycling and repackaging of neurotransmitter vesicles, all of which can be avoided by quantal failure.


Quantal failure also saves energy on the cost of postsynaptic depolarization (Levy, Baxter, 2002).

Since synaptic failure has been noted to be a significant determinant of neural transmission, attempts to quantify it have been made. For example, Levy and colleagues (2002) estimated that the optimal proportion of synaptic failures depends only on the energy-efficient maximum value of H(Z):

f = (1/4)^H(p*),

and this statement does not depend on the number of inputs, n. For example, p* = 0.05 (the physiologically optimal firing probability observed in the nonmotor neocortex and limbic cortex) implies f = 0.67 (fig 2). Quantal size itself, however, leads to very little variation in failure rates.
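As a quick numerical check of this relationship (the formula is from Levy and Baxter (2002); the short MATLAB verification below is ours):

    % Optimal failure rate f = (1/4)^H(p*), Levy and Baxter (2002)
    pstar = 0.05;                                      % physiological firing probability
    H = -pstar*log2(pstar) - (1-pstar)*log2(1-pstar);  % binary entropy H(p*), in bits
    f = (1/4)^H;                                       % implied optimal failure rate
    fprintf('H(p*) = %.3f bits, f = %.2f\n', H, f)     % prints H(p*) = 0.286 bits, f = 0.67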

Figure 2. Optimal failure rate as a function of spike probability in one computational interval. The optimal failure rate decreases as the optimal firing probability (p*) increases. The vicinity of physiological p* (0.025 – 0.05 for nonmotor neocortex and limbic cortex) predicts physiologically observed failure rates.

Synaptic failure can be measured in biological systems indirectly, by comparing the information encoded by different neurons on the same pathway (for example the visual pathway). Sincich and colleagues (2009) recorded inputs from single retinal ganglion cells (RGC) and outputs from connected lateral geniculate (LGN) cells and found the geniculate cells to modify the information in a way that enabled more information to be carried by each spike. Specifically, average information rates increased from 0.815 bits/spike for the RGCs to 1.155 bits/spike for the LGN cells. It was found that the LGN neurons exhibited a higher spike rate gain for certain stimulus features compared to the RGCs (fig 3). Also, it was found that only those RGC spike trains that carried a high information density (2.43 bits/spike) elicited an LGN spike, whereas low-density RGC spike trains had the opposite effect. These differences in spiking rates are surprising, as the mean


rate of RGC EPSPs (EPSPs/s) was found to be lower than the LGN mean spike rates (Sincich, Horton, Sharpee, 2009). This shows, however, that the output rate can be reduced to an extent that preserves information transmission, possibly even enhancing it via selective activation for certain stimuli (Guo, Li, 2012). As in many other realms, quantity is not the same as quality.

Figure 3. Population data comparing RGC and LGN spike rate gains for all filters (STA – spike-triggered average; MID1, MID2 – maximally informative dimensions 1 and 2). By picking out the stimulus features that explained much of the spiking activity in both RGC and LGN neurons, the vectors MID1 and MID2 were produced. Spike rate gains are defined as the average spiking rate resulting from stimuli with the same components along the relevant features. When both the MID1 and MID2 filters were used, it was found that the same stimuli elicited more spiking in the LGN cells.

There are even studies to suggest that evoking synaptic failure might have therapeutic value. For example, it has long been known that some of the symptoms of Parkinson's disease (such as tremor and bradykinesia) might be caused by pathological activation of the thalamus by the subthalamic nucleus (STN) (So, Kent, Grill, 2012). Rosenbaum and colleagues (2014) have shown by in vivo and in vitro recordings that deep brain stimulation (DBS) evokes synaptic failures in STN projections and, as a result, reduces parkinsonian beta-oscillations and synchrony. The model that they proposed closely reproduced the findings of the in vivo recordings (fig 4) and took into account the following phenomena: (i) each stimulation pulse evokes an action potential with a probability that is decreased by each pulse, (ii) between pulses, the probability of successful action potential initiation recovers over time, (iii) the time at which an action potential reaches the axon terminal is increased by each pulse and recovers exponentially in time, and (iv) during each pulse the number of vesicles in the axon terminal decreases (Rosenbaum, Zimnik, Zheng, 2014).
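A minimal sketch of how per-pulse dynamics of this kind could be simulated is shown below. The update rules mirror points (i)–(iv), but all decay factors and recovery time constants here are illustrative assumptions, not the values fitted by Rosenbaum and colleagues:

    % Illustrative short-term depression/recovery dynamics during 130 Hz DBS.
    % All numerical constants below are assumed for illustration only.
    dt   = 1/130;                 % inter-pulse interval (s) at 130 Hz
    pAP  = 1.0;                   % (i)   probability that a pulse evokes an AP
    lat  = 2.6;                   % (iii) AP latency at the axon terminal (ms)
    ves  = 1.0;                   % (iv)  normalized vesicle pool in the terminal
    tauP = 0.5; tauL = 1.0; tauV = 2.0;   % recovery time constants (s), assumed
    for k = 1:500                 % 500 stimulation pulses
        if rand < pAP             % the pulse evokes an axonal AP
            ves = 0.95*ves;       % (iv)  vesicle pool shrinks with each release
        end
        pAP = 0.98*pAP;           % (i)   success probability drops per pulse
        lat = lat + 0.01;         % (iii) latency grows per pulse
        % (ii)-(iv): exponential recovery toward baseline between pulses
        pAP = 1.0 - (1.0 - pAP)*exp(-dt/tauP);
        lat = 2.6 + (lat - 2.6)*exp(-dt/tauL);
        ves = 1.0 - (1.0 - ves)*exp(-dt/tauV);
    end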


Figure 4. Synaptic and axonal failure during high-frequency stimulation of the STN. A, B: amplitude of the mean post-synaptic currents (PSCs) and fiber volleys (FVs) in the SNc elicited by 130 Hz HFS in the STN, plotted as a function of the time elapsed since the onset of stimulation. C: latency of the FV peak after each HFS pulse. D: FV amplitude after HFS is replaced by slow 0.1 Hz stimulation, normalized by the final (recovered) amplitude. Blue error bars are from in vitro recordings in rodent SNc. Red curves are from simulations of the computational model.

 

3. The Model

The details of the model we used can be found in the supplementary information of the studies by Yeung and colleagues (2004) and Shouval and colleagues (2002); it was implemented with help from Taivo Pungas (Yeung, Shouval, Blais et al., 2004; Shouval, Bear, Cooper, 2002). In brief, an integrate-and-fire model was used to simulate the dynamics of the somatic membrane potential. The EPSP waveforms took into account NMDAR Ca2+ currents.

The model was based on Taivo Pungas' bachelor's thesis. The changes necessary to make the model suit our research goals were made together with Ardi Tampuu and Raul Vicente.

In essence, the model is an integrate-and-fire neuron. The original model was a neural network comprising 100 excitatory neurons and 20 inhibitory neurons (the input neurons) connected to one output neuron. We used a network of 2 excitatory input neurons and 1 output neuron. The input generated was either (a) regular and uncorrelated, (b) Poissonian and 100% correlated, (c) Poissonian and 80% correlated or (d) totally uncorrelated.


The input was generated in 1-second blocks for ease of implementation, and the release of neurotransmitter caused by each presynaptic spike was assumed to last 1 ms. The mean initial input rate of the input neurons was set to either 10 Hz or 30 Hz. The input rate was normalized to the synaptic failure probability:

Re = Ri / (1 - p),                    (1)

where Re is the mean effective firing rate, Ri is the initial mean firing rate and p is the synaptic failure probability. This ensured that the amount of information provided to the postsynaptic neuron stayed relatively constant despite the increasing probability of synaptic failure (fig 5). There was no significant correlation between the probability of synaptic failure and the number of spikes fired by the postsynaptic neuron (p = 0.7, r = 0.3).

Figure 5. Relationship between synaptic failure probabilities and the number of postsynaptic spikes in trials of equal and unequal weights for the 2 inputs.
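A minimal sketch of the rate normalization in (1) followed by the random thinning of a spike train (variable names are ours, not taken from the project source code):

    % Compensate the input rate for synaptic failure, then thin the train.
    Ri = 10;                                  % initial mean firing rate (Hz)
    p  = 0.4;                                 % synaptic failure probability
    Re = Ri / (1 - p);                        % effective rate, equation (1)
    T  = 101;                                 % simulation length (s)
    isi    = -log(rand(ceil(3*Re*T), 1))/Re;  % exponential inter-spike intervals
    spikes = cumsum(isi);                     % Poissonian spike times
    spikes = spikes(spikes <= T);
    kept   = spikes(rand(size(spikes)) > p);  % each spike fails with probability p
    fprintf('generated %d spikes, delivered %d\n', numel(spikes), numel(kept))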

The 2 excitatory inputs were connected to dendrite compartments in which the opening of glutamate-controlled ion channels was simulated. The resulting ion flow through the channels tends to drive Vpost towards the excitatory reversal potential, 0 mV. This increases the probability of the postsynaptic neuron producing a spike. Each time the postsynaptic neuron fired a spike, reaching its spike potential of 40 mV, the membrane potential was set back to Vreset = -65 mV, the resting membrane potential.
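A minimal leaky integrate-and-fire sketch of this update is given below. The time constants and the weight scale are simplified assumptions, and the firing threshold is placed below 0 mV (a conductance synapse with a 0 mV reversal potential cannot depolarize the membrane past 0 mV, so the 40 mV spike potential is treated here as the drawn spike peak rather than the threshold); the full model followed Shouval et al. (2002) and Yeung et al. (2004):

    % Minimal conductance-based integrate-and-fire neuron (Euler method).
    % Time constants, weight and threshold are illustrative assumptions.
    dt = 1e-4; T = 1; nSteps = round(T/dt);   % 1 s at 0.1 ms resolution
    tauM = 20e-3; tauS = 2e-3;                % membrane and synaptic time constants (s)
    Vrest = -65e-3; Vth = -55e-3; Erev = 0;   % reset/rest, threshold, reversal (V)
    w = 2;                                    % synaptic weight (unitless conductance)
    inputSpikes = rand(1, nSteps) < 10*dt;    % ~10 Hz Poissonian presynaptic train
    V = Vrest; g = 0; outSpikes = [];
    for t = 1:nSteps
        g = g*exp(-dt/tauS) + w*inputSpikes(t);          % channels open on input spikes
        V = V + (dt/tauM)*((Vrest - V) + g*(Erev - V));  % ion flow drives V toward 0 mV
        if V >= Vth                                      % postsynaptic spike fired
            outSpikes(end+1) = t*dt;                     % record spike time (s)
            V = Vrest;                                   % reset to -65 mV
        end
    end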

Probabilities of failure ranging from 0.0 to 0.9, in steps of 0.1, were studied in each experiment. The probability of failure manifested itself as a random removal of input spikes from the generated input trains: each input spike was removed with a probability equal to the failure probability. Also, the two synapses were attributed either different (3 and 7) or equal (5 and 5) weights. The aim of this modification was to train a support vector machine (SVM) classifier to tell, for each trial, which set of weights was used. This reflects the ability of the neuron to distinguish between its two inputs.
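The classification step could look roughly as follows (a sketch assuming MATLAB's fitcsvm from the Statistics and Machine Learning Toolbox; the features and their values here are hypothetical placeholders, not the ones extracted in the project):

    % Train an SVM to recognize which weight set, (3,7) or (5,5), produced a trial.
    % Hypothetical features: per-second postsynaptic spike counts, 50 trials per class.
    rng(1);
    featUnequal = poissrnd(12, 50, 10);       % placeholder counts, weights (3,7)
    featEqual   = poissrnd(10, 50, 10);       % placeholder counts, weights (5,5)
    X = [featUnequal; featEqual];             % one row of features per trial
    y = [ones(50,1); zeros(50,1)];            % class labels: 1 = unequal, 0 = equal
    mdl = fitcsvm(X, y, 'KernelFunction', 'linear');
    acc = 1 - kfoldLoss(crossval(mdl, 'KFold', 10));   % cross-validated accuracy
    fprintf('classifier accuracy: %.2f\n', acc)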


In several experiments we used a Poissonian distribution to model the spike trains. In statistics, the Poisson distribution is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time and/or space, if these events occur with a known average rate and independently of the time since the last event. In our case, we used the average firing rate (spikes per second) to generate Poissonian spike trains over the simulation.

Data analysis and implementation of the model were conducted in MATLAB; the source code for the conducted experiments is included as a separate file.
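For example, two Poissonian trains sharing a given fraction of their spikes, as in the partially correlated experiments below, can be built from a common component plus independent private spikes (a sketch; this binned construction is our assumption about how such inputs can be generated):

    % Two Poissonian spike trains that are approximately a fraction c identical.
    rate = 10; T = 101; c = 0.8; dt = 1e-3;              % Hz, s, shared fraction, bin (s)
    nBins  = round(T/dt);
    shared = rand(1, nBins) < c*rate*dt;                 % spikes common to both inputs
    train1 = shared | (rand(1, nBins) < (1-c)*rate*dt);  % plus private spikes, input 1
    train2 = shared | (rand(1, nBins) < (1-c)*rate*dt);  % plus private spikes, input 2
    % c = 1 gives fully correlated inputs, c = 0 fully independent ones.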

4. Aims

(i) To see whether changing the membrane potential dynamics in response to a spike event in the postsynaptic neuron changes the number of spikes fired.

(ii) To see whether different probabilities of synaptic failure affect the neuron's ability to distinguish one input from the other.

(iii) To see how well the postsynaptic neuron distinguishes its inputs when the input spike train types are altered.

5. Results

Experiment 1. Classifier accuracy for fully regular input

In this experiment the spike times of both pre-synaptic neurons were exactly the same. Once we introduced a failure rate, some spikes were randomly eliminated from both spike trains independently.

The SVM classifier was used to see whether the post-synaptic neuron was able to differentiate between the weights of the inputs. The results for different failure rates are shown in the graphs below (fig 6, 7).

 

 

 

 


Figure 6. Normalised input of 10 Hz over 101 seconds

As can be seen from the graph, an increase in the failure rate also increases the accuracy of the classification. But if the failure rate is too high, the accuracy of the classifier starts to decrease. The highest accuracy is achieved with a 20–40% failure rate. With a higher input rate, the results become more heterogeneous and the overall accuracy decreases, as can be seen by comparing figures 6 and 7.

Figure 7. Normalised input of 30 Hz over 101 seconds

   


At 30 Hz the accuracy becomes more varied and lower; therefore no generalisations can be made when using a higher input rate for the experiment.

It should be noted that the accuracy of this classification varies. To get results with an error rate of less than a few percent, longer simulations would have to be run and standard deviations computed over several simulations, which would have been outside the scope of this project. It suffices to say that in the case of totally regular input, there is an optimal failure rate which increases the ability of the neuron to distinguish its channels of information.

Experiment 2. Classifier accuracy for Poissonian and 100% correlated input trains

In this experiment two Poissonian, 100% correlated input trains were generated. Synaptic failure events in both inputs occurred simultaneously and cancelled out spikes in both spike trains at exactly the same time points. As can be expected, increasing the probability of synaptic failure does not affect the ability of the neuron to distinguish its inputs. Classification accuracy fluctuates around chance level at both 10 Hz and 30 Hz input frequency (fig 8, 9).

Figure 8. Normalised input of 10 Hz over 101 seconds

         


Figure 9. Normalised input of 30 Hz over 101 seconds

   

 

Experiment 3. Classifier accuracy for Poissonian and 80% correlated input

In a way this experiment represents the reality in populations of neurons. For example, two neurons could be specific for the same stimuli but encode them with some differences in spiking patterns. In other words, the neurons might not be synchronized. In this simulation two Poissonian spike trains were produced that were 80% identical. Synaptic failure events cancelled out spikes in both spike trains independently and therefore at different time points. In essence, synaptic failure in these inputs can be thought of as decorrelating the spike trains.

An input rate of 10 Hz produced SVM prediction accuracy rates between 0.5 and 0.6 (fig 10). Changing the mean input rate to 30 Hz resulted in classifier accuracy rates between 0.6 and 0.9 (fig 11). Although mean errors were not compared statistically in this study, it can be speculated that higher probabilities of failure result in increased classifier accuracies for partially correlated Poissonian spike trains. This can be explained as an increasing decorrelation between the (initially correlated) spike trains, which makes the inputs easier to distinguish.


Figure 10. Normalised input rate of 10 Hz over 101 seconds

Figure 11. Normalised input rate of 30 Hz over 101 seconds

       

   


Experiment 4. Classifier accuracy for uncorrelated input

In this experiment we tested whether totally uncorrelated spike trains would result in higher success rates. Common sense would say that if the inputs are uncorrelated, then the failure rate should play a minimal role in influencing the amount of information a post-synaptic neuron receives. This was corroborated by the results, which are portrayed in figures 12 and 13.

Figure 12. Normalised input of 10 Hz over 101 seconds

   

Figure 13. Normalised input of 30 Hz over 101 seconds

   


As can be seen, the probability of failure did not matter: all the accuracies were above 80%. The results at a failure probability of 10% can be ignored, because they can be attributed to the failure of the SVM classifier itself during training; as such, they do not represent the actual result.

Experiment 5. Classifier accuracy for Poissonian input and independent failure events

In this experiment we used Poissonian spike trains that were initially fully correlated. Synaptic failure events, however, cancelled spike events independently in both spike trains and had a decorrelating effect (fig 14, 15).

Figure 14. Normalised input of 10 Hz over 101 seconds

As can be seen from the graph (fig 14), the accuracy of the classifier rises in proportion to the synaptic failure probability. At a 0% failure rate the two spike trains were identical and there was no way for the classifier to differentiate between the weights of the input neurons. With a failure probability of 60% or more, the classification accuracy comes close to its maximum at both 10 Hz and 30 Hz input frequency (fig 14, 15).


Figure 15. Normalised input of 30 Hz over 101 seconds

     

6. Concluding remarks

It can be concluded that synaptic failure is a significant event in information processing. On the one hand, it serves to decorrelate nearly identical input spike trains and therefore could sustain the specificity of source information in the neuron model described above and, possibly, also in vivo. In fully decorrelated spike trains synaptic failure does not alter the neuron's ability to distinguish its source of information and therefore does not enhance information transmission. Although no optimal probability of failure was pointed out in this study, nor were different accuracies compared statistically, these data might suggest that for each input type (except fully decorrelated spike trains) there is an optimal failure probability between 0.1 and 0.8 that serves to enhance information processing and input discrimination.

Future studies should extend the model proposed above to contain (a) a bigger population of input neurons, (b) longer simulations, to be able to gather more data and allocate more data to training and test sets, (c) different methods of classification, as the SVM classifier sometimes fails to adequately categorize the weights of the input neurons, (d) different levels of correlation between Poissonian input spike trains, and (e) experiments on plasticity, addressing the possibility of input-specific synaptic plasticity in the face of different probabilities of failure.


We would like to thank Ardi Tampuu, Raul Vicente and Taivo Pungas for their kind help and contributions to this work.

References

Guo, D., Li, C. (2012) Population rate coding in recurrent neuronal networks with unreliable synapses. Cogn Neurodyn, (6): 75-97.

Levy, W.B., Baxter, R.A. (2002) Energy-efficient Neuronal Computation via Quantal Synaptic Failures. The Journal of Neuroscience, 22(11): 4746-4755.

Rosenbaum, R., Zimnik, A., Zheng, F., Turner, R.S., Alzheimer, C., Doiron, B. & Rubin, J.E. (2014) Axonal and synaptic failure suppress the transfer of firing rate oscillations, synchrony and information during high frequency deep brain stimulation. Neurobiology of Disease, (62): 86-99.

Shouval, H.Z., Bear, M.F., Cooper, L.N. (2002) A unified model of NMDA receptor-dependent bidirectional synaptic plasticity. PNAS, 99(19): 10831-10836.

Sincich, L.C., Horton, J.C., Sharpee, T.O. (2009) Preserving Information in Neural Transmission. The Journal of Neuroscience, 29(19): 6207-6216.

So, R.Q., Kent, A.R. & Grill, W.M. (2012) Relative contributions of local cell and passing fiber activation and silencing to changes in thalamic fidelity during deep brain stimulation and lesioning: a computational modeling study. J Comput Neurosci, (32): 499-519.

Thomson, A. (2000) Facilitation, augmentation and potentiation at central synapses. Trends Neurosci, 23: 305-312.

Yeung, L.C., Shouval, H.Z., Blais, B.S. & Cooper, L.N. (2004) Synaptic homeostasis and input selectivity follow from a calcium-dependent plasticity model. PNAS, 101(41): 14943-14948.