

COMPUTATIONAL MODELING OF THE MEMORY RECALL CAPACITY OF NEURAL NETWORKS AND THE EFFECTS OF RANDOM NEURON FAILURE

Robert Aguilar, Harold Chen, Rajani Deshpande, Nicole Katchur

Minsoo Khang, Kenneth Li, Aleta Murphy, Sudheesha Perera, Shreyas Shirodkar, Niklas Sjöquist, Amulya Yalamanchili, Susanna Yu

Advisor: Dr. Minjoon Kouh

Assistant: Aaron Loether

ABSTRACT

A number of neurodegenerative diseases, such as Alzheimer’s Disease, are known to involve the decay or death of neurons and synapses, resulting in loss of memory. We built a computational model of auto-associative memory (a Hopfield network) and explored the capacity of the artificial neural network to store and recall patterns. We investigated the effects of random synaptic failures on the memory capacity of the computational model. Our results show that the number of patterns the model was able to recall was directly proportional to the number of neurons in the simulated network. Furthermore, our findings reveal that the interconnectedness of a neural network is at least as critical to memory recall as the sheer size of the network in number of neurons.

INTRODUCTION

Neural activity is the basis for learning, information processing, and memory. The objective of this study was to build a computational model to simulate the neural activity involved in memory storage. The model was created in MATLAB (MathWorks, Inc.) and used to explore the memory capacity of neural networks, especially when we induced failure in varying percentages of the neurons as an analogy to neurodegenerative disease. In these simulations, it was expected that the amount of information recalled by the model would increase as the number of neurons increased, based on the knowledge that more neurons allow for more synapses. Additionally, it was predicted that the percent recall would decrease as the number of patterns increased, since information is stored in the connections between neurons.

Furthermore, a greater number of patterns is more difficult to recall because it requires more neural connections. In the same vein, more expansive neural networks were expected to recall more information over time when compared with smaller neural networks. When neurons were eliminated from the network in order to simulate the memory loss symptoms of neurodegenerative diseases, the percentage of errors made was expected to increase, since the removal of each neuron destroys multiple connections. Therefore, it was predicted that large, densely wired neural networks would be able to maintain high levels of information recall amid random synaptic failures and, by extension, delay the effects of neurodegenerative diseases.

Neuron Basics

The brain comprises cells called neurons, which communicate with each other, sending information throughout the brain and body. All neurons have three basic parts: the cell body, dendrites, and the axon. The cell body houses organelles, such as the nucleus, that perform functions necessary in all cells. The dendrites connected to the cell body are heavily branched structures that receive signals from other neurons; their branching increases the surface area of the cell, allowing it to connect to more neurons. The axon, a long and mostly unbranched structure that facilitates signal transmission, extends from the end of the cell body opposite the dendrites.1

Neurons communicate with each other at points called synapses, highly variable structures between the axon of one neuron and the dendrite of another that can be created and destroyed. A fluctuation of membrane potential caused by the movement of ions, called an action potential, travels along the pre-synaptic neuron, causing a release of neurotransmitter at the synapse. These neurotransmitters diffuse across the synapse and bind to receptors on the post-synaptic neuron, where they can cause the post-synaptic neuron to fire an action potential of its own. Action potentials can travel at speeds of up to several hundred miles per hour along the axon’s membrane, and a neuron can fire multiple times per second.2

One principle that is essential to understanding action potentials is the all-or-none phenomenon, which states that the size of an action potential is independent of the strength of the stimulus; i.e., the amplitude of an action potential is always the same. There are no partial action potentials.3 This all-or-none phenomenon allows neural activity to be coded in binary, which is very useful for computational modeling: neurons whose membrane potential crosses the threshold and fire can be coded as 1’s, and resting, inactive neurons can be coded as 0’s.
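As a minimal illustration of this binary coding in MATLAB (the platform used for this model), the membrane potentials and threshold below are hypothetical values, not taken from the model itself:

% Hypothetical membrane potentials (mV) for five neurons at one instant
potentials = [-70 -54 -30 -68 -40];
threshold = -55;    % illustrative firing threshold (mV)

% All-or-none coding: 1 if the neuron fires, 0 if it remains at rest
states = double(potentials > threshold);    % yields [0 1 1 0 1]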

Memory Formation

Memory encoding is the first step of memory formation, beginning with the perception of an object or event, which causes neurons in the brain to fire more frequently. This increases the likelihood of encoding the event as a memory. The pieces of information interpreted in the different sensory areas of the cortex congregate in the hippocampus. The hippocampus organizes new information by evaluating connections with previously recorded data until a neural network of cortical synapses records the various associations linked to the memory.3

After the initial acquisition of the memory, the brain stabilizes the memory trace through consolidation. The brain exploits long-term potentiation, enabling more signals to be transmitted between synchronized neurons, thus strengthening synapses by making them more inclined to fire together later on. Neural messages also become more inclined to flow along certain patterns to stabilize the memory.3

Though other theories exist, it is accepted by many experts that as a person acquires new knowledge and creates new memories, synapses can change in strength and rearrange themselves; this was the hypothesis that we used as the basis for our model. A single memory is embedded in multiple connections, with each connection involved in several memories. It follows from this that multiple memories may be encoded within one neural network with different patterns. Conversely, a single memory may involve simultaneously activating different groups of neurons in different parts of the brain. This ability to reorganize connections, known as synaptic plasticity, may allow for lasting, more efficient changes in synaptic transmission and set the foundation for human learning.4

Memory retrieval involves the recall of previously encoded information as different stored elements scattered across the cortex and their reconstruction. Memory is accessed in two main ways: one involves directly uncovering information on an object, fact, or event from memory, while the other involves the association of an object with one formerly encountered. The latter method allows for auto-associative memory, in which an entire memory is retrieved upon seeing only a small portion of it. For example, an adult may hear just the name of a childhood friend, bringing back old memories of how he met that individual and other interactions. An auto-associative memory retrieves a formerly encoded pattern that closely resembles the newly encountered pattern. The similar neural activity patterns and synaptic structure allow the brain to associate the previous and current information.4

As remembering memories involves their reconstruction, forgetting memories can be attributed either to an error in the encoding of the memory or to an inability, temporary or permanent, to piece together certain facets of the memory. The rate of forgetting a memory typically decreases as synaptic connections are destroyed; information loss may be very rapid when few synaptic connections are removed, then slows as more are removed. In contrast, information that has been learned very well (i.e., names, basic vocabulary) is usually resistant to loss.5

Causes of Memory Loss

Causes of memory loss include medications like antidepressants and painkillers, which alter levels of neurotransmitters necessary for neural communication.5 Strokes can also lead to loss of memory; they are either ischemic, which result when the blood supply to the brain is cut off, or hemorrhagic, which are the product of bleeding in the brain. Neurons die when there is an insufficient supply of oxygen from blood; in this way, strokes can cause short-term memory loss. Traumatic brain injury (TBI) may occur when blunt, non-penetrating trauma occurs to the head, resulting in brain damage, or when an object punctures the skull and directly injures the brain. Neurons are killed when brain tissue is damaged. Neurodegenerative diseases including Alzheimer’s, Parkinson’s, and Huntington’s involve the decay of neurons. For example, Parkinson’s disease is a motor system disorder caused by the death of brain cells that produce dopamine. These diseases can cause dementia, a group of symptoms involving detriment to thinking, memory, behavior, and other functions. Some individuals, particularly the elderly, may develop multiple neurodegenerative diseases, which increases their chances of developing dementia.6

Alzheimer’s Disease

Alzheimer’s disease is a neurodegenerative disease that impairs neural activity and connections, eventually causing the death of neurons. This disease is most prevalent among the elderly, and is the most common cause of dementia among people above the age of 65. Though anatomical and physiological knowledge of the disease has improved, many agree that the disease is not fully understood.7

The brains of patients with Alzheimer’s disease have two neural abnormalities that affect brain function: the presence of amyloid plaques and neurofibrillary tangles. Amyloid plaques consist of a protein peptide called beta-amyloid and are found in the spaces between neurons in the brain. Scientists are unsure whether the plaques themselves react with receptors on neighboring cells and synapses to affect neural function. Neurofibrillary tangles are twisted protein threads found within neurons, composed mostly of the protein tau. In Alzheimer’s disease, tau, a protein that normally stabilizes the cytoskeleton of a neuron, separates from the microtubules and tangles together with other strands of tau. The resulting tangles within the cell can cause the microtubules to disintegrate, ruining the transport system within the neuron. As a result, the ability of these neurons to send signals across synapses is reduced, and this loss of inter-cellular communication leads to memory loss.8

The disease begins in the entorhinal cortex, a part of the brain that is directly connected to the hippocampus, which plays a critical role in the formation and recall of memory. Neurons in this area begin to lose efficiency and the ability to communicate. This process spreads to the hippocampus, a region of the brain involved in converting short-term memories to long-term memories. As cells lose function and begin to die, the affected regions begin to shrink, and ventricles, the fluid-filled spaces in the brain, begin to enlarge.9

Where Neuroscience Meets Computer Science

In computational neuroscience, the brain is treated as a function where neurons take inputs such as sounds, images, and smells, and create an output of perceptions, decisions, and behaviors. Outputs affect inputs as feedback, and the process repeats. Neurons are represented as nonlinear, dynamic input-output devices, while networks of neurons are represented as clusters of cells interconnected by synapses. Finally, learning through synaptic changes is expressed as plasticity.

The Hopfield network is an artificial neural network designed to represent memory. A memory is represented as a set of binary patterns, while the memory itself is stored in a matrix (a rectangular array of numbers) of “synaptic weights” based on the strengths of the connections between neurons. Donald Hebb first proposed that the strength of the connection between two neurons arises from frequent and persistent stimulation of the post-synaptic cell, a view often concisely summarized as the neurobiological dogma “neurons that fire together, wire together.” Accordingly, each time two neurons of the input pattern hold similar values (“fire together”), their synaptic weight is increased. As more neurons fire together, more connections are built, eventually leading to a state where even slight stimulation of one associated neuron will cause all other associated neurons to fire.10
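As a minimal sketch of this Hebbian rule in MATLAB, consistent with the autoConnectivity.m function listed in the Appendix (the three-neuron pattern is invented for illustration):

% One stored pattern over three neurons (1 = firing, 0 = at rest)
p = [1; 0; 1];

% Map 0/1 states to -1/+1 so that two agreeing neurons contribute +1
% and two disagreeing neurons contribute -1, then take the outer
% product to obtain every pair-wise weight change at once
W = (2*p - 1) * (2*p - 1)';

% Self-connections are not allowed, so zero the diagonal
W = W - diag(diag(W));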

The dynamics of the Hopfield network are also defined by the order in which the neurons are updated. The first method is an asynchronous update, in which one random neuron is picked, its weighted input sum is calculated, and the neuron is updated immediately. The second method is a synchronous update, in which the weighted input sums of all neurons are calculated without updating any neuron, and then all neurons are simultaneously set to their new values. The Hopfield network can also be used to represent auto-associative memory. An associative memory carries connected patterns that are encoded in some form, and an input pattern is associated with itself as well as with other outputs. When a distorted or partial pattern is presented, the associated pattern pair that was “remembered” is recalled based on a weight matrix of all patterns to be remembered. The patterns stored in the network are divided into cue and association: when the cue is entered into the network, the entire pattern, stored in the weight matrix, is retrieved by association.11

METHODS

Computational Methods Utilized to Simulate the Neural Network

Auto-associative Memory

In order to simulate the storage of patterns as modeled by Hopfield, synaptic weight changes must be made; the synaptic weights form the basis of the network’s “memory.” As previously stated, according to Hebb’s principle, neurons that “fire” together “wire” together, and vice versa. Thus, simultaneous activation of neurons produces a learning effect on neural networks. Simulation of such behavior was achieved through the autoConnectivity.m function, which outputs a synaptic weight matrix based on a pattern input matrix. This synaptic weight matrix is used in the subsequent assessments of the network’s memory.

The autoConnectivity.m function accepts one input: the pattern matrix containing the patterns intended for storage. The rows of this pattern matrix correspond to neurons and the columns to patterns. The function works through the matrix, noting the state (1 or 0) of the individual cells and updating the appropriate synaptic weight cells: 1 is added to a weight if the two neurons have the same state, and 1 is subtracted if they have opposite states. The resulting synaptic weight matrix has dimensions Number of Neurons x Number of Neurons, since it specifies all pair-wise synaptic weights. The diagonal elements of the matrix, representing self-synapses, are set to zero.
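A usage sketch of autoConnectivity.m (the function itself is listed in the Appendix); the two four-neuron patterns here are invented for illustration:

% Pattern matrix P: one neuron per row, one stored pattern per column
P = [1 0;
     1 1;
     0 0;
     0 1];

W = autoConnectivity(P);    % 4 x 4 synaptic weight matrix with a zero diagonal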

Page 6: COMPUTATIONAL MODELING OF THE MEMORY RECALL … · 2012-09-27 · Causes of memory loss include medications like antidepressants and painkillers, which alter levels of neurotransmitters

[7-6]

Synchronous Update of Neural Network States

The synchronous update function, synchUpdate.m, simulates the recall of a memory based on the Hopfield model. The function accepts three inputs: an initial pattern, a set of synaptic weights, and a specified number of time steps. It outputs a neural response pattern, which can subsequently be analyzed to assess the network’s memory capabilities.

The number of rows in the initial pattern input corresponds to the number of neurons in the network. At each time step, the function updates all neurons by multiplying the synaptic weight matrix by the column of the output matrix from the previous time step. For instance, the second column of the output matrix corresponds to the second time step, and every neuron in that column is computed from the first column weighted by the synaptic weight matrix. Final output values are binary. The dimensions of the output matrix are Number of Neurons x Number of Time Steps.
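Continuing the small example above, a recall run with synchUpdate.m might look like the following sketch, where the cue is a corrupted copy of the first stored pattern (the corruption step is illustrative):

target = P(:,1);        % pattern the network should recall
cue = target;
cue(1) = 1 - cue(1);    % flip one neuron to create a degraded cue

Y = synchUpdate(cue, W, 10);    % run 10 synchronous time steps
recalled = Y(:,end);            % network state at the final step
pctRecall = mean(recalled == target) * 100;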

Another version of the update function, asynchUpdate.m, updates the neural network with the same inputs as synchUpdate.m. However, instead of simultaneously updating all neurons at each time step, a single randomly chosen neuron is updated during each iteration. This increases the biological plausibility of the simulation but requires greater computational power, which was unavailable for this simulation.

Master Script

The two basic functions described above are encapsulated in a master script, Memory Model.m, which compiles all necessary commands in the correct order so as to yield a two-dimensional matrix whose cells contain the percent recall of a specific neuron-pattern combination. For instance, the cell at address (2, 2) contains the percent recall of a simulated neural network of 2 neurons recalling a pattern from a list of 2 possible patterns. The output matrix of the Memory Model.m simulation is then used to produce the three-dimensional graph in Figure 2, which shows the relationship between the number of neurons in a neural network, the number of patterns the neurons must store, and the percent recall of every resulting combination.
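A plot along the lines of Figure 2 can be produced from such an output with a few standard MATLAB commands; this is only a sketch, assuming the three-dimensional array Matrix built by the master script in the Appendix (repetitions x neurons x patterns):

% Average percent recall over the repeated runs (first dimension),
% leaving a (neurons x patterns) surface
meanRecall = squeeze(mean(Matrix, 1));

surf(meanRecall);    % surface of percent recall over neurons and patterns
xlabel('Number of patterns');
ylabel('Number of neurons');
zlabel('Percent recall');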

RESULTS

Neural Ability to Remember Patterns

Our main research goal was to determine the mathematical relationship between the number of neurons in a neural network, the number of patterns that the neural network had to remember, and the ability of that neural network to recall the patterns. To do this, we built a computational model that simulates a neural network’s ability to memorize and recall patterns. The main components of the model were appropriate renderings of neurons and the synapses between them. The program we used was MATLAB, which gave us flexibility in modeling the neural network.

A key component of the linear algebra used to model the neural networks was the set of synaptic weight values. The synaptic strength between neurons is modeled by a matrix holding weight values that correspond to the strengths of the connections between communicating neurons. This matrix is created by a function that analyzes a set of initial patterns and then produces synaptic weight values corresponding to the relative association between each pair of neurons. The program then uses this matrix to simulate biologically feasible action potentials between neurons. In this way, autoConnectivity.m follows the Hebbian theory of synaptic plasticity: each instance in which neurons of the initial matrix held similar values (“fire together”) led to an increase in their respective synaptic weights.

The weight matrix is then passed to another function that models the firing of neurons. If an action potential is fired, a “1” is recorded in the new output matrix; otherwise, a “0” is recorded. This binary simulation of memory accurately models the firing of action potentials, because action potentials are “all-or-nothing” responses.

Illustration of Model

With our program, we applied a Hopfield network to matrices of patterns to be remembered (here, we chose an image of Escher’s Eye) in order to memorize and recall a binarized picture. The image was imported as a matrix and then binarized so each pixel was either black or white, creating a final image of around 4,500 pixels. We then ran our autoConnectivity.m program, which produced a weight matrix dependent on the image and trained our model to remember it.

Next, we purposefully reset half of the original image to 0’s. This was our initial state, upon which we would test recall of the pattern to be remembered. We ran asynchUpdate.m (asynchronous update) in order to recall the image. With enough time steps (here, we used 50,000), we observed that the original pattern was recalled with perfect accuracy (Figure 1).

However, recalling only one pattern will result in essentially perfect recall, given enough time. We therefore introduced around 400 new randomized patterns of the same dimensions. Creating another weight matrix based on these patterns, we again had the program recall the same initial state. We observed that at the same time step (50,000) as the single-pattern recall, the pattern was still not fully recalled (Figure 1, F), and even at time step 500,000 there was still some discrepancy (Figure 1, I). From this, we concluded that the patterns to be remembered interfered with each other, resulting in some error in recall.
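In outline, this experiment can be reproduced with the Appendix functions and standard MATLAB image commands; the file name and binarization threshold below are placeholders rather than the actual values used:

img = imread('eye.png');            % placeholder file name (RGB image assumed)
img = double(rgb2gray(img)) > 128;  % binarize: each pixel becomes 0 or 1
pattern = img(:);                   % flatten: one neuron per pixel

W = autoConnectivity(pattern);      % train on the single image

degraded = pattern;
degraded(1:floor(end/2)) = 0;       % reset half of the image to 0's

Y = asynchUpdate(degraded, W, 50000);    % 50,000 asynchronous updates
imshow(reshape(Y(:,end), size(img)));    % display the recalled image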


Figure 1: A, D, G: pattern to be remembered. B, E, H: degraded image. C, F, I: recalled image. C: single-pattern recall with 50,000 time steps and 4,446 neurons. F: 444-pattern recall with 50,000 time steps and 4,446 neurons; note that at this point in time the pattern has not yet been fully recalled with 444 patterns in memory, while the single-pattern version has. I: 444-pattern recall with 500,000 time steps and 4,446 neurons; the pattern has advanced from the middle row, but it has still not been fully recalled, and it is not likely to become more accurate.

Modeling a Neural Network

Using the same basic functions that we used in the recall of the eye, we were then able to model healthy neural networks of up to 256 neurons, each storing a number of patterns equal to up to 30 percent of its size in neurons. For example, when the simulation models a 100-neuron network, the network tries to recall a pattern with interference introduced in the form of 1 to 30 simultaneously stored patterns. Due to the intrinsically random nature of the functions, the simulation was repeated 1,000 times and the average performance was computed.

The results of the Memory Model.m simulation are more readily understood when represented in terms of the relationship between two variables with the third held constant, as opposed to the relationship among all three variables. Figure 3 shows the association between the number of neurons and the percent recall of a neural network when the number of patterns is held constant. The percent recall of the model approaches 100 percent as the number of neurons approaches 256 (the maximum number of neurons used in a given trial). This graph is derived by taking a two-dimensional “slice” of the three-dimensional graph in Figure 2 and displaying it on a two-dimensional plane.

Figure 2: Ability of X neurons to recall Y patterns with no cognitive deficit; i.e., a healthy neural network trying to recall random patterns. Note that the maximum recall is 100% and that the network’s capacity to recall memories reaches a horizontal asymptote at approximately 60% accuracy.


As the number of neurons increased, the percent recall approached a horizontal asymptote at 100 percent. However, as the number of patterns increased, the rate of increase of percent recall with respect to the number of neurons decreased. This means that a larger neural network has the capacity to store and recall more information, with greater efficiency.

Figure 3: Ability of neurons to recall a given number of patterns. At relatively low numbers of patterns (6-10), all trials approached 100% recall when the number of neurons in the network approached 256 (the largest neural network simulated). This indicates that larger neural networks, consisting of more functional synapses, have the capacity to memorize and recall patterns with greater accuracy and efficiency than those consisting of fewer neurons.

Figure 4: Measure of the ability of a set number of neurons to recall an increasing number of patterns. The graph reveals that all neural networks eventually reach a “breaking point,” the point at which the number of patterns overwhelms the network, leading to a significant drop in memory recall.


Another key relationship illuminated by the memory model was that between percent recall and the number of patterns with a fixed number of neurons. Percent recall as a function of the number of patterns forms an S-shaped curve, with the upper asymptote at 100 percent and the lower asymptote at around 60 percent. Figure 4 shows that a network can handle a limited number of patterns until a critical value is reached, beyond which its ability to effectively recall patterns decreases at a much greater rate. For every set of neurons tested (taken at 50-neuron intervals), this critical value lies between 10 and 20 patterns.

Figure 5: A top-down view of Figure 2, with darker, redder colors representing greater recall and yellow colors representing significantly worse recall. The line represents the “breaking point,” showing when a given number of neurons is overwhelmed by the number of patterns.

After considering Figures 2 and 4, we concluded that the location of the steep drop in percent recall changes greatly depending on the number of neurons in the network. By analyzing Figure 5, we derived a rough approximation of the breaking point, a mathematical estimate of the number of patterns P that a number of neurons N can handle: P < NK, where K is the slope of the breaking-point line (about 0.08, or 8 percent) and the acceptable level of recall was arbitrarily chosen as approximately 90 percent.
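As a worked example of this rule, with K = 0.08 a 100-neuron network would be expected to maintain approximately 90 percent recall only while storing fewer than 100 × 0.08 = 8 patterns.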

Memory Capacity when Affected by Random Neural Failure

In the second part of our research, we focused on the effect of random neuron failure on our model’s capacity to store and recall patterns. This was done by randomly “killing” neurons in our simulation, setting the weight of every synapse going into or out of a neuron to 0. The killed neuron is thereby rendered invisible to all of the surrounding neurons, and no signals pass into or out of the cell.
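A minimal sketch of this killing procedure, assuming a weight matrix W from autoConnectivity.m and an illustrative failure rate of 30 percent:

failRate = 0.30;    % fraction of neurons to disable (illustrative)
nNeurons = size(W, 1);
dead = randperm(nNeurons, round(failRate * nNeurons));

W(dead, :) = 0;     % no signals out of the dead neurons
W(:, dead) = 0;     % no signals into the dead neurons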


The memory degradation symptoms of Alzheimer’s Disease, an ancillary focus of our research, can be recognized as early as when 10% of a patient’s neurons are either dead or compromised. We analyzed our model in intervals of 10%, starting at 20% degradation, in order to explore a correlation between neuron death and the ability of a neural network to recall patterns.4

Figure 6: Model of the effect of random synaptic failure on memory recall with a set number of patterns. A: The top curve (in purple) is the control with no inhibition, taken as a slice of the three-dimensional graph pictured in Figure 2. All subsequent curves were altered to produce random synaptic failures in a given percentage of the cells. A larger percentage of neural damage equated with lower percent recall values across all neural network sizes. B: Note that when 50% of the synapses in the 60-neuron network randomly fail, the recorded percent recall is 75%. By comparison, the healthy 30-neuron network of the control showed an 87% recall, a 12-percentage-point advantage over its larger but damaged counterpart. Our findings therefore point toward the conclusion that the interconnectedness of a neural network is at least as critical to memory recall as the sheer size of the network in number of neurons.


Note in Figure 6 that there is a significant decrease in the network’s ability to recall patterns efficiently. As the number of patterns increases, the overall percent recall decreases. Unlike the number of patterns or neurons, the removal of neurons does not seem to change the rate of recall loss; it does, however, affect the overall ability to recall patterns, changing the value of the asymptotes observed in Figure 6. Interestingly, the change in recall ability appears fairly constant across differing amounts of neural decay, indicating a consistent trend in the way a dead neuron affects the network as a whole.

Our findings point toward the conclusion that the interconnectedness of a neural network is at least as critical to memory recall as the sheer size of the network in number of neurons. This is seen in Figure 6 by comparing the recall of larger damaged networks to that of smaller, healthy networks. For example, a 60-neuron network with 50 percent of its synapses randomly failing has a recall of about 75 percent. By comparison, the healthy 30-neuron network of the control had an 87 percent recall, a 12-percentage-point advantage over its larger but damaged counterpart.

Figure 7: Model of random synaptic failure and its effect on the recall ability of a 100-neuron network. Note that increased synaptic failures had a dramatic effect on the maximum recall of the model. While the control maintained 100% recall through 5 patterns, the trial involving just 20% synaptic failures never surpassed 95% recall. Furthermore, when 50% of synapses failed, the model was never able to exceed 83% recall.

In Figure 7, one can observe an S-shaped graph similar to Figure 4, but with a few differences. The left-side asymptotes are affected very similarly to the value changes in Figure 6. However, as the graphs begin to stabilize, the range of differences decreases from 20% to around 10%. This raises some interesting possibilities. One explanation may be that as more patterns are stored, the number of active neurons plays less of a role in recalling memories. Another, more likely possibility is that, due to the nature of the program, recall stabilizes around 50-60% because of the increasingly random nature of the synaptic weights.


Figure 8: Model of increasing neural decay and its effect on the neural network’s ability to recall memories. The percent of neural decay is measured as the percentage of synapses that were randomly destroyed and made non-functional. This graph shows the linear correlation between neural decay and percent recall when a set number of neurons is asked to recall a given number of patterns; in this case, 100 neurons were asked to recall 5 patterns.

After examining the individual graphs in Figures 6 and 7, and observing what appeared to be a constant relationship between recall and neural decay, we decided to investigate the relationship further. Analyzing Figure 8, note the linear relationship between percent recall and percent decay. At a constant number of neurons (100) and patterns (5), the rate of change in percent recall with respect to percent decay is roughly -4/9. This number, however, may change when a different number of neurons or patterns is analyzed.

CONCLUSION

Throughout our research, we observed the relationships among the number of neurons, the number of patterns, neural decay, and percent recall in our computational model of memory. The number of neurons was related to percent recall by a function that resembled a logarithm. This is consistent with our assumptions, as the percent recall could not possibly exceed 100 percent. The asymptotes were at 100 percent in the trials with no neural decay, which confirmed that more neurons can handle much more data.

We determined that the relationship between percent recall and the number of patterns to be memorized took the form of an S-shaped curve, with an upper asymptote at 100 percent on the left and a lower asymptote at around 60 percent on the right. Both the model and our prediction had the same initial values, with neurons able to handle a certain number of patterns before hitting a “breaking point,” defined as a 90 percent recall rate. This is important because it tells us that we can calculate, with great accuracy, how many patterns a certain number of neurons can recall. With our data, this association is represented by the equation P < NK, where K is the constant 0.08, N is the number of neurons in a network, and P is the number of patterns. This association is reasonably accurate up to 256 neurons.

It is also important to note the relationship between the size and the interconnectedness of a neural network when analyzing the model of memory recall. This is seen in Figure 6 by comparing the recall of larger damaged networks to that of smaller, healthy networks. For example, a 60-neuron network with 50 percent of its synapses randomly failing has a recall of about 75 percent, while the healthy 30-neuron control had an 87 percent recall, a 12-percentage-point advantage over its larger, damaged counterpart. This evidence readily reveals that smaller, closely connected networks can store and recall information more efficiently than networks with weaker connections, regardless of overall size. From Figure 8, we can observe a linear correlation between recall percentage and percentage of neural decay for models with the same number of patterns. However, when the neurodegenerative simulations were run with an increasing number of patterns (Figure 7), there was a significant decrease in the amount by which the damaged networks deviated from the healthy network. This was most likely due to all of the networks reaching a stabilized state, which is roughly equivalent to the Alzheimer’s simulation due to the binary nature of the patterns.

In retrospect, we could have changed our experiment in ways that might have led to more definite conclusions. First, if we had eliminated time constraints, accessed more efficient programs, and increased the available computing power, we could have created a simulation modeling more neurons with much greater accuracy, giving us a larger sample size with which to observe trends in the data. It is important to note that current scientific estimates put the number of neurons in the human brain at anywhere between 86 billion and 100 billion; in sharp contrast, our largest simulated neural networks were a mere 256 neurons in size. The data are therefore subject to bias based on the relatively small sample size used in this experiment.

We chose to model the precision of a network by measuring the percent recalled. Most clinical trials of memory recall provide “all-or-nothing” data, determining only whether or not there was a successful recall and nothing else. Their data show if a patient remembered, while our data show how much the network remembered, for example a recall that reached 85 percent of the memory. By not modeling an all-or-nothing response like these trials, we better understood the actual relationship between recall and neurodegeneration.

Ultimately, our findings point toward the conclusion that the interconnectedness of a neural network is at least as critical to memory recall as the sheer size of the network in number of neurons. Although our simplified model does not account for the multitude of cause-and-effect relationships associated with neurodegenerative diseases such as Alzheimer’s Disease, it successfully models the memory loss symptoms of such neural pathologies and provides insight into the mathematical relationships underlying the brain’s functions of memory storage and recall.


REFERENCES

1. Morris R, Tarassenko L, Kenward M. Cognitive systems: information processing meets brain science. Jordan Hill (GBR): Academic Press. 325 p.

2. Nadel L, Samsonovich A, Ryan L, Moscovitch M. Multiple trace theory of human memory: computational, neuroimaging, and neuropsychological results. NCBI (2000) 19-20.

3. Knowles RB, Wyart C, Buldyrev SV, Cruz L, Urbanc B, Hasselmo ME, Stanley HE, Hyman BT. Plaque-induced neurite abnormalities: implications for disruption of neural networks in Alzheimer's disease. National Academy of Science. (1999) 12-14.

4. Squire L, Berg D, Bloom F, Lac S, Ghosh A, Spitzer N. Fundamental neuroscience. Burlington (MA): Academic Press; 2008. 1225 p.

5. James L, Burke D. Journal of experimental psychology: learning, memory and cognition [Internet]. American Psychological Association; 2000 [cited 2012 July 26]. Available from: http://www.apa.org/redirect.html?aspxerrorpath=/journals/xlm.aspx/

6. Lu L, Bludau J. 2011. Causes. In: Library of Congress, editors. Alzheimer's Disease. Santa Barbara (CA): Greenwood. p. 85-124.

7. [NINDS] National Institute of Neurological Disorders and Stroke. c2012. Stroke: hope through research. NIH; [cited 2012 July 26]. Available from: http://www.ninds.nih.gov/disorders/stroke/detail_stroke.htm

8. [NINDS] National Institute of Neurological Disorders and Stroke. c2012. Parkinson's disease: hope through research. NIH; [cited 2012 July 26]. Available from: http://www.ninds.nih.gov/disorders/parkinsons_disease/parkinsons_disease.htm

9. [NIA] National Institute on Aging. 2008. Alzheimer's disease: unraveling the mystery [Internet]. NIH; [cited 2012 Jul 29]. Available from: http://www.nia.nih.gov/alzheimers/publication/part-3-ad-research-better-questions-new-answers/search-new-treatments

10. Hopfield J. Neural networks and physical systems with emergent collective computational abilities. CIT (1982). 8-9.

11. Lee C. 2006. Artificial neural networks [Internet]. Cambridge (MA): MIT; [cited 2012 Jul 29]; 5 p. Available from: http://web.mit.edu/mcraegroup/wwwfiles/ChuangChuang/thesis.htm


APPENDIX

Codes Used for Computational Modeling of Neural Networks

Auto-Associative Memory

function W = autoConnectivity(P)
% P: pattern matrix, one neuron per row and one stored pattern per column
[Nneuron, Npattern] = size(P);

% Hebbian outer-product rule: map 0/1 states to -1/+1, then sum the
% pair-wise products over all stored patterns
W = (2*P - 1) * (2*P - 1)';

% Mask out self-synapses by zeroing the diagonal
MSK = ones(Nneuron) - eye(Nneuron);
W = W .* MSK;

Asynchronous Update of Neural Networks

function Y = asynchUpdate(Y0, W, nTs)
% Y0: initial pattern; W: synaptic weights; nTs: number of time steps
y = zeros(length(Y0), nTs);
y(:,1) = Y0;
for t = 2:nTs
    % Pick one neuron at random and update only that neuron
    which = randi(length(Y0));
    y(:,t) = y(:,t-1);
    y(which,t) = W(which,:) * y(:,t-1);
    % All-or-none threshold: fire (1) if the weighted input sum is positive
    if y(which,t) > 0
        y(which,t) = 1;
    else
        y(which,t) = 0;
    end
end
Y = y;

Synchronous Update of Neural Networks

function Y = synchUpdate(Y0, W, nTs)
% Y0: initial pattern; W: synaptic weights; nTs: number of time steps
Nneuron = length(Y0);
Y = zeros(Nneuron, nTs);
Y(:,1) = Y0;
for t = 2:nTs
    % Update every neuron at once from the previous time step,
    % thresholding the weighted input sums to binary states
    q = W * Y(:,t-1);
    Y(:,t) = q > 0;
end

Memory Model

% Calculating the ability of "x" neurons to remember "y" patterns
percent = 0;
P = zeros(10,1);
I = 0;
runs = 0;

% Creates a 3-D matrix of zeros: repetitions x neurons x patterns.
% This allows for the calculation of standard deviation and average values.
Matrix = zeros(5, 300, ceil(300*.2));

% Specifies the number of times the program runs the memorize and recall functions
for times = 1:100
    for n = 1:300
        % Randomly generates a matrix of patterns for the neurons to memorize and recall
        patterns = round(rand(n, ceil(n*.2)));
        for p = 1:ceil(n*.2)
            % Swap a randomly chosen pattern into the first column and use it as the cue
            I = randi(ceil(n*.2));
            P = patterns(:,I);
            patterns(:,I) = patterns(:,1);
            patterns(:,1) = P;
            Y0 = P;
            W = autoConnectivity(patterns(:,1:p));
            Y = synchUpdate(Y0, W, 20);
            % Returns the percent correct as a value between 0 and 1
            percent = Percent_Correct(Y, P);
            % Updates matrix values to reflect the percent recall of a given
            % neuron-pattern configuration
            Matrix(times, n, p) = Matrix(times, n, p) + percent*100;
        end
    end
end