

An Artificial Cognitive System for Autonomous Navigation

Theory and Simulation

SAVANNAH ECKHARDT

MATURITÄTSARBEIT 2019

BETREUT DURCH KATARINA GROMOVA

KANTONSSCHULE ZÜRCHER OBERLAND


Abstract

This thesis covers the biological and computational properties of navigation and

memory in a cognitive system. It is discussed how biological processes can be

formulated mathematically and thus used to model artificial cognitive systems. Two

computational systems, namely ratSLAM and the DFT Framework, are further

introduced. These models are used as basis for building an artificial neural network that

can perform flexible navigational behaviors. Inspired by the biological foundations of

higher-cognitive level processes, their computational implementation and the two

prevalent computational models, a new brain simulation is introduced. The brain

simulation makes use of dynamic neural fields, built with the software cedar, to

autonomously detect objects of a specific color and navigate to said object. The thesis

concludes by giving an outlook on further possible expansions of the constructed brain

simulation.


Table of Chapters

Preface
1 Introduction
2 Theoretical Foundations
3 Practical Realizations
4 Results
5 Discussion
6 Conclusion
Acknowledgements
Bibliography


Table of Contents

Preface
1 Introduction
  1.1 Subject matter
  1.2 Goal of the project
  1.3 Structure of the thesis
2 Theoretical Foundations
  2.1 Neural Networks
    2.1.1 Functioning of neurons
      2.1.1.1 Anatomy of a Neuron
      2.1.1.2 Synapses
      2.1.1.3 Electrochemical Qualities of a Neuron
      2.1.1.4 Communication Between Neurons: Action Potentials
      2.1.1.5 Threshold Potential and Refractory Periods
      2.1.1.6 Chemical Signaling in Synapses
    2.1.2 Spiking Neural Networks
      2.1.2.1 Spiking Models
      2.1.2.2 Plasticity in SNNs
      2.1.2.3 (Leaky) Integrate and Fire Model
  2.2 Memory
    2.2.1 Biological background
      2.2.1.1 Early Phase of Long-term Plasticity (LTP/LTD)
      2.2.1.2 Late Phase of Long-term Plasticity (LTP/LTD)
      2.2.1.3 Homosynaptic and Heterosynaptic Plasticity
    2.2.2 Modelling Memory
      2.2.2.1 Cell Assemblies
      2.2.2.2 Dynamic Neural Fields
      2.2.2.3 Sequence Learning
      2.2.2.4 Delay Dynamical Systems
  2.3 Navigation
    2.3.1 Brain structures involved in navigation
    2.3.2 Computational Models for navigation
      2.3.2.1 RatSLAM: Simultaneous Localization and Mapping
      2.3.2.2 The Architecture
      2.3.2.3 Experience Mapping
      2.3.2.4 SPA: Simultaneous Planning and Action
      2.3.2.5 Dynamic Field Theory
      2.3.2.6 Architecture of the Dynamical Systems
3 Practical Realizations
  3.1 Course of Action
  3.2 Brain Simulation With Cedar
    3.2.1 Overview
    3.2.2 Serial Order
    3.2.3 Perception
    3.2.4 Kinematics
    3.2.5 Experience Map
    3.2.6 Condition of Satisfaction System
  3.3 Robotic demonstration
    3.3.1 Parameter Tuning
    3.3.2 Zero Dimensional Nodes
    3.3.3 One-dimensional Fields
    3.3.4 Two-dimensional Fields
    3.3.5 Three-dimensional Fields
4 Results
5 Discussion
  5.1 Error Analysis
  5.2 Learning
  5.3 Expansion and Development
6 Conclusion
Acknowledgements
Bibliography


Preface

“The book of nature is written in the language of mathematics.”

– Galileo Galilei

We tend to think of technology and nature as opposites, even opponents, where one

cannot exist or flourish without inhibiting the other. But what if we built a bridge across

this chasm and combined advantages of both sides? I am interested in exploring the

possibilities of more organic computing, where computational systems rely more

heavily on biological principles. When unraveling the workings of nature, we face

immensely complex processes that are handled in the most efficient way possible –

nature is not lavish. Additionally, we can observe that biological organisms are able to

adapt dynamically to changing environments by estimating what behavior ensures

their survival. If we try to mimic these means of natural efficiency and dynamic behavior

in our technology, we could realize even greater machinery and beneficial tools for

everyday life or further scientific research. Inspired by Galileo Galilei's quote: by figuring out the core rules of nature, mathematical formulas can be constructed which serve as the bridge between nature and cutting-edge technology.

In this project, I wanted to emphasize the importance and potential of combining

biology with informatics to create intelligent artificial cognitive systems. By trying to

model a cognitive system myself, I wanted to better understand the processes

happening in an organism’s brain that lead to intelligent behavior. Since memory is

crucial to the autonomy and intelligence of any organism, such a cognitive system can

be further expanded by emphasizing biologically inspired processes of memory

storage. Furthermore, memories allow us to identify ourselves. Past actions and events

shape our personality continuously, also allowing us to extend our horizons and to

ameliorate our behavior as well as our way of thinking. The ability to retain knowledge and apply it to new experiences enables broader networking of the neural cells and

therefore quicker, more efficient and more creative thinking.

Bearing that in mind, the potential of AI can be fully exploited when working with more

dynamic systems that also focus on memory formation. Another interesting

aspect is the ability to associate emotions with certain events which later become a

memory of the artificial organism. By rewarding a robot with long-term memory

capacities when exerting one kind of behavior and punishing it when exercising

another, we are able to implement a built-in value system. Over time, the robot will

evolve some form of standard ethics whose values are remembered and employed in

new situations. The robot will be able to relate planned behaviors to its moral system

and reflect on whether that behavior lies within those principles of morality or not.


If we want to take this even a step further, we could ask ourselves what would happen

if the robot becomes so skilled at reflective thinking that it might question the moral values it has been taught by a supervising figure. One can take any human as an

example and acknowledge that we do not always act as we have been brought up to.

Since humans and other intelligent beings are able to make decisions on their own by

weighing various determining factors, we are also able to employ harmful, destructive

and malevolent behaviors. A lot of people hence become skeptical when talking about

autonomous robots, for fear of them turning evil and eventually subjugating the

entire human race due to their superior intelligence. This argument might sound a little

apocalyptic, but it is not simply plucked out of thin air. Even the most renowned

scientific personages like Stephen Hawking1 warn that fully developed artificial

intelligence could destroy the human race.

However, I would consider these kinds of mindsets to be too conventional and not disruptive enough. We have to expand our ideas and propositions beyond traditional thinking,

which does not allow thoughts disruptive to the status quo. In order to exhaust artificial

intelligence’s full potential, we must renounce the limitations humans and other

biological organisms face. We must not solely use our understanding of human

behavior and the general laws of nature to form the future of technology but expand

that knowledge with creative thinking to come forward with revolutionary ideas.

Why do humans do bad things? A highly philosophical question, though very

important when considering the forming of behavior and thinking of intelligent robots.

Violent and sexual drives are hardwired in our brain. Our most primitive urges derive

from every organism’s ultimate goal: survival and reproduction, hence survival of one’s

own genes. Humans’ ancestors were able to survive without having to use higher

cognitive abilities, namely by relying on instinct and spontaneous emotion. Rage,

aggression and lust were the primary sentiments that decided whether we died or

whether we lived and produced offspring, and thus fulfilled our “purpose”. Even though

we like to think of ourselves as very rational beings, our innate drives often get the

better of us. Egoism, anger, conceit, envy and greed are the five major human

personality traits that lead to destructive and evil behavior. Our additional

consciousness of our own mortality further fuels impulsive and self-serving actions to

diminish our uneasiness regarding our ephemeral existence.

In contrast, artificial systems are not trapped in gradually decaying bodies; besides, software can be transferred to a new hardware system if the old one should become unserviceable. There is also no need to implement primitive needs and drives that go against the system’s rational and moral sense. We get to decide the foundational makeup of the artificial intelligence’s neural setup without having to follow all of nature’s rules. We pick those rules we think are beneficial to a moral and rational cognitive system and abandon characteristics which may lead to a more destructive (more human) form of artificial intelligence. Taking this even further and including a virtual environment à la virtual reality, we as programmers are totally free from any biological and physical restrictions, and are able to build a world with subjects following our own rules.

1 Source: https://www.bbc.com/news/technology-30290540, 20.10.18.

We get to play God.


1 Introduction

1.1 Subject matter

This thesis focuses on how brain cells function and what kinds of functions are necessary for an organism to navigate an environment that is previously unknown to it. It is further explained how these biochemical processes can be formulated as mathematical equations, which are used to build computational models. The areas in the brain responsible for navigation are reconstructed with mathematical formulations, which can then be implemented in a digital or analog dynamic system. The preliminary idea is to implement said algorithm on a real-life robot, which is then able to orient itself in a certain environment similarly to its biological counterparts.

1.2 Goal of the project

The goal of my project was to construct an artificial cognitive system that is able to

perform a simple task like color-based navigation, which also requires other higher-

level cognitive skills such as memory storage. I wanted to program a brain simulation

as biologically plausible as possible, which is also why I dealt with the theoretical

backgrounds of biological and computational cognitive systems so thoroughly. I

wanted to understand how biochemical processes are expressed in a mathematical and

computational way, and why certain techniques are applied when building a

computational model, while others are neglected, be it for biological implausibility

or computational cost. As for my practical work, I further wanted to implement my

brain algorithm on a real-life robot, to mimic a biological organism that navigates

through unknown surroundings, where my brain algorithm would be the parallel to the

cerebral processes happening in the biological organism. Since real-life

implementation can be very costly, I wanted to test my brain simulation beforehand in

a virtual environment, meaning a graphic simulation where any environment and robot

could be built and then connected to the brain algorithm to test the algorithm’s fidelity.

1.3 Structure of the thesis

The thesis breaks down as follows. In the first part, the theoretical foundations are laid out, meaning the theory behind artificial and biological cognitive systems is worked out. This section is divided into three subchapters, where the first one looks at neural

networks and introduces the reader to the functioning of biological and computational

neurons. The second subchapter first addresses higher-level cognitive processes, by

explaining how memories are formed in biological organisms and how that knowledge

can be used to model memory in a computational system. The third subchapter looks

at navigation by elaborating on the brain structures involved in navigation and introducing

the two main models for navigation in a computational system that inspired the

practical approach of my own brain simulation. The second section addresses the

practical aspects of my project, where the reader can find QR Codes, which can be

scanned using a mobile phone. These QR Codes may direct the reader to an excerpt of

my personal notes, to commented code, or to video or other visual data of my brain

simulation and implementation. In the Table of QR Codes one can additionally find the

links leading to said information. In said section, the first subchapter discusses the

course of action of my work where three simulators that can be used for computing a

neural network are introduced. The second subchapter demonstrates my artificial

neural network and explains the logic behind its architecture. In the third subchapter

of the second section, the simulation process, i.e. the means of implementing the network on a virtual robotic arm, will be explained.

of the implemented brain simulation and section five discusses improvement

suggestions as well as possible extensions and future projects. In section six, the thesis

as well as practical project will be reevaluated and concluded.


2 Theoretical Foundations

2.1 Neural Networks

2.1.1 Functioning of neurons

Natural nervous systems are made up of specialized cells called neurons, on which artificial neural networks are based. The basic functions of neurons are to receive

signals which encode information about the state of the environment or the subject’s

body, to determine which information should be passed along and to convey signals

to target cells. [1] The ability to transmit information from the peripheral (PNS)2 to the

central nervous system (CNS)3 (and vice versa) makes the neurons imperative for

thought creation, the generation of behavior, and also forming emotions. Since

neurons have to cover a broad range of different functions, many scientists think that

neurons are the most diverse kind of cell in the body. [2] Generally speaking, neurons

can be grouped into three classes:

1. Sensory neurons receive information about the internal state of the body and

its surroundings and transmit that information in the form of signals to the CNS

where the signal then is processed. [1]

2. Motor neurons receive information from other neurons and convey these

signals to muscles, glands and organs, which in turn exert a commanded

behavior. [1]

3. Interneurons, which are only found in the CNS, act as connection between

sensory and motor neurons and are able to receive information either directly

from these neurons or indirectly from other projecting interneurons.

Interneurons are pivotal to information processing, both in simple reflex circuits

and also more complex circuits in the brain. [1]

2 The system of nerves that fan out from the central nervous system and connect with the skin, internal organs, muscles and exocrine glands (glands that secrete substances such as sweat).
3 Referring to the nervous system of the brain and spinal cord of vertebrates.


2.1.1.1 Anatomy of a Neuron

If we want to replicate a neuron digitally, we have to understand its biological setup first. A neuron consists of three basic parts: the cell body (soma), the axon and the dendrites. The cell body is comparable to other body cells, containing the organelles, the neuron’s nucleus, the cytoplasm and other cell structures. It is responsible for the synthesis of proteins, which enable chemical reactions and act as building material. The dendrites are the branching structures of a neuron and act as the receiving end of a nerve cell. Connected to the dendrites by the axon hillock is the axon of a nerve cell, along which a nerve impulse is conducted and projected onward to other cells. [2] Some axons are also insulated by myelin, a lipid-rich substance which increases the speed of nerve impulse propagation. [1] The myelin sheaths are formed by Schwann cells4 in a process called myelination. Another factor in the propagation speed of a signal are the nodes of Ranvier. These nodes are microscopic gaps in the myelin sheath along the axon that increase the conduction velocity of the nerve impulses5. The nerve impulse eventually exits the neuron at the terminals of the axon and is communicated through synapses to the next cell.

Exhibit 1. Simplified structure of a neuron
This depiction shows the general anatomy of a nerve cell. It must be noted that there are many different types of nerve cells with various structures in order to fulfill specific functions.6

4 Schwann cells are a type of glial cell.
5 Further explained in chapter 2.1.1.4: Communication Between Neurons: Action Potentials.
6 Source: https://upload.wikimedia.org/wikipedia/commons/b/b5/Neuron.svg, 20.10.18.

To remember…
- Axons conduct nerve impulses
- Dendrites receive nerve impulses


2.1.1.2 Synapses

A synapse is a structure consisting of the axon terminal, a dendrite and the synaptic

cleft, which allows neurons to communicate with other nerve cells or cells of the

target effector.7 [3]

In the case of neuron-to-neuron transmission, the source neuron is called the

presynaptic neuron whereas the target neuron is defined as the postsynaptic neuron.

We differentiate synapses into two groups: chemical and electrical synapses. [1] The

chemical synapse is a gap between the neurons (synaptic cleft) where

neurotransmitters exit the presynaptic neuron and dock to the chemoreceptors of the

postsynaptic neuron. On the other hand, electrical synapses describe connections

between neurons that are formed by channel proteins, which allow the electrical nerve

signal to travel directly from the pre- to the postsynaptic neuron. Electrical synapses

allow for a much faster transmission of information; however, by using neurotransmitters as a means of transportation, chemical synapses enable the tuning of the strength of the stimuli. [4] Therefore, the postsynaptic neuron can either be excited

(excitatory ion channel synapses) or inhibited (inhibitory ion channel synapses) by the

binding of specific neurotransmitters, meaning it can become more or less likely to

further propagate the nerve impulse. This quality is highly favorable, since it allows us

to form associations between neurons that can be finely tuned and therefore also

differentiated. The thesis hence disregards the electrical counterpart in the following

sections and focuses on the chemical synapses.

2.1.1.3 Electrochemical Qualities of a Neuron

A key factor behind electrical signal transmission is the concentration gradient between

the outside (extracellular fluid) and the inside of a neuron. Disproportionate

concentrations of positively and negatively charged ions within the membrane and outside of it result in an electric potential difference. [5]

Most of the time, neurons have a negative concentration gradient, meaning there are

more negatively charged ions inside than outside the cell. Although the concentration

gradient is not always static, the cell’s membrane maintains a fairly consistent negative potential between −40 and −90 millivolts. This equilibrium voltage of a neuron’s membrane is known as the resting membrane potential. [5]

During the resting potential, the concentration of sodium (Na+) and chloride (Cl-) ions is

higher in the extracellular fluid, whereas there are greater amounts of potassium ions

(K+) inside of the cell. [5] The membrane is highly permeable to K+ ions, which allows these ions to diffuse in and out of the cell, but only slightly permeable to Na+ ions.

7 Effector: A bodily organ such as a muscle that becomes active if stimulated.


[5] In order to avoid diffusion along the concentration gradient and therefore reaching

a state of electrochemical equilibrium (same concentration of potassium and sodium

ions on both sides), active sodium-potassium pumps transport ions in the opposite

direction of their natural flow. The ions stimulate the protein channels, where the

cytosolic face8 has a high affinity for Na+ and low affinity for K+, while the exoplasmic

face9 of the molecule has a high affinity for K+ and a low affinity for Na+. The positive

net efflux is induced by an inequality of ionic transfer, where the ratio of transported sodium to potassium is about 3:2, allowing the resting membrane potential to become

stable at a negative voltage potential. [6] Furthermore, this explains why the membrane

even exhibits a voltage potential.

Neuronal membranes possess various ion channels, such as channels selective for only potassium or sodium, as well as calcium and anion channels. The peculiarity of these

proteins is that they are sensitive to electrical but also chemical input coming from

neurotransmitters and therefore alter the membrane potential in response to a

received stimulus, by changing the net flow of ions.

The energy needed for pumping the ions against the concentration gradient is sourced

from adenosine triphosphate (ATP), the principal energy-carrying molecule of the cell. ATP

is synthesized in the mitochondria found in the soma of the cell, since axons do not

have any organelles. Enzymes in the sodium-potassium pump then split a phosphate

from the ATP molecule, which releases energy needed to overcome the electric

potential barrier. [6]

2.1.1.4 Communication Between Neurons: Action Potentials

Allothetic and idiothetic information is processed by nerve cells as an electrical event called an action potential. Since the neuronal membrane is voltage-dependent, it is characterized by being able to conduct, transmit and receive electrical signals by opening and closing specific ion channels, which ultimately allows it to generate an action potential. [7] Three main events take place during an action potential or spike: depolarization, repolarization and hyperpolarization.

First, the cell body is depolarized by a triggering event. Neurotransmitters, which were

released by an electrical signal from a previous cell, bind to ionotropic receptor

proteins of the postsynaptic cell that then open their channels in response to the

chemical signal. Ions hence are able to move along the concentration gradient and the

membrane potential is brought closer to 0, which is known as depolarization. [8] [9] As

the membrane potential becomes less polar due to the equalization of the

concentration gradient, voltage-gated sodium channels at the part of the axon closest

to the cell body activate. Positively charged sodium ions (Na+) rush into the negatively

8 Surface of a cell membrane directed toward the cytoplasm (inside) 9 Surface of a cell membrane directed toward the extracellular fluid (outside)


charged axon, causing the membrane to reach a positive potential of about 30 to 40

millivolts. [10] The process of depolarization leads to a lateral cascade of activation of

the sodium channels along the axon, which allows the action potential to travel from

the axon hillock to the axon terminals.

Second, the membrane is repolarized. As it becomes more positive, sodium channels

close and become inactive, whereas potassium channels open up. Now the positively

charged potassium ions (K+) flow out of the positive membrane, since the extracellular

fluid has become negative. This equalizes the concentration gradient again and the

neuronal membrane potential approaches again toward the resting membrane

potential. [8] [10] The inactivation of sodium channels after the peak of an action

potential prevents the back propagation of the action potential, which would lead to a

confused signal for the nervous cell. [10]

Third, the membrane is hyperpolarized, meaning its voltage potential becomes more negative than in its equilibrium state. Hyperpolarization occurs for the reason that potassium channels stay open a little longer after the membrane has reached its resting potential, further allowing cations to exit the axon. [8] The increase in membrane potential negativity inhibits an action potential from being triggered, since the stimulus needed to depolarize the membrane and thus set off a spike is much higher. [8] The duration of hyperpolarization hence is a significant limiting factor in the rate at which information (action potentials) can be communicated. Eventually, the potassium channels close again and the equilibrium state is reestablished through the sodium-potassium pumps.

The nerve impulse created by the sequence of “sodium activation – sodium inactivation – potassium activation” is of very short duration (a few milliseconds) and travels down the nerve fiber of the axon like a wave, the membrane depolarizing in front of and repolarizing behind the peak. The action potential is not reduced in amplitude along the axon; however, conduction velocities can vary, depending on the diameter of the nerve fiber, the surrounding temperature and on whether the fibers are myelinated or not. [11] An important factor for conduction velocity is saltatory conduction. Saltatory conduction describes the process where an action potential jumps from one node of Ranvier to another. This results in a higher conduction velocity, since myelinated sections of the nerve fiber are skipped, which can be looked at as the action potential taking bigger leaps and hence reaching the axon terminals more quickly. [11] The velocity of conduction is of great importance when it comes to information transmission, since action potentials are similar in form and shape11 and only the rate at which they are emitted defines the information that they are conveying.

Exhibit 2. Evolution of an action potential
The diagram displays the change of the membrane voltage when an action potential is emitted. The kernel above the threshold of excitation marks the actual action potential, the spike, which is characterized by the depolarization (rise of voltage) and repolarization (fall of voltage) period.10

10 Source: https://www.khanacademy.org/science/biology/human-biology/neuron-nervous-system/a/depolarization-hyperpolarization-and-action-potentials, 20.10.18.

Exhibit 3. Propagation of an action potential
The image represents the nerve fiber (axon) and its potential during a nerve impulse. The depolarized sections mark the location of the propagating action potential.12

2.1.1.5 Threshold Potential and Refractory Periods

The voltage-dependent sodium channels only become fully activated if the membrane potential reaches a threshold potential. [11] The depolarization caused by the activated ionotropic receptors occurs at a slower rate, until the membrane reaches the critical voltage potential, where almost instantaneously the sodium channels are opened, which causes the membrane potential to spike. At the peak of the action potential, the sodium channels close as instantaneously as they opened, causing the potential to plummet. The reversal of membrane polarity above the action potential threshold defines the nerve impulse, which then travels to the axon terminals without being reduced in amplitude. [11]

11 Their form is not altered since their amplitude remains constant.
12 Source: https://commons.wikimedia.org/wiki/File:Figure_35_02_04.png, 20.10.18.

To remember…
- Synapses allow different neurons to communicate with each other
- Due to an unequal distribution of ions, each membrane has a negative voltage potential (= resting membrane potential)
- Action potentials evolve in three steps: depolarization, repolarization and hyperpolarization
- The rate of emitted action potentials defines the transmitted information

The reaction of a nerve impulse is called an “all-or-none” reaction due to the fact that

there are no gradations between threshold potential and fully activated potential,

meaning the neuron is either at rest with a polarized membrane or it is conducting a

nerve impulse at reverse polarization. [11] With no gradations, the importance of a

signal has to be determined by the firing rate of a neuron. The stronger the stimulus,

the more frequently a neuron fires. In order to prevent overstimulation of the nervous

system, a maximum action potential frequency is defined by refractory periods. During

the absolute refractory period (1-2 ms), it is impossible for the cell to send another

nerve impulse because of the inactivated sodium channels. The relative refractory

period refers to the time after the absolute refractory period, during which it is extremely difficult to emit another action potential. Here the cell is still hyperpolarized and thus needs a higher influx of positive ions to reach the threshold potential. [5]

2.1.1.6 Chemical Signaling in Synapses

As previously discussed, chemicals are able to transmit electrical signals from a neuron

to a target cell. Neurotransmitters are usually small molecules, like amino acids13 and

amines14, that either excite (=stimulate) a neuron to fire or inhibit it from firing. When the membrane of the presynaptic terminals of the axon is depolarized by an action potential, calcium (Ca2+) channels open, allowing calcium to enter the membrane15. It is still uncertain what exactly happens whenever calcium diffuses into the cell membrane, but it is thought that it attaches to the membranes of synaptic vesicles containing the neurotransmitters and somehow facilitates their fusion with the axon terminal membrane. [11] The fusion of vesicle and membrane then allows the neurotransmitters to be released into the synaptic cleft. [12] This expulsion process is known as exocytosis and demonstrates how an electrical signal can be turned into a chemical one, allowing finer tuning of information transmission.

In the synaptic cleft, the neurotransmitters bind spontaneously in a lock-and-key16 mechanism to their type of receptor, which lies in the postsynaptic membrane.

13 E.g. glutamate or aspartate.
14 E.g. dopamine or noradrenaline.
15 Entering by diffusion due to the concentration gradient.
16 Metaphorically, each neurotransmitter (key) fits only into a certain receptor (lock).

To remember…
- Every neuron has a potential threshold that, when surpassed, causes an action potential
- There are no gradations in action potentials
- Neurotransmitters can modify the rate of action potentials

Ionotropic receptors (ion channel pores) open or

close, whereas metabotropic receptors cause an intracellular biochemical cascade when

stimulated, meaning they indirectly open or close membrane ion channels [12]. This

sudden change of permeability to specific ions results in a change in electrical potential

across the membrane, the so-called postsynaptic potential (PSP). An excitatory

postsynaptic potential (EPSP) arises with a net influx of cations (Na+) causing a

depolarization whereas an inhibitory postsynaptic potential (IPSP) emerges with a net

efflux of potassium cations (K+),

making the cytoplasm more

negative. [11]

The PSP is a local potential that varies in amplitude according to the duration and amount of stimulation from neurotransmitters. A PSP gains in amplitude the more neurotransmitters are released, therefore increasing (EPSP) or decreasing (IPSP) the probability of an action potential. [13] Additionally, the hundreds to thousands of synapses on a single neuron enable the tuning of the strength of the propagated action potential even further by summing up the inhibitory and excitatory junctions.

After a neurotransmitter has been recognized by its receptor molecule, it is released back into the synaptic cleft. In order to prevent repetitive and excessive stimulation of the postsynaptic cell, the neurotransmitters have to be quickly removed or chemically inactivated. Transporter proteins in the presynaptic cell membrane carry neurotransmitters in the cleft, such as serotonin, back into the cell by using energy from ATP molecules. The chemicals are then again encapsulated in synaptic vesicles and can be reused. Other neurotransmitters are inactivated by a specific enzyme in the synaptic cleft as soon as they diffuse away from the receptors. The enzymes make the neurotransmitters inactive by breaking them into their component parts. Some components diffuse into the surrounding extracellular fluid, while others are taken back into the presynaptic cell and used for further synthesis of the original neurotransmitter. [14]

Exhibit 4. Exocytosis
The exhibit models the magnification of a chemical synapse, defined by the axon terminal of the presynaptic neuron, the synaptic cleft into which the neurotransmitters diffuse, and the membrane of the dendrites of the postsynaptic neuron.17

17 Source: https://myelitedetail.us/clipart/synapse-clipart-neurotransmitter-clipart_2263160.html, 20.10.18.

2.1.2 Spiking Neural Networks

Spiking neural networks (SNN) are artificial neural networks (ANN) of the third

generation, which mimic natural neural networks more accurately by incorporating the concept of time into the model, in addition to the neuronal and synaptic state. The

idea is that a neuron only transmits a signal (action potential), and therefore

information, if its activity surpasses a certain threshold from below. This signal may

increase or decrease the activity of neighboring neurons, depending on the type of

connection established between them. [15]

In contrast to other ANNs, SNNs operate using spikes as outputs, rather than continuous

values. Spikes can be interpreted as discrete events that take place at specific points in

time. An event either occurs (activity of neuron passes threshold) or it does not (activity

of neuron not great enough to pass threshold), i.e. the output of any SNN is limited to

a binary output: spike {1} or no spike {0}. This could be seen as a deficit compared to the

continuous outputs of ANNs of previous generations, however, SNNs make up for it

by being able to process spatio-temporal data18. To be able to process spatio-temporal

data, such a network has the property that neurons are only connected to neurons local to them, making up neural circuits which are able to process input (=information)

separately. Furthermore, the network’s temporal aspect allows us to record temporal

information about when a spike occurs. These qualities make the SNN theoretically and

fundamentally more powerful than traditional ANNs like convolutional neural networks

(CNN) or recurrent neural networks (RNN). [16]

Short Input: Artificial Neural Networks

This thesis discusses how Spiking Neural Networks are set up and how they can be

implemented on a robotic agent. The following segment offers further insight into the world of artificial networks and briefly explains other networks used to mimic

neural processes.

ANN – Artificial neural networks are computing systems inspired by the biological neural networks that (partly) constitute animal brains. These artificial systems consist of a pool of connected nodes called artificial neurons, while the connections

can be interpreted as the synapses in a biological brain. These connections or synapses

transmit signals which are then processed by the receiving node or neuron and then

projected to the next target node.

18 Real-world sensory data

Page 20: An Artificial Cognitive System for ... - Impuls Mittelschule · The ability to retain knowledge and applying it to new experiences enables broader networking of the neural cells and

15 Theoretical Foundations

RNN – Recurrent neural networks model nodes which are connected in a directed

graph along a sequence. In RNNs, all inputs are related to one another, which enables

the system to predict the next output by relying on the previously processed inputs

which are stored in an internal state. This internal state, which acts as memory, is achieved by looping the network, i.e. one node’s output is the following node’s input, and so on. [17]

[17]

RNNs are used inter alia for next word prediction, stock market prediction and even

music composition. [17]

Exhibit 5. RNN

The exhibit demonstrates how recurrent networks are looped. The output h_n is the input x_{n+1} for the following neuron.19
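As a minimal illustration of this looping, consider the following sketch in Python with NumPy. The layer sizes, random weights and input sequence are purely illustrative assumptions, not part of any particular RNN; the point is that the hidden state h carries information from all previously processed inputs.

import numpy as np

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(8, 4))   # input -> hidden weights (hypothetical sizes)
W_rec = rng.normal(scale=0.1, size=(8, 8))  # hidden -> hidden weights: the loop itself

def rnn_step(h, x):
    # One step of a vanilla RNN: the new state depends on the current
    # input and on the previous state, which acts as internal memory.
    return np.tanh(W_in @ x + W_rec @ h)

h = np.zeros(8)                     # internal state, initially empty
for x in rng.normal(size=(5, 4)):   # a sequence of five inputs
    h = rnn_step(h, x)              # h now reflects all earlier inputs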

CNN – Convolutional neural networks are inspired by biological processes for image

processing. The connectivity pattern between the nodes resembles the organization of

an animal’s visual cortex. The visual field is segmented into overlapping regions called

receptive fields. The different receptive fields are made up of individual cortical neurons

which only respond to a stimulus if it affects their corresponding receptive field. A major advantage of CNNs is that they are independent of prior knowledge, i.e. they do not rely on manually imposed filters and pre-processing but are able to learn autonomously what would otherwise be hand-engineered.

CNNs are applied inter alia in image and video recognition, natural language

processing and recommender systems. The learning process can be exemplified by a

CNN being ‘fed’ millions of images of cats of different breeds, which the network then analyzes and generalizes (what features do all cats have in common?) until it is able to identify an image of a cat of an unfamiliar breed as an image of a cat. [18]

19 Source: https://medium.com/ai-journal/lstm-gru-recurrent-neural-networks-81fe2bcdf1f9, 20.10.18.


Exhibit 6. CNN

Simplified model of how a convolutional neural network works when processing an image. CNNs pass the image through a series of convolutional, nonlinear, pooling (downsampling) and fully connected layers in order to classify the features of said image.20

2.1.2.1 Spiking Models

Spiking models try to mirror the neural dynamics of a biological cognitive system, i.e.

they serve as computational models of the processes happening when information is transferred from one neuron to another, as elaborated in subchapter 2.1.1. The

neural dynamics can be looked at as a summation process combined with a mechanism

which triggers action potentials above some critical voltage (threshold). The main

components of a spiking model are an equation for the evolution of the activity of the

membrane potential and a mechanism which generates spikes. [19]

All spiking models share the following biologically accurate properties:

1. Processing information coming from many inputs and producing a single output

in the form of a time-dependent spike.

2. Probability of firing is increased by excitatory inputs and decreased by inhibitory

inputs.

3. The dynamics are defined by at least one state variable, which generates one or

more spikes if it is modified enough. [20]

The basic assumption which underlies most spiking models is that the firing time, the

point in time where an action potential is emitted, carries the neural information, which implies that the specific shape of a spike can be neglected with regard to information transmission. [19] [20]

20 Source: https://adeshpande3.github.io/A-Beginner%27s-Guide-To-Understanding-Convolutional-Neural-Networks/, 20.10.18.


A function of the sequence of firing times, or spike trains21, gives us the number of

spikes fired in a certain time frame.

S(t) = \sum_{f=1}^{n} \delta(t - t_i^f)   (1)

where f = 1, 2, … is the label of the spike, t_i^f is the spiking time and δ(·) is a Dirac function with δ(t) ≠ 0 for t = 0 and ∫_{-∞}^{∞} δ(t) dt = 1. The Greek letter sigma Σ stands for the summation of the number of spikes fired during the time frame t. The Dirac delta function is used in (1) since it models the density of an idealized point charge, which in this case is the action potential. [21] An important quality of the function is that at its origin its value is infinite, whereas everywhere else the function is zero:

\delta(x) = \begin{cases} +\infty, & x = 0 \\ 0, & x \neq 0 \end{cases}   (2)

If we calculate the integral of equation (2) over ]-∞, ∞[, which contains the point of origin x = 0, we receive a value of 1:

\int_{-\infty}^{\infty} \delta(x)\,dx = 1   (3)

This gives us a binary output of either 0 or 1, which corresponds to the notion that a neuron either propagates a spike {1} or does not {0}, and that the shape of the spike (modeled in the Dirac function as a pulse) is of no importance to the model.

21 Spike trains are the number of spikes emitted during a certain time period.
22 Sources: https://www.researchgate.net/figure/Dirac-delta-function-centered-at-the-point-x-for-one-dimensional-problems_fig1_221905992 and https://en.wikipedia.org/wiki/File:Dirac_distribution_PDF.svg

Exhibit 7. Dirac Delta Function
If we let the width of the pulse approach 0, the function becomes infinite at its point of origin but is condensed to model only a point charge, meaning an electrical charge at a mathematical point (point of origin) with no dimensions.22
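In a computer simulation, time is discretized, and the Dirac pulses of equation (1) reduce to binary events on a time grid. The following minimal sketch in Python shows this representation; the time step and firing times are illustrative assumptions. Consistent with the model, only the firing times are stored and the shape of each spike is discarded.

import numpy as np

dt = 1e-3                         # time step of 1 ms (illustrative value)
T = 0.1                           # simulate a 100 ms window
t = np.arange(0.0, T, dt)

firing_times = [0.012, 0.035, 0.036, 0.080]   # hypothetical t_f values in seconds

S = np.zeros_like(t)              # S(t): 1 at a spike, 0 everywhere else
for t_f in firing_times:
    S[int(round(t_f / dt))] = 1.0

print("number of spikes in the window:", int(S.sum()))   # -> 4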


Short Input: Examples of SNNs

Spiking Neural Networks can be implemented in different ways, depending on how the neural processes are interpreted or on which functionality of the neuron is

focused on.

Hodgkin-Huxley (HH) model:

The best-known model for spiking neural networks describes the biophysical aspect of neurons. The Hodgkin-Huxley neuron contains three types of ion channels: one causes leakage and is therefore responsible for the resting membrane potential, whereas the other two channels generate the action potential, since they are both voltage-dependent, with the probability of activation increasing or respectively decreasing with the depolarization of the membrane. The two voltage-dependent channels mimic the sodium and potassium channels in a biological neuron. In the model, there are three sodium activation gates and one slower-responding sodium inactivation gate, which are responsible for the absolute refractory period of an action potential. The other activation channel is made up of four potassium gates, which open (become active) when the membrane is depolarized and shut (become inactive) slowly with its repolarization; due to their slower dynamics they inhibit another action potential from happening immediately, a process which is partly responsible for the relative refractory period. [22] In Exhibit 9, the Hodgkin-Huxley model is additionally depicted visually.

FitzHugh-Nagumo model:

The FitzHugh-Nagumo model is a two-dimensional simplification of the HH model, since it isolates the essential mathematical properties of the electrochemical processes happening in the neural membrane. The model consists of a voltage-like variable that allows self-excitation, as well as a recovery variable with linear dynamics that provides slower negative feedback. Interestingly, the model has, like the HH model, no well-defined threshold for firing, but rather uses a canard trajectory, where a small change in the value of a parameter may lead to an ‘all-or-none’ type of response, namely by a process called canard explosion23. A numerical sketch of this model is given at the end of this segment.

Other Spiking Neural Network models include the Hindmarsh-Rose and Integrate-

and-fire model, the latter being explained in detail in chapter 2.1.2.3.
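As an illustration, the following minimal sketch integrates the FitzHugh-Nagumo equations with the Euler method in Python. The equations and parameter values used here are a standard form from the literature, assumed for illustration rather than taken from this thesis.

a, b, eps = 0.7, 0.8, 0.08    # standard illustrative parameters
I_ext = 0.5                   # constant external current
dt, steps = 0.01, 20000       # Euler step and number of steps

v, w = -1.0, -0.5             # voltage-like and recovery variables
v_max = v
for _ in range(steps):
    dv = v - v**3 / 3.0 - w + I_ext   # fast, self-exciting dynamics
    dw = eps * (v + a - b * w)        # slow linear recovery (negative feedback)
    v, w = v + dt * dv, w + dt * dw
    v_max = max(v_max, v)

# With I_ext large enough, v repeatedly makes large excursions
# (relaxation oscillations) rather than a single graded response.
print("peak of the voltage-like variable:", round(v_max, 3))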

2.1.2.2 Plasticity in SNNs

To model an artificial cognitive system that mimics its biological counterpart more

accurately, synaptic plasticity is often reflected and implemented in the model. Synaptic

plasticity is the ability of a synapse to either strengthen or weaken over time, due to an

increase or decrease in activity, meaning the more a connection between two neurons

23 Fast transition from small amplitude limit cycle to a large amplitude relaxation cycle.


is used, the stronger it becomes. This quality is thought to be fundamental to learning

and memory creation. [23] The increase in efficacy in proportion to the degree of

correlation between pre- and post-synaptic activity is called Hebbian learning and was first proposed by Donald O. Hebb. [24] This means that neurons which are repeatedly active at the same time will become associated with each other; the synapse between them will therefore be strengthened. Associative learning leads to the establishment of activity patterns in the neural network: when a certain input current is given, groups of associated neurons, called cell assemblies, interact in an automated fashion. [25]

In simulations for higher level phenomena like navigation or formation of memory,

phenomenological models are typically used, where the biochemical and physiological

aspects of synaptic plasticity are not taken into account, in order to simplify the model

and reduce unnecessary computational cost. Both kinds of phenomenological models, rate-based and spike-based, take a set of variables as an input and produce a change in synaptic

efficacy as an output. [26]

(a) Rate-based models: The synaptic efficacy is determined by the pre- and postsynaptic firing rates, which can be formulated as:

\frac{dW_i}{dt} = f(x_i, y, W_i, \text{other})   (4)

where W_i is the synaptic efficacy of synapse i, x_i is the firing rate of the presynaptic neuron i, and y is the firing rate of the postsynaptic neuron. The function f may be any SNN-model-specific function. Other variables might account for reward signals or averages of the rate variables. [26] Examples may include a learning rate and firing rate constants for the source neuron and the target neuron. To avoid uncontrolled weight growth, normalizing as well as competitive factors are added, resulting in more stable and selective receptive fields. [26]

(b) Spike timing-based models: Spike-timing dependent plasticity (STDP) has been found to occur between hippocampal or cortical pyramidal neurons in juvenile rodents’ brains. In this model, the synaptic efficacy is dependent on the difference Δt in firing times between the pre- and postsynaptic neuron, Δt = t_post − t_pre. [27] If the spike timing difference between the postsynaptic and presynaptic spike is positive, the synapse is potentiated24; if the difference is negative, the synapse is depressed.25 Typically, potentiation happens when Δt roughly amounts to 10 ms26. [27] A code sketch of both rule families follows after Exhibit 8.

To remember…
- The neural dynamics of an SNN can be looked at as a summation process of spikes
- The Dirac function abolishes the shape of an action potential
- Hebbian learning: neurons that fire together, wire together

The simplest model of STDP reproduces a curve as in Exhibit 8, where the change in amplitude of an excitatory synapse is plotted against the spike time difference, which results in a schematic asymptotic curve.

Exhibit 8. Asymptotic curve STDP
Change of the EPSP amplitudes as a function of the time difference of the firing times Δt. The synaptic change is greatest when the postsynaptic neuron fires almost immediately after the presynaptic neuron; its excitability then decays exponentially as the temporal delay becomes greater. On the other hand, the postsynaptic potential hardly changes when the presynaptic neuron fires a relatively long time after the postsynaptic neuron, presumably due to the remote distance of the neurons, whereas it becomes depressed if the firing time of the presynaptic neuron is only slightly delayed. Inset: postsynaptic action potential relative to the time of the synaptic spike (vertical line).27
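The sketch below illustrates both rule families in Python. The functional forms and constants are illustrative assumptions, not taken from the thesis: the rate-based update is one possible instance of equation (4) (with a decay term standing in for the normalizing factors mentioned above), and the STDP update implements an exponential window like the curve in Exhibit 8.

import math

def hebbian_rate_update(w, x_pre, y_post, lr=0.01):
    # One possible instance of equation (4): efficacy grows with correlated
    # pre- and postsynaptic firing rates; the -w term limits unbounded growth.
    return w + lr * (x_pre * y_post - w)

def stdp_delta_w(t_post, t_pre, a_plus=0.01, a_minus=0.012, tau=0.020):
    # Pair-based STDP window: potentiation for positive dt = t_post - t_pre,
    # depression for negative dt, both decaying exponentially with |dt|.
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # pre before post: strengthen
    return -a_minus * math.exp(dt / tau)       # post before pre: weaken

# Example: post fires 10 ms after pre (potentiation), and the reverse (depression).
print(stdp_delta_w(0.010, 0.000))   # positive weight change
print(stdp_delta_w(0.000, 0.010))   # negative weight change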

2.1.2.3 (Leaky) Integrate and Fire Model

The most commonly used models for SNNs are the Integrate-and-Fire (IF) and Leaky Integrate-and-Fire (LIF) units. These are relatively simple models for how neurons behave when stimulated by a given input. The simplicity stems from the models’ property that action potentials are described as discrete events, without regard to the shape of the action potentials. (L)IF models divide the voltage changes of the neuronal membrane into two parts:

(a) The membrane behaves passively if its voltage lies below a given action potential threshold θ. This means that the membrane has no voltage-dependent ion channels, which contributes to the membrane potential decaying to a certain resting potential due to its leaky capacitance28. The resting voltage level defines the equilibrium state of a neuron. [28]

24 Strengthened.
25 Weakened.
26 Such estimates are useful for later tuning of parameters in a brain simulation.
27 Source: https://www.semanticscholar.org/paper/Spike-Timing-Dependent-Plasticity%2C-Learning-Rules-Senn-Pfister/11e05896ae8cc3dfc94b8c909e71fb46b0939409/figure/0, 20.10.18.
28 The membrane gradually loses charge Q, which results in a lower voltage level U: U = RI = R \frac{\Delta Q}{\Delta t}.


(b) The voltage of the membrane reaches the action potential threshold θ due to

injected currents (input). The model assumes a spike at the time of such a threshold

crossing, after which the membrane is reset to a hyperpolarized29 voltage level. [28]

In order to link the momentary voltage of the membrane to an input current, the laws of electricity are applied to the neuronal model. If the neuron receives a short current pulse in the form of an action potential, some of the additional electrical charge is stored in the cell membrane, which acts as a relatively good insulator. Due to this quality, the cell membrane is interpreted as a capacitor in the IF model. The LIF model assumes that the insulation is not perfect and thus characterizes the cell membrane by a finite leak resistance, which causes the exponential decay to the membrane’s resting potential. [19]

Exhibit 9. Hodgkin-Huxley model
Electrical model of how action potentials are initiated and propagated in neurons. C_m is the capacitance of the cell membrane, g_n is the nonlinear conductance of the voltage-dependent ion channels, while g_L represents the linear conductance of the leaky ion channels. E is the electrochemical gradient that drives the flow of ions, and I_p represents the ion pumps. To include the resistor, the resistance to all ions which diffuse across the membrane must be considered.30

The functioning of an Integrate-and-Fire neuron can be outlined as an electrical circuit as in Exhibit 9, which consists of a capacitor C in parallel with a resistor R, driven by a current I(t). [19] The dynamics of this kind of LIF unit is described by the following formula31:

C \frac{du}{dt}(t) = -\frac{1}{R} u(t) + \left[ i_0(t) + \sum_j w_j i_j(t) \right]   (5)

where u(t) corresponds to the neural membrane potential, C is the membrane

capacitance, R is the input resistance, i0(t) is the external current which causes the neural

state to evolve, ij(t) is the input current from the j-th synaptic input, and wj represents

the strength of the j-th synapse. The first term of equation (5) is the so-called ‘leak

29 Voltage level of the membrane becomes more negative than the resting level.
30 Source: https://en.wikipedia.org/wiki/Hodgkin%E2%80%93Huxley_model#/media/File:Hodgkin-Huxley.svg, 20.10.18.
31 This formula – along with most of the following mathematical expressions – could be derived further by applying the elementary rules of electricity. Said more detailed derivation, however, was omitted due to time constraints.


term’. The second term describes the external and synaptic input as an electrical current, which can be summed up as I(t). [20]

To yield a more standard form of formula (5), the time constant τm = RC is introduced. Understanding the time constant of a neuronal membrane is pivotal when tuning the parameters in a spiking neural network:

\tau_m \frac{du}{dt}(t) = -u(t) + R\, I(t) \qquad (6)

For R→∞, formula (5) describes the IF model, since (-1/R)→0. In both models, the

neuron fires a spike at the firing time tf, where the membrane potential u reaches a

critical value θ, called firing threshold. If u(tf) = θ, the neuron outputs a spike after which

the membrane potential is directly reset to a certain reset value ur (hyperpolarization

state32) and the input currents are updated. [19]

u_r = \lim_{\delta \to 0,\ \delta > 0} u(t_f + \delta) \qquad (7)

The value ur can also be described as the spike-afterpotential and marks the neuron's absolute refractory period. The membrane’s voltage

trajectory is driven by a constant current I0,

which results in the membrane reaching its

equilibrium state after hyperpolarization

(applies to the LIF). In many models,

hyperpolarization is disregarded, and the

membrane potential is simply reset to urest,

its resting potential, after a spike. The

membrane is then manually inhibited from spiking for a set time step, which replicates

the function of hyperpolarization in the biological neural firing mechanism.
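To make this concrete, a minimal numerical sketch of the mechanism just described is given below: equation (6) is integrated with the Euler method, a spike is emitted at a threshold crossing, the membrane is reset to ur, and spiking is manually inhibited for a set refractory time. All parameter values are illustrative assumptions, not tuned values from this thesis.

```python
import numpy as np

# Minimal Euler-integration sketch of a LIF neuron, equation (6).
# All parameter values are illustrative assumptions.
tau_m = 10.0     # membrane time constant in ms (tau_m = R*C)
R = 1.0          # membrane resistance (arbitrary units)
theta = 1.0      # firing threshold
u_r = -0.2       # hyperpolarized reset value u_r
u_rest = 0.0     # resting potential
t_refrac = 2.0   # manually enforced refractory period in ms
dt = 0.1         # integration time step in ms
I0 = 1.5         # constant input current driving the membrane

u = u_rest
refrac_left = 0.0
spike_times = []

for step in range(5000):
    if refrac_left > 0.0:
        refrac_left -= dt            # neuron is inhibited from spiking
        continue
    # Euler step of tau_m * du/dt = -u + R*I
    u += dt / tau_m * (-u + R * I0)
    if u >= theta:                   # threshold crossing: emit a spike
        spike_times.append(step * dt)
        u = u_r                      # reset to the hyperpolarized level
        refrac_left = t_refrac

print(f"{len(spike_times)} spikes, first at t = {spike_times[0]:.1f} ms")
```

With these assumed values the membrane repeatedly charges toward R·I0 = 1.5, crosses θ = 1, and is reset, producing a regular spike train.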

For a neuron to generate a spike, its membrane potential must cross a certain voltage

threshold θ. Thinking back to subchapter 2.1.1.5, said threshold is the membrane

potential where the voltage-dependent sodium channels become fully activated. We

can define the firing threshold by looking at the reset from u(tf) to ur. This reset

corresponds to removing a charge qr from the capacitor, i.e. adding -qr to the capacitor:

-q_r = -C\,(\theta - u_r) \qquad (8)

32 As explained in subchapter 2.1.1.4, a hyperpolarized cell membrane inhibits an action potential from forming instantaneously after a previous spike.

To remember…
- The capacitance and resistance of a membrane make up its time constant.
- The firing time of a spike is its defining characteristic.


The discharge can be described as a short current pulse by multiplying the Dirac function δ(t−tf) with the reset charge −qr. We define the reset current Ir as a pulse, therefore Δt → 0 holds in (9). Since the reset happens each time the neuron fires, we additionally need to sum these current pulses over each spike f, as in (1). [19]

I = \frac{\Delta Q}{\Delta t} \qquad (9)

I_r = -q_r \sum_f \delta(t - t_f) \qquad (10)

The sum in (10) can be expressed as the spike train S(t) from (1) and −qr as in equation (8), resulting in (11):

I_r = -C\,(\theta - u_r)\, S(t) \qquad (11)

If we solve (11) for θ, we can express the firing threshold and therefore maximum

membrane voltage for the neuron mathematically, which again can help us with tuning

the parameters of the model, for the threshold determines a neuron’s sensitivity:

\theta = \frac{I_r}{-C \cdot S(t)} + u_r \qquad (12)

For neurons to transfer action potentials, they must be connected by synapses.

Synapses are, simply put, specialized junctions which can be modified in strength by

adding weights. [20] An input signal i(t) to the postsynaptic neuron is triggered with

the arrival of a presynaptic spike at the linking synapse. Such a signal corresponds to

the synaptic electric current flowing into the biological neuron. [20] The dynamic

evolution of i(t) can be described by the following formula:

i(t) = \int_0^{\infty} S_j(t - s)\, \exp\!\left(-\frac{s}{\tau_s}\right) ds \qquad (13)

where τs is the synaptic time constant and Sj(t) a presynaptic spike train. The synaptic time constant differs from the membrane time constant in that it represents either only the electrical, only the chemical, or both properties of a synapse.

The constant is given by the rate of the inhibitory

postsynaptic potential (IPSP) or excitatory

postsynaptic potential (EPSP) of the synapse, the

electrical time constant of the local region of cells

where the synapse is located and by the chemical

time constants, which account for the binding and

unbinding of neurotransmitters to the receptor

and for configuration changes in the ‘docking stations’ of the receptor. [29]


It has been observed that the IPSP always decays faster than the EPSP, though both postsynaptic potentials have an approximately exponential decay. [30] Since the (L)IF model is generally modeled in a phenomenological manner, the electrical and chemical time constants are disregarded when tuning the synaptic time constant.

To remember…
- When modelling synapses, electrical and chemical properties are generalized into a single synaptic time constant.
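Numerically, equation (13) amounts to filtering the presynaptic spike train with an exponential kernel. The following sketch, under the assumption of a simple per-time-step discretization, decays the synaptic current with τs and increments it whenever a presynaptic spike arrives; all values are illustrative.

```python
import numpy as np

def synaptic_current(spike_train, tau_s, dt):
    """Discretized form of equation (13): exponential filtering of a
    presynaptic spike train (0/1 entries per time step)."""
    i = np.zeros(len(spike_train))
    for t in range(1, len(spike_train)):
        # exponential decay with tau_s plus an increment per spike
        i[t] = i[t - 1] * np.exp(-dt / tau_s) + spike_train[t]
    return i

# Example: three presynaptic spikes, tau_s = 5 ms (assumed value)
spikes = np.zeros(200)
spikes[[20, 60, 65]] = 1.0
current = synaptic_current(spikes, tau_s=5.0, dt=0.1)
```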

2.2 Memory

Spiking Neural Networks have limited memory capacity, which means that a stimulus

arriving at a certain time would vanish over 200-300 milliseconds, prohibiting neural

computations with long history dependencies. [31] Long-term memory is desirable for

the fact that it enables an agent to make use of its previously attained knowledge

at a later point in time. Memories are also pivotal when learning new behaviors or tasks,

since they act as a reference for new information, which can be processed more easily

when associated with already well-established neural circuits. In the following section,

the biochemical processes that enable memory formation will be discussed to better

understand the complexities of a system that is able to store memories in the same

place where they are processed, enabling greater flexibility when it comes to navigational

behavior.

2.2.1 Biological background

If we want to model a program that is able to maintain long-term memory, we first

have to understand how memories are formed and how they work in the first place.

Although memories are perceived as a complex and abstract concept, they can be

viewed as the reactivation of a neural circuit. A neural circuit is a group of neurons that

have become connected by firing in response to one another. The connection between

neurons is measured in the synaptic strength of the shared synapse, which is defined

as the product of presynaptic release probability, the postsynaptic response to the

release of a single neurotransmitter and the number of release sites of said

neurotransmitters. [32] If a neuron is continuously activated by a preceding source

neuron, the junction becomes stronger, whereas synapses that are hardly ever used

become weaker and eventually vanish completely (cf. Exhibit 8).33

2.2.1.1 Early Phase of Long-term Plasticity (LTP/LTD)

Lasting increases in synaptic strength are known as long-term potentiation (LTP),

lasting decreases as long-term depression (LTD). LTP facilitates the reactivation of a

33 Concept of STDP; check section for plasticity in SNNs (p.18)


specific activation pattern in a neuron group when given the matching stimulus

whereas LTD diminishes the probability of activation of the target neuron. LTP and LTD are elicited by NMDA-type34 receptors (NMDARs), which act as a lock for the excitatory neurotransmitter glutamate.35 When glutamate is released from the presynaptic

terminal, it first binds to AMPA-type36 receptors (AMPAR), another major ionotropic

receptor, which have a high conductance for sodium and therefore cause the first steps

of depolarization within the cell. External magnesium ions (Mg2+) enter and clog

the NMDAR's pore during resting membrane potential and are removed when the cell is

sufficiently depolarized. [33] NMDARs then activate more slowly than AMPARs, but also

stay open a lot longer and thus are able to bind glutamate even after AMPARs37 have

closed. This means that NMDARs only conduct currents when glutamate is bound, and

the postsynaptic neuron is depolarized, hence pre- and postsynaptic neurons have to

be active at the same time. [34] Due to these characteristics, the NMDAR is perceived as a

coincidence detector. Coincidence detectors are able to encode information by

identifying the occurrence of temporally close input signals, therefore enabling STDP.

[33]
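Computationally, this coincidence detection is what the STDP rule from chapter 2.1.2.2 captures. As a reminder, a minimal sketch of the common pair-based exponential STDP window is given below; the amplitudes and time constants are assumed example values, not measured ones.

```python
import numpy as np

def stdp_weight_change(dt_ms, a_plus=0.01, a_minus=0.012,
                       tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP window; dt_ms = t_post - t_pre in milliseconds.
    Positive dt_ms (pre before post) potentiates (LTP), negative
    dt_ms depresses (LTD). Parameter values are illustrative."""
    if dt_ms > 0:
        return a_plus * np.exp(-dt_ms / tau_plus)    # LTP branch
    return -a_minus * np.exp(dt_ms / tau_minus)      # LTD branch

print(stdp_weight_change(5.0))   # pre leads post -> potentiation
print(stdp_weight_change(-5.0))  # post leads pre -> depression
```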

Coincident activity38 of the pre- and postsynaptic neuron results in an influx of calcium

ions (Ca2+) through NMDARs. It is believed that if the concentration of calcium is

increased to a certain amount in the target

neuron, calcium-dependent enzymes called

CaMKII are activated in the dendritic spines of

the neuron. [33] Evidence indicates that CaMKII

are responsible for the protein synthesis and

phosphorylation39 of AMPAR in the dendrites.

[33] [35] The phosphorylation of AMPARs

increases the conductance of AMPAR channels,

which facilitates the depolarization of the

postsynaptic neuron. [33] As depicted in Exhibit

10, CaMKII further also synthesize more AMPARs

when activated, thus changing the structure of

the synapses. CaMKII increases the number of

receptor sites on the postsynaptic membranes

and increases the contact surfaces for glutamate neurotransmitters.

34 NMDA: N-methyl-D-aspartate.
35 Glutamate is the most common excitatory neurotransmitter.
36 AMPA: α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptor.
37 AMPARs usually close after very few milliseconds.
38 In the STDP model: Δt > 0; the postsynaptic neuron (target) projects an action potential after the presynaptic neuron (source).
39 Chemical addition of a phosphoryl group to a molecule.

To remember…
- An increased amount of glutamate receptors increases synaptic efficacy.
- The higher the synaptic efficacy, the more likely a neuron is to reactivate.
- Increased excitability in a group of neurons forms a memory.


These new synapses, defined as perforated synapses, modulate the synapse's efficacy by increasing the cell's excitability. [35] LTP is induced by these

perforated synapses, since it is easier to stimulate a circuit of neurons where the

synapses are made up of a higher number of receptors. Since a long-term memory

comes about with the reactivation of a specific activation pattern of neurons, it makes

sense to assume that an important factor to LTP is the increased efficacy of synapses

due to structural changes in the postsynaptic membranes. Due to the structural

changes, this kind of plasticity is also referred to as structural plasticity.

LTD, on the other hand, is triggered when only a moderate calcium influx is generated.

The phosphatases calcineurin and protein phosphatase 1 (PP1) are both calcium-dependent proteins which have a very high affinity for calcium ions and are therefore activated by a modest increase of Ca2+. It is suggested that phosphatases influence the

phosphorylation state of AMPARs in a way that reduces their conductance and hence

decreases synaptic efficacy. Moreover, phosphatases may also induce apoptotic

mechanisms40 of AMPA receptor proteins which reduces the number of AMPARs in the

membrane and further decreases a neuron's excitability. [33] Synapses with fewer

glutamate receptors or less excitable receptors are weak junctions between neurons,

therefore decreasing the probability of joint firing.

40 Programmed cell death caused by chained biochemical events.


2.2.1.2 Late Phase of Long-term Plasticity (LTP/LTD)

Long-term plasticity depends on the maintenance of structural and functional changes.

After the synthesis of new receptor proteins, LTP induction leads to an increase of

dendritic spine density as well as the formation of new dendrites, further increasing

excitability. In LTD, shrinkage and even disappearance of dendrites have been observed

to happen, since the few AMPARs result in redundant dendritic spines. [33] Whereas LTD is a cascade of biochemical reactions that eventually leads to a reduction in synapses, LTP (or its structural and functional manifestation) has to be sustained over a long period of time by different proteins. It has been found that one of these

proteins, CREB42, plays a vital role in the late phase of LTP. CREB is a cellular transcription

factor which binds to certain DNA sequences called cAMP response elements (CRE).

[36] CREB target genes including c-fos, activity-regulated cytoskeleton-associated

protein (ARC), and brain-derived neurotropic factor (BDNF). [37] By binding to these

sequences, CREB increases the transcription of certain genes which encode for proteins

needed to stabilize the synaptic changes that occur during learning and are manifested

41 Source: https://www.researchgate.net/figure/Model-of-AMPA-trafficking-during-LTP-and-LTD-In-the-basal-state-top-receptors-cycle_fig8_311268363, 20.10.18.
42 cAMP-responsive element-binding protein

Exhibit 10. LTP and LTD at hippocampal CA1 synapses
The exhibit exemplifies how connections are strengthened and weakened by STDP. Potentiated postsynaptic neurons become more excitable to incoming stimuli due to the greater amount of glutamate receptors (higher synaptic efficacy). The exact opposite happens in depressed synaptic connections, since the synaptic efficacy of the postsynaptic neuron decreases with the reduced number of glutamate receptors.41


in LTP. Experiments with mice and rats have shown that an overexpression of CREB

results in memory enhancements. [38]

Short Input: Genetic sequences

BDNF encourages growth of new synapses and survival of already existing ones.

Although in mammals most neurons are formed prenatally, some parts of the adult

brain, namely hippocampal structures, have been found to grow new neurons in a

process called neurogenesis. BDNF is hence an important factor when it comes to

neural development and neurogenesis. [39]

ARC is a protein coding gene accounting for the ARC protein which is a key regulator

and stabilizer of synaptic plasticity. ARC mRNA is required for the protein synthesis underlying structural changes in synapses, and the protein also promotes endocytosis43 of AMPARs when a

neuron is sufficiently stimulated. [40] Studies with mice have also shown that knocking

out the Arc gene results in deficiencies in long-term memory. [41] Additionally, further

experiments that inhibited Arc protein expression in the hippocampus have shown that

the protein is essential for the maintenance of LTP and strengthening of long-term

spatial memory. [42]

2.2.1.3 Homosynaptic and Heterosynaptic Plasticity

In order to balance and maintain synaptic weights and therefore retain memory,

different forms of plasticity are needed. We have so far only looked at Hebbian synaptic

plasticity, since it provides powerful cellular mechanisms for learning. [43] Hebbian

synaptic plasticity means plasticity that comes about with changes of strength at

postsynaptic targets (STDP). This kind of plasticity is a ubiquitous form of homosynaptic

plasticity, which is described as input specific, since the synapses between neurons only

gain in strength when the presynaptic action potential (= input) stimulates the postsynaptic neuron in a certain time interval. Studies, however, have shown that there is also input-unspecific plasticity, called heterosynaptic plasticity, which acts complementarily to homosynaptic plasticity. [44] A well-studied example of

heterosynaptic plasticity is neuromodulation.

Modulatory neurons are able to release neuromodulators when an action potential

reaches the axon terminal. Neuromodulators are, like neurotransmitters, chemical molecules, but they affect a diverse population of neurons and in that way have a

greater range of influence. This is to say that the release of neuromodulators can

influence neurons near and far from the source neuron. [45] Contrarily to homosynaptic

plasticity, heterosynaptic plasticity can be induced by solely postsynaptic mechanisms.

[43] Neuromodulators can modify the properties of receptor proteins such as

AMPARs and NMDARs in a way that the postsynaptic membranes become more

43 Process where external molecules are transported into the cell.


efficient in communicating. [46] Since the neuron

releasing the neuromodulator (interneuron) does

not deliver electrical input, the process is a form

of heterosynaptic plasticity. Another way to

create heterosynaptic plasticity is intracellular

tetanization, episodes of strong postsynaptic

activity at synapses that have not been activated

directly during an event of stimulation. [47]

It has been found that heterosynaptic plasticity

further helps normalize and stabilize synaptic weights by depressing synapses strengthened by homosynaptic LTP and potentiating synapses weakened by homosynaptic LTD. This

characteristic, also known as homeostatic

plasticity, helps prevent potentiation or

depression towards extreme weights, since both

Hebbian type LTP and LTD result in a positive

feedback effect, also known as runaway

dynamics. [47]

2.2.2 Modelling Memory

2.2.2.1 Cell Assemblies

In models for functional memory, the majority of the decidedly complex biochemical

processes are disregarded to simplify the simulation and further save computational

costs. In most of these models, a memory recall is represented by a delayed activation

of a cell assembly when the agent has to remember and use relevant information whilst

employing a certain behavior (working memory tasks) or when it has to recognize an

abstract object by using stored information. These cell assemblies are groups of

neurons which are strongly connected and can be interpreted as a functional circuit of

brain activity. [44] Due to their weighted connections, the neurons fire in a particular

manner when activated and demonstrate a pattern of activation. Every neuronal

activation pattern accounts for a specific memory or long-term stored information.

During memory recall, the cell assemblies are strongly active while the other cells –

background cells – show only weak spontaneous activation. It is important to bear in

mind that the content of the memory must not be changed during its recall. The

received input hence should not cause a new pattern of stimulation in neurons, but

should stimulate one neuron, which then produces a cascade of activation signals

along the weighted connections. [44]

To remember…
- Homosynaptic plasticity describes local stimulations and their resulting plastic changes.
- Heterosynaptic plasticity can cause global changes in synaptic excitability.
- Homeostasis limits changes in excitability of neurons for stabilization reasons.


When modelling a simulation with long-term memory where the synaptic plasticity is

still biologically plausible, three major problems arise:

1. Neurons exhibit a variety of forms of plasticity (diversity and differentiation of

cells).

2. Plasticity itself may depend on different factors. We can model plasticity based

on the firing rates of neuronal cells, their voltage potential or based on the

spiking times. [44]

3. Structural, homeostatic and short-term44 plasticity complicate modelling plastic neurons, since the processes are similar in some respects and very different in others, and thus are complicated to define in a mathematical formula. [44]

To simplify plasticity for memory formation in

computational models, mathematical rules of

synaptic plasticity within the cell assemblies are

either considered local or global. The local rule

tries to replicate homosynaptic plasticity by

connecting the neighboring neurons, i.e. neurons

lying within a small radius, with excitatory or

inhibitory junctions. The excitation or depression

rate can be modulated by a static gain that may

be positive (excitatory) or negative (inhibitory).45

Heterosynaptic activity is modelled by making

certain source neurons “global influencers”: These

neurons are connected over a long-range distance to other neurons or even to all of

the individual neurons of a certain group. This technique mimics the ability of

modulatory neurons stimulating a diverse population of neurons. [48] A practical

example for memory modelling is the memory trace, which is introduced and explained

in chapter 2.3.2.6 (p.49).
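To illustrate the two rules, the sketch below builds a weight matrix in which every neuron excites its neighbors within a small radius (local rule) and one designated "global influencer" neuron projects to all other neurons with a static gain (global rule). The size, radius, and gain values are assumed examples, not parameters from my model.

```python
import numpy as np

n = 50             # neurons in the assembly (assumed)
radius = 2         # neighborhood radius of the local rule (assumed)
local_gain = 0.5   # static excitatory gain for neighboring neurons
global_gain = -0.1 # static gain of the global influencer (inhibitory here)
influencer = 0     # index of the "global influencer" neuron (assumed)

# W[i, j] is the weight of the connection from neuron j onto neuron i.
W = np.zeros((n, n))
for i in range(n):
    for j in range(max(0, i - radius), min(n, i + radius + 1)):
        if i != j:
            W[i, j] = local_gain   # local rule: homosynaptic-like coupling
W[:, influencer] = global_gain     # global rule: long-range influence on all
W[influencer, influencer] = 0.0    # no self-connection for the influencer
```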

2.2.2.2 Dynamic Neural Fields

Dynamic Neural Fields (DNFs) are a mathematical framework that tries to mimic cortical

neural tissues. These neural fields form patterns of excitation, meaning the recurrent

interactions of a cell assembly are the fundamental mechanism when it comes to

information processing in the cortex. [49] In his pioneering work, Amari [50] proposes

dynamic neural fields as a model, which uses the average firing rate of a certain cell

population, instead of the temporal dynamics of every neuron within said population.

44 Synaptic efficacy changes depending on the current activity of a neuron group. These dynamics take place on short time scales, as in stimulus-driven activity.
45 Further explication in chapter 2.3.2.6, p.31: Architecture of the Dynamical Systems.

To remember…
- Short-range connections simulate homosynaptic plasticity.
- Long-range connections simulate heterosynaptic plasticity.


Neural fields are spanned over one to three dimensions, each dimension encoding for

a specific variable, such as head direction, color or location coordinates. The dynamics

of neural fields include localized regions of activity, formed as “bumps”, but the

dynamics may also occur in the form of waves, the latter having been observed in the

hippocampus and thalamus when electrically stimulated. The localized regions of

activity bumps, i.e. neurons along the field’s dimensions that show above threshold

activation levels, arise from a Mexican-hat connectivity, which is built from the Gaussian distribution explained later in chapter 2.3.2.1. [51]

These bumps can be looked at as a property of the working memory. [51] The active

bump at a certain location propagates information about a specific variable along the

dimension, which means that this information is stored as long as said bump stays

above threshold. By further connecting neural fields with one another, higher level

cognitive functions – like navigation – can be achieved by having cell populations with

different functions stimulate one another. The use and implementation of DNFs will

additionally be discussed in chapter 2.3.2.5, where their mathematical formulation is also explained.
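As a preview of that formulation, a minimal one-dimensional sketch of such a field is given below, assuming a simple Euler discretization of Amari's field equation, a Mexican-hat interaction kernel (local Gaussian excitation with surrounding inhibition) and a sigmoid output nonlinearity. All numbers are illustrative, not tuned values.

```python
import numpy as np

def dnf_step(u, inp, kernel, dt=1.0, tau=10.0, h=-5.0):
    """One Euler step of a 1D Amari field:
    tau * du/dt = -u + h + input + (kernel convolved with f(u)),
    with a sigmoid output nonlinearity f."""
    f = 1.0 / (1.0 + np.exp(-u))                   # sigmoid output of the field
    lateral = np.convolve(f, kernel, mode="same")  # lateral interaction term
    return u + dt / tau * (-u + h + inp + lateral)

x = np.arange(100)
# Mexican-hat-like kernel: local Gaussian excitation, surrounding inhibition
kernel = 8.0 * np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2) - 1.0
u = np.full(100, -5.0)                             # field starts at resting level h
inp = 7.0 * np.exp(-0.5 * ((x - 50) / 3.0) ** 2)   # localized stimulus at x = 50

for _ in range(200):
    u = dnf_step(u, inp, kernel)
# After relaxation, u holds a self-stabilized above-threshold bump around x = 50.
```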

2.2.2.3 Sequence Learning

Sequence learning is an ability fundamental to the behavior and also cognitive

processes in many organisms. Since any process unfolds in time, it can be represented

as a sequence. For example, when we speak, we produce a sequence of words, which

can be broken down to a sequence of letters, which in return are sequences of different

sounds. Knowing the correct order of any executable or processable information, such

as sounds, is pivotal in performing a task or behavior. The foundation of sequence learning is the ability to remember the order of the information and to keep track of which information has already been executed or processed, to prevent regression along the sequence and repeated execution of the same item. In chapter 2.3.2.6 and

3.2.2, the computational structures for sequences that I have implemented in my model

will be elaborated. [52]

2.2.2.4 Delay Dynamical Systems

In chapter 2.1.2.2 (p.18), homosynaptic plasticity in SNNs has been briefly discussed,

outlining the concepts of STDP, and in the previous subchapter 2.2.2.1 (p.29) a

computational approach on homo- and heterosynaptic plasticity has been looked at.

However, these plastic changes are only of short duration since in an SNN with no slow

processes associated, information is only retained on the timescale of the time constant

of a single neuron. In consequence, increased or decreased excitability of neurons

only occurs within few milliseconds. [31] In delay dynamical systems (DDS), the addition

of delays increases the dynamic range and the range of timescales at which neural

systems process and further retain stimulus.


M. Castellano and G. Pipa have found that combining a DDS with SNNs by nonlinear coupling extends the system's capacity to store memory of neural activity.

This means that the artificial cognitive system obtains greater long-term memory. [31]

I did not explicitly incorporate DDS in my model, but this technique poses as an

interesting possibility to approach higher cognitive capacities46 in artificial intelligence

in a future project.

46 Memory


2.3 Navigation

Navigation is the process of estimating one’s position within an environment and

planning a route from said position to a target point. In order to reach certain goals in

space, the agent47 needs to be able to store, order, encode and act on spatial

information [53] which may be allothetic or idiothetic48.

2.3.1 Brain structures involved in navigation

A cognitive spatial representation of the environment poses as the basis of

autonomous navigation and acts as the cognitive map of an organism. In rats, it has

been found that place cells exhibit

representational properties of the

environment by showing activity in reference

to the rat’s orientation, relative to a specific

landmark. [54] Each place cell is associated

with a certain location, the place field, and

only fires action potentials49 when the rat is

located within said associated place field.

However, further experiments have shown

that place cells primarily account for

representation of translative distances whereas other neurons, the so-called head-

direction cells, show varying activity according to the orientation of the rat’s head in

the horizontal plane. Each head-cell fires maximally when in accordance with a certain

angle without regard to the head’s direction relative to rat’s body, nor to the rat’s

spatial location. This means that those cells are tuned to some fixed axis and therefore

act like an allocentric50 compass. [55] An internal allocentric compass can also partly

explain why rats are still able to navigate in the dark with no visual feedback since their

head direction cells still give them a reference of their own location within the

environment and support path integration51. [56]

47 Agent may be human, animal, robot or a software program.
48 Idiothetic cues include vestibular, optic flow and proprioception, i.e. self-motion cues which are essential for path integration. Allothetic cues are external cues like visual or olfactory inputs which help a system to make sense of its surroundings.
49 Electrochemical signals in brain cells.
50 Linked to a reference frame based on the external environment.
51 Egocentric coding process which allows an animal to memorize its starting location in relation to its current position.

To remember…
- Brain cells representational of the environment are called place cells.
- Head cells are brain cells used for directional sense.


Place cells have been found in the hippocampus proper as well as other extra-

hippocampal areas, which are depicted in Exhibit 11.

The hippocampus proper is made up of four subfields (CA1-CA4) that are partly made

up of pyramidal neurons which are said to play a crucial role in complex cognitive

functions. [57] The neurons in the hippocampus proper receive information from the

entorhinal cortex (EC), a multimodal limbic association area which belongs to the

hippocampal formation. Place cells in the superficial entorhinal cortex (sEC) respond

52 The septal and temporal pole will not be discussed in the paper.
53 Source: https://openi.nlm.nih.gov/detailedresult.php?img=PMC1156904_1471-2202-6-36-3&req=4, 20.10.18.

Exhibit 11. Diagram of the rat hippocampus

The rat brain is transparent for the hippocampal formation, which can be recognized by its “C”-structure. At the

bottom of the figure, the left hippocampus is divided vertically into three sections, their location being given by

the distance from the anterior in millimeters. CA1, CA2, CA3: cornu ammonis fields 1-3; DG: dentate gyrus; EC:

entorhinal cortex; f: fornix; S:subiculum; s: septal pole of the hippocampus; t: temporal pole of the

hippocampus52.53


to stimuli from neocortical areas, where sensory perception and spatial reasoning54 are

performed. On the other hand, place cells in the medial entorhinal cortex (mEC)

account for self-location by receiving proprioceptive55 information. Contrasting to the

hippocampus proper, the place cells in the EC react to directional activity and general properties of the current state of the environment (metrical information), rather than encoding information about specific places (topological information). [53] [58]

The connectivity network between hippocampus proper, the dentate gyrus and

entorhinal cortices is further illustrated in Exhibit 12.

54 Also called visual-spatial ability; the ability to mentally manipulate 2- or 3-dimensional figures.
55 Sense of position of one's own body parts and sense of strength being used in movement.
56 Source: https://www.researchgate.net/figure/a-The-hippocampal-network-The-hippocampus-forms-principally-a-uni-directional-network_fig39_323301864, 20.10.18.

Exhibit 12. The hippocampal network

a) Input from the entorhinal cortex (EC) is delivered to the dentate gyrus and CA3 pyramidal neurons over the

perforant path, which is stimulated by visual or auditory information. The perforant path not only converges

visual and auditory stimuli on the branches of CA3 cells, but also input from the medial and the superficial

entorhinal cortex (mEC/sEC), enabling multiple features to be incorporated into the cognitive space

representation. b) Scheme of the different hippocampal connections, where the delay between the moment of

stimulation to action (input latency) of the connecting structures is given in milliseconds.56


Sensory information, which is processed in the

neocortical areas in the brain like the visual

cortex, the somatosensory and the

sensorimotor cortex, is projected to the

posterior parietal cortex (PPC). The PPC is

believed to have separate representations for

different motor effectors, i.e. body parts. Cell

recordings in primates have also shown that some posterior parietal cells are activated even before a motor act is executed and remain active during its execution. This suggests that those cells play a

significant role in deciding whether an action

should be employed. [59]

The information from the PPC reaches the

entorhinal regions through the

parahippocampal (PaHi) and the perirhinal

cortices (PeRh). Alongside the neocortical input,

the medial EC receives information from the

lateral mammillary nuclei (LMC), the

anterodorsal nucleus of anterior thalamus

(ADN), the postsubiculum (poSC) and indirectly from the dorsal tegmental nuclei

(DTC). [55] Lesions in the LMC impaired the performance on a radial arm maze whereas

lesions in the DTC resulted in defective landmark navigation and path integration

abilities. Further experiments support the belief that the putative location of head

direction cells is the LMC and the DTC. [56] This neural circuit is stimulated by vestibular

information primarily coming from the semicircular canals system, which can sense

angular accelerations and decelerations of the head. [60]

Allothetic information processed in the sEC and (mainly) idiothetic information

transformed in the mEC are projected to the dentate gyrus (DG) through the

performant path (PP). [55] The DG is a part of the hippocampus and is thought to

play a pivotal role in the formation of spatial memory and promotes exploration of the

subject’s surroundings. It has been observed that rodents with lesions in the DG

couldn’t remember a previously explored environment and weren’t able to remap their

surroundings due to the inability of storing new information about spatial properties

of their surroundings. [61] The spatial information storage also allows an organism to

anticipate certain objects in a previously explored environment without actually

receiving sensory stimuli which previously activated the cells. Neurons are therefore

able to compare anticipated and experienced stimulus patterns after exploring an

To remember…
- The hippocampus proper is the main location of place cells.
- The location of head cells is the lateral mammillary nuclei and the dorsal tegmental nuclei.
- The dentate gyrus is an important structure for memory formation and storage.
- Reward-based learning is, inter alia, controlled in the nucleus accumbens.


environment. If the anticipated pattern mismatches with the experienced inputs, the

organism is promoted to explore once again to be able to include changes in the

environment in the spatial map. [54]

The output from the hippocampus is produced by the subiculum whose role in spatial

navigation and mnemonic processing is pivotal but has yet to be investigated more

thoroughly. The subiculum (SC) further projects to the nucleus accumbens (NA),

where navigation control is believed to be achieved by reward-based learning. [55] The

nucleus accumbens is a part of a loop with the prefrontal cortex and the basal

ganglia. The output from the hippocampus is processed in two main sub-components

of the NA which biases the selection of goals in the prefrontal cortex. Dopaminergic

input from the ventral tegmental area to the NA are a contributing factor for the

selection of goals. [62] In the prefrontal cortex, the strongest stimulus, i.e. the

information about the goal orientated action, is projected to the primary motor

cortex, where neural impulses are generated which control the execution of body

movements. [63]

Exhibit 13. A simplified anatomical model

This diagram demonstrates the proposed

connections between brain structures

significant to navigation. The ADN-poSC-

LMN circuit poses as the location of HD

cells. Place cells are located in the sEC, the mEC, the hippocampus proper (CA3-CA1 layers), and also in the DG, where they form a neural spatial representation.

Motivation for goal behavior partly takes

place in the neural circuit between NA-

VTA-SC, where in this model the pivotal

junctions leading to and from the

prefrontal cortex have been

disregarded.57

Building computational systems that are able to navigate in any location has

increasingly become a point of interest in the face of automated vehicles. Different

approaches have been undertaken in order to model a system that is flexible,

57 Borrowed from A. Arleo and W. Gerstner: “Spatial Cognition and Neuro-Mimetic Navigation (2000)” (Online source: https://www.researchgate.net/publication/2241561_Spatial_Cognition_and_Neuro-Mimetic_Navigation_A_Model_of_Hippocampal_Place_Cell_Activity, 20.10.18.)


autonomous, scalable and also robust enough to cope with the broad range of

circumstances coming with an unknown environment. [64]

2.3.2 Computational Models for navigation

The previously elaborated cerebral structures involved in navigation can be replicated

computationally by constructing a software using the mathematical formulations of the

neuronal and synaptic processes within said brain areas. Navigational behavior may be

modeled by different approaches, regarding the architecture of the artificial neural

network. The following subchapters will present two distinct computational models

that use the concept of SNNs to mirror navigational brain structures.

2.3.2.1 RatSLAM: Simultaneous Localization and Mapping

RatSLAM is a robot navigation system based on the brain of rodents. [65] The system

models and makes use of rodent hippocampal structures involved in navigation since

these structures display many of the properties needed to realize SLAM. The goal of SLAM

(Simultaneous Localization and Mapping) is to create a neural system for a mobile

robot, which enables it to build a map of an unknown environment while at the same

time using that map to navigate this environment. The problem of SLAM, however,

consists of five major parts: landmark extraction, data association, state estimation,

state update and landmark update. [66]

M.J. Milford et al.58 introduced a computational neural system that takes on these

problems. In ratSLAM, the model has to be able to locate certain landmarks while at

the same time producing a cognitive map representative of the located landmarks. This

cognitive map stores information about a landmark’s coordinates and maybe even

color or shape, and further allows a representation of the relative distances between

the encountered landmarks. RatSLAM uses a competitive attractor network to integrate

odometric59 data with landmark sensing. [67] It therefore combines idiothetic with

allothetic information to create a stable and recallable cognitive representation of a

formerly unknown environment.

The competitive attractor dynamics is the internal dynamics of the ratSLAM model as

well as of DFT (cf. chapter 2.3.2.5, p.47) and ensures that the total activity in the pose

cells remains constant and stable. The activity distribution describes a discrete Gaussian

distribution, which is representative of the probability distribution of the robot’s pose.

This Gaussian distribution of activity is described as an activity packet, where each

58 Cognitive system introduced in [67].
59 Odometry is the use of data from motor sensors to estimate the change in position over a specific time.


packet represents an estimate (or hypothesis) of what the information given by an

input means.

A competitive attractor network can be modeled in different ways, depending on the

setup of neurons; however, such a network is characterized by three properties:

1. Global inhibition: By connecting all cells to one another with inhibitory

synapses, the general activity without any visual or sensorimotor input will

stabilize to one stable packet, meaning the cells will relax to a resting level.

Global inhibition further allows two activated clusters – competing hypotheses

– to compete, where the more strongly activated packet suppresses the other

packet and hence becomes the dominating hypothesis which is then assumed

true. Since the multiple activity packets have to be reinforced by external stimuli

to gain strength, they need time to compete. The global inhibitory weight

therefore has to be rather gentle. [67]

2. Self-excitation: Each neuron is connected to itself by an excitatory junction. The

reinforcement of activity enables the neuron to stay active, even after external

input has been removed. The connection supposedly may be weighted, however

one must adjust the weights in a way that no runaway dynamics occur, i.e. the

neuron should not become indefinitely strongly activated.60

3. Local excitation: As explained in chapter 2.1.2.1 (p.16), SNNs have the spatial

quality that neurons are only connected to other local neurons, i.e. neurons in a

short range distance. Local excitation mirrors the field of influence of a leaky

source neuron. The excitatory local connections enable the formation of an

activity packet, which is less prone to be disrupted by noisy input, than if only a

single neuron would be activated.

60 Can be implemented by e.g. using a negative exponential function.


Due to the lateral interactions

mentioned above, activation levels in

neural circuits will have a Gaussian

distribution. As mentioned

beforehand, the distribution represents

the probability of what input is being

transmitted. For example, we have a

robot with neurons associated with a

certain color on the color spectrum. If

the robot faces a “true red” object, the

node associated with “true red” will

become most active, but through local

excitation its neighboring nodes,

associated with the color “ruby red” or

“crimson red”, will also be stimulated

internally. This makes sense, since there

is always the possibility that the lighting

is off or that the camera is not very

exact. The Gaussian distribution in a

two-dimensional field (which will be

used in chapter 2.3.2.5) can be

expressed with the following formula:

G(x, y) = A \exp\left[-\left(\frac{(x - x_0)^2}{2\sigma_x^2} + \frac{(y - y_0)^2}{2\sigma_y^2}\right)\right] \qquad (14)

where A is the amplitude, x0 and y0 the center points, i.e. dominant neurons, x and y the range of local distance, and σx and σy61 the x and y spreads of the Gaussian kernel.62
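Formula (14) can be evaluated directly on a grid of neurons. The short sketch below computes such a two-dimensional Gaussian activity packet; the amplitude, center and spreads are assumed example values.

```python
import numpy as np

def gaussian_2d(shape, center, sigma, amplitude=1.0):
    """Two-dimensional Gaussian activity packet as in formula (14)."""
    x0, y0 = center
    sx, sy = sigma
    x, y = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]),
                       indexing="ij")
    return amplitude * np.exp(-((x - x0) ** 2 / (2 * sx ** 2)
                                + (y - y0) ** 2 / (2 * sy ** 2)))

# Example: a packet centered on the dominant neuron at (20, 30)
packet = gaussian_2d(shape=(50, 50), center=(20, 30), sigma=(3.0, 3.0))
```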

2.3.2.2 The Architecture

A. Head direction (HD) cells:

A group of neurons is modeled that includes nodes – or computational neurons – that account for the heading direction of the agent. To recreate the internal compass explained in chapter 2.3.1 (p.33), we associate each one of these nodes with a preferred

angle. The 360° can be distributed over a varying number of HD cells, depending on what

range of angles each neuron should account for (e.g.: 360 HD cells: range of 1° per cell;

36 HD cell: range of 10° per cell). The single HD neurons are activated whenever the

agent is rotated in the cell’s associated angle. The input conveyed to the HD system

stems from the agent’s motor system, which can be looked at as self-motion cues. The

61 Sigmoidal non-linearity is explained in chapter 2.3.2.5, p.46.
62 The “exp” in the formula stands for “exponential” and is the same as e^x.

To remember…
- The neural dynamics in the DFT and ratSLAM models is a competitive attractor network, defined by global inhibition, self-excitation and local excitation.
- The activity of a neuron or neural field is shaped like a Gaussian kernel.


robot used may already have such a system implemented, however if this is not the

case, one has to program a path integration software which then can be implemented

on the hardware system.
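A minimal sketch of this angle-to-cell mapping is given below, assuming 36 HD cells (10° per cell) and a Gaussian input bump over the ring rather than a single active node; the circular distance respects the ring topology of the compass. All values are illustrative.

```python
import numpy as np

N_HD = 36                      # 36 HD cells -> 10 degrees per cell (assumed)

def hd_input(heading_deg, sigma=1.5):
    """Return the input activity over all HD cells for a given heading.
    The bump is Gaussian over the ring, using the circular distance
    between the heading and each cell's preferred angle."""
    cells = np.arange(N_HD)
    preferred = cells * (360.0 / N_HD)   # preferred angle per cell
    d = np.abs(heading_deg - preferred)
    d = np.minimum(d, 360.0 - d) / (360.0 / N_HD)  # distance in cell units
    return np.exp(-0.5 * (d / sigma) ** 2)

activity = hd_input(87.0)      # strongest response in the cell tuned to 90°
print(activity.argmax())       # -> 9
```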

HD cells are connected in a way so that competitive attractors arise as depicted in

Exhibit 14. This means that several nodes can be active simultaneously, but one node

will dominate and suppress the activity in the less strongly activated cells. This

dynamics is accomplished by using self-excitatory, local excitatory and global inhibitory

connections (synapses). The self-excitatory connections help stabilize the activity of the

stimulated neuron whereas local excitatory junctions, connecting the active cell to its

neighboring cells, enables the formation of an activity packet, which represents the

stable state of the network. These activity packets can be shifted depending on the

incoming input from the motor system. Finally, each neuron is connected with every

other neuron by inhibitory junctions. These junctions curb noisy activity in other

neurons, and also enable the activated neuron to become dominant and form a spike

since it thus not only excites its neighbors, but also inhibits their activity, preventing

multiple spikes at the same time and further stabilizing the system.

Exhibit 14. HD cells

The sketch visualizes the competitive attractor dynamics with the HD cells. The three neighboring stimulated

nodes (green) make up the activity packet, i.e. there are two activity packets. The stronger activity packet (left)

will establish dominance due to stronger global inhibitory connections, marked by the blue lines, and the self-

excitatory connections, marked by the green lines. The red arrows to the left symbolize the gain of activity,

whereas the arrows to the right show the decrease of activity.

B. Local view cells

Local view cells can be implemented in different ways, depending on how biologically

accurate the brain simulation should be. In my first few models, I have chosen to simply

correlate neurons of a certain group to the input given by the robot’s camera. By

extracting the hues of the images, the neurons can be associated to specific colors of

landmarks. This can be realized by using a range of RGB values that account for a

certain color and conditioning each neuron for a certain range. One must bear in mind

that it is favorable to mask the image before obtaining the hue values, since only the


hue of the object is wanted, and not the colors of the surrounding background. Another

demanding factor is modeling the neurons flexibly enough that they also react in subpar

lighting conditions.63
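A possible sketch of this color conditioning, assuming OpenCV is available and using hypothetical hue/saturation/value bounds for a blue landmark (the bounds on saturation and value roughly address the lighting issue mentioned in footnote 63): the camera frame is converted to HSV, masked to the chosen range, and the fraction of matching pixels is taken as the local view cell's input.

```python
import cv2
import numpy as np

# Hypothetical bounds for a blue landmark (OpenCV hue range is 0-179).
LOWER = np.array([100, 80, 60])    # (hue, saturation, value) lower bound
UPPER = np.array([130, 255, 255])  # upper bound

def local_view_activation(frame_bgr):
    """Fraction of pixels matching the landmark color, used as the
    activation of the associated local view cell (a sketch, not the
    implementation used in this thesis)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)  # masks out the background
    return float(np.count_nonzero(mask)) / mask.size

# Usage: activation = local_view_activation(frame) for each camera frame.
```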

The individual local view cell is associated with an HD cell, whenever they fire at the

same time, following the rules of STDP. This enables the robot to form memory of what

landmark lies at which heading direction. By adding weights to the connection, the

artificial synapse is strengthened every time the local view and HD cell fire

simultaneously, facilitating later reactivation of the memory.

Local view cells also help the cognitive system to recalibrate its current estimate of

where it is located. For example, if the robot calculates a new estimate of direction, but

faces the same landmark that was associated with a previously learned angle, the activity of

the local view cells will stimulate the learned connection and hence inject activity in the

associated HD cell. This process results in the motor system’s estimated location being

reset, reducing errors in the cognitive map.

C. Place cells

Computational place cells, modeled after the neurons found in CA1-CA3 layers in the

hippocampus proper, are two-dimensional attractors, which represent the location of

the agent in a two-dimensional space. This space can be divided into a cartesian grid,

where every square is a place field that is associated with a single place cell (as in

chapter 2.3.1, p.33). Although place cells are spanned in a two-dimensional field-

contrarily to local view and HD cells – they are like HD cells connected by short range

and self-excitatory junctions and global (long range) inhibitory junctions. These

synapses again lead to an activity packet, which is shifted within a representative two-

dimensional space when changes of input occur.

We can use path integration as input for the place cells. Depending on what robot is

used, helpful processing systems like motor encoders are already built into the system.

With motor encoders for example, the number of rotations of each wheel is produced

as output, whose value can further be used to calculate vectors. These vectors can then

be incorporated in a program which performs path integration, i.e. a program that

calculates the translative distance covered from a starting point.64 If no motor encoders

are available, one is also able to calculate the relative distances by measuring the

constant velocities in x- and y-direction of the robot. By programming a software with

simple mathematical formulas, the distances in each direction can be calculated, which

are then used to construct the resulting relative path and its distance.

63 This problem could be fixed by scaling the hue with saturation and value (black and white) data. However, I have yet to approach this problem in any of my models.
64 This starting point is also referred to as “nest” and has been recognized to play a vital part in rodent navigation since it serves as a sort of fixed point.
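Such a path integration program essentially accumulates small displacement vectors. The sketch below assumes a differential-drive robot that reports the distance covered by each wheel per time step (e.g. derived from motor encoder counts); the wheel base value and the update rule are standard odometry assumptions, not the specific robot used in this thesis.

```python
import math

class PathIntegrator:
    """Dead-reckoning sketch for a differential-drive robot.
    Accumulates (x, y, heading) relative to the starting point ("nest")."""

    def __init__(self, wheel_base=0.1):  # wheel separation in meters (assumed)
        self.x = self.y = self.theta = 0.0
        self.wheel_base = wheel_base

    def update(self, d_left, d_right):
        """d_left/d_right: distance covered by each wheel this time step."""
        d = (d_left + d_right) / 2.0                         # forward displacement
        self.theta += (d_right - d_left) / self.wheel_base   # change in heading
        self.x += d * math.cos(self.theta)
        self.y += d * math.sin(self.theta)

    def distance_to_nest(self):
        """Translative distance covered relative to the starting point."""
        return math.hypot(self.x, self.y)
```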


Exhibit 15. Path integration

By offsetting the vectors from

the starting point (nest), the

system is able to estimate its

current location relative to

said nest.65

The boundary problem is a difficulty that often arises in place cells of cognitive systems

that need to navigate in large environments. If the robot moves in the same direction

for a very long time, the activity clusters will hit a boundary within the cognitive spatial

map. Regarding autonomous vehicles, we cannot divide every space the vehicle will

encounter into a cartesian map since it would be computationally very costly and also

since we do not know through which spaces the vehicle will have to navigate. This

problem has been solved by using wrapping connectivity, where the place cells on the

edge of the spatial grid are connected with the place cells on the opposite side. This

allows activity packets that moved over one of the four edges to reenter the field.
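Wrapping connectivity can be expressed compactly with a toroidal shift: in the sketch below, activity that is pushed over one edge of the place-cell grid reenters on the opposite edge (np.roll performs exactly this cyclic shift). The grid size is an assumed example.

```python
import numpy as np

def shift_wrapped(activity, dx, dy):
    """Shift a 2D activity packet by (dx, dy) cells with wraparound:
    packets leaving one edge of the place-cell grid reenter on the
    opposite edge (toroidal topology)."""
    return np.roll(np.roll(activity, dx, axis=0), dy, axis=1)

grid = np.zeros((10, 10))
grid[9, 5] = 1.0                  # packet at the bottom edge
grid = shift_wrapped(grid, 1, 0)  # moving "down" reenters at the top row
assert grid[0, 5] == 1.0
```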

D. Pose cells

In Milford’s approach, place cells and HD cells are connected to form three-

dimensional attractors, called pose cells. With this technique, an activity packet moving

from side to side represents the robot shifting along the ground (x’, y’), whereas an

activity packet moving up and down describes the rotational movement of the robot

(θ’). The pose cells are laid out in a three-dimensional construct, where the x and y

dimensions represent place cell activity and the theta θ dimension shows activity in HD

cells. Pose cells are also associated with the local view cells. By having this tripartite

neural circuit – namely location estimate, direction estimate and landmark perception

– the system is able to clearly identify what landmark is perceived and where it is

perceived. To clarify, through this three-way connection, a landmark’s location is

defined by two estimates. This reduces error when the system has to identify one from

several landmarks that are seen from either the same angular or translative position.

65 Source: https://en.wikipedia.org/wiki/Path_integration, 20.10.18.


The activity in pose cells is again modeled with a competitive attractor network. The

pose cells are, like the place cells, wrapped, though here the wraparound of connections is in

the θ’-direction.

Even though pose cells were purely a modeling construct, Edvard Moser, May-Britt Moser and their students discovered so-called

grid cells in all of the layers of the dorsocaudal

medial entorhinal cortex (dmEC) of rodents66

in 2005. Those grid cells share very similar

properties with pose cells such as being

associated to locations according to a certain

internal grid laid out over the environment.

These similar activity patterns are

demonstrated in Exhibit 16. Grid cells from the deepest layers additionally have head

direction properties and have also been shown to perform path integration, keeping track of

the body’s location within the environment. [68] In contrast to the modeled pose cells,

grid cells are not dependent on visual input, a characteristic which may help explain

why rodents can navigate even in the dark. Nonetheless, the existence of grid cells

makes the pose cells a biologically plausible model for artificial cognitive systems.

Exhibit 16. Grid cells

The top pictures demonstrate grid cell activity recorded in the brain of a rodent, whereas the lower series show

simulated cells in a neural network.67

66 Joshua Jacobs et al. (Drexel University) have also discovered the existence of grid cells in human brains in 20. (Source: https://www.spektrum.de/news/auch-menschenhirne-fallen-ins-raster/1202929, 20.10.18.)
67 Source: https://www.quantamagazine.org/artificial-neural-nets-grow-brainlike-navigation-cells-20180509/, 20.10.18.

To remember…
- Local view cells obtain information about the environment.
- Pose cells represent the translative and directional position of the agent.


2.3.2.3 Experience Mapping

Following the proposition of ratSLAM, M. Milford et al. have come up with a technique

to produce world representations. [69] Since pose cell activity packets often become

recalibrated by their associated local view cells after the robot has been exploring for

a certain amount of time, the cognitive spatial map becomes less and less

representative of the physical world. Further, this means that the internal spatial

representation cannot be used as a map to navigate the environment. An experience

map combats these problems of discontinuity.

Experience mapping builds on top of the discontinuous spatial map represented by

pose cells. It creates real world representations that are spatially continuous and where

local areas of the map mirror the Cartesian properties of the mapped area in the real

environment. The experience map remains continuous and representative, even as the

environment becomes larger and more complex, whereas the pose cell mapping

develops errors, like hash collisions and multiple representations, due to wraparound

connectivity and other influences.

A hash collision is a phenomenon in which multiple landmarks are associated with the same pose cell. This may happen when there is ambiguity about the robot's pose and false pose cells are active when different landmarks are perceived. The reverse error is having multiple representations of the same landmark, which may occur due to ambiguous visual input. The experience map resolves these issues with different mechanisms. Firstly, to produce spatially continuous world representations, output from pose and local view cells is used to create a map of robot experiences, hence the name experience map. An experience contains information about the pose and visual scene at a certain point in time, as well as odometric information about the transitions the robot has made from previous experiences. This means that each experience is associated with a pose cell matrix, a visual scene represented by local view cells, and odometric values given by a path integration system.68

By constructing experiences in a higher-level map, one is able to implement several mechanisms that reduce spatial errors. The hash collision problem is solved by said introduction of the visual scene information: even if landmarks look similar to one another, the additional visual information about the surroundings differentiates the landmarks, hence creating singular and distinct experiences in the experience map. The problem of having multiple representations can be recovered by a process called map correction, in which the multiple representations of the same landmark are overlapped in the experience map's own coordinate space, thus combining the differently associated landmarks into one experience.

68 Since the experience mapping I used in my program differs greatly from the one implemented in ratSLAM, I will not explain the formation of experiences in more detail in this thesis; to read more about said mechanism, one can turn to [69].

To remember… An experience map can be used by a cognitive system as a spatially continuous map where the locations of certain landmarks are represented and remembered.

2.3.2.4 SPA: Simultaneous Planning and Action

Simultaneous planning and action differs from SLAM in its ability to solve planning problems. In this approach, the agent does not only perceive and remember an environment; it is further able to establish a plan for behaviors which are serially executed. Understanding how action sequences form fluent and flexible behavior is paramount to understanding cognition. To reach such behavior, an agent has to learn, initiate and produce serially ordered sequences, where each sequence represents the individual actions necessary for said behavior. [70]

Sequence generation for motor behaviors may depend on cognitive capacities like memory choices and memory order, as well as the ability to coordinate actions in time and to recognize when an action (or behavior) has been successfully completed, therefore placing no constraints on the duration of each individual action. [70] When sequence generation is incorporated in an embodied system69, three major problems arise:

1. Stabilization: The agent is faced with highly variable sensory input, which demands that the neural states representing and controlling the motoric actions be stable. A neural representation of where in a sequence the system currently is, is also essential, since the duration of the actions is temporally unpredictable. [70]

2. Destabilization: In order to execute one behavior after another, the system has to be able to proceed along the action sequences. This means that the stable state of an action has to become destabilized once the action's successful completion has been recognized, hence activating the next sequence step. [70]

3. Flexibility: Actions and perceptual states70 are graded neural representations rather than discrete states of highly preprocessed sensory input. This allows the system to react more flexibly to complex environments, thus allowing smoother behaviors. [70]

69 Meaning an intelligent artificial agent (robot).
70 Current inflow of perceived information.


A model which addresses these problems, introduced by Yulia Sandamirskaya and Gregor Schöner, is the dynamic field theory: a framework for sequence learning and production, based on competing attractors which represent neural dynamic plans. [70] [71]

2.3.2.5 Dynamic Field Theory

The dynamic field theory tries to explain how behaviors come about by representing the coordinated activity of populations of neurons. The theory is seen as a well-established, neurally-based framework which manages to bridge between lower-71 and higher-level72 neural processes. [72] The crucial assumption underlying the DFT is that behavior and its processes in the brain are embedded in a continuum. Behaviors can be looked at as serially ordered action sequences which are fluent and may take an unpredictable amount of time to fulfill. The capacity of producing action sequences depends on cognitive abilities like memory choice, order and coordination of sequences. [71]

The state of the cognitive system is formulated by dynamic activation functions, or dynamic neural fields (DNF). [72] The neural processes are denoted as continuous metric variables which are encoded along the behaviorally relevant dimension, x, of the DNF, u(x,t). These dimensions may account for perceptual (e.g. orientation), motor (e.g. velocity) or cognitive (e.g. serial order) parameters, thus representing different neural processes. [72] [71] Spatial locations, motor plans or perceptual features are characterized as localized peaks of activation which emerge as attractor solutions of the field dynamics along the dimension x. [71] The field dynamics are modelled as:

\tau \dot{u}(x,t) = -u(x,t) + h + S(x,t) + \int f(u(x',t))\, w(x - x')\, dx'   (15)

where τ is the time constant, h < 0 the resting level, S(x,t) an external stimulus, w(x−x′) a synaptic interaction function with long-range inhibitory and short-range excitatory connectivity, and f(u) a sigmoidal non-linearity. This general formulation of the field's dynamics is essential when programming a cognitive architecture's neural fields, since it constitutes the building element of such an architecture (its practical implementation will further be demonstrated in 3.3.1).

71 Lower-level neural processes include sensorimotor coupling.
72 Higher-level neural processes refer to cognitive capacities.

To remember… The DFT assumes behavioral processes to be embedded in a continuum. Dynamic neural fields represent the state of the cognitive system.

The output of a DNF is shaped by a sigmoidal non-linearity, namely by squashing real-valued numbers into the interval [0,1]. This is useful since the value can be interpreted as the firing rate of a neuron: it either does not fire at all (0) or it fires at a maximal rate (1):

f(u(x,t)) = \frac{1}{1 + \exp[-\beta u(x,t)]}   (16)

where β is the slope of the sigmoidal non-linearity and u(x,t) the activation level of the DNF.73 The interaction function – based on the Gaussian function as in (14) – describes lateral interactions between different field sites:

w(x - x') = c_{exc} \exp\left[-\frac{(x - x')^2}{2\sigma_{exc}^2}\right] - c_{inh} \exp\left[-\frac{(x - x')^2}{2\sigma_{inh}^2}\right] - c_{global}   (17)

where the constants c denote the strengths of the junctions; the subscripts exc, inh and global mark the short-range excitatory, the longer-range inhibitory and the globally inhibitory connections, respectively.
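To make equations (15)–(17) concrete, the following minimal Python sketch integrates a one-dimensional DNF with a simple Euler scheme. All parameter values are illustrative assumptions, not the tuned values used later in the cedar architecture.

import numpy as np

# Illustrative parameters (assumptions, not tuned values)
N, dt, tau, h, beta = 101, 1.0, 10.0, -5.0, 4.0
c_exc, sig_exc, c_inh, sig_inh, c_glob = 15.0, 3.0, 10.0, 6.0, 0.5

x = np.arange(N)
u = np.full(N, h)                        # field starts at the resting level

def f(u):                                # sigmoidal non-linearity, eq. (16)
    return 1.0 / (1.0 + np.exp(-beta * u))

d = x[:, None] - x[None, :]              # pairwise distances x - x'
w = (c_exc * np.exp(-d**2 / (2 * sig_exc**2))
     - c_inh * np.exp(-d**2 / (2 * sig_inh**2))
     - c_glob)                           # interaction kernel, eq. (17)

S = 6.0 * np.exp(-(x - 50)**2 / (2 * 2.0**2))    # localized external input

for _ in range(500):                     # Euler integration of eq. (15)
    u = u + dt / tau * (-u + h + S + w @ f(u))

print(u.argmax(), u.max() > 0)           # a self-stabilized peak forms near x = 50

The lateral interaction term w @ f(u) is what lets the peak sharpen and persist; dropping it would reduce the field to a mere low-pass filter of its input.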

The field output can be viewed as corresponding to the mean spike rate of a local group of neurons, which is apparent from the kernel's typical Mexican hat shape, as three-dimensionally depicted in Exhibit 37. As discussed in chapter 2.3.2.1 (p. 38), the Gaussian-like distribution describes the probability of the occurrence of a certain feature. The resting level sets a stable sub-threshold attractor, indicated by a flat distribution. When a weak localized input is induced, the attractor is shifted toward the output threshold74, but output is only generated when the input is strong enough to push the attractor field – which can be looked at as a depolarizing membrane – above threshold. The lateral interactions then promote the formation of a localized peak by depressing more distant field sites and potentiating the input position. These properties lead to a self-stabilized peak whose location specifies the parameter values for the current state of the cognitive system, similarly to the activation clusters in ratSLAM's pose cells (chapter 2.3.2.2, p. 40). The peak's height and width can be interpreted as the certainty of the current estimation of the value and the intensity of the input. [72] [73]

The self-stabilized peak vanishes when the localized input is removed, resulting in the reappearance of the resting level attractor. However, the attractor peak is stabilized by the excitatory short-range connections in such a way that the strength of the localized input may fluctuate to a certain degree without the attractor peak decaying. [73] An activation peak may also persist without localized input when a memory trace75 is established, acting as a preshape or ridge of excitation of the desired output peak. Transitions from one sequence to another happen autonomously after successful completion of the preceding action. The reverse detection instability – the vanishing of a peak – kicks in when the condition of satisfaction field is activated after successful execution is recognized, which furthermore leads to a cascade of instabilities in the associated dynamic fields. [71]

73 The sigmoidal function is also used in interaction connectivity in the ratSLAM model.
74 Firing threshold; the DNF fires repeatedly (transfers information) as long as the attractor passes the threshold.

To remember… The neural fields show activation shaped like a Mexican hat. The output of a neural field is the sigmoidal function of the activation kernel.

The framework's self-stabilizing properties are pivotal to combating the three major problems that arise with computational sequence generation: flexible behavior, autonomous progression of action sequences and stable neural states. [71]

2.3.2.6 Architecture of the Dynamical Systems

Since my model is based on the foundations of the Dynamic Field Theory, I will explain the specific structure of my program more thoroughly in chapter 3.2 (p. 61). Below, the main components of DFT architectures76, as in [70], will first be introduced and elaborated.

A. Memory traces

Memory traces, also called preshapes, have become a useful technique in modelling memory. They are most commonly used in the Dynamic Field Theory (p. 47), which is why I wanted to incorporate this mechanism in my computational model. First introduced by Y. Sandamirskaya [74], memory traces are built up by positive activation in the DNF and in return reactivate the DNF when the stored memory is called up. This ability mirrors the increased excitability of assembly groups whose patterns of activation account for a stored memory.

Preshapes are an additional layer to a DNF over the same dimension. The input received from the DNF is integrated in the memory trace as an attractor, to which the field of the memory trace evolves at a slower rate than the time constant of the DNF. The decay of the activity of the preshape occurs at an even slower rate than said build-up, hence enabling the memory trace to retain an activation peak, and therefore information, for a longer time. The dynamics of a memory trace evolve according to the equation:

\tau_m \dot{P}(x,t) = \lambda_{build} \left(-P(x,t) + f(u(x,t))\right) f(u(x,t)) - \lambda_{decay} P(x,t) \left(1 - f(u(x,t))\right)   (18)

75 Cf. p. 52.
76 Architecture here means a construct of connected DNFs and other elements, forming an artificial cognitive system.


where τm is the time constant and λbuild and λdecay are the rates of the build-up and decay of the activity. The time constant with which the attractor is approached is τm/λbuild; respectively, the time constant of the peak decay is τm/λdecay. P(x,t) is the strength of the memory trace at site x of the DNF with activity u(x,t), processed through the sigmoidal function f(·).
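As a minimal illustration of equation (18), the following sketch (with assumed, illustrative rates; u_out stands for the sigmoided output f(u) of the attached DNF) shows the asymmetric build-up and decay of a preshape:

import numpy as np

# Memory trace / preshape layer after eq. (18); values are assumptions.
tau_m, lam_build, lam_decay, dt, N = 100.0, 1.0, 0.1, 1.0, 101
P = np.zeros(N)                              # trace starts empty

def trace_step(P, u_out):
    dP = (lam_build * (-P + u_out) * u_out
          - lam_decay * P * (1.0 - u_out))
    return P + dt / tau_m * dP

u_on = np.zeros(N); u_on[50] = 1.0           # site 50 of the DNF is active
for _ in range(300):
    P = trace_step(P, u_on)
built = P[50]                                # trace has built up at site 50

u_off = np.zeros(N)                          # DNF activity has vanished
for _ in range(300):
    P = trace_step(P, u_off)
print(round(built, 2), round(P[50], 2))      # decay is much slower than build-up

Because the decay term is scaled by (1 − f(u)), the trace only fades where the field is inactive, which is exactly what lets it serve as a long-lived memory of past activation.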

B. Ordinal and memory nodes

The ordinal dimension reflects the serial order of the behaviors. The ordinal position of an action in a sequence is represented by a dynamic node, called an ordinal node. The ordinal nodes are governed by bistable dynamics, as plotted in Exhibit 17, meaning they have two stable states of activation, namely on and off.

This bistable dynamics allows for transitions between actions whenever instabilities in the ordinal order are caused by the recognition of the completion of a behavior (condition of satisfaction system). The lateral dynamics of the ordinal nodes is made up of global inhibitory and self-excitatory connections. Further, each ordinal node also projects activation to its succeeding node, which allows for an easier transition along the serial order. Ordinal nodes further project to a neural field which represents a certain action or a certain feature element.

The transition from one ordinal node to another is facilitated by a higher group of memory nodes. Each ordinal node is associated with a memory node, which stores ordinal information and facilitates the activation of the succeeding ordinal node, thus enabling a smooth transition. Memory nodes and ordinal nodes are connected in such a way that, during the global inhibition caused by the condition of satisfaction field, the reactivation of a previously active ordinal node is prohibited while the succeeding ordinal node is stimulated. The mechanism is visually depicted in Exhibit 18:

77 Source: https://en.wikipedia.org/wiki/Bistability, 20.10.18.

Exhibit 17. Bistability
Balls "1" and "3" are in the two stable equilibrium states, whereas ball "2" is at the point of unstable equilibrium. The graph is plotted against the potential energy E and hence shows that a certain amount of activation energy is needed to surpass the point of instability and reach either equilibrium.77


Exhibit 18. Serial Order System
The sketch shows how the memory nodes activate the associated ordinal nodes. The bottom graph depicts the activity level of the memory nodes, while the top graph represents the activity level of the ordinal nodes. The top graph further shows how the activated ordinal node suppresses activity in the preceding node and stimulates the succeeding node. The memory nodes stay above threshold level after activation to prevent the reactivation of their associated ordinal node. It may be added that the memory nodes additionally inhibit one another (global inhibition), which is not shown in the sketch.

Memory node i is activated by its assigned ordinal node j, but inhibited by all of the other memory nodes (global inhibition). Additionally, it employs a self-excitatory connection as well as an excitatory projection to the ordinal node j+1. The self-excitatory connection allows the memory node to remain active even after its associated ordinal node no longer stimulates it.

The dynamics of the ordinal (19) and memory nodes (20) can be formulated as follows:

\tau \dot{d}_i(t) = -d_i(t) + h_d + c_0 f(d_i(t)) - c_1 \sum_{i' \neq i} f(d_{i'}(t)) + c_2 f(d^m_{i-1}(t)) - c_3 f(d^m_i(t)) - I_c(t)   (19)

\tau \dot{d}^m_i(t) = -d^m_i(t) + h_m + c_4 f(d^m_i(t)) - c_5 \sum_{i' \neq i} f(d^m_{i'}(t)) + c_6 f(d_i(t))   (20)

The first three terms in (19) shape the bistable dynamics, where −di(t) has stabilizing properties, hd < 0 represents the resting level of the membrane and c0 is the strength of the self-excitatory connection. c1 is the strength of the mutual inhibitory junctions between the ordinal nodes, whereas c2 is the excitatory projection from memory node i−1 to the succeeding ordinal node i, and c3 is the inhibitory connection from memory node i to its associated ordinal node i. Ic(t) is the global inhibition projected when the termination of a behavior has been recognized. The applied function f(·) is the sigmoidal non-linearity which has been previously explained (p. 47).

In equation (20), the terms have the same function as in (19), solely applied to the memory nodes. Additionally, c6 is the strength of the excitatory input from the associated ordinal node i.
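The following Python sketch implements equations (19) and (20) for three ordinal/memory node pairs. All connection strengths are illustrative assumptions, and the condition-of-satisfaction signal Ic is injected manually as a pulse rather than computed by a CoS field.

import numpy as np

# Serial-order sketch after eqs. (19) and (20); constants are assumptions.
Nd, dt, tau, h_d, h_m = 3, 1.0, 20.0, -2.0, -2.0
c0, c1, c2, c3 = 5.0, 3.0, 3.0, 2.0
c4, c5, c6 = 5.0, 1.0, 4.0

def f(u, beta=4.0):                       # sigmoidal non-linearity, eq. (16)
    return 1.0 / (1.0 + np.exp(-beta * u))

d = np.full(Nd, h_d)                      # ordinal nodes at resting level
m = np.full(Nd, h_m)                      # memory nodes at resting level

for t in range(650):
    I_c = 8.0 if (200 <= t < 240) or (400 <= t < 440) else 0.0   # CoS pulses
    boost = 5.0 if t < 40 else 0.0        # manual boost starts the sequence
    fd, fm = f(d), f(m)
    fm_prev = np.roll(fm, 1); fm_prev[0] = 0.0    # input from memory node i-1
    dd = (-d + h_d + c0 * fd - c1 * (fd.sum() - fd)
          + c2 * fm_prev - c3 * fm - I_c)
    dd[0] += boost
    dm = -m + h_m + c4 * fm - c5 * (fm.sum() - fm) + c6 * fd
    d += dt / tau * dd
    m += dt / tau * dm

print(np.argmax(d), np.round(d, 1))       # the third ordinal node ends up active

Each CoS pulse destabilizes the currently active ordinal node, whose memory node stays on and both blocks its reactivation (via c3) and primes the successor (via c2), so the sequence advances one step per pulse.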

C. Action field

Action fields are neural dynamic fields that are defined over a specific feature dimension. For example, if the goal behavior is to look for an object with a certain color, the action field will be spanned over a dimension covering the color spectrum, where the hue values are the behavior (or action) parameters. Each activated region along the feature dimension accounts for a different action parameter, therefore resulting in a different behavior.

In order to establish an action field, the system has to learn which action parameters are looked for and in which order they have to be looked for; that is, it has to learn a sequence of behaviors. During sequence learning, active regions78 in the action field, stimulated by a perception or sensory field79, are wired with an ordinal node by modifiable weighted junctions. During behavior generation, the regions in the action field are then stimulated in the correct order by the previously assigned ordinal nodes. Not only the order of the activation of action parameters has to be remembered, but also the location of the activity regions assigned to certain ordinal nodes. To solve this problem, preshapes (or memory traces) of the same dimensionality as the action field are implemented. When a region along the feature dimension of the action field is activated during learning, the preshape is activated and, due to its slower dynamics, stores the information about the learned action parameter. During behavior execution, the preshapes are then reactivated by the same ordinal node as the assigned action field and hence excite the particular region in the action field encoding the demanded behavior.

78 Gaussian distributions of activity with the specific action parameter being the center.


Exhibit 19. Action field and preshapes
The sketch shows how the location of the activity peak is stored in the preshape during the learning process. During execution, the stable peak in the preshape is projected to the action field, stimulating the remembered location.

The activity in the action field surpasses the threshold when the ordinal node boosts the activation ridge given to the action field by the preshape. The action field's activity follows the equation:

\tau \dot{U}^A_j(x_j,t) = -U^A_j(x_j,t) + h_A + \int f\left(U^A_j(x'_j,t)\right) w(x_j - x'_j)\, dx'_j + \sum_{i=0}^{N_d} f(d_i(t))\, M_i(x_j,t) + c^A_p I^A_p(x_j,t)   (21)

The first three terms define the general neural dynamics as elucidated in chapter 2.3.2.5 on p. 47, where xj is the action parameter. The resting level hA of the action field is again negative, and the activation kernel is formed in a Gaussian manner. Nd is the total number of ordinal nodes implemented in the system, whose individual activities di are thresholded by the sigmoidal function f(·). The shape of the input from di is defined by the neural weights Mi(x,t), which are modified during sequence learning according to the Hebbian rule. The form of the neural weight function in (22) accounts for the fact that only one ordinal node can project to the action field at a time. IpA(xj,t) is the excitatory input from the perception field that activates the action field at a certain region during learning, and cpA controls the strength of that input.

\tau \dot{M}_i(x,t) = f(d_i(t)) \left(-M_i(x,t) + f(U^A(x,t))\right)   (22)
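A compact sketch of the learning rule in equation (22): while ordinal node i is active, its weight pattern Mi relaxes toward the current sigmoided action field output. Shapes and rates below are assumptions.

import numpy as np

# Hebbian weight update after eq. (22); shapes and rates are assumptions.
N, Nd, dt, tau = 101, 3, 1.0, 50.0
M = np.zeros((Nd, N))                        # one weight vector per ordinal node

def learn_step(M, f_d, f_UA):
    # f_d: sigmoided ordinal node activities, shape (Nd,)
    # f_UA: sigmoided action field output, shape (N,)
    return M + dt / tau * f_d[:, None] * (-M + f_UA[None, :])

x = np.arange(N)
f_UA = np.exp(-(x - 30)**2 / (2 * 2.0**2))   # peak at the hue being learned
f_d = np.array([1.0, 0.0, 0.0])              # only ordinal node 0 is active

for _ in range(400):
    M = learn_step(M, f_d, f_UA)

print(M[0].argmax(), M[1].max())   # M_0 stores the peak at x = 30; M_1 stays flat

The gating factor f(d_i) is what restricts learning to the single ordinal node that is currently active.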


D. Perception field

To navigate and orient itself in an environment, an artificial cognitive system must perceive its surroundings in one way or another. Perception may be visual, tactile, auditory or even olfactory; the choice, however, often depends on the available hardware. Visual input poses a relatively simple perceptual input since it can be obtained using any camera.

The perception field can be either two- or three-dimensional, depending on what kind of dimensions it is spanned over. The activation peak in the perception field stimulates the action field during sequence learning, but also excites the condition of satisfaction field when the termination of a behavior is perceived. In sequence generation, the perception field receives stimuli from the action field, which makes it sensitive to the kind of sensory input the system should look for. In other words, the action field sends a ridge along a certain parameter into the perception field, and if sensory data coincides with said parameter, a self-stabilized peak is induced. Generally, perception fields are modelled after the Amari equation (15), where S(x,t) is the input given by any sensory system, such as a camera.
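A toy numpy sketch of this mechanism, with assumed field sizes and input strengths, shows how an action field ridge selects one of several camera-induced bumps:

import numpy as np

# Two-dimensional perception field (azimuth x hue): camera input creates
# sub-threshold bumps; a ridge from the action field along one hue value
# pushes exactly one bump over threshold. All values are assumptions.
n_az, n_hue, h_rest = 60, 36, -5.0

def bump(az, hue, amp=3.0, s=2.0):
    a = np.arange(n_az)[:, None]
    h = np.arange(n_hue)[None, :]
    return amp * np.exp(-((a - az)**2 + (h - hue)**2) / (2 * s**2))

camera = bump(10, 4) + bump(40, 20) + bump(50, 30)   # three colored objects
ridge = np.zeros((n_az, n_hue))
ridge[:, 18:23] = 3.5                    # action field "looks for" hue ~ 20

u = h_rest + camera + ridge              # feed-forward approximation, no lateral terms
print(np.flatnonzero((u > 0).any(axis=1)))   # only azimuths near 40 cross threshold

In the full field dynamics, the lateral interaction kernel would then sharpen this supra-threshold bump into a self-stabilized peak.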

E. Condition of satisfaction field

As proposed by Y. Sandamirskaya and G. Schöner in [70], a condition of satisfaction (CoS) is defined for every action in a sequence. Its function is to recognize the successful execution of an action and subsequently elicit a cascade of instabilities which ultimately results in the transition to the next action in the sequence. The CoS field is spanned over the same metric dimension as the action field, namely over the features of a behavior. The action field preactivates the CoS field, which establishes a sub-threshold ridge of activation in the CoS field, making it sensitive to perceptual input at the region of the behavior parameter. The excitatory projection from the perception field sends said ridge past the threshold when the terminal state of a behavior is recognized. For example, if the agent is set out to look for a red object, a bump in activity will form along the feature dimension of the CoS field, the feature being the color of the objects. If the agent's visual system now encounters a red object, this bump will evolve into a spike; if the agent sees a yellow object, however, no spike will be formed in the CoS field since there is no additional input from the action field. The self-stabilized peak in the CoS field is then induced and inhibits activity in the ordinal system. This further removes the stimulus from the ordinal node to the action field, leading to a decay of the activation peak in the action field. With no action field activity, the perception field also cannot hold a stable peak and its activity plummets sub-threshold as well. This process, called the cascade of instabilities, ends with the decay of the activation peak in the CoS field, since it no longer receives stimulation from either the action or the perception field. Said final decay removes the inhibition from the ordinal system, allowing the transition to the next ordinal node to happen and thus inducing the next behavioral goal.

The dynamics of a CoS field spanned over the dimension y evolve according to:

\tau \dot{U}^C_j(y_j,t) = -U^C_j(y_j,t) + h_c + \int f\left(U^C_j(y'_j,t)\right) w(y_j - y'_j)\, dy'_j + T(x_j,y_j) * f\left(U^A_j(x_j,t)\right) + c_p I_p(y_j,t)   (23)

The first three terms again define the general neural dynamics with a negative resting level hc. T(x,y) is an additional transfer function which maps the action field dimension for the action parameter onto the respective dimension of the CoS field. Ip is the excitatory perceptual input from the perception field and cp the constant controlling its strength.


3 Practical Realizations

3.1 Course of Action

Having been interested in neurobiology as well as informatics and other natural sciences, my original idea for this project was to write a program that would enable a robot to be controlled by one's mind. I soon realized that this task would pose a (nearly) impossible challenge for an 18-year-old amateur programmer; however, I still contacted the Institute of Neuroinformatics (INI) in Zurich about my idea and shortly after met up with Yulia Sandamirskaya. She first introduced me to the concept of Spiking Neural Networks, which immediately enticed me since it most closely resembled the processes happening in a real brain. She further suggested doing a project that added on to ratSLAM. The first two months then mainly consisted of reading about SNNs and computational models, as well as understanding how the hippocampus of a rat functions and how SLAM poses a suitable technique to make use of the rodent brain's properties. The subsequent working progress of my project can be divided into three phases, which I will shortly elaborate on, since the programs I used may be useful to someone else looking into working with SNNs and artificial cognitive systems. The process also shows how the idea for my brain model has been modified along the way and may give some insight into the train of thought behind the final brain model.

Phase 1: Brian2 and Gazebo


Since I did not only want to learn about artificial cognitive systems, but also to program one myself, I stumbled upon Brian2, a brain simulation program written in Python, a programming language that I was already familiar with. I discussed my plans with Yulia, and she too thought that Brian2 could be used for my brain simulation. Additionally, she proposed the use of Gazebo, a robot simulator designed to test algorithms in a virtual environment, which makes debugging the brain simulation a lot easier and more time-saving.

80 Source: http://briansimulator.org/, 20.10.18.
81 Source: https://www.generationrobots.com/blog/en/robotic-simulation-scenarios-with-gazebo-and-ros/, 20.10.18.

To launch the Brian simulator, I used Anaconda Navigator, which in turn uses the open-source web application Jupyter Notebook, which contains live code and visualizes plots for synapses and neurons immediately when a block of code is run. The QR Code below on the left gives interested readers open access to the tutorials for neuron and synapse programming with Brian that I have tried out. The QR Code on the right shows the first basic idea for a computational model, as drawn up in [55].

1. Brian tutorials 2. Computational Model 1
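As a flavor of what those tutorials cover, a minimal leaky integrate-and-fire neuron in Brian2 – a generic sketch with textbook parameter values, not code taken from the tutorials – can be set up in a few lines:

from brian2 import NeuronGroup, SpikeMonitor, run, ms, mV

# Generic leaky integrate-and-fire neuron; parameter values are textbook
# placeholders, not the ones used in the project.
eqs = '''
dv/dt = (v_rest - v + I) / tau : volt
I : volt
'''
G = NeuronGroup(1, eqs, threshold='v > -50*mV', reset='v = -70*mV',
                method='exact', namespace={'v_rest': -70*mV, 'tau': 10*ms})
G.v = -70*mV
G.I = 25*mV                   # constant drive strong enough to elicit spikes
spikes = SpikeMonitor(G)
run(100*ms)
print(spikes.count[0])        # number of spikes fired within 100 ms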

Whereas Brian2 was very easy to install and further comprehensible, Gazebo was rather difficult and also very time-consuming to use. Since the program is not compatible with Windows, I had to download a virtual machine to install and run the robot simulator. However, even after the troublesome installation, the rendering of the simulation was so slow that I could hardly make any use of Gazebo.

Exhibit 20. Robot simulation with Gazebo
My first and only robot built using Gazebo with the virtual machine. Even this very simplistic model, where the rover only had to steer towards the object, took about 10 minutes to fully render.


Phase 2: NEST and HBP Neurorobotics Platform


After figuring out the tools in Brian2, Raphaela Kreiser, my other mentor, introduced me to the Human Brain Project (HBP) and its associated Neurorobotics Platform. The HBP wants to advance research in the fields of neuroscience, computing and brain-related medicine, and provides cutting-edge infrastructure for such research. The Neurorobotics Platform is only one of many platforms of the HBP, but it allows researchers to implement their brain simulations in virtual experiments to explore movement, reaction to stimuli and learning of the agent. This is convenient for testing the validity and fidelity of a brain algorithm without the expense of testing it directly in real life. The Neurorobotics Platform also uses Gazebo for the simulation of the robotic system, where one can edit environments and even build one's own robots. However, in contrast to solely using Gazebo in the virtual machine, the simulations run a lot more smoothly and are easily modifiable.

A problem that arose with using the Neurorobotics Platform was that it depended on NEST, another spiking neural network simulator like Brian, which also uses Python as a programming language (to be more exact, pyNN). NEST is not compatible with Windows either, and since I was not very successful with using a virtual machine, I decided to use Ubuntu, a Linux distribution compatible with NEST. The installation process took quite some time, for I had no previous experience with any Linux system and also did not know Ruby, the programming language for the in-shell programming in Ubuntu.

82 Source: http://www.nest-simulator.org/, 20.10.18.
83 Source: https://www.heise.de/newsticker/meldung/Human-Brain-Project-Mit-neuen-Technologieplattformen-der-Kognition-auf-der-Spur-3159056.html, 20.10.18.


After setting up NEST, I planned a new, more detailed model of my cognitive system, which should be able to discern three or more colors and map the environment by reference to the locations of these colors. The model was heavily based on the concept behind ratSLAM, however neglecting the experience map. The QR Code on the left shows the ideas behind the brain simulation that I started programming with NEST. It incorporates local view cells for visual perception as well as HD cells for sensorimotor data, and implements pose cells as in ratSLAM. The individual neurons are based on the integrate-and-fire model and are modelled individually, which no longer applies in my final model (chapter 3.2).

NEST too uses Jupyter Notebook as an interface for live coding, making it easy to debug short blocks of code quickly. Exhibit 21 shows an excerpt from the commenced code for the first real computational model.

Exhibit 21. Local View Cells in NEST
The block shows how each neuron consists of different parameters (defined in cell_params), or properties that have to be set beforehand. It also has to be defined what kind of neuron is implemented; as in line 18, the type of neuron is set to IF_cond_alpha, an integrate-and-fire neuron that is modelled after the conductance dependence of a synapse and is therefore dependent on voltage changes, thus posing as a relatively accurate biological model.
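In condensed, runnable form, the setup shown in Exhibit 21 corresponds roughly to the following pyNN sketch; the parameter values are placeholders, not the ones visible in the screenshot:

import pyNN.nest as sim

# Small population of conductance-based integrate-and-fire neurons in pyNN;
# parameter values are placeholders, not those from Exhibit 21.
sim.setup(timestep=0.1)

cell_params = {
    'v_rest': -65.0,     # resting membrane potential (mV)
    'v_thresh': -50.0,   # spike threshold (mV)
    'tau_m': 20.0,       # membrane time constant (ms)
    'tau_refrac': 2.0,   # refractory period (ms)
}
local_view_cells = sim.Population(100, sim.IF_cond_alpha(**cell_params),
                                  label='local view cells')
local_view_cells.record('spikes')

sim.run(100.0)
sim.end()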

Another interesting aspect to look into is the extraction of hue values from a camera, which are then applied to several conditions (if/elif clauses) in order to determine what color the robot sees. Below, in Exhibit 22, two blocks of code are shown that are involved in hue extraction and processing.

3. Computational Model 2


Exhibit 22. Hue extraction in NEST
A. The first block shows how the image is masked so that we only get the hue values of locations that the robot is directly looking at, where additionally only the values from the middle of the field of vision are taken into account. This helps with determining the location of the color and also reduces error, since the colors of the background should not interfere with the cognitive system's navigation.
B. The second block demonstrates the idea of how each color goes through a loop of conditions, where the parts of red, blue and green (RGB scheme) determine the resulting color. For each color, like yellow or orange, I defined a range of RGB parts that made up said color while varying in intensity and brightness.
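Since Exhibit 22 is only reproduced as an image, the following Python/OpenCV sketch restates the same idea in runnable form; the masked region, the RGB thresholds and the camera index are illustrative assumptions rather than the original values:

import cv2

# Color extraction sketch: mask the centre of the camera image, then
# classify the mean color with simple if/elif range conditions. The camera
# index, mask size and thresholds are assumptions.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()                       # BGR image from the camera
cap.release()

h, w = frame.shape[:2]
centre = frame[h // 3: 2 * h // 3, w // 3: 2 * w // 3]   # middle of the view
b, g, r = cv2.mean(centre)[:3]               # mean BGR values in the mask

if r > 150 and g < 100 and b < 100:
    color = 'red'
elif r > 150 and g > 120 and b < 100:
    color = 'yellow'
elif g > 150 and r < 100:
    color = 'green'
elif b > 150 and r < 100:
    color = 'blue'
else:
    color = 'unknown'
print(color)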

Phase 3: CEDAR


84 Source: https://cedar.ini.rub.de/, 20.10.18.
85 Source: https://etf2018.dynamicfieldtheory.org/dft_bootcamp, 20.10.18.


I discovered cedar coincidentally when doing research on DFT and stumbling upon O. Lomp et al.'s paper [75]. Cedar is a library for building as well as simulating dynamic field architectures. The architectures can be built by connecting DFT-specific and other elements according to the drag-and-drop principle. The powerful graphical interface makes it easy for inexperienced programmers to develop complex cognitive architectures, since no coding is necessary. However, since the library has yet to be well documented, I found it rather difficult at first to comprehend the individual elements and to figure out how certain inputs have to be processed in order to be passed along. Cedar also offers a visualization tool, where one can directly simulate the brain architecture on a virtual robot. One is given a set of different types of robots and can choose whether to use only a virtual robot or a real-life robot, since cedar also enables serial connections. Although the idea for such a tool is very valuable, I was only able to choose one robot to simulate my architecture and even so ran into many complications while trying to run the program. Altogether though, cedar is a convenient platform for neuroscientists for efficient architecture building and simulation, and it has great potential to become a standard program for dealing with complex neural dynamic fields.

3.2 Brain Simulation With Cedar

3.2.1 Overview

The QR Code to the right takes you to an image of my artificial brain model built with cedar. At first glance, the model may seem rather complicated due to its degree of cross-linking, which is why I will explain the model step by step, looking at the different systems involved individually. In Exhibit 23, the entire architecture is simplified to its elementary parts. The main goal of the architecture is to generate a sequence of behaviors, where the agent searches its environment for an object of a specific color, then moves toward said object and stores the information about the color and the location of the object as a memory, which allows the agent to maintain a cognitive representation of its surroundings.

In my architecture, the artificial cognitive processes begin in the ordinal dimension, the serial order, where the information about the order of the following sequence behaviors is stored. The dynamic ordinal nodes represent the ordinal positions of these actions in a sequence, whereas the associated memory nodes keep track of which actions have been terminated, thus ensuring that the serial nodes transition forward and that no behavior is executed repeatedly. The serial order indirectly propagates to the action field, which can also be regarded as the output field.

4. Full Architecture

In theory, the sequences are first learned and stored in the serial order by propagating activity from the perception field to the action field, marking the individual colors and also the order in which they should be looked for. The activated location in the action field – representing a specific color – is then stored in a preshape. In behavior generation, when the serial order kicks in, the preshapes are activated consecutively, which leads to the stimulation of the specific region in the action field, thus stimulating the perception field. For simplicity reasons, I built so-called simulated preshapes, where it is assumed that the sequence of colors has already been learned. This process is explained in more detail in the next chapter.

The region in the action field activated by the simulated preshapes excites the perception field. The perception field additionally receives input from the camera, which carries information about the location and color of each object presented. The camera is fixed, therefore transmitting a panoramic view of the robot's surroundings. When the visual input from the camera coincides with the input from the action field, i.e. when a location along the color dimension in the perception field is stimulated by both the camera and the action field, a self-stabilized peak is formed in the perception field.

If this peak is formed, the kinematics system is set off: a movement plan, involving the location of the target object and the location of the robot, is activated. The agent is then set out to move to the recognized object, meaning movement in the agent's motors is induced. When the location of the agent and the location of the object match, the experience map is stimulated, where the information about the location and color of the object is stored as a memory and is thus recallable for later use. The condition of satisfaction (CoS) system consists of a node that recognizes the terminal state of the behavior by receiving input from the movement plan. This input eventually turns on the CoS node, which inhibits the serial order and thus stops the whole system until it transitions to the next action in the sequence.

Exhibit 23. Overview DFT Architecture
A simple overview of the full cognitive system built with cedar. The blue arrows represent excitatory connections, whereas the red lines terminated with a dot mark inhibitory connections. Within the fields, the dimensionality of each field or node is additionally indicated; note that the activity level does not count as a dimension.


Since it was my goal to replicate the main components of a biological brain involved in navigational tasks, the following exhibit shows a comparison of the most important fields of my computational architecture and the brain of a rat, for rodents' brains have been studied most intensely and also pose as the basis of the DFT architecture and ratSLAM, the two computational neural models I most heavily relied on.

Exhibit 24. Comparison of a rodent brain and the architecture
Some of the fields will be more thoroughly explained in the following subchapters. The serial order model is divided into the ordinal nodes as a frontal lobe structure and the memory nodes as cells within the dentate gyrus. The perception field represents the visual cortex and is linked to the camera, standing for the rat's eyes. The kinematics system mirrors the function of the motor cortex, whereas the position of target field poses as a representation of place cells in the hippocampus. Lastly, the action field may be viewed as some sort of analytic/decisive structure in the frontal lobe, driven and stimulated by the CoS node, representing the rat's reward system (VTA/nucleus accumbens).86

3.2.2 Serial Order

Below, in Exhibit 25, the serial order and the structures directly involved with the ordinal dimension are demonstrated. The serial order structure could be considered partly a structure located in the prefrontal cortex87, as it is involved in decision making and the ordering of processes (ordinal nodes), and partly a hippocampal structure like the dentate gyrus (memory nodes), since it remembers visited locations. Its functionality may be seen as one of the underlying factors of the intelligence of an agent due to its higher cognitive processing capacity.88

86 Source: https://www.researchgate.net/figure/Sagittal-scheme-of-the-rat-brain-illustrating-hypocretinergic-influences-on-the-cerebral_fig3_26781445, 13.12.2018.
87 By 'prefrontal cortex', the dorsolateral prefrontal brain areas of primates are meant. However, rats – on whose brains a lot of the DFT is founded – also have areas in their frontal lobes that accord to the functions of a prefrontal cortex.
88 Cf. Brain structures involved in navigation, 2.3.1 (p. 32).


Exhibit 25. Serial order structure
A more detailed look at the serial order. The ordinal and memory nodes are serially numbered, where each number is associated with a behavior, represented by the color of the object as labeled in the simulated preshapes. The preshapes are one-dimensional neural fields like the action field, but are continually slightly stimulated by a Gaussian input with a specific center, representing the color.

The serial order consists of the memory nodes and the ordinal nodes. As explained in chapter 2.3.2.6 (p. 49), the self-excitatory connections (in green) of the ordinal nodes result in bistable dynamics, where a node is either active or not. The mutual global inhibitory connections (red) among the ordinal and memory nodes are necessary to prevent more than one node from being active at the same time. Each ordinal node is additionally inhibited by its associated memory node, making a reactivation of the same ordinal node impossible and hence ensuring the transition to the next ordinal node. The memory nodes are respectively activated by their associated ordinal nodes, and further propagate activity to the next ordinal node in the sequence, which facilitates its activation after the termination of the preceding action.

The QR Code to the right leads you to a video that demonstrates the ordered activation of one ordinal node after another. The video additionally shows how the structure is first activated by a manually controlled boost element that injects activity into the first ordinal node.

5. Serial Order


Each ordinal node stimulates a simulated preshape, a neural field that is spanned over a color hue dimension. A ridge of activity at a certain region determines which color the preshape represents. This ridge is established by a one-dimensional Gaussian kernel, where each "preshape" is excited by a kernel with a different center, representing the color. When stimulated by an ordinal node, this bump passes the threshold potential and forms a peak. This peak then stimulates a certain location in the action field, transmitting the information about what color is being looked for. In DFT theory, preshapes are actually formed during sequence learning and are able to store information by having slower decay dynamics. I tried incorporating real preshapes in my model as well, which however made the parameter tuning of the fields and connections a lot more complicated, a problem that will be further discussed in the results section (p. 81).

3.2.3 Perception

The following exhibit visualizes the structures involved in the perception of the surrounding environment. Said architecture may be looked at as a simplified model of structures in the visual cortex of the brain, processing allothetic visual information and further transmitting it to other areas of the brain.

Exhibit 26. Perception system
The green "bumps" represent the three-dimensional Gaussian kernels used to locate the different objects. The objects shown in the diamond shape have no influence on the cognitive system; they solely depict the objects visually.


In cedar, no cameras have yet been set up that can be used in the simulation, meaning the visual input about the surroundings has to be modelled artificially. The simulated camera consists of four three-dimensional Gaussian inputs and a three-dimensional neural field. The centers of the kernels state the positions of the objects in the virtual simulation, whereas the third dimension encodes the color of each object. All of the Gaussian kernels project to the camera field, resulting in a three-dimensional field that has four stabilized peaks at four different coordinates.

The camera field then excites the perception field, making it sensitive to input coming from the action field. In other words, the perception field shows four bumps of subthreshold activity. When a specific location in the action field is activated, it stimulates the color dimension of the perception field. This causes the bump at the same location along the color dimension to pass the threshold, and a peak in the perception field is established.

This peak represents not only the color of the target object, but also its location. The information about the location of the target object is then passed on to the position of target field, inducing a peak along the two dimensions encoding the environmental space; the peak hence represents the spatial coordinates of the target object. The peak in the perception field also sets off the move node, which is involved in establishing a movement plan, a process explained in the next chapter.

3.2.4 Kinematics

Exhibit 27 shows the structures that are involved in the movement planning and movement execution of the system.89 The kinematics system relies on functionalities of the (primary) motor cortex and the reward system, including the nucleus accumbens. However, fields like the current position and initial position field act as representations of the medial entorhinal cortices, whereas the position of target field has a similar function as the superficial entorhinal cortex, i.e. these fields act as hippocampal structures.90

89 Most of the kinematics structure is recreated after the architecture from Stephan K. U. Zibner et al. in [85], since it has proven to work very well, i.e. there is no need to reinvent the wheel. Additionally, the structure stays biologically plausible by implementing neural oscillators.
90 Cf. Brain structures involved in navigation, 2.3.1 (p. 32).


Exhibit 27. Kinematics
The orange-marked fields are responsible for the actual motoric movement, whereas the blue-colored field demonstrates the starting point of the cognitive system before each movement is undertaken. The green triangle represents the convolution and inversion process needed to form a movement plan.

The library cedar possesses pre-built elements to control, as well as gather data from, the motoric movement of the virtual robot. "Forward kinematics" is an element that can be used as a path integration tool, since its processed output gives an estimate of the current position of the robot, in my case the position of the robotic arm91. The current position is represented in another DNF, which projects to the initial position field.

The move node is, like the memory and ordinal nodes, governed by bistable dynamics and expresses the intention to move towards an object. It acts as a peak detector of the perception field, meaning it activates as soon as the perception field shows activity above threshold. The move node inhibits the current position field while it excites the excitatory and inhibitory oscillators. By inhibiting the current position field, the initial position field is no longer updated when the robot is set in motion, meaning the starting position of the robot is sustained whenever the robot moves to the target object. The initial position field is only updated again after the action has been finished. This property allows the formation of a movement plan, where the representation of the target object is transformed to a coordinate frame that is centered on the virtual robot.

For this transformation, the output peak in the position of target field is convolved with the inverted output of the initial position field. To recapitulate, the target position is aligned relative to the initial position of the robot, which allows the calculation of the vectors that need to be followed by the robot to reach the target object.

91 Again, a robotic arm was used since there was no possibility to implement the software on a rover-like robot.
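The following numpy sketch illustrates this coordinate transform on two-dimensional Gaussian peaks. Field size and peak widths are assumptions; in the actual architecture, cedar's convolution element performs this step.

import numpy as np
from scipy.signal import fftconvolve

# Convolving the target peak with the spatially inverted (flipped) initial
# position peak yields a peak whose offset from the field centre encodes
# the movement vector. Field size and widths are assumptions.
N = 50
def peak(pos, s=1.5):
    x = np.arange(N)
    return np.exp(-(x - pos)**2 / (2 * s**2))

target = np.outer(peak(40), peak(10))     # object at (40, 10)
initial = np.outer(peak(20), peak(30))    # robot starts at (20, 30)

rel = fftconvolve(target, initial[::-1, ::-1], mode='same')
iy, ix = np.unravel_index(rel.argmax(), rel.shape)
print(iy - N // 2, ix - N // 2)           # ~ (20, -20): move +20 in y, -20 in x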

The convolved output from the position of target and initial position fields is propagated to the excitatory and inhibitory oscillators, which are additionally stimulated by the move node. Neural oscillations are brainwaves that behave like rhythmic patterns of neural activity. It has been observed that alpha brainwaves of 8-12 Hertz and beta brainwaves of 13-30 Hertz are involved when humans make certain movements. It is therefore reasonable to interpret motor commands as travelling brainwaves, or cortical oscillations, or at least to acknowledge the involvement of oscillatory components in the motor cortex. [76] [77] The excitatory and inhibitory oscillators are spanned over a larger spatial area than the other DNFs, since they represent the relative distance between the initial and target positions of the robot. Activation is first created in the more quickly evolving excitatory oscillator, which is suppressed over time by the more slowly evolving inhibitory oscillator. From the excitatory oscillator, a velocity vector toward the target position is extracted, which is then processed by pseudo-inverse kinematics into a virtual joint velocity vector that steers the robotic arm towards the target. If the robot is at the object's position, the relative distance will be zero, which in turn forms a peak in the middle of the oscillator field. The peak in the excitatory oscillator then decays due to inhibition from the inhibitory oscillator.

Both oscillators are connected to peak detectors. The excitatory oscillator connects to peak detector 1, which strongly inhibits the activation of the CoS node. The inhibitory oscillator connects to peak detector 2, which in turn stimulates the CoS node. If the activity of the excitatory oscillator is suppressed below threshold, the CoS node is no longer inhibited and hence becomes activated, meaning that the behavior of moving to an object of a certain color has been executed successfully.
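Reduced to its logic, the gating of the CoS node by the two peak detectors can be sketched as follows (thresholds and field values are assumptions):

import numpy as np

# CoS gating sketch: the CoS node only turns on once the excitatory
# oscillator has dropped below threshold while the inhibitory oscillator
# still holds a peak. Threshold and field values are assumptions.
def peak_detected(field, threshold=0.0):
    return field.max() > threshold

def cos_active(exc_osc, inh_osc):
    # detector 1 (excitatory oscillator) strongly inhibits the CoS node;
    # detector 2 (inhibitory oscillator) provides its excitatory drive
    return peak_detected(inh_osc) and not peak_detected(exc_osc)

exc = np.full(50, -5.0)                   # excitatory oscillator suppressed
inh = np.full(50, -5.0); inh[25] = 2.0    # inhibitory oscillator holds a peak
print(cos_active(exc, inh))               # True: the movement is completed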

3.2.5 Experience Map

Exhibit 28 shows the experience map as well as the memory field and their associated stimulating structures. This structure has been inspired by the technique used in ratSLAM, where the agent generates a spatially continuous representation of its environment. Such a feature is a pivotal part of an artificial network when it comes to mirroring the formation of memories, and thus information storage, in biological organisms.


Exhibit 28. Experience map
Visualization of the experience map structures. The dotted line between the perception and initial position fields represents their indirect linking.

The experience map is spanned over three dimensions. Two dimensions represent the spatial area, or the locations of the objects, whereas the third dimension represents the color of said objects. The initial position and position of target fields both project to the end effector92/target match field. If the virtual robot moves to the correct object, the initial position field will become updated, since its preceding current position field is no longer inhibited. The initial position field will then be a representation of the location of the robot, matching the location of the peak in the position of target field, since the location of the robot and that of the specific object coincide. The established peak in the match field is then projected to the experience map and spread across the two spatial dimensions. For a peak to establish in the experience map, i.e. for the activity level to pass the threshold, additional input from the perception field is propagated to the experience map field. The location of this activity peak is representative of the color of the object and is mapped along the third dimension. This sets the activity level of the map field above threshold, and a self-stabilized peak in a three-dimensional space is established. The peak is then projected to a memory field, which acts like a preshape, except that the decay rate of activity is even slower, which means that the information about the location and color of the explored objects can be retained for a longer time. In chapter 5, I will discuss how this feature can be expanded and used for further projects.

92 Synonymous with the robot.
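A small numpy sketch of this binding step, under assumed field sizes and input strengths, shows how the location match and the color input must coincide before a supra-threshold peak can form in the three-dimensional map:

import numpy as np

# Experience map binding sketch: a 2-D location match alone stays below
# threshold; adding the color input from the perception field pushes one
# (x, y, hue) site above threshold. All sizes and values are assumptions.
n_xy, n_hue, h_rest = 30, 12, -5.0

def g(n, pos, s=1.5):
    x = np.arange(n)
    return np.exp(-(x - pos)**2 / (2 * s**2))

match_xy = 3.0 * np.outer(g(n_xy, 10), g(n_xy, 20))   # robot at object (10, 20)
color_in = 3.0 * g(n_hue, 7)                          # perceived hue ~ 7

u = h_rest + match_xy[:, :, None] + color_in[None, None, :]
peak = np.unravel_index(u.argmax(), u.shape)
print(peak, u.max() > 0)                  # (10, 20, 7) passes the threshold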


3.2.6 Condition of Satisfaction System

The exhibit below visualizes the structures involved in the termination of a behavior, namely the condition of satisfaction (CoS) system. This system may be viewed as a simplification of the reward system93, where the nucleus accumbens (here the CoS node) interacts with the prefrontal cortex (here the serial order structure) to decide when a behavior has been successfully carried out.

Exhibit 29. Condition of satisfaction system

With the CoS node inhibiting the entire serial order structure, the activity in every following field decays as well, since the preceding input disappears. This cascade of instabilities, marked by the red dashed line, eventually leads to the deactivation of the CoS node, which in turn releases the inhibition from the serial order (green dashed line).

As explained in chapter 3.2.4, the CoS node is activated when the inhibitory oscillator establishes a peak. In DFT, the CoS system is defined by a field sensitive to the action and perception fields. In my model, the terminal state of a behavior is marked by the coinciding locations of object and robot; hence, the CoS node is activated when a peak is detected in the inhibitory oscillator. The node inhibits the ordinal dimension by being connected to every ordinal node, so that a cascade of instabilities sets in. The decay of activity in the ordinal node leads to a deactivation of all of the following fields, since the ordinal node is the origin of the neural activity. Eventually, the CoS node is turned off, and the memory node enables the transition to the next ordinal node, which no longer receives inhibition from the CoS node. This activation then initiates the next behavior.
93 Cf. Brain structures involved in navigation 2.3.1 (p. 32).
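The cascade can be reconstructed in a few lines of Python. This is a sketch under assumptions of mine: all weights and time constants are invented, and the connectivity is reduced to the couplings named above, but the qualitative behavior — one ordinal node handing over to the next through its memory node once the CoS inhibition is released — matches the description.

import numpy as np

def sig(u):
    # sigmoided node output, squashed between 0 and 1
    return 1.0 / (1.0 + np.exp(-4.0 * u))

def step(u, inp, h=-5.0, tau=0.1, dt=0.01):
    # Euler step of  tau * du/dt = -u + h + inp
    return u + dt / tau * (-u + h + inp)

n = 3
u_ord = np.full(n, -5.0)
u_ord[0] = 1.0                     # a boost sets off the first ordinal node
u_mem = np.full(n, -5.0)

def tick(cos_on):
    s_o, s_m = sig(u_ord), sig(u_mem)
    for i in range(n):
        inp = 12.0 * s_o[i] - 3.0 * s_m[i]    # self-excitation, memory inhibition
        inp -= 6.0 * (s_o.sum() - s_o[i])     # global inhibition among ordinal nodes
        if i > 0:
            inp += 8.0 * s_m[i - 1]           # gating by the preceding memory node
        if cos_on:
            inp -= 20.0                       # inhibition from the active CoS node
        u_ord[i] = step(u_ord[i], inp)
        # memory nodes have slower, self-sustaining dynamics (larger tau)
        u_mem[i] = step(u_mem[i], 7.0 * s_m[i] + 8.0 * s_o[i], tau=0.5)

for cos_on in (False, True, False):           # run, CoS activates, CoS releases
    for _ in range(1500):
        tick(cos_on)
    print(np.round(sig(u_ord)), np.round(sig(u_mem)))
# -> roughly [1 0 0][1 0 0], then [0 0 0][1 0 0], then [0 1 0][1 1 0]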

3.3 Robotic demonstration

The built-in simulator in cedar provides a robotic arm overlooking a table top scene where the objects can be placed. The transition from simulated to real-world robotics should ideally function without any parameter tuning, i.e. if the artificial cognitive system fulfills its purpose in the simulator, it should also work in real life. Exhibit 30 shows the robotic arm, which is controlled by the architecture built with cedar, and its environment.

Exhibit 30. Robotic arm

The arm is able to move in three-dimensional space; however, in my model it only needs to move translationally, i.e. in two dimensions. The objects are placed within a 50x50 field directly above the panel, although they could also be visualized as floating objects if one wanted to test three-dimensional movement. It may be noted that the robotic arm facing in a horizontal direction is a bug94 within the cedar software; it does not respond to any input, nor does it prevent the architecture from executing successfully.
94 It is not so much a bug that the arm is simply useless, but rather that this arm should only be visualized when the program is serially or wirelessly connected to a real-life robot, which was not the case here.

3.3.1 Parameter Tuning

The most important and also most difficult part of constructing a cognitive system with cedar is tuning the parameters for the respective fields, nodes and connections. The parameters are based on the mathematical formulas examined in the chapter on Dynamic Field Theory (p. 47 and following).

Exhibit 31. Parameter tuning

The image to the left shows the parameter settings for the action field, and the image to the right shows the settings for a Gaussian kernel functioning as an input for the location and color of an object (red).

Exhibit 31 shows two settings panels that allow one to tune the properties of either a neural field, a neural node, or an external input such as the Gaussian input (right). All of the fields, except for the oscillators, have a dimension size of 50 in order to stay representative of the environment. The settings allow the tuning of the time constant, the resting level and also the lateral interactions, by modifying the global inhibitory connections or the lateral Gaussian kernels. The individual values may first be set to empirical values given by previous biological experiments, such as the general understanding that the threshold of a neural membrane lies at -55 mV95. Another option is to calculate the individual parameters mathematically by using the concept of Equation (15)96 and tuning some of the equation's variables.
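As a reminder of what these settings correspond to mathematically, the activation u(x, t) of a dynamic neural field evolves according to a dynamics of the general form used throughout DFT (the concrete variant, Equation (15), is given on p. 46):

\tau \, \dot{u}(x,t) = -u(x,t) + h + S(x,t) + \int w(x - x') \, \sigma\big(u(x',t)\big) \, dx'

Here \tau is the time constant, h the resting level, S(x,t) the external input and \sigma the sigmoid output function; the lateral interaction kernel w is typically a difference of Gaussians with a global inhibitory offset,

w(\Delta x) = c_{\mathrm{exc}} \, e^{-\Delta x^2 / (2\sigma_{\mathrm{exc}}^2)} - c_{\mathrm{inh}} \, e^{-\Delta x^2 / (2\sigma_{\mathrm{inh}}^2)} - c_{\mathrm{glob}},

so that each entry in the settings panels of Exhibit 31 maps onto one of these symbols.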

Exhibit 32 shows the processing steps that are sometimes needed so that the output of one field can become the input of another. This may mean projecting the output onto more or fewer dimensions, convolving it with a Gaussian kernel to get a smoother signal, or strengthening/weakening the junctions so that the target field receives sufficiently strong excitatory or inhibitory stimuli.
95 Cf. Threshold Potential and Refractory Periods 2.1.1.5, p. 11.
96 Depending on what kind of field is tuned, slightly different formulas have to be used (e.g. action field, memory node, etc.), cf. 2.3.2.5, p. 46.

Exhibit 32. Processing

The parameter settings demonstrate how Gaussian kernels can be modeled and convolved with the output of a field. The symbols below are examples of processing steps needed to transmit information from one field to another: the first element converts the dimension, the second convolves the output with a Gaussian kernel, and the last strengthens the connection, thus providing an excitatory input.
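A minimal numpy/scipy sketch of these three processing steps; the field size and the connection weight are hypothetical values chosen only for illustration:

import numpy as np
from scipy.ndimage import gaussian_filter

size = 50
out_1d = np.zeros(size)
out_1d[10] = 1.0                     # sigmoided output of a one-dimensional field

# 1. dimension conversion: expand the 1D color output into a 3D ridge
ridge_3d = np.broadcast_to(out_1d[None, None, :], (size, size, size))

# 2. convolution with a Gaussian kernel for a smoother signal
smooth = gaussian_filter(out_1d, sigma=2.0)

# 3. strengthening the junction so that the target field receives a
#    sufficiently strong excitatory stimulus (the factor 6.0 is hypothetical)
excitatory_input = 6.0 * smooth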

3.3.2 Zero Dimensional Nodes

Nodes are either activated, meaning they have a sigmoided activation level of one, or deactivated, showing no sigmoided activation, as demonstrated in Exhibit 33. The name 'zero-dimensional' may be confusing when looking at the plots given by the cedar software, since these show a graph spanned over two axes. In cedar, however, the activation level does not count as a dimension because it is not a feature parameter of the neural field. The graph depicted for zero-dimensional nodes is therefore a plot of the node's activity level over time.


Exhibit 33. Neural node

The first plot shows the activation of a node, while the second plot demonstrates the constant output value of one of an active ordinal node.

One can also observe the input a node receives, as shown in Exhibit 34. The activity is plotted against the horizontal time axis, with the vertical axis presenting the node's voltage potential in mV. Each differently colored graph represents a different source of activation, and the source can be identified by checking the strength of the input.

Exhibit 34. Activity in nodes

The plot shows the different inputs an ordinal node receives with their respective strengths in mV. For example, the blue negative input starting at -2.6 s is the output from the CoS node, inhibiting this node entirely due to its strong inhibitory effect of -20 mV.


3.3.3 One-dimensional Fields

The activity of one-dimensional fields is visualized by a peak along the dimension,

which is representative of the color spectrum.

Exhibit 35. One-dimensional field

The left plot shows the sigmoided activation, i.e. the output, of a one-dimensional field; the right plot depicts the input of the field, where the input at 0 is significantly stronger (20 mV) than the bump of activity at 10 (<1 mV).

Exhibit 35 shows the location of activation along the one-dimensional action field. Here, the center of the peak lies at 1, the location representing the color red. Such a peak is formed whenever the activity level at a certain location crosses the firing threshold, which in cedar is set at 0. In order to modify the threshold potential, one has to tune the resting level of the field, which can be done in the parameter settings, as shown in Exhibit 31. The plot on the bottom of Exhibit 35 depicts the input the one-dimensional field receives, where the vertical axis again represents the voltage potential in mV. The slight bump at 10 is the subthreshold input coming from the simulated preshape 2, which is representative of the color yellow.

3.3.4 Two-dimensional Fields

In my model the two-dimensional fields, like the position of target field shown in

Exhibit 36 and Exhibit 37, are representative of the location of an object, where a peak

marks said object’s position within the environment.


Exhibit 36. Two-dimensional field

Plots demonstrating the in- and outputs of a two-dimensional neural field. The color scale to the right of some fields references the strength of activity within a certain field; the scaling of said spectrums may change depending on how the parameters are tuned.
1. Two-dimensional input received from a preceding field.
2. Sigmoided output of the neural field (activity squashed between 0 and 1).
3. Output of the neural field depicted three-dimensionally.
4. Values regarding the simulated time frame, which are not relevant for my runs of the program.
5. All of the inputs summed; here this plot shows the same data as 1, since there is only one incoming input.
6. All of the lateral interactions between the active regions of the neural field. These interactions are formed by competitive attractor dynamics.97
7. One lateral kernel, showing the strength and form of the regional interactions.
8. Derivative of the activation of the field as in 3.
9. and 10. Simulated plots that show noisy input; however, I did not take a noisy environment into account when running the program.
97 Cf. RatSLAM: Simultaneous Localization and Mapping 2.3.2.1, p. 37.


Exhibit 36 above depicts the different plots showing the activation of a neural field. The second plot on the left shows a three-dimensional activation kernel, whose three-dimensionality stems from the activity level being plotted along a third axis. The Gaussian shape of the activation stems from the lateral interactions, as observable in the third plot on the left. The actual output is a sigmoided peak, shown in the first plot on the right. The plots presenting the field are color coded, where a color spectrum on the right assigns the highest and lowest voltage potentials within the field to the respective ends of the spectrum.
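The lateral kernel responsible for this Gaussian-shaped activation can be sketched as a difference of Gaussians; the amplitudes and widths below are hypothetical stand-ins for the values tuned in the cedar settings:

import numpy as np

def lateral_kernel(size=50, c_exc=1.0, s_exc=2.0, c_inh=0.5, s_inh=6.0):
    # difference-of-Gaussians ("Mexican hat") kernel: local excitation and
    # surround inhibition, the source of the self-stabilized peak shape
    x = np.arange(size) - size // 2
    xx, yy = np.meshgrid(x, x)
    d2 = xx**2 + yy**2
    return (c_exc * np.exp(-d2 / (2 * s_exc**2))
            - c_inh * np.exp(-d2 / (2 * s_inh**2)))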

Exhibit 37 shows the evolution of a two-dimensional field, in this case the position of

target field, when being excited by another neural field.

Exhibit 37. Activity in 2D neural fields

A.1 and A.2 show the evolution from unclear to clear input, whereas B depicts the sigmoided activation within the field, i.e. the peak. The numbers along the first two dimensions are the space coordinates; the numbers on the vertical axis are the values of the voltage potential. The received input is first subthreshold (A.1), but then passes the threshold (A.2) and establishes a peak. The output being transmitted to the next structure is shown as a sigmoided peak in B. In A.1, the darker colored spots (left) show the bumps (right) representing the locations of the four objects, of which one passes the threshold, depending on the input.
[QR Code 6: Activation 2D Field]


3.3.5 Three-dimensional Fields

In fields with three dimensions, the plotting is more difficult to decipher since the

additional activity dimension does not simply allow for a three-dimensional depiction

as in Exhibit 36.

Exhibit 38. Three-dimensional fields

Plots demonstrating the different interactions and activations within a three-dimensional field. The plot on the top left depicts input coming from a one-dimensional field, whereas the plot next to it on the top right shows the input coming from another three-dimensional field, hence the different displays. These simulated fields depict the same properties of a DNF as explained in Exhibit 37.

In Exhibit 38, the different plots for the in- and outputs of the three-dimensional perception field are shown. The field is divided into squares, which in reality are three-dimensional cubes, in this case encoding the two spatial dimensions and the one color dimension. The active locations are spread along a row of these squares in order to represent the activity level of the in- and output. The active site in the middle of said row marks the peak of the activation, which can be seen in the second plot on the right.


4 Results

In this section, the performance of my model will be evaluated by first looking at the functioning of the individual structures making up the architecture. Each structure has been tested in isolation, meaning the input a structure receives according to the theory behind the architecture has been artificially reconstructed, so that it can be controlled manually. Lastly, the overall performance of the cognitive system will be analyzed, i.e. how well the brain simulation functions when all of the basic structures are connected. Errors that occurred while running the simulation, as well as omissions of structures mentioned in subchapter 3.2, will then be explained in chapter 5.

As a reminder: the primary goal was to construct a cognitive system which enables a robot to autonomously navigate through an environment by means of the colors of the surrounding objects. By navigation, it is meant that a specific color is dictated to be looked for, and the robot then navigates towards the object of said color by using its working memory. As a secondary goal, the robot should be able to learn a sequence of colors by itself and thus execute the behaviors in the correct order.

I. Serial order structure

Figure 1.
A. shows the strength of the inputs the first ordinal node receives. The first positive activation (turquoise) is the boost element, setting off the structure. The purple line is the self-excitation; the first negative bump (blue) marks the inhibition from memory node 1. The second negative activation stems from the CoS node (blue), which also results in the deactivation of the node (decline of the purple graph). The third inhibition (green) comes from the now activated ordinal node 2 (global inhibitory connections).


B. shows the inputs of the first memory node on the left and its sigmoided activation on the right. The plot on the left depicts how the memory node is activated (self-excitation in yellow) after the ordinal node has become active (blue). The first negative input (green) stems from the second memory node, the second negative input (red) from the third memory node (global inhibition).

Figure 1 depicts the general input of the ordinal node on the left (A) and the input as well as the sigmoided activation of the memory node on the right (B). In plot A, it can be observed how the ordinal node is first set off by a manually activated boost with a strength of 5 mV, which then sets off the self-excitatory connection. The activated ordinal node stimulates the memory node, which in turn propagates inhibition back to the associated ordinal node. The activation of the memory node is transformed by a sigmoid function and then sustained due to its slower dynamics, which lead to a slower decay in activity whenever the ordinal node becomes deactivated. The deactivation of the ordinal node occurs with the activation of the CoS node, which inhibits all ordinal nodes with an input of -20 mV, marked by the second negative input in A. When the inhibition from the CoS node is released, the next ordinal node is activated by the previous memory node. The first ordinal node is then inhibited from reactivating not only by its memory node, but also by the global inhibitory connection coming from the succeeding ordinal node. The QR Code to the right leads you to a video of the transitions within the serial order caused by the CoS node.
[QR Code 7: Serial Order - CoS]


Figure 2.
1-3 show the transition from the first to the second ordinal node due to the manually activated CoS node. 4 depicts the successful run of the serial order, where the memory nodes remain activated. The red dots signal when a node is active, whereas the white dots stand for a non-active node.

Tested in isolation, the serial order structure undergoes this process smoothly, with each ordinal node activating one after another while the memory nodes sustain their activity level. Figure 2 depicts how the ordinal nodes become inactive with the manual activation of the CoS node, whereas the associated memory nodes stay active. The manual deactivation of the CoS node then leads to the transition to the next node, until the last node becomes activated and the last behavior is terminated. This isolated structure hence shows a successful implementation of serial sequence learning, which was discussed in subchapter 2.2.2.3.

Another functionality to be tested was the activation of the action field by the associated ordinal node. Figure 3 demonstrates how the excitation coming from the ordinal node boosts the activation bump above the threshold of recognition, eliciting a peak at a specific location along the action field. A.1 and B.1 mark how the occurrence of the respective peaks in the action field coincides with the activity of the ordinal node associated with a certain color, in this case associated with the simulated preshapes. The bumps along the one-dimensional action fields as in A.2 and B.2 are induced by the simulated preshapes, representing the colors that should be looked for. The plots show how this structure functions successfully when the serial order is manually controlled via the CoS node. The QR Code on the left shows a video demonstrating this function.
[QR Code 8: Serial Order - Action Field]

Figure 3.
A. and B. show the activation of the first and second ordinal node, hence the activation of the associated simulated preshapes and the action field. The location of the activated region is plotted in A.1 and B.1, where the peak in A.1 is representative of the color red and the peak in B.1 of the color yellow. A.2 and B.2 then demonstrate the slight bump in activity level whenever no input from the ordinal node excites the action field. These bumps result from excitation from the false preshape fields98, their different locations representing the value of the looked-for color.
98 By 'false' it is meant that the preshapes are manually controlled, rather than being an autonomous structure (cf. 3.2, p. 60).

II. Action field

In Figure 4, the activation and sigmoided activation in the action field are depicted for each color represented by an object in the environment. The different locations of the peaks represent the different colors, meaning that the objects can be effectively distinguished from one another and that there is no ambiguity. The action field receives one-dimensional input from the simulated preshapes, which in turn are stimulated by a three-dimensional Gaussian input, where the coordinate of the third center is representative of the color. Said value of the center then becomes the center of the peak induced in the action field, as visible on the left in Figure 4.

Figure 4.
The four groups of plots (red, yellow, green, blue) show the location of the activation peak in the action field on the left, and the activation of the position of target field on the right, where the top plot is a two-dimensional and the bottom plot a three-dimensional depiction. The activation in the position field is shaped as a Gaussian kernel due to the lateral interactions of the field.

The QR Code on the right leads you to a video that shows how the position of target field is activated whenever a peak forms in the action and perception field. The location represented in the position of target field is the location of the green object.
[QR Code 9: Color green]

The action field also has to stimulate the perception field in order to communicate which object should be moved toward. This connectivity is shown in Figure 5, where the peak in the action field lies at 1, the location encoding the color red. This output is then spanned onto a three-dimensional field, as seen in the middle, which stimulates the perception field. This input is needed to push the perception field above threshold, which is further examined in part III. The sigmoided activity in the perception field hence represents the location as well as the color of the target object. This shows that the modeled perception field can be seen as a neural circuit in the visual cortex accounting for the processing of visual stimuli, which then act as allothetic information for the place fields in the hippocampal structures.99
99 Cf. Exhibit 13. A simplified anatomical model, p. 36.


Figure 5.
A. shows the plot of the sigmoided activation of ordinal node 1 and the resulting location of activation in the action field. B. demonstrates how said peak is spanned over three dimensions and projected to the perception field. C. shows the output of the perception field, where the sigmoided activation is representative of the location and color of the target object.

III. Camera and perception

The camera receives four different inputs, as depicted on the left in Figure 6. These inputs are four three-dimensional Gaussian kernels, where two of the centers' coordinates mark the position in the two-dimensional space of the environment, while the third is symbolic of the object's color. The plot on the right, showing the sigmoided activation of the camera, further demonstrates how the peak of each Gaussian input is represented separately, differentiating it from the other stimuli. There are no overlaps in the locations of the individual peaks, meaning there is no ambiguity about the location or the color of the objects 'sensed'100 by the camera.
100 Quotation marks since it is not a real camera and there are no actual objects visually sensed in the virtual environment.
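Such an input can be sketched in numpy as follows; the object coordinates are taken loosely from the plots, while the widths and amplitudes are hypothetical:

import numpy as np

size = 50

def gaussian_input(center, sigma=2.0, amplitude=6.0):
    # 3D Gaussian kernel; center = (x, y, color index)
    grids = np.meshgrid(*[np.arange(size)] * 3, indexing="ij")
    d2 = sum((g - c) ** 2 for g, c in zip(grids, center))
    return amplitude * np.exp(-d2 / (2 * sigma**2))

# four objects: two spatial coordinates plus one color coordinate each
camera_input = sum(gaussian_input(c) for c in
                   [(10, 10, 1), (10, 40, 10), (40, 10, 25), (40, 40, 40)])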


Figure 6.
A. shows the three-dimensional input the camera field receives from the Gaussian kernels. B. demonstrates the resulting output, namely four different representations of four different locations. The numbers correspond to which perceptual input results in which output from the camera field.

The perception field has to receive input from both the camera field and the action field in order to establish a peak. As demonstrated in Figure 7, the perception field is excited by the output of the camera, meaning a bump along the perception field is formed for every object, i.e. for every Gaussian kernel acting as input to the simulated camera. It takes the additional activation from the action field, seen on the left of the figure, to pass the threshold and establish a peak.
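Continuing the sketch above, the following lines illustrate this 'AND' behavior: the camera input alone stays subthreshold, and only the additional color ridge from the action field pushes the matching object above threshold (weights again hypothetical):

h = -5.0                                # resting level of the perception field
action_ridge = np.zeros(size)
action_ridge[1] = 3.0                   # ridge encoding the color red

u = h + 0.5 * camera_input              # camera alone: bumps peaking at -2.0
u += action_ridge[None, None, :]        # add the color ridge from the action field

print(np.argwhere(u > 0))               # peaks only around (10, 10, color 1)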

Figure 7.



The top group of plots (blue) depicts the locations of activation for the object of the color blue, whereas the lower group (red) shows the activated locations for the color red. The different positions of activation in the position of target field are representative of the spatial coordinates of the different objects and further demonstrate how the input coming from the action field specifies which object to look for.

Said three-dimensional self-stabilized peak, seen in the bottom left plot, is used to activate the position of target field. The two spatial dimensions of the perception field are projected to the position of target field, which therefore represents the location coordinates of the target object. The two groups of plots in Figure 7 show the activations in each field for the color blue (top) and red (bottom), respectively. The activation caused in the position of target field confirms that the objects have been differentiated by their colors, hence resulting in different locations.

IV. Position field and movement

Below, the connectivity between the action, perception and position of target field is demonstrated by their respective activity levels. The localized peak in the action field determines which object is chosen to move to in the perception field, which in turn propagates the information about the location of said object to the position of target field.


Figure 8.
This group of plots demonstrates how the peak in the action field "decides" which object's location is further transmitted to the position of target field, marking the object that has to be moved to.

This peak in the position of target field is needed to inform the system where to move. Figure 9 shows the process of a movement plan, where the position of the target object is offset against the initial position of the robotic arm.

Figure 9.
Plots 1-4 show the activation process of the excitatory oscillator. The peaks in the position of target and initial position field are plotted on the left. The field on the top right in 3 depicts how the location of the activation kernel is shifted as the robotic arm moves, resulting in a centered location as in 4, when the locations of the robot and the object coincide, as observable in the location of the peaks on the left in 4. The change of color in the excitatory oscillator field between 3 and 4 is due to its oscillatory effect, meaning its activation levels are reversed.


The QR Code on the left takes you to a video demonstrating the process of Figure 9.
[QR Code 10: Movement plan]

The group of plots in Figure 9 depicts the sigmoided activation of the position of target and the initial position field, as well as the activation and sigmoided activation of the excitatory oscillator. At first (1.), the excitatory oscillator only receives input from the initial position field, causing unclear activation. The oscillator is then additionally stimulated by the position of target field (2.), which leads to a definite activation. The location of the activation peak in (2.) is representative of the relative distance between the position of the robotic arm and the position of the target object. In (3.), one can observe how this activation shifts towards the center of the field, meaning the motor system of the robotic arm is stimulated to move towards the object. The activation peak in the excitatory oscillator is sustained as long as the position of the target and the position of the robotic arm do not coincide. The last plot (4.) in Figure 9 shows that when those positions do coincide, the activation of the excitatory oscillator is centered, meaning there is no distance between the two coordinates. There is, however, no peak in the oscillator field, since the robot no longer has to move.
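The closed loop that this movement plan implements can be sketched in a few lines of Python. This is a hypothetical reconstruction, not the cedar implementation: the peak of the distance field sits at (target - robot), shifted so that zero distance lies at the field's center, and the motor command simply drives that peak toward the center.

import numpy as np

size, center = 50, 25

def distance_field(target, robot):
    # a peak at the relative distance, shifted so that zero distance
    # corresponds to the center of the field
    u = np.zeros((size, size))
    dx = int(np.clip(target[0] - robot[0] + center, 0, size - 1))
    dy = int(np.clip(target[1] - robot[1] + center, 0, size - 1))
    u[dx, dy] = 1.0
    return u

robot = np.array([5.0, 5.0])
target = np.array([20.0, 35.0])
for _ in range(100):
    peak = np.unravel_index(distance_field(target, robot).argmax(), (size, size))
    robot += 0.2 * (np.array(peak) - center)  # drive the peak toward the center

print(robot)  # has converged close to the target position (20, 35)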

The excitatory oscillator is inhibited by the inhibitory oscillator, which is shown in Figure 10 below as well as in a video, accessible via the QR Code on the right. Whenever the excitatory oscillator shows a peak of activity in its field's center, the inhibitory oscillator is activated. The inhibitory oscillator is not set off by some sort of peak detector that detects whenever the activity peak is centered within the excitatory oscillator field; it receives the same input as the excitatory oscillator, namely from the initial position and position of target field. However, its field's dynamics have to be slowed down in such a way that it becomes active whenever the position of the target and the position of the robotic arm coincide. This can be problematic when the execution of behaviors takes different amounts of time. In my model, the inhibitory oscillator has been tuned so that there is surely enough time to execute an action, though this technique leads to pauses between actions.
[QR Code 11: Oscillators]


Figure 10.
1 shows how the inhibitory oscillator becomes active whenever the peak in the excitatory oscillator is centered. 2 demonstrates the oscillatory effect of the inhibitory on the excitatory field, where activity levels reverse, marked by the color coding (to the right of each plot) of the activation level.

In Figure 10, the plots on the bottom right show why these fields are called oscillators. The excitatory oscillator is inhibited in such a way that it reverses its activity level, i.e. it oscillates, with the peak becoming the minimum. The whole field, however, is subthreshold, so that peak detector 1 becomes deactivated. With the deactivation of peak detector 1 and the activation of peak detector 2, the CoS node is activated, leading to the cascade of instabilities.

Figure 11 demonstrates the cascade of activation in the three nodes. However, it can be observed that the signal coming from the oscillators is not quite stable at first, since both peak detectors oscillate as well, resulting in a delayed activation of the CoS node. Nevertheless, this inconsistency did not seem to interfere too much with the functioning of the system, since it only seemed to cause a slightly delayed deactivation of the current ordinal node.

Figure 11.
The group of plots shows the activity levels of the peak detectors on the left and the activity level of the CoS node on the right. Peak detector 1 (top) is stimulated by the excitatory oscillator, whereas peak detector 2 (bottom) is excited by the inhibitory oscillator.


V. Overall performance

While the individual structures necessary to form this artificial cognitive system fulfilled their tasks successfully when run in isolation or connected to only one or two other structures, my model was not able to perform the task of autonomously generating four behaviors in a given/learned sequence. This means that when all of the structures were connected, the robot did not execute one behavior after another according to the dictated sequence.

Figure 12 demonstrates this problem: the robotic arm solely executes the first action, namely finding and moving towards the red object, but is unable to perform the next task. Looking at (1.), the first ordinal node is activated, thus activating a specific location in the action field encoding the color red. This indirectly leads to the robotic arm moving towards the red object. (2.) shows how ordinal node 1 is inhibited by the CoS node, resulting in the decay of the peak in the action field and further leading to the transition to ordinal node 2. Whenever the second ordinal node becomes active, the action field is again excited, now at the location encoding yellow. It becomes clear in (3.) that the robotic arm does not react to the activity in the action field and that ordinal node 2 is deactivated shortly after its activation. The QR Code below takes you to a video showing the faulty behavior of the robotic arm.

Figure 12.
1 shows how the robotic arm moves towards the red object with the activation of the first ordinal node. 2 then demonstrates the transition to the next ordinal node, which is associated with the color yellow. In 3, the action field shows that the robot receives input about the yellow object, but it also shows how the robotic arm does not move towards said object.
[QR Code 12: Overall Performance 1]

However, the robot is able to recognize each one of the four colors. The top image of Figure 13 shows the architecture connected in a way that enables the robot to autonomously move to one object of a specific color. It must be noted that only one simulated preshape is connected to the serial order, meaning the robotic arm cannot conduct a series of behaviors, but only a single one. The behavior of moving to a certain object functions independently of the specific Gaussian input given to the simulated preshape: the robotic arm is able to move to the four different locations of the four differently colored objects whenever a specific color is dictated via the simulated preshape.

Figure 13.
A. shows the architecture which enables a functioning artificial cognitive system. A.1 depicts the autonomous movement of the robotic arm toward the red colored object, which is also the first behavior of the sequence. A.2 shows how the robotic arm is also able to move toward the other colored objects when the Gaussian input for the simulated preshape is altered.

A.1 demonstrates the arm's movement according to the activated neural fields in A. The lower image to the right, A.2, shows the arm's movement towards the other objects. This means that the structures involved in location and color recognition work, as well as the calculation of the relative distance from one location to another.

The QR Codes below lead to videos demonstrating the robot detecting the correct color and moving towards said colored object. The QR Code on the left shows this movement for every color, whereas the QR Code on the right also shows the cascade of activation along the connected neural fields.
[QR Code 13: Overall Performance 2] [QR Code 14: Functioning Brain Architecture]


5 Discussion

The results have shown that the cognitive system is able to maintain information about different sensory inputs, such as the color of an object or the current location of the robotic arm, which enables the system to autonomously navigate towards a specified object in a previously unknown environment. However, the system is not yet able to learn a sequence of such behaviors, nor can it autonomously execute such a series of actions, as further discussed below in 5.2. Subchapter 5.1 will analyze the errors of the brain simulation as well as the omissions of structures mentioned in subchapter 3.2. While the cognitive system shows properties resembling working memory, it has yet to perform higher-level processes involving long-term memory, an ability that could be realized by the experience map structure proposed in chapter 3.2.5. In chapter 5.3 below, further expansions relating to memory are discussed.

The proposed brain simulation is, to my knowledge, currently one of the first architectures built with cedar that try to implement serial order in a dynamic behavior. Although the behaviors could not be run sequentially, the model still uses preshape-like structures and a CoS field that could enable sequence learning. The theory behind my model further tries to correlate the experience map feature of the ratSLAM model [67] with the DFT framework [71] in order to further incorporate dynamic memory processing in a cognitive system.

5.1 Error Analysis

The biggest problem of the brain simulation was the inability to execute multiple behaviors in a distinct order. As analyzed in the results section, the robotic arm would rest at the position of the first colored object and not react to the different inputs it received from the action/perception field when the ordinal nodes transitioned. Looking further into the problem, a faulty behavior of the position of target field could be detected.


Figure 14.
1 shows the activation of the color red and the resulting location of the object represented in the position of target field. In 2, the color yellow is activated; however, the location of the peak in the position of target field remains the same. In 3, the second ordinal node is deactivated, leading to the immediate decay of activity in the action field and a slightly slower decay in the perception field. While the perception field still shows slight activity, the position of target field is already completely deactivated, suggesting an inconsistency in the time scaling of the fields.

Figure 14 above demonstrates a problem that arose when trying to run two behaviors in their sequential order. In (1.), the location of the red object is represented in the position of target field, since this is also the color represented by the location of the peak in the action field. In (2.), the second ordinal node becomes active, inducing a peak in the action field encoding the color yellow. On the top right, one can see that the perception field is also transmitting the location of the yellow object, since its peak shifted with the change of input coming from the action field. However, the peak in the position of target field is sustained at the same location, meaning the field still represents the location of the red object. Because the robotic arm is already positioned at the location of the red object, the action is perceived as terminated and the CoS node is initiated, causing the activity in ordinal node 2 to decay, as seen in (3.). When looking closely at the activity in the perception field in (3.), one can observe that its activation has not completely decayed, contrasting the full decay of the activation peak in the position of target field. One may interpret this plot as an indication that the time scale of the position of target field has not been tuned correctly, hence the delayed reaction to the missing stimulation coming from the deactivated ordinal node 1. Given more time, this problem could be fixed by tuning the time scales of the perception and position of target fields. The QR Code on the right takes you to a video that shows this faulty activation and deactivation of the position of target field.
[QR Code 15: Error analysis]
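The mismatch can be made precise with the field dynamics: once the external input S vanishes and lateral interactions are negligible, a field relaxes back to its resting level as

u(x,t) = h + \big(u(x,0) - h\big) \, e^{-t/\tau},

so two coupled fields with different time constants \tau lose their peaks at visibly different rates. Matching, or deliberately ordering, the \tau values of the perception and position of target fields should therefore remove the observed inconsistency.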


Omissions: A problem I faced while constructing the architecture was implementing a move node that inhibited the current position field from updating the initial position field during movement of the robotic arm. This feature was supposed to ensure that the relative distance between the location of the object and the location of the robotic arm stays the same during a movement. Since it was rather time-consuming to tune the node in a way that it only inhibited the current position field, but not the initial position field, I dropped this feature entirely. The arm still moves towards the object of a specific color, but especially when moving to the red object, it can be observed that the movement is not executed smoothly but is briefly discontinued, which may stem from the omission of the move node.

Another two structures, namely the experience map and the preshape, introduced in chapter 3.2, were neglected in the simulation. Implementing preshapes, and therefore enabling the system to learn a sequence of colors, would mean constructing an additional "learning loop" that would have to be active before the generation of behaviors takes place. Such a loop takes a lot of parameter tuning and quickly becomes rather complex due to the intertwined connectivity. For the experience map, however, it should be relatively simple to construct a neural field that peaks whenever the peaks in the position of target and initial position field coincide, and then projects to a three-dimensional experience map. The experience map would further receive input from the color dimension of the perception field, in order to establish a representation of the color of the located object. Due to my time restrictions, I was not able to tune the respective strengths of the connections to realize a stable experience map and its associated memory field.

5.2 Learning

The sequence learning aspect of the architecture, where the perception field stimulates the action field and hence enables the learning of a sequence of the colors which later have to be executed, falls short in the robotic demonstration. It would demand an additional structure that would also have to be set off before the serial order, making it a little more complex to build. However, learning is a vital factor for autonomous and intelligent systems, which is why it is of interest to further focus on this feature in my model.

Another idea to expand the learning abilities of the system would be to incorporate reinforcement learning. By using reinforcement techniques, one could build a cognitive model with a certain value system, where, for example, the model is inhibited from moving to objects of one color but encouraged to move to objects of another. Such a feature would also mimic biological reward-based learning involving the nucleus accumbens. Furthermore, it could pose as a useful property when the agent has to navigate in larger environments where certain objects must be avoided. A practical example of this is autonomous vehicles that have learned to avoid certain obstacles, for instance objects that are shaped like humans.

5.3 Expansion and Development

5.3.1 Exploration Feature

In my model, the robotic arm moves directly to the location of the object. A reasonable feature to implement next would be an "exploration feature", where the robot roams freely in an environment, searching for the object of a certain color. When the object is detected, the robot moves towards it and remembers its placement within the environment. A rover-like robot, such as the Khepera robot, lends itself to that kind of roaming, since one could also use a head direction system to navigate the environment. An exploration feature would also call for an obstacle avoidance structure, which prevents the agent from running into any obstacles. Using the heading direction of the robot (hence the importance of a head direction system) and the turning rate of the vehicle, one can build software that represents an object either as an attractor or a repellor, consequently steering the robot either toward or away from the object.101
101 In the paper by G. Schöner et al. [86], these behavioral dynamics are explained in more detail.
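One common formulation of these behavioral dynamics, following the attractor/repellor picture described by Schöner et al. [86] (the parameter names here are generic placeholders), lets the heading direction \varphi relax toward the target direction \psi_{\mathrm{tar}} while being pushed away from the obstacle direction \psi_{\mathrm{obs}}:

\dot{\varphi} = -\lambda_{\mathrm{tar}} \, \sin(\varphi - \psi_{\mathrm{tar}}) + \lambda_{\mathrm{obs}} \, (\varphi - \psi_{\mathrm{obs}}) \, e^{-(\varphi - \psi_{\mathrm{obs}})^2 / (2\sigma^2)}

The first term is an attractor at \psi_{\mathrm{tar}}, the second a repellor with limited angular range \sigma around \psi_{\mathrm{obs}}; the \lambda values set the relative strength of target acquisition and obstacle avoidance.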

5.3.2 Memory Field

As previously mentioned, the experience map and memory field structures of the artificial cognitive model have great potential if implemented. In a later project, the memory field of the experience map could be connected to the serial order in a way that lets the cognitive system resort to the stored information about the locations and colors of any objects. By expanding the memory capacity, the cognitive system can become more independent and may even be able to employ a greater range of behaviors. The QR Code on the right leads you to notes that visualize the idea of an expansion of the memory field.
[QR Code 16: Memory Field]

Going back to the previous example of autonomous vehicles, such a feature could be used to tell the GPS system of the car where it should drive to. In this example, an already registered town sign can be viewed as the manually demanded color, to which the vehicle's system then assigns specific geographic coordinates, to which it is able to drive autonomously thanks to its kinematics system (movement plan).102

5.3.3 Dynamic Vision Sensor

The Dynamic Vision Sensor (DVS) is a camera developed by the Swiss company iniLabs. It behaves like the human retina, in that only local pixel changes that occur during movement are transmitted. This means that successive image frames containing redundant information do not have to be processed; only punctual fluctuations in brightness are used to create a dynamic view of the environment. Said property of the DVS results in a stream of events at microsecond time resolution, additionally saving data storage, power and computational processing.103

It would be interesting to implement this sensor in an artificial cognitive system like mine, since it would further underline its biological plausibility and also allow for a more dynamic visual system that is able to react more flexibly to environmental change.

102 Such a system would obviously be a lot more complicated; however, I wanted to emphasize how dynamic neural systems can be applied in real-life and cutting-edge technologies.
103 Details from: https://phys.org/news/2013-08-dynamic-vision-sensor-tech-human.html, 20.10.18.


6 Conclusion

My artificial cognitive system demonstrates how an artificial system can be efficiently constructed by relying on biologically inspired mechanisms, even if it could not perform all of the higher-level cognitive abilities first proposed. Modelling the system also demonstrates the complexity of the neural processing behind seemingly simple tasks, such as color-based navigation. However, new software packages like Brian, NEST and cedar are valuable tools for implementing these complex biological processes in a computational system.

When I started this thesis seven months ago, I did not know anything about spiking neural networks and was only superficially acquainted with AI. Constructing my own artificial cognitive system allowed me to dive into the world of neurons and synapses and come to understand processes that happen in our brains without us realizing it. By confronting myself with the mathematical formulations – which at first seemed impossible to comprehend – and trying to break down these processes to their core, I was able to retrace the theory behind computational models of neurons and brain structures. The deeper I dug, the more parallels I could draw between technological and biological processes.

Even though my initially high aspirations proved too demanding for the time available and for my limited pre-existing knowledge, I was able to gain a lot of insight into cognitive systems, both artificial and natural. I further became even more interested in neuroinformatics, a field in which I can picture myself further expanding my program or even working on other projects, right at the crossroads of biology and technology.


Acknowledgements

First and foremost, I want to thank my supervisor Katarina Gromova for encouraging me to pursue my ideas, even when their realization seemed out of reach. I also want to thank her for reassuring me in times when nothing seemed to work and for motivating me, with her positive manner, to push through those times.

An immense thank you also goes out to Yulia Sandamirskaya and Raphaela Kreiser, who invested a lot of time and resources to support me during those months. Working with them has given me the opportunity to encounter an area of research that I can also engage in in the future. I further want to thank Alpha Renner, Mathis Richter and Jan Tekülve for helping me reconstruct and tune my simulation with cedar.

I am thankful for how young adults are encouraged to pursue scientific research and to embark on new topics by being provided with time, assistance and resources otherwise reserved for actual scientists.

Last but not least, I want to thank my father, Ueli Eckhardt, who sacrificed several sunny spring days by sitting with me behind our laptops while trying to figure out new software systems and various programs, and who also took the time to read my thesis and provide me with constructive criticism.


Bibliography

[1] Khan Academy, "Khan Academy," [Online]. Available:

https://www.khanacademy.org/science/biology/human-biology/neuron-

nervous-system/a/overview-of-neuron-structure-and-function . [Accessed 22

August 2018].

[2] National Institute of Neurological Disorders and Stroke, "NINDS online," 16 July

2018. [Online]. Available: https://www.ninds.nih.gov/Disorders/Patient-Caregiver-

Education/life-and-death-neuron. [Accessed 22 August 2018].

[3] Wikipedia, "Wikipedia," [Online]. Available: https://en.wikipedia.org/wiki/Synapse.

[Accessed 23 August 2018].

[4] Lumen Learning, "Lumen Learning," [Online]. Available:

https://courses.lumenlearning.com/wm-biology2/chapter/chemical-and-

electrical-synapses/. [Accessed 23 August 2018].

[5] Khan Academy, "Khan Academy," [Online]. Available:

https://www.khanacademy.org/test-prep/mcat/organ-systems/neuron-

membrane-potentials/a/neuron-action-potentials-the-creation-of-a-brain-

signal. [Accessed 23 August 2018].

[6] S. D. Erulkar and T. L. Lentz, "Encyclopaedia Britannica," 29 January 2018. [Online].

Available: https://www.britannica.com/science/nervous-system/Active-transport-

the-sodium-potassium-pump. [Accessed 24 August 2018].

[7] H. Lodish, A. Berk and S. L. Zipursky, Molecular Cell Biology. 4th edition, New York:

W. H. Freeman, 2000.

[8] Khan Academy, "Khan Academy," [Online]. Available:

https://www.khanacademy.org/test-prep/mcat/organ-systems/neuron-

membrane-potentials/a/neuron-action-potentials-the-creation-of-a-brain-

signal. [Accessed 25 August 2018].

[9] Wikipedia, "Wikipedia," [Online]. Available: https://en.wikipedia.org/wiki/Ligand-

gated_ion_channel. [Accessed 25 August 2018].

[10] Annenberg Learner, "Learner," [Online]. Available:

https://www.learner.org/courses/biology/textbook/neuro/neuro_4.html.

[Accessed 25 August 2018].

Page 109: An Artificial Cognitive System for ... - Impuls Mittelschule · The ability to retain knowledge and applying it to new experiences enables broader networking of the neural cells and

104

[11] S. D. Erulkar and T. L. Lentz, "Encyclopaedia Britannica," 29 January 2018. [Online]. Available: https://www.britannica.com/science/nervous-system/The-neuronal-membrane. [Accessed 25 August 2018].
[12] Annenberg Learner, "Learner," [Online]. Available: https://www.learner.org/courses/biology/textbook/neuro/neuro_7.html. [Accessed 26 August 2018].
[13] Biologie-Schule.de, "Biologie-Schule.de," [Online]. Available: http://www.biologie-schule.de/epsp-ipsp.php. [Accessed 26 August 2018].
[14] Multimedia Neuroscience Education Project, "Williams," [Online]. Available: https://web.williams.edu/imput/synapse/pages/IV.html. [Accessed 26 August 2018].
[15] Wikipedia, "Wikipedia," [Online]. Available: https://en.wikipedia.org/wiki/Spiking_neural_network. [Accessed 13 September 2018].
[16] D. Soni, "Towards Data Science," 10 January 2018. [Online]. Available: https://towardsdatascience.com/spiking-neural-networks-the-next-generation-of-machine-learning-84e167f4eb2b. [Accessed 9 August 2018].
[17] A. Shekhar, "Mindorks," 14 April 2018. [Online]. Available: https://medium.com/mindorks/understanding-the-recurrent-neural-network-44d593f112a2. [Accessed 9 August 2018].
[18] D. Cornelisse, "freeCodeCamp," 24 April 2018. [Online]. Available: https://medium.freecodecamp.org/an-intuitive-guide-to-convolutional-neural-networks-260c2de0a050. [Accessed 9 August 2018].
[19] W. Gerstner, W. M. Kistler, R. Naud and L. Paninski, "Integrate-and-Fire Models," in Neuronal Dynamics Online Book, Cambridge: Cambridge University Press, 2014, ch. 1.3.
[20] F. Ponulak and A. Kasinski, "Introduction to spiking neural networks: Information processing, learning and applications," Acta Neurobiologiae Experimentalis, pp. 409-433, 8 January 2014.
[21] Wikipedia, "Wikipedia," [Online]. Available: https://en.wikipedia.org/wiki/Dirac_delta_function. [Accessed 13 September 2018].


[22] W. J. Heitler, "Dr. Bill Heitler: University of St Andrews," 6 July 2007. [Online]. Available: https://www.st-andrews.ac.uk/~wjh/hh_model_intro/. [Accessed 17 October 2018].
[23] Wikipedia, "Wikipedia," [Online]. Available: https://en.wikipedia.org/wiki/Synaptic_plasticity. [Accessed 7 September 2018].
[24] M. R. Klein, "Donald Olding Hebb," Scholarpedia, vol. 6, no. 4, p. 3719, 2011.
[25] Wikipedia, "Wikipedia," [Online]. Available: https://en.wikipedia.org/wiki/Hebbian_theory. [Accessed 7 September 2018].
[26] H. Z. Shouval, "Models of synaptic plasticity," Scholarpedia, vol. 2, no. 7, p. 1605, 2007.
[27] W. Senn and J. Pfister, "Spike-Timing Dependent Plasticity, Learning Rules," Encyclopedia of Computational Neuroscience, 14 August 2014.
[28] Goldman, "Neuroscience UC Davis: The Goldman Lab," [Online]. Available: http://neuroscience.ucdavis.edu/goldman/Tutorials_files/Integrate%26Fire.pdf.
[29] B. Skaggs, "Quora," 2 September 2014. [Online]. Available: https://www.quora.com/What-is-the-difference-between-a-membrane-time-constant-and-a-synaptic-time-constant-Which-is-higher-and-which-is-lower-What-is-the-effect-of-each-time-constant-on-the-postsynaptic-potential-An-answer-for-a-novice-neuroscientist-woul. [Accessed 10 August 2018].
[30] D. R. Curtis and J. C. Eccles, "The time courses of excitatory and inhibitory synaptic actions," Journal of Physiology, pp. 529-546, 1959.
[31] M. Castellano and G. Pipa, "Memory Trace in Spiking Neural Networks," in ICANN 2013: Artificial Neural Networks and Machine Learning, Sofia, Bulgaria, 2013.
[32] Wikipedia, "Wikipedia," [Online]. Available: https://en.wikipedia.org/wiki/Chemical_synapse#Synaptic_strength. [Accessed 7 September 2018].
[33] M. L. Blanke and A. M. VanDongen, "Activation Mechanisms of the NMDA Receptor," in Biology of the NMDA Receptor, Boca Raton, FL: CRC Press, Taylor & Francis Group, 2009.
[34] C. Lüscher and R. C. Malenka, "NMDA Receptor-Dependent Long-Term Potentiation and Long-Term Depression (LTP/LTD)," Cold Spring Harbor Perspectives in Biology, 12 June 2012.


[35] R. H. Hall, "Long Term Potentiation," 1998.
[36] Wikipedia, "CREB".
[37] S. Kida, "A Functional Role for CREB as a Positive Regulator of Memory Formation and LTP," Experimental Neurobiology, pp. 136-140, 1 September 2012.
[38] Y. S. Lee and A. J. Silva, "The molecular biology of enhanced cognition," Nature Reviews Neuroscience, pp. 126-140, 1 February 2009.
[39] Wikipedia, "Wikipedia," [Online]. Available: https://en.wikipedia.org/wiki/Brain-derived_neurotrophic_factor#Function. [Accessed 1 September 2018].
[40] UniProtKB, "UniProt," [Online]. Available: https://www.uniprot.org/uniprot/Q63053. [Accessed 9 September 2018].
[41] Wikipedia, "Wikipedia," [Online]. Available: https://en.wikipedia.org/wiki/Activity-regulated_cytoskeleton-associated_protein. [Accessed 9 September 2018].
[42] J. F. Guzowski, G. L. Lyford, G. D. Stevenson, F. P. Houston, J. L. McGaugh, P. F. Worley and C. A. Barnes, "Inhibition of activity-dependent Arc protein expression in the rat hippocampus impairs the maintenance of long-term potentiation and the consolidation of long-term memory," The Journal of Neuroscience, pp. 3993-4001, 1 June 2000.
[43] M. Chistiakova, N. M. Bannon, J. Y. Chen, M. Bazhenov and M. Volgushev, "Homeostatic role of heterosynaptic plasticity: models and experiments," Frontiers in Computational Neuroscience, 13 July 2015.
[44] F. Zenke, E. Agnes and W. Gerstner, "Formation and recall of cell assemblies in recurrent networks of spiking neurons and multiple roles of plasticity," Nature Communications, 21 April 2015.
[45] R. Zaman, "The Revisionist," [Online]. Available: https://therevisionist.org/bio-hacking/neurotransmitters-vs-neuromodulators/. [Accessed 7 September 2018].
[46] F. Nadim and D. Bucher, "Neuromodulation of Neurons and Synapses," Current Opinion in Neurobiology, pp. 48-56, December 2014.
[47] J. Chen, P. Lonjers, C. Lee, M. Chistiakova, M. Volgushev and M. Bazhenov, "Heterosynaptic plasticity prevents runaway synaptic dynamics," Journal of Neuroscience, pp. 15915-15929, 2 October 2013.


[48] F. Zenke, "Memory formation and recall in spiking neural networks," EPFL, Lausanne, 2016.
[49] W. Erlhagen and E. Bicho, "The dynamic neural field approach to cognitive robotics," Journal of Neural Engineering, pp. 36-54, 27 June 2006.
[50] S. Amari, "Dynamic pattern formation in lateral-inhibition type neural fields," Biological Cybernetics, pp. 77-87, September 1977.
[51] S. Coombes, "Scholarpedia," 2006. [Online]. Available: http://www.scholarpedia.org/article/Neural_fields#Physiological_motivation. [Accessed 17 October 2018].
[52] T. Dettmers, "NVIDIA Developer," 7 March 2016. [Online]. Available: https://devblogs.nvidia.com/deep-learning-nutshell-sequence-learning/. [Accessed 19 October 2018].
[53] A. Barrera, "Robot Topological Mapping and Goal-Oriented Navigation based on Rat Spatial Cognition," in Mobile Robots Navigation, Mexico: InTech, 2010, pp. 535-561.
[54] J. O'Keefe and J. Dostrovsky, "The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat," Brain Research, vol. 34, no. 1, pp. 171-175, November 1971.
[55] A. Arleo and W. Gerstner, "Spatial Cognition and neuro-mimetic navigation: a model of hippocampal place cell activity," Biological Cybernetics, vol. 83, pp. 287-299, 20 March 2000.
[56] R. M. Yoder and J. S. Taube, "The vestibular contribution to the head direction signal and navigation," Frontiers in Integrative Neuroscience, vol. 8, no. 32, 22 April 2014.
[57] N. Spruston, "Scholarpedia," 2009. [Online]. Available: http://www.scholarpedia.org/article/Pyramidal_neuron. [Accessed 5 August 2018].
[58] J. Jacobs, M. J. Kahana, A. D. Ekstrom, M. V. Mollison and I. Fried, "A sense of direction in human entorhinal cortex," Proc. Natl. Acad. Sci. USA, vol. 107, no. 14, pp. 6487-6492, 6 April 2010.
[59] H. Cui and R. A. Andersen, "Different Representations of Potential and Selected Motor Plans by Distinct Parietal Areas," Journal of Neuroscience, vol. 31, no. 49, 7 December 2011.


[60] P. G. Cox and N. Jeffery, "Semicircular canals and agility: the influence of size and shape measures," Journal of Anatomy, vol. 216, no. 1, pp. 37-47, January 2010.
[61] G. F. Xavier and V. C. Costa, "Dentate gyrus and spatial behavior," Prog Neuropsychopharmacol Biol Psychiatry, vol. 33, no. 5, pp. 762-773, 1 August 2009.
[62] F. Mannella, K. Gurney and G. Baldassarre, "The nucleus accumbens as nexus between values and goals in goal-directed behavior: a review and hypothesis," Frontiers in Behavioral Neuroscience, 23 October 2013.
[63] S. Schwerin, "Brainconnection," 5 March 2013. [Online]. Available: https://brainconnection.brainhq.com/2013/03/05/the-anatomy-of-movement/. [Accessed 7 August 2018].
[64] S. Meikle, N. A. Thacker and R. B. Yates, "A computational model for learning to navigate in an unknown environment," in IEE Colloquium on Application of Machine Vision, London, 1995.
[65] P. Corke and M. Milford, "Robotics@QUT," 21 October 2013. [Online]. Available: https://wiki.qut.edu.au/display/cyphy/RatSLAM#RatSLAM-MajorPublications. [Accessed 20 September 2018].
[66] S. Riisgaard and M. R. Blas, "SLAM for Dummies: A Tutorial Approach to Simultaneous Localization and Mapping," 2005.
[67] M. J. Milford, G. F. Wyeth and D. Prasser, "RatSLAM: A Hippocampal Model for Simultaneous Localization and Mapping," in IEEE International Conference on Robotics and Automation, New Orleans, LA, 2004.
[68] J. Rennie, "Quanta Magazine," 9 May 2018. [Online]. Available: https://www.quantamagazine.org/artificial-neural-nets-grow-brainlike-navigation-cells-20180509/. [Accessed 14 October 2018].
[69] M. Milford, D. Prasser and G. Wyeth, "Experience Mapping: Producing Spatially Continuous Environment Representations using RatSLAM," 2005.
[70] Y. Sandamirskaya and G. Schöner, "Serial Order in an acting system: a multidimensional dynamic neural fields implementation," in 2010 IEEE 9th International Conference on Development and Learning, Ann Arbor, Michigan, 2010.
[71] Y. Sandamirskaya and G. Schöner, "Dynamic Field Theory of Sequential Action: A Model and its Implementation on an Embodied Agent," in 7th IEEE International Conference on Development and Learning, Monterey, California, 2008.


[72] E. Billing, R. Lowe and Y. Sandamirskaya, "Simultaneous planning and action: neural-dynamic sequencing of elementary behaviors in robot navigation," Adaptive Behavior, vol. 23, no. 5, pp. 243-264, 24 September 2015.
[73] J. Lins and G. Schöner, "A Neural Approach to Cognition Based on Dynamic Field Theory," in Neural Fields: Theory and Applications, Springer Berlin Heidelberg, 2014, pp. 319-339.
[74] Y. Sandamirskaya, "Dynamic Neural Fields as a Step Towards Cognitive Neuromorphic Architectures," Frontiers in Neuroscience, vol. 7, p. 276, 2013.
[75] O. Lomp, M. Richter, S. K. U. Zibner and G. Schöner, "Developing Dynamic Field Theory Architectures for Embodied Cognitive Systems with cedar," Frontiers in Neurorobotics, 2 November 2016.
[76] Wikipedia, "Wikipedia," [Online]. Available: https://en.wikipedia.org/wiki/Neural_oscillation. [Accessed 11 October 2018].
[77] NeurotechEdu, "neurotechedu," [Online]. Available: http://learn.neurotechedu.com/oscillations/. [Accessed 11 October 2018].
[78] N. Cuperlier, M. Quoy, P. Gaussier and C. Giovannangeli, "Navigation and planning in an unknown environment using vision and a cognitive map," in Springer Tracts in Advanced Robotics, vol. 22, Berlin, Heidelberg: Springer-Verlag, 2005, pp. 129-142.
[79] W. Gerstner, W. M. Kistler, R. Naud and L. Paninski, "Nonlinear Integrate-and-Fire Neurons," in Neuronal Dynamics Online Book, Cambridge: Cambridge University Press, 2014, ch. 5.
[80] A. Robins, "Sequential learning in neural networks: A review and a discussion of pseudorehearsal based methods," Intelligent Data Analysis, vol. 8, no. 3, pp. 301-322, 15 September 2004.
[81] G. Deco and E. T. Rolls, "Sequential memory: A putative neural and synaptic dynamic mechanism," Journal of Cognitive Neuroscience, vol. 17, no. 2, pp. 294-307, February 2006.
[82] S. Monteiro and E. Bicho, "A Dynamical Systems Approach to Behavior-Based Formation Control," in Proceedings of the 2002 IEEE International Conference on Robotics and Automation, Washington, D.C., 2002.
[83] Y. Sandamirskaya, "Dynamic neural fields as a step toward cognitive neuromorphic architectures," Frontiers in Neuroscience, 14 January 2014.


[84] J. O'Keefe and L. Nadel, The hippocampus as a cognitive map, Oxford University Press, 1978.
[85] S. K. U. Zibner, J. Tekülve and G. Schöner, "The neural dynamics of goal-directed arm movements: a developmental perspective," in ICDL-EPIROB, 2015.
[86] G. Schöner, C. Faubel, E. Dineva and E. Bicho, "Embodied Neural Dynamics," in Dynamic Thinking: A Primer on Dynamic Field Theory, Oxford University Press, 2016, pp. 95-118.


Table of Exhibits

Exhibit 1. Simplified structure of a neuron 7
Exhibit 2. Evolution of an action potential 10
Exhibit 3. Propagation of an action potential 11
Exhibit 4. Exocytosis 13
Exhibit 5. RNN 15
Exhibit 6. CNN 16
Exhibit 7. Dirac delta function 17
Exhibit 8. Asymptotic curve STDP 20
Exhibit 9. Hodgkin-Huxley model 21
Exhibit 10. LTP and LTD at hippocampal CA1 synapses 27
Exhibit 11. Diagram of the rat hippocampus 34
Exhibit 12. The hippocampal network 35
Exhibit 13. A simplified anatomical model 37
Exhibit 14. HD cells 41
Exhibit 15. Path integration 43
Exhibit 16. Grid cells 44
Exhibit 17. Bistability 50
Exhibit 18. Serial order system 51
Exhibit 19. Action field and preshapes 53
Exhibit 20. Robot simulation with Gazebo 57
Exhibit 21. Local view cells in NEST 59
Exhibit 22. Hue extraction in NEST 60
Exhibit 23. Overview DFT architecture 62
Exhibit 24. Comparison of a rodent brain and the architecture 64
Exhibit 25. Serial order structure 65
Exhibit 26. Perception system 66
Exhibit 27. Kinematics 68
Exhibit 28. Experience map 70
Exhibit 29. Condition of satisfaction system 71
Exhibit 30. Robotic arm 72
Exhibit 31. Parameter tuning 73
Exhibit 32. Processing 74
Exhibit 33. Neural node 75
Exhibit 34. Activity in nodes 75
Exhibit 35. One-dimensional field 76
Exhibit 36. Two-dimensional field 77
Exhibit 37. Activity in 2D neural fields 78
Exhibit 38. Three-dimensional fields 79


Table of QR Codes

1. Brian tutorials (p. 57): HTTPS://WWW.DROPBOX.COM/SH/3O8ORK7OUECFDDT/AAB4ZH43FRPE8GORMEVT-3C3A?DL=0
2. Computational model 1 (p. 57): HTTPS://WWW.DROPBOX.COM/HOME/AN%20ARTIFICIAL%20COGNITIVE%20SYSTEM%20FOR%20AUTONOMOUS%20NAVIGATION/COMPUTATIONAL%20MODELS?PREVIEW=MODEL1.PDF
3. Computational model 2 (p. 59): HTTPS://WWW.DROPBOX.COM/HOME/AN%20ARTIFICIAL%20COGNITIVE%20SYSTEM%20FOR%20AUTONOMOUS%20NAVIGATION/COMPUTATIONAL%20MODELS?PREVIEW=MODEL2.PDF
4. Full architecture (p. 61): HTTPS://WWW.DROPBOX.COM/HOME/AN%20ARTIFICIAL%20COGNITIVE%20SYSTEM%20FOR%20AUTONOMOUS%20NAVIGATION/DFT%20ARCHITECTURE?PREVIEW=FULL_ARCHITECTURE1.PNG
5. Serial order (p. 65): HTTPS://WWW.DROPBOX.COM/HOME/AN%20ARTIFICIAL%20COGNITIVE%20SYSTEM%20FOR%20AUTONOMOUS%20NAVIGATION/DFT%20ARCHITECTURE?PREVIEW=SERIAL_ORDER.MP4
6. Activation 2D field (p. 78): HTTPS://WWW.DROPBOX.COM/HOME/AN%20ARTIFICIAL%20COGNITIVE%20SYSTEM%20FOR%20AUTONOMOUS%20NAVIGATION/DFT%20ARCHITECTURE?PREVIEW=ACTIVATION_POSITION_GREEN_2D-3D.MP4
7. Serial order – CoS (p. 82): HTTPS://WWW.DROPBOX.COM/HOME/AN%20ARTIFICIAL%20COGNITIVE%20SYSTEM%20FOR%20AUTONOMOUS%20NAVIGATION/RESULTS?PREVIEW=SERIAL_COS.MP4
8. Serial order – action field (p. 83): HTTPS://WWW.DROPBOX.COM/HOME/AN%20ARTIFICIAL%20COGNITIVE%20SYSTEM%20FOR%20AUTONOMOUS%20NAVIGATION/RESULTS?PREVIEW=ACTION_ORD1-ORD2.MP4
9. Color green (p. 86): HTTPS://WWW.DROPBOX.COM/HOME/AN%20ARTIFICIAL%20COGNITIVE%20SYSTEM%20FOR%20AUTONOMOUS%20NAVIGATION/RESULTS?PREVIEW=COLOR_GREEN_POSITION.MP4
10. Movement plan (p. 91): HTTPS://WWW.DROPBOX.COM/HOME/AN%20ARTIFICIAL%20COGNITIVE%20SYSTEM%20FOR%20AUTONOMOUS%20NAVIGATION/RESULTS?PREVIEW=EXC_OSCILLATOR.MP4
11. Oscillators (p. 91): HTTPS://WWW.DROPBOX.COM/HOME/AN%20ARTIFICIAL%20COGNITIVE%20SYSTEM%20FOR%20AUTONOMOUS%20NAVIGATION/RESULTS?PREVIEW=EXC_OSCILLATOR_INH.MP4
12. Overall performance 1 (p. 93): HTTPS://WWW.DROPBOX.COM/HOME/AN%20ARTIFICIAL%20COGNITIVE%20SYSTEM%20FOR%20AUTONOMOUS%20NAVIGATION/RESULTS?PREVIEW=OVERALL_PERFORMANCE_NO-SEQUENCE.MP4
13. Overall performance 2 (p. 95): HTTPS://WWW.DROPBOX.COM/HOME/AN%20ARTIFICIAL%20COGNITIVE%20SYSTEM%20FOR%20AUTONOMOUS%20NAVIGATION/RESULTS?PREVIEW=OVERALL_PERFORMANCE_COLOR.MP4
14. Functioning brain architecture (p. 95): HTTPS://WWW.DROPBOX.COM/HOME/AN%20ARTIFICIAL%20COGNITIVE%20SYSTEM%20FOR%20AUTONOMOUS%20NAVIGATION/RESULTS?PREVIEW=OVERALL_PERFORMANCE_RED.MP4
15. Error analysis (p. 97): HTTPS://WWW.DROPBOX.COM/HOME/AN%20ARTIFICIAL%20COGNITIVE%20SYSTEM%20FOR%20AUTONOMOUS%20NAVIGATION/RESULTS?PREVIEW=FALSE_PERC_ACT_POS.MP4
16. Memory field (p. 99): HTTPS://WWW.DROPBOX.COM/HOME/AN%20ARTIFICIAL%20COGNITIVE%20SYSTEM%20FOR%20AUTONOMOUS%20NAVIGATION/COMPUTATIONAL%20MODELS?PREVIEW=MEMORY+FIELD.PDF