
MODELLING OF EGOCENTRIC SPACE AND ALLOCENTRIC SPACE

FROM NEURAL ACTIVITY IN BEHAVING MICE

By

SANTIAGO ACOSTA MENDOZA

A senior thesis submitted in partial fulfillment of the requirements for the degree of

ENGINEERING PHYSICS

UNIVERSITAT POLITÈCNICA DE CATALUNYA

Escola Tècnica Superior de Telecomunicacions de Barcelona

JANUARY 2021

ACKNOWLEDGMENT

Many thanks to Dr. Michael J. Goard for being such an incredible mentor to me. He has been absolutely instrumental to all the work presented here. He offered me a place in his laboratory without hesitation and has wisely guided me through the entire scientific process.

Special thanks to William Redman and Kevin Sit: Will, for patiently introducing me to all the experimental and animal techniques, and Kevin, for always having a solution to every technical problem that popped up. Further, I would like to thank the rest of my labmates, Luca Montelisciani, Nora Wolcott, Tyler Marks and Luis Franco, for making me feel at home at all times, even if it was online.

Also, I would like to thank Dr. Antonio J. Pons for his tutoring from abroad and for believing in me in the first instance.

Finally, I would like to thank my housemates for making this mostly stay-at-home experience an unforgettable one. Last but not least, I would like to thank my family for the endless support they have provided me during this tremendously enriching experience.


MODELLING OF EGOCENTRIC SPACE AND ALLOCENTRIC SPACE

FROM NEURAL ACTIVITY IN BEHAVING MICE

Abstract

by Santiago Acosta Mendoza, Universitat Politècnica de Catalunya

January 2021

Director: Michael J. Goard

Co-Director: Antoni J. Pons

Space plays a prominent role in virtually all animal behavior. In order to navigate, several types of neurons involved in spatial navigation encode external space in distinct reference frames, either with respect to the animal (egocentric) or with respect to external cues (allocentric). Here, we sought to characterize the spatial correlates of large neural populations and single cells in awake, behaving mice using fluorescence calcium imaging. First, we coupled a wide-field microscope with a tracking system so that we could systematically record spatial correlations in large-scale dynamics. Using this experimental paradigm, we did not identify any underlying allocentric signal throughout the cortex at the mesoscopic scale. Second, we studied single-cell dynamics in the retrosplenial cortex using two-photon imaging and report neural representations of boundaries in egocentric coordinates. Overall, the current work provides a framework for studying spatial correlates in distinct reference frames in both single-cell and mesoscopic neural activity in behaving mice.


TABLE OF CONTENTS

Page

ACKNOWLEDGMENT . . . . . ii

ABSTRACT . . . . . iii

LIST OF FIGURES . . . . . vi

INTRODUCTION . . . . . 1

AIM OF THE STUDY . . . . . 12

1 NAVIGATION CORRELATES IN DORSAL CORTEX . . . . . 14
1.1 Preface . . . . . 14
1.2 Methods . . . . . 15
1.2.1 Experiment Set-up . . . . . 15
1.2.2 Post-processing . . . . . 19
1.3 Results . . . . . 24
1.3.1 Head-fixed mouse successfully navigates through the cage but in limited ways . . . . . 24
1.3.2 Cortex-wide neural activity is modulated by motion . . . . . 26
1.3.3 Information index as a metric for the amount of information conveyed by neural activity . . . . . 27
1.3.4 Allocentric spatially selective pixels are restricted to the somatosensory dorsal cortex . . . . . 31
1.4 Discussion . . . . . 34

2 EGOCENTRIC BORDER CELL TUNING IN RSC . . . . . 36
2.1 Preface . . . . . 36
2.2 Methods . . . . . 37
2.2.1 Experiment Set-up . . . . . 37
2.2.2 Post-processing . . . . . 38
2.3 Results . . . . . 41
2.3.1 Computation of egocentric and allocentric spatial activity maps . . . . . 41
2.3.2 Most RSC neurons are excitable . . . . . 42
2.3.3 Egocentric boundary vector responsivity of RSC . . . . . 43
2.4 Discussion . . . . . 46

REFERENCES . . . . . 53

LIST OF FIGURES

1 Cells tuned to spatial variables . . . . . 2
2 Cortex architecture . . . . . 6
3 Temporal dynamics of the GCaMP6s calcium indicator . . . . . 9
4 Optical path of a fluorescence microscope . . . . . 10
5 One-photon and two-photon imaging comparison . . . . . 11
1.1 Neurotar Mobile Home Cage . . . . . 17
1.2 Neurotar-Widefield set-up . . . . . 19
1.3 Example of ∆F/F traces, raw and deconvolved . . . . . 22
1.4 Cross-session registration with a common reference . . . . . 24
1.5 Template Matcher GUI . . . . . 25
1.6 Behavior correlates measured by the MHC . . . . . 27
1.7 Cortex-wide activity is modulated by motion . . . . . 28
1.8 Allocentric Spatial Information Index . . . . . 32
1.9 Changes in ∆F/F as a function of allocentric position . . . . . 33
2.1 Two-photon and MobileHomeCage set-up . . . . . 38
2.2 Transformation from an allocentric reference frame to a mouse-centered reference frame . . . . . 43
2.3 Distribution of Pearson's ρ for all neurons . . . . . 45
2.4 Example of the tuning of two cells to egocentric borders . . . . . 46
2.5 Activity maps of three detected egocentric border cells . . . . . 47

While the brain remains a mystery, the universe will continue to be a mystery

Santiago Ramón y Cajal


INTRODUCTION

Neural spatial representations

A major goal of neuroscience research is to understand the circuits and the patterns of

neural activity that give rise to mental processes and representations ultimately leading to

behavior. In that sense, spatial cognition offers us a unique window into such mental representations in animal models. Space plays a prominent role in all types of behavior, given that it is intrinsically attached to the animal's existence. As such, animals' brains have

evolved to carry out higher-level cognitive computations with the capability of associating

locations with behaviorally relevant outcomes, such as being able to locate food sources or

to find shelter.

Then, how does the brain code for space? Is space derived only from sensory impressions? Or is space an innate organizing principle of the mind, through which the world is,

and must be, perceived? The first evidence towards an answer came more than 35 years

ago, when O'Keefe and Dostrovsky1 reported spatial receptive fields in neurons in the rat.

These neurons, called place cells, fired whenever the rat was in a certain place in the local

environment and remained silent elsewhere (Fig 1.A). However, the startling fact about

these cells was that they fired no matter what the orientation of the rat was with respect to

the environment. Furthermore, neighboring place cells fired at different locations such that

the entire environment was represented in the activity of the local cell population.

The discovery of such cells gave rise to a new discipline in neuroscience. Over the years,


multiple neural systems that function to construct and store spatial representations were

discovered. Most of these discoveries were of neurons that exhibited activity when some combination of spatial relationships between the animal and the environment was satisfied. Examples

of these functional cell types are head direction cells2, neurons that would fire whenever the

animal was facing a certain direction in absolute space (Fig 1.C), like a compass, or

grid cells3, neurons that would fire in multiple locations in a specific manner so the firing

fields formed a periodic triangular array or grid (Fig 1.B).

Figure 1. Cells tuned to spatial variables. Adapted from Burgess & Bicanski4. A. Response of a place cell. Top: the firing rate as a function of the animal's location. Notice that when the animal is in the place field illustrated at the bottom, the firing rate increases (warmer colors indicate more activity). B. Same as in A, but now the cell has multiple place fields arranged in a hexagonal structure. C. Head direction cell. The firing rate is depicted as a polar plot. In Room A, the cell fired only when the animal was facing 120°. The experimenters then rotated the cage 90° to the right (Room B) and observed that the cell's preferred direction followed the rotation.

Apart from the ones already discussed, neurons with a variety of spatial responses have

also been reported. From cells that encode the position of some behavioral goal5 (like food

rewards) to cells that fire at selective positions within a route6. Most importantly,

there is experimental evidence of border cells, neurons that fire only when the animal is

close to a boundary of the local environment. These cells, as we will see later on, have been

predicted to be a key element in explaining the formation of place cells, given that place field


locations change as a function of the geometry of the boundaries of the environment7.

It is important to note that although mice and rats are the most widely used animal models in neuroscience and most of the research is done in these rodent species, these single-cell spatial representations have also been found in bats8, monkeys9,10 and humans11,12.

Neural representations of space are encoded in distinct coordinate systems

All spatial representations that we have discussed so far map an animal’s location relative

to properties of the external world. This coordinate system is often referred to as the

allocentric spatial reference frame. Spatially tuned cells within this coordinate reference frame have the property that their firing does not depend on the direction the animal is facing or, more generally, on any particular sensory stimulus. Proof of this is the evidence of place cells showing stable locational firing across multiple light–dark conditions13.

However, all information utilized to construct representations of allocentric space enters

the brain via sensory systems14. Critically, all sensory inputs are encoded with respect to

the animal itself. For example, the auditory hair cells amplify sound waves and transduce

auditory information to the brain stem, which compares the inputs from both ears and determines from which direction the sound is coming with respect to the animal. This form of spatial

information is encoded in egocentric coordinates or body relative coordinates.

Spatial neural representations in the egocentric coordinate frame have recently been

found. These include egocentric object bearing cells15, cells that fire only when an object is at a certain orientation and distance with respect to the animal; egocentric goal-direction

cells16, cells that indicate the location of a goal in egocentric coordinates and egocentric

border cells, cells whose firing field depends on the boundaries of the environment being

positioned in certain angles and directions with respect to the animal17.


Egocentric border cells and their role in the coordinate transformation system

Given the vast amount of evidence of neural representations encoded in remarkably different

coordinate systems, and taking into account that one representation, the allocentric spatial representation, must necessarily arise from the egocentric representation, there has to be a neural mechanism capable of transforming egocentric neural responses into

allocentric responses. Moreover, the reverse transformation must also occur. Motor

actions are naturally encoded in egocentric coordinates. As such, in order to be able to

properly navigate through the environment, neural systems must also have the capability of

transforming the existent allocentric memory into egocentric coordinates.
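The transformation between these reference frames is, at its core, a translation and rotation of world coordinates into the animal's body frame. A minimal sketch in Python (the function name, NumPy usage, and angle conventions are my own illustrative choices, not the thesis' actual code):

```python
import numpy as np

def allocentric_to_egocentric(landmark_xy, animal_xy, heading_rad):
    """Convert an allocentric landmark position to egocentric
    (bearing, distance) coordinates relative to the animal.

    landmark_xy, animal_xy : (x, y) in world (allocentric) coordinates
    heading_rad            : animal heading in radians, 0 = world x-axis
    """
    dx = landmark_xy[0] - animal_xy[0]
    dy = landmark_xy[1] - animal_xy[1]
    distance = np.hypot(dx, dy)
    # Bearing of the landmark relative to the animal's midline,
    # wrapped to [-pi, pi); 0 means straight ahead.
    bearing = np.arctan2(dy, dx) - heading_rad
    bearing = (bearing + np.pi) % (2 * np.pi) - np.pi
    return bearing, distance

# A wall point directly east of an east-facing animal lies dead ahead:
b, d = allocentric_to_egocentric((2.0, 0.0), (0.0, 0.0), heading_rad=0.0)
# b ≈ 0.0 (straight ahead), d = 2.0
```

Applying this conversion to every boundary point of the arena, at every tracked position and heading, is what turns an allocentric occupancy map into an egocentric boundary map of the kind used in Chapter 2.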

In this sense, boundaries are thought to be key elements in this transformation because

they represent an extraordinary intersection between egocentric and allocentric coordinate

systems. It is so because they are at fixed positions that define the space they enclose, while

at the same restricting the egocentric affordances of the animal, such as what can be viewed

or what motor plans can be executed. Theoretical studies18 have proposed the existence of

cells egocentrically tuned to borders, but it is only recently that experimental evidence of such cells has emerged17,19.

The cerebral cortex plays an essential role in navigation

As its name indicates, the cerebral cortex is the outermost layer of the brain. In humans,

it is the part of the brain responsible for higher cognitive abilities, like speech or thought.

Although the human brain is over 1000 times larger in area and number of neurons compared to the mouse brain, the basic architecture appears to be conserved across mammalian

species. So, even if the rodent cerebral cortex is not capable of superior cognitive abilities

such as speech or thought, it is the region responsible for sensory integration, voluntary

movement and most importantly, spatial cognition. Hence, understanding how the mouse

cortex is able to perform computations leading to these traits leads us to a more complete


understanding of our brain, given that these processes are shared between mammalian species

(cerebral cortex is not present in birds or reptiles). The main regions of cerebral cortex are

the hippocampus and the neocortex.

The neocortex, which in turn is the outer layer of the cerebral cortex, consists of six cell layers of different densities, each also containing different cell types (Fig. 2A). Remarkably, in this study,

we recorded activity from whole columns in the case of Chapter 1 and from single cells in

Chapter 2 (more information in the mentioned chapters).

Also, the neocortex consists of various sub-units or areas, each of them performing one

specific function. Their spatial location in top view can be seen in Fig. 2B-C. These areas

include but are not limited to: somatomotor cortex (SMC), the area responsible for motor control; somatosensory cortex (SSC), the area where sensations arising from receptors throughout the body (touch, pain, temperature) are integrated; visual cortex (VC), which receives visual inputs directly from the retina; posterior parietal cortex (PPC), containing multi-modal neurons; and retrosplenial cortex (RSC), whose role has been elusive for years but which, in light of recent evidence, has been hypothesized to be a key element in navigation.

Apart from this neocortical region, in the cerebral cortex we also find the hippocampus and the related hippocampal formation, located just below the superficial cortex. Interestingly, its structure and function are remarkably different compared to other parts of the

cortex. The hippocampus is thought to be a crucial structure in memory consolidation and

spatial navigation. In fact, it is one of the brain regions most affected by Alzheimer’s disease,

a disease characterized by memory loss.

Critically, most of the allocentric representations presented in previous sections have been

found in the hippocampus, such as place cells. It has a crucial role in spatial cognition. For

example, one of the early impairments in Alzheimer's disease, a consequence of the degeneration of the hippocampal neural population, is a deficit in spatial navigation and orientation. Therefore, for decades, the hippocampus has been

considered the spatial cognitive map of the brain20. However, the origin of these allocentric


representations has remained elusive. Necessarily, all allocentric representations must be

derived from egocentric representations. From a connectivity point of view, the retrosplenial

cortex (RSC) is placed in a privileged location in cortex, given that it has strong connections

to sensory association areas and to the hippocampus.

Allocentric representations have mainly been studied in the hippocampus; however, recent studies have identified place cell representations and other related allocentric cell types in other cortical areas, such as somatosensory cortex21, somatomotor cortex22 and posterior parietal cortex23. Thus, in light of these recent discoveries, the traditional notion of the hippocampus as the only spatial navigation system is disputed.

Figure 2. Cortex architecture. A. Original drawing of the different cortical cell layers by the father of neuroscience, Santiago Ramón y Cajal. Layers are ordered from I to VI, with I the top layer. Note that layer I contains few neuron bodies. Layers I–III mainly have intracortical connections, layer IV receives thalamocortical connections, and layers V–VI primarily connect the cerebral cortex with subcortical areas. B. GCaMP6s expression throughout the dorsal cortex (viewed from above) of a mouse. Note that the two hemispheres are clearly distinguishable. C. Parcellation of B with the names of the principal cortical areas. Adapted from24.


In vivo calcium imaging of neural activity

Many different techniques that allow us to measure neurons' electrical activity have been developed. However, in order to decode patterns of neural activity and see how they are related to the animal's behavior, hundreds of neurons, not just an individual neuron, need to be recorded simultaneously. Moreover, if we want to decode single cell responses, we also need measuring methods that distinguish activity from different neurons. As neurons in cortex average around 20 micrometres in length, it is essential for our measuring method to reach a spatial resolution of that order of magnitude.

Neurons of the central nervous system interact primarily with action potentials or “spikes”,

which are stereotyped electrical impulses that last up to 1 ms. In the cortex, information is

usually transmitted via bursts of spikes that last hundreds of milliseconds. Therefore, our measuring method also needs enough temporal resolution to detect these

individual electrical events. Last of all, we want to measure the activity of animals while they

are behaving and not just in vitro, so our method needs to be adjusted to such experimental

conditions.

The method that meets these requirements, and more, is calcium imaging; it

is the main experimental method used in this project. In order to give a general idea, the

first part of this section will be devoted to explaining how calcium imaging works and how it

enables us to measure neural activity. Then, we will aim to describe the specific tools and

imaging techniques that we will be using throughout this project: wide-field microscopy and two-photon microscopy.

Calcium Imaging

Calcium ions (Ca2+) are involved in a number of processes in virtually every cell type in

biological organisms. Some examples include the control of the contraction in heart muscle

cells, the regulation of vital aspects of the entire cell cycle and their vital role in signal


transduction pathways. Specifically, calcium ions play an important role in the mammalian

neurons as an essential intracellular messenger. In the presynaptic terminal, calcium influx

triggers the release of neurotransmitter-containing synaptic vesicles. Postsynaptically, a rise

of the calcium level in dendritic spines is essential for the induction of activity-dependent

synaptic plasticity. It is important to note that calcium influx is the first step of neuronal activation, rising rapidly with the action potential and decaying slowly afterwards, which makes it a suitable proxy for neural activity. So if we can monitor the dynamics of cellular calcium, we will be

able to infer the neurons’ dynamics as we will discuss later on.
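In practice, calcium dynamics are quantified as the relative fluorescence change ΔF/F, the deviation of the raw trace from a slowly varying baseline F0. A minimal sketch of one common estimator, a sliding-percentile baseline (the window length and percentile here are illustrative assumptions, not the parameters used in this project):

```python
import numpy as np

def delta_f_over_f(trace, window=300, percentile=10):
    """Compute dF/F for a 1-D fluorescence trace.

    The baseline F0 at each sample is a low percentile of the
    fluorescence within a sliding window, so slow drift is tracked
    while brief calcium transients are excluded from the baseline.
    """
    trace = np.asarray(trace, dtype=float)
    n = len(trace)
    f0 = np.empty(n)
    for i in range(n):
        lo = max(0, i - window // 2)
        hi = min(n, i + window // 2 + 1)
        f0[i] = np.percentile(trace[lo:hi], percentile)
    return (trace - f0) / f0

# A flat trace at 100 with a brief transient to 150 yields a ~50% dF/F peak.
trace = np.full(1000, 100.0)
trace[500:520] += 50.0
dff = delta_f_over_f(trace)
```

The resulting dimensionless trace is what figures such as Fig. 1.3 plot, and what deconvolution methods then convert into estimates of underlying spiking.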

The technique that allows us to optically measure the calcium concentration of an isolated cell, tissue or medium is called calcium imaging. This microscopy approach takes advantage of calcium indicators, fluorescent molecules that respond to the binding of Ca2+ ions by changing their fluorescent properties. The most widely used is the Green Fluorescent Protein (GFP), a protein encoded in the DNA of the jellyfish Aequorea victoria.

The type of calcium indicators used in this project are genetically encoded calcium indicators (GECIs), meaning that the DNA of the indicators is integrated into the mouse genome. GECIs are incorporated into the mouse genome by means of in utero electroporation, which uses an electric field to drive the negatively charged DNA encoding the indicators into the cells. These indicators are targeted to specific brain areas and cell types. In our case, we make use of the GCaMP6s calcium indicator (see Fig. 3).

Fluorescent microscopy approaches: one-photon and two-photon imaging

The underlying goal behind fluorescence microscopy is to detect the spontaneous emission of

light during the relaxation of a molecule (in our case, the calcium indicator) from an excited

state to its ground state. In order for our calcium indicator to emit a photon, we have to

excite it with the right amount of energy so that it reaches the first electronic excited

state. The optical path that the light follows within the microscope is described in Fig. 4.

In this study, the fluorescence molecule used (GCaMP6s) has the major excitation peak at


Figure 3. Temporal dynamics of the GCaMP6s calcium indicator. Adapted from25. A. Simultaneous fluorescence dynamics and spikes in a GCaMP6s-expressing neuron. The number of spikes in each burst is indicated below the trace (single spikes are indicated by asterisks). B. Zoomed-in view of bursts of action potentials. C. Fluorescence change in response to one action potential.

a wavelength of 470 nm. The emitted photon exhibits an increase in wavelength with respect to the excitation photon, an effect known as the Stokes shift. As a result, the emitted light is shifted and is seen as green.

In one-photon microscopy, the fluorescence molecule is excited by photons within the

excitation spectra (visible blue light). In this modality, the entire sample is exposed to

the excitation light, allowing us to image across wide fields of view. However, we trade

the large fields of view for spatial resolution, given that we are not able to capture single

cell dynamics. This is due to absorption phenomena in brain tissue and scattering effects.

Another downside is that we are also not able to image deeply through brain tissue and can

only resolve surface dynamics. However, it is a powerful tool to resolve activity across wide

dorsal cortical areas simultaneously.

The scattering and absorption effects can be solved using two-photon excitation instead of

one-photon excitation. In this case, the fluorescence molecule is excited by the simultaneous

absorption of two photons, each less energetic than in one-photon excitation (Fig 5). The advantage of this approach is that it lets us use excitation light of longer wavelength,

thus reducing the scattering and penetration deficiencies of one-photon imaging. However,

the nonlinear process governing signal generation requires immense power densities on the

order of MW/cm2. Power densities of this magnitude are only reached at the focal plane of


Figure 4. Optical path of a fluorescence microscope. First, an excitation filter cube containing a dichroic mirror directs excitation light (from a bulb or laser, filtered with an excitation filter) to the specimen. Then, fluorescence emitted from the specimen goes to the emission cube for further separation. On the way, the barrier filter B prevents excitation light from reaching the detectors. Emitted fluorescence is separated by DM 2 into two beams. Finally, emission filters (Em 1 and Em 2) block unwanted light and the fluorescence is detected by cameras (wide-field) or PMTs (two-photon).

the objective lens, confining the observable signal to the plane of focus. Confining the energy

to the focal plane is a great advantage, as it greatly reduces photo-damage by eliminating

signals above or below the focus. But this high peak power could be harmful for live tissue,

so femto-lasers are required to maintain a low average power as well. Two-photon microscopy

thus has two main advantages: because of the reduced scattering and absorption, it allows us to image deeper in brain tissue while resolving single cell neural dynamics.
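The energy bookkeeping can be checked with E = hc/λ: two photons at roughly twice the one-photon excitation wavelength jointly carry the same energy as one photon at the excitation peak. A quick sketch (the 470 nm value is the GCaMP6s peak quoted above; the ~940 nm two-photon wavelength is an assumption for illustration):

```python
# Photon energy E = h * c / wavelength
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s

def photon_energy_J(wavelength_nm):
    """Energy of a single photon of the given wavelength, in joules."""
    return H * C / (wavelength_nm * 1e-9)

one_photon = photon_energy_J(470)       # one blue photon at the 1P peak
two_photon = 2 * photon_energy_J(940)   # two near-infrared photons
# The two lower-energy photons sum to the same excitation energy,
# which is why two-photon excitation reaches the same excited state.
```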


Figure 5. One-photon and two-photon imaging comparison. A, B. Jablonski diagrams of one-photon (A) and two-photon (B) excitation, which occurs when the fluorescent molecule absorbs the excitation radiation in the form of one photon or two equivalent photons of lower energy. Adapted from26. C, D. Scattering effects are greater in one-photon excitation (C) than in two-photon laser excitation (D).


AIM

Since the discovery of the place cell in 1971 by John O'Keefe, neuroscientists have deepened their knowledge of the ways the brain stores and represents spatial information. Detailed electrophysiological studies combined with behavioral studies have identified an increasing number of different single-cell neural spatial representations, mainly allocentric representations; recently, representations of space in egocentric coordinates have also been reported.

In the first chapter, we sought to explore the mesoscopic dynamics of allocentric spatial representations. Traditionally, spatial representations in the brain have been largely associated with the hippocampus and its adjacent areas. However, it has remained unresolved whether these representations are widely distributed over the cortex or present only in the

hippocampus. As the animal needs the spatial information stored in order to successfully

navigate, we hypothesize that there must be a wide-spread spatial allocentric signal present

in the mouse cortex. Electrophysiology, via implanted electrodes in the brain, has been

successful in characterizing single-cell responses while the animal is freely navigating in a

closed environment. However, electrophysiology approaches are not sufficient for exploring

large-scale neural dynamics throughout the cortex simultaneously. In this sense, calcium

wide-field imaging opens the possibility of recording neural dynamics across a vast range of

cortical areas while the animal is behaving. Yet this approach has limitations, given that

it requires that mice have their heads fixed during recording sessions. Navigation, trivially,

requires free exploration of the environment. In order to overcome this limitation, I aim to

design and conduct experiments coupling the wide-field microscope setting with the MobileHomeCage, a flat-floored air-lifted platform that allows relatively free movement even when the head is fixed. The wide-field imaging technique does not resolve single cell dynamics, but each captured pixel represents a relatively small set of neurons, on the order of hundreds.

Therefore, we need high spatial resolution, with the inconvenience that high spatial resolution comes at the expense of large volumes of experimental data. Hence, I also aim to design the

computational methods necessary to process recordings of high pixel densities at high frame

rates over long time-scales, and to do so in a timely and robust manner.
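One simple way to keep such recordings tractable, sketched here as an illustration rather than the pipeline's actual implementation, is spatial binning: averaging non-overlapping blocks of neighboring pixels so each frame shrinks by the square of the bin factor while per-pixel noise is reduced.

```python
import numpy as np

def bin_frames(frames, bin_size=2):
    """Spatially bin a (time, height, width) image stack by averaging
    non-overlapping bin_size x bin_size pixel blocks.
    Height and width must be divisible by bin_size."""
    t, h, w = frames.shape
    binned = frames.reshape(t, h // bin_size, bin_size,
                            w // bin_size, bin_size)
    # Average over the two within-block axes (2 and 4).
    return binned.mean(axis=(2, 4))

# A 100-frame, 512x512 recording becomes 100 frames of 256x256,
# a 4x reduction in data volume per frame.
stack = np.random.rand(100, 512, 512).astype(np.float32)
small = bin_frames(stack, bin_size=2)
```

Because each output pixel already aggregates a neighborhood of raw pixels, and each raw wide-field pixel itself pools hundreds of neurons, binning sacrifices little biologically meaningful resolution at the mesoscopic scale.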

The second chapter is centered on the question of how the allocentric and egocentric

spatial representations interact with each other. It has been proposed by a number of theoretical studies that the transformation circuit that allows this conversion of reference frames

requires the existence of egocentric border cells, neurons that respond only when boundaries

are placed in a certain direction and orientation with respect to the animals. The candidate

cortical region that has been proposed to perform this computation is the retrosplenial cortex, given that it has unique connectivity both with the sensory cortices, which intrinsically contain egocentric representations, and with the hippocampus, which has been shown to store most

of the allocentric spatial representations. Recent studies have demonstrated the presence of

such type of neurons in the retrosplenial cortex. However, these studies have been achieved

by means of electrophysiology. The caveat of electrophysiology is that only a handful of

neurons can be recorded each session. As such, recordings tend to be biased toward highly active neurons. In the second chapter of this thesis, I aim to characterize the egocentric response

to walls of neurons in the retrosplenial cortex using two-photon microscopy. This method will allow

us to explore the dynamics of hundreds of neurons simultaneously, thus letting us know how

this representation is encoded on a population basis.


Chapter One

NAVIGATION CORRELATES IN

DORSAL CORTEX

1.1 Preface

The first step in characterizing the spatial information of mesoscopic cortex-wide activity is

to set up a method that can effectively emulate navigation in a real-life environment while

recording large-scale neural dynamics at high spatial resolutions for long periods of time. Further, such dynamics need to be compared across sessions and across animals. To this end, I put together a procedure that can approximate real-life navigation while recording neural activity across cortex. I developed a computational pipeline

that incorporated and processed fluorescence data from the microscope along with the behavior variables. All the activity was registered to a common reference and divided into

regions using a user-guided interface in order to enable cross-session and cross-animal analysis. Activity data correlated with navigational variables agree with already published data, confirming that our procedure is a good alternative for studying navigational neural correlates

in mice using calcium imaging. I then derived a spatial information index to compute the

rate of information conveyed by neural activity for each pixel. Last, I computed the rate

of information encoded in each pixel and found that there is not any underlying allocentric

14

signal, at least in the mesoscopic scale.

1.2 Methods

1.2.1 Experiment Set-up

Animals

For cortex-wide calcium indicator expression, an Emx1-Cre (Jax Stock 005628) x ROSA-LNL-tTA (Jax Stock 011008) x TITL-GCaMP6s (Jax Stock 024104) triple transgenic mouse (n = 1) was bred to express GCaMP6s in cortical excitatory neurons. When the mouse was six to twelve weeks old, it was implanted with a head plate and cranial window, and imaging started 2 weeks after recovery from the surgical procedures and continued up to 10 months after window implantation. The animal was housed on a 12 h light/dark cycle, in cages of up to 5 animals before the implants and individually after the implants. All animal procedures were approved by the Institutional Animal Care and Use Committee at UC Santa Barbara.

Head-plate Surgeries

All surgeries were conducted by Michael Goard under isoflurane anesthesia (3.5% induction, 1.5-2.5% maintenance). Prior to incision, the scalp was infiltrated with lidocaine (5 mg kg-1, subcutaneous) for analgesia, and meloxicam (1 mg kg-1, subcutaneous) was administered pre-operatively to reduce inflammation. Once the mouse was anesthetized, the scalp overlying the dorsal skull was sanitized and removed, and the periosteum was removed with a scalpel. The skull was polished with a conical rubber polishing bit, coated with cyanoacrylate (Loctite 406), and overlaid with an 8-10 mm round coverglass. A cranial window was implanted over the craniotomy and sealed first with silicone elastomer (Kwik-Sil, World Precision Instruments), then with dental acrylic (C&B-Metabond, Parkell) mixed with black ink to reduce light transmission. The cranial window was made of two rounded pieces of coverglass (Warner Instruments) bonded with a UV-cured optical adhesive (Norland, NOA61). The bottom coverglass (4 mm) fit tightly inside the craniotomy, while the top coverglass (5 mm) was bonded to the skull using dental acrylic. A custom-designed stainless steel head plate (eMachineShop.com) was then affixed using dental acrylic. After surgery, the mouse was administered carprofen (5-10 mg kg-1, oral) every 24 h for 3 days to reduce inflammation.

Head-Fixing Tracking Apparatus: The Mobile Home Cage

In order to optically image the cortical activity of mice while they navigate a familiar environment, we used the Mobile HomeCage (MHC; Neurotar Ltd, Finland) tracking system, in which animals have their heads fixed to an aluminum frame but are otherwise free to move inside an ultralight carbon container floating above an air-dispenser base. Although the head of the mouse is fixed and cannot move, the mouse's body movements make the carbon container slide on the air-dispenser base, allowing the mouse to explore the environment.

Navigation variables are tracked using a sensor board (located inside the MHC's air dispenser), a mat with magnets (placed inside the floating cage), a control unit and open-source software. The sensors detect the movement of the magnets, and the processor in the control unit calculates the cage coordinates in real time and streams the data to the computer software, which converts the cage movements into the animal's trajectory relative to the cage.

The processing unit is also equipped with a digital TTL I/O interface. These digital ports can be used to trigger the start and end of data acquisition. Therefore, in order to synchronize the data acquisition of the MHC and the wide-field microscope, a voltage pulse train was generated via MATLAB and sent to the MHC and microscope control units using a digital-analog converter, model NIDAQ BNC-2110 (National Instruments Corporation).

The carbon fiber container is a disk with a 12.5 cm radius and a 68 g weight. Given that it has been previously shown that flat walls with edges evoke stronger spatial responses than round walls, a set of inner walls was created using nylon plastic sheets so that the container walls formed a pentagon. This shape was chosen to maximize the cage's space without requiring too many different pieces. Although mouse stress (measured by corticosterone levels) is greatly reduced after habituation to head fixation [27], stress levels remain higher than in normal conditions. This higher stress can produce thigmotactic behavior (a tendency to remain close to the walls instead of exploring the whole environment). To reduce this tendency, two objects were placed in the arena so that the mice had to go around them. As the cage has to be cleaned with ethanol after each use in order to remove olfactory cues, the walls and objects had to be removed each time, so a template was designed so that they could be placed in the exact same position every imaging session.

Figure 1.1 Neurotar Mobile Home Cage. A. Bird's eye view of the head-fixed mouse along with the custom-created arena. B. Front view. C. The carbon fiber cage is positioned on top of the air dispenser, so that when the air flow is on, the cage slides over it.

Behavioral Training

Before imaging, mice need to be habituated to the apparatus, given that when mice feel threatened they remain still and do not move. Training consisted of four phases. First, the mice were acclimated to the experimental room by placing the cage, unopened, in the room for half an hour once a day for two or three days. The second phase consisted of getting them used to the experimenter: they were handled until they were no longer afraid and did not try to escape when held on an open hand. This phase lasted at least 3-4 days, in some cases even longer. The third phase had the goal of habituating them to the cage and to being head-fixed. For a couple of days, they were placed inside the cage so that they could explore it. Once they were used to it, they were head-fixed for 5 minutes during 3-4 days while freely running around. At this point, the air pressure was chosen so that the cage floated 1-2 mm above the air dispenser. Finally, mice were placed in the cage for 20 minutes once a day until they spent approximately 15% of the time moving. This last phase usually took a week, with the whole process being roughly two weeks long. After this period, the mice were ready to be imaged.

Wide-field Imaging

GCaMP6s fluorescence was imaged using a custom wide-field epifluorescence microscope. The full specifications and parts list can be found on our institutional lab website (labs.mcdb.ucsb.edu/goard/michael/content/resources). In brief, broad-spectrum (400-700 nm) LED illumination (Thorlabs, MNWHL4) was band-passed at 469 nm (Thorlabs, MF469-35) and redirected through a dichroic mirror (Thorlabs, MD498) to the microscope objective (Olympus, MVPLAPO 2XC). Green fluorescence from the imaging window passed through the dichroic and a bandpass filter (Thorlabs, MF525-39) to a scientific CMOS camera (PCO-Tech, pco.edge 4.2). Images were acquired at 400x400 pixels with a field of view of 4.0x4.0 mm, leading to a pixel size of 0.01 mm per pixel. A custom light blocker affixed to the head plate prevented light from the visual stimulus monitor from entering the imaging path. The camera was triggered with a binary pulse train consisting of high levels of 3 V and low levels of 0 V. High levels, which triggered the camera exposure, had a duration of 75 ms, while low levels, which stopped the exposure, had a duration of 25 ms. The acquisition frame rate was therefore 10 fps.
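The trigger timing above can be sketched as a sampled waveform. This is an illustrative Python reconstruction (the thesis generated the pulse train in MATLAB and output it through an NI DAQ); the 1 kHz sample rate is an assumption for the sketch, not a parameter from the original setup.

```python
import numpy as np

def trigger_train(n_frames, high_ms=75, low_ms=25, amp_v=3.0, fs=1000):
    """Sampled TTL-like pulse train: high_ms at amp_v, then low_ms at 0 V."""
    high = np.full(int(high_ms * fs / 1000), amp_v)
    low = np.zeros(int(low_ms * fs / 1000))
    return np.tile(np.concatenate([high, low]), n_frames)

train = trigger_train(n_frames=3)
frame_period_s = len(train) / 3 / 1000   # 75 + 25 ms = 0.1 s per frame -> 10 fps
```

One full period is 100 ms (75 ms exposure plus 25 ms gap), which is what sets the 10 fps acquisition rate described above.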

Each mouse was imaged for 4-5 sessions of 20 minutes each. Before imaging, a reference image of the whole window was taken so that all sessions were approximately centered and acquired at the same magnification, allowing for better alignment later on.

Figure 1.2 Neurotar-Widefield set-up. Schematic of the custom epifluorescence wide-field microscope for in vivo GCaMP6s imaging while the mouse behaves in the Mobile HomeCage.

1.2.2 Post-processing

All code in the following sections, unless noted otherwise, was developed entirely by the author using MATLAB R2020a (Mathworks Inc) and can be found at https://github.com/s-acosta

MHC data post-processing

The .tdms files generated by the MHC were converted to MATLAB structures. The software interpolates the position of the walls so that, even though the mouse itself is not moving, the relative position of the mouse with respect to the walls can be computed. Apart from position, other useful behavioral output variables are the instantaneous speed and the angular displacement of the cage. This latter variable is trivially related to the mouse's head direction at any given time. The MHC processing unit sampled at 50 Hz, so the signal had to be downsampled to 10 Hz in order to match the microscope's frames.
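This downsampling step can be sketched in a few lines (Python here, standing in for the MATLAB pipeline). Block-averaging groups of 5 samples maps the 50 Hz stream onto the 10 Hz frame clock; the exact resampling method used in the thesis is not specified, so averaging is an assumption of this sketch.

```python
import numpy as np

def downsample_mean(signal, factor=5):
    """Average consecutive groups of `factor` samples (50 Hz -> 10 Hz here)."""
    n = len(signal) // factor * factor          # drop any incomplete tail
    return signal[:n].reshape(-1, factor).mean(axis=1)

speed_50hz = np.arange(20, dtype=float)         # toy 50 Hz behavior trace
speed_10hz = downsample_mean(speed_50hz)        # -> [2.0, 7.0, 12.0, 17.0]
```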

Extraction of fluorescent traces

The images acquired with the microscope report the fluorescence levels beneath the window. However, fluorescent reporters often exhibit bleaching, which produces a slow change in both the baseline and the transient signal amplitude. Moreover, absolute fluorescence values can vary depending on optical path length, indicator expression levels, excitation intensity and detector sensitivity. Such variations can produce dramatic changes in absolute fluorescence values not only across mice and sessions, but also within the same sample.

This time-dependent loss of fluorescence can be addressed by low-pass filtering the signal, followed by subtracting this "baseline" F0 from the original data and normalizing by it, resulting in standard ∆F/F0 values. Normalization by F0 also corrects for variation in optical path length, indicator expression levels, excitation intensity, and detector sensitivity, allowing for more robust comparisons across microscopes and studies. Therefore, ∆F/F0 is defined as follows:

$$\frac{\Delta F}{F_0} = 100\% \cdot \frac{F_{x,y,t} - F0_{x,y}}{F0_{x,y}} \qquad (1.1)$$

where the matrix F_{x,y,t} is the fluorescence of all pixels, (x, y) is the pixel index, t is the frame index and F0_{x,y} is the baseline fluorescence. The first two dimensions of the resultant matrix ∆F/F specify the pixel's location, while the third dimension specifies frames. In order to estimate the baseline fluorescence F0, we take into account the fact that when the mouse is moving, the overall cortex-wide activity significantly increases with respect to the rest state, as has been shown previously [28].
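Equation 1.1 together with this movement-based heuristic can be sketched as follows. Taking the rest-frame mean of each pixel as F0 is a simplified stand-in, assumed for illustration, for the exact baseline estimator used in the pipeline.

```python
import numpy as np

def delta_f_over_f(F, resting):
    """Eq. 1.1 with F0 taken as each pixel's mean fluorescence over rest
    frames, exploiting the fact that cortex-wide activity is elevated
    during movement. F: (x, y, t) stack; resting: boolean frame mask."""
    F0 = F[:, :, resting].mean(axis=2, keepdims=True)   # (x, y, 1) baseline
    return 100.0 * (F - F0) / F0

# toy stack: baseline of 1 during rest, doubled fluorescence while "moving"
F = np.ones((2, 2, 6))
F[:, :, 3:] = 2.0
resting = np.array([True, True, True, False, False, False])
dff = delta_f_over_f(F, resting)        # 0% at rest, 100% during movement
```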


Processing of the ∆F/F

In the present study, we want to examine whether there is an underlying, predominant spatial signal that drives cortex-wide activity, and thus we were only interested in large transients of activity rather than in the overall ∆F/F. Such fluorescence transients were identified as events that started when ∆F/F deviated 2 standard deviations (σ) from the baseline, computed as the mean ∆F/F of the same pixel, and ended when it returned to within 0.5σ of the baseline.
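This two-threshold (hysteresis) event detection can be sketched for a single pixel's trace. The threshold values follow the text; how a transient still open at the end of a recording is closed is an assumption of this sketch.

```python
import numpy as np

def find_transients(dff, on_sigma=2.0, off_sigma=0.5):
    """Return (start, end) frame pairs: a transient starts when dff exceeds
    mean + on_sigma*std and ends when it falls below mean + off_sigma*std."""
    mu, sd = dff.mean(), dff.std()
    on, off = mu + on_sigma * sd, mu + off_sigma * sd
    events, start = [], None
    for t, v in enumerate(dff):
        if start is None and v > on:
            start = t                    # onset: crossed the 2-sigma threshold
        elif start is not None and v < off:
            events.append((start, t))    # offset: back within 0.5 sigma
            start = None
    if start is not None:                # assumption: close a dangling event
        events.append((start, len(dff)))
    return events

trace = np.zeros(100)
trace[50:53] = 5.0                       # one synthetic transient
events = find_transients(trace)          # -> [(50, 53)]
```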

However, as discussed in the Introduction, since the recordings capture the light emitted by the fluorescence of the calcium indicator, the neural activity that drives the calcium changes is masked by the indicator dynamics. We cannot directly apply spike inference methods (discussed in Chapter 2), as the underlying signal is most likely produced by hundreds or thousands of different neurons, so a single spike train cannot explain the changes in fluorescence.

Instead, we used the Lucy-Richardson deconvolution algorithm [29][30], named after William Richardson and Leon Lucy, who described it independently. Briefly, given a neural signal s_j at frame j, the observed fluorescence at frame i, f_i, is given by:

$$f_i = \sum_j p_{ij}\, s_j \qquad (1.2)$$

with $p_{ij}$ given by

$$p_{ij} = p(i - j) \qquad (1.3)$$

where p(∆i) is called a point-spread function and Eq. 1.2 is a convolution. Our problem is to estimate, or deconvolve, the signal s_j given the observed fluorescence traces f_i. Given a known point-spread function, the estimate of the signal at iteration n, s_j^(n), is updated as:

$$s_j^{(n+1)} = s_j^{(n)} \sum_i \frac{f_i}{c_i}\, p_{ij} \qquad (1.4)$$


where

$$c_i = \sum_j p_{ij}\, s_j^{(n)} \qquad (1.5)$$

It has been proved [31] that if the iteration in Eq. 1.4 converges, then it converges to the maximum likelihood solution for s_j. In our case, as the signal decays exponentially, our point-spread function is:

$$p(\Delta i) = \gamma^{\Delta i} \qquad (1.6)$$

where γ is the decay parameter of our fluorescent calcium indicator, GCaMP6s, at a sampling rate of 10 Hz [25]. This method has previously been shown to yield good results compared to ground-truth data measured with electrodes [32]. The algorithm was implemented using MATLAB's deconvlucy. Raw vs. deconvolved ∆F/F traces are depicted in Figure 1.3.

Figure 1.3 Example of raw and deconvolved ∆F/F traces. Left: Mean ∆F/F of the whole dorsal cortex averaged within a session (20 minutes). White squares are the pixel locations from which the traces on the right have been taken, ordered top to bottom and left to right. Right: Traces of 10 randomly selected pixels. In blue, raw ∆F/F traces without processing; in red, deconvolved traces. Note the increased temporal resolution in the deconvolved traces.


Cross-session alignment

Here, we are interested in comparing the fluorescence activity of the same pixel over the course of several sessions. Although the wide-field setup was fairly rigid and the same area was imaged every time, any small change in either the window position or the magnification can significantly alter the analysis. In order to correct for such variation, a cross-session alignment was carried out.

This cross-session alignment consisted of selecting the same pixels in both a general reference image and the reference image of the session to be aligned. Once the pairs of pixels were selected, a rigid transformation T between the two sets of points was inferred. Such a transformation, also called a Euclidean isometry, is a geometric transformation of a Euclidean space that preserves the Euclidean distance between every pair of points. It acts on the vector containing all the pixels of the image to be corrected as:

$$T(\vec{v}) = R\vec{v} + \vec{t} \qquad (1.7)$$

where R, with R^T = R^{-1}, is an orthogonal transformation (therefore, R represents a rotation) and t is a vector giving the translation of the origin. The pixels were selected using MATLAB's cpselect, the transformation was inferred using fitgeotrans and the image was corrected using imwarp. An example of the final result can be seen in Fig 1.4.
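Fitting a rigid transform from matched pixel pairs can be sketched with the standard Kabsch/Procrustes least-squares solution. This Python sketch is an illustrative stand-in for fitgeotrans, not MATLAB's actual implementation.

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping
    src onto dst, both (N, 2) point arrays, via the Kabsch solution."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)              # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# recover a 90-degree rotation plus a shift from three matched points
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
dst = src @ R_true.T + np.array([3.0, -1.0])
R, t = fit_rigid(src, dst)
```

Because the transform is constrained to a rotation plus a translation, distances between pixels are preserved exactly, which is the Euclidean-isometry property stated above.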

Parcellation of cortical regions

The alignment transformation was useful to analyze data from the same mouse across different days. However, we were also interested in standardizing measurements between animals. Furthermore, we aimed to inspect the contribution of each cortical region to the overall spatial signal. To this end, the aligned reference for each mouse was segmented using a template obtained from the Allen Mouse Brain Common Coordinate Framework [33]. Briefly,


Figure 1.4 Cross-session registration with a common reference. A. Image overlay of the mouse reference image (purple) and the unregistered session reference (green). B. Overlay of the registered session reference and the mouse reference.

a custom MATLAB graphical user interface (GUI) (Figure 1.5) was developed in order to move and scale the template over the reference image, using the superior sagittal sinus, the transverse sinus, and the borders of the dorsal cortex as reference points. In turn, the GUI generated a six-layer mask of 400x400 pixels, with each layer defining a different region. The parcellation GUI can be seen in Fig 1.5.

1.3 Results

1.3.1 Head-fixed mouse successfully navigates through the cage but in limited ways

Manual restriction of head movement, or head fixation, of awake mice allows for sophisticated investigation of neural circuits in vivo that would otherwise be impossible in completely freely moving animals. Several approaches have been taken to study neural activity in behaving mice using head fixation. These approaches include full torso restriction [34],


Figure 1.5 Template Matcher GUI. The graphical user interface allows the user to move and scale a layered template so that it matches the reference image. For best results, it has to be aligned to the curvature of the posterior edge of the dorsal cortex. The GUI receives the layered mask and the image as input, and also allows the user to create a window mask.

free running on a treadmill [35] or a spinning disc [28]. The treadmill approach is limited in that it only allows some aspects of linear locomotion. In the spinning disc approach, the mouse is placed on top of an optical mouse sensor so that the movement of the paws is reflected in a virtual-reality environment displayed on a screen in front of it. This approach also has its limitations, as virtual reality cannot mimic the multi-sensory input (visual, olfactory, tactile and vestibular) present in real-life navigation.

In this sense, our approach, the Mobile HomeCage, offers a more naturalistic environment that makes results more comparable to freely moving experiments. As the cage is lifted by the air flow and the mouse's movement drives the cage movement, the mouse can navigate in all possible directions in a real environment while keeping a more natural body posture. In order to study the correlation of neural activity with navigation and with the environment, we have built an analytic arena to represent the mouse's position and head direction with respect to the cage at each time point. Fig. 1.6A shows a scatter plot of all positions with respect to the computational arena.

One of the disadvantages of head fixation is that it can cause great stress in mice [27]. When mice are stressed or scared, they show a freezing behavior, remaining still and not moving. This was the case in the first habituation sessions. However, by the time the mouse was imaged, it moved an average of 14.3% of the total time.

The mouse was able to explore the whole cage, as seen in Fig. 1.6A. However, it did so in a very specific manner, running only in the counter-clockwise direction. This is due to another undesired consequence of stress, called thigmotaxis. For evolutionary reasons, open-field environments without any shelter induce stress in mice. This produces an inclination to explore mainly the peripheral zone of the environment and stay close to the walls. In our case, in order to break this behavior, we placed an object in the arena so that the mouse could not just follow the walls. Yet this did not eliminate the thigmotactic behavior completely, as can be seen in Fig. 1.6B.

We acknowledge that the locomotion afforded by the MHC poses limitations, in that it does not fully reproduce freely moving mouse behavior. However, it is the best currently available option to explore behavioral correlates in dorsal cortex using calcium imaging and to study how sensory and locomotion variables affect navigation.

1.3.2 Cortex-wide neural activity is modulated by motion

Movement has been repeatedly shown to strongly modulate cortex-wide neural dynamics. Running modulates the gain of visual inputs [36] and is a main factor in the integration of visual motion and predictive coding [37]. Movement is also known to modulate multiple cortical areas [38]. The reason behind this modulation is that the onset of movement reflects changes in the animal's internal state, for example, increased arousal during running. To assess the impact of movement on neural activity, the correlation of fluorescence changes with speed


Figure 1.6 Behavioral correlates measured by the MHC. A. Scatter plot of all position points in the custom-built analytic arena. B. Scatter plot of the velocity's direction for times when the mouse was moving. Note that the movement pattern is clearly counter-clockwise.

was quantified by computing the cross-correlation coefficient between the fluorescence traces of individual pixels and the running traces for all sessions. The running-triggered activity can also be seen in the mean changes of fluorescence per area. Overall, all regions showed an increase in mean activity when the mouse was running (Fig 1.7).

1.3.3 Information index as a metric for the amount of information conveyed by neural activity

Neurons convey information by means of their firing rate. For example, suppose we are recording the activity of a neuron in the auditory cortex of a mouse while intermittently playing a sound of a certain frequency. Ideally, we assume that this neuron fires at a constant rate whenever that frequency is playing and does not fire at all in the absence of sound. In this case, if we are prevented from hearing the sound but are informed that the neuron has at this very moment fired a spike, we obtain one bit of information: the sound either is or is not playing. Now suppose that we also play another frequency, but the


Figure 1.7 Cortex-wide activity is modulated by motion. A. Mean ∆F/F activity when the mouse's speed is zero. B. Same as in A, but with the mouse moving. C. Pearson's correlation coefficient of activity with speed. Note the correlation on the lateral sides of the somatosensory cortices, similar to that described in [39].

neuron isn’t capable of picking this frequency up. If we are informed then that the neuron

has just fired, this neural activity has provided us two bits of information.

In this context, the neuron acts as an "information channel" whose input is the frequency of the sound, or any other variable, and whose output is the spike train. Therefore, we would like to measure the rate of information that any given neuron transmits about any variable, i.e. the mutual information. In information theory, the mutual information is a quantity that measures how much one random variable tells us about another. High mutual information indicates a large reduction in uncertainty; low mutual information indicates a small reduction; and zero mutual information between two random variables means the variables are independent.

For two jointly discrete random variables X and Y, the mutual information is defined as:

$$I(X;Y) = \sum_{y \in \Upsilon} \sum_{x \in \chi} p_{XY}(x, y)\, \log_2 \frac{p_{XY}(x, y)}{p_X(x)\, p_Y(y)} \qquad (1.8)$$


where p_XY is the joint probability mass function of X and Y, and p_X and p_Y are the marginal probabilities. Any given variable can be discretized into bins, with X the index of the currently occupied bin. During a sufficiently small time ∆t, the neuron can either spike or not spike; the event of spiking may therefore be indicated by a binary random variable Y. Given a mean firing rate λ, the probability of a spike occurring in ∆t is:

$$p_Y(Y = 1) = \lambda \Delta t \qquad (1.9)$$

Similarly, the probability of a spike given that the mouse occupies a given bin is:

$$p_{Y|X}(Y = 1 \mid X = x) = \lambda_x \Delta t \qquad (1.10)$$

where λ_x is the mean firing rate in that bin. Consequently, it follows that:

$$\lambda = \sum_x \lambda_x\, p_X(x) \qquad (1.11)$$

Using Bayes' theorem and simplifying notation, we can rewrite Eq. 1.8 as:

$$I(X;Y) = \sum_x \sum_y P(Y|X)\, P(X)\, \log_2 \frac{P(Y|X)}{P(Y)} \qquad (1.12)$$

Plugging Eq. 1.9 and Eq. 1.10 into Eq. 1.12 yields:

$$\begin{aligned}
I(X;Y) &= \sum_x \left[ \lambda_x \Delta t\, P(x) \log_2 \frac{\lambda_x \Delta t}{\lambda \Delta t} + (1 - \lambda_x \Delta t)\, P(x) \log_2 \frac{1 - \lambda_x \Delta t}{1 - \lambda \Delta t} \right] \\
&= \sum_x \lambda_x \Delta t\, P(x) \log_2 \frac{\lambda_x}{\lambda} + P(x) \log_2(1 - \lambda_x \Delta t) - P(x) \log_2(1 - \lambda \Delta t) \\
&\quad - \lambda_x \Delta t\, P(x) \left[ \log_2(1 - \lambda_x \Delta t) - \log_2(1 - \lambda \Delta t) \right]
\end{aligned} \qquad (1.13)$$

Approximating log2(1 − z) by the first term of its Taylor expansion:

$$\log_2(1 - \lambda \Delta t) \approx -\frac{\lambda \Delta t}{\ln(2)} \qquad (1.14)$$


Applying this approximation to Eq. 1.13 and excluding the second-order terms:

$$I(X;Y) = \sum_x \lambda_x \Delta t\, P(x) \log_2 \frac{\lambda_x}{\lambda} - \frac{\Delta t}{\ln(2)} \sum_x P(x)\, \lambda_x + \frac{\lambda \Delta t}{\ln(2)} \sum_x P(x) \qquad (1.15)$$

Using Eq. 1.11 and $\sum_x P(x) = 1$, the last two terms cancel:

$$I(X;Y) = \Delta t \sum_x \lambda_x\, P(x) \log_2 \frac{\lambda_x}{\lambda} \qquad (1.16)$$

Expanding the information as a power series in ∆t:

$$I(X;Y)(\Delta t) = \sum_{k=0}^{\infty} \frac{\Delta t^k\, I_k}{k!} \qquad (1.17)$$

Finally, the first-order coefficient of this expansion, i.e. the information per unit time, yields:

$$I \approx \sum_x \lambda_x\, P(x) \log_2 \frac{\lambda_x}{\lambda} \qquad (1.18)$$

The information index I is therefore an information rate in bits per second. This formula can readily be used to define the rate at which a cell conveys any aspect of the mouse's state, or any combination of aspects.
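The derivation can be checked numerically: for small ∆t, the exact mutual information of Eq. 1.8 divided by ∆t approaches the rate of Eq. 1.18. The firing rates and occupancy below are made-up illustrative values.

```python
import numpy as np

def exact_mi(p_x, lam_x, dt):
    """Exact mutual information (Eq. 1.8) between bin X and spike/no-spike Y."""
    lam = float(np.sum(p_x * lam_x))
    mi = 0.0
    for px, lx in zip(p_x, lam_x):
        # Y = 1 (spike) and Y = 0 (no spike) terms, per Eqs. 1.9-1.10
        for p_y_given_x, p_y in ((lx * dt, lam * dt), (1 - lx * dt, 1 - lam * dt)):
            if p_y_given_x > 0:
                mi += px * p_y_given_x * np.log2(p_y_given_x / p_y)
    return mi

def info_rate(p_x, lam_x):
    """First-order information rate in bits per second (Eq. 1.18)."""
    lam = float(np.sum(p_x * lam_x))
    nz = lam_x > 0
    return float(np.sum(p_x[nz] * lam_x[nz] * np.log2(lam_x[nz] / lam)))

p_x = np.array([0.25, 0.25, 0.25, 0.25])   # equal occupancy of four bins
lam_x = np.array([8.0, 2.0, 1.0, 1.0])     # toy firing rates (Hz) per bin
dt = 1e-4                                  # small time step in seconds
# exact_mi(p_x, lam_x, dt) / dt converges to info_rate(p_x, lam_x) as dt -> 0
```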

In the present study, we do not aim to study the information conveyed by a single cell, but by the hundreds or thousands of cells grouped in a single imaging pixel. Yet all the assumptions made so far hold equally for a linear sum of responses. Also, our signal is not in units of spikes but of changes in fluorescence. This could indeed pose an issue: the indicator's dynamics are slower than spike dynamics, so our assumption that for small enough time steps spiking can be modeled as a binary random variable may not hold, bearing in mind that there might be fluorescence residue from previous time points. However, we have recovered the underlying neural activity dynamics by deconvolving the signal (see Section 1.2.2), so fluorescence tails immediately after relevant neural events are suppressed.


Still, on account of these limitations, the information index should not be regarded as an absolute measure of the information conveyed, but rather as a relative measure useful for comparing the cortex-wide tuning of different variables.

1.3.4 Allocentric spatially selective pixels are restricted to the somatosensory dorsal cortex

In order to apply the information index described above, we discretized the Cartesian position of the mouse at each frame into N different bins. The spatial index for each pixel was then computed as:

$$I_{\text{pixel}} = \sum_{i=1}^{N} \lambda_i\, p_i \log_2 \frac{\lambda_i}{\lambda} \qquad (1.19)$$

where i is the index of the bin, λ_i is the mean activity of the pixel over the times when the mouse was in bin i, and λ is the overall mean ∆F/F of that pixel. As all bins are equally large, we take the probability p_i to be the proportion of time spent in bin i with respect to the total time.
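Eq. 1.19 can be sketched directly (Python here, standing in for the MATLAB pipeline). Skipping bins with zero occupancy or zero mean activity is an implementation choice assumed for this sketch, not detailed in the text.

```python
import numpy as np

def spatial_information(activity, bin_idx, n_bins):
    """Spatial information index of one pixel (Eq. 1.19).
    activity: deconvolved dF/F per frame; bin_idx: occupied spatial bin per frame."""
    occupancy = np.bincount(bin_idx, minlength=n_bins).astype(float)
    p = occupancy / occupancy.sum()                    # time fraction p_i per bin
    lam = activity.mean()                              # overall mean rate
    si = 0.0
    for i in range(n_bins):
        if occupancy[i] == 0:
            continue                                   # unvisited bin
        lam_i = activity[bin_idx == i].mean()          # mean rate in bin i
        if lam_i > 0:
            si += lam_i * p[i] * np.log2(lam_i / lam)
    return si

# a pixel active only in bin 0 carries spatial information...
place_like = spatial_information(np.array([4.0, 4, 0, 0, 0, 0, 0, 0]),
                                 np.array([0, 0, 1, 1, 2, 2, 3, 3]), 4)   # -> 2.0
# ...while spatially uniform activity carries none
flat = spatial_information(np.ones(8), np.array([0, 0, 1, 1, 2, 2, 3, 3]), 4)  # -> 0.0
```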

We applied the spatial information index to cortical fluorescence recordings of three 20-minute-long sessions. As discussed in the preceding section, large calcium transients were mostly found at times when the mouse was moving, so, in order to reduce the computational effort, only the frames at times when the mouse was moving were considered. Also, pixels outside the mask region were excluded from the analysis. Overall, the ∆F/F of 100k pixels along 15k time points (25 minutes of recording across the 3 sessions) was considered for analysis. The information index was computed under a spatial binning of 64 bins, the minimum number of bins that can fully recover the cage's inner structure.

The spatial information index was computed for each session separately. To keep the true spatial index values and dismiss indices occurring by chance, the vector of mouse positions was randomly shifted 100 times, with the spatial index of each pixel computed for each random shift. In this way, the real activity was decorrelated from position. Then, only pixels with spatial information index values greater than the 99th percentile of the shuffled distribution were considered, see Fig. 1.8A. To compute the spatial information index across sessions, the mean of the three sessions' indices was taken, and only pixels regarded as significant in all three sessions were considered (Fig. 1.8B).
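The shuffling test can be sketched as follows, with the index of Eq. 1.19 recomputed inline for each shuffle. Circular shifts of the position vector preserve the temporal structure of both signals while decorrelating them; the use of circular (rather than arbitrary) shifts is an assumption about the thesis's "random shift".

```python
import numpy as np

def spatial_info(activity, bin_idx, n_bins):
    """Spatial information index of one pixel (Eq. 1.19)."""
    p = np.bincount(bin_idx, minlength=n_bins) / len(bin_idx)
    lam = activity.mean()
    si = 0.0
    for i in np.nonzero(p)[0]:
        lam_i = activity[bin_idx == i].mean()
        if lam_i > 0:
            si += lam_i * p[i] * np.log2(lam_i / lam)
    return si

def shuffle_significant(activity, bin_idx, n_bins, n_shuffles=100, pct=99, seed=0):
    """True if the real index beats the pct-th percentile of a null
    distribution built from circularly shifted position vectors."""
    rng = np.random.default_rng(seed)
    null = [spatial_info(activity, np.roll(bin_idx, rng.integers(1, len(bin_idx))), n_bins)
            for _ in range(n_shuffles)]
    return spatial_info(activity, bin_idx, n_bins) > np.percentile(null, pct)
```

A pixel whose activity is tied to one spatial bin passes the test, while a spatially untuned pixel does not, regardless of its overall activity level.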

Figure 1.8 Allocentric Spatial Information Index. A. First column: the information index of all pixels, computed for each session (rows). Pixels with an information index greater than the 99th percentile of the shuffled distribution are shown in the second column. B. Overall spatial information index (first column) and pixels whose response is not due to chance (second column).


Considering the location of spatial information across cortical areas, we would have expected regions such as the retrosplenial cortex or the parietal cortex to encode allocentric spatial information. However, we only see robust spatial information encoding in the posterior part of the somatosensory cortex, and only in the left hemisphere. To see which positions these pixels were coding for, we computed the mean allocentric activity map of this area: briefly, for each time point we binned the position of the mouse, as done for the spatial information index, and added the activity of the pixel at that instant to the corresponding bin. We then normalized by the time spent in each bin. The results can be seen in Fig 1.9.

Figure 1.9 Changes in ∆F/F as a function of allocentric position. Allocentric activity map showing the mean activity of the spatially meaningful pixels as a function of location. It indeed has a place field in the lower corner of the arena, and this place field coincides with the location of the hexagonal object.

The somatosensory cortex was further divided into more specialized regions. We could then verify that the area with a meaningful spatial information rate was the barrel cortex. The barrel field cortex is the sensory area that receives inputs directly from the whiskers of the mouse. Thus, it was not a surprise to see spatial information being coded in that region. In these imaging sessions, the mouse tended to run in a counter-clockwise manner, so that when it was in that region it had the hexagonal object at its right. The objective placed on top of the mouse did not allow it to come close to the walls of the arena, as the objective is wider than the head plate. However, the height of the objects did not prevent the mouse from coming close to them. Therefore, when the mouse was in that region, it could sense the object with its whiskers. Sensory stimuli are coded contralaterally in the cortex, meaning that stimuli coming from the right are coded in the left hemisphere. Therefore, we conclude that there was no spatially relevant information in that area: it was just an artifact of the experimental conditions.

1.4 Discussion

Existing theories of spatial cognition have posited that allocentric information is necessary to navigate through an environment. Previous studies have indeed demonstrated the presence of allocentric spatial representations throughout cortex, such as head direction. Here we hypothesized that there was an underlying spatial information signal present in the cortex. However, our results suggest otherwise.

We have designed an experimental paradigm that allows the recording of large-scale neural dynamics in the mouse cortex. Although the movement of the animal under these experimental conditions was somewhat limited, it was enough to show navigation correlates that are in agreement with previous studies. Indeed, we showed a correlation with movement similar to those studies [39][28]. This result was expected, as movement encoding is one of the key aspects involved in navigation: it modulates visual cortex responses and, along with the head-direction signal, it allows the animal to keep track of its position, i.e. path integration [40].

We have derived an index that accounts for the information rate per pixel41. However, the only meaningful, non-chance-driven region encoding spatial information turned out to be an experimental artifact, as it really encoded sensory stimuli. Our current evidence therefore suggests that there is no widespread allocentric information in the mouse cortex. Yet we cannot rule out this possibility completely. One caveat of our current calcium indicator is that it reports calcium changes throughout the entire neuron, from cell body to dendrites

and axons. As we were imaging the superficial layer of the cortical surface, the signal we were able to detect came mostly from layer I of the cortex. Layer I consists largely of apical dendritic tufts of pyramidal neurons and horizontally oriented axons, as well as glial cells. This layer therefore contains few pyramidal neurons, the main computational unit of the cortex. As a result, our signal-to-noise ratio could be greatly influenced by axons and dendrites of neurons that were not necessarily located in the same region. Further experiments aiming to resolve an allocentric spatial signal throughout cortex should use a calcium indicator that is expressed only in the cell body; indicators of this type have only recently been engineered42.

Even if we could not show the presence of an underlying allocentric signal, the detection of the barrel cortex through the spatial information measure proves that the method can reliably detect spatially tuned pixels. Moreover, the location of those pixels is consistent with the position of the barrel cortex, validating our parcellation procedure. It also serves as a validation of our deconvolution method.

Although our main hypothesis was not supported by the current evidence, the experimental paradigm we have designed can be a powerful tool for future studies of the mouse cortex during navigation. It increases the degrees of freedom compared to other methods such as treadmills and spinning disks. However, the design of the arena needs further improvement, given the mouse's tendency to move in stereotyped patterns. The training and habituation of the mouse also need to be improved in future studies.

Chapter Two

EGOCENTRIC BORDER CELL

TUNING IN RSC

2.1 Preface

Retrosplenial cortex has been proposed to store neural egocentric border representations, i.e., cells that are tuned to boundaries located at specific distances and orientations with respect to the animal. Here, we extracted neural activity from 2-photon imaging data along with behavioral variables provided by the Mobile HomeCage. We computed egocentric activity maps and characterized egocentric tuning to the walls across the neural population. We then applied statistical methods to identify the neurons that are tuned to boundaries in egocentric coordinates. In agreement with prior evidence, we found robust coding of boundaries in egocentric coordinates in the retrosplenial cortex, supporting the role of the retrosplenial cortex as a coordinate-transformation hub.


2.2 Methods

2.2.1 Experiment Set-up

Animal, Head-fixing surgery & Training

In this case, a single triple-transgenic male mouse with the same characteristics as in Chapter 1 was bred. The same protocol described in Chapter 1 was applied. The head-plate surgery also followed a similar procedure, but in this case the skull was abraded with a drill burr to improve adhesion of the dental acrylic. Then, a 4-5 mm diameter craniotomy was made over the midline, centered 2.5-3.0 mm posterior to bregma (so that it completely covered the retrosplenial cortex), leaving the dura intact. A cranial window was implanted over the craniotomy and sealed first with silicone elastomer (Kwik-Sil, World Precision Instruments) and then with dental acrylic (C&B-Metabond, Parkell) mixed with black ink to reduce light transmission. The cranial windows were made of two rounded pieces of coverglass (Warner Instruments) bonded with a UV-cured optical adhesive (Norland, NOA61). The bottom coverglass (4 mm) fit tightly inside the craniotomy while the top coverglass (5 mm) was bonded to the skull using dental acrylic. The surgery was performed by Michael Goard. The behavioral acclimation and training to the Mobile HomeCage were again similar to those described in Chapter 1. All animal procedures were approved by the Institutional Animal Care and Use Committee at the University of California, Santa Barbara.

Two-Photon Imaging

After >2 weeks' recovery from surgery, GCaMP6s fluorescence was imaged using a Prairie Investigator 2-photon microscopy system with a resonant galvo scanning module (Bruker). Before 2-photon imaging during mouse navigation, GCaMP6s expression was checked to assess the condition of the fluorescence signal. Focus control in our microscope design was achieved by mounting the microscope objective on a linear translation stage (XYZ axis control). The translation stage allowed us to place the Neurotar Mobile HomeCage platform, with the carbon cage arena, directly below the objective, making it possible to focus on the brain of the head-restrained animal. For fluorescence excitation, we used a Ti:Sapphire laser (Mai-Tai eHP, Newport). For collection, we used GaAsP photomultiplier tubes (Hamamatsu). We used a 16x/0.8 NA microscope objective (Nikon) at 4x magnification (760 × 760 µm). Laser power ranged from 190-210 mW. Photobleaching was minimal (<1% min−1) for all laser powers used. Since the photomultiplier tubes (PMTs) are very sensitive to any external photons, a plastic light blocker surrounded the objective, and a custom stainless-steel light blocker was mounted to the head plate and interlocked with a tube around the objective to prevent light from reaching the PMTs. A schematic of the imaging set-up can be seen in Fig. 2.1.

Figure 2.1 Two-photon and Mobile HomeCage set-up. Schematic of the custom two-photon microscope set-up for in vivo GCaMP6s single-cell imaging while the mouse behaves in the Mobile HomeCage.

2.2.2 Post-processing

In the case of two-photon microscopy, the aim was to identify all the neurons present in the recording and extract their fluorescence traces. All subsequent analyses were performed in MATLAB R2019b using custom laboratory code. Source code can be found at https://github.com/ucsb-goard-lab/Two-photon-calcium-post-processing.

Extraction of single cell fluorescent traces

Images were acquired using PrairieView acquisition software and converted into TIF files. Although the mouse's head is fixed during imaging sessions, at this level of magnification (spatial resolution on the order of micrometres) any minor X-Y movement can severely impact the quality of the data. To that end, images were motion-corrected by registration to a reference frame (the pixel-wise mean of all frames) using 2D cross-correlation.
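As an illustration of this registration step, here is a minimal Python/NumPy sketch (the actual pipeline is MATLAB; the function name and the circular-shift handling are assumptions of this example, not the lab's code). It estimates a rigid (dy, dx) shift via FFT-based 2D cross-correlation and undoes it:

```python
import numpy as np

def register_frame(frame, reference):
    """Estimate the (dy, dx) rigid shift of `frame` relative to `reference`
    via FFT-based 2D cross-correlation, then undo it with a circular shift."""
    # Cross-correlation theorem: corr = IFFT(FFT(ref) * conj(FFT(frame)))
    corr = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(frame))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Interpret wrapped peaks beyond half the image as negative displacements
    if dy > frame.shape[0] // 2:
        dy -= frame.shape[0]
    if dx > frame.shape[1] // 2:
        dx -= frame.shape[1]
    return np.roll(frame, (dy, dx), axis=(0, 1)), (dy, dx)
```

In practice sub-pixel registration and windowing matter, but the peak of the cross-correlation already recovers integer shifts exactly on clean data.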

To identify responsive neural somata (cell bodies), a pixel-wise activity map was calculated using a modified kurtosis measure. Neuron cell bodies were then identified using a local adaptive threshold and iterative segmentation. Automatically defined ROIs were then manually checked for proper segmentation in a graphical user interface (allowing comparison to raw fluorescence and activity map images). To ensure that the response of individual neurons was not due to local neuropil contamination of somatic signals, a corrected fluorescence measure was estimated according to:

F_corrected(n) = F_soma(n) − α · F_neuropil(n)   (2.1)

where F_neuropil was defined as the fluorescence in the region within 30 µm of the ROI border (excluding, of course, other ROIs) at frame n. The parameter α was chosen from [0, 1] to minimize the Pearson's correlation coefficient between F_corrected and F_neuropil. The ΔF/F was then computed for each ROI as:

ΔF/F = 100% × (F_n − F_0) / F_0   (2.2)

where F_n is the corrected fluorescence (F_corrected) at frame n and F_0 is defined as the mode of the corrected fluorescence density distribution across the entire time series.
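The neuropil correction (Eq. 2.1) and ΔF/F computation (Eq. 2.2) can be sketched in Python as follows. This is illustrative only (the lab's implementation is in MATLAB); the α grid resolution, the use of the absolute correlation, and the histogram-based mode estimate are assumptions of this sketch:

```python
import numpy as np

def neuropil_correct(f_soma, f_neuropil, alphas=np.linspace(0, 1, 101)):
    """Pick alpha in [0, 1] minimizing the (absolute) Pearson correlation
    between F_corrected = F_soma - alpha*F_neuropil and F_neuropil (Eq. 2.1)."""
    best_alpha = min(
        alphas,
        key=lambda a: abs(np.corrcoef(f_soma - a * f_neuropil, f_neuropil)[0, 1]),
    )
    return f_soma - best_alpha * f_neuropil, best_alpha

def dff(f_corrected, n_bins=100):
    """Delta-F/F with F0 taken as the mode of the fluorescence density (Eq. 2.2),
    estimated here as the center of the fullest histogram bin."""
    counts, edges = np.histogram(f_corrected, bins=n_bins)
    f0 = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
    return 100.0 * (f_corrected - f0) / f0
```

The mode is a robust baseline choice because calcium transients are sparse: most samples sit at baseline, so the densest histogram bin approximates F_0.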


Spike inference from fluorescence traces

As discussed in the Introduction, the underlying spiking activity of the neurons is masked by the calcium dynamics, and extracting the spike train of each neuron from a fluorescence trace is a non-trivial problem. Assuming there is a one-dimensional fluorescence trace F from a neuron, at time t the fluorescence F_t is a function of the intracellular calcium concentration at that time, C_t:

F_t = α C_t + β + ε_t   (2.3)

Here, α is a factor accounting for the scale of the signal, including variables such as the number of calcium sensors within the cell, photons emitted per calcium ion, amplification of the signal, etc. The offset β absorbs the baseline calcium concentration of the cell, background fluorescence and other constant factors. Finally, ε_t accounts for the noise at each time-point and is independent and identically distributed, following a normal distribution with zero mean and variance σ².

Assuming that the intracellular calcium concentration increases by A µM whenever there is a spike, with n_t being the number of spikes at a time-point, and then decays exponentially back to a baseline concentration C_b µM with some time constant τ, we can express the calcium concentration at the following time-step as:

C_{t+1} = (1 − Δ/τ) C_t + (Δ/τ) C_b + A n_t   (2.4)

where Δ is the time-step size, in this case the inverse of the frame rate. As the fluorescence depends linearly on the calcium concentration, we can set the factors α and A to 1 without loss of generality. Also, C_b can be set to zero because the parameter β already accounts for the offset. It is important to note that we are not interested in knowing the true calcium concentration at any point, but rather a relative value:


F_t = C_t + β + ε_t
C_t = γ C_{t−1} + n_t   (2.5)

where γ = 1 − Δ/τ. Intuitively, Eq. 2.5 states that the calcium concentration decays exponentially over time unless there is a spike at the t-th time-point, in which case it increases again, and that the observed fluorescence is a noisy realization of the underlying calcium concentration at each time-point.
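A minimal simulation of the generative model in Eq. 2.5 makes this intuition concrete (Python sketch; parameter values are illustrative):

```python
import numpy as np

def simulate_fluorescence(spikes, gamma=0.95, beta=0.0, sigma=0.0, seed=0):
    """Generate a fluorescence trace from a spike count vector using the
    AR(1) calcium model of Eq. 2.5:
        C_t = gamma * C_{t-1} + n_t,   F_t = C_t + beta + eps_t."""
    rng = np.random.default_rng(seed)
    c = np.zeros(len(spikes))
    for t, n in enumerate(spikes):
        c[t] = (gamma * c[t - 1] if t > 0 else 0.0) + n
    return c + beta + sigma * rng.standard_normal(len(spikes))
```

A single spike produces the familiar fast-rise, exponential-decay calcium transient; overlapping transients simply sum under this linear model.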

Therefore, the problem reduces to an optimization problem:

minimize over c, β_0:  Σ_{t=1}^{T} (F_t − β_0 − c_t)² + λ Σ_{t=2}^{T} P(c_t − γ c_{t−1})   subject to c_t ≥ γ c_{t−1}   (2.6)

where λ is a non-negative tuning parameter and P(·) is a penalty function designed to ensure sparsity in its argument, that is, to encourage c_t − γ c_{t−1} = n_t = 0, so that at most time-points no spike is estimated to occur.

The derivation and design of the algorithm are beyond the scope of this study. This non-negative deconvolution algorithm has been shown to successfully predict underlying spike trains43.
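To build intuition for what the solver recovers, the following toy Python sketch inverts Eq. 2.5 directly on a (near-)noiseless trace by thresholding the residual n_t = c_t − γ c_{t−1}. This is emphatically not the penalized algorithm of Eq. 2.6 (ref. 43), only a naive illustration of the underlying idea:

```python
import numpy as np

def naive_deconvolve(f, gamma=0.95, threshold=0.1):
    """Naive spike inference: on a (near-)noiseless trace obeying
    c_t = gamma*c_{t-1} + n_t, the residual f_t - gamma*f_{t-1} equals n_t,
    so it is nonzero exactly at spike times. Real data needs the penalized
    solver of Eq. 2.6 because noise makes every residual nonzero."""
    residual = f - gamma * np.concatenate(([0.0], f[:-1]))
    return np.where(residual > threshold, residual, 0.0)
```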

2.3 Results

2.3.1 Computation of egocentric and allocentric spatial activity maps

In order to assess the spatial stability of the cells and study their egocentric responses to the walls of the cage, we constructed allocentric and egocentric spatial activity maps. For the allocentric activity maps, the animal's coordinates during the session were discretized into 2.5 mm × 2.5 mm spatial bins (yielding a 50 × 50 bin matrix). Then, for each time-point in the activity trace, we assigned the activity to the corresponding spatial bin. After all activity was binned, we normalized the overall matrix by the time the mouse spent in each bin.
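An occupancy-normalized allocentric map can be sketched in Python as follows (illustrative only; here occupancy is counted in frames, so normalizing by time simply rescales by the frame interval):

```python
import numpy as np

def allocentric_map(x, y, activity, arena_size=250.0, bin_size=2.5):
    """Occupancy-normalized spatial activity map: sum the activity trace into
    spatial bins, then divide by the number of frames spent in each bin."""
    n = int(arena_size / bin_size)
    edges = np.linspace(0, arena_size, n + 1)
    act, _, _ = np.histogram2d(x, y, bins=[edges, edges], weights=activity)
    occ, _, _ = np.histogram2d(x, y, bins=[edges, edges])
    with np.errstate(invalid="ignore"):
        return act / occ  # NaN where the animal never visited a bin
```

Unvisited bins come out as NaN rather than zero, which keeps "no data" distinguishable from "visited but silent" in later analyses.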

For the egocentric activity maps, we constructed an analytic arena as in Chapter 1, with each wall discretized into 30 points. Then, we calculated the position of the walls relative to the animal for each frame. To do so, we subtracted the mouse's position from all the wall points and applied a 2D rotation by the head-direction angle provided by the Mobile HomeCage. Next, for each frame, we converted the walls to polar coordinates and binned the distance to the animal in 5 mm bins and the orientation in 3° bins. Importantly, everything was expressed relative to the animal. The binning was done with angles going from 0° to 360°, such that 0° corresponded to the right of the animal, 90° to the front of the animal and 180° to its left. Only points of the walls with modulus smaller than the radius of the cage (125 mm) were considered. For each frame, all the points of the walls were added to their respective bins in the polar boundary occupancy map. To compute the egocentric activity map, we added the neural activity of each frame to the corresponding locations in the occupancy map. Then, we normalized the activity map by dividing it by the amount of time each bin was occupied by the walls. For visualization, both the egocentric and allocentric spatial activity maps were smoothed with a Gaussian kernel of 5 bin width and 5 bin SD.
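The allocentric-to-egocentric transformation of the wall points can be sketched in Python as follows (illustrative; the exact axis convention for "front" versus "right" is an assumption of this sketch, and the real pipeline additionally bins the polar coordinates and accumulates occupancy):

```python
import numpy as np

def egocentric_wall_points(wall_xy, mouse_xy, head_dir_deg):
    """Express wall points in mouse-centered coordinates: translate the mouse
    to the origin, then rotate by -head_direction so the heading is fixed.
    Returns polar coordinates (distance, angle in degrees in [0, 360))."""
    rel = wall_xy - mouse_xy                     # translate mouse to origin
    a = np.deg2rad(head_dir_deg)
    rot = np.array([[np.cos(a), np.sin(a)],      # rotation by -a
                    [-np.sin(a), np.cos(a)]])
    ego = rel @ rot.T                            # rotate wall points
    dist = np.hypot(ego[:, 0], ego[:, 1])
    ang = np.degrees(np.arctan2(ego[:, 1], ego[:, 0])) % 360.0
    return dist, ang
```

A wall point at world angle θ with the animal heading at angle h ends up at egocentric angle θ − h, which is exactly what makes the resulting maps invariant to where in the arena the animal is facing.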

2.3.2 Most RSC neurons are excitable

We extracted the activity of n = 496 neurons in RSC from one mouse during head-fixed exploration of the Mobile HomeCage during a single 20-minute session. For the analyses that follow, we only considered activity from frames in which the mouse was moving; in this session, the mouse explored the cage 17% of the total time. Neurons exhibited diverse firing rates as a function of the location of the animal. Spatially responsive cells are thought to be excitable cells, in that they respond with very high firing rates when the animal is in the receptive field


Figure 2.2 Transformation from an allocentric reference frame to a mouse-centered reference frame. For each frame, we translated the position of the animal to the origin of coordinates and then rotated the environment so that the cage was expressed with respect to the animal itself.

of the cell. Hence, in order to reduce the pool of possible egocentric border cells, we applied a mean firing-rate threshold of 5 Hz. Across the full population, 28.8% (143/496) of cells were rejected.

2.3.3 Egocentric boundary vector responsivity of RSC

Before actually computing the egocentric activity maps, we computed the allocentric activity maps of the pool of cells. Several of them had receptive fields only near the environment boundaries. We then inspected the relation of this activity to the head direction of the animal. A number of receptive fields revealed that the head direction was preserved along the boundaries, suggesting that those cells were egocentric border neurons, i.e., cells that respond to the walls of the environment only when they are placed at a specific orientation and distance with respect to the animal.


To test this, we built the egocentric activity maps as described above. Egocentric activity maps (EAMs) corresponding to boundaries must respond only to a specific direction and distance, so the high activity values must be clustered together. Given that the cage's walls are symmetric, one might think that if a neuron fired for a certain orientation, it could also fire for the orientation 180° opposite to it (i.e., if a cell responds to a border in front of the animal, it would also show activity to a border behind it). However, this possibility is ruled out because we only considered as potential receptive fields the points of the walls that are closer than half the cage diameter.

Hence, in order to cluster the EAM, we selected bins with activities above the 95th percentile. Then, we required the bins with activity above the 99th percentile to all belong to the same 95th-percentile cluster. Of the pool of 393 cells, n = 200 were rejected.
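The clustering criterion above can be sketched in Python with a simple flood fill (illustrative; the lab's MATLAB implementation may differ in connectivity and tie handling):

```python
import numpy as np
from collections import deque

def single_cluster_receptive_field(eam):
    """Check that the strongest EAM bins form one coherent receptive field:
    threshold at the 95th percentile, find 4-connected clusters among the
    selected bins, and accept only if every bin above the 99th percentile
    falls in the same cluster."""
    hi = eam > np.percentile(eam, 95)
    top = eam > np.percentile(eam, 99)
    labels = np.zeros(eam.shape, dtype=int)
    cur = 0
    for seed in zip(*np.nonzero(hi)):
        if labels[seed]:
            continue
        cur += 1                      # start a new cluster, flood-fill it
        queue = deque([seed])
        labels[seed] = cur
        while queue:
            r, c = queue.popleft()
            for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= rr < eam.shape[0] and 0 <= cc < eam.shape[1]
                        and hi[rr, cc] and not labels[rr, cc]):
                    labels[rr, cc] = cur
                    queue.append((rr, cc))
    top_labels = np.unique(labels[top])
    return top_labels.size == 1 and top_labels[0] != 0
```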

From each EAM we computed the mean resultant angle of the angular tuning. In circular statistics, this is the angle of the mean resultant of the distribution of bins, defined as:

MR = ( Σ_{θ=1}^{n} Σ_{D=1}^{m} F_{θ,D} · e^{iθ} ) / (n · m) ∈ ℂ   (2.7)

where the sums run over all possible angles θ and distances D, F_{θ,D} is the EAM value for that distance and angle, n is the number of angle bins and m the number of distance bins. The mean resultant angle of the EAM is then:

MRA = arctan( Im(MR) / Re(MR) )   (2.8)

With the mean resultant angle computed, we identified the preferred distance of each EAM by finding, across all distances, the maximum bin value at the computed angle.
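Equations 2.7-2.8 and the preferred-distance lookup can be sketched in Python as follows (illustrative; the row/column layout of the EAM and the 3° bin width are assumptions of this sketch):

```python
import numpy as np

def preferred_direction_and_distance(eam, angle_bin_deg=3.0):
    """Mean resultant angle of an EAM (Eqs. 2.7-2.8) and the preferred
    distance, taken as the distance bin with maximal activity at that angle.
    Rows are assumed to index angle bins, columns distance bins."""
    n_angles = eam.shape[0]
    theta = np.deg2rad(angle_bin_deg * np.arange(n_angles))
    # Complex mean resultant: each bin contributes its activity at its angle
    mr = (eam * np.exp(1j * theta)[:, None]).sum() / eam.size
    mra = np.degrees(np.angle(mr)) % 360.0        # arctan(Im/Re), Eq. 2.8
    angle_idx = int(round(mra / angle_bin_deg)) % n_angles
    dist_idx = int(np.argmax(eam[angle_idx]))
    return mra, angle_idx, dist_idx
```

Using `np.angle` (i.e. a four-quadrant arctangent) avoids the 180° ambiguity of a bare arctan(Im/Re).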

Once we had identified the preferred distance and direction for each cell, we proceeded to determine whether the response was due to the egocentric orientation of the border with respect to the animal or due to chance, and to assess, if the EAM did in fact have a receptive field with a preferred orientation and distance, how robust this representation was.

To that end, we generated synthetic neuron responses whose receptive fields were the preferred distance and direction of the real neuron. Each synthetic neuron would add a spike to the EAM for all time-points with a border occupying the bin corresponding to the real neuron's preferred distance and direction. These synthetic EAMs were also normalized by the egocentric occupancy. Then, as a measure of similarity, we computed the Spearman's correlation coefficient between the synthetic and real EAMs (Fig. 2.4).

Figure 2.3 Distribution of Pearson's ρ for all neurons. Only neurons with a correlation coefficient with their synthetic map higher than the 99th percentile of the distribution were considered egocentric border cells.

Then, for each neuron, the mouse's position trace was randomly shifted 100 times, so that activity was decorrelated from position. For each random shift, we computed the real EAM, the synthetic EAM and the correlation between them. Only neurons whose correlation with their synthetic EAM was greater than the 99th percentile of the distribution of all correlations computed in the random shifts were selected. In addition, the activity-map correlation had to be greater than the 99th percentile of the distribution of correlations within that same neuron. Overall, across the whole population, 78 neurons had correlations greater than the threshold (Fig. 2.3). Compared to the 496 cells detected, that number corresponds to the


Figure 2.4 Each row is a different cell; the left column is the real egocentric activity map and the right is the synthetically built EAM. The plots are polar, so the radius of the circle is the maximum distance allowed. For cell 488, the preferred orientation is 260° and the preferred distance is 65 mm (indicated with the white bin). Cell 488 robustly encodes the egocentric position of borders, while cell 308 does not.

15% of all cells. Strikingly, a recently published study of egocentric border cells found a total of 17% of measured neurons to be egocentric border cells17.
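The shuffle control described above can be sketched in Python as follows (illustrative; `map_and_corr` is a hypothetical stand-in for the full real-versus-synthetic EAM correlation pipeline, which here is abstracted into a single callable):

```python
import numpy as np

def shuffle_significance(activity, positions, map_and_corr, n_shuffles=100, seed=0):
    """Position-shuffle control: circularly shift the position trace relative
    to the activity, recompute the real/synthetic EAM correlation each time,
    and accept a cell only if its true correlation beats the 99th percentile
    of the shuffled (null) distribution. `map_and_corr(activity, positions)`
    must return one correlation value."""
    rng = np.random.default_rng(seed)
    true_corr = map_and_corr(activity, positions)
    null = np.array([
        map_and_corr(activity,
                     np.roll(positions, rng.integers(1, len(positions)), axis=0))
        for _ in range(n_shuffles)
    ])
    return true_corr > np.percentile(null, 99), null
```

Circular shifts preserve the autocorrelation of both traces while destroying their alignment, which is what makes this a fair null distribution for spatially tuned activity.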

2.4 Discussion

To effectively navigate through their environment, animals need to store and recall repre-

sentations of spatial interrelationships. These representations have repeatedly been shown


Figure 2.5 Activity maps of three detected egocentric border cells. The first column is the allocentric activity map; the second column shows the head direction at each detected spike, with the size of the dots indicating the magnitude of the spike; the third column is the real egocentric activity map; and the last column is the synthetic map. Neuron 223 codes for the walls to the right of the animal, while neuron 304 does the same for the walls to its left.

to exist within the hippocampal formation, and they are mostly encoded in allocentric coordinates, i.e., coordinates relative to external cues and not to the animal itself. However, all allocentric spatial information has to be encoded first in egocentric coordinates, as all stimuli reach the animal in coordinates relative to itself. The reverse also needs to hold true. Egocentric border cells, neurons that respond to boundaries located at very specific distances and directions with respect to the animal, have been predicted to be an essential element of the egocentric-allocentric transformation system18. The retrosplenial cortex, in turn, is a perfect candidate area to store these representations, given its location and strong connectivity to the hippocampus (where allocentric representations and memories are stored) and to the sensory cortices, where sensations are processed.

Although these cells have been predicted to exist for years, it is only very recently that they have been systematically reported17,44. Here, we aimed to replicate the results obtained by these two groups, but using two-photon microscopy in an awake, behaving mouse instead of electrophysiology. To do so, we coupled the 2-photon microscope to the Mobile HomeCage. This set-up allowed us to explore the egocentric boundary responsivity of hundreds of neurons simultaneously while the animal was behaving in the cage.

We inferred the spiking rate of hundreds of automatically traced and manually curated cells in the retrosplenial cortex. We then applied elementary transformations to the arena in order to assess the responsivity of these cells to boundaries situated in coordinates relative to the mouse's position. We proceeded to calculate the correlation of these neurons with synthetically built neural units with the same preferred orientation and distance. Finally, we ran random-shift trials and selected only the cells whose correlation with their synthetic counterparts was higher than the random distribution of correlations.

In this manner, we identified a proportion of egocentric border cells in retrosplenial cortex similar to other studies17, and have thus successfully replicated those studies using a different experimental approach. Two-photon microscopy opens up a vast range of possibilities regarding the responses of these cells. For example, in the future we could explore what dendritic inputs these cells receive and assess the stability of their representations over the course of days. Although egocentric border cells are vital in the transformation circuit, they are not the only computational unit predicted by the models. Future research should therefore assess whether the other spatial representations are also present in the retrosplenial cortex. Further, the findings reported here also serve as a proof of concept of the Mobile HomeCage tracking system and expand our possibilities for assessing and exploring spatial-information correlates in single cells with 2-photon imaging systems.

However, mainly because of the pandemic, our experiment involved only one mouse over the course of a single 20-minute session. In order to validate our preliminary results, we will need to expand our experiments to multiple mice and sessions to confirm that our results hold true, not only for this mouse, but for the retrosplenial cortex of mice in general.


REFERENCES

1. O’Keefe, J. & Dostrovsky, J. The hippocampus as a spatial map: Preliminary evidence from unit activity in the freely-moving rat. Brain Research (1971).

2. Taube, J. S., Muller, R. U. & Ranck, J. B. Head-direction cells recorded from the postsubiculum in freely moving rats. I. Description and quantitative analysis. Journal of Neuroscience 10, 420–435 (1990).

3. Fyhn, M., Molden, S., Witter, M. P., Moser, E. I. & Moser, M.-B. Spatial Representation in the Entorhinal Cortex. Science 305, 1258–1264 (2004).

4. Bicanski, A. & Burgess, N. Neuronal vector coding in spatial cognition. Nature Reviews Neuroscience 21, 453–470 (2020).

5. Shine, J., Valdes-Herrera, J. P., Tempelmann, C. & Wolbers, T. Evidence for allocentric boundary and goal direction information in the human entorhinal cortex and subiculum. Nature Communications 10, 1–10 (2019).

6. Nitz, D. A. Tracking route progression in the posterior parietal cortex. Neuron 49, 747–756 (2006).

7. Lever, C., Wills, T., Cacucci, F., Burgess, N. & O’Keefe, J. Long-term plasticity in hippocampal place-cell representation of environmental geometry. Nature 416, 90–94 (2002).

8. Sarel, A., Finkelstein, A., Las, L. & Ulanovsky, N. Vectorial representation of spatial goals in the hippocampus of bats. Science 355, 176–180 (2017).

9. Rolls, E. T. Spatial view cells and the representation of place in the primate hippocampus. Hippocampus 9, 467–480 (1999).

10. Hori, E. et al. Place-related neural responses in the monkey hippocampal formation in a virtual space. Hippocampus 15, 991–996 (2005).

11. Foster, D. J. & Wilson, M. A. Reverse replay of behavioural sequences in hippocampal place cells during the awake state. Nature 440, 680–683 (2006).

12. Tsitsiklis, M. et al. Single-neuron representations of spatial targets in humans. Current Biology 30, 245–253 (2020).

13. Brotons-Mas, J. R., Montejo, N., O’Mara, S. M. & Sanchez-Vives, M. V. Stability of subicular place fields across multiple light and dark transitions. European Journal of Neuroscience 32, 648–658 (2010).

14. Andersen, R. A., Snyder, L. H., Li, C.-S. & Stricanne, B. Coordinate transformations in the representation of spatial information. Current Opinion in Neurobiology 3, 171–176 (1993).

15. Wang, C. et al. Egocentric coding of external items in the lateral entorhinal cortex. Science 362, 945–949 (2018).

16. Vale, R. et al. A cortico-collicular circuit for accurate orientation to shelter during escape. bioRxiv (2020).

17. Alexander, A. S. et al. Egocentric boundary vector tuning of the retrosplenial cortex. Science Advances 6, eaaz2322 (2020).

18. Byrne, P., Becker, S. & Burgess, N. Remembering the past and imagining the future: a neural model of spatial memory and imagery. Psychological Review 114, 340 (2007).

19. Hinman, J. R., Chapman, G. W. & Hasselmo, M. E. Neuronal representation of environmental boundaries in egocentric coordinates. Nature Communications 10, 1–8 (2019).

20. O’Keefe, J. & Nadel, L. The Hippocampus as a Cognitive Map (Oxford: Clarendon Press, 1978).

21. Long, X. & Zhang, S.-J. A novel somatosensory spatial navigation system outside the hippocampal formation. bioRxiv, 473090 (2018).

22. Yin, A., Tseng, P. H., Rajangam, S., Lebedev, M. A. & Nicolelis, M. A. Place cell-like activity in the primary sensorimotor and premotor cortex during monkey whole-body navigation. Scientific Reports 8, 1–17 (2018).

23. Esteves, I. M. et al. Spatial Information Encoding across Multiple Neocortical Regions Depends on an Intact Hippocampus. Journal of Neuroscience 41, 307–319 (2020).

24. Franco, L. M. & Goard, M. J. A distributed circuit for associating environmental context to motor choice in retrosplenial cortex. bioRxiv (2020).

25. Chen, T.-W. et al. Ultrasensitive fluorescent proteins for imaging neuronal activity. Nature 499, 295–300 (2013).

26. So, P. T. Two-photon Fluorescence Light Microscopy <http://web.mit.edu/solab/Documents/Assets/So-2PF%20light%20microscopy.pdf> (2002).

27. Juczewski, K., Koussa, J. A., Kesner, A. J., Lee, J. O. & Lovinger, D. M. Stress and behavioral correlates in the head-fixed method: stress measurements, habituation dynamics, locomotion, and motor-skill learning in mice. Scientific Reports 10, 1–19 (2020).

28. Dombeck, D. A., Khabbaz, A. N., Collman, F., Adelman, T. L. & Tank, D. W. Imaging Large-Scale Neural Activity with Cellular Resolution in Awake, Mobile Mice. Neuron 56, 43–57 (2007).

29. Lucy, L. An iterative technique for the rectification of observed distributions. The Astronomical Journal 79, 745 (1974).

30. Richardson, W. H. Bayesian-based iterative method of image restoration. JOSA 62, 55–59 (1972).

31. Shepp, L. A. & Vardi, Y. Maximum Likelihood Reconstruction for Emission Tomography. IEEE Transactions on Medical Imaging 1, 113–122 (1982).

32. Wekselblatt, J. B., Flister, E. D., Piscopo, D. M. & Niell, C. M. Large-scale imaging of cortical dynamics during sensory perception and behavior. Journal of Neurophysiology (2016).

33. Wang, Q. et al. The Allen Mouse Brain Common Coordinate Framework: A 3D Reference Atlas. Cell 181, 936–953.e20 (2020).

34. Sit, K. K. & Goard, M. J. Distributed and retinotopically asymmetric processing of coherent motion in mouse visual cortex. Nature Communications 11, 1–14 (2020).

35. Giovannucci, A. et al. Automated gesture tracking in head-fixed mice. Journal of Neuroscience Methods 300, 184–195 (2018).

36. Niell, C. M. & Stryker, M. P. Modulation of visual responses by behavioral state in mouse visual cortex. Neuron 65, 472–479 (2010).

37. Keller, G. B., Bonhoeffer, T. & Hübener, M. Sensorimotor mismatch signals in primary visual cortex of the behaving mouse. Neuron 74, 809–815 (2012).

38. Musall, S., Kaufman, M. T., Gluf, S. & Churchland, A. K. Movement-related activity dominates cortex during sensory-guided decision making. bioRxiv, 308288 (2018).

39. Shimaoka, D., Harris, K. D. & Carandini, M. Effects of arousal on mouse sensory cortex depend on modality. Cell Reports 22, 3160–3167 (2018).

40. McNaughton, B. L., Battaglia, F. P., Jensen, O., Moser, E. I. & Moser, M.-B. Path integration and the neural basis of the ‘cognitive map’. Nature Reviews Neuroscience 7, 663–678 (2006).

41. Skaggs, W. E., McNaughton, B. L., Gothard, K. M. & Markus, E. J. An information-theoretic approach to deciphering the hippocampal code. In Proceedings of the 5th International Conference on Neural Information Processing Systems, 1030–1037 (1992).

42. Shemesh, O. A. et al. Precision calcium imaging of dense neural populations via a cell-body-targeted calcium indicator. Neuron 107, 470–486 (2020).

43. Vogelstein, J. T. et al. Fast nonnegative deconvolution for spike train inference from population calcium imaging. Journal of Neurophysiology 104, 3691–3704 (2010).

44. Van Wijngaarden, J. B., Babl, S. S. & Ito, H. T. Entorhinal-retrosplenial circuits for allocentric-egocentric transformation of boundary coding. eLife 9, e59816 (2020).