Matt Wilson: Hippocampal memory reactivation in awake and sleep states
Date Posted:
June 7, 2014
Date Recorded:
June 7, 2014
CBMM Speaker(s):
Matt Wilson
Description:
Topics: Role of hippocampus in formation of episodic memory (linkage of events) and spatial memory used in navigation (linkage of spatial locations); both capabilities depend on temporal sequence encoding; place fields emerge for rats on linear tracks; hippocampal place cells; ensemble decoding; decoding sleep reactivation; interaction of asymmetric excitation with oscillatory variation in inhibition to translate space into time, suggested by hippocampal phase precession; hippocampal spatial representations encoded as sequences during behavior
PRESENTER: All right, excellent. We have a full day. [INAUDIBLE] on the computer. But don't worry. Matt is going to use his own computer. So we have a full day today. I'd like to introduce you to several amazing people who are working as part of CBMM. And our shared goal is to try to understand neurons and circuits of neurons and how they implement specific aspects of intelligence.
So as you'll see throughout the day today, we have an amazing opportunity to peek inside the brain-- to go from neurons, and from circuits of neurons, and begin to understand specific aspects of the mechanisms that instantiate computation. So for a lot of the types of problems that we've been discussing over the last several days, we know there is one machine that carries out those processes: our brains.
And the notion is that if we can understand the biological circuits and mechanisms that give rise to answers to questions such as what's there, where is it, what happened before, what will happen next, then we would be able to translate those biological ideas into reality. So neurons and groups of neurons can, on the one hand, constrain and inform our computational models, but also vice versa: we can take ideas from computational models, begin to evaluate some of those ideas, and be presented with solutions to some of these problems.
So I am not going to give a presentation myself. I do want to show one slide, because it is something that will not otherwise be represented today-- work that I already alluded to briefly on Monday, where one of the possibilities that we have is to record invasively from the human brain in patients that have epilepsy. So our lab is very interested, like many of you in the building, in computational models. And we are also interested in the opportunity of recording from the human brain, and therefore bridging some of the questions that are asked in some of the other labs with the mechanistic understanding that we can provide from recordings in animal models.
So very quickly, this is a recording from a neuron in the entorhinal cortex of a patient that has electrodes implanted for clinical reasons. And this is a neuron that has a significant degree of selectivity, responding to certain pictures but not others. These are cluster plots. Each of these points is an action potential obtained after spike sorting data from recordings in which the patient is shown a series of images.
And here there is a surge in activity for one particular category of stimuli with respect to others. In other cases, we record field potentials. And this is the [INAUDIBLE] potential as a function of time. And each of these colors also corresponds to a different category of stimuli. So we can get selective responses from the cortex in the human brain. And this may potentially provide us with a bridge to understand how the mechanistic understanding coming from animal models can relate to some of the work in [INAUDIBLE].
I'm not going to say any more. If you are interested, I'd be happy to talk more about this recording. And without further ado, I'd like to introduce an amazing person in [INAUDIBLE], Matt Wilson, who has been doing beautiful work recording from the hippocampus as well as its interactions with other areas.
MATT WILSON: Great, thanks, [INAUDIBLE]. When I was coming over here, I was reflecting that it was 28 years ago when I first came to MBL and was a TA, like the [INAUDIBLE] TAs back there, in a new course that was being started-- the Methods in Computational Neuroscience course that [INAUDIBLE].
So it was a time of models, of neurons, of developing ideas and theories, and hopefully inspiring the next generation of neuroscientists. And I met a lot of friends-- a lot of colleagues-- people that I know today. And we still remember the times we had here at MBL. I was just talking with Gabrielle. I don't know if they still allow you to build bonfires on the beach. But if they do, I highly recommend it.
So with that, I would like to try and give you an overview of the work that we do on the hippocampus. It will be largely descriptive. But what I hope to convey is the sense that the current methods that we're using, combined with computational perspectives, can give us some insight into the nature of the computation that might be embedded in neural circuits, as evidenced by the dynamics of information as it's expressed in the system-- being able to monitor activity in the brain, see how it evolves, look at the interaction between different brain areas, and get some sense of how information is encoded and how it's actually used and transformed in the service of memory and cognition.
And I'm going to highlight two and a half brain states. There'll be the awake state and the sleep state. And we can think of wakefulness as being broken down into active engagement, where we're perceiving or taking in information, and then introspective reflection, where we're evaluating, planning, or preparing to interact with the world based upon previous information.
And we can interrogate the brains of rats-- if we have a pointer. Here we go. So we interrogate brains of rats. Here you see the structure I'm going to focus on, the hippocampus, shown in cross sectional view in the rat. Here in cross section, you see the classic trisynaptic circuit of the hippocampus. Information coming in from across the brain-- information about the world-- perceptual information-- associational information converging on the adjacent structure, the entorhinal cortex.
This stuff gets funneled into the hippocampus along primary afferent pathways-- the perforant path-- making synaptic connections into the three primary subfields of the hippocampus: the dentate gyrus, area CA3 (cornu ammonis 3), and CA1. CA2 is in there as well. They were actually adept at numbering. They didn't skip it.
There is a CA2. CA2 is an intermediate region, which is actually expanded in humans. That's one of the things that distinguishes it. It's something that has grown disproportionately large in humans. It's relatively small in rodents. So we talk about what that might mean evolutionarily. Why would you add this little-- why would you expand this particular structure. Another comparable structure that has evolutionarily expanded disproportionately is the prefrontal cortex.
So think about expansion of prefrontal structures and of an intermediate subfield of the hippocampus, CA2, as potentially offering some comparative computational insight. Then there's area CA1, and then the subiculum. So I'm going to talk about recordings that have been made in this subfield-- subfield CA1 of the hippocampus. And the reasons for this are really threefold. The first reason, the best reason, is that it's right there on top, which means it's easy to get to. It's accessible.
A slightly less practical reason is that it's on the output side of the hippocampus. And we think about this circuit as forming a loop in which input comes in, again, from the entorhinal cortex, then synapses on the dentate gyrus. The dentate gyrus synapses on CA3, CA3 on CA1, and then CA1 back up to the subiculum.
Now you might say, well, why don't you look at the subiculum, since the subiculum is the actual output of the hippocampus. And the answer is yes, it is. And it would in fact be one of the preferred targets. It's just that it's harder to record from. As we can discuss, the electrophysiological properties of neurons in the subiculum and in area CA1 differ enough that, when I talk about the techniques that I'm going to show you, getting good isolation from individual cells is simply more difficult in the subiculum.
So you get a sense of this already-- and I think it should be confirmed during your physiology work-- that the approach that we use is the combination of two things. One is the pragmatics of doing the recordings: you go where structures are accessible, but you are also informed by functional assessment. And this is an interesting region. One of the reasons this region became interesting was in the early 1970s, when John O'Keefe, working at UCL, dropped electrodes into the hippocampus, recording from individual cells in this accessible part of the dorsal hippocampus of rodents. And what he observed was that individual neurons had these receptive fields: they fired preferentially based on the location of the animal.
So the animal is walking around a little table like this, and as the animal walked around, when it would go to certain locations, different cells would fire. So different cells had different places that they preferred to respond to-- different cells, different places. He termed the cells place cells, and the regions of space that they fired in, place fields. So in the early 1970s, John O'Keefe discovered place fields in area CA1 of the hippocampus.
Previous to that, there had been more cognitive work-- cognitive neuropsychological work-- which had demonstrated the involvement of the hippocampus in forms of memory in rodents and, most notably, in autobiographical or episodic memory-- event memory-- in humans, in the seminal case of patient HM, which I'm sure you're all familiar with. Damage to the hippocampus produced an irreversible anterograde loss in the ability to form new memories.
It also produced a partial retrograde loss in memory. Old memories were lost. But interestingly, really old memories, or remote memories, seemed to be preserved. So you needed the hippocampus to form new memories. You also needed it to access relatively recent past memories. Really remote memories seem to be intact, suggesting there was some kind of transformation of memory-- of information-- requiring the hippocampus over time.
So we'll talk about all of these issues: the encoding of information, the immediate processing of information, and then the mechanisms that might contribute to the long-term transformation or consolidation of information. So, dropping electrodes into CA1, John O'Keefe discovered place cells. But consider the idea of place cells-- individual cells responding to stimuli in the world. It's a familiar concept-- we think of visual receptive fields. We think about conceptual receptive fields in general as the brain represents information as it exists in the world.
But information in the world does not exist in a static form. It changes over time. You could say the one imperative of the brain-- of organisms interacting with an unpredictable world-- is, in fact, to make rational predictions about the future given incomplete knowledge. So how does the brain operate to draw inferences from past experience and to extrapolate-- to make predictions or estimations of future state?
In order to do that, it has to build some kind of internal model which carries information about possible interactions. A came before B. Maybe A caused B. So if A happens again, B is going to happen-- something like that. And that kind of causal inference requires that you actually maintain not all the state information, but the state transitional information-- the probability that, if I'm in state A, I get to state B-- something from which I can infer causality-- something about time.
So we need, in some way, at least to appreciate the time order in which events have occurred, and then hopefully capture that and exploit it at some later point. So one working hypothesis is that processing experience and forming memories of experience involves the temporal linkage of states. In the case of the rodent, these states are spatial locations.
And damage to the hippocampus in rodents produces impairments in spatial navigation, as it does in humans. If I were to go in and lesion your hippocampus, you would not only have difficulty remembering what you did last week; you would have difficulty remembering where you parked your car, or where the bathroom was, or basically being able to move around and navigate the world.
So something links experience and spatial navigation, and the working hypothesis is that this kind of temporal linkage might be a common computation, a common computational imperative, that the hippocampus satisfies: keeping track of things in time. So, the methodology. This is just a device that allows the insertion of fine microwires. This microdrive array is implanted on the animal's head-- make a little hole, place this on top, cement the thing on top. And now the animal has a permanent interface-- a full crown.
It allows us to advance microwires and then take signals off over indefinite periods of time. So this is a chronic, permanent implant, which gives us access to neural activity in the vicinity of these electrodes. Now for some technical details. In this case, you see the electrodes depicted as a tight bundle of four microwires. The principle here is just one of spatial triangulation based upon the spatial attenuation of the electric fields generated by individual cells as they produce these action potential events.
So there are currents flowing in neurons. Those currents produce fields. The fields can be detected by these electrodes. And the relative amplitude of the field can be used to infer the distance of the recording surface from the source. Based on those relative distances, you can figure out, or at least estimate, the relative location of the sources-- and different locations of sources you attribute to different neurons.
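A minimal sketch of that amplitude-based separation, with made-up numbers rather than anything from the actual recording system described here: each spike is summarized by its peak amplitude on the four wires, and spikes with similar amplitude profiles are grouped together, each group treated as one putative unit.

```python
# Sketch: amplitude-based unit separation on a tetrode (illustrative only).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Fake data: three "units", each with a characteristic amplitude profile
# across the four wires (a closer wire sees a larger spike).
profiles = np.array([[120, 80, 40, 30],
                     [50, 140, 90, 35],
                     [30, 45, 70, 160]], dtype=float)
peak_amps = np.vstack([p + rng.normal(0, 8, size=(200, 4)) for p in profiles])

# Cluster spikes by their relative amplitudes across channels; each cluster
# is treated as one putative unit.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(peak_amps)
print(np.bincount(labels))  # number of spikes assigned to each putative unit
```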
So we typically refer to these things as units-- units of signal-generating capacity. These individual units could be neurons. But as we gain deeper insight into the detailed cellular functioning of these more complicated units we call neurons, what we realize is that compartments of neurons-- dendrites-- can function in a very similar capacity to cells: they have the same action potential generative capabilities and can produce these sorts of action potential events independently from the soma.
So you have to think about this in terms of work that [INAUDIBLE] and Tommy [INAUDIBLE], who's back here, had done many years back. This was before that first Methods in Computational Neuroscience course. It was part of the inspiration for that-- the idea that neurons and cells could perform computations, and that these logical computations could then be embedded within larger networks.
So you have networks within networks, and thinking about how neural computation is implemented could potentially give us some novel insight into how the brain actually exploits that. So we can talk about that a little bit. I'll give my talk. I have limited the number of slides because I want to encourage more discussion.
I tend to talk a long time anyway. It's largely because I go off on little tangents. And so I just throw these things out here-- these little hooks-- tangent hooks that you should feel free to grab onto and come back to at any point. So thinking about the nature of dendritic computation [INAUDIBLE]. And I don't know what it's worth, but I'm going to point it out anyway.
So if you think about neurons, you think about them as having dendrites. They have synapses. And the synapses serve to take in information, integrate it, and summate it. You'll have some weight matrix. You'll have some discrete events and a way you add them up. And that's the output.
But those inputs are spatially distributed along the neuron. And one thing you see-- this is a very general principle of organization-- is that inputs coming from different sources will tend to be segregated in different regions of the dendrite, such that inputs coming from the adjacent subfield-- area CA3 into CA1-- will tend to terminate here, in a particular region of the dendrite proximal to the cell body.
The inputs coming from further away, from the entorhinal cortex, terminate a little bit further out. And so there's a general mapping of both the anatomical and potentially the computational distance-- the equivalence of representation-- onto distance along [INAUDIBLE]. What kind of computation might necessitate segregation of these inputs so that you could potentially manipulate them independently?
One thing that allows is again, these local interactions. You can have local, nonlinear interactions. You can amplify. You can perform computations on self-similar inputs. By segregating them, you can independently partition them off and gate off different inputs. And as I'll mention, when we finally get to it, activity in these output structures like CA1 is modulated in time. So there's a dynamic to it.
So the activity changes over time. And one thing that one sees is the presence of these dynamics, which are often manifested in terms of oscillations. So you have brain oscillations. Brain oscillations allow you to do two things. One, they allow you to establish some kind of coordinated timing. You can use them as a clock to coordinate the timing of interactions.
So you have frequency, which you can lock. But you can also shift the relative timing, the phase. So frequency and phase-- you can lock in frequency, you can shift in phase. One thing that you find is that the inputs not only come from different places, and not only are those places organized differentially along the dendritic tree, but in the hippocampus in particular-- and we can discuss other structures as well-- they come in at different phases.
So the relative timing is shifted in a systematic fashion. So you can think about this: inputs coming into different regions of the dendrite, different sources coming in at different relative phases with respect to the output. And when I get to it, I might mention that we've done manipulations using techniques that Boyden is going to highlight-- optogenetics-- optogenetically manipulating activity at certain phases that map onto various discrete cognitive functions, so that you can selectively impact, for instance, encoding and retrieval functions simply by fiddling around with activity at different phases [INAUDIBLE] computations. So, question? Thank you.
AUDIENCE: [INAUDIBLE] you were talking about. How specific is that to the hippocampus? And do we know if that is general for the cortex?
MATT WILSON: Not specific to the hippocampus at all. In fact, you see this very elaborated in the cortex. And, of course, one of the properties of the cortex, distinct from the hippocampus-- we refer to it as the isocortex because of its highly laminated structure-- is that you have multiple layers of cell bodies. The hippocampus is older cortex, archicortex. And it has only one prominent cell body layer.
So there have been proposals-- John Allman's being the most notable-- postulating that, evolutionarily, this isocortical elaboration is really a repetition of a theme that's already manifested in the hippocampus-- that as you duplicate and fold over the hippocampus, you could get the neocortex, the isocortex.
So these associational neocortical areas follow the same basic principles. Another example-- I don't think we have anybody talking about the piriform cortex. I got started in the piriform cortex-- the olfactory cortex-- which is evolutionarily extremely old. There are other vertebrates that don't really have a neocortex.
They have an olfactory cortex and a hippocampus. These are evolutionarily very old. The olfactory cortex has an organization almost identical to the hippocampus-- the same organization of inputs, distant inputs from the olfactory bulb and from different parts of the olfactory cortex terminating in different zones.
Other interesting properties: if you look at the synaptic properties, modulation of transmission and plasticity also varies systematically. That is, there are systematic interactions between different inputs-- neuromodulators will change the way synapses change, and will independently modulate transmission and plasticity.
AUDIENCE: Another question-- when people are talking about subunits-- dendritic subunits-- where they have different nonlinear computations, are these modular things? And if so, are there three modules or 100 modules or do people not really know?
MATT WILSON: Well, yeah. The extent to which dendrites can serve to modularize computation certainly is something that has enjoyed a lot of modeling effort. It's been empirically a little bit difficult to demonstrate. And that's just because, again, just like the hippocampus, you go where it's easy to go.
And it's been difficult to get the measurements that would allow you to determine exactly what the nature of the interactions is-- how modular and how isolated different dendritic compartments are. I believe now the data and the thinking are for increasingly modularized dendritic computation.
AUDIENCE: Can you just elaborate on what that means? What is the--
MATT WILSON: It makes a difference whether [INAUDIBLE]. So, synaptic contacts that are nearby-- where nearby is determined based upon distance in dendritic space-- how far are you from other inputs? And think about that in terms of a cable theory formulation, if you do any neural modeling or biophysics.
But if you just think about the branches of the dendrites as having some electrical distance-- put a current into one dendrite, and there's going to be some resistance to traveling long paths, depending upon the distance and the size of the dendrites involved. So you can think of that resistance as partitioning space-- segregating inputs such that an input at one point on the tree is going to have little influence on an input in another part of the tree.
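A minimal sketch of that passive cable intuition, with illustrative parameter values rather than anything measured: in a passive cable, the steady-state voltage from an input falls off roughly as exp(-x/lambda) with electrotonic distance x, where lambda depends on dendrite diameter and membrane and axial resistivities.

```python
# Sketch: passive attenuation along a dendrite (illustrative parameters).
import numpy as np

def space_constant(d_um, Rm=20e3, Ri=150.0):
    """Space constant (um) of an infinite passive cable of diameter d_um.
    Rm: membrane resistivity (ohm*cm^2), Ri: axial resistivity (ohm*cm)."""
    d_cm = d_um * 1e-4
    lam_cm = np.sqrt((Rm / Ri) * (d_cm / 4.0))
    return lam_cm * 1e4  # convert back to micrometers

lam = space_constant(d_um=1.0)          # a thin distal dendrite
x = np.array([0, 50, 100, 200, 400.0])  # distance from the synapse (um)
attenuation = np.exp(-x / lam)          # fraction of the input voltage remaining
print(f"lambda ~ {lam:.0f} um")
print(dict(zip(x, attenuation.round(2))))
```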
And that proximity is therefore going to translate into some probability of interaction. But that distance can be spanned either by making things physically closer or by making them electrically closer. And how would you make them electrically closer-- make it easier for currents to travel? One strategy that neurons employ in order to do that is nonlinear amplification, as you see here evidenced in the generation of action potentials.
That is, rather than just relying on passive propagation, you actively amplify. And that amplification allows you to get further-- active amplification is something that's used as a communication strategy everywhere, including in the dendritic tree. And so you can think of the expression of active properties in dendrites-- these nonlinear dendrites-- as indicating a need or desire to increase, let's say, the modular size.
You want everybody to be able to talk. It's like the distance-- of the shortest distance-- two degrees of separation in an active dendrite-- three or four in a more passive dendrite. So I mentioned one of the differences between CA1 and the next station down, the subiculum-- the subiculum has much more active dendritic processing.
That is, there's active bursting that goes on in dendrites just as in cell bodies-- that's what neurons do. They integrate and they fire. Dendrites do the same thing. And the subiculum is highly active. So the subiculum you can think of as more modularly integrated; with less active processing, you get segregation into smaller modules, because it's just harder to communicate.
AUDIENCE: Do you mean that they actually have an axon hillock thing where they have--
MATT WILSON: The same active conductances-- the voltage-dependent sodium channels that allow action potentials to be generated at, for instance, the axon hillock-- are present in the dendrites. The dendrites can generate action potentials and can propagate action potentials from one point of the dendrite to another point of the dendrite. Most of these propagated action potentials never have to make it down to the soma.
Now, there are strategies for directing and throttling that communication. And one of those strategies is to strategically place inhibition along the dendrite. So this is one thing that also rarely comes up-- you think of inhibition as just a minus sign, the converse of excitation, something that complements [INAUDIBLE]. But inhibition is organized strategically in a very different way. Inhibition, by and large-- unless you go around and screw around with the genetics of the organism-- does not terminate on synaptic spines, so it is segregated from the excitatory inputs.
The spine itself you could think of as the finest modular granularity. It is what allows single inputs-- single synapses-- to be processed as modules. So inhibition tends to come in either at the base of spines or on the primary dendritic shafts-- that is, for instance, like here, or adjacent to small dendrites. So inhibition has the ability to potentially throttle, or segregate, or regulate the nature of modular interaction. So again, you can think about it like traffic lights, green and red. You can direct traffic by regulating inhibition at strategic throttle points.
And one of the most ubiquitous throttle points across all neurons is here, at the interface between the dendrite and the soma-- this sort of proximal region. This is where you find a whole class of inhibitory interneurons, the perisomatic inhibitory interneurons, which I might mention. They've attracted a lot of attention, A, because they have these very characteristic termination profiles: you'll see very powerful synapses that surround the cell body, in particular the regions just at the base of these proximal dendrites. And the effect that they have is to create large conductance pathways that will effectively shunt out any excitatory current.
So again, they have the ability, as a throttle, to direct communication-- to allow lots of traffic to flow here, or to completely cut it off here. It's like the Bourne Bridge of neurons, right? Block the Bourne Bridge, and nobody's getting to the Cape. This is all the rest of Massachusetts, and nobody's getting to the Cape-- just one point.
So this kind of strategic throttling point tells us that there is very likely computation that requires that we segregate input processing and output processing, and that this kind of input processing utilizes the same mechanisms for bridging distances through carefully choreographed interactions. So what you see in networks you again see in dendrites: segregated, throttled interactions between different units. Out there we're talking about units-- cells connected by axons. Here, we're talking about [INAUDIBLE], which are dendritic modules separated through inhibition in a similar fashion. That was a pretty long segue for the [INAUDIBLE].
So here, you see an illustration of the raw data that would come from the detection of these action potential discharge events. They're generated both from cell bodies, but can also come from dendrites. So this technique does not allow us to distinguish what the precise source is. It just lets us say the location of the source differs, so we're going to call these things different units.
And if we now take those isolated units-- action potentials generated from different spatially localized sources-- we can map their occurrences onto space and just accumulate that over time as an animal runs along a maze like this little track. It's about a two-meter track. The animal gets food rewards at each end. So it's running back and forth along this track, back and forth.
And here, the color coding indicates the identity of the isolated unit. So the yellow unit [INAUDIBLE] like it came from one zone, one location in nearby anatomical space. So we think of this as a cell. And it tends to fire here, in this part of the maze. The red cell fires on this part of the track, and so on over here.
So again, this is just a demonstration of what John O'Keefe discovered: the spatial firing preference, the place fields, of these cells. One thing about these place cells is that basically wherever you record in the hippocampus, you find cells that show this kind of spatial preference. And the estimates are that 95% or better of the primary [INAUDIBLE] cells of the hippocampus have some kind of spatial specificity. At any given time, only about 3% to 5% of them are active. And this is consistent with a general estimate of sparsity. That is, at any given time, the hippocampus is using about 3% to 5% of its available resources.
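A minimal sketch of how a place field like this can be mapped from data: accumulate the spikes of one cell as a function of the animal's position and normalize by the time spent in each spatial bin. The variable names, bin size, and toy tracking data are illustrative, not from the recordings shown here.

```python
# Sketch: occupancy-normalized place field on a ~2 m linear track.
import numpy as np

def place_field_1d(spike_times, pos_times, pos_x, n_bins=50, track_len=200.0):
    """Firing rate (Hz) of one cell as a function of position (cm)."""
    edges = np.linspace(0.0, track_len, n_bins + 1)
    dt = np.median(np.diff(pos_times))              # position sampling interval
    occupancy, _ = np.histogram(pos_x, bins=edges)  # tracking samples per bin
    occupancy_sec = occupancy * dt                  # seconds spent in each bin

    # Animal's position at each spike time (interpolated from tracking).
    spike_x = np.interp(spike_times, pos_times, pos_x)
    spike_counts, _ = np.histogram(spike_x, bins=edges)

    with np.errstate(invalid="ignore", divide="ignore"):
        rate = spike_counts / occupancy_sec         # spikes per second in bin
    return edges, np.nan_to_num(rate)

# Toy usage: a cell with a field around 120 cm on back-and-forth runs.
pos_times = np.arange(0, 600, 0.033)                     # ~30 Hz tracking, 10 min
pos_x = 100 + 100 * np.sin(2 * np.pi * pos_times / 60)   # runs spanning 0-200 cm
rng = np.random.default_rng(1)
spike_times = pos_times[(np.abs(pos_x - 120) < 10) & (rng.random(pos_x.size) < 0.3)]
edges, rate = place_field_1d(spike_times, pos_times, pos_x)
print(f"peak rate {rate.max():.1f} Hz near {edges[rate.argmax()]:.0f} cm")
```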
Now of course, if the animal is in different locations, it's a different 3% to 5%. So overall, if you asked how many hippocampal neurons are going to participate in-- I'll loosely call it the representation; it's just the activity associated with this experience or this maze-- it's on the order of about 30%. It may vary from 30% to 50%, but generally, the hippocampus is using about a third of all its neuronal capacity just for this one little track.
Put the animal onto a different track like this, and it's going to be another 30%-- not necessarily an independent 30%. It's just like another random draw. You reach into the hippocampal grab bag of neurons, you pull out a handful-- 30% of the neurons-- and throw them out onto the environment. That's what you get. Move to a different room, same thing. You pull them out and throw them down wherever they land. So the location where this yellow cell fires, if I put this animal on a different track, is essentially independent of the location of its place field on this particular track.
There are games you can play by fiddling with environments and essentially trying to create cognitive conflicts-- the tracks look the same, the animal does the same sorts of things. So independence does not mean that there isn't going to be any correlation between the firing if you make the environments themselves correlated. It's just that the cells themselves, left to their own devices, are primed to maximally orthogonalize representations across different environments.
Things that are shared might show up as similarity, but in general, you can't predict where one cell is going to fire by knowing where it fired somewhere else. And two adjacent cells are not necessarily going to fire in a self-similar fashion. So anatomical location does not actually map onto similarity or equivalence of the receptive field properties.
This is very different from other structures, where you find representations that map-- that have a topography-- that is, adjacency in stimulus space is mapped into similarity or adjacency in representational space. Cells that are nearby in the somatosensory cortex will tend to respond to nearby locations on the body surface. And those nearby locations are not necessarily anatomically nearby; they can be functionally nearby-- the hand, the mouth, something nearby. But they point to other adjacencies, and you can figure out what those actually mean.
But the idea is that the proximity or adjacency reflects some dimension of the computation-- trying to extract some statistical regularity. And that statistical regularity, in very predictable inputs like the body surface-- my body, your body, the bodies of generations to come-- is always going to be the same. So why not embed it in the system? Why not embed it in the circuit? The hippocampus doesn't have those topographic regularities.
The reason is: my experience, your experience, the experience of generations to come, my experience tomorrow-- it's unpredictable. The statistical regularities cannot be embedded in the circuit. They have to be learned. So the strategy is, rather than introduce a systematic bias, which is going to be wrong-- systematic error is the worst kind of error-- it is better to just make random errors. So that's what the hippocampus has done. It suggests, again, that it's trying to discover statistical regularities that are not manifest or obvious in the inputs themselves. There's a question? Yeah.
AUDIENCE: So once a given environment is learned, how constant is the representation of a certain area in a neuron?
MATT WILSON: Right. So, looking at the stability of that-- that is, I take the animal out, I bring it back in, and I look at these patterns. What you'll find, by and large, is that the cells will fire and the location of the place fields will remain constant. Now the precise firing rate-- the number of spikes that are emitted-- that varies. That varies even over the time scale of seconds to minutes. When we get to it, you can think of there actually being at least two basic representational dimensions. And you can think of this as a computational imperative that the hippocampus has: I have to form memories of events.
One thing is that as events occur, I have to remember where they occur. I come back to Woods Hole, I think about MBL. I think of the course. I think of the beach. I think of all the locales, the places where things happened. So those places have not changed. In 25 years, those places have not changed. I walked by the Captain Kidd. Everything's the same. There's some new little funky-- I don't know what the heck they're selling, hats or something-- across the street. That was not there. I have no idea what's going on there.
So I'm able to identify let's say similarity equivalence to recognize static context and deviations from that context. I can recognize those things. And those things don't change. But the experience, things that happen in there, my experience just going by the Kidd is very different from my experience, my remembered experience of things that happened in the Kidd.
So I have these two things: the constancy of static contextual information, and the non-stationarity of time-varying and independent experiential information. Something's got to stay the same. Something's got to change on a moment-to-moment timescale. The thing that stays the same is where the cells fire. So the cell's relative location-- the probability of a cell emitting spikes as a function of location-- that stays constant.
I bring this animal back in here-- and this has been looked at in rats over a period of months. I bring the rat back in today, in a week, a month, two months, three months: the cells fire in the same location. Again, if you look at the number of spikes-- the firing rate-- that fluctuates. And often, it fluctuates in a very systematic fashion. That is, over the course of, let's say, this 10-minute training session, the firing rate of the cell might go from a peak of maybe 10 spikes per second-- which, we'll see, is consistent with an internal rhythm that we have-- to maybe 5. So there will be systematic shifts. You could think of those as drifting non-stationarities.

And the drifting non-stationarity is interesting, because what it means is that if you look at a population vector, the components of the vector would be constant, but the actual values would change. And they would change in a systematic way, so that you could even infer-- if you give me two vectors, I could look at the effective distance between them as a measure of relative time-- how far apart they are-- just based upon the relative variation of the independent components. Question?
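A minimal sketch of that population-vector idea, with purely illustrative numbers: the set of participating cells stays fixed, their rates drift slowly, and the distance between two vectors then gives a rough proxy for how far apart in time they were recorded.

```python
# Sketch: slow rate drift makes population-vector distance track elapsed time.
import numpy as np

rng = np.random.default_rng(2)
n_cells, n_laps = 50, 30
base = rng.uniform(2, 10, n_cells)             # stable place-field peak rates
drift = rng.normal(0, 0.05, n_cells)           # slow per-lap rate drift
laps = np.array([base + drift * k + rng.normal(0, 0.2, n_cells)
                 for k in range(n_laps)])      # (laps x cells) rate vectors

def vector_distance(a, b):
    return np.linalg.norm(a - b)

# Distance grows roughly with lap separation even though the same cells fire.
print(vector_distance(laps[0], laps[1]))   # nearby in time: small
print(vector_distance(laps[0], laps[29]))  # far apart in time: larger
```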
AUDIENCE: You said about 30% of the neurons participate. How does it scale with the size of the map and would some
MATT WILSON: It doesn't seem to scale in any significant way-- and the same with the size of the place fields. So if I take an animal and I put it into a smaller version of an environment, what you find is that you can still get roughly the same number of place cells. But what can change-- both in the number and in the relative independence of the firing-- is driven by the way the animal behaves in that environment: the behavior determines how segregated the fields become.
And the clearest demonstration of that comes when you compare the activity of these cells when the animal is just wandering around in an environment versus an environment like this, where it's following circumscribed paths. That is, it's going through these locations in these two sequences: there's this sequence and there's this sequence. Those two sequences are really independent.
Topologically, you could fold this out, and it really looks like a linear maze. You can think of it as just kind of a large circuit: this trajectory and this trajectory. For instance, this location on these two trajectories-- these are the two farthest points. That is, they are the least likely to co-occur. You can't take them--
And so when you look at these things, these cells not only fire as a function of location, but they become what people would call directionally dependent. It's not only where the animal is, but the direction as well. So for instance, the green and the blue cell-- these actually don't fire together, even though they look like they fire in the same place. The blue cell fires, in this case, when the animal is going on the outbound trajectory. The green cell fires when it's on the inbound trajectory. Those two things are separate.
AUDIENCE: So you can estimate the behavior of an animal in a very large environment based on some kind of a capacity that's told by the--
MATT WILSON: You could, in fact, just by looking at patterns of activity, infer something about, A, the dimensionality of the space, and something about the complexity of the behavior. In fact, I took this out of the talk because I knew I was going to talk too long. But we've actually done this. We published a paper recently with one of our former CBMM postdocs, [INAUDIBLE] Chen. She just got a position down at NYU. So she did this analysis. This is just doing an unbiased hidden Markov model analysis of spiking activity. So I just give you all these spikes-- a bunch of raw spikes from hundreds of neurons. So what does it mean? You say, what do you mean, what does it mean? I don't know. They're just spikes. I don't tell you anything about this other than that the animal is doing something.
So what you could do is you just walk through this. Now, you're just imagining. Imagine you're a downstream unbiased observer, and all you know is that there's going to be some kind of consistency. There's going to be some relationship between where the animal is and what it's doing and the patterns that you see. They're not random, which means there are patterns that are going to be recurring.
It's a cryptographic exercise. There are patterns that are going to recur. And those patterns are going to be constrained in some way by the topology of the space. And there's this time-varying component: two patterns that are nearby one another probably are coming from states that were nearby each other in the world. So you just come through and say, I'm going to assume that these spike trains were generated by some sort of Markov-like process. I can describe it using a Markov model.
So I go through, identify the states, and compute the transition probabilities between the various states-- essentially do some kind of dimensionality reduction. I build the transition matrix and look at the transitions. And then I can infer-- I say, well, the actual dimensionality of this is much lower than I expected, because these states actually have some connectivity.
You can infer, well, it comes from a one- or two-dimensional environment-- whatever the one- or two-dimensional space is in which they were generated. And you can go back in and then reconstruct everything. So just given-- this is [INAUDIBLE]. Just given [INAUDIBLE], make this Markov assumption. You go back, you identify the states, you reconstruct the place fields, and you can reconstruct the behavior-- with nothing. It's like your camera's broken. And so what it tells you is that the neurons are actually capturing enough information that a blind observer, just looking at the output of the hippocampus, can figure out the nature of the environment, reconstruct the behavior, and make a lot of inferences about what's actually going on.
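A minimal sketch of that unsupervised idea: bin the population spiking, fit a hidden Markov model, and inspect the learned transition matrix for low-dimensional, track-like structure. This uses hmmlearn's GaussianHMM on square-root-transformed counts as a rough stand-in for the Poisson-spiking model described here; the bin size, state count, and toy data are all illustrative.

```python
# Sketch: blind HMM analysis of binned population spike counts.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def fit_spike_hmm(spike_counts, n_states=20, seed=0):
    """spike_counts: (n_timebins x n_cells) array of binned spike counts."""
    X = np.sqrt(spike_counts)                       # variance-stabilize counts
    model = GaussianHMM(n_components=n_states, covariance_type="diag",
                        n_iter=100, random_state=seed)
    model.fit(X)
    states = model.predict(X)                       # decoded latent state per bin
    return model.transmat_, states

# Toy usage with random counts; real data would be on the order of 100 cells.
rng = np.random.default_rng(0)
counts = rng.poisson(1.0, size=(2000, 100))
transmat, states = fit_spike_hmm(counts)
# Sparse, near-band-diagonal structure in transmat (after sorting the states)
# would indicate that the latent states form a low-dimensional, track-like graph.
print(transmat.shape, len(np.unique(states)))
```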
So that's kind of an important property, because unless there's some kind of homunculus that we don't know about, brain areas are kind of on their own. They need to be able to carry out this sort of blind inference based on activity, and then perhaps some sort of prior assumption about structure. And that could be something [INAUDIBLE], like in the somatosensory areas. But here in the hippocampus, you don't have that. All you know is that things that are nearby in time likely came from states that are nearby-- in this case, spatial states. So that's the one [INAUDIBLE] assumption. You map time into space, and then you figure out what is the space that would give you this kind of temporal trajectory among these states.
AUDIENCE: Question?
MATT WILSON: Yeah.
AUDIENCE: All right, just to make sure I agree with you. Are these place fields, is this-- can you get these with a mouse running through the maze for the first time?
MATT WILSON: Yes.
AUDIENCE: Do--
MATT WILSON: And so we've looked at the dynamics of that. And you see that there's a big jump between the first time and the second time. So by the second time, what you find is the robustness of firing-- where robustness is the consistency with which the cell emits spikes as a function of location-- and the consistency of the population, if you look at the covariance across cells. So there's going to be-- as I say, at any given time, like 3% to 5% of the cells fire. So that's 3% to 5% of the hippocampal cells here. Area CA1-- let's say this is 100,000 neurons.
So what that means is there are 3,000 to 5,000 cells firing at this point. It's a unique pattern across 3,000 to 5,000 cells. And yeah, you're going to get to the capacity issue and ask, what's the ultimate capacity of the hippocampus? Well, you say, it's 100,000 choose 5,000, which is a pretty big number. But if you think about that pattern of 3,000 to 5,000 cells-- the first time you go through, you see some cells firing. The second time you go through, the question is how consistently you see that 3,000-to-5,000-cell vector being engaged on a moment-to-moment basis.
Now we have to define what a moment is. What's a moment in the case of the hippocampus? We'll see that a moment is in fact a very well-defined time scale. It's about 100 milliseconds-- the period of the primary rhythm within the hippocampus, the theta rhythm, something that paces activity and also segregates it into these 100-millisecond windows. So if you compute the covariance across adjacent 100-millisecond cycles, the covariance is going to give you an estimate-- actually a pretty reasonable estimate-- of relative novelty or familiarity. Again, it's like I can do this as a blind observer. Just take those spike patterns. If I could walk through these spike patterns and their dynamics, I could say, oh, looking at the covariance across adjacent [INAUDIBLE], there's a lot of consistency. The animal must have been in this place before. In fact, I could probably estimate how many times. Ah, the animal's been in this environment seven times.
Just by looking-- if I could sample from enough cells-- I could estimate both the relative novelty and familiarity, and possibly something about the temporal distance-- that is, the relative familiarity, how long ago it actually occurred-- just by looking at the statistics of correlation across the cells, without even knowing what they represent at this point. I just say, look, they tend to fire pretty consistently; the animal's got to be familiar with the [INAUDIBLE]. There's a lot of variability; this must be something new.
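A minimal sketch of that covariance-as-familiarity idea: bin the population spiking into ~100 ms (theta-cycle) windows and measure how well each population vector correlates with the next one. High, stable correlation suggests a familiar, well-learned pattern; low correlation suggests novelty. The bin size and the toy data here are illustrative.

```python
# Sketch: adjacent-cycle population correlation as a novelty/familiarity signal.
import numpy as np

def adjacent_cycle_similarity(spike_counts):
    """spike_counts: (n_cycles x n_cells) counts per ~100 ms theta cycle.
    Returns the correlation between each pair of adjacent cycle vectors."""
    sims = []
    for a, b in zip(spike_counts[:-1], spike_counts[1:]):
        if a.std() > 0 and b.std() > 0:
            sims.append(np.corrcoef(a, b)[0, 1])
    return np.array(sims)

rng = np.random.default_rng(3)
template = rng.poisson(3.0, 100)                         # a "familiar" firing pattern
familiar = rng.poisson(np.tile(template, (200, 1)))      # consistent across cycles
novel = rng.poisson(3.0, size=(200, 100))                # no shared structure
print("familiar:", adjacent_cycle_similarity(familiar).mean().round(2))
print("novel:   ", adjacent_cycle_similarity(novel).mean().round(2))
```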
And if you're a downstream structure-- and there are a lot of downstream structures that need to be able to detect novelty-- you ask, well, how could they detect novelty? One thing you could say is, we detect novelty by doing what I did. I walked by the Kidd, there was a little hat-- whatever that trinket stand is that's there. I don't remember seeing that trinket stand. So it's a very specific detection of novelty: I can say this is different from that.
Now maybe I'm a downstream structure, and that's how I figure out something has changed. But there are other structures. In particular, a lot of sort of general neuromodulatory structures, they don't have a lot of neurons. They have to regulate broad state. When there's arousal, they have to direct attention. How do I direct attention to something. How do I direct my attention to that little hat stand that I didn't recognize. Well you could say well, at first I didn't notice it's a hat stand that I didn't recognize, so I didn't direct attention to it. Well, you just need to direct attention to it in order to figure out it's a hat stand that you didn't recognize.
So there's something nonspecific that says, oh, something has changed. And the signal of change would be something like a covariance signal. The hippocampus says, man, this just doesn't line up. I don't know what it is, but let's direct attention and resources to figure out precisely what the nature of that is. So again, the use of nonspecific novelty or familiarity signals that could be derived from simple [INAUDIBLE] covariance measures is, I think, probably a general strategy that's used to direct resources in this sort of memory-to-perception computation. Other questions? That was a good question-- the novelty, how many times. And the difference between one, two, three, five times-- it's in the relative firing rate. Relative firing rate [INAUDIBLE].
The location, though-- once you've seen it the first time through, that's where it tends to stick. And the thing that changes, again, is the firing rate, the firing rate indicating some non-stationarity that comes from different degrees of relative experience.
AUDIENCE: What do you [INAUDIBLE].
MATT WILSON: So-- yeah. So now you say, OK, I've got this great little toy. I can start playing games. I can start changing the cues. I can start doing things to see what actually drives these cells. And what you find is that changes to an environment that don't change, say, the animal's cognitive sense of place-- rearranging the furniture, changing cues-- cause a change in activity in what's referred to as rate remapping in the current nomenclature. That is, it changes the relative firing of these cells.
Global remapping, a change in the location of the place fields, generally comes when you actually physically change the environment. Or it can also come when you change the behavior in a given environment. So doing different things in a common space-- you could say that is actually a determinant of what we can think of as context. So if you think about context-- that thing that allows us to recognize the consistency across repeated presentations, sometimes referred to as reference memory-- it's the kind of memory I can refer back to and say yes, this is the same as it was yesterday and the day before.
That constancy is not only defined based on cue configurations-- that is, recognition that everything's the same-- it's also recognition that I'm going to do the same thing in that environment. So changing the behavior in a similar environment will induce global remapping, a change in the actual firing probability as a function of location. Again, this directionality is the most straightforward demonstration of that. And that is that the cell-- if you think about these vectors-- which cells fire at this location.
Which cells fire at this location, those patterns will change based upon the animal's behavior. If he starts going back and forth in these-- again in these sort of repetitive independent trajectories, then the patterns [INAUDIBLE]. Other examples of that, if an animal follows-- it's like if-- it's like you get to an intersection. Like you're driving home, and there's two ways you can go. You can go right and go left. Right gets you to the 711, left gets you to-- I don't know where you go to. Starbucks, let's say.
So you can think of your context, the context could be oh, I want a Big Gulp. I'm really thirsty. So now the context is [INAUDIBLE] right. So what'll happen is you'll have a different set of hippocampal cells that fire based on the context, which is you're thirsty, want to satisfy your thirst. And so you'll see place cells that will differ along that right [INAUDIBLE] trajectory.
On the other hand, another context might be, wow, I'm really tired. I stayed up all night [INAUDIBLE]. I need a coffee. And I really do need a coffee. Man, I could use a coffee-- the coffee place cells are really going up. So the behavioral demands constitute, you could say, an important determinant of context, and those are reflected, again, in global remapping.
A change in the activity of the hippocampal cells, which tells you that the hippocampus is not just about remembering things. It's about remembering things that are going to have important causal consequences-- that is, the need to be able to predict or guide future behavior that might be independent. So if I need to do different things, then I need to represent things independently so that I can actually establish that. So creating different behavioral contexts can change the [INAUDIBLE] cells in this global remapping sense.
AUDIENCE: [INAUDIBLE].
MATT WILSON: Yeah.
AUDIENCE: When you change the environment, is the [INAUDIBLE] the same [INAUDIBLE]?
MATT WILSON: If I just make changes to the environment that don't influence the behavior and don't fool the animal into thinking it's a different space-- whereas being in an actually different space, like if we went to another room, that's like an automatic. If I am someplace different and I'm going to have to do different things, it automatically becomes a different [INAUDIBLE] context.
Now I can play these games where I change the room but I make the maze the same. And what you find is that the hippocampal cells can actually map into these embedded reference frames. That is, there's going to be a reference frame for the maze. So I could take this maze, for instance, and shift it. I could move it within the room. It's like you're sitting at your table, and I just move it-- I take Max, I just shift his table back. And I ask, what are his place cells going to be doing? Well, you'd say, Max now has to simultaneously know, I'm still in my same spot, I'm still sitting here in front of my computer-- that context has not changed-- but I'm also in a different place in the room.
And we all have that. I know where I am. I'm at the front of this room. But I also know I'm in the [INAUDIBLE] house, and I know I'm [INAUDIBLE]. So there's this notion of nested spatial reference frames. And the hippocampal cells will do the same thing: a fraction of the cells are bound to different elements of these nested reference frames. Which in part helps us at least begin to try to understand why you need 3,000 to 5,000 cells to fire in that one spot, when in fact, if you actually crunch the numbers, an optimal decoder can decode position down to the finest resolution-- that is, the psychophysical resolution of the animal, which in the case of rats is about 2 centimeters.
That's the idea that if I'm in a large room and I have a bunch of little holes and I hide food in these little holes, I ask, well, how close can I put these holes so that the animal can still remember, oh, it's in that hole versus that hole? And the answer is 2 centimeters-- 2-centimeter holes. I can decode at that resolution with about 100 cells. With 100 place cells firing this way, I get 2-centimeter resolution at a 100-millisecond time scale. So again, that's the behavioral, the computational, window of integration.
So essentially, if all the hippocampus is doing is trying to convey information about where the animal is, 100 cells should be enough. But I've got 3,000 to 5,000. So what am I doing with all of those other cells? Well, one thing you could say is, it's not just static recognition-- there's this [INAUDIBLE] nested thing. And that is that if I shift, some allocation of those cells might be to these nested reference frames.
Then the other thing is that there are things that are going to change. The non-stationarity of experience itself-- the time-varying nature of it-- is also bound to these different reference frames. In other words, there's going to be a different experiential time scale for the different contexts. There's the experiential time scale of, say, my talk, at [INAUDIBLE] granularity. So you could say that's being captured at one resolution, but at the same time there's also the resolution of longer time scale experience, your experience walking through space. There are things that are going on at different time scales.
And so you can think of those events that are going on at different time scales also mapping into different-- if you want to think of them as kind of segregated or modular-- subsets of neurons. So given those different experiential time scales, different spatial scales, and nested reference frames, you could see how you might want to split these things up-- break up the populations.
Now I'll just leave this as kind of an exercise. I haven't really gotten to the interesting stuff. This is just the-- that's why I didn't put in more slides. I knew I would never get to all of that. But if you think about OK, if I wanted to actually-- if I needed to simultaneously represent all that stuff, so you think I've got this vector, it's got all this stuff in it. It's got the nested reference frames, got different experiential time scales. But now I need to be able to pull that out. How do I pull that out? How do I tap the different components of all these embedded representations. What would be a strategy.
Just think-- you can think about communication channels. How would you actually do it? Well, just think about it. There are really two kinds of strategies. One of them is like space and time. I could just [INAUDIBLE] label [INAUDIBLE]. I say, well, I can group them together: these are the local, these are the remote, this is the fine time scale, this is the coarse time scale. [INAUDIBLE]. I could segregate them, and I can identify them based on their location.
Another strategy-- the [INAUDIBLE] strategy in my mind-- is you could do it in time. That is, you use the ability to segregate activity in time. You use what's called phase coding. So you do phase segregation, and you use differences in [INAUDIBLE] phase to potentially tag, to segregate, different populations. Now the beauty of that is that phase-- relative timing-- is something that can be dynamically adjusted.
Anatomy, that's kind of fixed. Again, the somatotopy of my body surface-- I can't change that. That's hard-wired. But I might be able to change things dynamically. Let's say I have a tic and I keep scratching my head, establishing a relationship between my head and my hand-- something that I could establish if I had some control over the timing. Head, hand, their phase-- I could simply align these phases. Or, the preferred strategy, I just distribute the phases, distribute all of that stuff, and then downstream I have something that selects out: OK, I'm going to bring together head phase and hand phase.
So using phase, using relative timing, is a way of establishing dynamic connectivity. Certainly it's a topic that's enjoyed a lot of current computational attention, as a role for oscillations. And I'm going to show you some data that points out how these dynamics and oscillations further serve to confer upon the hippocampus the ability to [INAUDIBLE] information over time. So it's used as a computational tool, both for encoding and for dynamic retrieval-- pulling things out [INAUDIBLE], putting them into phase and then pulling them out [INAUDIBLE]. What's our-- 10:20? Is that our?
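A minimal sketch of that phase-tagging idea, with an idealized 10-hertz rhythm and illustrative phase windows rather than anything measured: each spike gets assigned a theta phase, and a downstream "reader" can select just the spikes falling in a particular phase window.

```python
# Sketch: tagging and selecting spike subpopulations by theta phase.
import numpy as np

THETA_HZ = 10.0   # ~100 ms period, matching the rhythm described here

def theta_phase(spike_times, theta_hz=THETA_HZ):
    """Phase (0..2*pi) of each spike relative to an idealized theta oscillation."""
    return (2 * np.pi * theta_hz * spike_times) % (2 * np.pi)

def select_by_phase(spike_times, lo, hi):
    """Keep only spikes whose theta phase falls inside [lo, hi)."""
    ph = theta_phase(spike_times)
    return spike_times[(ph >= lo) & (ph < hi)]

rng = np.random.default_rng(4)
spikes = np.sort(rng.uniform(0, 10, 500))          # 10 s of toy spike times
early = select_by_phase(spikes, 0, np.pi)          # one "tagged" subpopulation
late = select_by_phase(spikes, np.pi, 2 * np.pi)   # another
print(len(early), len(late))
```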
AUDIENCE: It's [INAUDIBLE].
MATT WILSON: So I'm just going to blow through this. These are the place cells. This is a little movie. You don't even have to see this movie; this is just showing you what I showed you. I think I'll show you-- not this one. I will show you this. So you see those raw-- I'll show you the raw place cells. So this is the activity. And it's silent, [INAUDIBLE]. But if you did hear the sound-- I don't think [INAUDIBLE]. If you could hear the sound, you'd be able to hear some modulation. It's a rhythmic modulation [INAUDIBLE]. There you go.
So what you're seeing over here, this is the raw activity. This is what I showed you before. These are the individual spikes being mapped into this amplitude space-- that is, the amplitude of the action potentials-- which gives you a sense of where the unit sources are coming from. Can you hear that kind of rhythm? You know, that ch-ch-ch-ch-ch-ch. Wow, it's like really strong. Are you deaf? It's something that you really pick up. I mean with experience, it just immediately jumps out. Now that rhythm is expressed when animals are moving.
So I can tell the animal's moving-- he's running right now. And then as soon as it stops, the rhythm will go away, and it's replaced by this kind of bursting activity. So you can tell what the animal's doing just based upon brain dynamics. And what I've shown you there-- so you're seeing the raw activity here. What we were doing here is just a simple Bayesian decoding. We map out the place fields, so we've got our place fields for our posterior estimates. And we just go through on about a 100-millisecond time scale and estimate the likely location the animal would have to be in in order to see the pattern of activity that we see in that given time [INAUDIBLE].
And the triangles reflect the probability estimate. The larger the triangle, the higher the likelihood that the animal is actually in that location. So you see it tracks the animal pretty well when the animal's actually running. When the animal stops, though, something else happens. See, he's not running [INAUDIBLE]. When he stops, the triangle goes away and the estimate ceases to reflect the animal's current location. Now the estimate jumps all over the maze, which tells us the pattern that's being expressed is a pattern that was correlated with different locations. So you see it jumping along [INAUDIBLE].
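As a hedged sketch of the kind of Bayesian decoding just described, rather than the lab's actual pipeline, the short Python example below assumes independent Poisson place cells with made-up Gaussian tuning curves and a flat spatial prior, decodes a single 100-millisecond bin of spike counts, and reports the most probable position. Every name and number in it is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_pos = 50, 200
positions = np.linspace(0.0, 2.0, n_pos)            # candidate positions on a 2 m track
centers = rng.uniform(0.0, 2.0, n_cells)            # made-up place-field centers
width, peak_rate, tau = 0.15, 15.0, 0.1             # field width (m), peak rate (Hz), 100 ms bin

# Tuning curves f_i(x): expected firing rate of cell i at each candidate position.
fields = peak_rate * np.exp(-0.5 * ((positions[None, :] - centers[:, None]) / width) ** 2)

def decode(spike_counts):
    """Posterior over position for one bin, assuming independent Poisson cells and a
    flat prior: P(x | n) proportional to prod_i f_i(x)**n_i * exp(-tau * sum_i f_i(x))."""
    log_post = spike_counts @ np.log(fields + 1e-12) - tau * fields.sum(axis=0)
    log_post -= log_post.max()                      # numerical stability
    post = np.exp(log_post)
    return post / post.sum()

# Simulate one 100 ms bin with the animal at 0.7 m, then decode it back.
true_x = 0.7
rates = peak_rate * np.exp(-0.5 * ((true_x - centers) / width) ** 2)
counts = rng.poisson(rates * tau)
posterior = decode(counts)
print("decoded position:", round(float(positions[posterior.argmax()]), 2), "true:", true_x)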
You'll see later on, he actually--
AUDIENCE: Hey, Matt? I remember there was some other work that I think you did that [? interpreted ?] that that sort of stuff is, like, planning--
MATT WILSON: Right. So when you slow it down, what you're seeing is that it's expressing sequences or trajectories-- spatial trajectories-- in both the forward and reverse direction, along paths that the animal has taken or could take. So the question is: is it reflecting on previous paths, or is it trying to extrapolate future paths? And we'll see that it can be a little bit of both. Questions?
AUDIENCE: I had a quick question with respect to the idea that we're using basically phase to pick out information?
MATT WILSON: Well, in this case this is just a simple Bayesian estimate. There's no phase information taken here, other than that we're taking time [INAUDIBLE] that span, as I'll show you, a period of this fundamental rhythm, this 10-hertz rhythm.
AUDIENCE: So a combination between the theta rhythm and firing rate gives you finer discrimination in terms of spatial location--
MATT WILSON: Well, that's exactly what you'd think. And I'll show you this-- this has a lot of phase information. In fact, John O'Keefe in the early 1990s, as I'll show you, discovered another property of hippocampal place cells: not only do they fire as a function of the animal's location and direction and get modulated by this rhythm, but there's a phase dependency. That is, where the animal is relative to the place field-- the distance within the field-- is actually coded as a difference in phase, in terms of phase precession. So phase actually gives you a lot of information about location.
So what he argued was, phase is what you should be decoding. Phase tells you where the animal is. Firing rate gives you some other information.
AUDIENCE: My question would be, though: if phase is being used to basically differentiate out different aspects of a scene, but it's already used to encode spatial position, and spatial relation is not necessarily always correlated timewise, how would you then decouple that?
MATT WILSON: Yeah, you're right.
AUDIENCE: [INAUDIBLE].
MATT WILSON: Right. So you'd say oh, there's a lot of spatial information in the phase. So phase is [INAUDIBLE]. But if you have all these other things that you want to use it for, isn't that a problem? Well, we went in and actually used this GLM approach-- that is, a sort of model-based statistics that allows us to include [INAUDIBLE]. So we can include a phase term, we can include a position term. And we just ask, how well does a spatial decoder do if we include all this information? You would think, gee, phase should really boost things up. That should give you the most information.
Phase actually gives a very modest bump in the overall decoding power. And one of the reasons is, as I mentioned, with 100 neurons I've already got pretty much all the information I need. Yeah, sure, phase gives you a little bit more. It's like the big jump going from SD to HD. SD TV looks like crap; HD TV looks pretty good. But HD to 4K-- it's too much resolution. If I put that up there, you wouldn't even be able to see it, right? You're too far away. There's more information than you actually need, so it doesn't add very much. And so you'd say, that's really the difference: the spatial firing property, with a small number of neurons, gives us as much information as we need. So what's this other information for [INAUDIBLE]?
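One way the GLM comparison might look as a sketch, using statsmodels' Poisson GLM on simulated data rather than the lab's actual analysis: fit one cell's spike counts with a position basis alone, then add theta-phase regressors and compare the log-likelihoods. The data-generating model, basis functions, and all numbers are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_bins = 5000
pos = rng.uniform(0.0, 1.0, n_bins)                 # position in each time bin (unit track)
theta_phase = rng.uniform(0.0, 2 * np.pi, n_bins)   # theta phase in each time bin

# Simulated ground truth for one cell: a place field plus a weak phase modulation.
lam = 0.5 * np.exp(-0.5 * ((pos - 0.5) / 0.1) ** 2) * (1 + 0.3 * np.cos(theta_phase))
spikes = rng.poisson(lam)

# Design matrices: a position basis alone, then the same basis plus phase regressors.
pos_basis = np.column_stack([np.exp(-0.5 * ((pos - c) / 0.1) ** 2)
                             for c in np.linspace(0.0, 1.0, 8)])
X_pos = sm.add_constant(pos_basis)
X_pos_phase = np.column_stack([X_pos, np.cos(theta_phase), np.sin(theta_phase)])

fit_pos = sm.GLM(spikes, X_pos, family=sm.families.Poisson()).fit()
fit_pos_phase = sm.GLM(spikes, X_pos_phase, family=sm.families.Poisson()).fit()

# The phase terms typically buy only a modest improvement in fit, echoing the
# "modest bump" point above.
print("log-likelihood, position only:   ", round(fit_pos.llf, 1))
print("log-likelihood, position + phase:", round(fit_pos_phase.llf, 1))
```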
AUDIENCE: But wouldn't you then expect phase to de-correlate from spatial position if it's being used for another--
MATT WILSON: It does. But as we'll see, there's systematic variation-- and that's if, in fact, they were independent. If phase were independent, you'd just be trying to spread things out in phase, sort of maximally orthogonalize activity. That would increase information; you could do that. But what you find is that phase varies systematically. And as I'll show, the systematic variation of phase as a function of position is what actually confers the ability to encode trajectory or sequence.
So what you have, within the theta cycle, is actually two phase domains: one in which rate dominates-- that's where you can capture and reflect this kind of contextual information-- and another in which phase dominates, and that's where you can capture sequence information. So it's like you have two independent channels. It's like television: you've got your video and you've got your audio, two different streams that are carried as the same stream but independently modulated. You can pull them out as needed.
And so you can do these two things-- recognize, oh, I'm in the [INAUDIBLE] house, and recognize, oh, I did this or I heard that-- at the same time, to have sequence memory and context memory together, and pull them out based on relative phase preference. So this is just a-- I only got through like 90% of our [INAUDIBLE]. That's OK. This isn't important. We'll get to the important stuff, which is this modulation, a rhythmic modulation as a function of locomotion.
So when animals are interacting with the world, either moving through the world or attentionally engaged-- that is, taking in information-- you see this in activity in the hippocampus as well as in other structures, and we'll see it for instance in [INAUDIBLE] cortical areas and [INAUDIBLE] prefrontal and cingulate cortices. Anything that talks to the hippocampus, you get this rhythm. And there's a phase-locking that occurs. So the rhythm is something that occurs, and when structures need to communicate, they phase-lock.
When the animal stops, the rhythm goes away, and it's replaced by these kinds of aperiodic bursts. These are called sharp-wave ripples-- sharp wave just because of the strong deflection in the electric field. And when you look at the activity of individual [INAUDIBLE], lots of neurons, lots of hippocampal cells, are firing during these sharp-wave ripple events. But this is what I was mentioning before.
So if you zoom in, this is the rhythm, the theta rhythm. The theta rhythm is a local field potential measurement. You put the electrode in and you're looking at kind of population activity, where the activity in this case is likely the reflection of integrated dendritic currents. So this is like the integrated input that the neurons are receiving. The spikes are the output.
So if we look at the timing of individual spikes-- so here's the hippocampal theta rhythm, and this is a single cell-- as the animal enters the field and the cell begins to fire, you find that the cell fires at a particular phase, here at the peak. Now over time, as the animal moves, the firing rate increases: more spikes are emitted. And the relative phase, or timing, of the spikes with respect to this theta oscillation changes. It shifts forward.
And so now we can map phase as a function of position. And when you do that, what you get is something that John O'Keefe discovered in the early 1990s: the property of phase precession. That is, there's a systematic relationship between position within the field and phase-- so these are all the spikes produced by one place cell. This is essentially its place field. If I marginalize across phase and just look at the firing probability, the number of spikes as a function of position, this gives me the place field.
Two things you'll notice. One is that if I do that, the place field itself is not symmetric. Most of the spikes are kind of bunched up here at the end, and it's asymmetric with respect to the direction of travel. For this place field, the animal's moving from left to right, and I know that because of this asymmetry in firing rate. And then you also notice that this axis is phase within the theta cycle-- this is late, and this is early. So phase precession refers to the property that as the animal is moving through its place field, initially spikes fire late. You have to wait all the way until late in the theta cycle.
Then as it moves through the field, the spikes begin to fire earlier and earlier. That's what happens-- earlier and earlier. So farther into the field: earlier spikes, more spikes, more phase variability. You can think about this as the place field actually having two regions-- two place fields, in a sense. There's the early part of the place field, and if I just look at that part, you find that there's a very strong linear correlation between position and phase. So if I wanted to decode position, this is the part where I'd say this is where I'm going to [INAUDIBLE].
This other part-- this is lousy. I'm not getting it; phase is not giving much information there. But that's where I'm getting the most if I'm looking at firing probability. Firing probability here is low, and-- this is summed over many laps-- the probability that the cell fires a spike over here is relatively high. So anything that requires that I use firing probability-- for instance relative familiarity, which requires that I look at relative covariance, at the [INAUDIBLE], at the relative firing rates as a function of time-- this portion of the field is giving a lot of that rate information. And so early and late field also segregates into early and late phase. The latter portion of the theta cycle is carrying most of the temporal information.
Early phase is giving me all this rate information. So really there's early phase and late phase-- two different types of information. And a very simple biophysical model can actually account for all of this. It's a simple excitability model. Imagine that the animal's walking through the field. In blue, you're seeing excitation-- this is the input, the excitatory drive, that the cell is receiving. And red is inhibition. And inhibition is fluctuating; it's time-varying at the theta frequency.
And if we simply ask when a spike is going to be emitted, it's going to be when excitation exceeds inhibition-- when the blue line is higher than the red line. So as an animal enters the field, as excitation ramps up, you have to wait until inhibition drops all the way down to here. That gives us early field, late phase. As the excitation increases, the cell can fire sooner and sooner. So it's a very simple principle: stronger input, fire earlier; weaker input, fire later. It's just a magnitude-to-latency transformation.
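To see that this crossing rule really does produce precession, here is a minimal simulation sketch with invented numbers rather than fitted parameters: a linear excitatory ramp is compared against an 8-hertz oscillating inhibition, and the first moment in each theta cycle where excitation wins is taken as the spike.

```python
import numpy as np

f_theta = 8.0                 # theta frequency (Hz)
dt = 0.001                    # 1 ms steps
run_time = 1.5                # seconds spent crossing the place field
t = np.arange(0.0, run_time, dt)

excitation = t / run_time                                  # ramping drive as the animal
                                                           # moves deeper into the field
inhibition = 0.6 + 0.5 * np.cos(2 * np.pi * f_theta * t)   # theta-modulated inhibition

cycle = np.floor(f_theta * t).astype(int)
for c in np.unique(cycle):
    # first moment in this theta cycle where excitation exceeds inhibition
    idx = np.where((cycle == c) & (excitation > inhibition))[0]
    if idx.size:
        phase = (2 * np.pi * f_theta * t[idx[0]]) % (2 * np.pi)
        print(f"cycle {c:2d}: first spike at phase {np.degrees(phase):5.1f} deg")
```

Later cycles, where the ramp is higher, print earlier phases, which is the precession; the fraction of each cycle spent above the inhibition line also grows, which is the rate increase.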
That magnitude-to-latency transformation gives you all of these properties. It gives you the phase shift. The probability of firing is going to be a function of the integrated time during which blue exceeds red-- during which excitation wins. So if there's more excitation, firing rates are going to be higher, and there's going to be more phase variability. Yeah. And then I'll--
AUDIENCE: So how is the velocity of the left [INAUDIBLE] this kind of phase [INAUDIBLE].
MATT WILSON: Yes. So velocity-- when John O'Keefe first observed this phase precession, his conclusion was that position is encoded in phase and velocity is encoded in rate. Now you can think about why velocity would be encoded in rate. One thing is that velocity determines the time spent in the field: if you think about the number of theta cycles that are actually going to be occupied by a place field, as velocity increases, the number of cycles gets smaller. So it's going to over-sample the high-firing-probability regions. So essentially, when the animals are moving very quickly, you're going to lose a lot of the temporal information; the [INAUDIBLE] resolution temporal information is going to drop. This is largely unaffected by that, so you can see how you would draw that conclusion. And maybe that's a reasonable or accurate description.
Another thing that happens is, perhaps in an effort to compensate, the frequency of the theta rhythm goes up. So there's a relationship-- a relatively modest relationship-- between velocity and frequency. Higher running speed, higher frequency. [INAUDIBLE].
AUDIENCE: Sorry, why is the excitation increasing? Why is that blue line going up as you--
MATT WILSON: Oh, yeah-- so all of this relies on the asymmetry, the ramping property, of the excitation. This is something that [INAUDIBLE], when he was in the lab, both postulated and then empirically demonstrated to be an experience-dependent phenomenon. That is, you can start with something that is not biased, but if you bias the behavior and then introduce an asymmetric plasticity or learning rule-- spike timing-dependent plasticity-- then that temporally asymmetric plasticity rule will give you a spatially asymmetric receptive field, contingent upon the systematic [INAUDIBLE]. So when the animal runs from left to right, that asymmetry in timing will give you asymmetry in spatial [INAUDIBLE].
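As a toy illustration of that argument, in the spirit of the Mehta-style account rather than a fitted model, the sketch below pools Gaussian place-field inputs onto one target cell, applies a temporally asymmetric, STDP-like weight change over repeated left-to-right laps, and then looks at the summed drive along the track. All parameters are invented.

```python
import numpy as np

track = np.linspace(0.0, 1.0, 400)                 # 1 m track
centers = np.linspace(0.0, 1.0, 21)                # input place-field centers
sigma, speed = 0.05, 0.5                           # input field width (m), run speed (m/s)

w = np.exp(-0.5 * ((centers - 0.5) / 0.15) ** 2)   # initially symmetric weights onto the target
A_plus, A_minus, tau_stdp = 0.05, 0.05, 0.1        # temporally asymmetric rule (tau in s)

for lap in range(30):                              # repeated left-to-right traversals
    # One spike per cell per lap, at the time its field center is crossed;
    # dt is the target cell's spike time minus each input's spike time.
    dt = (0.5 - centers) / speed
    dw = (np.where(dt > 0,  A_plus * np.exp(-dt / tau_stdp), 0.0)    # input before target: LTP
        + np.where(dt < 0, -A_minus * np.exp(dt / tau_stdp), 0.0))   # input after target: LTD
    w = np.clip(w + dw, 0.0, 1.0)

# Summed drive onto the target cell along the track, after learning.
drive = (w[:, None] * np.exp(-0.5 * ((track[None, :] - centers[:, None]) / sigma) ** 2)).sum(axis=0)
print("peak of summed drive:", round(float(track[drive.argmax()]), 2), "(was 0.5 before learning)")
print("drive 10 cm before vs after the old center:",
      round(float(drive[np.searchsorted(track, 0.4)]), 2), "vs",
      round(float(drive[np.searchsorted(track, 0.6)]), 2))
```

After the laps the drive is no longer mirror-symmetric about the original center: there is more drive behind it than ahead of it, and the peak has shifted against the direction of motion-- a spatial asymmetry inherited purely from the temporal asymmetry of the rule plus the consistent running direction. The exact shape depends on the assumed parameters; the real data and models are richer than this toy.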
That's actually very important, because what it says is that this asymmetry is a reflection of behavior. You can't build that in until you know what the animal's actually doing. So where a cell fires, again, that's not a function of experience; that's just a random draw, where your hippocampus reaches in and throws out the 30%. And so this portion of it, you'd say, is just the random draw.
But the sequence-- the sequence is a product of experience. The asymmetry is a product of experience, and the asymmetry is what gives you the systematic phase-encoding capability. So this is where experience, the time-dependent component of hippocampal memory, arises. It's in this fundamental property. Now by extrapolation you'd say, OK, if all that relies on spike timing-dependent plasticity, then if you take that out, the whole thing goes in the can, and you should lose a particular aspect of hippocampally-dependent memory: the experiential component. Static contextual recognition should survive. So if you know fear conditioning, that's what I would predict: you could do fear conditioning, and the animal remembers, I've been here before-- I don't remember exactly what happened, but I've been here before-- and that could be conditioned onto some downstream structure.
Now something that requires explicit recall-- a navigational task, where the animal has to remember, oh, gee, where was the food? Food was over there. Was it in that hole or was it in that hole? That's going to require some element of time-sequence memory. So the idea is that you could associate these two components, even test them empirically, and then [INAUDIBLE] back to some property of basic biophysics. And this basic biophysics-- nah, we're already way past time. But it's very interesting, this idea that different phases, different phase domains, could be carrying different kinds of information. It could be like encoding and retrieval.
So you can think about this: where's the encoding and where's the retrieval? Just using those terms very loosely, what might you point to? It's not clear that you could map it specifically, but you can think about there being different kinds of demands here. For the retrieval component, you'd say, look, that static context recognition-- you think about it as recognition because it's formed in one shot. You drop in, formation is done. I don't need any more, right?
So rapid encoding-- all I need to do now is retrieve it. Whereas this other part is constantly varying; this is the changing firing rates. So there's going to be a constant encoding demand on this region of phase space. And as I mentioned, the inputs to the different dendritic regions have different phases-- they're shifted by 180 degrees. So the different inputs from the entorhinal cortex and CA3 map into these different phase regimes. It gives you a very rich computational substrate for doing interesting stuff relating to memory encoding and the retrieval of context, of sequence, of experience. I can't get into any of that. But I just want to put this last thing up here. I didn't even get into any of the data slides, but.
So take that basic principle: you make these things asymmetric and you get phase precession. And phase precession in a single cell translates into sequence encoding across a population of cells. This is what phase precession looks like in one cell. If I take two cells-- place cell 1 in blue, place cell 2 in purple-- now what you see is that spatial adjacency, when the animal runs through here activating place cell 1 and then place cell 2 on this behavioral time scale-- which could be seconds, it could be minutes, it could be anything-- this phase precession, this phase transformation, means that in every theta cycle, place cell 1 will fire before place cell 2. And the time will reflect the distance here. And this is actual distance.
So you get this sequence or trajectory compressed into a single 100-millisecond cycle, where you do two things. One is you transform this long and variable behavioral time scale into something that is short and [INAUDIBLE] biophysically appropriate for doing things like spike timing-dependent plasticity-- which is exactly what I had suggested conferred the downstream asymmetry that allowed you to encode sequence. So if somewhere downstream needs spikes to occur with the proper timing, in order for that structure to develop these asymmetries, these ramps, this is what would give you the proper spike timing. So it maps into something that's short and consistent, gets rid of the variability of the behavioral time scale, and [INAUDIBLE] by a physical time scale.
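To see the compression numerically, here is a small sketch assuming an idealized linear phase-precession rule with made-up field width and theta period: two cells whose field centers sit 10 centimeters apart are assigned firing phases given the animal's position, and the within-cycle lag between them comes out fixed by that distance rather than by how fast the animal happens to be moving.

```python
import numpy as np

theta_period = 0.125                 # seconds per theta cycle (~8 Hz)
field_width = 0.4                    # meters over which phase precesses

def firing_phase(pos, field_center):
    """Idealized linear precession: 360 deg at field entry down to 0 deg at exit."""
    frac = (pos - (field_center - field_width / 2)) / field_width
    return 360.0 * (1.0 - np.clip(frac, 0.0, 1.0))

center_1, center_2 = 1.0, 1.1        # place cell 1 sits 10 cm before place cell 2
for pos in [0.95, 1.05, 1.15]:       # positions where the two fields overlap
    ph1, ph2 = firing_phase(pos, center_1), firing_phase(pos, center_2)
    lag_ms = (ph2 - ph1) / 360.0 * theta_period * 1000.0
    print(f"position {pos:.2f} m: cell 1 at {ph1:5.1f} deg, cell 2 at {ph2:5.1f} deg, "
          f"lag ~{lag_ms:.0f} ms (cell 1 leads)")
```

The lag of roughly 30 milliseconds is on the time scale where spike timing-dependent plasticity can operate, which is the point about making behavioral sequences biophysically usable downstream.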
So I like to say there's another general rule. General rules: things that are organized as modules-- dendritic modules-- separate things in phase. And ramps-- anytime I see ramps-- there are ramps everywhere. You look in the brain and it's like everything is a ramp. There are ramps in the prefrontal cortex over time, ramps in [INAUDIBLE] fields and, you know, in motion and high-level visual processing, ramps in the hippocampus. There were ramps discovered actually before the discovery of place fields in [INAUDIBLE] animals. Just looking at basic conditioning, what do you see over time? An animal sitting there, anticipating there's going to be a light, he's going to get some juice or something. What happens [INAUDIBLE]? You get ramps. There's ramping activity over time.
When you have ramps, you can transform a ramp into these internal sequence representations using exactly the same simple strategy. This ramp just happens to be one that's a function of position; I could change it to anything I want, and everything still applies. So I would think about that. When I see ramps, I think, oh, there are sequences-- there's sequence encoding, phase coding, at work.
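As a last small sketch, reusing the same crossing rule as the earlier example with invented numbers: the ramping variable does not have to be position. Three hypothetical cells sitting at different points along whatever ramp they carry, and therefore with different momentary drive levels, fall into an ordered set of latencies within a single theta cycle.

```python
import numpy as np

f_theta, dt = 8.0, 0.001
t_cycle = np.arange(0.0, 1.0 / f_theta, dt)                   # one theta cycle
inhibition = 0.6 + 0.5 * np.cos(2 * np.pi * f_theta * t_cycle)

# Three hypothetical cells at different points along their own ramps,
# i.e. with different momentary drive levels.
drives = {"cell A (far along its ramp)": 0.95,
          "cell B (mid-ramp)": 0.60,
          "cell C (early on its ramp)": 0.25}

for name, drive in drives.items():
    first = np.where(drive > inhibition)[0][0]   # first index where the drive wins
    print(f"{name}: fires ~{1000 * t_cycle[first]:.0f} ms into the cycle")
```

Stronger momentary drive fires earlier, so any family of ramps becomes an ordered within-cycle sequence under the same magnitude-to-latency transformation.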
So let's just quit there. And you do in fact-- yeah, this just shows you that you actually do get the sequences. That's the real representation in the hippocampus: sequences, short sequences, that go from just behind the animal to just in front of the animal. This is the actual animal's position, and these are the decoded sequences-- using this Bayesian decoder-- from just behind the animal to just in front of the animal.