1
Meissner-Bernard C, Zenke F, Friedrich RW. Geometry and dynamics of representations in a precisely balanced memory network related to olfactory cortex. eLife 2025; 13:RP96303. PMID: 39804831; PMCID: PMC11733691; DOI: 10.7554/elife.96303.
Abstract
Biological memory networks are thought to store information by experience-dependent changes in the synaptic connectivity between assemblies of neurons. Recent models suggest that these assemblies contain both excitatory and inhibitory neurons (E/I assemblies), resulting in co-tuning and precise balance of excitation and inhibition. To understand the computational consequences of E/I assemblies under biologically realistic constraints, we built a spiking network model based on experimental data from telencephalic area Dp of adult zebrafish, a precisely balanced recurrent network homologous to piriform cortex. We found that E/I assemblies stabilized firing rate distributions compared to networks with excitatory assemblies and global inhibition. Unlike classical memory models, networks with E/I assemblies did not show discrete attractor dynamics. Rather, responses to learned inputs were locally constrained onto manifolds that 'focused' activity into neuronal subspaces. The covariance structure of these manifolds supported pattern classification when information was retrieved from selected neuronal subsets. Networks with E/I assemblies therefore transformed the geometry of neuronal coding space, resulting in continuous representations that reflected both relatedness of inputs and an individual's experience. Such continuous representations enable fast pattern classification, can support continual learning, and may provide a basis for higher-order learning and cognitive computations.
Affiliation(s)
- Friedemann Zenke
- Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland
- University of Basel, Basel, Switzerland
- Rainer W Friedrich
- Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland
- University of Basel, Basel, Switzerland
2
Breffle J, Germaine H, Shin JD, Jadhav SP, Miller P. Intrinsic dynamics of randomly clustered networks generate place fields and preplay of novel environments. eLife 2024; 13:RP93981. PMID: 39422556; PMCID: PMC11488848; DOI: 10.7554/elife.93981.
Abstract
During both sleep and awake immobility, hippocampal place cells reactivate time-compressed versions of sequences representing recently experienced trajectories in a phenomenon known as replay. Intriguingly, spontaneous sequences can also correspond to forthcoming trajectories in novel environments experienced later, in a phenomenon known as preplay. Here, we present a model showing that sequences of spikes correlated with the place fields underlying spatial trajectories in both previously experienced and future novel environments can arise spontaneously in neural circuits with random, clustered connectivity rather than pre-configured spatial maps. Moreover, the realistic place fields themselves arise in the circuit from minimal, landmark-based inputs. We find that preplay quality depends on the network's balance of cluster isolation and overlap, with optimal preplay occurring in small-world regimes of high clustering yet short path lengths. We validate the results of our model by applying the same place field and preplay analyses to previously published rat hippocampal place cell data. Our results show that clustered recurrent connectivity can generate spontaneous preplay and immediate replay of novel environments. These findings support a framework whereby novel sensory experiences become associated with preexisting "pluripotent" internal neural activity patterns.
Affiliation(s)
- Jordan Breffle
- Neuroscience Program, Brandeis University, Waltham, United States
- Hannah Germaine
- Neuroscience Program, Brandeis University, Waltham, United States
- Justin D Shin
- Neuroscience Program, Brandeis University, Waltham, United States
- Volen National Center for Complex Systems, Brandeis University, Waltham, United States
- Department of Psychology, Brandeis University, Waltham, United States
- Shantanu P Jadhav
- Neuroscience Program, Brandeis University, Waltham, United States
- Volen National Center for Complex Systems, Brandeis University, Waltham, United States
- Department of Psychology, Brandeis University, Waltham, United States
- Paul Miller
- Neuroscience Program, Brandeis University, Waltham, United States
- Volen National Center for Complex Systems, Brandeis University, Waltham, United States
- Department of Biology, Brandeis University, Waltham, United States
3
Jauch J, Becker M, Tetzlaff C, Fauth MJ. Differences in the consolidation by spontaneous and evoked ripples in the presence of active dendrites. PLoS Comput Biol 2024; 20:e1012218. PMID: 38917228; PMCID: PMC11230591; DOI: 10.1371/journal.pcbi.1012218.
Abstract
Ripples are a typical form of neural activity in hippocampal neural networks associated with the replay of episodic memories during sleep as well as sleep-related plasticity and memory consolidation. The emergence of ripples has been observed both dependent on and independent of input from other brain areas and often coincides with dendritic spikes. Yet, it is unclear how input-evoked and spontaneous ripples, as well as dendritic excitability, affect plasticity and consolidation. Here, we use mathematical modeling to compare these cases. We find that consolidation, as well as the emergence of spontaneous ripples, depends on a reliable propagation of activity in feed-forward structures which constitute memory representations. This propagation is facilitated by excitable dendrites, which entail that a few strong synapses are sufficient to trigger neuronal firing. In this situation, stimulation-evoked ripples lead to the potentiation of weak synapses within the feed-forward structure and, thus, to a consolidation of a more general sequence memory. However, spontaneous ripples that occur without stimulation only consolidate a sparse backbone of the existing strong feed-forward structure. Based on this, we test a recently hypothesized scenario in which the excitability of dendrites is transiently enhanced after learning, and show that such a transient increase can strengthen, restructure and consolidate even weak hippocampal memories, which would otherwise be forgotten. Hence, a transient increase in dendritic excitability would indeed provide a mechanism for stabilizing memories.
Affiliation(s)
- Jannik Jauch
- Third Institute for Physics, Georg-August-University, Göttingen, Germany
- Moritz Becker
- Group of Computational Synaptic Physiology, Department for Neuro- and Sensory Physiology, University Medical Center Göttingen, Göttingen, Germany
- Christian Tetzlaff
- Group of Computational Synaptic Physiology, Department for Neuro- and Sensory Physiology, University Medical Center Göttingen, Göttingen, Germany
- Michael Jan Fauth
- Third Institute for Physics, Georg-August-University, Göttingen, Germany
4
Sammons RP, Vezir M, Moreno-Velasquez L, Cano G, Orlando M, Sievers M, Grasso E, Metodieva VD, Kempter R, Schmidt H, Schmitz D. Structure and function of the hippocampal CA3 module. Proc Natl Acad Sci U S A 2024; 121:e2312281120. PMID: 38289953; PMCID: PMC10861929; DOI: 10.1073/pnas.2312281120.
Abstract
The hippocampal formation is crucial for learning and memory, with submodule CA3 thought to be the substrate of pattern completion. However, the underlying synaptic and computational mechanisms of this network are not well understood. Here, we perform circuit reconstruction of a CA3 module using three-dimensional (3D) electron microscopy data and combine this with functional connectivity recordings and computational simulations to determine possible CA3 network mechanisms. Direct measurements of connectivity, using both physiological recordings and structural 3D electron microscopy, revealed a connectivity rate multi-fold higher than previously assumed. Mathematical modelling indicated that such CA3 networks can robustly generate pattern completion and replay memory sequences. In conclusion, our data demonstrate that the connectivity scheme of the hippocampal submodule is well suited for efficient memory storage and retrieval.
Affiliation(s)
- Rosanna P. Sammons
- Neuroscience Research Center, Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health, Berlin 10117, Germany
- Mourat Vezir
- Ernst Strüngmann Institute for Neuroscience, Frankfurt am Main 60528, Germany
- Laura Moreno-Velasquez
- Neuroscience Research Center, Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health, Berlin 10117, Germany
- Gaspar Cano
- Institute for Theoretical Biology, Department of Biology, Humboldt-Universität zu Berlin, Berlin 10115, Germany
- Marta Orlando
- Neuroscience Research Center, Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health, Berlin 10117, Germany
- Meike Sievers
- Department of Connectomics, Max Planck Institute for Brain Research, Frankfurt am Main 60438, Germany
- Eleonora Grasso
- Ernst Strüngmann Institute for Neuroscience, Frankfurt am Main 60528, Germany
- Verjinia D. Metodieva
- Neuroscience Research Center, Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health, Berlin 10117, Germany
- Richard Kempter
- Institute for Theoretical Biology, Department of Biology, Humboldt-Universität zu Berlin, Berlin 10115, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin 10115, Germany
- Einstein Center for Neurosciences Berlin, Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health, Berlin 10117, Germany
- Helene Schmidt
- Ernst Strüngmann Institute for Neuroscience, Frankfurt am Main 60528, Germany
- Dietmar Schmitz
- Neuroscience Research Center, Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health, Berlin 10117, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin 10115, Germany
- Einstein Center for Neurosciences Berlin, Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health, Berlin 10117, Germany
- German Center for Neurodegenerative Diseases Berlin, Berlin 10117, Germany
- Max-Delbrück Center for Molecular Medicine in the Helmholtz Association, Berlin 13125, Germany
5
Chen ZS, Wilson MA. How our understanding of memory replay evolves. J Neurophysiol 2023; 129:552-580. PMID: 36752404; PMCID: PMC9988534; DOI: 10.1152/jn.00454.2022.
Abstract
Memory reactivations and replay, widely reported in the hippocampus and cortex across species, have been implicated in memory consolidation, planning, and spatial and skill learning. Technological advances in electrophysiology, calcium imaging, and human neuroimaging techniques have enabled neuroscientists to measure large-scale neural activity with increasing spatiotemporal resolution and have provided opportunities for developing robust analytic methods to identify memory replay. In this article, we first review a large body of historically important and representative memory replay studies from the animal and human literature. We then discuss our current understanding of memory replay functions in learning, planning, and memory consolidation and further discuss the progress in computational modeling that has contributed to these improvements. Next, we review past and present analytic methods for replay analyses and discuss their limitations and challenges. Finally, looking ahead, we discuss some promising analytic methods for detecting nonstereotypical, behaviorally nondecodable structures from large-scale neural recordings. We argue that seamless integration of multisite recordings, real-time replay decoding, and closed-loop manipulation experiments will be essential for delineating the role of memory replay in a wide range of cognitive and motor functions.
Affiliation(s)
- Zhe Sage Chen
- Department of Psychiatry, New York University Grossman School of Medicine, New York, New York, United States
- Department of Neuroscience and Physiology, New York University Grossman School of Medicine, New York, New York, United States
- Neuroscience Institute, New York University Grossman School of Medicine, New York, New York, United States
- Department of Biomedical Engineering, New York University Tandon School of Engineering, Brooklyn, New York, United States
- Matthew A Wilson
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
- Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
6
Bang JW, Hamilton-Fletcher G, Chan KC. Visual Plasticity in Adulthood: Perspectives from Hebbian and Homeostatic Plasticity. Neuroscientist 2023; 29:117-138. PMID: 34382456; PMCID: PMC9356772; DOI: 10.1177/10738584211037619.
Abstract
The visual system retains profound plastic potential in adulthood. In the current review, we summarize the evidence of preserved plasticity in the adult visual system during visual perceptual learning as well as both monocular and binocular visual deprivation. In each condition, we discuss how such evidence reflects two major cellular mechanisms of plasticity: Hebbian and homeostatic processes. We focus on how these two mechanisms work together to shape plasticity in the visual system. In addition, we discuss how these two mechanisms could be further revealed in future studies investigating cross-modal plasticity in the visual system.
Affiliation(s)
- Ji Won Bang
- Department of Ophthalmology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
- Giles Hamilton-Fletcher
- Department of Ophthalmology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
- Kevin C. Chan
- Department of Ophthalmology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
- Department of Radiology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
- Neuroscience Institute, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
- Center for Neural Science, College of Arts and Science, New York University, New York, NY, USA
7
Pietras B, Schmutz V, Schwalger T. Mesoscopic description of hippocampal replay and metastability in spiking neural networks with short-term plasticity. PLoS Comput Biol 2022; 18:e1010809. PMID: 36548392; PMCID: PMC9822116; DOI: 10.1371/journal.pcbi.1010809.
Abstract
Bottom-up models of functionally relevant patterns of neural activity provide an explicit link between neuronal dynamics and computation. A prime example of such functional activity patterns is hippocampal replay: propagating bursts of place-cell activity that are critical for memory consolidation. The sudden and repeated occurrences of these burst states during ongoing neural activity suggest metastable neural circuit dynamics. As metastability has been attributed to noise and/or slow fatigue mechanisms, we propose a concise mesoscopic model which accounts for both. Crucially, our model is bottom-up: it is analytically derived from the dynamics of finite-size networks of Linear-Nonlinear Poisson neurons with short-term synaptic depression. As such, noise is explicitly linked to stochastic spiking and network size, and fatigue is explicitly linked to synaptic dynamics. To derive the mesoscopic model, we first consider a homogeneous spiking neural network and follow the temporal coarse-graining approach of Gillespie to obtain a "chemical Langevin equation", which can be naturally interpreted as a stochastic neural mass model. The Langevin equation is computationally inexpensive to simulate and enables a thorough study of metastable dynamics in classical setups (population spikes and Up-Down-state dynamics) by means of phase-plane analysis. An extension of the Langevin equation for small network sizes is also presented. The stochastic neural mass model constitutes the basic component of our mesoscopic model for replay. We show that the mesoscopic model faithfully captures the statistical structure of individual replayed trajectories in microscopic simulations and in previously reported experimental data.
Moreover, compared to the deterministic Romani-Tsodyks model of place-cell dynamics, it exhibits a higher level of variability regarding order, direction and timing of replayed trajectories, which seems biologically more plausible and could be functionally desirable. This variability is the product of a new dynamical regime where metastability emerges from a complex interplay between finite-size fluctuations and local fatigue.
Affiliation(s)
- Bastian Pietras
- Institute for Mathematics, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience, Berlin, Germany
- Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Valentin Schmutz
- Brain Mind Institute, School of Computer and Communication Sciences and School of Life Sciences, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Tilo Schwalger
- Institute for Mathematics, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience, Berlin, Germany
8
Tirole M, Huelin Gorriz M, Takigawa M, Kukovska L, Bendor D. Experience-driven rate modulation is reinstated during hippocampal replay. eLife 2022; 11:79031. PMID: 35993533; PMCID: PMC9489210; DOI: 10.7554/elife.79031.
Abstract
Replay, the sequential reactivation within a neuronal ensemble, is a central hippocampal mechanism postulated to drive memory processing. While both rate and place representations are used by hippocampal place cells to encode behavioral episodes, replay has been largely defined by only the latter – based on the fidelity of sequential activity across neighboring place fields. Here, we show that dorsal CA1 place cells in rats can modulate their firing rate between replay events of two different contexts. This experience-dependent phenomenon mirrors the same pattern of rate modulation observed during behavior and can be used independently from place information within replay sequences to discriminate between contexts. Our results reveal the existence of two complementary neural representations available for memory processes.

How do our brains store memories? We now know that this is a complex and dynamic process, involving multiple regions of the brain. A brain region, called the hippocampus, plays an important role in memory formation. While we sleep, the hippocampus works to consolidate information, and eventually creates stable, long-term memories that are then stored in other parts of the brain. But how does the hippocampus do this? Neuroscientists believe that it can replay the patterns of brain activity that represent particular memories. By repeatedly doing this while we sleep, the hippocampus can then direct the transfer of this information to the rest of the brain for storage. The behaviour of nerve cells in the brain underpins these patterns of brain activity. When a nerve cell is active, it fires tiny electrical impulses that can be detected experimentally. The brain thus represents information in two ways: which nerve cells are active and when (sequential patterns); and how active the nerve cells are (how fast they fire electrical impulses or firing rate).
For example, when an animal moves from one location to another, special place cells in the hippocampus become active in a distinct sequence. Depending on the context, they will also fire faster or slower. We know that the hippocampus can replay sequential patterns of nerve cell activity during memory consolidation, but whether it can also replay the firing rates associated with a particular experience is still unknown. Tirole, Huelin Gorriz et al. set out to determine if the hippocampus could also preserve the information encoded by firing rate during replay. In the experiments, rats explored two different environments that they had not seen before. The activity of the rats’ place cells was recorded before and after they explored, and also later while they were sleeping. Analysis of the recordings revealed that during replay, the rats’ hippocampi could indeed reproduce both the sequential patterns of activity and the firing rate of the place cells. It also confirmed that each environment was associated with unique firing rates – in other words, the firing rates were memory-specific. These results contribute to our understanding of how the hippocampus represents and processes information about our experiences. More broadly, they also shed new light on how the brain lays down memories, by revealing a key part of the mechanism that it uses to consolidate that information.
9
Replay, the default mode network and the cascaded memory systems model. Nat Rev Neurosci 2022; 23:628-640. PMID: 35970912; DOI: 10.1038/s41583-022-00620-6.
Abstract
The spontaneous replay of patterns of activity related to past experiences and memories is a striking feature of brain activity, as is the coherent activation of sets of brain areas - particularly those comprising the default mode network (DMN) - during rest. We propose that these two phenomena are strongly intertwined and that their potential functions overlap. In the 'cascaded memory systems model' that we outline here, we hypothesize that the DMN forms the backbone for the propagation of replay, mediating interactions between the hippocampus and the neocortex that enable the consolidation of new memories. The DMN may also independently ignite replay cascades, which support reactivation of older memories or high-level semantic representations. We suggest that transient cortical activations, inducing long-range correlations across the neocortex, are a key mechanism supporting a hierarchy of representations that progresses from simple percepts to semantic representations of causes and, finally, to whole episodes.
10
Braun W, Memmesheimer RM. High-frequency oscillations and sequence generation in two-population models of hippocampal region CA1. PLoS Comput Biol 2022; 18:e1009891. PMID: 35176028; PMCID: PMC8890743; DOI: 10.1371/journal.pcbi.1009891.
Abstract
Hippocampal sharp wave/ripple oscillations are a prominent pattern of collective activity, which consists of a strong overall increase of activity with superimposed (140–200 Hz) ripple oscillations. Despite its prominence and its experimentally demonstrated importance for memory consolidation, the mechanisms underlying its generation are not yet understood. Several models assume that recurrent networks of inhibitory cells alone can explain the generation and main characteristics of the ripple oscillations. Recent experiments, however, indicate that in addition to inhibitory basket cells, the pattern requires in vivo the activity of the local population of excitatory pyramidal cells. Here, we study a model for networks in the hippocampal region CA1 incorporating such a local excitatory population of pyramidal neurons. We start by investigating its ability to generate ripple oscillations using extensive simulations. Using biologically plausible parameters, we find that short pulses of external excitation triggering excitatory cell spiking are required for sharp wave/ripple generation with oscillation patterns similar to in vivo observations. Our model has plausible values for single neuron, synapse and connectivity parameters, random connectivity and no strong feedforward drive to the inhibitory population. Specifically, whereas temporally broad excitation can lead to high-frequency oscillations in the ripple range, sparse pyramidal cell activity is only obtained with pulse-like external CA3 excitation. Further simulations indicate that such short pulses could originate from dendritic spikes in the apical or basal dendrites of CA1 pyramidal cells, which are triggered by coincident spike arrivals from hippocampal region CA3.
Finally, we show that replay of sequences by pyramidal neurons and ripple oscillations can arise intrinsically in CA1 due to structured connectivity that gives rise to alternating excitatory pulse and inhibitory gap coding; the latter denotes phases of silence in specific basket cell groups, which induce selective disinhibition of groups of pyramidal neurons. This general mechanism for sequence generation leads to sparse pyramidal cell and dense basket cell spiking, does not rely on synfire chain-like feedforward excitation and may be relevant for other brain regions as well.
Affiliation(s)
- Wilhelm Braun
- Neural Network Dynamics and Computation, Institute of Genetics, University of Bonn, Bonn, Germany
- Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Raoul-Martin Memmesheimer
- Neural Network Dynamics and Computation, Institute of Genetics, University of Bonn, Bonn, Germany
11
Ecker A, Bagi B, Vértes E, Steinbach-Németh O, Karlocai MR, Papp OI, Miklós I, Hájos N, Freund T, Gulyás AI, Káli S. Hippocampal sharp wave-ripples and the associated sequence replay emerge from structured synaptic interactions in a network model of area CA3. eLife 2022; 11:71850. PMID: 35040779; PMCID: PMC8865846; DOI: 10.7554/elife.71850.
Abstract
Hippocampal place cells are activated sequentially as an animal explores its environment. These activity sequences are internally recreated (‘replayed’), either in the same or reversed order, during bursts of activity (sharp wave-ripples [SWRs]) that occur in sleep and awake rest. SWR-associated replay is thought to be critical for the creation and maintenance of long-term memory. In order to identify the cellular and network mechanisms of SWRs and replay, we constructed and simulated a data-driven model of area CA3 of the hippocampus. Our results show that the chain-like structure of recurrent excitatory interactions established during learning not only determines the content of replay, but is essential for the generation of the SWRs as well. We find that bidirectional replay requires the interplay of the experimentally confirmed, temporally symmetric plasticity rule, and cellular adaptation. Our model provides a unifying framework for diverse phenomena involving hippocampal plasticity, representations, and dynamics, and suggests that the structured neural codes induced by learning may have greater influence over cortical network states than previously appreciated.
12
Sarazin MXB, Victor J, Medernach D, Naudé J, Delord B. Online Learning and Memory of Neural Trajectory Replays for Prefrontal Persistent and Dynamic Representations in the Irregular Asynchronous State. Front Neural Circuits 2021; 15:648538. PMID: 34305535; PMCID: PMC8298038; DOI: 10.3389/fncir.2021.648538.
Abstract
In the prefrontal cortex (PFC), higher-order cognitive functions and adaptive flexible behaviors rely on continuous dynamical sequences of spiking activity that constitute neural trajectories in the state space of activity. Neural trajectories subserve diverse representations, from explicit mappings in physical spaces to generalized mappings in the task space, and up to complex abstract transformations such as working memory, decision-making and behavioral planning. Computational models have separately assessed learning and replay of neural trajectories, often using unrealistic learning rules or decoupling simulations for learning from replay. Hence, it remains an open question how neural trajectories are learned, memorized and replayed online, with permanently acting biological plasticity rules. The asynchronous irregular regime characterizing cortical dynamics in awake conditions constitutes a major source of disorder that may jeopardize plasticity and replay of locally ordered activity. Here, we show that a recurrent model of local PFC circuitry endowed with realistic synaptic spike timing-dependent plasticity and scaling processes can learn, memorize and replay large-size neural trajectories online under asynchronous irregular dynamics, at regular or fast (sped-up) timescale. Presented trajectories are quickly learned (within seconds) as synaptic engrams in the network, and the model is able to chunk overlapping trajectories presented separately. These trajectory engrams last long-term (dozens of hours) and trajectory replays can be triggered over an hour. In turn, we show the conditions under which trajectory engrams and replays preserve asynchronous irregular dynamics in the network.
Functionally, spiking activity during trajectory replays at regular timescale accounts for both dynamical coding with temporal tuning in individual neurons, persistent activity at the population level, and large levels of variability consistent with observed cognitive-related PFC dynamics. Together, these results offer a consistent theoretical framework accounting for how neural trajectories can be learned, memorized and replayed in PFC networks circuits to subserve flexible dynamic representations and adaptive behaviors.
Affiliation(s)
- Matthieu X B Sarazin
- Institut des Systèmes Intelligents et de Robotique, CNRS, Inserm, Sorbonne Université, Paris, France
- Julie Victor
- CEA Paris-Saclay, CNRS, NeuroSpin, Saclay, France
- David Medernach
- Institut des Systèmes Intelligents et de Robotique, CNRS, Inserm, Sorbonne Université, Paris, France
- Jérémie Naudé
- Neuroscience Paris Seine - Institut de biologie Paris Seine, CNRS, Inserm, Sorbonne Université, Paris, France
- Bruno Delord
- Institut des Systèmes Intelligents et de Robotique, CNRS, Inserm, Sorbonne Université, Paris, France
|
13
|
Aljadeff J, Gillett M, Pereira Obilinovic U, Brunel N. From synapse to network: models of information storage and retrieval in neural circuits. Curr Opin Neurobiol 2021; 70:24-33. [PMID: 34175521 DOI: 10.1016/j.conb.2021.05.005] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Received: 03/03/2021] [Revised: 05/06/2021] [Accepted: 05/25/2021] [Indexed: 10/21/2022]
Abstract
The mechanisms of information storage and retrieval in brain circuits are still the subject of debate. It is widely believed that information is stored at least in part through changes in synaptic connectivity in networks that encode this information and that these changes lead in turn to modifications of network dynamics, such that the stored information can be retrieved at a later time. Here, we review recent progress in deriving synaptic plasticity rules from experimental data and in understanding how plasticity rules affect the dynamics of recurrent networks. We show that the dynamics generated by such networks exhibit a large degree of diversity, depending on parameters, similar to experimental observations in vivo during delayed response tasks.
Affiliation(s)
- Johnatan Aljadeff
- Neurobiology Section, Division of Biological Sciences, UC San Diego, USA
- Nicolas Brunel
- Department of Neurobiology, Duke University, USA; Department of Physics, Duke University, USA
|
14
|
Frölich S, Marković D, Kiebel SJ. Neuronal Sequence Models for Bayesian Online Inference. Front Artif Intell 2021; 4:530937. [PMID: 34095815 PMCID: PMC8176225 DOI: 10.3389/frai.2021.530937] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Received: 01/30/2021] [Accepted: 04/13/2021] [Indexed: 11/13/2022]
Abstract
Various imaging and electrophysiological studies in a number of different species and brain regions have revealed that neuronal dynamics associated with diverse behavioral patterns and cognitive tasks take on a sequence-like structure, even when encoding stationary concepts. These neuronal sequences are characterized by robust and reproducible spatiotemporal activation patterns. This suggests that the role of neuronal sequences may be much more fundamental for brain function than is commonly believed. Furthermore, the idea that the brain is not simply a passive observer but an active predictor of its sensory input is supported by an enormous amount of evidence in fields as diverse as human ethology and physiology, in addition to neuroscience. Hence, a central aspect of this review is to illustrate how neuronal sequences can be understood as critical for probabilistic predictive information processing, and what dynamical principles can be used as generators of neuronal sequences. Moreover, since different lines of evidence from neuroscience and computational modeling suggest that the brain is organized in a functional hierarchy of time scales, we will also review how models based on sequence-generating principles can be embedded in such a hierarchy, to form a generative model for recognition and prediction of sensory input. We briefly introduce the Bayesian brain hypothesis as a prominent mathematical description of how online (i.e., fast) recognition and predictions may be computed by the brain. Finally, we briefly discuss some recent advances in machine learning, where spatiotemporally structured methods (akin to neuronal sequences) and hierarchical networks have independently been developed for a wide range of tasks.
We conclude that the investigation of specific dynamical and structural principles of sequential brain activity not only helps us understand how the brain processes information and generates predictions, but also informs us about neuroscientific principles potentially useful for designing more efficient artificial neuronal networks for machine learning tasks.
Affiliation(s)
- Sascha Frölich
- Department of Psychology, Technische Universität Dresden, Dresden, Germany
|
15
|
Mackwood O, Naumann LB, Sprekeler H. Learning excitatory-inhibitory neuronal assemblies in recurrent networks. eLife 2021; 10:59715. [PMID: 33900199 PMCID: PMC8075581 DOI: 10.7554/elife.59715] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Received: 06/05/2020] [Accepted: 03/03/2021] [Indexed: 12/22/2022]
Abstract
Understanding the connectivity observed in the brain and how it emerges from local plasticity rules is a grand challenge in modern neuroscience. In the primary visual cortex (V1) of mice, synapses between excitatory pyramidal neurons and inhibitory parvalbumin-expressing (PV) interneurons tend to be stronger for neurons that respond to similar stimulus features, although these neurons are not topographically arranged according to their stimulus preference. The presence of such excitatory-inhibitory (E/I) neuronal assemblies indicates a stimulus-specific form of feedback inhibition. Here, we show that activity-dependent synaptic plasticity on input and output synapses of PV interneurons generates a circuit structure that is consistent with mouse V1. Computational modeling reveals that both forms of plasticity must act in synergy to form the observed E/I assemblies. Once established, these assemblies produce a stimulus-specific competition between pyramidal neurons. Our model suggests that activity-dependent plasticity can refine inhibitory circuits to actively shape cortical computations.
Affiliation(s)
- Owen Mackwood
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Department for Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
- Laura B Naumann
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Department for Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
- Henning Sprekeler
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Department for Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
|
16
|
Reifenstein ET, Bin Khalid I, Kempter R. Synaptic learning rules for sequence learning. eLife 2021; 10:e67171. [PMID: 33860763 PMCID: PMC8175084 DOI: 10.7554/elife.67171] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Received: 02/03/2021] [Accepted: 03/31/2021] [Indexed: 12/29/2022]
Abstract
Remembering the temporal order of a sequence of events is a task easily performed by humans in everyday life, but the underlying neuronal mechanisms are unclear. This problem is particularly intriguing as human behavior often proceeds on a time scale of seconds, which is in stark contrast to the much faster millisecond time scale of neuronal processing in our brains. One long-held hypothesis in sequence learning suggests that a particular temporal fine-structure of neuronal activity - termed 'phase precession' - enables the compression of slow behavioral sequences down to the fast time scale of the induction of synaptic plasticity. Using mathematical analysis and computer simulations, we find that - for short enough synaptic learning windows - phase precession can improve temporal-order learning tremendously and that the asymmetric part of the synaptic learning window is essential for temporal-order learning. To test these predictions, we suggest experiments that selectively alter phase precession or the learning window and evaluate memory of temporal order.
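The compression argument lends itself to a back-of-the-envelope illustration. The sketch below is a toy calculation, not the authors' model: it assumes an asymmetric exponential STDP window with a 20 ms time constant and a 125 ms theta cycle, and shows that a two-second behavioural lag is essentially invisible to the window, whereas the same pre-before-post order repeated at a ~20 ms offset within each theta cycle (as phase precession would arrange) is readily potentiated.

```python
import numpy as np

# Toy illustration (assumed parameters, not the authors' model): an
# asymmetric exponential STDP window integrated over pre->post delays.
def stdp(delays, a_plus=1.0, a_minus=0.5, tau=20e-3):
    """Total weight change for pre->post delays in seconds
    (positive delay = pre before post, which potentiates)."""
    dt = np.asarray(delays, dtype=float)
    return float(np.sum(np.where(dt > 0,
                                 a_plus * np.exp(-dt / tau),
                                 -a_minus * np.exp(dt / tau))))

slow = [2.0]                  # behavioural lag: B fires 2 s after A
compressed = [0.02] * 8       # phase precession: 20 ms lag in 8 theta cycles

print(stdp(slow))             # vanishingly small: order leaves no trace
print(stdp(compressed))       # clearly positive: order becomes learnable
print(stdp([-d for d in compressed]))  # reversed order: depression
```

Only the asymmetric part of the window distinguishes the two presentation orders, echoing the authors' point that this asymmetry is essential for temporal-order learning.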
Affiliation(s)
- Eric Torsten Reifenstein
- Institute for Theoretical Biology, Department of Biology, Humboldt-Universität zu Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Ikhwan Bin Khalid
- Institute for Theoretical Biology, Department of Biology, Humboldt-Universität zu Berlin, Berlin, Germany
- Richard Kempter
- Institute for Theoretical Biology, Department of Biology, Humboldt-Universität zu Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Einstein Center for Neurosciences Berlin, Berlin, Germany
|
17
|
Maes A, Barahona M, Clopath C. Learning compositional sequences with multiple time scales through a hierarchical network of spiking neurons. PLoS Comput Biol 2021; 17:e1008866. [PMID: 33764970 PMCID: PMC8023498 DOI: 10.1371/journal.pcbi.1008866] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Received: 09/23/2020] [Revised: 04/06/2021] [Accepted: 03/08/2021] [Indexed: 11/17/2022]
Abstract
Sequential behaviour is often compositional and organised across multiple time scales: a set of individual elements developing on short time scales (motifs) are combined to form longer functional sequences (syntax). Such organisation leads to a natural hierarchy that can be used advantageously for learning, since the motifs and the syntax can be acquired independently. Despite mounting experimental evidence for hierarchical structures in neuroscience, models for temporal learning based on neuronal networks have mostly focused on serial methods. Here, we introduce a network model of spiking neurons with a hierarchical organisation aimed at sequence learning on multiple time scales. Using biophysically motivated neuron dynamics and local plasticity rules, the model can learn motifs and syntax independently. Furthermore, the model can relearn sequences efficiently and store multiple sequences. Compared to serial learning, the hierarchical model displays faster learning, more flexible relearning, increased capacity, and higher robustness to perturbations. The hierarchical model redistributes the variability: it achieves high motif fidelity at the cost of higher variability in the between-motif timings.
Affiliation(s)
- Amadeus Maes
- Bioengineering Department, Imperial College London, London, United Kingdom
- Mauricio Barahona
- Mathematics Department, Imperial College London, London, United Kingdom
- Claudia Clopath
- Bioengineering Department, Imperial College London, London, United Kingdom
|
18
|
Characteristics of sequential activity in networks with temporally asymmetric Hebbian learning. Proc Natl Acad Sci U S A 2020; 117:29948-29958. [PMID: 33177232 PMCID: PMC7703604 DOI: 10.1073/pnas.1918674117] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Indexed: 11/24/2022]
Abstract
Sequential activity is a prominent feature of many neural systems, in multiple behavioral contexts. Here, we investigate how Hebbian rules lead to storage and recall of random sequences of inputs in both rate and spiking recurrent networks. In the case of the simplest (bilinear) rule, we characterize extensively the regions in parameter space that allow sequence retrieval and compute analytically the storage capacity of the network. We show that nonlinearities in the learning rule can lead to sparse sequences and find that sequences maintain robust decoding but display highly labile dynamics to continuous changes in the connectivity matrix, similar to recent observations in hippocampus and parietal cortex.

Sequential activity has been observed in multiple neuronal circuits across species, neural structures, and behaviors. It has been hypothesized that sequences could arise from learning processes. However, it is still unclear whether biologically plausible synaptic plasticity rules can organize neuronal activity to form sequences whose statistics match experimental observations. Here, we investigate temporally asymmetric Hebbian rules in sparsely connected recurrent rate networks and develop a theory of the transient sequential activity observed after learning. These rules transform a sequence of random input patterns into synaptic weight updates. After learning, recalled sequential activity is reflected in the transient correlation of network activity with each of the stored input patterns. Using mean-field theory, we derive a low-dimensional description of the network dynamics and compute the storage capacity of these networks. Multiple temporal characteristics of the recalled sequential activity are consistent with experimental observations. We find that the degree of sparseness of the recalled sequences can be controlled by nonlinearities in the learning rule. Furthermore, sequences maintain robust decoding, but display highly labile dynamics, when synaptic connectivity is continuously modified due to noise or storage of other patterns, similar to recent observations in hippocampus and parietal cortex. Finally, we demonstrate that our results also hold in recurrent networks of spiking neurons with separate excitatory and inhibitory populations.
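The core mechanism described above can be sketched in a few lines. The following is a deliberately minimal rate-network caricature under stated assumptions (bilinear asymmetric rule, tanh transfer with gain 2, no sparseness and no separate E/I populations, unlike the paper's full analysis): after storing a sequence of random patterns with an asymmetric outer-product rule, cueing the first pattern produces transient overlaps that peak one pattern after another.

```python
import numpy as np

# Minimal caricature (assumptions as stated in the text above).
rng = np.random.default_rng(1)
N, P, gain = 1000, 5, 2.0
xi = rng.choice([-1.0, 1.0], size=(P, N))       # random patterns to store

# Temporally asymmetric outer-product ("bilinear") rule:
# pattern mu is wired onto its successor mu + 1.
W = sum(np.outer(xi[m + 1], xi[m]) for m in range(P - 1)) / N

dt, tau, steps = 0.05, 1.0, 300
r = xi[0].copy()                                # cue with the first pattern
overlaps = np.empty((steps, P))
for t in range(steps):
    overlaps[t] = xi @ r / N                    # correlation with each pattern
    r += dt / tau * (-r + np.tanh(gain * (W @ r)))

peak_steps = overlaps.argmax(axis=0)
print(peak_steps)     # peak times increase along the stored sequence
```

The overlap with each stored pattern peaks in sequence order, the signature of transient sequential recall described above; the sparse and spiking variants analyzed in the paper go well beyond this sketch.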
|
19
|
Generation of Sharp Wave-Ripple Events by Disinhibition. J Neurosci 2020; 40:7811-7836. [PMID: 32913107 PMCID: PMC7548694 DOI: 10.1523/jneurosci.2174-19.2020] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Received: 09/09/2019] [Revised: 06/29/2020] [Accepted: 07/17/2020] [Indexed: 11/21/2022]
Abstract
Sharp wave-ripple complexes (SWRs) are hippocampal network phenomena involved in memory consolidation. To date, the mechanisms underlying their occurrence remain obscure. Here, we show how the interactions between pyramidal cells, parvalbumin-positive (PV+) basket cells, and an unidentified class of anti-SWR interneurons can contribute to the initiation and termination of SWRs. Using a biophysically constrained model of a network of spiking neurons and a rate-model approximation, we demonstrate that SWRs emerge as a result of the competition between two interneuron populations and the resulting disinhibition of pyramidal cells. Our models explain how the activation of pyramidal cells or PV+ cells can trigger SWRs, as shown in vitro, and suggest that PV+ cell-mediated short-term synaptic depression influences the experimentally reported dynamics of SWR events. Furthermore, we predict that the silencing of anti-SWR interneurons can trigger SWRs. These results broaden our understanding of the microcircuits supporting the generation of memory-related network dynamics.

SIGNIFICANCE STATEMENT: The hippocampus is a part of the mammalian brain that is crucial for episodic memories. During periods of sleep and inactive waking, the extracellular activity of the hippocampus is dominated by sharp wave-ripple events (SWRs), which have been shown to be important for memory consolidation. The mechanisms regulating the emergence of these events are still unclear. We developed a computational model to study the emergence of SWRs and to explain the roles of different cell types in regulating them. The model accounts for several previously unexplained features of SWRs and thus advances the understanding of memory-related dynamics.
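The competition-and-disinhibition idea can be caricatured with a two-population rate model. The sketch below is a drastic simplification, not the paper's biophysically constrained spiking network, and all parameters are invented for illustration: an "anti-SWR" interneuron population A tonically suppresses pyramidal activity P, and removing A's drive (mimicking the predicted silencing) disinhibits P.

```python
# Rate-model cartoon (invented parameters, far simpler than the paper's
# spiking network): pyramidal population P is suppressed by an anti-SWR
# interneuron population A; silencing A disinhibits P, in the spirit of
# the prediction that silencing anti-SWR interneurons can trigger SWRs.
def simulate(drive_a, steps=2000, dt=1e-3, tau=10e-3):
    p = a = 0.0
    for _ in range(steps):
        dp = -p + max(1.0 + 0.5 * p - 3.0 * a, 0.0)  # A inhibits P
        da = -a + max(drive_a - p, 0.0)              # P inhibits A back
        p += dt / tau * dp
        a += dt / tau * da
    return p

baseline = simulate(drive_a=1.5)   # A active: P stays silent
swr_like = simulate(drive_a=0.0)   # A silenced: P disinhibited
print(baseline, swr_like)
```

The mutual inhibition makes the suppressed and released states qualitatively distinct, which is the competition mechanism the abstract describes; ripple-frequency structure and short-term depression require the full model.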
|
20
|
Stöber TM, Lehr AB, Hafting T, Kumar A, Fyhn M. Selective neuromodulation and mutual inhibition within the CA3–CA2 system can prioritize sequences for replay. Hippocampus 2020; 30:1228-1238. [DOI: 10.1002/hipo.23256] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Received: 04/06/2020] [Revised: 07/10/2020] [Accepted: 08/07/2020] [Indexed: 11/06/2022]
Affiliation(s)
- Tristan M. Stöber
- Department of Computational Physiology, Simula Research Laboratory, Lysaker, Norway
- Centre for Integrative Neuroplasticity, University of Oslo, Oslo, Norway
- Department of Informatics, University of Oslo, Oslo, Norway
- Andrew B. Lehr
- Department of Computational Physiology, Simula Research Laboratory, Lysaker, Norway
- Centre for Integrative Neuroplasticity, University of Oslo, Oslo, Norway
- Department of Computational Neuroscience, University of Göttingen, Göttingen, Germany
- Torkel Hafting
- Centre for Integrative Neuroplasticity, University of Oslo, Oslo, Norway
- Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
- Arvind Kumar
- Department of Computational Science and Technology, KTH Royal Institute of Technology, Stockholm, Sweden
- Marianne Fyhn
- Centre for Integrative Neuroplasticity, University of Oslo, Oslo, Norway
- Department of Biosciences, University of Oslo, Oslo, Norway
|
21
|
Gwak J, Kwag J. Distinct subtypes of inhibitory interneurons differentially promote the propagation of rate and temporal codes in the feedforward neural network. Chaos 2020; 30:053102. [PMID: 32491918 DOI: 10.1063/1.5134765] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Received: 11/01/2019] [Accepted: 04/09/2020] [Indexed: 06/11/2023]
Abstract
Sensory information is believed to be encoded in neuronal spikes using two different neural codes, the rate code (spike firing rate) and the temporal code (precisely-timed spikes). Since the sensory cortex has a highly hierarchical feedforward structure, sensory information-carrying neural codes should reliably propagate across the feedforward network (FFN) of the cortex. Experimental evidence suggests that inhibitory interneurons, such as the parvalbumin-positive (PV) and somatostatin-positive (SST) interneurons, which have distinctively different electrophysiological and synaptic properties, modulate the neural codes during sensory information processing in the cortex. However, how PV and SST interneurons affect neural code propagation in the cortical FFN is unknown. We address this question by building a five-layer FFN model consisting of physiologically realistic Hodgkin-Huxley-type models of excitatory neurons and PV/SST interneurons at different ratios. In response to different firing rate inputs (20-80 Hz), a higher ratio of PV over SST interneurons promoted a reliable propagation of all ranges of firing rate inputs. In contrast, in response to a range of precisely-timed spikes in the form of pulse-packets [with a different number of spikes (α, 40-400 spikes) and degree of dispersion (σ, 0-20 ms)], a higher ratio of SST over PV interneurons promoted a reliable propagation of pulse-packets. Our simulation results show that PV and SST interneurons differentially promote a reliable propagation of the rate and temporal codes, respectively, indicating that the dynamic recruitment of PV and SST interneurons may play critical roles in a reliable propagation of sensory information-carrying neural codes in the cortical FFN.
Affiliation(s)
- Jeongheon Gwak
- Department of Brain and Cognitive Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, South Korea
- Jeehyun Kwag
- Department of Brain and Cognitive Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, South Korea
|
22
|
Swanson RA, Levenstein D, McClain K, Tingley D, Buzsáki G. Variable specificity of memory trace reactivation during hippocampal sharp wave ripples. Curr Opin Behav Sci 2020; 32:126-135. [DOI: 10.1016/j.cobeha.2020.02.008] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Indexed: 01/14/2023]
|
23
|
Green model to adapt classical conditioning learning in the hippocampus. Neuroscience 2020; 426:201-219. [PMID: 31812493 DOI: 10.1016/j.neuroscience.2019.11.021] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Received: 06/17/2019] [Revised: 11/11/2019] [Accepted: 11/12/2019] [Indexed: 12/27/2022]
Abstract
Compared with the biological paradigms of classical conditioning, non-adaptive computational models are not capable of realistically simulating the biological behavioural functions of the hippocampal regions, because of their implausible requirement for a large number of learning trials, which can be on the order of hundreds. Additionally, these models do not attain a unified, stable final state even after hundreds of learning trials. Moreover, the output response has a different threshold for similar tasks in the various models, with a prolonged transient response of unspecified status during the training and even the testing phases. Accordingly, the Green model is proposed: a combination of adaptive neuro-computational hippocampal and cortical models that adaptively updates all weights in all layers, for both intact and lesioned networks, using instar and outstar learning rules with adaptive resonance theory (ART). The Green model sustains and expands the classical conditioning biological paradigms of the non-adaptive models. The model also overcomes the irregular output-response behaviour through the proposed adaptivity. Further, the model successfully simulates the hippocampal regions without passing the final output response back to the whole network, which would be biologically implausible. The results of the Green model showed a significant improvement, confirmed by empirical studies of different tasks, and indicated that the model outperforms previously published models. All of the obtained results quickly attained a stable, desired final state (with a unified concluding state of either "1" or "0") with a significantly shorter transient duration.
|
24
|
Maes A, Barahona M, Clopath C. Learning spatiotemporal signals using a recurrent spiking network that discretizes time. PLoS Comput Biol 2020; 16:e1007606. [PMID: 31961853 PMCID: PMC7028299 DOI: 10.1371/journal.pcbi.1007606] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Received: 07/09/2019] [Revised: 02/18/2020] [Accepted: 12/13/2019] [Indexed: 12/15/2022]
Abstract
Learning to produce spatiotemporal sequences is a common task that the brain has to solve. The same neurons may be used to produce different sequential behaviours. The way the brain learns and encodes such tasks remains unknown, as current computational models do not typically use realistic, biologically plausible learning. Here, we propose a model where a recurrent network of excitatory and inhibitory spiking neurons drives a read-out layer: the dynamics of the driver recurrent network are trained to encode time, which is then mapped through the read-out neurons to encode another dimension, such as space or a phase. Different spatiotemporal patterns can be learned and encoded through the synaptic weights to the read-out neurons that follow common Hebbian learning rules. We demonstrate that the model is able to learn spatiotemporal dynamics on time scales that are behaviourally relevant and we show that the learned sequences are robustly replayed during a regime of spontaneous activity.
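The division of labour described here, a recurrent network that encodes time plus plastic read-out weights that map time onto a spatiotemporal pattern, can be caricatured by idealizing the recurrent "clock" as one-hot time bins and using a delta-style Hebbian update on the read-out weights. All details below are illustrative assumptions, not the paper's spiking implementation:

```python
import numpy as np

T, M = 10, 3                        # clock length (time bins), read-out cells
clock = np.eye(T)                   # idealized clock: unit t fires at time t
rng = np.random.default_rng(0)
target = rng.uniform(0.0, 1.0, (T, M))   # desired spatiotemporal pattern

W = np.zeros((T, M))                # plastic clock -> read-out weights
for _ in range(50):                 # repeated presentations of the target
    for t in range(T):
        pre, post = clock[t], target[t]
        # delta-style Hebbian update: pre activity times read-out error
        W += 0.1 * np.outer(pre, post - pre @ W)

replay = clock @ W                  # run the clock alone -> replayed pattern
print(np.abs(replay - target).max())    # small reconstruction error
```

Because each clock unit tags one time bin, the read-out rows converge independently to the target, and a different target pattern can reuse the same clock, mirroring how several spatiotemporal patterns can share one driver network.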
Affiliation(s)
- Amadeus Maes
- Department of Bioengineering, Imperial College London, London, United Kingdom
- Mauricio Barahona
- Department of Mathematics, Imperial College London, London, United Kingdom
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, United Kingdom
|
25
|
Pereira U, Brunel N. Unsupervised Learning of Persistent and Sequential Activity. Front Comput Neurosci 2020; 13:97. [PMID: 32009924 PMCID: PMC6978734 DOI: 10.3389/fncom.2019.00097] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Received: 10/03/2018] [Accepted: 12/23/2019] [Indexed: 11/25/2022]
Abstract
Two strikingly distinct types of activity have been observed in various brain structures during delay periods of delayed response tasks: persistent activity (PA), in which a sub-population of neurons maintains an elevated firing rate throughout an entire delay period; and sequential activity (SA), in which sub-populations of neurons are activated sequentially in time. It has been hypothesized that both types of dynamics can be "learned" by the relevant networks from the statistics of their inputs, thanks to mechanisms of synaptic plasticity. However, the necessary conditions for a synaptic plasticity rule and input statistics to learn these two types of dynamics in a stable fashion are still unclear. In particular, it is unclear whether a single learning rule is able to learn both types of activity patterns, depending on the statistics of the inputs driving the network. Here, we first characterize the complete bifurcation diagram of a firing rate model of multiple excitatory populations with an inhibitory mechanism, as a function of the parameters characterizing its connectivity. We then investigate how an unsupervised temporally asymmetric Hebbian plasticity rule shapes the dynamics of the network. Consistent with previous studies, we find that for stable learning of PA and SA, an additional stabilization mechanism is necessary. We show that a generalized version of the standard multiplicative homeostatic plasticity (Renart et al., 2003; Toyoizumi et al., 2014) stabilizes learning by effectively masking excitatory connections during stimulation and unmasking those connections during retrieval. Using the bifurcation diagram derived for fixed connectivity, we study analytically the temporal evolution and the steady state of the learned recurrent architecture as a function of parameters characterizing the external inputs. Slowly changing stimuli lead to PA, while fast-changing stimuli lead to SA.
Our network model shows how a network with plastic synapses can stably and flexibly learn PA and SA in an unsupervised manner.
Affiliation(s)
- Ulises Pereira
- Department of Statistics, The University of Chicago, Chicago, IL, United States
- Nicolas Brunel
- Department of Statistics, The University of Chicago, Chicago, IL, United States
- Department of Neurobiology, The University of Chicago, Chicago, IL, United States
- Department of Neurobiology, Duke University, Durham, NC, United States
- Department of Physics, Duke University, Durham, NC, United States
|
26
|
Kang L, DeWeese MR. Replay as wavefronts and theta sequences as bump oscillations in a grid cell attractor network. eLife 2019; 8:46351. [PMID: 31736462 PMCID: PMC6901334 DOI: 10.7554/elife.46351] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Received: 03/01/2019] [Accepted: 11/15/2019] [Indexed: 11/17/2022]
Abstract
Grid cells fire in sequences that represent rapid trajectories in space. During locomotion, theta sequences encode sweeps in position starting slightly behind the animal and ending ahead of it. During quiescence and slow wave sleep, bouts of synchronized activity represent long trajectories called replays, which are well-established in place cells and have been recently reported in grid cells. Theta sequences and replay are hypothesized to facilitate many cognitive functions, but their underlying mechanisms are unknown. One mechanism proposed for grid cell formation is the continuous attractor network. We demonstrate that this established architecture naturally produces theta sequences and replay as distinct consequences of modulating external input. Driving inhibitory interneurons at the theta frequency causes attractor bumps to oscillate in speed and size, which gives rise to theta sequences and phase precession, respectively. Decreasing input drive to all neurons produces traveling wavefronts of activity that are decoded as replays.
Affiliation(s)
- Louis Kang
- Redwood Center for Theoretical Neuroscience, Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, United States
- Department of Physics, University of California, Berkeley, Berkeley, United States
- Michael R DeWeese
- Redwood Center for Theoretical Neuroscience, Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, United States
- Department of Physics, University of California, Berkeley, Berkeley, United States
|
27
|
Nicola W, Clopath C. A diversity of interneurons and Hebbian plasticity facilitate rapid compressible learning in the hippocampus. Nat Neurosci 2019; 22:1168-1181. [PMID: 31235906 DOI: 10.1038/s41593-019-0415-2] [Citation(s) in RCA: 40] [Impact Index Per Article: 6.7] [Received: 06/19/2018] [Accepted: 04/23/2019] [Indexed: 11/09/2022]
Abstract
The hippocampus is able to rapidly learn incoming information, even if that information is only observed once. Furthermore, this information can be replayed in a compressed format in either forward or reverse modes during sharp wave-ripples (SPW-Rs). We leveraged state-of-the-art techniques in training recurrent spiking networks to demonstrate how primarily interneuron networks can achieve the following: (1) generate internal theta sequences to bind externally elicited spikes in the presence of inhibition from the medial septum; (2) compress learned spike sequences in the form of a SPW-R when septal inhibition is removed; (3) generate and refine high-frequency assemblies during SPW-R-mediated compression; and (4) regulate the inter-SPW interval timing between SPW-Rs in ripple clusters. From the fast timescale of neurons to the slow timescale of behaviors, interneuron networks serve as the scaffolding for one-shot learning by replaying, reversing, refining, and regulating spike sequences.
Collapse
Affiliation(s)
- Wilten Nicola
- Department of Bioengineering, Imperial College London, London, UK
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, UK
28
Herpich J, Tetzlaff C. Principles underlying the input-dependent formation and organization of memories. Netw Neurosci 2019; 3:606-634. [PMID: 31157312 PMCID: PMC6542621 DOI: 10.1162/netn_a_00086] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2018] [Accepted: 03/21/2019] [Indexed: 11/29/2022] Open
Abstract
The neuronal system exhibits the remarkable ability to dynamically store and organize incoming information into a web of memory representations (items), which is essential for the generation of complex behaviors. Central to memory function is that such memory items must be (1) discriminated from each other, (2) associated with each other, or (3) brought into a sequential order. However, how these three basic mechanisms are robustly implemented in an input-dependent manner by the underlying complex neuronal and synaptic dynamics is still unknown. Here, we develop a mathematical framework that provides a direct link between the different synaptic mechanisms determining the neuronal and synaptic dynamics of the network, and we use it to create a network that emulates the above mechanisms. Combining correlation-based synaptic plasticity and homeostatic synaptic scaling, we demonstrate that these mechanisms enable the reliable formation of sequences and associations between two memory items but still lack the capability for discrimination. We show that this shortcoming can be removed by additionally considering inhibitory synaptic plasticity. Thus, the framework presented here provides a new, functionally motivated link between different known synaptic mechanisms, leading to the self-organization of fundamental memory mechanisms.
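The combination of correlation-based plasticity with homeostatic synaptic scaling that this abstract describes can be sketched in a few lines. This is an illustrative toy, not the authors' model; the update rule's form and all parameter values are assumptions:

```python
import numpy as np

def plasticity_step(w, pre, post, eta=0.01, w_target=1.0, tau_scale=100.0):
    """One update combining a correlation-based (Hebbian) term with
    homeostatic synaptic scaling toward a target total weight."""
    w = w + eta * pre * post                        # Hebbian: grows with pre-post correlation
    w = w + (w / tau_scale) * (w_target - w.sum())  # scaling: multiplicative pull toward target
    return np.clip(w, 0.0, None)                    # excitatory weights stay non-negative

w = np.array([0.2, 0.3])
for _ in range(1000):
    w = plasticity_step(w, pre=np.array([1.0, 0.5]), post=1.0)
# Hebbian growth alone would diverge; the scaling term keeps the summed weight bounded.
```

The point of the toy is the interplay: the Hebbian term alone is unstable, while the scaling term alone erases correlations; together they settle into a bounded weight distribution shaped by the input statistics.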
Affiliation(s)
- Juliane Herpich
- Department of Computational Neuroscience, Third Institute of Physics - Biophysics, Georg-August-University, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Georg-August-University, Göttingen, Germany
- Christian Tetzlaff
- Department of Computational Neuroscience, Third Institute of Physics - Biophysics, Georg-August-University, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Georg-August-University, Göttingen, Germany
29
Pang R, Fairhall AL. Fast and flexible sequence induction in spiking neural networks via rapid excitability changes. eLife 2019; 8:44324. [PMID: 31081753 PMCID: PMC6538377 DOI: 10.7554/elife.44324] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2018] [Accepted: 05/11/2019] [Indexed: 12/14/2022] Open
Abstract
Cognitive flexibility likely depends on modulation of the dynamics underlying how biological neural networks process information. While dynamics can be reshaped by gradually modifying connectivity, less is known about mechanisms operating on faster timescales. A compelling entry point to this problem is the observation that exploratory behaviors can rapidly cause selective hippocampal sequences to 'replay' during rest. Using a spiking network model, we asked whether simplified replay could arise from three biological components: fixed recurrent connectivity; stochastic 'gating' inputs; and rapid gating input scaling via long-term potentiation of intrinsic excitability (LTP-IE). Indeed, these enabled both forward and reverse replay of recent sensorimotor-evoked sequences, despite unchanged recurrent weights. LTP-IE 'tags' specific neurons with increased spiking probability under gating input, and ordering is reconstructed from recurrent connectivity. We further show how LTP-IE can implement temporary stimulus-response mappings. This elucidates a novel combination of mechanisms that might play a role in rapid cognitive flexibility.
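The division of labor described here can be illustrated with a toy readout (my sketch, not the authors' code): LTP-IE only selects *which* neurons may fire under the gating input, while the replayed *order* is read out of fixed recurrent weights, so transposing the weight matrix yields reverse replay. The greedy, winner-take-all readout and the refractory rule are simplifying assumptions:

```python
import numpy as np

def replay_order(W, tagged, start):
    """Reconstruct a replayed sequence from fixed recurrent weights.

    W[j, i] is the synapse from neuron i onto neuron j; `tagged` marks
    neurons with LTP-IE (eligible to fire under the gating input).
    Greedy readout with a refractory rule: always follow the strongest
    synapse to a tagged neuron that has not fired yet.
    """
    order, fired, current = [start], {start}, start
    while True:
        drive = W[:, current].copy()
        drive[~tagged] = 0.0           # untagged neurons stay subthreshold
        drive[list(fired)] = 0.0       # refractory: no revisits
        if drive.max() <= 0.0:
            break
        current = int(np.argmax(drive))
        order.append(current)
        fired.add(current)
    return order

W = np.zeros((4, 4))
for i in range(3):
    W[i + 1, i] = 1.0                  # a fixed chain 0 -> 1 -> 2 -> 3
tagged = np.ones(4, dtype=bool)
forward = replay_order(W, tagged, 0)    # [0, 1, 2, 3]
reverse = replay_order(W.T, tagged, 3)  # [3, 2, 1, 0]
```

Note that nothing in `W` changes between forward and reverse replay; only the direction of readout does, which mirrors the paper's claim that replay can emerge despite unchanged recurrent weights.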
Affiliation(s)
- Rich Pang
- Neuroscience Graduate Program, University of Washington, Seattle, United States; Department of Physiology and Biophysics, University of Washington, Seattle, United States; Computational Neuroscience Center, University of Washington, Seattle, United States
- Adrienne L Fairhall
- Department of Physiology and Biophysics, University of Washington, Seattle, United States; Computational Neuroscience Center, University of Washington, Seattle, United States
30
Matheus Gauy M, Lengler J, Einarsson H, Meier F, Weissenberger F, Yanik MF, Steger A. A Hippocampal Model for Behavioral Time Acquisition and Fast Bidirectional Replay of Spatio-Temporal Memory Sequences. Front Neurosci 2018; 12:961. [PMID: 30618583 PMCID: PMC6306028 DOI: 10.3389/fnins.2018.00961] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2018] [Accepted: 12/03/2018] [Indexed: 01/09/2023] Open
Abstract
The hippocampus is known to play a crucial role in the formation of long-term memory. Fast replays of previously experienced activities during sleep or after reward experiences are believed to be crucial for this, but how such replays are generated is still unclear. In this paper we propose a possible mechanism: we present a model that can store experienced trajectories on a behavioral timescale after a single run, and can subsequently replay such trajectories bidirectionally, omitting specifics of the previous behavior such as speed, while allowing repetitions of events, even with different subsequent events. Our solution builds on the well-known concepts of one-shot learning and synfire chains, enhancing them with additional mechanisms based on global inhibition and disinhibition. For replays, our approach relies on dendritic spikes and cholinergic modulation, as supported by experimental data. We also hypothesize a functional role for disinhibition as a pacemaker during behavioral time.
Affiliation(s)
- Marcelo Matheus Gauy
- Department of Computer Science, Institute of Theoretical Computer Science, ETH Zurich, Zurich, Switzerland
- Johannes Lengler
- Department of Computer Science, Institute of Theoretical Computer Science, ETH Zurich, Zurich, Switzerland
- Hafsteinn Einarsson
- Department of Computer Science, Institute of Theoretical Computer Science, ETH Zurich, Zurich, Switzerland
- Florian Meier
- Department of Computer Science, Institute of Theoretical Computer Science, ETH Zurich, Zurich, Switzerland
- Felix Weissenberger
- Department of Computer Science, Institute of Theoretical Computer Science, ETH Zurich, Zurich, Switzerland
- Mehmet Fatih Yanik
- Department of Information Technology and Electrical Engineering, Institute for Neuroinformatics, ETH Zurich, Zurich, Switzerland
- Angelika Steger
- Department of Computer Science, Institute of Theoretical Computer Science, ETH Zurich, Zurich, Switzerland
31
Malerba P, Bazhenov M. Circuit mechanisms of hippocampal reactivation during sleep. Neurobiol Learn Mem 2018; 160:98-107. [PMID: 29723670 DOI: 10.1016/j.nlm.2018.04.018] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2017] [Revised: 03/13/2018] [Accepted: 04/30/2018] [Indexed: 10/17/2022]
Abstract
The hippocampus is important for memory and learning, being a brain site where initial memories are formed and where sharp wave-ripples (SWRs) are found, which are responsible for mapping recent memories to long-term storage during sleep-related memory replay. While this conceptual schema is well established, the specific intrinsic and network-level mechanisms driving spatio-temporal patterns of hippocampal activity during sleep, and specifically controlling off-line memory reactivation, are unknown. In this study, we discuss a model of the hippocampal CA1-CA3 network generating spontaneous characteristic SWR activity. Our study predicts the properties of CA3 input that are necessary for successful CA1 ripple generation, and the role of synaptic interactions and intrinsic excitability in spike sequence replay during SWRs. Specifically, we found that excitatory synaptic connections promote reactivation in both CA3 and CA1, but the different dynamics of sharp waves in CA3 and ripples in CA1 result in a differential role for synaptic inhibition in modulating replay: promoting spike sequence specificity in CA3 but not in CA1. Finally, we describe how awake learning of spatial trajectories leads to synaptic changes sufficient to drive hippocampal cells' reactivation during sleep, as required for sleep-related memory consolidation.
Affiliation(s)
- Paola Malerba
- Department of Medicine, University of California San Diego, United States
- Maxim Bazhenov
- Department of Medicine, University of California San Diego, United States
32
Joglekar MR, Mejias JF, Yang GR, Wang XJ. Inter-areal Balanced Amplification Enhances Signal Propagation in a Large-Scale Circuit Model of the Primate Cortex. Neuron 2018; 98:222-234.e8. [DOI: 10.1016/j.neuron.2018.02.031] [Citation(s) in RCA: 86] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2017] [Revised: 12/27/2017] [Accepted: 02/27/2018] [Indexed: 01/19/2023]
33
Mothersill C, Smith R, Wang J, Rusin A, Fernandez-Palomo C, Fazzari J, Seymour C. Biological Entanglement-Like Effect After Communication of Fish Prior to X-Ray Exposure. Dose Response 2018; 16:1559325817750067. [PMID: 29479295 PMCID: PMC5818098 DOI: 10.1177/1559325817750067] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2017] [Revised: 08/31/2017] [Accepted: 09/26/2017] [Indexed: 12/24/2022] Open
Abstract
The phenomenon by which irradiated organisms, including cells in vitro, communicate with unirradiated neighbors is well established in biology as the radiation-induced bystander effect (RIBE). Generally, the purpose of this communication is thought to be protective and adaptive, reflecting a highly conserved evolutionary mechanism enabling rapid adjustment to stressors in the environment. Stressors known to induce the effect were recently shown to include chemicals and even pathological agents. The mechanism is unknown, but our group has evidence that physical signals such as biophotons acting on cellular photoreceptors may be implicated. This raises the question of whether quantum biological processes may occur, as have been demonstrated in plant photosynthesis. To test this hypothesis, we decided to see whether any form of entanglement was operational in the system. Fish from two completely separate locations were allowed to meet for 2 hours either before or after irradiation of fish from one location only (group A fish). The results confirm RIBE signal production in both skin and gill of fish meeting both before and after irradiation of group A fish. The proteomic analysis revealed that direct irradiation resulted in pro-tumorigenic proteomic responses in rainbow trout. However, communication from these irradiated fish, both before and after they had been exposed to a 0.5 Gy X-ray dose, resulted in largely beneficial proteomic responses in completely nonirradiated trout. The results suggest that some form of anticipation of a stressor may occur, leading to a preconditioning effect or temporally displaced awareness after the fish become entangled.
Affiliation(s)
- Jiaxi Wang
- Department of Chemistry, Mass Spectrometry Facility, Queen’s University, Kingston, Ontario, Canada
34
Gönner L, Vitay J, Hamker FH. Predictive Place-Cell Sequences for Goal-Finding Emerge from Goal Memory and the Cognitive Map: A Computational Model. Front Comput Neurosci 2017; 11:84. [PMID: 29075187 PMCID: PMC5643423 DOI: 10.3389/fncom.2017.00084] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2017] [Accepted: 09/01/2017] [Indexed: 01/19/2023] Open
Abstract
Hippocampal place-cell sequences observed during awake immobility often represent previous experience, suggesting a role in memory processes. However, recent reports of goals being overrepresented in sequential activity suggest a role in short-term planning, although a detailed understanding of the origins of hippocampal sequential activity and of its functional role is still lacking. In particular, it is unknown which mechanism could support efficient planning by generating place-cell sequences biased toward known goal locations, in an adaptive and constructive fashion. To address these questions, we propose a model of spatial learning and sequence generation as interdependent processes, integrating cortical contextual coding, synaptic plasticity and neuromodulatory mechanisms into a map-based approach. Following goal learning, sequential activity emerges from continuous attractor network dynamics biased by goal memory inputs. We apply Bayesian decoding on the resulting spike trains, allowing a direct comparison with experimental data. Simulations show that this model (1) explains the generation of never-experienced sequence trajectories in familiar environments, without requiring virtual self-motion signals, (2) accounts for the bias in place-cell sequences toward goal locations, (3) highlights their utility in flexible route planning, and (4) provides specific testable predictions.
Affiliation(s)
- Lorenz Gönner
- Artificial Intelligence, Department of Computer Science, Technische Universität Chemnitz, Chemnitz, Germany
- Julien Vitay
- Artificial Intelligence, Department of Computer Science, Technische Universität Chemnitz, Chemnitz, Germany
- Fred H Hamker
- Artificial Intelligence, Department of Computer Science, Technische Universität Chemnitz, Chemnitz, Germany; Bernstein Center Computational Neuroscience, Humboldt-Universität Berlin, Berlin, Germany
35
Sprekeler H. Functional consequences of inhibitory plasticity: homeostasis, the excitation-inhibition balance and beyond. Curr Opin Neurobiol 2017; 43:198-203. [PMID: 28500933 DOI: 10.1016/j.conb.2017.03.014] [Citation(s) in RCA: 45] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2016] [Revised: 03/12/2017] [Accepted: 03/22/2017] [Indexed: 11/18/2022]
Abstract
Computational neuroscience has a long-standing tradition of investigating the consequences of excitatory synaptic plasticity. In contrast, the functions of inhibitory plasticity are still largely nebulous, particularly given the bewildering diversity of interneurons in the brain. Here, we review recent computational advances that provide first suggestions for the functional roles of inhibitory plasticity, such as a maintenance of the excitation-inhibition balance, a stabilization of recurrent network dynamics and a decorrelation of sensory responses. The field is still in its infancy, but given the existing body of theory for excitatory plasticity, it is likely to mature quickly and deliver important insights into the self-organization of inhibitory circuits in the brain.
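One concrete example of the excitation-inhibition balancing that this review discusses is a homeostatic inhibitory plasticity rule in the spirit of Vogels et al. (2011). The sketch below and its parameter values are illustrative, not taken from the review: inhibitory weights grow when the postsynaptic rate exceeds a target and shrink below it, so inhibition comes to track excitation.

```python
def inh_plasticity_step(w_inh, pre_inh, post, eta=0.05, rho0=1.0):
    """Homeostatic inhibitory plasticity: dw = eta * pre * (post - rho0).
    Above the target rate rho0 inhibition strengthens; below it, it weakens.
    Weights are clipped at zero (inhibitory strength, not sign)."""
    return max(w_inh + eta * pre_inh * (post - rho0), 0.0)

exc_drive, pre_inh, w = 3.0, 1.0, 0.0
for _ in range(500):
    post = max(exc_drive - w * pre_inh, 0.0)  # crude linear-rectified rate readout
    w = inh_plasticity_step(w, pre_inh, post)
# w has grown until post ~= rho0: inhibition has come to balance excitation.
```

The fixed point is reached when the postsynaptic rate equals rho0, which is exactly the balance condition: any surplus excitatory drive is absorbed by a matching growth of the inhibitory weight.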
Affiliation(s)
- Henning Sprekeler
- Department for Electrical Engineering and Computer Science, Berlin Institute of Technology, and Bernstein Center for Computational Neuroscience, Marchstr. 23, 10587 Berlin, Germany.