1. Pang R, Recanatesi S. A non-Hebbian code for episodic memory. Sci Adv 2025;11:eado4112. PMID: 39982994; PMCID: PMC11844740; DOI: 10.1126/sciadv.ado4112.
Abstract
Hebbian plasticity has long dominated neurobiological models of memory formation. Yet, plasticity rules operating on one-shot episodic memory timescales rarely depend on both pre- and postsynaptic spiking, challenging Hebbian theory in this crucial regime. Here, we present an episodic memory model governed by a simpler rule depending only on presynaptic activity. We show that this rule, capitalizing on high-dimensional neural activity with restricted transitions, naturally stores episodes as paths through complex state spaces like those underlying a world model. The resulting memory traces, which we term path vectors, are highly expressive and decodable with an odor-tracking algorithm. We show that path vectors are robust alternatives to Hebbian traces, support one-shot sequential and associative recall, along with policy learning, and shed light on specific hippocampal plasticity rules. Thus, non-Hebbian plasticity is sufficient for flexible memory and learning and well-suited to encode episodes and policies as paths through a world model.
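The presynaptic-only flavor of such a rule can be caricatured in a few lines (a minimal sketch under toy assumptions of our own — a one-way ring as the world model and a greedy decoder — not the authors' implementation; all names here are hypothetical):

```python
import numpy as np

n_states = 6
# A toy world model with restricted transitions: a one-way ring of states.
allowed = {i: [(i + 1) % n_states] for i in range(n_states)}

# Presynaptic-only plasticity: visiting state s potentiates every synapse
# whose presynaptic neuron codes for s, independent of postsynaptic spiking.
W = np.zeros((n_states, n_states))
episode = [0, 1, 2, 3]
for s in episode:
    W[s, :] += 1.0

# The stored trace is a "path vector": which states carry any trace at all.
path_vector = (W.sum(axis=1) > 0).astype(int)

# Recall: from the start state, greedily follow allowed transitions whose
# target carries a trace (a crude stand-in for an odor-tracking-style decoder).
recalled = [episode[0]]
while True:
    tagged = [t for t in allowed[recalled[-1]] if path_vector[t]]
    if not tagged:
        break
    recalled.append(tagged[0])
```

Because the world model restricts which transitions exist, the unordered set of tagged states suffices to reconstruct the ordered episode.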
Affiliation(s)
- Rich Pang
- Center for the Physics of Biological Function, Princeton, NJ and New York, NY, USA
- Princeton Neuroscience Institute, Princeton, NJ, USA
- Stefano Recanatesi
- Allen Institute for Neural Dynamics, Seattle, WA, USA
- Technion–Israel Institute of Technology, Haifa, Israel
2. Renner A, Sheldon F, Zlotnik A, Tao L, Sornborger A. The backpropagation algorithm implemented on spiking neuromorphic hardware. Nat Commun 2024;15:9691. PMID: 39516210; PMCID: PMC11549378; DOI: 10.1038/s41467-024-53827-9.
Abstract
The capabilities of natural neural systems have inspired both new generations of machine learning algorithms and neuromorphic, very large-scale integrated circuits capable of fast, low-power information processing. However, it has been argued that most modern machine learning algorithms are not neurophysiologically plausible. In particular, the workhorse of modern deep learning, the backpropagation algorithm, has proven difficult to translate to neuromorphic hardware. This study presents a neuromorphic, spiking backpropagation algorithm based on synfire-gated dynamical information coordination and processing, implemented on Intel's Loihi neuromorphic research processor. We demonstrate a proof-of-principle three-layer circuit that learns to classify digits and clothing items from the MNIST and Fashion-MNIST datasets. To our knowledge, this is the first work to show a spiking neural network (SNN) implementation of the exact backpropagation algorithm that is fully on-chip, without a computer in the loop. It is competitive in accuracy with off-chip-trained SNNs and achieves an energy-delay product suitable for edge computing. This implementation shows a path toward using in-memory, massively parallel neuromorphic processors for low-power, low-latency implementation of modern deep learning applications.
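As a reminder of what an "exact backpropagation" circuit must compute, here is a minimal non-spiking sketch of the chain rule for a two-layer network, checked against a finite-difference derivative (an illustration of the algorithm itself with placeholder data, not of the synfire-gated Loihi implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
# Tiny dense network (4 -> 5 -> 3) with a sigmoid hidden layer and
# squared-error loss; weights and data are random placeholders.
W1, W2 = rng.normal(size=(5, 4)), rng.normal(size=(3, 5))
x, y = rng.normal(size=4), rng.normal(size=3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(W1, W2):
    return 0.5 * np.sum((W2 @ sigmoid(W1 @ x) - y) ** 2)

# Forward pass, then the exact backward pass (chain rule).
h = sigmoid(W1 @ x)                 # hidden activity
err = W2 @ h - y                    # dL/d(output)
gW2 = np.outer(err, h)              # dL/dW2
delta = (W2.T @ err) * h * (1 - h)  # error backpropagated through the sigmoid
gW1 = np.outer(delta, x)            # dL/dW1

# Sanity-check one entry against a numerical derivative.
eps = 1e-6
W1_bumped = W1.copy()
W1_bumped[0, 0] += eps
num_g = (loss(W1_bumped, W2) - loss(W1, W2)) / eps
```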
Affiliation(s)
- Alpha Renner
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, 8057, Switzerland
- Forschungszentrum Jülich, Jülich, 52428, Germany
- Forrest Sheldon
- Physics of Condensed Matter & Complex Systems (T-4), Los Alamos National Laboratory, Los Alamos, NM, 87545, USA
- London Institute for Mathematical Sciences, Royal Institution, London, W1S 4BS, UK
- Anatoly Zlotnik
- Applied Mathematics & Plasma Physics (T-5), Los Alamos National Laboratory, Los Alamos, NM, 87545, USA
- Louis Tao
- Center for Bioinformatics, National Laboratory of Protein Engineering and Plant Genetic Engineering, School of Life Sciences, Peking University, Beijing, 100871, China
- Center for Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, 100871, China
- Andrew Sornborger
- Information Sciences (CCS-3), Los Alamos National Laboratory, Los Alamos, NM, 87545, USA
3. Breffle J, Germaine H, Shin JD, Jadhav SP, Miller P. Intrinsic dynamics of randomly clustered networks generate place fields and preplay of novel environments. eLife 2024;13:RP93981. PMID: 39422556; PMCID: PMC11488848; DOI: 10.7554/elife.93981.
Abstract
During both sleep and awake immobility, hippocampal place cells reactivate time-compressed versions of sequences representing recently experienced trajectories in a phenomenon known as replay. Intriguingly, spontaneous sequences can also correspond to forthcoming trajectories in novel environments experienced later, in a phenomenon known as preplay. Here, we present a model showing that sequences of spikes correlated with the place fields underlying spatial trajectories in both previously experienced and future novel environments can arise spontaneously in neural circuits with random, clustered connectivity rather than pre-configured spatial maps. Moreover, the realistic place fields themselves arise in the circuit from minimal, landmark-based inputs. We find that preplay quality depends on the network's balance of cluster isolation and overlap, with optimal preplay occurring in small-world regimes of high clustering yet short path lengths. We validate the results of our model by applying the same place field and preplay analyses to previously published rat hippocampal place cell data. Our results show that clustered recurrent connectivity can generate spontaneous preplay and immediate replay of novel environments. These findings support a framework whereby novel sensory experiences become associated with preexisting "pluripotent" internal neural activity patterns.
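The two small-world quantities the model trades off — high clustering against short path lengths — can be measured directly on a toy randomly clustered network (a cartoon with hypothetical parameters, not the authors' spiking model):

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(2)
n, n_clusters, per_cluster = 60, 6, 2
# Each neuron joins `per_cluster` random clusters; pairs sharing a cluster
# are connected (a cartoon of randomly clustered connectivity).
membership = np.array([rng.choice(n_clusters, per_cluster, replace=False)
                       for _ in range(n)])
A = np.zeros((n, n), dtype=bool)
for i in range(n):
    for j in range(i + 1, n):
        if np.intersect1d(membership[i], membership[j]).size > 0:
            A[i, j] = A[j, i] = True

def clustering_coefficient(A):
    """Global clustering coefficient: fraction of closed triplets."""
    Af = A.astype(float)
    triangles = np.trace(Af @ Af @ Af) / 6.0
    deg = Af.sum(axis=1)
    triplets = (deg * (deg - 1)).sum() / 2.0
    return 3.0 * triangles / triplets

def mean_path_length(A):
    """Average BFS shortest-path length over reachable pairs."""
    n = A.shape[0]
    total, pairs = 0, 0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.flatnonzero(A[u]):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

C, L = clustering_coefficient(A), mean_path_length(A)
```

Varying `per_cluster` relative to `n_clusters` moves the network between cluster isolation (high C, long L) and heavy overlap (low C, short L).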
Affiliation(s)
- Jordan Breffle
- Neuroscience Program, Brandeis University, Waltham, United States
- Hannah Germaine
- Neuroscience Program, Brandeis University, Waltham, United States
- Justin D Shin
- Neuroscience Program, Brandeis University, Waltham, United States
- Volen National Center for Complex Systems, Brandeis University, Waltham, United States
- Department of Psychology, Brandeis University, Waltham, United States
- Shantanu P Jadhav
- Neuroscience Program, Brandeis University, Waltham, United States
- Volen National Center for Complex Systems, Brandeis University, Waltham, United States
- Department of Psychology, Brandeis University, Waltham, United States
- Paul Miller
- Neuroscience Program, Brandeis University, Waltham, United States
- Volen National Center for Complex Systems, Brandeis University, Waltham, United States
- Department of Biology, Brandeis University, Waltham, United States
4. Breffle J, Germaine H, Shin JD, Jadhav SP, Miller P. Intrinsic dynamics of randomly clustered networks generate place fields and preplay of novel environments. bioRxiv [Preprint] 2024:2023.10.26.564173. PMID: 37961479; PMCID: PMC10634993; DOI: 10.1101/2023.10.26.564173.
Affiliation(s)
- Jordan Breffle
- Neuroscience Program, Brandeis University, 415 South St., Waltham, MA 02454
- Hannah Germaine
- Neuroscience Program, Brandeis University, 415 South St., Waltham, MA 02454
- Justin D Shin
- Neuroscience Program, Brandeis University, 415 South St., Waltham, MA 02454
- Volen National Center for Complex Systems, Brandeis University, 415 South St., Waltham, MA 02454
- Department of Psychology, Brandeis University, 415 South St., Waltham, MA 02454
- Shantanu P Jadhav
- Neuroscience Program, Brandeis University, 415 South St., Waltham, MA 02454
- Volen National Center for Complex Systems, Brandeis University, 415 South St., Waltham, MA 02454
- Department of Psychology, Brandeis University, 415 South St., Waltham, MA 02454
- Paul Miller
- Neuroscience Program, Brandeis University, 415 South St., Waltham, MA 02454
- Volen National Center for Complex Systems, Brandeis University, 415 South St., Waltham, MA 02454
- Department of Biology, Brandeis University, 415 South St., Waltham, MA 02454
5. Kanigowski D, Urban-Ciecko J. Conditioning and pseudoconditioning differently change intrinsic excitability of inhibitory interneurons in the neocortex. Cereb Cortex 2024;34:bhae109. PMID: 38572735; PMCID: PMC10993172; DOI: 10.1093/cercor/bhae109.
Abstract
Many studies indicate a broad role of various classes of GABAergic interneurons in processes related to learning. However, little is known about how the learning process affects the intrinsic excitability of specific classes of interneurons in the neocortex. To determine this, we employed a simple conditioning model in mice, with vibrissae stimulation as the conditioned stimulus and a tail shock as the unconditioned one. In vitro whole-cell patch-clamp recordings showed an increase in intrinsic excitability of low-threshold spiking somatostatin-expressing interneurons (SST-INs) in layer 4 (L4) of the somatosensory (barrel) cortex after the conditioning paradigm. In contrast, pseudoconditioning reduced the intrinsic excitability of low-threshold spiking SST-INs, parvalbumin-expressing interneurons (PV-INs), and vasoactive intestinal polypeptide-expressing interneurons (VIP-INs) with an accommodating firing pattern in L4 of the barrel cortex. In general, increased intrinsic excitability was accompanied by narrowing of action potentials (APs), whereas decreased intrinsic excitability coincided with AP broadening. Altogether, these results show that both conditioning and pseudoconditioning lead to plastic changes in the intrinsic excitability of GABAergic interneurons in a cell-specific manner. In this way, changes in intrinsic excitability can be seen as a common mechanism of learning-induced plasticity in the GABAergic system.
Affiliation(s)
- Dominik Kanigowski
- Laboratory of Electrophysiology, Nencki Institute of Experimental Biology PAS, 3 Pasteur Street, 02-093 Warsaw, Poland
- Joanna Urban-Ciecko
- Laboratory of Electrophysiology, Nencki Institute of Experimental Biology PAS, 3 Pasteur Street, 02-093 Warsaw, Poland
6. Whelan MT, Jimenez-Rodriguez A, Prescott TJ, Vasilaki E. A robotic model of hippocampal reverse replay for reinforcement learning. Bioinspir Biomim 2022;18:015007. PMID: 36327454; DOI: 10.1088/1748-3190/ac9ffc.
Abstract
Hippocampal reverse replay, a phenomenon in which recently active hippocampal cells reactivate in reverse order, is thought to contribute to learning, particularly reinforcement learning (RL), in animals. Here, we present a novel computational model that exploits reverse replay to improve stability and performance on a homing task. The model takes inspiration from the hippocampal-striatal network, and learning occurs via a three-factor RL rule. To augment this model with hippocampal reverse replay, we derived a policy-gradient learning rule that associates place-cell activity with responses in cells representing actions, and a supervised learning rule of the same form that interprets the replay activity as a 'target' frequency. We evaluated the model using a simulated robot spatial navigation task inspired by the Morris water maze. Results suggest that reverse replay can improve performance stability over multiple trials. Our model exploits reverse replay as an additional channel for propagating information about desirable synaptic changes, reducing the need for long timescales in eligibility traces combined with low learning rates. We conclude that reverse replay can positively contribute to RL, although less stable learning is possible in its absence. Analogously, we postulate that reverse replay may enhance RL in the mammalian hippocampal-striatal system rather than provide its core mechanism.
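The two update rules described above can be sketched minimally: a three-factor eligibility-trace update gated by reward, followed by a reverse-replay pass of the same form (our own toy illustration with made-up dimensions and rates, not the paper's robot model):

```python
import numpy as np

n_place, n_action = 8, 2
W = np.zeros((n_action, n_place))   # place-cell -> action-cell weights
tau_e, lr = 0.9, 0.1

# A short run down the track: (active place cell, action taken) per step.
trajectory = [(0, 1), (1, 1), (2, 1), (3, 1)]

# Three-factor rule: pre x post coincidences accumulate in a decaying
# eligibility trace and are committed to the weights only when the third
# factor (reward) arrives at the end of the trajectory.
elig = np.zeros_like(W)
for place, action in trajectory:
    pre = np.zeros(n_place); pre[place] = 1.0
    post = np.zeros(n_action); post[action] = 1.0
    elig = tau_e * elig + np.outer(post, pre)
reward = 1.0
W += lr * reward * elig

# Reverse replay: sweep the trajectory backwards, treating the replayed
# activity as a supervised target of the same form, so early steps receive
# credit that the decaying trace alone would have discounted.
for place, action in reversed(trajectory):
    pre = np.zeros(n_place); pre[place] = 1.0
    target = np.zeros(n_action); target[action] = 1.0
    W += lr * np.outer(target, pre)
```

After the online pass, credit decays geometrically toward the start of the trajectory; the replay pass tops up every visited step, which is the stabilizing effect the abstract describes.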
Affiliation(s)
- Matthew T Whelan
- Department of Computer Science, The University of Sheffield, Sheffield, United Kingdom
- Sheffield Robotics, Sheffield, United Kingdom
- Alejandro Jimenez-Rodriguez
- Department of Computer Science, The University of Sheffield, Sheffield, United Kingdom
- Sheffield Robotics, Sheffield, United Kingdom
- Tony J Prescott
- Department of Computer Science, The University of Sheffield, Sheffield, United Kingdom
- Sheffield Robotics, Sheffield, United Kingdom
- Eleni Vasilaki
- Department of Computer Science, The University of Sheffield, Sheffield, United Kingdom
- Sheffield Robotics, Sheffield, United Kingdom
7. Zou Z, Alimohamadi H, Zakeri A, Imani F, Kim Y, Najafi MH, Imani M. Memory-inspired spiking hyperdimensional network for robust online learning. Sci Rep 2022;12:7641. PMID: 35538126; PMCID: PMC9090930; DOI: 10.1038/s41598-022-11073-3.
Abstract
Recently, brain-inspired computing models have shown great potential to outperform today's deep learning solutions in terms of robustness and energy efficiency. In particular, Spiking Neural Networks (SNNs) and HyperDimensional Computing (HDC) have shown promising results in enabling efficient and robust cognitive learning. Despite these successes, the two brain-inspired models have different strengths: while SNNs mimic the physical properties of the brain, HDC models the brain at a more abstract, functional level. Their design philosophies are complementary, which motivates their combination. Guided by a classical psychological model of memory, we propose SpikeHD, the first framework that fundamentally combines spiking neural networks and hyperdimensional computing. SpikeHD yields a scalable, strong cognitive learning system that better mimics brain functionality. It exploits spiking neural networks to extract low-level features while preserving the spatial and temporal correlations of raw event-based spike data, and then uses HDC to operate on the SNN output by mapping the signal into high-dimensional space, learning the abstract information, and classifying the data. Our extensive evaluation on a set of benchmark classification problems shows that, compared to an SNN architecture alone, SpikeHD (1) significantly enhances learning capability by exploiting two-stage information processing, (2) provides substantial robustness to noise and failure, and (3) reduces the network size and the number of parameters required to learn complex information.
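The HDC stage described above — map features into high-dimensional space, bundle class prototypes, classify by similarity — can be sketched generically (synthetic data stands in for SNN-extracted features; this illustrates plain hyperdimensional classification, not the SpikeHD implementation):

```python
import numpy as np

rng = np.random.default_rng(4)
D, n_features = 2000, 16

# Random bipolar projection: the hyperdimensional encoding step.
proj = rng.choice([-1.0, 1.0], size=(D, n_features))

def encode(x):
    return np.sign(proj @ x)

# Two synthetic feature classes standing in for SNN output features.
means = [np.ones(n_features), -np.ones(n_features)]

def sample(mean):
    return mean + 0.3 * rng.normal(size=n_features)

# Bundling: sum the encodings of training samples into a class prototype.
prototypes = [sum(encode(sample(m)) for _ in range(20)) for m in means]

def classify(x):
    h = encode(x)
    sims = [h @ p / (np.linalg.norm(h) * np.linalg.norm(p) + 1e-12)
            for p in prototypes]
    return int(np.argmax(sims))

# Held-out accuracy on fresh noisy samples from each class.
acc = float(np.mean([classify(sample(means[c])) == c
                     for c in (0, 1) for _ in range(25)]))
```

Because information is spread across all D dimensions, corrupting a random subset of hypervector components degrades similarity only gradually, which is the robustness property the abstract highlights.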
Affiliation(s)
- Zhuowen Zou
- University of California San Diego, La Jolla, CA, 92093, USA
- University of California Irvine, Irvine, CA, 92697, USA
- Ali Zakeri
- University of California Irvine, Irvine, CA, 92697, USA
- Farhad Imani
- University of Connecticut, Storrs, CT, 06269, USA
- Yeseong Kim
- Daegu Gyeongbuk Institute of Science and Technology, Daegu, South Korea
- Mohsen Imani
- University of California Irvine, Irvine, CA, 92697, USA
8. Alejandre-García T, Kim S, Pérez-Ortega J, Yuste R. Intrinsic excitability mechanisms of neuronal ensemble formation. eLife 2022;11:77470. PMID: 35506662; PMCID: PMC9197391; DOI: 10.7554/elife.77470.
Abstract
Neuronal ensembles are coactive groups of cortical neurons, found in spontaneous and evoked activity, that can mediate perception and behavior. To understand the mechanisms that lead to the formation of ensembles, we co-activated layer 2/3 pyramidal neurons in brain slices from mouse visual cortex, in animals of both sexes, replicating in vitro an optogenetic protocol to generate ensembles in vivo. Using whole-cell and perforated patch-clamp pair recordings we found that, after optogenetic or electrical stimulation, coactivated neurons increased their correlated activity, a hallmark of ensemble formation. Coactivated neurons showed small biphasic changes in presynaptic plasticity, with an initial depression followed by a potentiation after a recovery period. Optogenetic and electrical stimulation also induced significant increases in frequency and amplitude of spontaneous EPSPs, even after single-cell stimulation. In addition, we observed unexpected strong and persistent increases in neuronal excitability after stimulation, with increases in membrane resistance and reductions in spike threshold. A pharmacological agent that blocks changes in membrane resistance reverted this effect. These significant increases in excitability can explain the observed biphasic synaptic plasticity. We conclude that cell-intrinsic changes in excitability are involved in the formation of neuronal ensembles. We propose an ‘iceberg’ model, by which increased neuronal excitability makes subthreshold connections suprathreshold, enhancing the effect of already existing synapses, and generating a new neuronal ensemble.
In the brain, groups of neurons that are activated together – also known as neuronal ensembles – are the basic units that underpin perception and behavior. Yet, exactly how these coactive circuits are established remains under investigation.
In 1949, Canadian psychologist Donald Hebb proposed that, when brains learn something new, the neurons which are activated together connect to form ensembles, and their connections become stronger each time this specific piece of knowledge is recalled. This idea that ‘neurons that fire together, wire together’ can explain how memories are acquired and recalled, by strengthening their wiring. However, recent studies have questioned whether strengthening connections is the only mechanism by which neural ensembles can be created. Changes in the excitability of neurons (how easily they fire and become activated) may also play a role. In other words, ensembles could emerge because certain neurons become more excitable and fire more readily. To solve this conundrum, Alejandre-García et al. examined both hypotheses in the same system. Neurons in slices of the mouse visual cortex were stimulated electrically or optically, via a technique that controls neural activity with light. The activity of individual neurons and their connections was then measured with electrodes. Spontaneous activity among connected neurons increased after stimulation, indicative of the formation of neuronal ensembles. Connected neurons also showed small changes in the strength of their connections, which first decreased and then rebounded after an initial recovery period. Intriguingly, cells also showed unexpected strong and persistent increases in neuronal excitability after stimulation, such that neurons fired more readily to the same stimulus. In other words, neurons maintained a cellular memory of having been stimulated. The authors conclude that ensembles form because connected neurons become more excitable, which in turn, may strengthen connections of the circuit at a later stage. These results provide fresh insights about the neural circuits underpinning learning and memory.
In time, the findings could also help to understand disorders such as Alzheimer’s disease and schizophrenia, which are characterised by memory impairments and disordered thinking.
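The ‘iceberg’ idea — a drop in spike threshold recruits previously subthreshold synapses without any change in synaptic weights — can be illustrated with a toy calculation (illustrative numbers of our own, not the paper's measurements):

```python
import numpy as np

n_syn = 40
# Fixed synaptic strengths (peak EPSP size, arbitrary units); synaptic
# weights are held constant, so any change in effective connectivity
# below comes from excitability alone.
weights = np.linspace(0.0, 1.0, n_syn)

def suprathreshold_inputs(threshold):
    """Count synapses whose EPSP alone is enough to trigger a spike."""
    return int((weights > threshold).sum())

before = suprathreshold_inputs(0.8)  # baseline excitability
after = suprathreshold_inputs(0.5)   # after stimulation: lowered threshold
```

With these numbers, lowering the threshold from 0.8 to 0.5 more than doubles the number of effective inputs, so the neuron joins a coactive ensemble without any Hebbian weight change.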
Affiliation(s)
- Samuel Kim
- Department of Biological Sciences, Columbia University, New York, United States
- Jesús Pérez-Ortega
- Department of Biological Sciences, Columbia University, New York, United States
- Rafael Yuste
- Department of Biological Sciences, Columbia University, New York, United States
9. Smith SJ, Hawrylycz M, Rossier J, Sümbül U. New light on cortical neuropeptides and synaptic network plasticity. Curr Opin Neurobiol 2020;63:176-188. PMID: 32679509; DOI: 10.1016/j.conb.2020.04.002.
Abstract
Neuropeptides, members of a large and evolutionarily ancient family of proteinaceous cell-cell signaling molecules, are widely recognized as extremely potent regulators of brain function and behavior. At the cellular level, neuropeptides are known to act mainly via modulation of ion channel and synapse function, but the functional impacts emerging at the level of complex cortical synaptic networks have resisted mechanistic analysis. New findings from single-cell RNA-seq transcriptomics now illuminate intricate patterns of cortical neuropeptide-signaling gene expression, and new tools now offer powerful molecular access to cortical neuropeptide signaling. Here we highlight some of these new findings and tools, focusing especially on prospects for experimental and theoretical exploration of the peptidergic and synaptic network interactions underlying cortical function and plasticity.
Affiliation(s)
- Stephen J Smith
- Allen Institute for Brain Science, 615 Westlake Ave N, Seattle, WA, USA
- Michael Hawrylycz
- Allen Institute for Brain Science, 615 Westlake Ave N, Seattle, WA, USA
- Jean Rossier
- Neuroscience Paris Seine, Sorbonne Université, Paris, France
- Uygar Sümbül
- Allen Institute for Brain Science, 615 Westlake Ave N, Seattle, WA, USA