1. Ratzon A, Derdikman D, Barak O. Representational drift as a result of implicit regularization. eLife 2024; 12:RP90069. PMID: 38695551. PMCID: PMC11065423. DOI: 10.7554/elife.90069.
Abstract
Recent studies show that, even in constant environments, the tuning of single neurons changes over time in a variety of brain regions. This representational drift has been suggested to be a consequence of continuous learning under noise, but its properties are still not fully understood. To investigate the underlying mechanism, we trained an artificial network on a simplified navigational task. The network quickly reached a state of high performance, and many units exhibited spatial tuning. We then continued training the network and noticed that the activity became sparser with time. Initial learning was orders of magnitude faster than ensuing sparsification. This sparsification is consistent with recent results in machine learning, in which networks slowly move within their solution space until they reach a flat area of the loss function. We analyzed four datasets from different labs, all demonstrating that CA1 neurons become sparser and more spatially informative with exposure to the same environment. We conclude that learning is divided into three overlapping phases: (i) Fast familiarity with the environment; (ii) slow implicit regularization; and (iii) a steady state of null drift. The variability in drift dynamics opens the possibility of inferring learning algorithms from observations of drift statistics.
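The "slow implicit regularization" phase described here has a textbook toy analogue: under noisy gradient descent, a network first snaps onto the solution manifold and only afterwards drifts slowly along it toward flatter minima. A hypothetical two-parameter sketch (not the authors' network or task; the loss, learning rate, and noise level are all illustrative):

```python
import numpy as np

# Toy loss L(w1, w2) = (w1*w2 - y)^2 with target y = 1. Every point on the
# hyperbola w1*w2 = 1 is a perfect solution, but the sharpness (trace of the
# Hessian, 2*(w1^2 + w2^2)) is minimal at the balanced point w1 = w2 = 1.
# Gradient descent with label noise stays on the manifold while drifting
# slowly toward the flat region -- much more slowly than the initial fit.
rng = np.random.default_rng(0)
w = np.array([3.0, 1.0 / 3.0])   # already a perfect solution, but a sharp one
lr, noise = 0.01, 0.5
imbalance0 = abs(w[0] - w[1])

for _ in range(200_000):
    y = 1.0 + noise * rng.normal()          # noisy target ("label noise")
    r = w[0] * w[1] - y
    w -= lr * np.array([2 * r * w[1], 2 * r * w[0]])

print(round(w[0] * w[1], 2))          # still ~1: performance was never lost
print(abs(w[0] - w[1]) < imbalance0)  # but drifted toward the flat minimum
```

The separation of timescales (fast fitting, slow drift along the solution manifold) is the point of the sketch, not the particular loss.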
Affiliation(s)
- Aviv Ratzon
- Rappaport Faculty of Medicine, Technion - Israel Institute of Technology, Haifa, Israel
- Network Biology Research Laboratory, Technion - Israel Institute of Technology, Haifa, Israel
- Dori Derdikman
- Rappaport Faculty of Medicine, Technion - Israel Institute of Technology, Haifa, Israel
- Omri Barak
- Rappaport Faculty of Medicine, Technion - Israel Institute of Technology, Haifa, Israel
- Network Biology Research Laboratory, Technion - Israel Institute of Technology, Haifa, Israel
2. Sosa M, Plitt MH, Giocomo LM. Hippocampal sequences span experience relative to rewards. bioRxiv [Preprint] 2024:2023.12.27.573490. PMID: 38234842. PMCID: PMC10793396. DOI: 10.1101/2023.12.27.573490.
Abstract
Hippocampal place cells fire in sequences that span spatial environments and non-spatial modalities, suggesting that hippocampal activity can anchor to the most behaviorally salient aspects of experience. As reward is a highly salient event, we hypothesized that sequences of hippocampal activity can anchor to rewards. To test this, we performed two-photon imaging of hippocampal CA1 neurons as mice navigated virtual environments with changing hidden reward locations. When the reward moved, the firing fields of a subpopulation of cells moved to the same relative position with respect to reward, constructing a sequence of reward-relative cells that spanned the entire task structure. The density of these reward-relative sequences increased with task experience as additional neurons were recruited to the reward-relative population. Conversely, a largely separate subpopulation maintained a spatially-based place code. These findings thus reveal that separate hippocampal ensembles can flexibly encode multiple behaviorally salient reference frames, reflecting the structure of the experience.
Affiliation(s)
- Marielena Sosa
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, USA
- Mark H. Plitt
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, USA
- Present address: Department of Molecular and Cell Biology, University of California Berkeley, Berkeley, CA, USA
- Lisa M. Giocomo
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, USA
3. Pals M, Macke JH, Barak O. Trained recurrent neural networks develop phase-locked limit cycles in a working memory task. PLoS Comput Biol 2024; 20:e1011852. PMID: 38315736. PMCID: PMC10868787. DOI: 10.1371/journal.pcbi.1011852.
Abstract
Neural oscillations are ubiquitously observed in many brain areas. One proposed functional role of these oscillations is that they serve as an internal clock, or 'frame of reference'. Information can be encoded by the timing of neural activity relative to the phase of such oscillations. In line with this hypothesis, there have been multiple empirical observations of such phase codes in the brain. Here we ask: What kind of neural dynamics support phase coding of information with neural oscillations? We tackled this question by analyzing recurrent neural networks (RNNs) that were trained on a working memory task. The networks were given access to an external reference oscillation and tasked to produce an oscillation, such that the phase difference between the reference and output oscillation maintains the identity of transient stimuli. We found that networks converged to stable oscillatory dynamics. Reverse engineering these networks revealed that each phase-coded memory corresponds to a separate limit cycle attractor. We characterized how the stability of the attractor dynamics depends on both reference oscillation amplitude and frequency, properties that can be experimentally observed. To understand the connectivity structures that underlie these dynamics, we showed that trained networks can be described as two phase-coupled oscillators. Using this insight, we condensed our trained networks to a reduced model consisting of two functional modules: One that generates an oscillation and one that implements a coupling function between the internal oscillation and external reference. In summary, by reverse engineering the dynamics and connectivity of trained RNNs, we propose a mechanism by which neural networks can harness reference oscillations for working memory. Specifically, we propose that a phase-coding network generates autonomous oscillations which it couples to an external reference oscillation in a multi-stable fashion.
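The paper's reduced description, two phase-coupled oscillators, can be caricatured by a single phase-difference equation. A hypothetical sketch (not the trained RNNs; the coupling function and constants are illustrative): a coupling term with two stable zeros stores one of two phase-coded memories, and which one is retrieved depends on the initial phase difference.

```python
import numpy as np

# Phase difference psi between an internal oscillator and an external
# reference, with coupling dpsi/dt = -K * sin(2 * psi). The zeros at psi = 0
# and psi = pi are stable and psi = pi/2 is unstable, so the phase difference
# is bistable: two attractors = two phase-coded memory states.
def settle(psi0, K=1.0, dt=0.01, steps=2000):
    psi = psi0
    for _ in range(steps):
        psi += dt * (-K * np.sin(2.0 * psi))   # Euler integration
    return psi

print(round(settle(0.4), 3))          # -> 0.0   (first memory state)
print(round(settle(np.pi - 0.4), 3))  # -> 3.142 (second memory state)
```

A transient stimulus that kicks the phase difference across pi/2 would switch which attractor is retrieved, which is the multi-stable mechanism the abstract describes.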
Affiliation(s)
- Matthijs Pals
- Machine Learning in Science, Excellence Cluster Machine Learning, University of Tübingen, Tübingen, Germany
- Tübingen AI Center, University of Tübingen, Tübingen, Germany
- Jakob H. Macke
- Machine Learning in Science, Excellence Cluster Machine Learning, University of Tübingen, Tübingen, Germany
- Tübingen AI Center, University of Tübingen, Tübingen, Germany
- Department Empirical Inference, Max Planck Institute for Intelligent Systems, Tübingen, Germany
- Omri Barak
- Rappaport Faculty of Medicine, Technion - Israel Institute of Technology, Haifa, Israel
- Network Biology Research Laboratory, Technion - Israel Institute of Technology, Haifa, Israel
4. Plautz EJ, Barbay S, Frost SB, Stowe AM, Dancause N, Zoubina EV, Eisner-Janowicz I, Guggenmos DJ, Nudo RJ. Spared premotor areas undergo rapid nonlinear changes in functional organization following a focal ischemic infarct in primary motor cortex of squirrel monkeys. J Neurosci 2023; 43:2021-2032. PMID: 36788028. PMCID: PMC10027035. DOI: 10.1523/jneurosci.1452-22.2023.
Abstract
Recovery of motor function after stroke is accompanied by reorganization of movement representations in spared cortical motor regions. It is widely assumed that map reorganization parallels recovery, suggesting a causal relationship. We examined this assumption by measuring changes in motor representations in eight male and six female squirrel monkeys in the first few weeks after injury, a time when motor recovery is most rapid. Maps of movement representations were derived using intracortical microstimulation techniques in primary motor cortex (M1), ventral premotor cortex (PMv), and dorsal premotor cortex (PMd) in 14 adult squirrel monkeys before and after a focal infarct in the M1 distal forelimb area. Maps were derived at baseline and at either 2 (n = 7) or 3 weeks (n = 7) postinfarct. In PMv the forelimb maps remained unchanged at 2 weeks but contracted significantly (-42.4%) at 3 weeks. In PMd the forelimb maps expanded significantly (+110.6%) at 2 weeks but contracted significantly (-57.4%) at 3 weeks. Motor deficits were equivalent at both time points. These results highlight two features of plasticity after M1 lesions. First, significant contraction of distal forelimb motor maps in both PMv and PMd is evident by 3 weeks. Second, an unpredictable nonlinear pattern of reorganization occurs in the distal forelimb representation in PMd, first expanding at 2 weeks, and then contracting at 3 weeks postinjury. Together with previous results demonstrating reliable map expansions in PMv several weeks to months after M1 injury, the subacute time period may represent a critical window for the timing of therapeutic interventions.

Significance Statement: The relationship between motor recovery and motor map reorganization after cortical injury has rarely been examined in acute/subacute periods. In nonhuman primates, premotor maps were examined at 2 and 3 weeks after injury to primary motor cortex.
Although maps are known to expand late after injury, the present study demonstrates early map expansion at 2 weeks (dorsal premotor cortex) followed by contraction at 3 weeks (dorsal and ventral premotor cortex). This nonlinear map reorganization during a time of gradual behavioral recovery suggests that the relationship between map plasticity and motor recovery is much more complex than previously thought. It also suggests that rehabilitative motor training may have its most potent effects during this early dynamic phase of map reorganization.
Affiliation(s)
- Erik J Plautz
- Department of Molecular and Integrative Physiology, University of Kansas Medical Center, Kansas City, Kansas 66160
- Landon Center on Aging, University of Kansas Medical Center, Kansas City, Kansas 66160
- Scott Barbay
- Department of Molecular and Integrative Physiology, University of Kansas Medical Center, Kansas City, Kansas 66160
- Landon Center on Aging, University of Kansas Medical Center, Kansas City, Kansas 66160
- Shawn B Frost
- Department of Molecular and Integrative Physiology, University of Kansas Medical Center, Kansas City, Kansas 66160
- Landon Center on Aging, University of Kansas Medical Center, Kansas City, Kansas 66160
- Ann M Stowe
- Department of Molecular and Integrative Physiology, University of Kansas Medical Center, Kansas City, Kansas 66160
- Numa Dancause
- Department of Molecular and Integrative Physiology, University of Kansas Medical Center, Kansas City, Kansas 66160
- Elena V Zoubina
- Department of Molecular and Integrative Physiology, University of Kansas Medical Center, Kansas City, Kansas 66160
- Landon Center on Aging, University of Kansas Medical Center, Kansas City, Kansas 66160
- Ines Eisner-Janowicz
- Department of Molecular and Integrative Physiology, University of Kansas Medical Center, Kansas City, Kansas 66160
- David J Guggenmos
- Department of Molecular and Integrative Physiology, University of Kansas Medical Center, Kansas City, Kansas 66160
- Randolph J Nudo
- Department of Molecular and Integrative Physiology, University of Kansas Medical Center, Kansas City, Kansas 66160
- Landon Center on Aging, University of Kansas Medical Center, Kansas City, Kansas 66160
5. Yoon HG, Kim P. STDP-based associative memory formation and retrieval. J Math Biol 2023; 86:49. PMID: 36826758. DOI: 10.1007/s00285-023-01883-y.
Abstract
Spike-timing-dependent plasticity (STDP) is a biological process in which the precise order and timing of neuronal spikes affect the degree of synaptic modification. While there has been much research on the role of STDP in neural coding, the functional implications of STDP at the macroscopic level in the brain have not been fully explored. In this work, we propose a neurodynamical model based on STDP that enables storage and retrieval of a group of associative memories. We showed that the function of STDP at the macroscopic level is to form a "memory plane" in the neural state space which dynamically encodes high-dimensional data. We derived the analytic relation between the input, the memory plane, and the induced macroscopic neural oscillations around the memory plane. Such a plane produces a limit cycle in reaction to a similar memory cue, which can be used for retrieval of the original input.
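The pair-based STDP rule underlying such models is compact enough to state directly. A generic sketch (the window parameters are illustrative, not taken from this paper): the weight change depends only on the interval between a pre- and postsynaptic spike.

```python
import numpy as np

# Pair-based STDP: the weight change depends on dt = t_post - t_pre.
# Pre-before-post (dt > 0) potentiates, post-before-pre (dt < 0) depresses,
# each with an exponentially decaying window of time constant tau (ms).
def stdp(dt, A_plus=0.01, A_minus=0.012, tau=20.0):
    if dt > 0:
        return A_plus * np.exp(-dt / tau)    # LTP branch
    return -A_minus * np.exp(dt / tau)       # LTD branch

print(stdp(10.0) > 0)    # causal pairing strengthens the synapse
print(stdp(-10.0) < 0)   # anti-causal pairing weakens it
```

Because the update is signed by spike order, repeated periodic input imprints a directed structure on the weight matrix, which is the ingredient the "memory plane" construction builds on.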
Affiliation(s)
- Hong-Gyu Yoon
- Department of Mathematical Sciences, Ulsan National Institute of Science and Technology (UNIST), Ulsan Metropolitan City, 44919, Republic of Korea
- Pilwon Kim
- Department of Mathematical Sciences, Ulsan National Institute of Science and Technology (UNIST), Ulsan Metropolitan City, 44919, Republic of Korea
6. Inhibition of hippocampal palmitoyl acyltransferase activity impairs spatial learning and memory consolidation. Neurobiol Learn Mem 2023; 200:107733. PMID: 36804592. DOI: 10.1016/j.nlm.2023.107733.
Abstract
Protein palmitoylation regulates the trafficking, mobilization, localization, interaction, and distribution of proteins through the palmitoyl acyltransferase (PAT) enzymes. Protein palmitoylation controls rapid and dynamic changes of the synaptic architecture that modify the efficiency and strength of synaptic connections, a fundamental mechanism for generating stable and long-lasting memory traces. Although the role of protein palmitoylation in functional synaptic plasticity has been widely described, its role in learning and memory processes is poorly understood. In this work, we found that inhibition of PATs in the hippocampus before and after training in the Morris water maze (MWM) and object location memory (OLM) tasks impaired spatial learning. However, we demonstrated that PAT inhibition during retrieval does not affect the expression of spatial memory in either the MWM or OLM. Accordingly, long-term potentiation induction is impaired by inhibiting hippocampal PATs before high-frequency electrical stimulation but not after. These findings suggest that PAT activity is necessary to modify neural plasticity, a mechanism required for memory acquisition and consolidation. Like phosphorylation, active palmitoylation is required to regulate the function of already existing proteins that change synaptic strength in the hippocampus to acquire and later consolidate spatial memories.
7. The molecular memory code and synaptic plasticity: A synthesis. Biosystems 2023; 224:104825. PMID: 36610586. DOI: 10.1016/j.biosystems.2022.104825.
Abstract
The most widely accepted view of memory in the brain holds that synapses are the storage sites of memory, and that memories are formed through associative modification of synapses. This view has been challenged on conceptual and empirical grounds. As an alternative, it has been proposed that molecules within the cell body are the storage sites of memory, and that memories are formed through biochemical operations on these molecules. This paper proposes a synthesis of these two views, grounded in a computational model of memory. Synapses are conceived as storage sites for the parameters of an approximate posterior probability distribution over latent causes. Intracellular molecules are conceived as storage sites for the parameters of a generative model. The model stipulates how these two components work together as part of an integrated algorithm for learning and inference.
8. Fukai T. Computational models of idling brain activity for memory processing. Neurosci Res 2022; 189:75-82. PMID: 36592825. DOI: 10.1016/j.neures.2022.12.024.
Abstract
Studying the underlying neural mechanisms of the cognitive functions of the brain is one of the central questions in modern biology. Moreover, it has significantly impacted the development of novel technologies in artificial intelligence. Spontaneous activity is a unique feature of the brain and is currently lacking in many artificially constructed intelligent machines. Spontaneous activity may represent the brain's idling states, which are internally driven by neuronal networks and possibly participate in offline processing during awake, sleep, and resting states. Evidence is accumulating that the brain's spontaneous activity is not mere noise but part of the mechanisms to process information about previous experiences. A substantial body of literature has shown how previous sensory and behavioral experiences influence subsequent patterns of brain activity, using various methods in various animals. However, the patterns of neural activity and their computational roles seem to differ significantly from area to area and from function to function. In this article, I review the various forms of the brain's spontaneous activity, especially those observed during memory processing, and some attempts to model the generation mechanisms and computational roles of such activities.
Affiliation(s)
- Tomoki Fukai
- Okinawa Institute of Science and Technology, Tancha 1919-1, Onna-son, Okinawa 904-0495, Japan
9. Chambers AR, Aschauer DF, Eppler JB, Kaschube M, Rumpel S. A stable sensory map emerges from a dynamic equilibrium of neurons with unstable tuning properties. Cereb Cortex 2022; 33:5597-5612. PMID: 36418925. PMCID: PMC10152095. DOI: 10.1093/cercor/bhac445.
Abstract
Recent long-term measurements of neuronal activity have revealed that, despite stability in large-scale topographic maps, the tuning properties of individual cortical neurons can undergo substantial reformatting over days. To shed light on this apparent contradiction, we captured the sound response dynamics of auditory cortical neurons using repeated 2-photon calcium imaging in awake mice. We measured sound-evoked responses to a set of pure tone and complex sound stimuli in more than 20,000 auditory cortex neurons over several days. We found that a substantial fraction of neurons dropped in and out of the population response. We modeled these dynamics as a simple discrete-time Markov chain, capturing the continuous changes in responsiveness observed during stable behavioral and environmental conditions. Although only a minority of neurons were driven by the sound stimuli at a given time point, the model predicts that most cells would at least transiently become responsive within 100 days. We observe that, despite single-neuron volatility, the population-level representation of sound frequency was stably maintained, demonstrating the dynamic equilibrium underlying the tonotopic map. Our results show that sensory maps are maintained by shifting subpopulations of neurons “sharing” the job of creating a sensory representation.
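The discrete-time Markov chain described in this abstract has simple closed-form consequences that convey the "dynamic equilibrium" intuition. A sketch with hypothetical daily transition probabilities (not the fitted values from the paper): a small stationary responsive fraction is compatible with nearly every neuron becoming responsive at some point over long timescales.

```python
# Two-state Markov chain per neuron: "responsive" <-> "silent", with daily
# transition probabilities p_gain (silent -> responsive) and p_loss
# (responsive -> silent). Values below are illustrative only.
p_gain, p_loss = 0.05, 0.30

# Stationary fraction of responsive neurons at any single time point.
frac_responsive = p_gain / (p_gain + p_loss)

# Probability that a currently silent neuron becomes responsive at least
# once within 100 days: 1 - P(never gaining responsiveness).
p_ever = 1.0 - (1.0 - p_gain) ** 100

print(round(frac_responsive, 3))  # -> 0.143: a minority responsive at once
print(round(p_ever, 3))           # -> 0.994: most cells join within 100 days
```

The contrast between the two printed numbers is the paper's point: single-neuron volatility and a small instantaneous responsive pool are fully consistent with near-universal transient participation.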
Affiliation(s)
- Anna R Chambers
- Institute of Physiology, Focus Program Translational Neurosciences, University Medical Center, Johannes Gutenberg University Mainz, Duesbergweg 6, 55128 Mainz, Germany
- Dominik F Aschauer
- Institute of Physiology, Focus Program Translational Neurosciences, University Medical Center, Johannes Gutenberg University Mainz, Duesbergweg 6, 55128 Mainz, Germany
- Jens-Bastian Eppler
- Frankfurt Institute for Advanced Studies and Department of Computer Science, Goethe University Frankfurt, Ruth-Moufang-Straße 1, 60438 Frankfurt am Main, Germany
- Matthias Kaschube
- Frankfurt Institute for Advanced Studies and Department of Computer Science, Goethe University Frankfurt, Ruth-Moufang-Straße 1, 60438 Frankfurt am Main, Germany
- Simon Rumpel
- Institute of Physiology, Focus Program Translational Neurosciences, University Medical Center, Johannes Gutenberg University Mainz, Duesbergweg 6, 55128 Mainz, Germany
10. Lee J, Jo J, Lee B, Lee JH, Yoon S. Brain-inspired predictive coding improves the performance of machine challenging tasks. Front Comput Neurosci 2022; 16:1062678. PMID: 36465966. PMCID: PMC9709416. DOI: 10.3389/fncom.2022.1062678.
Abstract
Backpropagation has been regarded as the most favorable algorithm for training artificial neural networks. However, it has been criticized for its biological implausibility because its learning mechanism contradicts the human brain. Although backpropagation has achieved super-human performance in various machine learning applications, it often shows limited performance in specific tasks. We collectively refer to such tasks as machine-challenging tasks (MCTs) and aimed to investigate methods to enhance machine learning for MCTs. Specifically, we start with a natural question: Can a learning mechanism that mimics the human brain lead to the improvement of MCT performance? We hypothesized that a learning mechanism replicating the human brain is effective for tasks where machine intelligence is difficult. Multiple experiments corresponding to specific types of MCTs where machine intelligence has room to improve performance were performed using predictive coding, a more biologically plausible learning algorithm than backpropagation. This study regarded incremental learning, long-tailed, and few-shot recognition as representative MCTs. With extensive experiments, we examined the effectiveness of predictive coding, which robustly outperformed backpropagation-trained networks for the MCTs. We demonstrated that predictive coding-based incremental learning alleviates the effect of catastrophic forgetting. Next, predictive coding-based learning mitigates the classification bias in long-tailed recognition. Finally, we verified that a network trained with predictive coding could correctly predict corresponding targets with few samples. We analyzed the experimental results by drawing analogies between the properties of predictive coding networks and those of the human brain and discussing the potential of predictive coding networks in general machine learning.
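Predictive coding replaces backpropagation's global error signal with locally computed prediction errors that iteratively refine a latent estimate. A minimal linear sketch of the inference step (illustrative, not the authors' networks; weights are fixed and random here):

```python
import numpy as np

# Minimal linear predictive-coding inference: a latent estimate h predicts
# the input as W @ h; inference runs gradient descent on the prediction
# error plus a prior term, E = ||x - W h||^2 + ||h||^2, using only locally
# available quantities (the error and the current state).
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 3))          # generative weights (fixed)
x = rng.normal(size=8)               # observed input

h = np.zeros(3)
for _ in range(1000):
    err = x - W @ h                  # prediction error
    h += 0.03 * (W.T @ err - h)      # error-driven update of the latent state

# For this linear-Gaussian model the iteration should converge to the exact
# posterior mean (W^T W + I)^{-1} W^T x.
h_exact = np.linalg.solve(W.T @ W + np.eye(3), W.T @ x)
print(np.allclose(h, h_exact, atol=1e-4))
```

In full predictive-coding networks the same error signals also drive weight updates, which is what makes the scheme a biologically plausible alternative to backpropagation.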
Affiliation(s)
- Jangho Lee
- Department of Electrical and Computer Engineering, Seoul National University, Seoul, South Korea
- Jeonghee Jo
- Institute of New Media and Communications, Seoul National University, Seoul, South Korea
- Byounghwa Lee
- CybreBrain Research Section, Electronics and Telecommunications Research Institute (ETRI), Daejeon, South Korea
- Jung-Hoon Lee
- CybreBrain Research Section, Electronics and Telecommunications Research Institute (ETRI), Daejeon, South Korea
- Sungroh Yoon
- Department of Electrical and Computer Engineering, Seoul National University, Seoul, South Korea
- Interdisciplinary Program in Artificial Intelligence, Seoul National University, Seoul, South Korea
11. Small, correlated changes in synaptic connectivity may facilitate rapid motor learning. Nat Commun 2022; 13:5163. PMID: 36056006. PMCID: PMC9440011. DOI: 10.1038/s41467-022-32646-w.
Abstract
Animals rapidly adapt their movements to external perturbations, a process paralleled by changes in neural activity in the motor cortex. Experimental studies suggest that these changes originate from altered inputs (Hinput) rather than from changes in local connectivity (Hlocal), as neural covariance is largely preserved during adaptation. Since measuring synaptic changes in vivo remains very challenging, we used a modular recurrent neural network to qualitatively test this interpretation. As expected, Hinput resulted in small activity changes and largely preserved covariance. Surprisingly given the presumed dependence of stable covariance on preserved circuit connectivity, Hlocal led to only slightly larger changes in activity and covariance, still within the range of experimental recordings. This similarity is due to Hlocal only requiring small, correlated connectivity changes for successful adaptation. Simulations of tasks that impose increasingly larger behavioural changes revealed a growing difference between Hinput and Hlocal, which could be exploited when designing future experiments.
12. Regimes and mechanisms of transient amplification in abstract and biological neural networks. PLoS Comput Biol 2022; 18:e1010365. PMID: 35969604. PMCID: PMC9377633. DOI: 10.1371/journal.pcbi.1010365.
Abstract
Neuronal networks encode information through patterns of activity that define the networks’ function. The neurons’ activity relies on specific connectivity structures, yet the link between structure and function is not fully understood. Here, we tackle this structure-function problem with a new conceptual approach. Instead of manipulating the connectivity directly, we focus on upper triangular matrices, which represent the network dynamics in a given orthonormal basis obtained by the Schur decomposition. This abstraction allows us to independently manipulate the eigenspectrum and feedforward structures of a connectivity matrix. Using this method, we describe a diverse repertoire of non-normal transient amplification, and to complement the analysis of the dynamical regimes, we quantify the geometry of output trajectories through the effective rank of both the eigenvector and the dynamics matrices. Counter-intuitively, we find that shrinking the eigenspectrum’s imaginary distribution leads to highly amplifying regimes in linear and long-lasting dynamics in nonlinear networks. We also find a trade-off between amplification and dimensionality of neuronal dynamics, i.e., trajectories in neuronal state-space. Networks that can amplify a large number of orthogonal initial conditions produce neuronal trajectories that lie in the same subspace of the neuronal state-space. Finally, we examine networks of excitatory and inhibitory neurons. We find that the strength of global inhibition is directly linked with the amplitude of amplification, such that weakening inhibitory weights also decreases amplification, and that the eigenspectrum’s imaginary distribution grows with an increase in the ratio between excitatory-to-inhibitory and excitatory-to-excitatory connectivity strengths. Consequently, the strength of global inhibition reveals itself as a strong signature for amplification and a potential control mechanism to switch dynamical regimes. 
Our results shed light on how biological networks, i.e., networks constrained by Dale's law, may be optimised for specific dynamical regimes.

The architecture of neuronal networks lies at the heart of its dynamic behaviour, or in other words, the function of the system. However, the relationship between changes in the architecture and their effect on the dynamics, a structure-function problem, is still poorly understood. Here, we approach this problem by studying a rotated connectivity matrix that is easier to manipulate and interpret. We focus our analysis on a dynamical regime that arises from the biological property that neurons are usually not connected symmetrically, which may result in a non-normal connectivity matrix. Our techniques unveil distinct expressions of the dynamical regime of non-normal amplification. Moreover, we devise a way to analyse the geometry of the dynamics: we assign a single number to a network that quantifies how dissimilar its repertoire of behaviours can be. Finally, using our approach, we can close the loop back to the original neuronal architecture and find that biologically plausible networks use the strength of inhibition and excitatory-to-inhibitory connectivity strength to navigate the different dynamical regimes of non-normal amplification.
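The central phenomenon, transient amplification in a stable but non-normal system, fits in a few lines. A minimal sketch using a two-unit feedforward motif (not the paper's networks; the coupling strength k is illustrative), worked analytically so no ODE solver is needed:

```python
import numpy as np

# Stable but non-normal linear system dx/dt = A x with
# A = [[-1, k], [0, -1]]: both eigenvalues are -1, but the upper-triangular
# (Schur-like) feedforward weight k drives a transient. For x(0) = (0, 1) the
# solution is x(t) = exp(-t) * (k*t, 1), whose norm rises before decaying.
k = 10.0
t = np.linspace(0.0, 10.0, 1001)
norm = np.exp(-t) * np.sqrt((k * t) ** 2 + 1.0)

print(norm[0])            # starts at 1
print(norm.max() > 3.0)   # transiently amplified despite eigenvalues at -1
print(norm[-1] < 0.01)    # and ultimately decays
```

This separation between the eigenspectrum (which sets asymptotic decay) and the feedforward Schur structure (which sets transient growth) is exactly the degree of freedom the abstract manipulates.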
13. Driscoll LN, Duncker L, Harvey CD. Representational drift: Emerging theories for continual learning and experimental future directions. Curr Opin Neurobiol 2022; 76:102609. PMID: 35939861. DOI: 10.1016/j.conb.2022.102609.
Abstract
Recent work has revealed that the neural activity patterns correlated with sensation, cognition, and action often are not stable and instead undergo large-scale changes over days and weeks, a phenomenon called representational drift. Here, we highlight recent observations of drift, how drift is unlikely to be explained by experimental confounds, and how the brain can likely compensate for drift to allow stable computation. We propose that drift might have important roles in neural computation to allow continual learning, both for separating and relating memories that occur at distinct times. Finally, we present an outlook on future experimental directions that are needed to further characterize drift and to test emerging theories for drift's role in computation.
Affiliation(s)
- Laura N Driscoll
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Lea Duncker
- Howard Hughes Medical Institute, Stanford University, Stanford, CA, USA
14
|
An STDP-based encoding method for associative and composite data. Sci Rep 2022; 12:4666. [PMID: 35304537 PMCID: PMC8933433 DOI: 10.1038/s41598-022-08469-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2021] [Accepted: 03/04/2022] [Indexed: 11/09/2022] Open
Abstract
Spike-timing-dependent plasticity (STDP) is a biological process of synaptic modification driven by the order and timing of firing between neurons. One of the neurodynamical roles of STDP is to form a macroscopic geometrical structure in the neuronal state space in response to a periodic input (Susman et al., Nat. Commun. 10:1-9, 2019; Yoon & Kim, STDP-based associative memory formation and retrieval, arXiv:2107.02429, 2021). In this work, we propose a practical memory model based on STDP that can store and retrieve high-dimensional associative data. The model combines STDP dynamics with an encoding scheme for distributed representations and is able to handle multiple composite data in a continuous manner. In an auto-associative memory task where a group of images is continuously streamed to the model, the images are successfully retrieved from an oscillating neural state whenever a proper cue is given. In a second task dealing with semantic memories embedded from sentences, the results show that words can recall multiple sentences simultaneously, or one exclusively, depending on their grammatical relations.
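As a minimal illustration of the STDP primitive the model builds on (a generic pair-based rule with invented parameter values, not the paper's actual encoding scheme):

```python
import numpy as np

def stdp_update(w, pre_t, post_t, a_plus=0.05, a_minus=0.055, tau=20.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes the
    postsynaptic one (causal pairing), depress otherwise; times in ms."""
    dt = post_t - pre_t
    if dt > 0:
        dw = a_plus * np.exp(-dt / tau)    # LTP branch: pre before post
    else:
        dw = -a_minus * np.exp(dt / tau)   # LTD branch: post before pre
    return float(np.clip(w + dw, 0.0, 1.0))

w_ltp = stdp_update(0.5, pre_t=10.0, post_t=15.0)  # causal pairing strengthens
w_ltd = stdp_update(0.5, pre_t=15.0, post_t=10.0)  # anti-causal pairing weakens
```

The exponential windows and the slight LTD bias (`a_minus > a_plus`) follow common modelling conventions; the clip keeps weights in a bounded range.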
15
Self-healing codes: How stable neural populations can track continually reconfiguring neural representations. Proc Natl Acad Sci U S A 2022; 119:2106692119. [PMID: 35145024 PMCID: PMC8851551 DOI: 10.1073/pnas.2106692119] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/29/2021] [Indexed: 12/19/2022] Open
Abstract
The brain is capable of adapting while maintaining stable long-term memories and learned skills. Recent experiments show that neural responses are highly plastic in some circuits, while other circuits maintain consistent responses over time, raising the question of how these circuits interact coherently. We show how simple, biologically motivated Hebbian and homeostatic mechanisms in single neurons can allow circuits with fixed responses to continuously track a plastic, changing representation without reference to an external learning signal. As an adaptive system, the brain must retain a faithful representation of the world while continuously integrating new information. Recent experiments have measured population activity in cortical and hippocampal circuits over many days and found that patterns of neural activity associated with fixed behavioral variables and percepts change dramatically over time. Such “representational drift” raises the question of how malleable population codes can interact coherently with stable long-term representations that are found in other circuits and with relatively rigid topographic mappings of peripheral sensory and motor signals. We explore how known plasticity mechanisms can allow single neurons to reliably read out an evolving population code without external error feedback. We find that interactions between Hebbian learning and single-cell homeostasis can exploit redundancy in a distributed population code to compensate for gradual changes in tuning. Recurrent feedback of partially stabilized readouts could allow a pool of readout cells to further correct inconsistencies introduced by representational drift. This shows how relatively simple, known mechanisms can stabilize neural tuning in the short term and provides a plausible explanation for how plastic neural codes remain integrated with consolidated, long-term representations.
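One way to caricature the proposed Hebbian-plus-homeostatic mechanism is Oja's rule, where a Hebbian term plus a homeostatic decay lets a readout track a slowly drifting encoding direction with no external error signal (our toy sketch; the dimensions, rates, and noise levels are all invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
u = rng.standard_normal(n); u /= np.linalg.norm(u)  # drifting encoding direction
w = 0.1 * rng.standard_normal(n)                    # readout weights

eta, drift = 0.05, 0.002
for _ in range(4000):
    # representational drift: the population's tuning direction wanders slowly
    u += drift * rng.standard_normal(n)
    u /= np.linalg.norm(u)
    s = rng.standard_normal()                 # latent variable being encoded
    x = s * u + 0.1 * rng.standard_normal(n)  # noisy population activity
    r = w @ x                                 # readout response
    w += eta * r * (x - r * w)                # Oja: Hebbian term + homeostatic decay

# despite 4000 steps of drift, the readout stays aligned with the current code
alignment = abs(w @ u) / np.linalg.norm(w)
```

The homeostatic term `- r * w` keeps the weight norm bounded; without it, pure Hebbian growth would diverge rather than track the drifting direction.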
16
Learning-induced biases in the ongoing dynamics of sensory representations predict stimulus generalization. Cell Rep 2022; 38:110340. [PMID: 35139386 DOI: 10.1016/j.celrep.2022.110340] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2021] [Revised: 11/16/2021] [Accepted: 01/14/2022] [Indexed: 11/22/2022] Open
Abstract
Sensory stimuli have long been thought to be represented in the brain as activity patterns of specific neuronal assemblies. However, we still know relatively little about the long-term dynamics of sensory representations. Using chronic in vivo calcium imaging in the mouse auditory cortex, we find that sensory representations undergo continuous recombination, even under behaviorally stable conditions. Auditory cued fear conditioning introduces a bias into these ongoing dynamics, resulting in a long-lasting increase in the number of stimuli activating the same subset of neurons. This plasticity is specific for stimuli sharing representational similarity to the conditioned sound prior to conditioning and predicts behaviorally observed stimulus generalization. Our findings demonstrate that learning-induced plasticity leading to a representational linkage between the conditioned stimulus and non-conditioned stimuli weaves into ongoing dynamics of the brain rather than acting on an otherwise static substrate.
17
Drifting assemblies for persistent memory: Neuron transitions and unsupervised compensation. Proc Natl Acad Sci U S A 2021; 118:2023832118. [PMID: 34772802 DOI: 10.1073/pnas.2023832118] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/11/2021] [Indexed: 11/18/2022] Open
Abstract
Change is ubiquitous in living beings. In particular, the connectome and neural representations can change. Nevertheless, behaviors and memories often persist over long times. In a standard model, associative memories are represented by assemblies of strongly interconnected neurons. For faithful storage, these assemblies are assumed to consist of the same neurons over time. Here we propose a contrasting memory model with complete temporal remodeling of assemblies, based on experimentally observed changes of synapses and neural representations. The assemblies drift freely as noisy autonomous network activity and spontaneous synaptic turnover induce neuron exchange. The gradual exchange allows activity-dependent and homeostatic plasticity to conserve the representational structure and keep inputs, outputs, and assemblies consistent. This leads to persistent memory. Our findings explain recent experimental results on the temporal evolution of fear memory representations and suggest that memory systems need to be understood in their completeness, as individual parts may constantly change.
18
Amorim FE, Chapot RL, Moulin TC, Lee JLC, Amaral OB. Memory destabilization during reconsolidation: a consequence of homeostatic plasticity? Learn Mem 2021; 28:371-389. [PMID: 34526382 DOI: 10.1101/lm.053418.121] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2021] [Accepted: 07/14/2021] [Indexed: 11/24/2022]
Abstract
Remembering is not a static process: When retrieved, a memory can be destabilized and become prone to modifications. This phenomenon has been demonstrated in a number of brain regions, but the neuronal mechanisms that rule memory destabilization and its boundary conditions remain elusive. Using two distinct computational models that combine Hebbian plasticity and synaptic downscaling, we show that homeostatic plasticity can function as a destabilization mechanism, accounting for behavioral results of protein synthesis inhibition upon reactivation with different re-exposure times. Furthermore, by performing systematic reviews, we identify a series of overlapping molecular mechanisms between memory destabilization and synaptic downscaling, although direct experimental links between both phenomena remain scarce. In light of these results, we propose a theoretical framework where memory destabilization can emerge as an epiphenomenon of homeostatic adaptations prompted by memory retrieval.
Affiliation(s)
- Felippe E Amorim
- Institute of Medical Biochemistry Leopoldo de Meis, Federal University of Rio de Janeiro, Rio de Janeiro 21941-902, Brazil
- Renata L Chapot
- Institute of Medical Biochemistry Leopoldo de Meis, Federal University of Rio de Janeiro, Rio de Janeiro 21941-902, Brazil
- Thiago C Moulin
- Functional Pharmacology Unit, Department of Neuroscience, Uppsala University, Uppsala 751 24, Sweden
- Jonathan L C Lee
- University of Birmingham, School of Psychology, Edgbaston, Birmingham B15 2TT, United Kingdom
- Olavo B Amaral
- Institute of Medical Biochemistry Leopoldo de Meis, Federal University of Rio de Janeiro, Rio de Janeiro 21941-902, Brazil

19
Rajakumar A, Rinzel J, Chen ZS. Stimulus-Driven and Spontaneous Dynamics in Excitatory-Inhibitory Recurrent Neural Networks for Sequence Representation. Neural Comput 2021; 33:2603-2645. [PMID: 34530451 PMCID: PMC8750453 DOI: 10.1162/neco_a_01418] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Accepted: 04/08/2021] [Indexed: 11/04/2022]
Abstract
Recurrent neural networks (RNNs) have been widely used to model sequential neural dynamics ("neural sequences") of cortical circuits in cognitive and motor tasks. Efforts to incorporate biological constraints such as Dale's principle help elucidate the neural representations and mechanisms of the underlying circuits. We trained an excitatory-inhibitory RNN to learn neural sequences in a supervised manner and studied the representations and dynamic attractors of the trained network. The trained RNN robustly triggered the sequence in response to various input signals and interpolated a time-warped input for sequence representation. Interestingly, a learned sequence can repeat periodically when the RNN evolves beyond the duration of a single sequence. The eigenspectrum of the learned recurrent connectivity matrix, with growing or damping modes, together with the RNN's nonlinearity, was adequate to generate a limit cycle attractor. We further examined the stability of dynamic attractors while training the RNN to learn two sequences. Together, our results provide a general framework for understanding neural sequence representation in excitatory-inhibitory RNNs.
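A common way to impose Dale's principle in such trained networks (sketched here with invented sizes and ratios, not necessarily the authors' exact parametrization) is to constrain the sign of each unit's outgoing weights, e.g., by clipping after each optimization step:

```python
import numpy as np

def dale_project(W, exc_mask):
    """Enforce Dale's principle on a recurrent weight matrix W, where
    W[i, j] is the weight from presynaptic unit j to postsynaptic unit i:
    excitatory units keep non-negative outgoing weights, inhibitory units
    keep non-positive ones."""
    W = W.copy()
    W[:, exc_mask] = np.clip(W[:, exc_mask], 0.0, None)
    W[:, ~exc_mask] = np.clip(W[:, ~exc_mask], None, 0.0)
    return W

rng = np.random.default_rng(1)
n = 100
exc_mask = np.arange(n) < 80                          # e.g., 80% excitatory units
W = dale_project(rng.standard_normal((n, n)) / np.sqrt(n), exc_mask)
```

Projecting after every gradient step keeps training otherwise unconstrained while guaranteeing the sign structure of the final connectivity.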
Affiliation(s)
- Alfred Rajakumar
- Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, USA.
- John Rinzel
- Courant Institute of Mathematical Sciences and Center for Neural Science, New York University, New York, NY 10012, USA.
- Zhe S Chen
- Department of Psychiatry and Neuroscience Institute, New York University School of Medicine, New York, NY 10016, USA.

20
Raman DV, O'Leary T. Optimal plasticity for memory maintenance during ongoing synaptic change. eLife 2021; 10:62912. [PMID: 34519270 PMCID: PMC8504970 DOI: 10.7554/elife.62912] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2020] [Accepted: 09/13/2021] [Indexed: 11/13/2022] Open
Abstract
Synaptic connections in many brain circuits fluctuate, exhibiting substantial turnover and remodelling over hours to days. Surprisingly, experiments show that most of this flux in connectivity persists in the absence of learning or known plasticity signals. How can neural circuits retain learned information despite a large proportion of ongoing and potentially disruptive synaptic changes? We address this question from first principles by analysing how much compensatory plasticity would be required to optimally counteract ongoing fluctuations, regardless of whether fluctuations are random or systematic. Remarkably, we find that the answer is largely independent of plasticity mechanisms and circuit architectures: compensatory plasticity should be at most equal in magnitude to fluctuations, and often less, in direct agreement with previously unexplained experimental observations. Moreover, our analysis shows that a high proportion of learning-independent synaptic change is consistent with plasticity mechanisms that accurately compute error gradients.
Affiliation(s)
- Dhruva V Raman
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Timothy O'Leary
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom

21
Mather M. Noradrenaline in the aging brain: Promoting cognitive reserve or accelerating Alzheimer's disease? Semin Cell Dev Biol 2021; 116:108-124. [PMID: 34099360 PMCID: PMC8292227 DOI: 10.1016/j.semcdb.2021.05.013] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2021] [Revised: 05/04/2021] [Accepted: 05/05/2021] [Indexed: 12/19/2022]
Abstract
Many believe that engaging in novel and mentally challenging activities promotes brain health and prevents Alzheimer's disease in later life. However, mental stimulation may also have risks as well as benefits. As neurons release neurotransmitters, they often also release amyloid peptides and tau proteins into the extracellular space. These by-products of neural activity can aggregate into the tau tangle and amyloid plaque signatures of Alzheimer's disease. Over time, more active brain regions accumulate more pathology. Thus, increasing brain activity can have a cost. But the neuromodulator noradrenaline, released during novel and mentally stimulating events, may have some protective effects-as well as some negative effects. Via its inhibitory and excitatory effects on neurons and microglia, noradrenaline sometimes prevents and sometimes accelerates the production and accumulation of amyloid-β and tau in various brain regions. Both α2A- and β-adrenergic receptors influence amyloid-β production and tau hyperphosphorylation. Adrenergic activity also influences clearance of amyloid-β and tau. Furthermore, some findings suggest that Alzheimer's disease increases noradrenergic activity, at least in its early phases. Because older brains clear the by-products of synaptic activity less effectively, increased synaptic activity in the older brain risks accelerating the accumulation of Alzheimer's pathology more than it does in the younger brain.
Affiliation(s)
- Mara Mather
- Leonard Davis School of Gerontology, Department of Psychology, & Department of Biomedical Engineering, University of Southern California, 3715 McClintock Ave, Los Angeles, CA 90089, United States.

22
Computational roles of intrinsic synaptic dynamics. Curr Opin Neurobiol 2021; 70:34-42. [PMID: 34303124 DOI: 10.1016/j.conb.2021.06.002] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2021] [Revised: 05/14/2021] [Accepted: 06/15/2021] [Indexed: 12/26/2022]
Abstract
Conventional theories assume that long-term information storage in the brain is implemented by modifying synaptic efficacy. Recent experimental findings challenge this view by demonstrating that dendritic spine sizes, and their corresponding synaptic weights, are highly volatile even in the absence of neural activity. Here, we review previous computational work on the roles of these intrinsic synaptic dynamics. We first present the possibility that neuronal networks can sustain stable performance in their presence, and we then hypothesize that intrinsic dynamics could be more than mere noise to be withstood: they may actively improve information processing in the brain.
23
Aljadeff J, Gillett M, Pereira Obilinovic U, Brunel N. From synapse to network: models of information storage and retrieval in neural circuits. Curr Opin Neurobiol 2021; 70:24-33. [PMID: 34175521 DOI: 10.1016/j.conb.2021.05.005] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2021] [Revised: 05/06/2021] [Accepted: 05/25/2021] [Indexed: 10/21/2022]
Abstract
The mechanisms of information storage and retrieval in brain circuits are still the subject of debate. It is widely believed that information is stored at least in part through changes in synaptic connectivity in networks that encode this information and that these changes lead in turn to modifications of network dynamics, such that the stored information can be retrieved at a later time. Here, we review recent progress in deriving synaptic plasticity rules from experimental data and in understanding how plasticity rules affect the dynamics of recurrent networks. We show that the dynamics generated by such networks exhibit a large degree of diversity, depending on parameters, similar to experimental observations in vivo during delayed response tasks.
Affiliation(s)
- Johnatan Aljadeff
- Neurobiology Section, Division of Biological Sciences, UC San Diego, USA
- Nicolas Brunel
- Department of Neurobiology, Duke University, USA; Department of Physics, Duke University, USA.

24
Rao-Ruiz P, Visser E, Mitrić M, Smit AB, van den Oever MC. A Synaptic Framework for the Persistence of Memory Engrams. Front Synaptic Neurosci 2021; 13:661476. [PMID: 33841124 PMCID: PMC8024575 DOI: 10.3389/fnsyn.2021.661476] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2021] [Accepted: 02/26/2021] [Indexed: 12/31/2022] Open
Abstract
The ability to store and retrieve learned information over prolonged periods of time is an essential and intriguing property of the brain. Insight into the neurobiological mechanisms that underlie memory consolidation is of utmost importance for our understanding of memory persistence and how this is affected in memory disorders. Recent evidence indicates that a given memory is encoded by sparsely distributed neurons that become highly activated during learning, so-called engram cells. Research by us and others confirms the persistent nature of cortical engram cells by showing that these neurons are required for memory expression up to at least 1 month after they were activated during learning. Strengthened synaptic connectivity between engram cells is thought to ensure reactivation of the engram cell network during retrieval. However, given the continuous integration of new information into existing neuronal circuits and the relatively rapid turnover rate of synaptic proteins, it is unclear whether a lasting learning-induced increase in synaptic connectivity is mediated by stable synapses or by continuous dynamic turnover of synapses of the engram cell network. Here, we first discuss evidence for the persistence of engram cells and memory-relevant adaptations in synaptic plasticity, and then propose models of synaptic adaptations and molecular mechanisms that may support memory persistence through the maintenance of enhanced synaptic connectivity within an engram cell network.
Affiliation(s)
- Priyanka Rao-Ruiz
- Department of Molecular and Cellular Neurobiology, Center for Neurogenomics and Cognitive Research, Amsterdam Neuroscience, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Esther Visser
- Department of Molecular and Cellular Neurobiology, Center for Neurogenomics and Cognitive Research, Amsterdam Neuroscience, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Miodrag Mitrić
- Department of Molecular and Cellular Neurobiology, Center for Neurogenomics and Cognitive Research, Amsterdam Neuroscience, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- August B Smit
- Department of Molecular and Cellular Neurobiology, Center for Neurogenomics and Cognitive Research, Amsterdam Neuroscience, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Michel C van den Oever
- Department of Molecular and Cellular Neurobiology, Center for Neurogenomics and Cognitive Research, Amsterdam Neuroscience, Vrije Universiteit Amsterdam, Amsterdam, Netherlands

25
Glutamatergic Dysfunction and Synaptic Ultrastructural Alterations in Schizophrenia and Autism Spectrum Disorder: Evidence from Human and Rodent Studies. Int J Mol Sci 2020; 22:ijms22010059. [PMID: 33374598 PMCID: PMC7793137 DOI: 10.3390/ijms22010059] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2020] [Revised: 12/15/2020] [Accepted: 12/22/2020] [Indexed: 12/12/2022] Open
Abstract
The correlation between dysfunction in the glutamatergic system and neuropsychiatric disorders, including schizophrenia and autism spectrum disorder, is undisputed. Both disorders are associated with molecular and ultrastructural alterations that affect synaptic plasticity and thus the molecular and physiological basis of learning and memory. Altered synaptic plasticity, accompanied by changes in protein synthesis and trafficking of postsynaptic proteins, as well as structural modifications of excitatory synapses, are critically involved in the postnatal development of the mammalian nervous system. In this review, we summarize glutamatergic alterations and ultrastructural changes in synapses in schizophrenia and autism spectrum disorder of genetic or drug-related origin, and briefly comment on the possible reversibility of these neuropsychiatric disorders in the light of findings in regular synaptic physiology.
26
Lu J, Zuo Y. Shedding light on learning and memory: optical interrogation of the synaptic circuitry. Curr Opin Neurobiol 2020; 67:138-144. [PMID: 33279804 DOI: 10.1016/j.conb.2020.10.015] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2020] [Revised: 10/16/2020] [Accepted: 10/18/2020] [Indexed: 01/02/2023]
Abstract
In the quest for the physical substrate of learning and memory, a consensus is gradually emerging that memory traces are stored in specific neuronal populations and the synaptic circuits that connect them. In this review, we discuss recent progress in understanding the reorganization of synaptic circuits and neuronal assemblies associated with learning and memory, with an emphasis on optical techniques for in vivo interrogation. We also highlight some open questions on the missing link between synaptic modifications and neuronal coding, and on how stable memory persists despite synaptic and neuronal fluctuations.
Affiliation(s)
- Ju Lu
- Department of Molecular, Cell and Developmental Biology, University of California Santa Cruz, 1156 High Street, Santa Cruz, CA 95064, USA
- Yi Zuo
- Department of Molecular, Cell and Developmental Biology, University of California Santa Cruz, 1156 High Street, Santa Cruz, CA 95064, USA.