1
Humphries MD. The Computational Bottleneck of Basal Ganglia Output (and What to Do About it). eNeuro 2025; 12:ENEURO.0431-23.2024. PMID: 40274408; PMCID: PMC12039478; DOI: 10.1523/eneuro.0431-23.2024.
Abstract
What the basal ganglia do is an oft-asked question; answers range from the selection of actions to the specification of movement to the estimation of time. Here, I argue that how the basal ganglia do what they do is a less-asked but equally important question. I show that the output regions of the basal ganglia create a stringent computational bottleneck, both structurally, because they have far fewer neurons than do their target regions, and dynamically, because of their tonic, inhibitory output. My proposed solution to this bottleneck is that the activity of an output neuron is setting the weight of a basis function, a function defined by that neuron's synaptic contacts. I illustrate how this may work in practice, allowing basal ganglia output to shift cortical dynamics and control eye movements via the superior colliculus. This solution can account for troubling issues in our understanding of the basal ganglia: why we see output neurons increasing their activity during behavior, rather than only decreasing as predicted by theories based on disinhibition, and why the output of the basal ganglia seems to have so many codes squashed into such a tiny region of the brain.
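As a rough illustration of the basis-function proposal in this abstract, the toy Python sketch below treats each basal ganglia output neuron's synaptic footprint onto a larger target population as a fixed basis function whose weight is set by deviations of that neuron's tonic firing from baseline. The Gaussian footprints, the baseline rate, and the sign convention (pauses below baseline contribute positive weight) are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

n_out = 30      # basal ganglia output neurons (e.g., SNr), far fewer than their targets
n_target = 500  # downstream neurons (e.g., superior colliculus)

# Fixed, non-negative "basis functions": each output neuron's synaptic footprint
# over the target population (hypothetical Gaussian footprints).
positions = np.linspace(0, 1, n_target)
centers = rng.uniform(0, 1, n_out)
phi = np.exp(-0.5 * ((positions[None, :] - centers[:, None]) / 0.05) ** 2)

# Tonic inhibitory firing; deviations from baseline set each basis function's weight
# (a pause below baseline contributes positively -- an illustrative sign convention).
baseline = 60.0                                  # spikes/s
rates = baseline + rng.normal(0.0, 15.0, n_out)  # mixture of increases and decreases
weights = baseline - rates

# Net drive to the target population is a weighted sum of the basis functions.
target_drive = weights @ phi
print(target_drive.shape)  # (500,)
```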
2
Hasnain MA, Birnbaum JE, Ugarte Nunez JL, Hartman EK, Chandrasekaran C, Economo MN. Separating cognitive and motor processes in the behaving mouse. Nat Neurosci 2025; 28:640-653. PMID: 39905210; DOI: 10.1038/s41593-024-01859-1.
Abstract
The cognitive processes supporting complex animal behavior are closely associated with movements responsible for critical processes, such as facial expressions or the active sampling of our environments. These movements are strongly related to neural activity across much of the brain and are often highly correlated with ongoing cognitive processes. A fundamental issue for understanding the neural signatures of cognition and movements is whether cognitive processes are separable from related movements or if they are driven by common neural mechanisms. Here we demonstrate how the separability of cognitive and motor processes can be assessed and, when separable, how the neural dynamics associated with each component can be isolated. We designed a behavioral task in mice that involves multiple cognitive processes, and we show that dynamics commonly taken to support cognitive processes are strongly contaminated by movements. When cognitive and motor components are isolated using a novel approach for subspace decomposition, we find that they exhibit distinct dynamical trajectories and are encoded by largely separate populations of cells. Accurately isolating dynamics associated with particular cognitive and motor processes will be essential for developing conceptual and computational models of neural circuit function.
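The abstract's subspace decomposition method is not specified here, so the sketch below shows only a generic way to separate movement-related from putatively cognitive population activity: regress activity on tracked movement variables, define a motor subspace from the movement-predicted activity, and project the residual out of it. All array shapes, the placeholder data, and the regression-plus-PCA recipe are assumptions for illustration, not the authors' approach.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins: trials x time x neurons population activity and simultaneously
# tracked movement variables (placeholders, not the paper's data).
n_trials, n_time, n_neurons, n_move = 200, 50, 80, 6
activity = rng.normal(size=(n_trials, n_time, n_neurons))
movement = rng.normal(size=(n_trials, n_time, n_move))

X = activity.reshape(-1, n_neurons)
M = movement.reshape(-1, n_move)

# 1) Estimate movement-related activity by least-squares regression.
beta, *_ = np.linalg.lstsq(M, X, rcond=None)
X_move = M @ beta

# 2) Motor subspace: top principal components of the movement-predicted activity.
_, _, Vt = np.linalg.svd(X_move - X_move.mean(0), full_matrices=False)
motor_dims = Vt[:4].T                     # neurons x 4

# 3) Putative "cognitive" activity: the residual, with the motor subspace projected out.
residual = X - X_move
proj = motor_dims @ motor_dims.T
X_cog = residual - residual @ proj

print(X_cog.shape)
```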
Affiliation(s)
- Munib A Hasnain
  - Department of Biomedical Engineering, Boston University, Boston, MA, USA
  - Center for Neurophotonics, Boston University, Boston, MA, USA
- Jaclyn E Birnbaum
  - Center for Neurophotonics, Boston University, Boston, MA, USA
  - Graduate Program for Neuroscience, Boston University, Boston, MA, USA
- Emma K Hartman
  - Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Chandramouli Chandrasekaran
  - Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA
  - Department of Neurobiology & Anatomy, Boston University, Boston, MA, USA
  - Center for Systems Neuroscience, Boston University, Boston, MA, USA
- Michael N Economo
  - Department of Biomedical Engineering, Boston University, Boston, MA, USA
  - Center for Neurophotonics, Boston University, Boston, MA, USA
  - Center for Systems Neuroscience, Boston University, Boston, MA, USA
3
Liu C, Jia S, Liu H, Zhao X, Li CT, Xu B, Zhang T. Recurrent neural networks with transient trajectory explain working memory encoding mechanisms. Commun Biol 2025; 8:137. PMID: 39875500; PMCID: PMC11775331; DOI: 10.1038/s42003-024-07282-3.
Abstract
Whether working memory (WM) is encoded by persistent activity using attractors or by dynamic activity using transient trajectories has been debated for decades in both experimental and modeling studies, and a consensus has not been reached. Even though many recurrent neural networks (RNNs) have been proposed to simulate WM, most networks are designed to match particular experimental observations and show either transient or persistent activity. The few that consider networks with both activity patterns have not attempted to directly compare their memory capabilities. In this study, we build transient-trajectory-based RNNs (TRNNs) and compare them to vanilla RNNs with more persistent activity. The TRNN incorporates biologically plausible modifications, including self-inhibition, sparse connectivity, and hierarchical topology. In addition to producing activity patterns that resemble animal recordings and retaining versatility to variable encoding times, TRNNs show better performance in delayed choice and spatial memory reinforcement learning tasks. This study therefore provides evidence, from a model-design point of view, supporting the transient-activity account of WM.
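The authors' TRNN architecture is only summarized in the abstract; the sketch below is a generic rate RNN that includes two of the listed ingredients (self-inhibition and sparse recurrent connectivity) and simulates a brief cue followed by a delay, so one can inspect whether the population state keeps moving (a transient trajectory) rather than settling. Network size, connection probability, gain, and time constants are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 200
p_connect = 0.1                                  # sparse recurrent connectivity (assumption)
W = rng.normal(0.0, 1.0 / np.sqrt(p_connect * n), (n, n))
W *= rng.random((n, n)) < p_connect
np.fill_diagonal(W, -1.0)                        # self-inhibition on every unit (assumption)

dt, tau = 1e-3, 20e-3
T = int(1.0 / dt)                                # simulate a 1 s "delay period"
cue = rng.normal(0.0, 1.0, n)                    # sample stimulus delivered at trial start
x = np.zeros(n)
rates = np.zeros((T, n))

for t in range(T):
    inp = cue if t < int(0.1 / dt) else 0.0      # 100 ms cue, then an unstimulated delay
    x += dt / tau * (-x + W @ np.tanh(x) + inp)
    rates[t] = np.tanh(x)

# Inspect how far the population state travels during the delay: the overlap with the
# state at cue offset keeps changing if coding is transient rather than persistent.
overlap = rates @ rates[int(0.1 / dt)]
print(overlap[::100].round(2))
```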
Affiliation(s)
- Chenghao Liu
  - Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Shuncheng Jia
  - Institute of Automation, Chinese Academy of Sciences, Beijing, China
  - School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Hongxing Liu
  - Institute of Automation, Chinese Academy of Sciences, Beijing, China
  - School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Xuanle Zhao
  - Institute of Automation, Chinese Academy of Sciences, Beijing, China
  - School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Bo Xu
  - Institute of Automation, Chinese Academy of Sciences, Beijing, China
  - School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Tielin Zhang
  - Institute of Automation, Chinese Academy of Sciences, Beijing, China
  - School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
  - Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
4
Soldado-Magraner J, Minai Y, Yu BM, Smith MA. Robustness of working memory to prefrontal cortex microstimulation. bioRxiv 2025:2025.01.14.632986. PMID: 39868186; PMCID: PMC11761800; DOI: 10.1101/2025.01.14.632986.
Abstract
Delay period activity in the dorso-lateral prefrontal cortex (dlPFC) has been linked to the maintenance and control of sensory information in working memory. The stability of working memory related signals found in such delay period activity is believed to support robust memory-guided behavior during sensory perturbations, such as distractors. Here, we directly probed dlPFC's delay period activity with a diverse set of activity perturbations, and measured their consequences on neural activity and behavior. We applied patterned microstimulation to the dlPFC of monkeys implanted with multi-electrode arrays by electrically stimulating different electrodes in the array while the monkeys performed a memory-guided saccade task. We found that the microstimulation perturbations affected spatial working memory-related signals in individual dlPFC neurons. However, task performance remained largely unaffected. These apparently contradictory observations could be understood by examining different dimensions of the dlPFC population activity. In dimensions where working memory related signals naturally evolved over time, microstimulation impacted neural activity. In contrast, in dimensions containing working memory related signals that were stable over time, microstimulation minimally impacted neural activity. This dissociation explained how working memory-related information could be stably maintained in dlPFC despite the activity changes induced by microstimulation. Thus, working memory processes are robust to a variety of activity perturbations in the dlPFC.
Affiliation(s)
- Joana Soldado-Magraner
  - Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh 15213, Pennsylvania, USA
  - Center for the Neural Basis of Cognition, Carnegie Mellon University and University of Pittsburgh, Pittsburgh 15213, Pennsylvania, USA
  - Neuroscience Institute, Carnegie Mellon University, Pittsburgh 15213, Pennsylvania, USA
- Yuki Minai
  - Machine Learning Department, Carnegie Mellon University, Pittsburgh 15213, Pennsylvania, USA
  - Center for the Neural Basis of Cognition, Carnegie Mellon University and University of Pittsburgh, Pittsburgh 15213, Pennsylvania, USA
  - Neuroscience Institute, Carnegie Mellon University, Pittsburgh 15213, Pennsylvania, USA
- Byron M. Yu
  - Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh 15213, Pennsylvania, USA
  - Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh 15213, Pennsylvania, USA
  - Center for the Neural Basis of Cognition, Carnegie Mellon University and University of Pittsburgh, Pittsburgh 15213, Pennsylvania, USA
  - Neuroscience Institute, Carnegie Mellon University, Pittsburgh 15213, Pennsylvania, USA
- Matthew A. Smith
  - Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh 15213, Pennsylvania, USA
  - Center for the Neural Basis of Cognition, Carnegie Mellon University and University of Pittsburgh, Pittsburgh 15213, Pennsylvania, USA
  - Neuroscience Institute, Carnegie Mellon University, Pittsburgh 15213, Pennsylvania, USA
5
Kim JH, Daie K, Li N. A combinatorial neural code for long-term motor memory. Nature 2025; 637:663-672. PMID: 39537930; PMCID: PMC11735397; DOI: 10.1038/s41586-024-08193-3.
Abstract
Motor skill repertoire can be stably retained over long periods, but the neural mechanism that underlies stable memory storage remains poorly understood (refs. 1-8). Moreover, it is unknown how existing motor memories are maintained as new motor skills are continuously acquired. Here we tracked neural representation of learned actions throughout a significant portion of the lifespan of a mouse and show that learned actions are stably retained in combination with context, which protects existing memories from erasure during new motor learning. We established a continual learning paradigm in which mice learned to perform directional licking in different task contexts while we tracked motor cortex activity for up to six months using two-photon imaging. Within the same task context, activity driving directional licking was stable over time with little representational drift. When learning new task contexts, new preparatory activity emerged to drive the same licking actions. Learning created parallel new motor memories instead of modifying existing representations. Re-learning to make the same actions in the previous task context re-activated the previous preparatory activity, even months later. Continual learning of new task contexts kept creating new preparatory activity patterns. Context-specific memories, as we observed in the motor system, may provide a solution for stable memory storage throughout continual learning.
Affiliation(s)
- Jae-Hyun Kim
  - Department of Neurobiology, Duke University, Durham, NC, USA
  - Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Kayvon Daie
  - Allen Institute for Neural Dynamics, Seattle, WA, USA
- Nuo Li
  - Department of Neurobiology, Duke University, Durham, NC, USA
  - Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
6
Kikumoto A, Bhandari A, Shibata K, Badre D. A transient high-dimensional geometry affords stable conjunctive subspaces for efficient action selection. Nat Commun 2024; 15:8513. PMID: 39353961; PMCID: PMC11445473; DOI: 10.1038/s41467-024-52777-6.
Abstract
Flexible action selection requires cognitive control mechanisms capable of mapping the same inputs to different output actions depending on the context. From a neural state-space perspective, this requires a control representation that separates similar input neural states by context. Additionally, for action selection to be robust and time-invariant, information must be stable in time, enabling efficient readout. Here, using EEG decoding methods, we investigate how the geometry and dynamics of control representations constrain flexible action selection in the human brain. Participants performed a context-dependent action selection task. A forced-response procedure probed action selection at different states along the neural trajectories. The results show that, before successful responses, there is a transient expansion of representational dimensionality that separates conjunctive subspaces. Further, the dynamics stabilize in the same time window, with entry into this stable, high-dimensional state predictive of individual trial performance. These results establish the neural geometry and dynamics that the human brain needs for flexible control over behavior.
Affiliation(s)
- Atsushi Kikumoto
  - Department of Cognitive and Psychological Sciences, Brown University, Rhode Island, USA
  - RIKEN Center for Brain Science, Wako, Saitama, Japan
- Apoorva Bhandari
  - Department of Cognitive and Psychological Sciences, Brown University, Rhode Island, USA
- David Badre
  - Department of Cognitive and Psychological Sciences, Brown University, Rhode Island, USA
  - Carney Institute for Brain Science, Brown University, Providence, Rhode Island, USA
7
Kikumoto A, Bhandari A, Shibata K, Badre D. A Transient High-dimensional Geometry Affords Stable Conjunctive Subspaces for Efficient Action Selection. bioRxiv 2024:2023.06.09.544428. PMID: 37333209; PMCID: PMC10274903; DOI: 10.1101/2023.06.09.544428.
Abstract
Flexible action selection requires cognitive control mechanisms capable of mapping the same inputs to different output actions depending on the context. From a neural state-space perspective, this requires a control representation that separates similar input neural states by context. Additionally, for action selection to be robust and time-invariant, information must be stable in time, enabling efficient readout. Here, using EEG decoding methods, we investigate how the geometry and dynamics of control representations constrain flexible action selection in the human brain. Participants performed a context-dependent action selection task. A forced-response procedure probed action selection at different states along the neural trajectories. The results show that, before successful responses, there is a transient expansion of representational dimensionality that separates conjunctive subspaces. Further, the dynamics stabilize in the same time window, with entry into this stable, high-dimensional state predictive of individual trial performance. These results establish the neural geometry and dynamics that the human brain needs for flexible control over behavior.
Affiliation(s)
- Atsushi Kikumoto
  - Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Rhode Island, USA
  - RIKEN Center for Brain Science, Wako, Saitama, Japan
- Apoorva Bhandari
  - Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Rhode Island, USA
- David Badre
  - Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Rhode Island, USA
  - Carney Institute for Brain Science, Brown University, Providence, Rhode Island, USA
8
Courellis HS, Valiante TA, Mamelak AN, Adolphs R, Rutishauser U. Neural dynamics underlying minute-timescale persistent behavior in the human brain. bioRxiv 2024:2024.07.16.603717. PMID: 39071326; PMCID: PMC11275932; DOI: 10.1101/2024.07.16.603717.
Abstract
The ability to pursue long-term goals relies on representations of task context that can both be maintained over long periods of time and switched flexibly when goals change. Little is known about the neural substrate for such minute-scale maintenance of task sets. Utilizing recordings in neurosurgical patients, we examined how groups of neurons in the human medial frontal cortex and hippocampus represent task contexts. When cued explicitly, task context was encoded in both brain areas and changed rapidly at task boundaries. Hippocampus exhibited a temporally dynamic code with fast decorrelation over time, preventing cross-temporal generalization. Medial frontal cortex exhibited a static code that decorrelated slowly, allowing generalization across minutes of time. When task context needed to be inferred as a latent variable, hippocampus encoded task context with a static code. These findings reveal two possible regimes for encoding minute-scale task-context representations that were engaged differently based on task demands.
9
Li Q, Sorscher B, Sompolinsky H. Representations and generalization in artificial and brain neural networks. Proc Natl Acad Sci U S A 2024; 121:e2311805121. PMID: 38913896; PMCID: PMC11228472; DOI: 10.1073/pnas.2311805121.
Abstract
Humans and animals excel at generalizing from limited data, a capability yet to be fully replicated in artificial intelligence. This perspective investigates generalization in biological and artificial deep neural networks (DNNs), in both in-distribution and out-of-distribution contexts. We introduce two hypotheses: First, the geometric properties of the neural manifolds associated with discrete cognitive entities, such as objects, words, and concepts, are powerful order parameters. They link the neural substrate to the generalization capabilities and provide a unified methodology bridging gaps between neuroscience, machine learning, and cognitive science. We overview recent progress in studying the geometry of neural manifolds, particularly in visual object recognition, and discuss theories connecting manifold dimension and radius to generalization capacity. Second, we suggest that the theory of learning in wide DNNs, especially in the thermodynamic limit, provides mechanistic insights into the learning processes generating desired neural representational geometries and generalization. This includes the role of weight norm regularization, network architecture, and hyper-parameters. We will explore recent advances in this theory and ongoing challenges. We also discuss the dynamics of learning and its relevance to the issue of representational drift in the brain.
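As a concrete handle on "manifold dimension and radius", the sketch below computes two common, simplified proxies on toy class manifolds: the participation ratio as an effective dimensionality and an RMS spread around the centroid as a radius. These are illustrative stand-ins, not the mean-field-theoretic quantities used in the capacity theory this perspective reviews, and the Gaussian data are placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "object manifolds": for each class, a cloud of population responses to
# different exemplars/views of that object (placeholder Gaussian data).
n_classes, n_samples, n_neurons = 5, 100, 300
manifolds = [rng.normal(scale=0.5, size=(n_samples, n_neurons)) + rng.normal(size=n_neurons)
             for _ in range(n_classes)]

def participation_ratio(X):
    """Effective dimensionality of one manifold: (sum lambda)^2 / sum lambda^2."""
    lam = np.linalg.eigvalsh(np.cov(X.T))
    return lam.sum() ** 2 / (lam ** 2).sum()

def radius(X):
    """Simple proxy for manifold radius: RMS spread around the manifold centroid,
    relative to the centroid's norm (not the full mean-field definition)."""
    c = X.mean(0)
    return np.sqrt(((X - c) ** 2).sum(1).mean()) / np.linalg.norm(c)

for i, X in enumerate(manifolds):
    print(f"class {i}: D = {participation_ratio(X):.1f}, R = {radius(X):.2f}")
```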
Affiliation(s)
- Qianyi Li
  - The Harvard Biophysics Graduate Program, Harvard University, Cambridge, MA 02138
  - Center for Brain Science, Harvard University, Cambridge, MA 02138
- Ben Sorscher
  - The Applied Physics Department, Stanford University, Stanford, CA 94305
- Haim Sompolinsky
  - Center for Brain Science, Harvard University, Cambridge, MA 02138
  - Edmond and Lily Safra Center for Brain Sciences, Hebrew University, Jerusalem 9190401, Israel
10
Ostojic S, Fusi S. Computational role of structure in neural activity and connectivity. Trends Cogn Sci 2024; 28:677-690. PMID: 38553340; DOI: 10.1016/j.tics.2024.03.003.
Abstract
One major challenge of neuroscience is identifying structure in seemingly disorganized neural activity. Different types of structure have different computational implications that can help neuroscientists understand the functional role of a particular brain area. Here, we outline a unified approach to characterize structure by inspecting the representational geometry and the modularity properties of the recorded activity and show that a similar approach can also reveal structure in connectivity. We start by setting up a general framework for determining geometry and modularity in activity and connectivity and relating these properties with computations performed by the network. We then use this framework to review the types of structure found in recent studies of model networks performing three classes of computations.
Affiliation(s)
- Srdjan Ostojic
  - Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL Research University, 75005 Paris, France
- Stefano Fusi
  - Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
  - Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
  - Department of Neuroscience, Columbia University, New York, NY, USA
  - Kavli Institute for Brain Science, Columbia University, New York, NY, USA
11
Stroud JP, Duncan J, Lengyel M. The computational foundations of dynamic coding in working memory. Trends Cogn Sci 2024; 28:614-627. PMID: 38580528; DOI: 10.1016/j.tics.2024.02.011.
Abstract
Working memory (WM) is a fundamental aspect of cognition. WM maintenance is classically thought to rely on stable patterns of neural activities. However, recent evidence shows that neural population activities during WM maintenance undergo dynamic variations before settling into a stable pattern. Although this has been difficult to explain theoretically, neural network models optimized for WM typically also exhibit such dynamics. Here, we examine stable versus dynamic coding in neural data, classical models, and task-optimized networks. We review principled mathematical reasons for why classical models do not, while task-optimized models naturally do exhibit dynamic coding. We suggest an update to our understanding of WM maintenance, in which dynamic coding is a fundamental computational feature rather than an epiphenomenon.
Affiliation(s)
- Jake P Stroud
  - Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK
- John Duncan
  - MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Máté Lengyel
  - Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK
  - Center for Cognitive Computation, Department of Cognitive Science, Central European University, Budapest, Hungary
12
Kim JH, Daie K, Li N. A combinatorial neural code for long-term motor memory. bioRxiv 2024:2024.06.05.597627. PMID: 38895416; PMCID: PMC11185691; DOI: 10.1101/2024.06.05.597627.
Abstract
Motor skill repertoire can be stably retained over long periods, but the neural mechanism underlying stable memory storage remains poorly understood. Moreover, it is unknown how existing motor memories are maintained as new motor skills are continuously acquired. Here we tracked neural representation of learned actions throughout a significant portion of a mouse's lifespan, and we show that learned actions are stably retained in motor memory in combination with context, which protects existing memories from erasure during new motor learning. We used automated home-cage training to establish a continual learning paradigm in which mice learned to perform directional licking in different task contexts. We combined this paradigm with chronic two-photon imaging of motor cortex activity for up to 6 months. Within the same task context, activity driving directional licking was stable over time with little representational drift. When learning new task contexts, new preparatory activity emerged to drive the same licking actions. Learning created parallel new motor memories while retaining the previous memories. Re-learning to make the same actions in the previous task context re-activated the previous preparatory activity, even months later. At the same time, continual learning of new task contexts kept creating new preparatory activity patterns. Context-specific memories, as we observed in the motor system, may provide a solution for stable memory storage throughout continual learning. Learning in new contexts produces parallel new representations instead of modifying existing representations, thus protecting existing motor repertoire from erasure.
13
Bellafard A, Namvar G, Kao JC, Vaziri A, Golshani P. Volatile working memory representations crystallize with practice. Nature 2024; 629:1109-1117. PMID: 38750359; PMCID: PMC11136659; DOI: 10.1038/s41586-024-07425-w.
Abstract
Working memory, the process through which information is transiently maintained and manipulated over a brief period, is essential for most cognitive functions (refs. 1-4). However, the mechanisms underlying the generation and evolution of working-memory neuronal representations at the population level over long timescales remain unclear. Here, to identify these mechanisms, we trained head-fixed mice to perform an olfactory delayed-association task in which the mice made decisions depending on the sequential identity of two odours separated by a 5 s delay. Optogenetic inhibition of secondary motor neurons during the late-delay and choice epochs strongly impaired the task performance of the mice. Mesoscopic calcium imaging of large neuronal populations of the secondary motor cortex (M2), retrosplenial cortex (RSA) and primary motor cortex (M1) showed that many late-delay-epoch-selective neurons emerged in M2 as the mice learned the task. Working-memory late-delay decoding accuracy substantially improved in the M2, but not in the M1 or RSA, as the mice became experts. During the early expert phase, working-memory representations during the late-delay epoch drifted across days, while the stimulus and choice representations stabilized. In contrast to single-plane layer 2/3 (L2/3) imaging, simultaneous volumetric calcium imaging of up to 73,307 M2 neurons, which included superficial L5 neurons, also revealed stabilization of late-delay working-memory representations with continued practice. Thus, delay- and choice-related activities that are essential for working-memory performance drift during learning and stabilize only after several days of expert performance.
Affiliation(s)
- Arash Bellafard
  - Department of Neurology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Ghazal Namvar
  - Department of Neurology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Jonathan C Kao
  - Department of Electrical and Computer Engineering, Henry Samueli School of Engineering, University of California, Los Angeles, CA, USA
- Alipasha Vaziri
  - Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY, USA
  - The Kavli Neural Systems Institute, The Rockefeller University, New York, NY, USA
- Peyman Golshani
  - Department of Neurology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
  - Greater Los Angeles VA Medical Center, Los Angeles, CA, USA
  - Semel Institute for Neuroscience and Human Behavior, University of California, Los Angeles, CA, USA
  - Integrative Center for Learning and Memory, University of California, Los Angeles, CA, USA
  - Intellectual and Developmental Disability Research Center, University of California, Los Angeles, CA, USA
14
Losey DM, Hennig JA, Oby ER, Golub MD, Sadtler PT, Quick KM, Ryu SI, Tyler-Kabara EC, Batista AP, Yu BM, Chase SM. Learning leaves a memory trace in motor cortex. Curr Biol 2024; 34:1519-1531.e4. PMID: 38531360; PMCID: PMC11097210; DOI: 10.1016/j.cub.2024.03.003.
Abstract
How are we able to learn new behaviors without disrupting previously learned ones? To understand how the brain achieves this, we used a brain-computer interface (BCI) learning paradigm, which enables us to detect the presence of a memory of one behavior while performing another. We found that learning to use a new BCI map altered the neural activity that monkeys produced when they returned to using a familiar BCI map in a way that was specific to the learning experience. That is, learning left a "memory trace" in the primary motor cortex. This memory trace coexisted with proficient performance under the familiar map, primarily by altering neural activity in dimensions that did not impact behavior. Forming memory traces might be how the brain is able to provide for the joint learning of multiple behaviors without interference.
Affiliation(s)
- Darby M Losey
  - Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA
  - Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA
  - Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Jay A Hennig
  - Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA
  - Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA
  - Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Emily R Oby
  - Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA
  - Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Matthew D Golub
  - Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA
  - Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
  - Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA
  - Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA 98195, USA
- Patrick T Sadtler
  - Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA
  - Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Kristin M Quick
  - Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA
  - Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Stephen I Ryu
  - Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA
  - Department of Neurosurgery, Palo Alto Medical Foundation, Palo Alto, CA 94301, USA
- Elizabeth C Tyler-Kabara
  - Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA
  - Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, PA 15213, USA
  - Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA 15213, USA
  - Department of Neurosurgery, Dell Medical School, University of Texas at Austin, Austin, TX 78712, USA
- Aaron P Batista
  - Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA
  - Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Byron M Yu
  - Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA
  - Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA
  - Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
  - Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Steven M Chase
  - Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA
  - Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA
  - Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
15
Pereira-Obilinovic U, Hou H, Svoboda K, Wang XJ. Brain mechanism of foraging: Reward-dependent synaptic plasticity versus neural integration of values. Proc Natl Acad Sci U S A 2024; 121:e2318521121. PMID: 38551832; PMCID: PMC10998608; DOI: 10.1073/pnas.2318521121.
Abstract
During foraging behavior, action values are persistently encoded in neural activity and updated depending on the history of choice outcomes. What is the neural mechanism for action value maintenance and updating? Here, we explore two contrasting network models: synaptic learning of action value versus neural integration. We show that both models can reproduce extant experimental data, but they yield distinct predictions about the underlying biological neural circuits. In particular, the neural integrator model but not the synaptic model requires that reward signals are mediated by neural pools selective for action alternatives and their projections are aligned with linear attractor axes in the valuation system. We demonstrate experimentally observable neural dynamical signatures and feasible perturbations to differentiate the two contrasting scenarios, suggesting that the synaptic model is a more robust candidate mechanism. Overall, this work provides a modeling framework to guide future experimental research on probabilistic foraging.
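The two contrasting mechanisms can be caricatured in a few lines: a reward-dependent synaptic update of the chosen action's value versus a leaky neural integrator of action-specific reward input. The sketch below runs both side by side on a toy two-armed bandit; the reward probabilities, learning rate, leak, and plain softmax choice rule are arbitrary illustrative assumptions, not the paper's circuit models.

```python
import numpy as np

rng = np.random.default_rng(4)

n_trials = 500
p_reward = np.array([0.7, 0.3])       # hypothetical reward probabilities of two options
alpha = 0.2                           # learning rate (synaptic model)
leak = 0.95                           # retention factor (integrator model)

v_syn = np.zeros(2)                   # values stored in synaptic weights
v_int = np.zeros(2)                   # values stored as persistent integrator activity

for _ in range(n_trials):
    # Softmax choice from the synaptic values (either value signal could drive choice).
    p = np.exp(3 * v_syn) / np.exp(3 * v_syn).sum()
    choice = rng.choice(2, p=p)
    reward = float(rng.random() < p_reward[choice])

    # Synaptic model: reward-dependent plasticity updates only the chosen value.
    v_syn[choice] += alpha * (reward - v_syn[choice])

    # Integrator model: action-selective pools leak and integrate reward input.
    inp = np.zeros(2)
    inp[choice] = reward
    v_int = leak * v_int + (1 - leak) * inp

print(v_syn, v_int)
```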
Affiliation(s)
- Ulises Pereira-Obilinovic
  - Center for Neural Science, New York University, New York, NY 10003
  - Allen Institute for Neural Dynamics, Seattle, WA 98109
- Han Hou
  - Allen Institute for Neural Dynamics, Seattle, WA 98109
- Karel Svoboda
  - Allen Institute for Neural Dynamics, Seattle, WA 98109
- Xiao-Jing Wang
  - Center for Neural Science, New York University, New York, NY 10003
16
Churchland MM, Shenoy KV. Preparatory activity and the expansive null-space. Nat Rev Neurosci 2024; 25:213-236. PMID: 38443626; DOI: 10.1038/s41583-024-00796-z.
Abstract
The study of the cortical control of movement experienced a conceptual shift over recent decades, as the basic currency of understanding shifted from single-neuron tuning towards population-level factors and their dynamics. This transition was informed by a maturing understanding of recurrent networks, where mechanism is often characterized in terms of population-level factors. By estimating factors from data, experimenters could test network-inspired hypotheses. Central to such hypotheses are 'output-null' factors that do not directly drive motor outputs yet are essential to the overall computation. In this Review, we highlight how the hypothesis of output-null factors was motivated by the venerable observation that motor-cortex neurons are active during movement preparation, well before movement begins. We discuss how output-null factors then became similarly central to understanding neural activity during movement. We discuss how this conceptual framework provided key analysis tools, making it possible for experimenters to address long-standing questions regarding motor control. We highlight an intriguing trend: as experimental and theoretical discoveries accumulate, the range of computational roles hypothesized to be subserved by output-null factors continues to expand.
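The output-null idea can be made concrete with a small linear-algebra sketch: given a hypothetical readout matrix from population factors to motor output, the row space of the readout defines output-potent dimensions and its orthogonal complement defines output-null dimensions, in which preparatory activity can evolve without producing output. The random readout and dimensions below are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

n_factors, n_outputs = 10, 2
C = rng.normal(size=(n_outputs, n_factors))    # hypothetical readout: output = C @ factors

# Output-potent subspace: row space of C. Output-null subspace: its orthogonal complement.
_, _, Vt = np.linalg.svd(C)
potent = Vt[:n_outputs].T                      # factor-space basis, shape (10, 2)
null = Vt[n_outputs:].T                        # shape (10, 8)

# A preparatory state confined to the null space produces no motor output...
prep = null @ rng.normal(size=n_factors - n_outputs)
print(np.allclose(C @ prep, 0.0))              # True

# ...while any state with a potent component does.
move = prep + potent @ rng.normal(size=n_outputs)
print(C @ move)
```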
Affiliation(s)
- Mark M Churchland
  - Department of Neuroscience, Columbia University, New York, NY, USA
  - Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
  - Kavli Institute for Brain Science, Columbia University, New York, NY, USA
- Krishna V Shenoy
  - Department of Electrical Engineering, Stanford University, Stanford, CA, USA
  - Department of Bioengineering, Stanford University, Stanford, CA, USA
  - Department of Neurobiology, Stanford University, Stanford, CA, USA
  - Department of Neurosurgery, Stanford University, Stanford, CA, USA
  - Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
  - Bio-X Institute, Stanford University, Stanford, CA, USA
  - Howard Hughes Medical Institute at Stanford University, Stanford, CA, USA
17
Hasnain MA, Birnbaum JE, Nunez JLU, Hartman EK, Chandrasekaran C, Economo MN. Separating cognitive and motor processes in the behaving mouse. bioRxiv 2024:2023.08.23.554474. PMID: 37662199; PMCID: PMC10473744; DOI: 10.1101/2023.08.23.554474.
Abstract
The cognitive processes supporting complex animal behavior are closely associated with ubiquitous movements responsible for our posture, facial expressions, ability to actively sample our sensory environments, and other critical processes. These movements are strongly related to neural activity across much of the brain and are often highly correlated with ongoing cognitive processes, making it challenging to dissociate the neural dynamics that support cognitive processes from those supporting related movements. In such cases, a critical issue is whether cognitive processes are separable from related movements, or if they are driven by common neural mechanisms. Here, we demonstrate how the separability of cognitive and motor processes can be assessed, and, when separable, how the neural dynamics associated with each component can be isolated. We establish a novel two-context behavioral task in mice that involves multiple cognitive processes and show that commonly observed dynamics taken to support cognitive processes are strongly contaminated by movements. When cognitive and motor components are isolated using a novel approach for subspace decomposition, we find that they exhibit distinct dynamical trajectories. Further, properly accounting for movement revealed that largely separate populations of cells encode cognitive and motor variables, in contrast to the 'mixed selectivity' often reported. Accurately isolating the dynamics associated with particular cognitive and motor processes will be essential for developing conceptual and computational models of neural circuit function and evaluating the function of the cell types of which neural circuits are composed.
Affiliation(s)
- Munib A. Hasnain
  - Department of Biomedical Engineering, Boston University, Boston, MA
  - Center for Neurophotonics, Boston University, Boston, MA
- Jaclyn E. Birnbaum
  - Graduate Program for Neuroscience, Boston University, Boston, MA
  - Center for Neurophotonics, Boston University, Boston, MA
- Emma K. Hartman
  - Department of Biomedical Engineering, Boston University, Boston, MA
- Chandramouli Chandrasekaran
  - Department of Psychological and Brain Sciences, Boston University, Boston, MA
  - Department of Neurobiology & Anatomy, Boston University, Boston, MA
  - Center for Systems Neuroscience, Boston University, Boston, MA
- Michael N. Economo
  - Department of Biomedical Engineering, Boston University, Boston, MA
  - Center for Neurophotonics, Boston University, Boston, MA
  - Center for Systems Neuroscience, Boston University, Boston, MA
18
Chen S, Liu Y, Wang ZA, Colonell J, Liu LD, Hou H, Tien NW, Wang T, Harris T, Druckmann S, Li N, Svoboda K. Brain-wide neural activity underlying memory-guided movement. Cell 2024; 187:676-691.e16. PMID: 38306983; PMCID: PMC11492138; DOI: 10.1016/j.cell.2023.12.035.
Abstract
Behavior relies on activity in structured neural circuits that are distributed across the brain, but most experiments probe neurons in a single area at a time. Using multiple Neuropixels probes, we recorded from multi-regional loops connected to the anterior lateral motor cortex (ALM), a circuit node mediating memory-guided directional licking. Neurons encoding sensory stimuli, choices, and actions were distributed across the brain. However, choice coding was concentrated in the ALM and subcortical areas receiving input from the ALM in an ALM-dependent manner. Diverse orofacial movements were encoded in the hindbrain; midbrain; and, to a lesser extent, forebrain. Choice signals were first detected in the ALM and the midbrain, followed by the thalamus and other brain areas. At movement initiation, choice-selective activity collapsed across the brain, followed by new activity patterns driving specific actions. Our experiments provide the foundation for neural circuit models of decision-making and movement initiation.
Affiliation(s)
- Susu Chen
  - Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Yi Liu
  - Stanford University, Palo Alto, CA, USA
- Jennifer Colonell
  - Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Liu D Liu
  - Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
  - Baylor College of Medicine, Houston, TX, USA
- Han Hou
  - Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
  - Allen Institute for Neural Dynamics, Seattle, WA, USA
- Nai-Wen Tien
  - Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Tim Wang
  - Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
  - Allen Institute for Neural Dynamics, Seattle, WA, USA
- Timothy Harris
  - Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
  - Johns Hopkins University, Baltimore, MD, USA
- Shaul Druckmann
  - Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
  - Stanford University, Palo Alto, CA, USA
- Nuo Li
  - Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
  - Baylor College of Medicine, Houston, TX, USA
- Karel Svoboda
  - Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
  - Allen Institute for Neural Dynamics, Seattle, WA, USA
19
Stroud JP, Watanabe K, Suzuki T, Stokes MG, Lengyel M. Optimal information loading into working memory explains dynamic coding in the prefrontal cortex. Proc Natl Acad Sci U S A 2023; 120:e2307991120. PMID: 37983510; PMCID: PMC10691340; DOI: 10.1073/pnas.2307991120.
Abstract
Working memory involves the short-term maintenance of information and is critical in many tasks. The neural circuit dynamics underlying working memory remain poorly understood, with different aspects of prefrontal cortical (PFC) responses explained by different putative mechanisms. By mathematical analysis, numerical simulations, and using recordings from monkey PFC, we investigate a critical but hitherto ignored aspect of working memory dynamics: information loading. We find that, contrary to common assumptions, optimal loading of information into working memory involves inputs that are largely orthogonal, rather than similar, to the late delay activities observed during memory maintenance, naturally leading to the widely observed phenomenon of dynamic coding in PFC. Using a theoretically principled metric, we show that PFC exhibits the hallmarks of optimal information loading. We also find that optimal information loading emerges as a general dynamical strategy in task-optimized recurrent neural networks. Our theory unifies previous, seemingly conflicting theories of memory maintenance based on attractor or purely sequential dynamics and reveals a normative principle underlying dynamic coding.
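A minimal way to see how orthogonal loading can help is a two-unit non-normal linear circuit, sketched below: an input aligned with the late-delay (readout) direction simply decays, whereas an input orthogonal to it is transiently amplified into the readout direction. This toy system only illustrates the geometry; the paper's analysis concerns attractor and task-optimized recurrent networks, and the coupling strength and probe time here are arbitrary assumptions.

```python
import numpy as np
from scipy.linalg import expm

# A 2-unit non-normal linear circuit (a toy stand-in, not the paper's network):
# unit 0 feeds unit 1 strongly, and the "memory" readout looks at unit 1.
k = 8.0
A = np.array([[-1.0, 0.0],
              [k,   -1.0]])
readout = np.array([0.0, 1.0])          # direction of late-delay activity

t = 1.0                                  # probe time into the delay
P = expm(A * t)

aligned = P @ readout                    # load the memory along the readout direction
orthogonal = P @ np.array([1.0, 0.0])    # load along the direction orthogonal to it

print(readout @ aligned)                 # ~0.37: weak late-delay signal
print(readout @ orthogonal)              # ~2.9: much stronger, despite orthogonal loading
```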
Affiliation(s)
- Jake P. Stroud
  - Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, United Kingdom
- Kei Watanabe
  - Graduate School of Frontier Biosciences, Osaka University, Osaka 565-0871, Japan
- Takafumi Suzuki
  - Center for Information and Neural Networks, National Institute of Communication and Information Technology, Osaka 565-0871, Japan
- Mark G. Stokes
  - Department of Experimental Psychology, University of Oxford, Oxford OX2 6GG, United Kingdom
  - Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford OX3 9DU, United Kingdom
- Máté Lengyel
  - Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, United Kingdom
  - Center for Cognitive Computation, Department of Cognitive Science, Central European University, Budapest H-1051, Hungary
20
Nocon JC, Witter J, Gritton H, Han X, Houghton C, Sen K. A robust and compact population code for competing sounds in auditory cortex. J Neurophysiol 2023; 130:775-787. PMID: 37646080; PMCID: PMC10642980; DOI: 10.1152/jn.00148.2023.
Abstract
Cortical circuits encoding sensory information consist of populations of neurons, yet how information aggregates via pooling individual cells remains poorly understood. Such pooling may be particularly important in noisy settings where single-neuron encoding is degraded. One example is the cocktail party problem, with competing sounds from multiple spatial locations. How populations of neurons in auditory cortex code competing sounds have not been previously investigated. Here, we apply a novel information-theoretic approach to estimate information in populations of neurons in mouse auditory cortex about competing sounds from multiple spatial locations, including both summed population (SP) and labeled line (LL) codes. We find that a small subset of neurons is sufficient to nearly maximize mutual information over different spatial configurations, with the labeled line code outperforming the summed population code and approaching information levels attained in the absence of competing stimuli. Finally, information in the labeled line code increases with spatial separation between target and masker, in correspondence with behavioral results on spatial release from masking in humans and animals. Taken together, our results reveal that a compact population of neurons in auditory cortex provides a robust code for competing sounds from different spatial locations.NEW & NOTEWORTHY Little is known about how populations of neurons within cortical circuits encode sensory stimuli in the presence of competing stimuli at other spatial locations. Here, we investigate this problem in auditory cortex using a recently proposed information-theoretic approach. We find a small subset of neurons nearly maximizes information about target sounds in the presence of competing maskers, approaching information levels for isolated stimuli, and provides a noise-robust code for sounds in a complex auditory scene.
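The distinction between the summed-population and labeled-line codes can be illustrated with a plug-in mutual-information estimate on simulated spike counts: pooling responses across neurons discards the identity information that the labeled-line code retains. The Poisson tuning, binarization, and plug-in estimator below are illustrative simplifications, not the information-theoretic approach used in the paper.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(6)

n_neurons, n_trials = 4, 5000
stims = rng.integers(0, 2, n_trials)                 # two competing-sound configurations
tuning = rng.uniform(1, 6, (2, n_neurons))           # hypothetical per-neuron mean counts
counts = rng.poisson(tuning[stims])                  # trials x neurons spike counts
binary = (counts > np.median(counts, axis=0)).astype(int)  # binarize to keep alphabets small

def plug_in_mi(stim, labels):
    """Plug-in mutual information (bits) between stimulus and a discrete response label."""
    n = len(stim)
    joint = Counter(zip(stim.tolist(), labels))
    ps, pr = Counter(stim.tolist()), Counter(labels)
    return sum(c / n * np.log2((c / n) / (ps[s] / n * pr[r] / n))
               for (s, r), c in joint.items())

# Summed-population (SP) code: pool responses, discarding neuron identity.
sp = binary.sum(1).tolist()
# Labeled-line (LL) code: keep which neuron responded.
ll = [tuple(row) for row in binary.tolist()]

print("SP bits:", plug_in_mi(stims, sp))
print("LL bits:", plug_in_mi(stims, ll))
```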
Affiliation(s)
- Jian Carlo Nocon
  - Neurophotonics Center, Boston University, Boston, Massachusetts, United States
  - Center for Systems Neuroscience, Boston University, Boston, Massachusetts, United States
  - Hearing Research Center, Boston University, Boston, Massachusetts, United States
  - Department of Biomedical Engineering, Boston University, Boston, Massachusetts, United States
- Jake Witter
  - Department of Computer Science, University of Bristol, Bristol, United Kingdom
- Howard Gritton
  - Department of Comparative Biosciences, University of Illinois, Urbana, Illinois, United States
  - Department of Bioengineering, University of Illinois, Urbana, Illinois, United States
- Xue Han
  - Neurophotonics Center, Boston University, Boston, Massachusetts, United States
  - Center for Systems Neuroscience, Boston University, Boston, Massachusetts, United States
  - Hearing Research Center, Boston University, Boston, Massachusetts, United States
  - Department of Biomedical Engineering, Boston University, Boston, Massachusetts, United States
- Conor Houghton
  - Department of Computer Science, University of Bristol, Bristol, United Kingdom
- Kamal Sen
  - Neurophotonics Center, Boston University, Boston, Massachusetts, United States
  - Center for Systems Neuroscience, Boston University, Boston, Massachusetts, United States
  - Hearing Research Center, Boston University, Boston, Massachusetts, United States
  - Department of Biomedical Engineering, Boston University, Boston, Massachusetts, United States
21
Vishne G, Gerber EM, Knight RT, Deouell LY. Distinct ventral stream and prefrontal cortex representational dynamics during sustained conscious visual perception. Cell Rep 2023; 42:112752. PMID: 37422763; PMCID: PMC10530642; DOI: 10.1016/j.celrep.2023.112752.
Abstract
Instances of sustained stationary sensory input are ubiquitous. However, previous work focused almost exclusively on transient onset responses. This presents a critical challenge for neural theories of consciousness, which should account for the full temporal extent of experience. To address this question, we use intracranial recordings from ten human patients with epilepsy who viewed diverse images presented for multiple durations. We reveal that, in sensory regions, despite dramatic changes in activation magnitude, the distributed representation of categories and exemplars remains sustained and stable. In contrast, in frontoparietal regions, we find transient content representation at stimulus onset. Our results highlight the connection between the anatomical and temporal correlates of experience. To the extent that perception is sustained, it may rely on sensory representations; to the extent that perception is discrete, centered on perceptual updating, it may rely on frontoparietal representations.
Affiliation(s)
- Gal Vishne
  - Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem 9190401, Israel
- Edden M Gerber
  - Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem 9190401, Israel
- Robert T Knight
  - Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA
  - Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
- Leon Y Deouell
  - Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem 9190401, Israel
  - Department of Psychology, The Hebrew University of Jerusalem, Jerusalem 9190501, Israel
22
Domanski APF, Kucewicz MT, Russo E, Tricklebank MD, Robinson ESJ, Durstewitz D, Jones MW. Distinct hippocampal-prefrontal neural assemblies coordinate memory encoding, maintenance, and recall. Curr Biol 2023; 33:1220-1236.e4. PMID: 36898372; PMCID: PMC10728550; DOI: 10.1016/j.cub.2023.02.029.
Abstract
Short-term memory enables incorporation of recent experience into subsequent decision-making. This processing recruits both the prefrontal cortex and hippocampus, where neurons encode task cues, rules, and outcomes. However, precisely which information is carried when, and by which neurons, remains unclear. Using population decoding of activity in rat medial prefrontal cortex (mPFC) and dorsal hippocampal CA1, we confirm that mPFC populations lead in maintaining sample information across delays of an operant non-match to sample task, despite individual neurons firing only transiently. During sample encoding, distinct mPFC subpopulations joined distributed CA1-mPFC cell assemblies hallmarked by 4-5 Hz rhythmic modulation; CA1-mPFC assemblies re-emerged during choice episodes but were not 4-5 Hz modulated. Delay-dependent errors arose when attenuated rhythmic assembly activity heralded collapse of sustained mPFC encoding. Our results map component processes of memory-guided decisions onto heterogeneous CA1-mPFC subpopulations and the dynamics of physiologically distinct, distributed cell assemblies.
Affiliation(s)
- Aleksander P F Domanski
  - School of Physiology, Pharmacology & Neuroscience, Faculty of Life Sciences, University of Bristol, University Walk, Bristol BS8 1TD, UK
  - The Alan Turing Institute, British Library, 96 Euston Rd, London, UK
  - The Francis Crick Institute, 1 Midland Road, London, UK
- Michal T Kucewicz
  - School of Physiology, Pharmacology & Neuroscience, Faculty of Life Sciences, University of Bristol, University Walk, Bristol BS8 1TD, UK
  - BioTechMed Center, Brain & Mind Electrophysiology Laboratory, Multimedia Systems Department, Faculty of Electronics, Telecommunications and Informatics, Gdansk University of Technology, 80-233 Gdansk, Poland
- Eleonora Russo
  - Department of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, 68159 Mannheim, Germany
  - Department of Psychiatry and Psychotherapy, University Medical Center, Johannes Gutenberg University, 55131 Mainz, Germany
- Mark D Tricklebank
  - Centre for Neuroimaging Science, King's College London, Denmark Hill, London SE5 8AF, UK
- Emma S J Robinson
  - School of Physiology, Pharmacology & Neuroscience, Faculty of Life Sciences, University of Bristol, University Walk, Bristol BS8 1TD, UK
- Daniel Durstewitz
  - Department of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, 68159 Mannheim, Germany
- Matt W Jones
  - School of Physiology, Pharmacology & Neuroscience, Faculty of Life Sciences, University of Bristol, University Walk, Bristol BS8 1TD, UK
Collapse
|
23
|
Winding M, Pedigo BD, Barnes CL, Patsolic HG, Park Y, Kazimiers T, Fushiki A, Andrade IV, Khandelwal A, Valdes-Aleman J, Li F, Randel N, Barsotti E, Correia A, Fetter RD, Hartenstein V, Priebe CE, Vogelstein JT, Cardona A, Zlatic M. The connectome of an insect brain. Science 2023; 379:eadd9330. [PMID: 36893230 PMCID: PMC7614541 DOI: 10.1126/science.add9330] [Citation(s) in RCA: 141] [Impact Index Per Article: 70.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2022] [Accepted: 02/07/2023] [Indexed: 03/11/2023]
Abstract
Brains contain networks of interconnected neurons and so knowing the network architecture is essential for understanding brain function. We therefore mapped the synaptic-resolution connectome of an entire insect brain (Drosophila larva) with rich behavior, including learning, value computation, and action selection, comprising 3016 neurons and 548,000 synapses. We characterized neuron types, hubs, feedforward and feedback pathways, as well as cross-hemisphere and brain-nerve cord interactions. We found pervasive multisensory and interhemispheric integration, highly recurrent architecture, abundant feedback from descending neurons, and multiple novel circuit motifs. The brain's most recurrent circuits comprised the input and output neurons of the learning center. Some structural features, including multilayer shortcuts and nested recurrent loops, resembled state-of-the-art deep learning architectures. The identified brain architecture provides a basis for future experimental and theoretical studies of neural circuits.
Collapse
Affiliation(s)
- Michael Winding
- University of Cambridge, Department of Zoology, Cambridge, UK
- MRC Laboratory of Molecular Biology, Neurobiology Division, Cambridge, UK
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
| | - Benjamin D. Pedigo
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, MD, USA
| | - Christopher L. Barnes
- MRC Laboratory of Molecular Biology, Neurobiology Division, Cambridge, UK
- University of Cambridge, Department of Physiology, Development, and Neuroscience, Cambridge, UK
| | - Heather G. Patsolic
- Johns Hopkins University, Department of Applied Mathematics and Statistics, Baltimore, MD, USA
- Accenture, Arlington, VA, USA
| | - Youngser Park
- Johns Hopkins University, Center for Imaging Science, Baltimore, MD, USA
| | - Tom Kazimiers
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- kazmos GmbH, Dresden, Germany
| | - Akira Fushiki
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
| | - Ingrid V. Andrade
- University of California Los Angeles, Department of Molecular, Cell and Developmental Biology, Los Angeles, CA, USA
| | - Avinash Khandelwal
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
| | - Javier Valdes-Aleman
- University of Cambridge, Department of Zoology, Cambridge, UK
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
| | - Feng Li
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
| | - Nadine Randel
- University of Cambridge, Department of Zoology, Cambridge, UK
- MRC Laboratory of Molecular Biology, Neurobiology Division, Cambridge, UK
| | - Elizabeth Barsotti
- MRC Laboratory of Molecular Biology, Neurobiology Division, Cambridge, UK
- University of Cambridge, Department of Physiology, Development, and Neuroscience, Cambridge, UK
| | - Ana Correia
- MRC Laboratory of Molecular Biology, Neurobiology Division, Cambridge, UK
- University of Cambridge, Department of Physiology, Development, and Neuroscience, Cambridge, UK
| | - Richard D. Fetter
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Stanford University, Stanford, CA, USA
| | - Volker Hartenstein
- University of California Los Angeles, Department of Molecular, Cell and Developmental Biology, Los Angeles, CA, USA
| | - Carey E. Priebe
- Johns Hopkins University, Department of Applied Mathematics and Statistics, Baltimore, MD, USA
- Johns Hopkins University, Center for Imaging Science, Baltimore, MD, USA
| | - Joshua T. Vogelstein
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, MD, USA
- Johns Hopkins University, Center for Imaging Science, Baltimore, MD, USA
| | - Albert Cardona
- MRC Laboratory of Molecular Biology, Neurobiology Division, Cambridge, UK
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- University of Cambridge, Department of Physiology, Development, and Neuroscience, Cambridge, UK
| | - Marta Zlatic
- University of Cambridge, Department of Zoology, Cambridge, UK
- MRC Laboratory of Molecular Biology, Neurobiology Division, Cambridge, UK
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
| |
Collapse
|
24
|
Qin S, Farashahi S, Lipshutz D, Sengupta AM, Chklovskii DB, Pehlevan C. Coordinated drift of receptive fields in Hebbian/anti-Hebbian network models during noisy representation learning. Nat Neurosci 2023; 26:339-349. [PMID: 36635497 DOI: 10.1038/s41593-022-01225-z] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2021] [Accepted: 10/28/2022] [Indexed: 01/13/2023]
Abstract
Recent experiments have revealed that neural population codes in many brain areas continuously change even when animals have fully learned and stably perform their tasks. This representational 'drift' naturally leads to questions about its causes, dynamics and functions. Here we explore the hypothesis that neural representations optimize a representational objective with a degenerate solution space, and noisy synaptic updates drive the network to explore this (near-)optimal space causing representational drift. We illustrate this idea and explore its consequences in simple, biologically plausible Hebbian/anti-Hebbian network models of representation learning. We find that the drifting receptive fields of individual neurons can be characterized by a coordinated random walk, with effective diffusion constants depending on various parameters such as learning rate, noise amplitude and input statistics. Despite such drift, the representational similarity of population codes is stable over time. Our model recapitulates experimental observations in the hippocampus and posterior parietal cortex and makes testable predictions that can be probed in future experiments.
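To make the proposed mechanism concrete, here is a minimal sketch of a Hebbian/anti-Hebbian similarity-matching network with noisy synaptic updates, in the spirit of the model class described above but with invented dimensions, learning rate, and noise amplitude. Receptive fields (rows of W) wander over time, yet the subspace spanned by the population, used here as a crude proxy for representational similarity, remains nearly fixed.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, eta, noise = 10, 3, 0.01, 0.05                 # input dim, output dim, learning rate, synaptic noise

# Fixed input statistics: three strong directions, so the optimal representation
# (the principal subspace) never changes even though the weights will drift.
basis = np.linalg.qr(rng.normal(size=(d, d)))[0][:, :5]
amps = np.array([3.0, 2.5, 2.0, 0.3, 0.2])
def sample_x():
    return basis @ (amps * rng.normal(size=5))

W = rng.normal(0.0, 1.0 / np.sqrt(d), (k, d))         # feedforward (Hebbian) weights = receptive fields
M = np.eye(k)                                          # lateral (anti-Hebbian) weights

snapshots = []
for step in range(30000):
    x = sample_x()
    y = np.linalg.solve(M, W @ x)                      # steady state of the recurrent circuit
    W += eta * (np.outer(y, x) - W) + noise * np.sqrt(eta) * rng.normal(size=W.shape)
    M += eta * (np.outer(y, y) - M) + noise * np.sqrt(eta) * rng.normal(size=M.shape)
    M = (M + M.T) / 2.0                                # keep the lateral matrix symmetric
    if step % 3000 == 0:
        snapshots.append(W.copy())

# Individual receptive fields wander ...
rf_drift = [np.linalg.norm(s - snapshots[0]) for s in snapshots]
# ... but the subspace spanned by the population (a proxy for representational similarity) barely moves.
def q(Wt):
    return np.linalg.qr(Wt.T)[0]
overlap = [np.linalg.svd(q(s).T @ q(snapshots[0]), compute_uv=False).mean() for s in snapshots]
print("receptive-field drift from t=0:", np.round(rf_drift, 2))
print("subspace overlap with t=0:     ", np.round(overlap, 2))
```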
Collapse
Affiliation(s)
- Shanshan Qin
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
| | - Shiva Farashahi
- Center for Computational Neuroscience, Flatiron Institute, New York, NY, USA
| | - David Lipshutz
- Center for Computational Neuroscience, Flatiron Institute, New York, NY, USA
| | - Anirvan M Sengupta
- Center for Computational Neuroscience, Flatiron Institute, New York, NY, USA
- Department of Physics and Astronomy, Rutgers University, New Brunswick, NJ, USA
| | - Dmitri B Chklovskii
- Center for Computational Neuroscience, Flatiron Institute, New York, NY, USA
- NYU Langone Medical Center, New York, NY, USA
| | - Cengiz Pehlevan
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA.
- Center for Brain Science, Harvard University, Cambridge, MA, USA.
| |
Collapse
|
25
|
Abstract
An impactful understanding of the brain will require entirely new approaches and unprecedented collaborative efforts. The next steps will require brain researchers to develop theoretical frameworks that allow them to tease apart dependencies and causality in complex dynamical systems, as well as the ability to maintain awe while not getting lost in the effort. The outstanding question is: How do we go about it?
Collapse
|
26
|
Tuning instability of non-columnar neurons in the salt-and-pepper whisker map in somatosensory cortex. Nat Commun 2022; 13:6611. [PMID: 36329010 PMCID: PMC9633707 DOI: 10.1038/s41467-022-34261-1] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2021] [Accepted: 10/19/2022] [Indexed: 11/06/2022] Open
Abstract
Rodent sensory cortex contains salt-and-pepper maps of sensory features, whose structure is not fully known. Here we investigated the structure of the salt-and-pepper whisker somatotopic map among L2/3 pyramidal neurons in somatosensory cortex, in awake mice performing one-vs-all whisker discrimination. Neurons tuned for columnar (CW) and non-columnar (non-CW) whiskers were spatially intermixed, with co-tuned neurons forming local (20 µm) clusters. Whisker tuning was markedly unstable in expert mice, with 35-46% of pyramidal cells significantly shifting tuning over 5-18 days. Tuning instability was highly concentrated in non-CW tuned neurons, and thus was structured in the map. Instability of non-CW neurons was unchanged during chronic whisker paralysis and when mice discriminated individual whiskers, suggesting it is an inherent feature. Thus, L2/3 combines two distinct components: a stable columnar framework of CW-tuned cells that may promote spatial perceptual stability, plus an intermixed, non-columnar surround with highly unstable tuning.
Collapse
|
27
|
Sawant Y, Kundu JN, Radhakrishnan VB, Sridharan D. A Midbrain Inspired Recurrent Neural Network Model for Robust Change Detection. J Neurosci 2022; 42:8262-8283. [PMID: 36123120 PMCID: PMC9653281 DOI: 10.1523/jneurosci.0164-22.2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2022] [Revised: 07/26/2022] [Accepted: 07/30/2022] [Indexed: 11/21/2022] Open
Abstract
We present a biologically inspired recurrent neural network (RNN) that efficiently detects changes in natural images. The model features sparse, topographic connectivity (st-RNN), closely modeled on the circuit architecture of a "midbrain attention network." We deployed the st-RNN in a challenging change blindness task, in which changes must be detected in a discontinuous sequence of images. Compared with a conventional RNN, the st-RNN learned 9x faster and achieved state-of-the-art performance with 15x fewer connections. An analysis of low-dimensional dynamics revealed putative circuit mechanisms, including a critical role for a global inhibitory (GI) motif, for successful change detection. The model reproduced key experimental phenomena, including midbrain neurons' sensitivity to dynamic stimuli, neural signatures of stimulus competition, as well as hallmark behavioral effects of midbrain microstimulation. Finally, the model accurately predicted human gaze fixations in a change blindness experiment, surpassing state-of-the-art saliency-based methods. The st-RNN provides a novel deep learning model for linking neural computations underlying change detection with psychophysical mechanisms. SIGNIFICANCE STATEMENT: For adaptive survival, our brains must be able to accurately and rapidly detect changing aspects of our visual world. We present a novel deep learning model, a sparse, topographic recurrent neural network (st-RNN), that mimics the neuroanatomy of an evolutionarily conserved "midbrain attention network." The st-RNN achieved robust change detection in challenging change blindness tasks, outperforming conventional RNN architectures. The model also reproduced hallmark experimental phenomena, both neural and behavioral, reported in seminal midbrain studies. Lastly, the st-RNN outperformed state-of-the-art models at predicting human gaze fixations in a laboratory change blindness experiment. Our deep learning model may provide important clues about key mechanisms by which the brain efficiently detects changes.
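As a rough illustration of what sparse, topographic connectivity buys structurally (this is not the published st-RNN architecture or its training setup; the grid size, connection radius, and global-inhibition pooling are invented for the example), the sketch below builds a local connectivity mask on a 2D sheet, counts the savings relative to dense recurrence, and runs a few recurrent steps with a crude global inhibitory signal.

```python
import numpy as np

rng = np.random.default_rng(0)
grid, radius = 16, 2                 # units on a grid x grid sheet; local connection radius (illustrative)
n = grid * grid

rows, cols = np.divmod(np.arange(n), grid)
dist = np.abs(rows[:, None] - rows[None, :]) + np.abs(cols[:, None] - cols[None, :])
local_mask = (dist <= radius).astype(float)        # sparse, topographic recurrent connectivity

W_dense = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))
W_topo = W_dense * local_mask                       # keep only local connections

print(f"dense connections: {n * n}, topographic connections: {int(local_mask.sum())}, "
      f"savings ~{n * n / local_mask.sum():.1f}x")

def step(h, x, W, w_gi=0.5):
    """One recurrent step with a crude global-inhibition signal pooled over the sheet."""
    gi = w_gi * np.maximum(h, 0.0).mean()
    return np.tanh(W @ np.maximum(h, 0.0) + x - gi)

h = np.zeros(n)
frame = rng.normal(0.0, 1.0, n)                     # a flattened input "image"
for _ in range(10):
    h = step(h, frame, W_topo)
print("sample of activity after 10 steps:", np.round(h[:5], 3))
```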
Collapse
Affiliation(s)
- Yash Sawant
- Centre for Neuroscience, Indian Institute of Science, Bangalore 560012, India
| | - Jogendra Nath Kundu
- Department of Computational and Data Sciences, Indian Institute of Science, Bangalore 560012, India
| | | | - Devarajan Sridharan
- Centre for Neuroscience, Indian Institute of Science, Bangalore 560012, India
- Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560012, India
| |
Collapse
|
28
|
Becker S, Nold A, Tchumatchenko T. Modulation of working memory duration by synaptic and astrocytic mechanisms. PLoS Comput Biol 2022; 18:e1010543. [PMID: 36191056 PMCID: PMC9560596 DOI: 10.1371/journal.pcbi.1010543] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2021] [Revised: 10/13/2022] [Accepted: 09/05/2022] [Indexed: 12/24/2022] Open
Abstract
Short-term synaptic plasticity and modulations of the presynaptic vesicle release rate are key components of many working memory models. At the same time, an increasing number of studies suggests a potential role of astrocytes in modulating higher cognitive functions such as working memory (WM) through their influence on synaptic transmission. What influence astrocytic signaling has on the stability and duration of WM representations, however, is still unclear. Here, we introduce a slow, activity-dependent astrocytic regulation of the presynaptic release probability in a synaptic attractor model of WM. We compare and analyze simulations of a simple WM protocol in firing rate and spiking networks with and without astrocytic regulation, and underpin our observations with analyses of the phase space dynamics in the rate network. We find that the duration and stability of working memory representations are altered by astrocytic signaling and by noise. We show that astrocytic signaling modulates the mean duration of WM representations. Moreover, if the astrocytic regulation is strong, a slow presynaptic timescale introduces a 'window of vulnerability', during which WM representations are easily disrupted by noise before being stabilized. We identify two mechanisms through which noise from different sources in the network can either stabilize or destabilize WM representations. Our findings suggest that (i) astrocytic regulation can act as a crucial determinant for the duration of WM representations in synaptic attractor models of WM, and (ii) astrocytic signaling could facilitate different mechanisms for volitional top-down control of WM representations and their duration.
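A minimal sketch of the core idea, a slow activity-dependent variable that down-regulates presynaptic release in an attractor memory, is given below. The equations, parameters, and the form of the astrocytic coupling are invented for illustration and are not the authors' model; the point is only that a slow release-probability variable can set how long delay activity survives.

```python
import numpy as np

def f(x):                                  # sigmoidal rate function
    return 1.0 / (1.0 + np.exp(-(x - 3.0)))

dt, T = 1.0, 8000                          # ms; integration step and trial length
tau_r, tau_a = 10.0, 2000.0                # fast rate timescale vs slow astrocytic timescale
J, I0 = 8.0, 0.0                           # recurrent strength, background input

for k_astro in (0.0, 1.5):                 # without / with astrocytic regulation
    r, a = 0.1, 0.0
    trace = np.zeros(T)
    for t in range(T):
        stim = 4.0 if 500 <= t < 700 else 0.0        # brief cue loads the memory
        u = 1.0 / (1.0 + k_astro * a)                # effective presynaptic release probability
        r += dt / tau_r * (-r + f(J * u * r + I0 + stim))
        a += dt / tau_a * (-a + r)                   # astrocyte slowly integrates activity
        trace[t] = r
    above = np.where(trace[700:] > 0.5)[0]           # how long delay activity persists after the cue
    duration = (above[-1] + 1) * dt if above.size else 0.0
    print(f"k_astro = {k_astro}: delay activity lasts ~{duration:.0f} ms")
```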
Collapse
Affiliation(s)
- Sophia Becker
- Laboratory of Computational Neuroscience, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Theory of Neural Dynamics group, Max Planck Institute for Brain Research, Frankfurt am Main, Germany
| | - Andreas Nold
- Theory of Neural Dynamics group, Max Planck Institute for Brain Research, Frankfurt am Main, Germany
- Institute of Experimental Epileptology and Cognition Research, Life and Brain Center, Universitätsklinikum Bonn, Bonn, Germany
| | - Tatjana Tchumatchenko
- Theory of Neural Dynamics group, Max Planck Institute for Brain Research, Frankfurt am Main, Germany
- Institute of Experimental Epileptology and Cognition Research, Life and Brain Center, Universitätsklinikum Bonn, Bonn, Germany
- Institute for Physiological Chemistry, Medical Center of the Johannes Gutenberg-University Mainz, Mainz, Germany
| |
Collapse
|
29
|
Warriner CL, Fageiry S, Saxena S, Costa RM, Miri A. Motor cortical influence relies on task-specific activity covariation. Cell Rep 2022; 40:111427. [PMID: 36170841 PMCID: PMC9536049 DOI: 10.1016/j.celrep.2022.111427] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2021] [Revised: 03/01/2022] [Accepted: 09/08/2022] [Indexed: 11/18/2022] Open
Abstract
During limb movement, spinal circuits facilitate the alternating activation of antagonistic flexor and extensor muscles. Yet antagonist cocontraction is often required to stabilize joints, for example when loads are handled. Some previous results suggest that these different muscle activation patterns are mediated by separate flexion- and extension-related motor cortical output populations, while others suggest the recruitment of task-specific populations. To distinguish between these hypotheses, we developed a paradigm in which mice toggle between forelimb tasks requiring antagonist alternation or cocontraction and measured activity in motor cortical layer 5b. Our results conform to neither hypothesis: consistent flexion- and extension-related activity is not observed across tasks, and no task-specific populations are observed. Instead, activity covariation among motor cortical neurons dramatically changes between tasks, thereby altering the relation between neural and muscle activity. This is also observed specifically for corticospinal neurons. Collectively, our findings indicate that motor cortex drives different muscle activation patterns via task-specific activity covariation.
Collapse
Affiliation(s)
- Claire L Warriner
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
| | - Samaher Fageiry
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
| | - Shreya Saxena
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA; Department of Statistics, Columbia University, New York, NY 10027, USA; Grossman Center for Statistics of the Mind, Columbia University, New York, NY 10027, USA
| | - Rui M Costa
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
| | - Andrew Miri
- Department of Neurobiology, Northwestern University, Evanston, IL 60208, USA.
| |
Collapse
|
30
|
Voitov I, Mrsic-Flogel TD. Cortical feedback loops bind distributed representations of working memory. Nature 2022; 608:381-389. [PMID: 35896749 PMCID: PMC9365695 DOI: 10.1038/s41586-022-05014-3] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2021] [Accepted: 06/22/2022] [Indexed: 11/16/2022]
Abstract
Working memory—the brain's ability to internalize information and use it flexibly to guide behaviour—is an essential component of cognition. Although activity related to working memory has been observed in several brain regions, how neural populations actually represent working memory and the mechanisms by which this activity is maintained remain unclear. Here we describe the neural implementation of visual working memory in mice alternating between a delayed non-match-to-sample task and a simple discrimination task that does not require working memory but has identical stimulus, movement and reward statistics. Transient optogenetic inactivations revealed that distributed areas of the neocortex were required selectively for the maintenance of working memory. Population activity in visual area AM and premotor area M2 during the delay period was dominated by orderly low-dimensional dynamics that were, however, independent of working memory. Instead, working memory representations were embedded in high-dimensional population activity, present in both cortical areas, persisted throughout the inter-stimulus delay period, and predicted behavioural responses during the working memory task. To test whether the distributed nature of working memory was dependent on reciprocal interactions between cortical regions, we silenced one cortical area (AM or M2) while recording the feedback it received from the other. Transient inactivation of either area led to the selective disruption of inter-areal communication of working memory. Therefore, reciprocally interconnected cortical areas maintain bound high-dimensional representations of working memory. Experiments in mice alternating between a visual working memory task and a task that is independent of working memory provide insight into the neural representation of working memory and the distributed nature of its maintenance.
Collapse
Affiliation(s)
- Ivan Voitov
- Sainsbury Wellcome Centre, University College London, London, UK; Biozentrum, University of Basel, Basel, Switzerland.
| | | |
Collapse
|
31
|
Inagaki HK, Chen S, Daie K, Finkelstein A, Fontolan L, Romani S, Svoboda K. Neural Algorithms and Circuits for Motor Planning. Annu Rev Neurosci 2022; 45:249-271. [PMID: 35316610 DOI: 10.1146/annurev-neuro-092021-121730] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
The brain plans and executes volitional movements. The underlying patterns of neural population activity have been explored in the context of movements of the eyes, limbs, tongue, and head in nonhuman primates and rodents. How do networks of neurons produce the slow neural dynamics that prepare specific movements and the fast dynamics that ultimately initiate these movements? Recent work exploits rapid and calibrated perturbations of neural activity to test specific dynamical systems models that are capable of producing the observed neural activity. These joint experimental and computational studies show that cortical dynamics during motor planning reflect fixed points of neural activity (attractors). Subcortical control signals reshape and move attractors over multiple timescales, causing commitment to specific actions and rapid transitions to movement execution. Experiments in rodents are beginning to reveal how these algorithms are implemented at the level of brain-wide neural circuits.
Collapse
Affiliation(s)
| | - Susu Chen
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, USA
| | - Kayvon Daie
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, USA; Allen Institute for Neural Dynamics, Seattle, Washington, USA
| | - Arseny Finkelstein
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, USA; Department of Physiology and Pharmacology, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv-Yafo, Israel
| | - Lorenzo Fontolan
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, USA
| | - Sandro Romani
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, USA
| | - Karel Svoboda
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, USA; Allen Institute for Neural Dynamics, Seattle, Washington, USA
| |
Collapse
|
32
|
Gu J, Lim S. Unsupervised learning for robust working memory. PLoS Comput Biol 2022; 18:e1009083. [PMID: 35500033 PMCID: PMC9098088 DOI: 10.1371/journal.pcbi.1009083] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2021] [Revised: 05/12/2022] [Accepted: 03/16/2022] [Indexed: 11/18/2022] Open
Abstract
Working memory is a core component of critical cognitive functions such as planning and decision-making. Persistent activity that lasts long after the stimulus offset has been considered a neural substrate for working memory. Attractor dynamics based on network interactions can successfully reproduce such persistent activity. However, this requires fine-tuning of network connectivity, in particular to form the continuous attractors that have been suggested to encode continuous signals in working memory. Here, we investigate whether specific synaptic plasticity rules can mitigate such tuning problems in two representative working memory models, namely, rate-coded and location-coded persistent activity. We consider two prominent types of plasticity rules, differential plasticity, which corrects rapid activity changes, and homeostatic plasticity, which regularizes the long-term average of activity, both of which have been proposed to fine-tune the weights in an unsupervised manner. Consistent with the findings of previous works, differential plasticity alone was enough to recover graded persistent activity after perturbations of the connectivity. For the location-coded memory, differential plasticity could also recover persistent activity. However, its pattern can be irregular for different stimulus locations under slow learning speeds or large perturbations of the connectivity. On the other hand, homeostatic plasticity shows a robust recovery of smooth spatial patterns under particular types of synaptic perturbations, such as perturbations in incoming synapses onto the entire or local populations. However, homeostatic plasticity was not effective against perturbations in outgoing synapses from local populations. Instead, combining it with differential plasticity recovers location-coded persistent activity for a broader range of perturbations, suggesting compensation between the two plasticity rules.
Collapse
Affiliation(s)
- Jintao Gu
- Neural Science, New York University Shanghai, Shanghai, China
| | - Sukbin Lim
- Neural Science, New York University Shanghai, Shanghai, China
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China
| |
Collapse
|
33
|
Mejías JF, Wang XJ. Mechanisms of distributed working memory in a large-scale network of macaque neocortex. eLife 2022; 11:e72136. [PMID: 35200137 PMCID: PMC8871396 DOI: 10.7554/elife.72136] [Citation(s) in RCA: 51] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2021] [Accepted: 01/19/2022] [Indexed: 12/15/2022] Open
Abstract
Neural activity underlying working memory is not a local phenomenon but is distributed across multiple brain regions. To elucidate the circuit mechanism of such distributed activity, we developed an anatomically constrained computational model of large-scale macaque cortex. We found that mnemonic internal states may emerge from inter-areal reverberation, even in a regime where none of the isolated areas is capable of generating self-sustained activity. The mnemonic activity pattern along the cortical hierarchy indicates a transition in space, separating areas that are engaged in working memory from those that are not. A host of spatially distinct attractor states is found, potentially subserving various internal processes. The model yields testable predictions, including the idea of counterstream inhibitory bias, the role of prefrontal areas in controlling distributed attractors, and the resilience of distributed activity to lesions or inactivation. This work provides a theoretical framework for identifying large-scale brain mechanisms and computational principles of distributed cognitive processes.
Collapse
Affiliation(s)
- Jorge F Mejías
- Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, Netherlands
| | - Xiao-Jing Wang
- Center for Neural Science, New York University, New York, United States
| |
Collapse
|
34
|
Rule ME, O'Leary T. Self-healing codes: How stable neural populations can track continually reconfiguring neural representations. Proc Natl Acad Sci U S A 2022; 119:e2106692119. [PMID: 35145024 PMCID: PMC8851551 DOI: 10.1073/pnas.2106692119] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2021] [Accepted: 12/29/2021] [Indexed: 12/19/2022] Open
Abstract
As an adaptive system, the brain must retain a faithful representation of the world while continuously integrating new information. Recent experiments have measured population activity in cortical and hippocampal circuits over many days and found that patterns of neural activity associated with fixed behavioral variables and percepts change dramatically over time. Such "representational drift" raises the question of how malleable population codes can interact coherently with stable long-term representations that are found in other circuits and with relatively rigid topographic mappings of peripheral sensory and motor signals. We explore how known plasticity mechanisms can allow single neurons to reliably read out an evolving population code without external error feedback. We find that interactions between Hebbian learning and single-cell homeostasis can exploit redundancy in a distributed population code to compensate for gradual changes in tuning. Recurrent feedback of partially stabilized readouts could allow a pool of readout cells to further correct inconsistencies introduced by representational drift. This shows how relatively simple, known mechanisms can stabilize neural tuning in the short term and provides a plausible explanation for how plastic neural codes remain integrated with consolidated, long-term representations.
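The following sketch illustrates the general logic of a readout that tracks a drifting code without error feedback, using an Oja-style rule in which the Hebbian term is paired with a homeostatic decay. It is a toy stand-in, not the authors' model: the drift process, population size, and learning rate are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
n, T, eta, drift = 50, 20000, 0.01, 0.003     # neurons, time steps, learning rate, drift speed

e = rng.normal(size=n); e /= np.linalg.norm(e)    # current encoding direction of the latent variable
w_plastic = e.copy()                              # readout updated by Hebb + homeostatic decay (Oja)
w_frozen = e.copy()                               # readout fixed at the initial code, for comparison

latents, y_plastic, y_frozen = [], [], []
for t in range(T):
    # representational drift: the encoding direction takes a small random step each time step
    e = e + drift * rng.normal(size=n)
    e /= np.linalg.norm(e)
    s = rng.normal()                              # latent behavioural variable on this "trial"
    x = 3.0 * s * e + rng.normal(0.0, 1.0, n)     # redundant population code plus private noise
    y = w_plastic @ x
    # Hebbian term (y * x) strengthens inputs that drive the readout; the -y^2 * w term is a
    # homeostatic normalisation (Oja's rule), so no external error signal is ever used
    w_plastic += eta * (y * x - (y ** 2) * w_plastic)
    latents.append(s); y_plastic.append(y); y_frozen.append(w_frozen @ x)

half = T // 2
def late_corr(a, b):
    return abs(float(np.corrcoef(a[half:], b[half:])[0, 1]))   # |r| because Oja's sign is arbitrary

print("plastic readout vs latent (late half):", round(late_corr(latents, y_plastic), 2))
print("frozen readout  vs latent (late half):", round(late_corr(latents, y_frozen), 2))
```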
Collapse
Affiliation(s)
- Michael E Rule
- Engineering Department, University of Cambridge, Cambridge CB2 1PZ, United Kingdom
| | - Timothy O'Leary
- Engineering Department, University of Cambridge, Cambridge CB2 1PZ, United Kingdom
| |
Collapse
|
35
|
Thivierge JP, Pilzak A. Estimating null and potent modes of feedforward communication in a computational model of cortical activity. Sci Rep 2022; 12:742. [PMID: 35031628 PMCID: PMC8760251 DOI: 10.1038/s41598-021-04684-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2021] [Accepted: 12/15/2021] [Indexed: 11/08/2022] Open
Abstract
Communication across anatomical areas of the brain is key to both sensory and motor processes. Dimensionality reduction approaches have shown that the covariation of activity across cortical areas follows well-delimited patterns. Some of these patterns fall within the "potent space" of neural interactions and generate downstream responses; other patterns fall within the "null space" and prevent the feedforward propagation of synaptic inputs. Despite growing evidence for the role of null space activity in visual processing as well as preparatory motor control, a mechanistic understanding of its neural origins is lacking. Here, we developed a mean-rate model that allowed for the systematic control of feedforward propagation by potent and null modes of interaction. In this model, altering the number of null modes led to no systematic changes in firing rates, pairwise correlations, or mean synaptic strengths across areas, making it difficult to characterize feedforward communication with common measures of functional connectivity. A novel measure termed the null ratio captured the proportion of null modes relayed from one area to another. Applied to simultaneous recordings of primate cortical areas V1 and V2 during image viewing, the null ratio revealed that feedforward interactions have a broad null space that may reflect properties of visual stimuli.
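The potent/null decomposition itself is plain linear algebra, sketched below for a feedforward weight matrix via its singular value decomposition. The "null fraction" computed here is just the fraction of source-area variance falling in the null space of an assumed weight matrix, a simplification of the null-ratio estimator developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n_src, n_tgt, T = 30, 10, 500

W = rng.normal(0.0, 1.0 / np.sqrt(n_src), (n_tgt, n_src))   # feedforward weights (target x source)

# Potent space = row space of W (activity here drives the target area);
# null space = its orthogonal complement (activity here is invisible downstream).
U, S, Vt = np.linalg.svd(W, full_matrices=True)
potent = Vt[:n_tgt].T                                        # (n_src, n_tgt) orthonormal basis
null = Vt[n_tgt:].T                                          # (n_src, n_src - n_tgt)

# Source-area activity with most variance deliberately placed in the null space
X = (null @ rng.normal(0.0, 2.0, (null.shape[1], T)) +
     potent @ rng.normal(0.0, 0.5, (n_tgt, T)))              # (n_src, T)

var_potent = np.sum((potent.T @ X) ** 2)
var_null = np.sum((null.T @ X) ** 2)
print(f"fraction of source variance in the null space: {var_null / (var_null + var_potent):.2f}")
print("variance of the downstream drive W @ X:", np.var(W @ X).round(3))
```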
Collapse
Affiliation(s)
- Jean-Philippe Thivierge
- School of Psychology, University of Ottawa, Ottawa, ON, Canada.
- Brain and Mind Research Institute, University of Ottawa, Ottawa, ON, Canada.
| | - Artem Pilzak
- School of Psychology, University of Ottawa, Ottawa, ON, Canada
| |
Collapse
|
36
|
Hennig JA, Oby ER, Losey DM, Batista AP, Yu BM, Chase SM. How learning unfolds in the brain: toward an optimization view. Neuron 2021; 109:3720-3735. [PMID: 34648749 PMCID: PMC8639641 DOI: 10.1016/j.neuron.2021.09.005] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2021] [Revised: 08/25/2021] [Accepted: 09/02/2021] [Indexed: 12/17/2022]
Abstract
How do changes in the brain lead to learning? To answer this question, consider an artificial neural network (ANN), where learning proceeds by optimizing a given objective or cost function. This "optimization framework" may provide new insights into how the brain learns, as many idiosyncratic features of neural activity can be recapitulated by an ANN trained to perform the same task. Nevertheless, there are key features of how neural population activity changes throughout learning that cannot be readily explained in terms of optimization and are not typically features of ANNs. Here we detail three of these features: (1) the inflexibility of neural variability throughout learning, (2) the use of multiple learning processes even during simple tasks, and (3) the presence of large task-nonspecific activity changes. We propose that understanding the role of these features in the brain will be key to describing biological learning using an optimization framework.
Collapse
Affiliation(s)
- Jay A Hennig
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA.
| | - Emily R Oby
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
| | - Darby M Losey
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA
| | - Aaron P Batista
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
| | - Byron M Yu
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
| | - Steven M Chase
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
| |
Collapse
|
37
|
Hu X, Zeng Z. Bridging the Functional and Wiring Properties of V1 Neurons Through Sparse Coding. Neural Comput 2021; 34:104-137. [PMID: 34758484 DOI: 10.1162/neco_a_01453] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2021] [Accepted: 07/20/2021] [Indexed: 11/04/2022]
Abstract
The functional properties of neurons in the primary visual cortex (V1) are thought to be closely related to the structural properties of this network, but the specific relationships remain unclear. Previous theoretical studies have suggested that sparse coding, an energy-efficient coding method, might underlie the orientation selectivity of V1 neurons. We thus aimed to delineate how the neurons are wired to produce this feature. We constructed a model and endowed it with a simple Hebbian learning rule to encode images of natural scenes. The excitatory neurons fired sparsely in response to images and developed strong orientation selectivity. After learning, the connectivity between excitatory neuron pairs, inhibitory neuron pairs, and excitatory-inhibitory neuron pairs depended on firing pattern and receptive field similarity between the neurons. The receptive fields (RFs) of excitatory neurons and inhibitory neurons were well predicted by the RFs of presynaptic excitatory neurons and inhibitory neurons, respectively. The excitatory neurons formed a small-world network, in which certain local connection patterns were significantly overrepresented. Bidirectionally manipulating the firing rates of inhibitory neurons caused linear transformations of the firing rates of excitatory neurons, and vice versa. These wiring properties and modulatory effects were congruent with a wide variety of data measured in V1, suggesting that the sparse coding principle might underlie both the functional and wiring properties of V1 neurons.
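The study above builds an excitatory-inhibitory network with Hebbian learning; as background, the sparse coding principle it invokes can be sketched in its classic dictionary-learning form (ISTA inference plus a Hebbian-like dictionary update on the residual). The data below are synthetic patches generated from a hidden sparse dictionary rather than natural images, and all sizes, rates, and sparsity levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n_pix, n_units, n_active = 64, 100, 4      # "patch" size, overcomplete code, sparsity of hidden sources

# Synthetic data from a hidden sparse dictionary (stand-in for natural image patches)
D_true = rng.normal(size=(n_pix, n_units)); D_true /= np.linalg.norm(D_true, axis=0)
def sample_patch():
    a = np.zeros(n_units)
    a[rng.choice(n_units, n_active, replace=False)] = rng.normal(0.0, 1.0, n_active)
    return D_true @ a + 0.05 * rng.normal(size=n_pix)

def ista(x, D, lam=0.2, n_iter=50):
    """Sparse inference: minimise ||x - D a||^2 / 2 + lam * |a|_1 by proximal gradient descent."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the smooth part
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = a - D.T @ (D @ a - x) / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)   # soft threshold
    return a

D = rng.normal(size=(n_pix, n_units)); D /= np.linalg.norm(D, axis=0)
eta = 0.05
for step in range(3000):
    x = sample_patch()
    a = ista(x, D)
    D += eta * np.outer(x - D @ a, a)          # Hebbian-like dictionary update on the residual
    D /= np.maximum(np.linalg.norm(D, axis=0), 1e-8)

a = ista(sample_patch(), D)
print("active coefficients for one patch:", int((np.abs(a) > 1e-3).sum()), "of", n_units)
```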
Collapse
Affiliation(s)
- Xiaolin Hu
- Department of Computer Science and Technology, State Key Laboratory of Intelligent Technology and Systems, BNRist, Tsinghua Laboratory of Brain and Intelligence, and IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China
| | - Zhigang Zeng
- School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China, and Key Laboratory of Image Processing and Intelligent Control, Education Ministry of China, Wuhan 430074, China
| |
Collapse
|
38
|
A Stable Population Code for Attention in Prefrontal Cortex Leads a Dynamic Attention Code in Visual Cortex. J Neurosci 2021; 41:9163-9176. [PMID: 34583956 DOI: 10.1523/jneurosci.0608-21.2021] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2021] [Revised: 08/13/2021] [Accepted: 09/15/2021] [Indexed: 11/21/2022] Open
Abstract
Attention often requires maintaining a stable mental state over time while simultaneously improving perceptual sensitivity. These requirements place conflicting demands on neural populations, as sensitivity implies a robust response to perturbation by incoming stimuli, which is antithetical to stability. Functional specialization of cortical areas provides one potential mechanism to resolve this conflict. We reasoned that attention signals in executive control areas might be highly stable over time, reflecting maintenance of the cognitive state, thereby freeing up sensory areas to be more sensitive to sensory input (i.e., unstable), which would be reflected by more dynamic attention signals in those areas. To test these predictions, we simultaneously recorded neural populations in prefrontal cortex (PFC) and visual cortical area V4 in rhesus macaque monkeys performing an endogenous spatial selective attention task. Using a decoding approach, we found that the neural code for attention states in PFC was substantially more stable over time compared with the attention code in V4 on a moment-by-moment basis, in line with our guiding thesis. Moreover, attention signals in PFC predicted the future attention state of V4 better than vice versa, consistent with a top-down role for PFC in attention. These results suggest a functional specialization of attention mechanisms across cortical areas with a division of labor. PFC signals the cognitive state and maintains this state stably over time, whereas V4 responds to sensory input in a manner dynamically modulated by that cognitive state. SIGNIFICANCE STATEMENT: Attention requires maintaining a stable mental state while simultaneously improving perceptual sensitivity. We hypothesized that these two demands (stability and sensitivity) are distributed between prefrontal and visual cortical areas, respectively. Specifically, we predicted attention signals in visual cortex would be less stable than in prefrontal cortex, and furthermore prefrontal cortical signals would predict attention signals in visual cortex in line with the hypothesized role of prefrontal cortex in top-down executive control. Our results are consistent with suggestions deriving from previous work using separate recordings in the two brain areas in different animals performing different tasks and represent the first direct evidence in support of this hypothesis with simultaneous multiarea recordings within individual animals.
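A toy version of the lead-lag logic is sketched below: a latent attention state is read out from two simulated populations, one holding the state with little noise and one reflecting it after a lag with more noise, and lagged correlations between the two attention-axis projections recover the asymmetry. The lag, noise levels, and switching rate are invented; this is not the decoding analysis used in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
T, lag = 5000, 50                          # time bins; assumed top-down transmission lag (bins)

# Latent attention state (attend-left vs attend-right) switches occasionally
state = np.zeros(T, dtype=int)
s = 0
for t in range(T):
    if rng.random() < 0.01:
        s = 1 - s
    state[t] = s

n_pfc, n_v4 = 20, 20
pat_pfc = rng.normal(0.0, 1.0, (2, n_pfc))           # population patterns for the two states
pat_v4 = rng.normal(0.0, 1.0, (2, n_v4))

# PFC holds the state with little noise; V4 reflects the same state after a lag, with more noise
pfc = pat_pfc[state] + 0.5 * rng.normal(0.0, 1.0, (T, n_pfc))
v4 = pat_v4[np.roll(state, lag)] + 1.5 * rng.normal(0.0, 1.0, (T, n_v4))

# Project each area onto its own attention axis (difference of the two state patterns)
proj_pfc = pfc @ (pat_pfc[1] - pat_pfc[0])
proj_v4 = v4 @ (pat_v4[1] - pat_v4[0])

def lagged_corr(a, b, k):
    """Correlation between a(t) and b(t + k), for k >= 0."""
    return np.corrcoef(a[:T - k], b[k:])[0, 1]

print("PFC now -> V4 later :", round(lagged_corr(proj_pfc, proj_v4, lag), 2))
print("V4 now  -> PFC later:", round(lagged_corr(proj_v4, proj_pfc, lag), 2))
```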
Collapse
|
39
|
Wang XJ. 50 years of mnemonic persistent activity: quo vadis? Trends Neurosci 2021; 44:888-902. [PMID: 34654556 PMCID: PMC9087306 DOI: 10.1016/j.tins.2021.09.001] [Citation(s) in RCA: 33] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2021] [Revised: 08/27/2021] [Accepted: 09/07/2021] [Indexed: 10/20/2022]
Abstract
Half a century ago persistent spiking activity in the neocortex was discovered to be a neural substrate of working memory. Since then scientists have sought to understand this core cognitive function across biological and computational levels. Studies are reviewed here that cumulatively lend support to a synaptic theory of recurrent circuits for mnemonic persistent activity that depends on various cellular and network substrates and is mathematically described by a multiple-attractor network model. Crucially, a mnemonic attractor state of the brain is consistent with temporal variations and heterogeneity across neurons in a subspace of population activity. Persistent activity should be broadly understood as a contrast to decaying transients. Mechanisms in the absence of neural firing ('activity-silent state') are suitable for passive short-term memory but not for working memory - which is characterized by executive control for filtering out distractors, limited capacity, and internal manipulation of information.
Collapse
Affiliation(s)
- Xiao-Jing Wang
- Center for Neural Science, New York University, 4 Washington Place, New York, NY 10003, USA.
| |
Collapse
|
40
|
Trial-to-Trial Variability of Spiking Delay Activity in Prefrontal Cortex Constrains Burst-Coding Models of Working Memory. J Neurosci 2021; 41:8928-8945. [PMID: 34551937 DOI: 10.1523/jneurosci.0167-21.2021] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2021] [Revised: 08/17/2021] [Accepted: 08/29/2021] [Indexed: 11/21/2022] Open
Abstract
A hallmark neuronal correlate of working memory (WM) is stimulus-selective spiking activity of neurons in PFC during mnemonic delays. These observations have motivated an influential computational modeling framework in which WM is supported by persistent activity. Recently, this framework has been challenged by arguments that observed persistent activity may be an artifact of trial-averaging, which potentially masks high variability of delay activity at the single-trial level. In an alternative scenario, WM delay activity could be encoded in bursts of selective neuronal firing which occur intermittently across trials. However, this alternative proposal has not been tested on single-neuron spike-train data. Here, we developed a framework for addressing this issue by characterizing the trial-to-trial variability of neuronal spiking quantified by Fano factor (FF). By building a doubly stochastic Poisson spiking model, we first demonstrated that the burst-coding proposal implies a significant increase in FF positively correlated with firing rate, and thus loss of stability across trials during the delay. Simulation of spiking cortical circuit WM models further confirmed that FF is a sensitive measure that can well dissociate distinct WM mechanisms. We then tested these predictions on datasets of single-neuron recordings from macaque PFC during three WM tasks. In sharp contrast to the burst-coding model predictions, we only found a small fraction of neurons showing increased WM-dependent burstiness, and stability across trials during delay was strengthened in empirical data. Therefore, reduced trial-to-trial variability during delay provides strong constraints on the contribution of single-neuron intermittent bursting to WM maintenance. SIGNIFICANCE STATEMENT: There are diverging classes of theoretical models explaining how information is maintained in working memory by cortical circuits. In an influential model class, neurons exhibit persistent elevated memorandum-selective firing, whereas a recently developed class of burst-coding models suggests that persistent activity is an artifact of trial-averaging, and spiking is sparse in each single trial, subserved by brief intermittent bursts. However, this alternative picture has not been characterized or tested on empirical spike-train data. Here we combine mathematical analysis, computational model simulation, and experimental data analysis to test empirically these two classes of models and show that the trial-to-trial variability of empirical spike trains is not consistent with burst coding. These findings provide constraints for theoretical models of working memory.
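The Fano factor contrast at the heart of the argument can be reproduced with a few lines of simulation: spike counts from a constant-rate Poisson process versus a doubly stochastic (burst-like) Poisson process matched in mean rate. Rates, bin size, and burst probability below are illustrative choices, not the fitted values from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
n_trials, n_bins, dt = 200, 50, 0.02        # trials, 20-ms bins spanning a 1-s delay

# Persistent-activity account: a steady delay-period rate on every trial
rate = 20.0                                  # spikes/s
counts_persistent = rng.poisson(rate * dt, (n_trials, n_bins)).sum(axis=1)

# Burst-coding account: doubly stochastic Poisson; the rate jumps between 0 and a high
# burst rate, with the burst probability chosen so the trial-averaged rate matches
p_burst, burst_rate = 0.25, 80.0
bursting = rng.random((n_trials, n_bins)) < p_burst
counts_burst = rng.poisson(np.where(bursting, burst_rate, 0.0) * dt).sum(axis=1)

for name, c in [("persistent", counts_persistent), ("burst-coding", counts_burst)]:
    print(f"{name:12s} mean count = {c.mean():5.1f}   Fano factor = {c.var() / c.mean():.2f}")
```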
Collapse
|
41
|
Abstract
Working memory (WM) is the ability to maintain and manipulate information in the conscious mind over a timescale of seconds. This ability is thought to be maintained through the persistent discharges of neurons in a network of brain areas centered on the prefrontal cortex, as evidenced by neurophysiological recordings in nonhuman primates, though both the localization and the neural basis of WM has been a matter of debate in recent years. Neural correlates of WM are evident in species other than primates, including rodents and corvids. A specialized network of excitatory and inhibitory neurons, aided by neuromodulatory influences of dopamine, is critical for the maintenance of neuronal activity. Limitations in WM capacity and duration, as well as its enhancement during development, can be attributed to properties of neural activity and circuits. Changes in these factors can be observed through training-induced improvements and in pathological impairments. WM thus provides a prototypical cognitive function whose properties can be tied to the spiking activity of brain neurons. © 2021 American Physiological Society. Compr Physiol 11:1-41, 2021.
Collapse
Affiliation(s)
- Russell J Jaffe
- Department of Neurobiology & Anatomy, Wake Forest School of Medicine, Winston-Salem, North Carolina, USA
| | - Christos Constantinidis
- Department of Neurobiology & Anatomy, Wake Forest School of Medicine, Winston-Salem, North Carolina, USA
- Department of Biomedical Engineering, Vanderbilt University, Nashville, Tennessee, USA
- Neuroscience Program, Vanderbilt University, Nashville, Tennessee, USA
- Department of Ophthalmology and Visual Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, USA
| |
Collapse
|
42
|
Slow manifolds within network dynamics encode working memory efficiently and robustly. PLoS Comput Biol 2021; 17:e1009366. [PMID: 34525089 PMCID: PMC8475983 DOI: 10.1371/journal.pcbi.1009366] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Revised: 09/27/2021] [Accepted: 08/19/2021] [Indexed: 11/19/2022] Open
Abstract
Working memory is a cognitive function involving the storage and manipulation of latent information over brief intervals of time, thus making it crucial for context-dependent computation. Here, we use a top-down modeling approach to examine network-level mechanisms of working memory, an enigmatic issue and central topic of study in neuroscience. We optimize thousands of recurrent rate-based neural networks on a working memory task and then perform dynamical systems analysis on the ensuing optimized networks, wherein we find that four distinct dynamical mechanisms can emerge. In particular, we show the prevalence of a mechanism in which memories are encoded along slow stable manifolds in the network state space, leading to a phasic neuronal activation profile during memory periods. In contrast to mechanisms in which memories are directly encoded at stable attractors, these networks naturally forget stimuli over time. Despite this seeming functional disadvantage, they are more efficient in terms of how they leverage their attractor landscape and paradoxically, are considerably more robust to noise. Our results provide new hypotheses regarding how working memory function may be encoded within the dynamics of neural circuits.
Collapse
|
43
|
Xia J, Marks TD, Goard MJ, Wessel R. Stable representation of a naturalistic movie emerges from episodic activity with gain variability. Nat Commun 2021; 12:5170. [PMID: 34453045 PMCID: PMC8397750 DOI: 10.1038/s41467-021-25437-2] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2020] [Accepted: 08/11/2021] [Indexed: 01/08/2023] Open
Abstract
Visual cortical responses are known to be highly variable across trials within an experimental session. However, the long-term stability of visual cortical responses is poorly understood. Here, using chronic imaging of V1 in mice, we show that neural responses to repeated natural movie clips are unstable across weeks. Individual neuronal responses consist of sparse episodic activity that is stable in time but unstable in gain across weeks. Further, we find that the individual episode, rather than the neuron, serves as the basic unit of week-to-week fluctuation. To investigate how population activity encodes the stimulus, we extract a stable one-dimensional representation of the time in the natural movie, using an unsupervised method. Most week-to-week fluctuation is perpendicular to the stimulus encoding direction, thus leaving the stimulus representation largely unaffected. We propose that precise episodic activity with coordinated gain changes is key to maintaining a stable stimulus representation in V1.
Collapse
Affiliation(s)
- Ji Xia
- Department of Physics, Washington University in St. Louis, St. Louis, MO, USA.
| | - Tyler D Marks
- Neuroscience Research Institute, University of California, Santa Barbara, CA, USA
| | - Michael J Goard
- Neuroscience Research Institute, University of California, Santa Barbara, CA, USA
- Department of Molecular, Cellular, and Developmental Biology, University of California, Santa Barbara, CA, USA
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA
| | - Ralf Wessel
- Department of Physics, Washington University in St. Louis, St. Louis, MO, USA
| |
Collapse
|
44
|
Abstract
Significant experimental, computational, and theoretical work has identified rich structure within the coordinated activity of interconnected neural populations. An emerging challenge now is to uncover the nature of the associated computations, how they are implemented, and what role they play in driving behavior. We term this computation through neural population dynamics. If successful, this framework will reveal general motifs of neural population activity and quantitatively describe how neural population dynamics implement computations necessary for driving goal-directed behavior. Here, we start with a mathematical primer on dynamical systems theory and analytical tools necessary to apply this perspective to experimental data. Next, we highlight some recent discoveries resulting from successful application of dynamical systems. We focus on studies spanning motor control, timing, decision-making, and working memory. Finally, we briefly discuss promising recent lines of investigation and future directions for the computation through neural population dynamics framework.
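One workhorse from this toolbox is numerical fixed-point finding: minimize the speed of the dynamics, q(x) = ||dx/dt||^2 / 2, and linearize around the optimum (in the style of Sussillo and Barak's technique). The sketch below applies this to a small random rate RNN; the network and its parameters are placeholders, not a model from any of the studies reviewed.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
n = 20
W = 1.5 * rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))   # a random rate RNN: dx/dt = -x + W tanh(x)

def velocity(x):
    return -x + W @ np.tanh(x)

def q_and_grad(x):
    """Speed q(x) = ||velocity||^2 / 2 and its analytic gradient."""
    g = velocity(x)
    J = -np.eye(n) + W * (1.0 - np.tanh(x) ** 2)       # Jacobian of the velocity field
    return 0.5 * g @ g, J.T @ g

res = minimize(q_and_grad, rng.normal(0.0, 0.5, n), jac=True, method="L-BFGS-B")
x_star = res.x
print("|x*| =", round(np.linalg.norm(x_star), 3),
      "  residual speed:", round(np.linalg.norm(velocity(x_star)), 6))

# Linearising around the candidate (fixed or slow) point reveals its local stability
J_star = -np.eye(n) + W * (1.0 - np.tanh(x_star) ** 2)
print("max Re(eigenvalue) of the local linearisation:", round(np.linalg.eigvals(J_star).real.max(), 3))
```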
Collapse
Affiliation(s)
- Saurabh Vyas
- Department of Bioengineering, Stanford University, Stanford, California 94305, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA
| | - Matthew D Golub
- Department of Electrical Engineering, Stanford University, Stanford, California 94305, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA
| | - David Sussillo
- Department of Electrical Engineering, Stanford University, Stanford, California 94305, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA; Google AI, Google Inc., Mountain View, California 94305, USA
| | - Krishna V Shenoy
- Department of Bioengineering, Stanford University, Stanford, California 94305, USA; Department of Electrical Engineering, Stanford University, Stanford, California 94305, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA; Department of Neurobiology, Bio-X Institute, Neurosciences Program, and Howard Hughes Medical Institute, Stanford University, Stanford, California 94305, USA
| |
Collapse
|
45
|
Sarazin MXB, Victor J, Medernach D, Naudé J, Delord B. Online Learning and Memory of Neural Trajectory Replays for Prefrontal Persistent and Dynamic Representations in the Irregular Asynchronous State. Front Neural Circuits 2021; 15:648538. [PMID: 34305535 PMCID: PMC8298038 DOI: 10.3389/fncir.2021.648538] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2020] [Accepted: 05/31/2021] [Indexed: 11/13/2022] Open
Abstract
In the prefrontal cortex (PFC), higher-order cognitive functions and adaptive flexible behaviors rely on continuous dynamical sequences of spiking activity that constitute neural trajectories in the state space of activity. Neural trajectories subserve diverse representations, from explicit mappings in physical spaces to generalized mappings in the task space, and up to complex abstract transformations such as working memory, decision-making and behavioral planning. Computational models have separately assessed learning and replay of neural trajectories, often using unrealistic learning rules or decoupling simulations for learning from replay. Hence, it remains an open question how neural trajectories are learned, memorized and replayed online, with permanently acting biological plasticity rules. The asynchronous irregular regime characterizing cortical dynamics in awake conditions constitutes a major source of disorder that may jeopardize plasticity and replay of locally ordered activity. Here, we show that a recurrent model of local PFC circuitry endowed with realistic synaptic spike timing-dependent plasticity and scaling processes can learn, memorize and replay large-size neural trajectories online under asynchronous irregular dynamics, at regular or fast (sped-up) timescales. Presented trajectories are quickly learned (within seconds) as synaptic engrams in the network, and the model is able to chunk overlapping trajectories presented separately. These trajectory engrams last long-term (dozens of hours) and trajectory replays can be triggered over an hour. In turn, we show the conditions under which trajectory engrams and replays preserve asynchronous irregular dynamics in the network. Functionally, spiking activity during trajectory replays at the regular timescale accounts for both dynamical coding with temporal tuning in individual neurons, persistent activity at the population level, and large levels of variability consistent with observed cognitive-related PFC dynamics. Together, these results offer a consistent theoretical framework accounting for how neural trajectories can be learned, memorized and replayed in PFC network circuits to subserve flexible dynamic representations and adaptive behaviors.
Collapse
Affiliation(s)
- Matthieu X B Sarazin
- Institut des Systèmes Intelligents et de Robotique, CNRS, Inserm, Sorbonne Université, Paris, France
| | - Julie Victor
- CEA Paris-Saclay, CNRS, NeuroSpin, Saclay, France
| | - David Medernach
- Institut des Systèmes Intelligents et de Robotique, CNRS, Inserm, Sorbonne Université, Paris, France
| | - Jérémie Naudé
- Neuroscience Paris Seine - Institut de biologie Paris Seine, CNRS, Inserm, Sorbonne Université, Paris, France
| | - Bruno Delord
- Institut des Systèmes Intelligents et de Robotique, CNRS, Inserm, Sorbonne Université, Paris, France
| |
Collapse
|
46
|
Modularity and robustness of frontal cortical networks. Cell 2021; 184:3717-3730.e24. [PMID: 34214471 DOI: 10.1016/j.cell.2021.05.026] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2020] [Revised: 11/24/2020] [Accepted: 05/17/2021] [Indexed: 01/05/2023]
Abstract
Neural activity underlying short-term memory is maintained by interconnected networks of brain regions. It remains unknown how brain regions interact to maintain persistent activity while exhibiting robustness to corrupt information in parts of the network. We simultaneously measured activity in large neuronal populations across mouse frontal hemispheres to probe interactions between brain regions. Activity across hemispheres was coordinated to maintain coherent short-term memory. Across mice, we uncovered individual variability in the organization of frontal cortical networks. A modular organization was required for the robustness of persistent activity to perturbations: each hemisphere retained persistent activity during perturbations of the other hemisphere, thus preventing local perturbations from spreading. A dynamic gating mechanism allowed hemispheres to coordinate coherent information while gating out corrupt information. Our results show that robust short-term memory is mediated by redundant modular representations across brain regions. Redundant modular representations naturally emerge in neural network models that learned robust dynamics.
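As a toy illustration of the redundancy argument, the following sketch (not the paper's network model; all parameters are illustrative assumptions) couples two bistable rate units standing in for the two hemispheres: silencing one unit during the delay leaves the memory intact in the other, which then restores it once the perturbation ends.

```python
# Two weakly coupled bistable modules holding a short-term memory.
import numpy as np

w_self, w_cross, dt, tau = 1.5, 0.3, 0.01, 0.1    # illustrative parameters
r = np.zeros(2)                                   # rates of module 0 and module 1
for t in np.arange(0.0, 3.0, dt):
    inp = np.zeros(2)
    if 0.2 <= t < 0.4:
        inp[:] = 1.0                              # sample cue loads the memory
    drive = w_self * np.tanh(r) + w_cross * np.tanh(r[::-1]) + inp
    r += dt / tau * (-r + drive)
    if 1.0 <= t < 1.3:
        r[0] = 0.0                                # perturbation: silence module 0
    if abs(t - 1.29) < dt / 2:
        print("during perturbation:", np.round(r, 2))   # module 1 still 'high'
print("after recovery:     ", np.round(r, 2))            # both modules 'high' again
```

A dynamic gate of the kind described above could be added by modulating w_cross, so that input from a corrupted partner module is ignored rather than integrated.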
Collapse
|
47
|
Cellular connectomes as arbiters of local circuit models in the cerebral cortex. Nat Commun 2021; 12:2785. [PMID: 33986261 PMCID: PMC8119988 DOI: 10.1038/s41467-021-22856-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2017] [Accepted: 03/28/2021] [Indexed: 02/03/2023] Open
Abstract
With the availability of cellular-resolution connectivity maps (connectomes) from the mammalian nervous system, the question arises of how informative such massive connectomic data can be for distinguishing between local circuit models in the mammalian cerebral cortex. Here, we investigated whether cellular-resolution connectomic data can in principle allow model discrimination for local circuit modules in layer 4 of mouse primary somatosensory cortex. We used approximate Bayesian model selection based on a set of simple connectome statistics to compute the posterior probability over proposed models given a to-be-measured connectome. We find that the investigated local cortical models can be faithfully distinguished on the basis of purely structural connectomic data, with an accuracy of more than 90%, and that this distinction is stable against substantial errors in the connectome measurement. Furthermore, mapping a fraction of only 10% of the local connectome is sufficient for connectome-based model distinction under realistic experimental constraints. Together, these results show, for a concrete local circuit example, that connectomic data allow model selection in the cerebral cortex, and they define the experimental strategy for obtaining such connectomic data.
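A minimal sketch of the general approach, approximate Bayesian model selection from connectome summary statistics, is given below. It is not the paper's pipeline: the two wiring rules, the single summary statistic (connection density), and the rejection tolerance are toy assumptions chosen only to show how posterior model probabilities are obtained from simulated connectomes.

```python
# ABC-style rejection sampling for model selection between two toy wiring rules.
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 80

def simulate_connectome(p_connect):
    """Binary connectivity matrix under a random wiring rule with density p_connect."""
    return rng.random((n_neurons, n_neurons)) < p_connect

def summary(conn):
    return conn.mean()                      # connection density as the summary statistic

models = {"sparse_model": 0.05, "dense_model": 0.15}   # candidate wiring rules (toy)

observed = simulate_connectome(0.15)        # stand-in for the measured connectome
s_obs = summary(observed)

eps, n_sims = 0.01, 2000                    # rejection tolerance and simulation budget
accepted = {m: 0 for m in models}
for _ in range(n_sims):
    m = list(models)[rng.integers(len(models))]        # uniform prior over models
    s_sim = summary(simulate_connectome(models[m]))
    if abs(s_sim - s_obs) < eps:            # accept if summary matches the observation
        accepted[m] += 1
total = sum(accepted.values())
posterior = {m: accepted[m] / total for m in models}
print(posterior)                            # posterior mass concentrates on 'dense_model'
```

The paper's setting differs in using a richer set of connectome statistics and biologically motivated candidate circuit models, but the accept/reject logic over simulated connectomes is the same in spirit.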
Collapse
|
48
|
Semedo JD, Gokcen E, Machens CK, Kohn A, Yu BM. Statistical methods for dissecting interactions between brain areas. Curr Opin Neurobiol 2020; 65:59-69. [PMID: 33142111 PMCID: PMC7935404 DOI: 10.1016/j.conb.2020.09.009] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2020] [Revised: 09/23/2020] [Accepted: 09/24/2020] [Indexed: 12/12/2022]
Abstract
The brain is composed of many functionally distinct areas. This organization supports distributed processing, and requires the coordination of signals across areas. Our understanding of how populations of neurons in different areas interact with each other is still in its infancy. As the availability of recordings from large populations of neurons across multiple brain areas increases, so does the need for statistical methods that are well suited for dissecting and interrogating these recordings. Here we review multivariate statistical methods that have been, or could be, applied to this class of recordings. By leveraging population responses, these methods can provide a rich description of inter-areal interactions. At the same time, these methods can introduce interpretational challenges. We thus conclude by discussing how to interpret the outputs of these methods to further our understanding of inter-areal interactions.
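One member of this family of methods can be sketched in a few lines: reduced-rank regression between simultaneously recorded source and target populations, which asks how many dimensions of source activity are needed to predict the target. The data below are synthetic and the dimensions are assumptions; this is an illustrative sketch, not a reference implementation.

```python
# Reduced-rank regression between a "source" and a "target" population.
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_source, n_target, true_rank = 500, 30, 20, 3

# Synthetic data: the target depends on the source only through 3 dimensions
X = rng.standard_normal((n_trials, n_source))
B_true = rng.standard_normal((n_source, true_rank)) @ rng.standard_normal((true_rank, n_target))
Y = X @ B_true + 0.5 * rng.standard_normal((n_trials, n_target))

# Ordinary least-squares fit, then truncate the SVD of the fitted values
B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
U, s, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)

def rank_r_prediction(r):
    # Best rank-r approximation of the OLS fit = reduced-rank regression prediction
    return (U[:, :r] * s[:r]) @ Vt[:r]

for r in (1, 3, 10):
    resid = Y - rank_r_prediction(r)
    print(f"rank {r}: residual variance {resid.var():.3f}")
# Residual variance stops improving much beyond rank 3, suggesting a
# low-dimensional interaction between the two populations.
```

In practice the rank is chosen by cross-validation rather than by in-sample residual variance, and related methods (e.g., canonical correlation analysis) address the same question from different angles.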
Collapse
Affiliation(s)
- João D Semedo
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA.
| | - Evren Gokcen
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA.
| | - Christian K Machens
- Champalimaud Research, Champalimaud Centre for the Unknown, Lisbon, Portugal
| | - Adam Kohn
- Dominick Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA; Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, NY, USA; Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, NY, USA
| | - Byron M Yu
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
| |
Collapse
|
49
|
Li N, Mrsic-Flogel TD. Cortico-cerebellar interactions during goal-directed behavior. Curr Opin Neurobiol 2020; 65:27-37. [PMID: 32979846 PMCID: PMC7770085 DOI: 10.1016/j.conb.2020.08.010] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2020] [Revised: 08/17/2020] [Accepted: 08/21/2020] [Indexed: 12/14/2022]
Abstract
Preparatory activity is observed across multiple interconnected brain regions before goal-directed movement. Preparatory activity reflects discrete activity states representing specific future actions. It is unclear how this activity is mediated by multi-regional interactions. Recent evidence suggests that the cerebellum, classically associated with fine motor control, contributes to preparatory activity in the neocortex. We review recent advances and offer perspective on the function of cortico-cerebellar interactions during goal-directed behavior. We propose that the cerebellum learns to facilitate transitions between neocortical activity states. Transitions between activity states enable flexible and appropriately timed behavioral responses.
Collapse
Affiliation(s)
- Nuo Li
- Department of Neuroscience, Baylor College of Medicine, United States.
| | - Thomas D Mrsic-Flogel
- Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London, United Kingdom.
| |
Collapse
|
50
|
Stokes MG, Muhle-Karbe PS, Myers NE. Theoretical distinction between functional states in working memory and their corresponding neural states. Visual Cognition 2020; 28:420-432. [PMID: 33223922 PMCID: PMC7655036 DOI: 10.1080/13506285.2020.1825141] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2020] [Accepted: 09/10/2020] [Indexed: 12/15/2022]
Abstract
Working memory (WM) is important for guiding behaviour, but not always for the next possible action. Here we define a WM item that is currently relevant for guiding behaviour as the functionally "active" item; whereas items maintained in WM, but not immediately relevant to behaviour, are defined as functionally "latent". Traditional neurophysiological theories of WM proposed that content is maintained via persistent neural activity (e.g., stable attractors); however, more recent theories have highlighted the potential role for "activity-silent" mechanisms (e.g., short-term synaptic plasticity). Given these somewhat parallel dichotomies, functionally active and latent cognitive states of WM have been associated with storage based on persistent-activity and activity-silent neural mechanisms, respectively. However, in this article we caution against a one-to-one correspondence between functional and activity states. We argue that the principal theoretical requirement for active and latent WM is that the corresponding neural states play qualitatively different functional roles. We consider a number of candidate solutions, and conclude that the neurophysiological mechanisms for functionally active and latent WM items are theoretically independent of the distinction between persistent activity-based and activity-silent forms of WM storage.
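The contrast drawn here between persistent-activity and activity-silent storage can be illustrated with a toy simulation (all time constants and gains are illustrative assumptions, not a model from the article): one unit holds its item in ongoing firing via self-excitation, while a second unit holds it in a slowly decaying synaptic trace that only becomes visible in firing when a nonspecific read-out pulse arrives.

```python
# Persistent-activity versus activity-silent storage of a single WM item.
import numpy as np

dt, T = 1.0, 1000.0                        # ms
t = np.arange(0.0, T, dt)
cue = ((t >= 50) & (t < 100)).astype(float)
ping = ((t >= 800) & (t < 820)).astype(float)     # nonspecific read-out pulse

tau_r, tau_u = 20.0, 400.0                 # firing vs synaptic-trace time constants
r_persistent = np.zeros_like(t)            # attractor-like unit with self-excitation
r_silent = np.zeros_like(t)                # unit without self-excitation
u = np.zeros_like(t)                       # slowly decaying synaptic trace (arbitrary units)

for i in range(1, len(t)):
    r_persistent[i] = r_persistent[i-1] + dt / tau_r * (
        -r_persistent[i-1] + 1.2 * np.tanh(r_persistent[i-1]) + cue[i])
    u[i] = u[i-1] + dt * (-u[i-1] / tau_u + 0.5 * r_silent[i-1])
    r_silent[i] = r_silent[i-1] + dt / tau_r * (
        -r_silent[i-1] + cue[i] + u[i-1] * ping[i])

print("firing at t=700 ms :", round(r_persistent[700], 2), round(r_silent[700], 3))
print("synaptic trace u   :", round(u[700], 2))      # item is 'silent' but not gone
print("firing during ping :", round(r_silent[810], 2))
```

As the abstract cautions, neither neural implementation maps one-to-one onto the functionally active versus latent distinction; what matters is the qualitatively different functional role each neural state plays in guiding behaviour.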
Collapse
Affiliation(s)
- Mark G. Stokes
- Wellcome Centre for Integrative Neuroimaging and Department of Experimental Psychology, University of Oxford, Oxford, UK
| | - Paul S. Muhle-Karbe
- Wellcome Centre for Integrative Neuroimaging and Department of Experimental Psychology, University of Oxford, Oxford, UK
| | - Nicholas E. Myers
- Wellcome Centre for Integrative Neuroimaging and Department of Experimental Psychology, University of Oxford, Oxford, UK
| |
Collapse
|