101
Jazayeri M, Ostojic S. Interpreting neural computations by examining intrinsic and embedding dimensionality of neural activity. Curr Opin Neurobiol 2021; 70:113-120. [PMID: 34537579] [PMCID: PMC8688220] [DOI: 10.1016/j.conb.2021.08.002]
Abstract
The ongoing exponential rise in recording capacity calls for new approaches for analysing and interpreting neural data. Effective dimensionality has emerged as an important property of neural activity across populations of neurons, yet different studies rely on different definitions and interpretations of this quantity. Here, we focus on intrinsic and embedding dimensionality, and discuss how they might reveal computational principles from data. Reviewing recent works, we propose that the intrinsic dimensionality reflects information about the latent variables encoded in collective activity, while embedding dimensionality reveals the manner in which this information is processed. We conclude by highlighting the role of network models as an ideal substrate for testing specific hypotheses about the computational principles reflected in intrinsic and embedding dimensionality.
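One widely used operational measure of embedding dimensionality in this literature is the participation ratio of the principal-component eigenvalue spectrum. The sketch below is an illustration of that generic measure, not necessarily the exact definition the authors adopt; the synthetic data and noise level are assumptions:

```python
import numpy as np

def participation_ratio(X):
    """Effective (embedding) dimensionality of X (samples x neurons):
    participation ratio of the covariance eigenvalues."""
    X = X - X.mean(axis=0)
    evals = np.clip(np.linalg.eigvalsh(np.cov(X.T)), 0, None)
    return evals.sum() ** 2 / (evals ** 2).sum()

rng = np.random.default_rng(0)
# 1000 "trials" of 50-neuron activity confined to a 3D latent subspace
latents = rng.normal(size=(1000, 3))
mixing = rng.normal(size=(3, 50))
X = latents @ mixing + 0.01 * rng.normal(size=(1000, 50))
# between 2 and 3: the three latent dimensions, unevenly weighted
print(round(participation_ratio(X), 1))
```

Distinguishing this linear embedding measure from the (possibly much lower) intrinsic dimensionality of a curved manifold is exactly the distinction the review draws.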
Affiliation(s)
- Mehrdad Jazayeri
- McGovern Institute for Brain Research, Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives, INSERM U960, École Normale Supérieure - PSL Research University, 75005, Paris, France.
102
Meirhaeghe N, Sohn H, Jazayeri M. A precise and adaptive neural mechanism for predictive temporal processing in the frontal cortex. Neuron 2021; 109:2995-3011.e5. [PMID: 34534456] [PMCID: PMC9737059] [DOI: 10.1016/j.neuron.2021.08.025]
Abstract
The theory of predictive processing posits that the brain computes expectations to process information predictively. Empirical evidence in support of this theory, however, is scarce and largely limited to sensory areas. Here, we report a precise and adaptive mechanism in the frontal cortex of non-human primates consistent with predictive processing of temporal events. We found that the speed of neural dynamics is precisely adjusted according to the average time of an expected stimulus. This speed adjustment, in turn, enables neurons to encode stimuli in terms of deviations from expectation. This lawful relationship was evident across multiple experiments and held true during learning: when temporal statistics underwent covert changes, neural responses underwent predictable changes that reflected the new mean. Together, these results highlight a precise mathematical relationship between temporal statistics in the environment and neural activity in the frontal cortex that may serve as a mechanism for predictive temporal processing.
Affiliation(s)
- Nicolas Meirhaeghe
- Harvard-MIT Division of Health Sciences & Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- Hansem Sohn
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- Mehrdad Jazayeri
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA; Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
103
Nakajima M. Neuronal identity and cognitive control dynamics in the PFC. Semin Cell Dev Biol 2021; 129:14-21. [PMID: 34535385] [DOI: 10.1016/j.semcdb.2021.08.014]
Abstract
Adaptive behavior is supported by context-dependent cognitive control that enables stable and flexible sensorimotor transformations. Impairments in this type of control are often attributed to dysfunction in the prefrontal cortex (PFC). However, the underlying circuit principles of PFC function that support cognitive control have remained elusive. While the complex, diverse responses of PFC neurons to cognitive variables have been studied both from the perspective of individual cell activity and overall population dynamics, these two levels have often been investigated separately. This review discusses two specific cell groups, context/brain state responsive interneuron subtypes and output decoder neurons, that might bridge conceptual frameworks derived from these two research approaches. I highlight the unique properties and functions of these cell groups and discuss how future studies leveraging their features are likely to provide a new understanding of PFC dynamics combining single-neuron and network perspectives.
Affiliation(s)
- Miho Nakajima
- Center for Brain Science, RIKEN, Wako, Saitama 351-0198, Japan.
104
Salvi JD, Rauch SL, Baker JT. Behavior as Physiology: How Dynamical-Systems Theory Could Advance Psychiatry. Am J Psychiatry 2021; 178:791-792. [PMID: 34516231] [PMCID: PMC8442738] [DOI: 10.1176/appi.ajp.2020.20081151]
Affiliation(s)
- Joshua D. Salvi
- Harvard Medical School, Boston, Massachusetts; MGH/McLean Adult Psychiatry Residency Program, Boston, Massachusetts. Correspondence: Joshua D. Salvi, Massachusetts General Hospital, 55 Fruit Street, Boston, MA 02114.
105
Lee EK, Balasubramanian H, Tsolias A, Anakwe SU, Medalla M, Shenoy KV, Chandrasekaran C. Non-linear dimensionality reduction on extracellular waveforms reveals cell type diversity in premotor cortex. eLife 2021; 10:e67490. [PMID: 34355695] [PMCID: PMC8452311] [DOI: 10.7554/elife.67490]
Abstract
Cortical circuits are thought to contain a large number of cell types that coordinate to produce behavior. Current in vivo methods rely on clustering of specified features of extracellular waveforms to identify putative cell types, but these capture only a small amount of variation. Here, we develop a new method (WaveMAP) that combines non-linear dimensionality reduction with graph clustering to identify putative cell types. We apply WaveMAP to extracellular waveforms recorded from dorsal premotor cortex of macaque monkeys performing a decision-making task. Using WaveMAP, we robustly establish eight waveform clusters and show that these clusters recapitulate previously identified narrow- and broad-spiking types while revealing previously unknown diversity within these subtypes. The eight clusters exhibited distinct laminar distributions, characteristic firing rate patterns, and decision-related dynamics. Such insights were weaker when using feature-based approaches. WaveMAP therefore provides a more nuanced understanding of the dynamics of cell types in cortical circuits.
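WaveMAP itself combines UMAP with Louvain community detection. As a dependency-free illustration of the same recipe (a nonlinear neighborhood graph over waveform shapes, then graph clustering), here is a toy NumPy sketch; the waveform model, noise level, and mutual-k-NN/connected-components clustering are simplifications of my own, not the published method:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 48)

def waveform(width):
    """Toy extracellular spike: sharp trough, then a repolarization bump
    whose width distinguishes narrow- from broad-spiking cells."""
    return (-np.exp(-((t - 0.25) / 0.05) ** 2)
            + 0.5 * np.exp(-((t - 0.25 - width) / (2 * width)) ** 2))

# 30 narrow- and 30 broad-spiking waveforms with additive noise
X = np.array([waveform(w) + 0.02 * rng.normal(size=t.size)
              for w in [0.08] * 30 + [0.25] * 30])

# neighborhood graph: connect mutual k-nearest neighbours (k = 5)
D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
knn = np.argsort(D, axis=1)[:, 1:6]  # skip self at column 0
A = np.array([[(j in knn[i]) and (i in knn[j]) for j in range(len(X))]
              for i in range(len(X))])

# graph clustering: here simply connected components (WaveMAP uses Louvain)
labels = -np.ones(len(X), dtype=int)
for seed in range(len(X)):
    if labels[seed] < 0:
        new, stack = labels.max() + 1, [seed]
        labels[seed] = new
        while stack:
            node = stack.pop()
            for j in np.flatnonzero(A[node]):
                if labels[j] < 0:
                    labels[j] = new
                    stack.append(j)

# True: narrow- and broad-spiking waveforms never land in the same cluster
print(set(labels[:30]).isdisjoint(set(labels[30:])))
```

The point of the graph-based step is that clusters are defined by local neighborhood structure rather than by a few hand-picked features such as trough-to-peak duration.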
Affiliation(s)
- Eric Kenji Lee
- Psychological and Brain Sciences, Boston University, Boston, United States
- Hymavathy Balasubramanian
- Bernstein Center for Computational Neuroscience, Berlin, Germany
- Alexandra Tsolias
- Department of Anatomy and Neurobiology, Boston University, Boston, United States
- Maria Medalla
- Department of Anatomy and Neurobiology, Boston University, Boston, United States
- Krishna V Shenoy
- Department of Electrical Engineering, Stanford University, Stanford, United States
- Department of Bioengineering, Stanford University, Stanford, United States
- Department of Neurobiology, Stanford University, Stanford, United States
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, United States
- Bio-X Institute, Stanford University, Stanford, United States
- Howard Hughes Medical Institute, Stanford University, Stanford, United States
- Chandramouli Chandrasekaran
- Psychological and Brain Sciences, Boston University, Boston, United States
- Department of Anatomy and Neurobiology, Boston University, Boston, United States
- Center for Systems Neuroscience, Boston University, Boston, United States
- Department of Biomedical Engineering, Boston University, Boston, United States
106
Vyas S, Golub MD, Sussillo D, Shenoy KV. Computation Through Neural Population Dynamics. Annu Rev Neurosci 2020; 43:249-275.
Abstract
Significant experimental, computational, and theoretical work has identified rich structure within the coordinated activity of interconnected neural populations. An emerging challenge now is to uncover the nature of the associated computations, how they are implemented, and what role they play in driving behavior. We term this computation through neural population dynamics. If successful, this framework will reveal general motifs of neural population activity and quantitatively describe how neural population dynamics implement computations necessary for driving goal-directed behavior. Here, we start with a mathematical primer on dynamical systems theory and analytical tools necessary to apply this perspective to experimental data. Next, we highlight some recent discoveries resulting from successful application of dynamical systems. We focus on studies spanning motor control, timing, decision-making, and working memory. Finally, we briefly discuss promising recent lines of investigation and future directions for the computation through neural population dynamics framework.
Affiliation(s)
- Saurabh Vyas
- Department of Bioengineering, Stanford University, Stanford, California 94305, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA
- Matthew D Golub
- Department of Electrical Engineering, Stanford University, Stanford, California 94305, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA
- David Sussillo
- Department of Electrical Engineering, Stanford University, Stanford, California 94305, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA; Google AI, Google Inc., Mountain View, California, USA
- Krishna V Shenoy
- Department of Bioengineering, Stanford University, Stanford, California 94305, USA; Department of Electrical Engineering, Stanford University, Stanford, California 94305, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA; Department of Neurobiology, Bio-X Institute, Neurosciences Program, and Howard Hughes Medical Institute, Stanford University, Stanford, California 94305, USA
107
Jovanovic L, López-Moliner J, Mamassian P. Contrasting contributions of movement onset and duration to self-evaluation of sensorimotor timing performance. Eur J Neurosci 2021; 54:5092-5111. [PMID: 34196067] [PMCID: PMC9291449] [DOI: 10.1111/ejn.15378]
Abstract
Movement execution is not always optimal. Understanding how humans evaluate their own motor decisions can give us insights into their suboptimality. Here, we investigated how humans time the action of synchronizing an arm movement with a predictable visual event and how well they can evaluate the outcome of this action. On each trial, participants had to decide when to start (reaction time) and for how long to move (movement duration) to reach a target on time. After each trial, participants judged the confidence they had that their performance on that trial was better than average. We found that participants mostly varied their reaction time, keeping the average movement duration short and relatively constant across conditions. Interestingly, confidence judgements reflected deviations from the planned reaction time and were not related to planned movement duration. In two other experiments, we replicated these results in conditions where the contribution of sensory uncertainty was reduced. In contrast to confidence judgements, when asked to make an explicit estimation of their temporal error, participants' estimates were related in a similar manner to both reaction time and movement duration. In summary, humans control the timing of their actions primarily by adjusting the delay to initiate the action, and they estimate their confidence in their action from the difference between the planned and executed movement onset. Our results highlight the critical role of the internal model for the self-evaluation of one's motor performance.
Affiliation(s)
- Ljubica Jovanovic
- Laboratoire des systèmes perceptifs, Département d'études cognitives, École normale supérieure, PSL University, CNRS, Paris, France; School of Psychology, University of Nottingham, Nottingham, UK
- Joan López-Moliner
- Vision and Control of Action (VISCA) Group, Department of Cognition, Development and Psychology of Education, Institut de Neurociències, Universitat de Barcelona, Barcelona, Catalonia, Spain
- Pascal Mamassian
- Laboratoire des systèmes perceptifs, Département d'études cognitives, École normale supérieure, PSL University, CNRS, Paris, France
108
Damsma A, Schlichting N, van Rijn H. Temporal Context Actively Shapes EEG Signatures of Time Perception. J Neurosci 2021; 41:4514-4523. [PMID: 33833083] [PMCID: PMC8152605] [DOI: 10.1523/jneurosci.0628-20.2021]
Abstract
Our subjective perception of time is optimized to temporal regularities in the environment. This is illustrated by the central tendency effect: when estimating a range of intervals, short intervals are overestimated, whereas long intervals are underestimated, reducing the overall estimation error. Most models of interval timing ascribe this effect to the weighting of the current interval with previous memory traces after the interval has been perceived. Alternatively, the perception of the duration could already be flexibly tuned to its temporal context. We investigated this hypothesis using an interval reproduction task in which human participants (both sexes) reproduced a shorter and a longer interval range. As expected, reproductions were biased toward the subjective mean of each presented range. EEG analyses showed that temporal context indeed affected neural dynamics during the perception phase. Specifically, longer previous durations decreased contingent negative variation and P2 amplitude and increased beta power. In addition, multivariate pattern analysis showed that it is possible to decode context from the transient EEG signal quickly after both onset and offset of the perception phase. Together, these results suggest that temporal context creates dynamic expectations which actively affect the perception of duration.
SIGNIFICANCE STATEMENT: The subjective sense of duration does not arise in isolation, but is informed by previous experiences. This is demonstrated by abundant evidence showing that the production of duration estimates is biased toward previously experienced time intervals. However, it is not yet known whether this temporal context actively affects perception or only asserts its influence in later, postperceptual stages, as proposed by most current formal models of this task. Using an interval reproduction task, we show that EEG signatures flexibly adapt to the temporal context during perceptual encoding. Furthermore, interval history can be decoded from the transient EEG signal even when the current duration was identical. Thus, our results demonstrate that context actively influences perception.
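The central tendency effect described here is commonly modeled as Bayesian fusion of a noisy duration measurement with the prior over the tested range. A minimal simulation of that standard account follows; the Gaussian approximation and the specific interval range, noise level, and sample count are illustrative assumptions, not the study's fitted model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Bayesian-observer account of the central tendency effect: noisy
# measurements are combined with the prior over the tested range,
# pulling estimates toward the mean of the distribution.
intervals = np.linspace(0.4, 1.0, 7)      # presented durations (s)
sigma = 0.1                               # measurement noise (s)
prior_mean = intervals.mean()
prior_var = intervals.var()

w = prior_var / (prior_var + sigma ** 2)  # reliability weight on the measurement
for d in (intervals[0], intervals[-1]):
    m = d + sigma * rng.normal(size=10000)        # noisy measurements of d
    estimate = w * m + (1 - w) * prior_mean       # posterior mean (Gaussian prior)
    # short interval pulled up, long interval pulled down, toward the 0.70 s mean
    print(f"{d:.2f}s -> {estimate.mean():.2f}s")
```

The abstract's contribution is orthogonal to this sketch: it asks whether the prior acts during perceptual encoding rather than only at this post-perceptual fusion stage.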
Affiliation(s)
- Atser Damsma
- Department of Psychology, University of Groningen, Groningen, 9712 TS, The Netherlands
- Nadine Schlichting
- Department of Psychology, University of Groningen, Groningen, 9712 TS, The Netherlands
- Hedderik van Rijn
- Department of Psychology, University of Groningen, Groningen, 9712 TS, The Netherlands
109
Weidel P, Duarte R, Morrison A. Unsupervised Learning and Clustered Connectivity Enhance Reinforcement Learning in Spiking Neural Networks. Front Comput Neurosci 2021; 15:543872. [PMID: 33746728] [PMCID: PMC7970044] [DOI: 10.3389/fncom.2021.543872]
Abstract
Reinforcement learning is a paradigm that can account for how organisms learn to adapt their behavior in complex environments with sparse rewards. To partition an environment into discrete states, implementations in spiking neuronal networks typically rely on input architectures involving place cells or receptive fields specified ad hoc by the researcher. This is problematic as a model for how an organism can learn appropriate behavioral sequences in unknown environments, as it fails to account for the unsupervised and self-organized nature of the required representations. Additionally, this approach presupposes knowledge on the part of the researcher on how the environment should be partitioned and represented and scales poorly with the size or complexity of the environment. To address these issues and gain insights into how the brain generates its own task-relevant mappings, we propose a learning architecture that combines unsupervised learning on the input projections with biologically motivated clustered connectivity within the representation layer. This combination allows input features to be mapped to clusters; thus the network self-organizes to produce clearly distinguishable activity patterns that can serve as the basis for reinforcement learning on the output projections. On the basis of the MNIST and Mountain Car tasks, we show that our proposed model performs better than either a comparable unclustered network or a clustered network with static input projections. We conclude that the combination of unsupervised learning and clustered connectivity provides a generic representational substrate suitable for further computation.
Affiliation(s)
- Philipp Weidel
- Institute of Neuroscience and Medicine (INM-6) & Institute for Advanced Simulation (IAS-6) & JARA-Institute Brain Structure-Function Relationship (JBI-1 / INM-10), Research Centre Jülich, Jülich, Germany; Department of Computer Science 3 - Software Engineering, RWTH Aachen University, Aachen, Germany
- Renato Duarte
- Institute of Neuroscience and Medicine (INM-6) & Institute for Advanced Simulation (IAS-6) & JARA-Institute Brain Structure-Function Relationship (JBI-1 / INM-10), Research Centre Jülich, Jülich, Germany
- Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6) & Institute for Advanced Simulation (IAS-6) & JARA-Institute Brain Structure-Function Relationship (JBI-1 / INM-10), Research Centre Jülich, Jülich, Germany; Department of Computer Science 3 - Software Engineering, RWTH Aachen University, Aachen, Germany
110
Sohn H, Meirhaeghe N, Rajalingham R, Jazayeri M. A Network Perspective on Sensorimotor Learning. Trends Neurosci 2021; 44:170-181. [PMID: 33349476] [PMCID: PMC9744184] [DOI: 10.1016/j.tins.2020.11.007]
Abstract
What happens in the brain when we learn? Ever since the foundational work of Cajal, the field has made numerous discoveries as to how experience could change the structure and function of individual synapses. However, more recent advances have highlighted the need for understanding learning in terms of complex interactions between populations of neurons and synapses. How should one think about learning at such a macroscopic level? Here, we develop a conceptual framework to bridge the gap between the different scales at which learning operates, from synapses to neurons to behavior. Using this framework, we explore the principles that guide sensorimotor learning across these scales, and set the stage for future experimental and theoretical work in the field.
Affiliation(s)
- Nicolas Meirhaeghe
- Harvard-MIT Division of Health Sciences & Technology, Massachusetts Institute of Technology
- Mehrdad Jazayeri
- McGovern Institute for Brain Research, Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology
111
The Best Laid Plans: Computational Principles of Anterior Cingulate Cortex. Trends Cogn Sci 2021; 25:316-329. [PMID: 33593641] [DOI: 10.1016/j.tics.2021.01.008]
Abstract
Despite continual debate for the past 30 years about the function of anterior cingulate cortex (ACC), its key contribution to neurocognition remains unknown. However, recent computational modeling work has provided insight into this question. Here we review computational models that illustrate three core principles of ACC function, related to hierarchy, world models, and cost. We also discuss four constraints on the neural implementation of these principles, related to modularity, binding, encoding, and learning and regulation. These observations suggest a role for ACC in hierarchical model-based hierarchical reinforcement learning (HMB-HRL), which instantiates a mechanism motivating the execution of high-level plans.
112
Espinoza-Monroy M, de Lafuente V. Discrimination of Regular and Irregular Rhythms Explained by a Time Difference Accumulation Model. Neuroscience 2021; 459:16-26. [PMID: 33549694] [DOI: 10.1016/j.neuroscience.2021.01.035]
Abstract
Perceiving the temporal regularity in a sequence of repetitive sensory events facilitates the preparation and execution of relevant behaviors with tight temporal constraints. How we estimate temporal regularity from repeating patterns of sensory stimuli is not completely understood. We developed a decision-making task in which participants had to decide whether a train of visual, auditory, or tactile pulses, had a regular or an irregular temporal pattern. We tested the hypothesis that subjects categorize stimuli as irregular by accumulating the time differences between the predicted and observed times of sensory pulses defining a temporal rhythm. Results suggest that instead of waiting for a single large temporal deviation, participants accumulate timing-error signals and judge a pattern as irregular when the amount of evidence reaches a decision threshold. Model fits of bounded integration showed that this accumulation occurs with negligible leak of evidence. Consistent with previous findings, we show that participants perform better when evaluating the regularity of auditory pulses, as compared with visual or tactile stimuli. Our results suggest that temporal regularity is estimated by comparing expected and measured pulse onset times, and that each prediction error is accumulated towards a threshold to generate a behavioral choice.
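The bounded-accumulation account in this abstract can be sketched in a few lines: predict each pulse onset from the mean inter-onset interval so far, and accumulate absolute prediction errors toward a decision bound. The prediction rule, threshold, and jitter values below are my simplifications, not the paper's fitted model:

```python
import numpy as np

def judge_irregular(onsets, threshold=0.25, leak=0.0):
    """Accumulate |predicted - observed| pulse-onset differences to a bound.

    Each pulse is predicted as the last onset plus the mean inter-onset
    interval observed so far; the train is judged irregular once the
    (optionally leaky) accumulated error crosses the threshold.
    """
    evidence = 0.0
    for i in range(2, len(onsets)):
        predicted = onsets[i - 1] + np.mean(np.diff(onsets[:i]))
        evidence = (1 - leak) * evidence + abs(onsets[i] - predicted)
        if evidence > threshold:
            return True
    return False

regular = 0.5 * np.arange(10)  # isochronous pulse train (s)
jitter = np.array([0, .06, -.05, .07, -.06, .05, -.07, .06, -.05, .07])
irregular = regular + jitter   # temporally jittered train
print(judge_irregular(regular), judge_irregular(irregular))  # False True
```

Setting `leak=0` matches the paper's finding that integration proceeds with negligible leak; no single jitter above exceeds the bound, so the irregular verdict arises only by accumulation.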
Affiliation(s)
- Marisol Espinoza-Monroy
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Querétaro, QRO 76230, Mexico
- Victor de Lafuente
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Querétaro, QRO 76230, Mexico.
113
Balasubramaniam R, Haegens S, Jazayeri M, Merchant H, Sternad D, Song JH. Neural Encoding and Representation of Time for Sensorimotor Control and Learning. J Neurosci 2021; 41:866-872. [PMID: 33380468] [PMCID: PMC7880297] [DOI: 10.1523/jneurosci.1652-20.2020]
Abstract
The ability to perceive and produce movements in the real world with precise timing is critical for survival in animals, including humans. However, research on sensorimotor timing has rarely considered the tight interrelation between perception, action, and cognition. In this review, we present new evidence from behavioral, computational, and neural studies in humans and nonhuman primates, suggesting a pivotal link between sensorimotor control and temporal processing, as well as describing new theoretical frameworks regarding timing in perception and action. We first discuss the link between movement coordination and interval-based timing by addressing how motor training develops accurate spatiotemporal patterns in behavior and influences the perception of temporal intervals. We then discuss how motor expertise results from establishing task-relevant neural manifolds in sensorimotor cortical areas and how the geometry and dynamics of these manifolds help reduce timing variability. We also highlight how neural dynamics in sensorimotor areas are involved in beat-based timing. These lines of research aim to extend our understanding of how timing arises from and contributes to perceptual-motor behaviors in complex environments to seamlessly interact with other cognitive processes.
Affiliation(s)
- Hugo Merchant
- Instituto de Neurobiologia, UNAM, campus Juriquilla, Querétaro, México 76230
114
Measurement, manipulation and modeling of brain-wide neural population dynamics. Nat Commun 2021; 12:633. [PMID: 33504773] [PMCID: PMC7840924] [DOI: 10.1038/s41467-020-20371-1]
Abstract
Neural recording technologies increasingly enable simultaneous measurement of neural activity from multiple brain areas. To gain insight into distributed neural computations, a commensurate advance in experimental and analytical methods is necessary. We discuss two opportunities towards this end: the manipulation and modeling of neural population dynamics.
115
Ehrlich DB, Stone JT, Brandfonbrener D, Atanasov A, Murray JD. PsychRNN: An Accessible and Flexible Python Package for Training Recurrent Neural Network Models on Cognitive Tasks. eNeuro 2021; 8:ENEURO.0427-20.2020. [PMID: 33328247] [PMCID: PMC7814477] [DOI: 10.1523/eneuro.0427-20.2020]
Abstract
Task-trained artificial recurrent neural networks (RNNs) provide a computational modeling framework of increasing interest and application in computational, systems, and cognitive neuroscience. RNNs can be trained, using deep-learning methods, to perform cognitive tasks used in animal and human experiments and can be studied to investigate potential neural representations and circuit mechanisms underlying cognitive computations and behavior. Widespread application of these approaches within neuroscience has been limited by technical barriers in use of deep-learning software packages to train network models. Here, we introduce PsychRNN, an accessible, flexible, and extensible Python package for training RNNs on cognitive tasks. Our package is designed for accessibility, for researchers to define tasks and train RNN models using only Python and NumPy, without requiring knowledge of deep-learning software. The training backend is based on TensorFlow and is readily extensible for researchers with TensorFlow knowledge to develop projects with additional customization. PsychRNN implements a number of specialized features to support applications in systems and cognitive neuroscience. Users can impose neurobiologically relevant constraints on synaptic connectivity patterns. Furthermore, specification of cognitive tasks has a modular structure, which facilitates parametric variation of task demands to examine their impact on model solutions. PsychRNN also enables task shaping during training, or curriculum learning, in which tasks are adjusted in closed-loop based on performance. Shaping is ubiquitous in training of animals in cognitive tasks, and PsychRNN allows investigation of how shaping trajectories impact learning and model solutions. Overall, the PsychRNN framework facilitates application of trained RNNs in neuroscience research.
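For orientation only, here is a dependency-free sketch of the modular task structure the abstract refers to: a task object that generates trial inputs, target outputs, and a scoring mask. The class name, parameters, and shapes are illustrative inventions; PsychRNN's actual API differs, so consult its documentation before use:

```python
import numpy as np

class RandomDotsTask:
    """Minimal cognitive-task generator in the spirit of PsychRNN's modular
    tasks (hypothetical stand-alone sketch, not the PsychRNN API)."""

    def __init__(self, dt=10, trial_ms=500, coherence=0.2, noise=0.1, seed=0):
        self.T = trial_ms // dt
        self.coherence, self.noise = coherence, noise
        self.rng = np.random.default_rng(seed)

    def trial(self):
        """One trial: noisy evidence for a left/right choice, plus the
        desired output and a mask excluding an early grace period."""
        direction = self.rng.choice([-1.0, 1.0])
        x = direction * self.coherence + self.noise * self.rng.normal(size=(self.T, 1))
        y = np.full((self.T, 1), direction)            # target network output
        mask = np.ones((self.T, 1))
        mask[: self.T // 5] = 0                        # first 20% of trial unscored
        return x, y, mask

    def batch(self, n):
        xs, ys, ms = zip(*(self.trial() for _ in range(n)))
        return np.stack(xs), np.stack(ys), np.stack(ms)

task = RandomDotsTask()
x, y, mask = task.batch(32)
print(x.shape, y.shape)  # (32, 50, 1) (32, 50, 1)
```

Making task parameters (here `coherence`, `trial_ms`) explicit constructor arguments is what enables the parametric task variation and closed-loop shaping the package description emphasizes.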
Affiliation(s)
- Daniel B Ehrlich
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06520-8074
- Jasmine T Stone
- Department of Computer Science, Yale University, New Haven, CT 06520-8285
- David Brandfonbrener
- Department of Computer Science, Yale University, New Haven, CT 06520-8285
- Department of Computer Science, New York University, New York, NY 10012
- Alexander Atanasov
- Department of Physics, Yale University, New Haven, CT 06511-8499
- Department of Physics, Harvard University, Cambridge, MA 02138
- John D Murray
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06520-8074
- Department of Physics, Yale University, New Haven, CT 06511-8499
- Department of Psychiatry, Yale School of Medicine, New Haven, CT 06511
116
Robinson EM, Wiener M. Dissociable neural indices for time and space estimates during virtual distance reproduction. Neuroimage 2020; 226:117607. [PMID: 33290808] [DOI: 10.1016/j.neuroimage.2020.117607]
Abstract
The perception and measurement of spatial and temporal dimensions have been widely studied. Yet, whether these two dimensions are processed independently is still being debated. Additionally, whether EEG components are uniquely associated with time or space, or whether they reflect a more general measure of magnitude, remains unknown. While undergoing EEG, subjects performed a virtual distance reproduction task, in which they were required to first walk forward for an unknown distance or time, and then reproduce that distance or time. Walking speed was varied between the estimation and reproduction phases to prevent interference between distance and time in each estimate. Behaviorally, subject performance was more variable when reproducing time than when reproducing distance, but with similar patterns of accuracy. During estimation, EEG data revealed that the contingent negative variation (CNV), a measure previously associated with timing and expectation, tracked the probability of the upcoming interval for both time and distance. However, during reproduction, the CNV oriented exclusively to the upcoming temporal interval at the start of reproduction, with no change across spatial distances. Our findings indicate that time and space are neurally separable dimensions, with the CNV serving a supramodal role in temporal and spatial expectation, yet an exclusive role in preparing duration reproduction.
Collapse
Affiliation(s)
- Eva Marie Robinson
- Department of Psychology, University of Arizona, Tucson, AZ 85721, United States; Department of Psychology, George Mason University, 4400 University Drive, 3F5, Fairfax, VA 22030, United States
| | - Martin Wiener
- Department of Psychology, George Mason University, 4400 University Drive, 3F5, Fairfax, VA 22030, United States.
| |
Collapse
|
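The "probability of the upcoming interval" that the CNV tracks in the study above is commonly formalized as the hazard rate of the interval distribution. A minimal sketch of that quantity (a generic illustration of the concept, not the authors' analysis code):

```python
import numpy as np

def hazard_rate(pdf):
    """Discrete hazard: P(event at t | event has not yet occurred)."""
    pdf = np.asarray(pdf, dtype=float)
    survival = 1.0 - np.concatenate(([0.0], np.cumsum(pdf)[:-1]))  # P(T >= t)
    return pdf / survival

# Uniform prior over four possible interval endpoints: anticipation sharpens
# as time elapses without the event (hazard rises 0.25, 1/3, 0.5, 1.0).
h = hazard_rate([0.25, 0.25, 0.25, 0.25])
```

Under a uniform prior the hazard climbs toward certainty at the last possible endpoint, which is the kind of elapsed-time-dependent expectation an anticipatory signal could track.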
117
|
Márton CD, Schultz SR, Averbeck BB. Learning to select actions shapes recurrent dynamics in the corticostriatal system. Neural Netw 2020; 132:375-393. [PMID: 32992244 PMCID: PMC7685243 DOI: 10.1016/j.neunet.2020.09.008] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2019] [Revised: 09/03/2020] [Accepted: 09/11/2020] [Indexed: 01/03/2023]
Abstract
Learning to select appropriate actions based on their values is fundamental to adaptive behavior. This form of learning is supported by fronto-striatal systems. The dorsal-lateral prefrontal cortex (dlPFC) and the dorsal striatum (dSTR), which are strongly interconnected, are key nodes in this circuitry. Substantial experimental evidence, including neurophysiological recordings, has shown that neurons in these structures represent key aspects of learning. The computational mechanisms that shape the neurophysiological responses, however, are not clear. To examine this, we developed a recurrent neural network (RNN) model of the dlPFC-dSTR circuit and trained it on an oculomotor sequence learning task. We compared the activity generated by the model to activity recorded from monkey dlPFC and dSTR in the same task. This network consisted of a striatal component which encoded action values, and a prefrontal component which selected appropriate actions. After training, this system was able to autonomously represent and update action values and select actions, closely approximating the representational structure in corticostriatal recordings. We found that learning to select the correct actions drove action-sequence representations further apart in activity space, both in the model and in the neural data. The model revealed that learning proceeds by increasing the distance between sequence-specific representations. This makes it more likely that the model will select the appropriate action sequence as learning develops. Our model thus supports the hypothesis that learning in networks drives the neural representations of actions further apart, increasing the probability that the network generates correct actions as learning proceeds. Altogether, this study advances our understanding of how neural circuit dynamics are involved in neural computation, revealing how dynamics in the corticostriatal system support task learning.
Collapse
Affiliation(s)
- Christian D Márton
- Centre for Neurotechnology & Department of Bioengineering, Imperial College London, London, SW7 2AZ, UK; Laboratory of Neuropsychology, Section on Learning and Decision Making, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA.
| | - Simon R Schultz
- Centre for Neurotechnology & Department of Bioengineering, Imperial College London, London, SW7 2AZ, UK
| | - Bruno B Averbeck
- Laboratory of Neuropsychology, Section on Learning and Decision Making, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
| |
Collapse
|
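The separation of action-sequence representations reported above can be quantified with a simple distance measure between condition-averaged population trajectories. A hedged sketch on synthetic data (the array shapes and the scaling used to mimic "early" versus "late" learning are illustrative assumptions, not the paper's analysis):

```python
import numpy as np

def mean_pairwise_distance(trajs):
    """Mean Euclidean distance between condition-averaged trajectories.

    trajs: (n_conditions, n_timepoints, n_neurons) array.
    """
    t = np.asarray(trajs, dtype=float)
    n = t.shape[0]
    d = [np.linalg.norm(t[i] - t[j]) for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(d))

rng = np.random.default_rng(0)
base = rng.standard_normal((3, 50, 20))               # three action sequences
early = base + 0.1 * rng.standard_normal(base.shape)  # pre-learning states
late = 3.0 * base                                     # representations pushed apart
```

On this synthetic data the "late" trajectories sit roughly three times farther apart than the "early" ones, mimicking the reported learning effect.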
118
|
Wang J, Hosseini E, Meirhaeghe N, Akkad A, Jazayeri M. Reinforcement regulates timing variability in thalamus. eLife 2020; 9:55872. [PMID: 33258769 PMCID: PMC7707818 DOI: 10.7554/elife.55872] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2020] [Accepted: 11/06/2020] [Indexed: 01/19/2023] Open
Abstract
Learning reduces variability but variability can facilitate learning. This paradoxical relationship has made it challenging to tease apart sources of variability that degrade performance from those that improve it. We tackled this question in a context-dependent timing task requiring humans and monkeys to flexibly produce different time intervals with different effectors. We identified two opposing factors contributing to timing variability: slow memory fluctuation that degrades performance and reward-dependent exploratory behavior that improves performance. Signatures of these opposing factors were evident across populations of neurons in the dorsomedial frontal cortex (DMFC), DMFC-projecting neurons in the ventrolateral thalamus, and the putative target of DMFC in the caudate. However, only in the thalamus was the performance-optimizing regulation of variability aligned with the slow performance-degrading memory fluctuations. These findings reveal how variability caused by exploratory behavior might help to mitigate other undesirable sources of variability and highlight a potential role for thalamocortical projections in this process.
Collapse
Affiliation(s)
- Jing Wang
- Department of Bioengineering, University of Missouri, Columbia, United States; McGovern Institute for Brain Research, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
| | - Eghbal Hosseini
- McGovern Institute for Brain Research, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
| | - Nicolas Meirhaeghe
- Harvard-MIT Division of Health Sciences and Technology, Cambridge, United States
| | - Adam Akkad
- McGovern Institute for Brain Research, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
| | - Mehrdad Jazayeri
- McGovern Institute for Brain Research, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
| |
Collapse
|
119
|
Piette C, Touboul J, Venance L. Engrams of Fast Learning. Front Cell Neurosci 2020; 14:575915. [PMID: 33250712 PMCID: PMC7676431 DOI: 10.3389/fncel.2020.575915] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2020] [Accepted: 09/24/2020] [Indexed: 01/22/2023] Open
Abstract
Fast learning designates the behavioral and neuronal mechanisms underlying the acquisition of a long-term memory trace after a unique and brief experience. As such, it contrasts with incremental, slower reinforcement or procedural learning, which requires repetitive training. This learning process, found in most animal species, exists in a large spectrum of natural behaviors, such as one-shot associative, spatial, or perceptual learning, and is a core principle of human episodic memory. We review here the neuronal and synaptic long-term changes associated with fast learning in mammals and discuss some hypotheses related to their underlying mechanisms. We first describe the variety of behavioral paradigms used to test fast learning memories: these preferentially involve a single, brief exposure (from a few hundred milliseconds to a few minutes) to salient stimuli, sufficient to trigger a long-lasting memory trace and new adaptive responses. We then focus on neuronal activity patterns observed during fast learning and the emergence of long-term selective responses, before documenting the physiological correlates of fast learning. In the search for the engrams of fast learning, a growing body of evidence highlights long-term changes in gene expression as well as structural, intrinsic, and synaptic plasticity. Finally, we discuss the potential role of the sparse and bursting nature of neuronal activity observed during fast learning, especially in the induction of the plasticity mechanisms leading to the rapid establishment of long-term synaptic modifications. We conclude with more theoretical perspectives on network dynamics that could enable fast learning, with an overview of some theoretical approaches in cognitive neuroscience and artificial intelligence.
Collapse
Affiliation(s)
- Charlotte Piette
- Center for Interdisciplinary Research in Biology, Collège de France, INSERM U1050, CNRS UMR7241, Université PSL, Paris, France; Department of Mathematics and Volen National Center for Complex Systems, Brandeis University, Waltham, MA, United States
| | - Jonathan Touboul
- Department of Mathematics and Volen National Center for Complex Systems, Brandeis University, Waltham, MA, United States
| | - Laurent Venance
- Center for Interdisciplinary Research in Biology, Collège de France, INSERM U1050, CNRS UMR7241, Université PSL, Paris, France
| |
Collapse
|
120
|
Li N, Mrsic-Flogel TD. Cortico-cerebellar interactions during goal-directed behavior. Curr Opin Neurobiol 2020; 65:27-37. [PMID: 32979846 PMCID: PMC7770085 DOI: 10.1016/j.conb.2020.08.010] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2020] [Revised: 08/17/2020] [Accepted: 08/21/2020] [Indexed: 12/14/2022]
Abstract
Preparatory activity is observed across multiple interconnected brain regions before goal-directed movement. Preparatory activity reflects discrete activity states representing specific future actions. It is unclear how this activity is mediated by multi-regional interactions. Recent evidence suggests that the cerebellum, classically associated with fine motor control, contributes to preparatory activity in the neocortex. We review recent advances and offer perspective on the function of cortico-cerebellar interactions during goal-directed behavior. We propose that the cerebellum learns to facilitate transitions between neocortical activity states. Transitions between activity states enable flexible and appropriately timed behavioral responses.
Collapse
Affiliation(s)
- Nuo Li
- Department of Neuroscience, Baylor College of Medicine, United States.
| | - Thomas D Mrsic-Flogel
- Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London, United Kingdom.
| |
Collapse
|
121
|
Perich MG, Rajan K. Rethinking brain-wide interactions through multi-region 'network of networks' models. Curr Opin Neurobiol 2020; 65:146-151. [PMID: 33254073 DOI: 10.1016/j.conb.2020.11.003] [Citation(s) in RCA: 39] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2020] [Revised: 10/17/2020] [Accepted: 11/08/2020] [Indexed: 12/20/2022]
Abstract
The neural control of behavior is distributed across many functionally and anatomically distinct brain regions even in small nervous systems. While classical neuroscience models treated these regions as a set of hierarchically isolated nodes, the brain comprises a recurrently interconnected network in which each region is intimately modulated by many others. Uncovering these interactions is now possible through experimental techniques that access large neural populations from many brain regions simultaneously. Harnessing these large-scale datasets, however, requires new theoretical approaches. Here, we review recent work to understand brain-wide interactions using multi-region 'network of networks' models and discuss how they can guide future experiments. We also emphasize the importance of multi-region recordings, and posit that studying individual components in isolation will be insufficient to understand the neural basis of behavior.
Collapse
Affiliation(s)
- Matthew G Perich
- Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA.
| | - Kanaka Rajan
- Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA.
| |
Collapse
|
122
|
Saxe A, Nelli S, Summerfield C. If deep learning is the answer, what is the question? Nat Rev Neurosci 2020; 22:55-67. [PMID: 33199854 DOI: 10.1038/s41583-020-00395-8] [Citation(s) in RCA: 133] [Impact Index Per Article: 26.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/02/2020] [Indexed: 11/09/2022]
Abstract
Neuroscience research is undergoing a minor revolution. Recent advances in machine learning and artificial intelligence research have opened up new ways of thinking about neural computation. Many researchers are excited by the possibility that deep neural networks may offer theories of perception, cognition and action for biological brains. This approach has the potential to radically reshape our approach to understanding neural systems, because the computations performed by deep networks are learned from experience, and not endowed by the researcher. If so, how can neuroscientists use deep networks to model and understand biological brains? What is the outlook for neuroscientists who seek to characterize computations or neural codes, or who wish to understand perception, attention, memory and executive functions? In this Perspective, our goal is to offer a road map for systems neuroscience research in the age of deep learning. We discuss the conceptual and methodological challenges of comparing behaviour, learning dynamics and neural representations in artificial and biological systems, and we highlight new research questions that have emerged for neuroscience as a direct consequence of recent advances in machine learning.
Collapse
Affiliation(s)
- Andrew Saxe
- Department of Experimental Psychology, University of Oxford, Oxford, UK.
| | - Stephanie Nelli
- Department of Experimental Psychology, University of Oxford, Oxford, UK.
| | | |
Collapse
|
123
|
Obeid D, Zavatone-Veth JA, Pehlevan C. Statistical structure of the trial-to-trial timing variability in synfire chains. Phys Rev E 2020; 102:052406. [PMID: 33327145 DOI: 10.1103/physreve.102.052406] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2020] [Accepted: 10/16/2020] [Indexed: 11/07/2022]
Abstract
Timing and its variability are crucial for behavior. Consequently, neural circuits that take part in the control of timing and in the measurement of temporal intervals have been the subject of much research. Here we provide an analytical and computational account of the temporal variability in what is perhaps the most basic model of a timing circuit: the synfire chain. First, we study the statistical structure of trial-to-trial timing variability in a reduced but analytically tractable model: a chain of single integrate-and-fire neurons. We show that this circuit's variability is well described by a generative model consisting of local, global, and jitter components. We relate each of these components to distinct neural mechanisms in the model. Next, we establish in simulations that these results carry over to a noisy homogeneous synfire chain. Finally, motivated by the fact that a synfire chain is thought to underlie the circuit that takes part in the control and timing of the zebra finch song, we present simulations of a biologically realistic synfire chain model of the zebra finch timekeeping circuit. We find the structure of trial-to-trial timing variability to be consistent with our previous findings and to agree with experimental observations of the song's temporal variability. Our study therefore provides a possible neuronal account of behavioral variability in zebra finches.
Collapse
Affiliation(s)
- Dina Obeid
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts 02138, USA
| | | | - Cengiz Pehlevan
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts 02138, USA
- Center for Brain Science, Harvard University, Cambridge, Massachusetts 02138, USA
| |
Collapse
|
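The local/global decomposition of timing variability in the study above can be illustrated with a toy generative model of per-link propagation latencies (a deliberately reduced stand-in for the paper's integrate-and-fire simulations; all parameter values are arbitrary). A trial-wide excitability fluctuation is shared across links, per-link noise is private, and the two components can be recovered from the latency covariance:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_links = 2000, 10
sigma_g, sigma_l = 0.05, 0.02   # global (trial-wide) vs local (per-link) noise

g = rng.normal(0.0, sigma_g, (n_trials, 1))            # shared excitability
local = rng.normal(0.0, sigma_l, (n_trials, n_links))  # private link noise
latency = 0.01 * (1.0 + g + local)                     # per-link delay (s)

C = np.cov(latency, rowvar=False)
off_diag = ~np.eye(n_links, dtype=bool)
shared = C[off_diag].mean()            # off-diagonal covariance -> global part
private = np.diag(C).mean() - shared   # residual diagonal -> local part
```

The off-diagonal covariance isolates the component common to all links, because only the global fluctuation correlates latencies across links within a trial.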
124
|
Stokes MG, Muhle-Karbe PS, Myers NE. Theoretical distinction between functional states in working memory and their corresponding neural states. VISUAL COGNITION 2020; 28:420-432. [PMID: 33223922 PMCID: PMC7655036 DOI: 10.1080/13506285.2020.1825141] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2020] [Accepted: 09/10/2020] [Indexed: 12/15/2022]
Abstract
Working memory (WM) is important for guiding behaviour, but not always for the next possible action. Here we define a WM item that is currently relevant for guiding behaviour as the functionally "active" item; whereas items maintained in WM, but not immediately relevant to behaviour, are defined as functionally "latent". Traditional neurophysiological theories of WM proposed that content is maintained via persistent neural activity (e.g., stable attractors); however, more recent theories have highlighted the potential role for "activity-silent" mechanisms (e.g., short-term synaptic plasticity). Given these somewhat parallel dichotomies, functionally active and latent cognitive states of WM have been associated with storage based on persistent-activity and activity-silent neural mechanisms, respectively. However, in this article we caution against a one-to-one correspondence between functional and activity states. We argue that the principal theoretical requirement for active and latent WM is that the corresponding neural states play qualitatively different functional roles. We consider a number of candidate solutions, and conclude that the neurophysiological mechanisms for functionally active and latent WM items are theoretically independent of the distinction between persistent activity-based and activity-silent forms of WM storage.
Collapse
Affiliation(s)
- Mark G. Stokes
- Wellcome Centre for Integrative Neuroimaging and Department of Experimental Psychology, University of Oxford, Oxford, UK
| | - Paul S. Muhle-Karbe
- Wellcome Centre for Integrative Neuroimaging and Department of Experimental Psychology, University of Oxford, Oxford, UK
| | - Nicholas E. Myers
- Wellcome Centre for Integrative Neuroimaging and Department of Experimental Psychology, University of Oxford, Oxford, UK
| |
Collapse
|
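One candidate mechanism the review above discusses for functionally latent items is storage in short-term synaptic plasticity rather than in spiking. That idea can be caricatured with Hebbian "fast weights" (a cartoon under strong simplifying assumptions, not a mechanism the authors endorse as the answer):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 200
item = np.sign(rng.standard_normal(N))       # pattern held in working memory

# Encoding: a brief burst of activity imprints a Hebbian fast-weight trace.
W_fast = np.outer(item, item) / N

# Delay: firing rates fall back to baseline -- the item is activity-silent.
activity = np.zeros(N)

# Readout: probing the network routes a broadly overlapping input through
# W_fast, regenerating the latent pattern from the silent trace.
probe = item + 0.8 * rng.standard_normal(N)  # degraded retrieval cue
recalled = np.sign(W_fast @ probe)
```

During the delay no pattern-specific activity persists, yet a noisy probe recovers the item: exactly the dissociation between functional state and activity state that the article cautions should not be taken as a one-to-one correspondence.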
125
|
Cueva CJ, Saez A, Marcos E, Genovesio A, Jazayeri M, Romo R, Salzman CD, Shadlen MN, Fusi S. Low-dimensional dynamics for working memory and time encoding. Proc Natl Acad Sci U S A 2020; 117:23021-23032. [PMID: 32859756 PMCID: PMC7502752 DOI: 10.1073/pnas.1915984117] [Citation(s) in RCA: 72] [Impact Index Per Article: 14.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023] Open
Abstract
Our decisions often depend on multiple sensory experiences separated by time delays. The brain can remember these experiences and, simultaneously, estimate the timing between events. To understand the mechanisms underlying working memory and time encoding, we analyze neural activity recorded during delays in four experiments on nonhuman primates. To disambiguate potential mechanisms, we propose two analyses, namely, decoding the passage of time from neural data and computing the cumulative dimensionality of the neural trajectory over time. Time can be decoded with high precision in tasks where timing information is relevant and with lower precision when irrelevant for performing the task. Neural trajectories are always observed to be low-dimensional. In addition, our results further constrain the mechanisms underlying time encoding as we find that the linear "ramping" component of each neuron's firing rate strongly contributes to the slow timescale variations that make decoding time possible. These constraints rule out working memory models that rely on constant, sustained activity and neural networks with high-dimensional trajectories, like reservoir networks. Instead, recurrent networks trained with backpropagation capture the time-encoding properties and the dimensionality observed in the data.
Collapse
Affiliation(s)
- Christopher J Cueva
- Department of Neuroscience, Columbia University, New York, NY 10027
- Center for Theoretical Neuroscience, Columbia University, New York, NY 10027
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027
| | - Alex Saez
- Department of Neuroscience, Columbia University, New York, NY 10027
| | - Encarni Marcos
- Instituto de Neurociencias de Alicante, Consejo Superior de Investigaciones Científicas-Universidad Miguel Hernández de Elche, San Juan de Alicante 03550, Spain
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome 00185, Italy
| | - Aldo Genovesio
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome 00185, Italy
| | - Mehrdad Jazayeri
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
| | - Ranulfo Romo
- Instituto de Fisiología Celular-Neurociencias, Universidad Nacional Autónoma de México, 04510 Mexico City, Mexico
- El Colegio Nacional, 06020 Mexico City, Mexico
| | - C Daniel Salzman
- Department of Neuroscience, Columbia University, New York, NY 10027
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027
- Kavli Institute for Brain Science, Columbia University, New York, NY 10027
- Department of Psychiatry, Columbia University, New York, NY 10032
- New York State Psychiatric Institute, New York, NY 10032
| | - Michael N Shadlen
- Department of Neuroscience, Columbia University, New York, NY 10027
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027
- Kavli Institute for Brain Science, Columbia University, New York, NY 10027
- Department of Psychiatry, Columbia University, New York, NY 10032
- New York State Psychiatric Institute, New York, NY 10032
| | - Stefano Fusi
- Department of Neuroscience, Columbia University, New York, NY 10027
- Center for Theoretical Neuroscience, Columbia University, New York, NY 10027
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027
- Kavli Institute for Brain Science, Columbia University, New York, NY 10027
| |
Collapse
|
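The cumulative-dimensionality analysis proposed above can be approximated with the participation ratio of the PCA eigenvalue spectrum computed over growing time windows (a common estimator of effective dimensionality; whether it matches the authors' exact definition is an assumption):

```python
import numpy as np

def participation_ratio(X):
    """Effective dimensionality (sum lam)^2 / sum(lam^2) of PCA eigenvalues."""
    lam = np.clip(np.linalg.eigvalsh(np.cov(np.asarray(X), rowvar=False)), 0.0, None)
    return float(lam.sum() ** 2 / (lam ** 2).sum())

def cumulative_dimensionality(traj):
    """Dimensionality of the trajectory from t=0 up to each later time.

    traj: (n_timepoints, n_neurons) condition-averaged activity.
    """
    return [participation_ratio(traj[: t + 1]) for t in range(2, traj.shape[0])]

# A pure ramp stays one-dimensional no matter how long it runs, consistent
# with a strong linear "ramping" component keeping trajectories low-dimensional.
ramp = np.outer(np.linspace(0.0, 1.0, 100), np.ones(30))
dims = cumulative_dimensionality(ramp)
```

High-dimensional trajectories (e.g. from reservoir-like dynamics) would instead show cumulative dimensionality growing steadily with the window length.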
126
|
|
127
|
Tang C, Herikstad R, Parthasarathy A, Libedinsky C, Yen SC. Minimally dependent activity subspaces for working memory and motor preparation in the lateral prefrontal cortex. eLife 2020; 9:e58154. [PMID: 32902383 PMCID: PMC7481007 DOI: 10.7554/elife.58154] [Citation(s) in RCA: 29] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2020] [Accepted: 08/21/2020] [Indexed: 12/15/2022] Open
Abstract
The lateral prefrontal cortex is involved in the integration of multiple types of information, including working memory and motor preparation. However, it is not known how downstream regions can extract one type of information without interference from the others present in the network. Here, we show that the lateral prefrontal cortex of non-human primates contains two minimally dependent low-dimensional subspaces: one that encodes working memory information, and another that encodes motor preparation information. These subspaces capture all the information about the target in the delay periods, and the information in both subspaces is reduced in error trials. A single population of neurons with mixed selectivity forms both subspaces, but the information is kept largely independent from each other. A bump attractor model with divisive normalization replicates the properties of the neural data. These results provide new insights into neural processing in prefrontal regions.
Collapse
Affiliation(s)
- Cheng Tang
- Institute of Molecular and Cell Biology, A*STAR, Singapore
| | - Roger Herikstad
- The N1 Institute for Health, National University of Singapore (NUS), Singapore
| | | | - Camilo Libedinsky
- Institute of Molecular and Cell Biology, A*STAR, Singapore
- The N1 Institute for Health, National University of Singapore (NUS), Singapore
- Department of Psychology, NUS, Singapore
| | - Shih-Cheng Yen
- The N1 Institute for Health, National University of Singapore (NUS), Singapore
- Innovation and Design Programme, Faculty of Engineering, NUS, Singapore
| |
Collapse
|
128
|
Cortical-like dynamics in recurrent circuits optimized for sampling-based probabilistic inference. Nat Neurosci 2020; 23:1138-1149. [PMID: 32778794 PMCID: PMC7610392 DOI: 10.1038/s41593-020-0671-1] [Citation(s) in RCA: 66] [Impact Index Per Article: 13.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2019] [Accepted: 06/16/2020] [Indexed: 12/30/2022]
Abstract
Sensory cortices display a suite of ubiquitous dynamical features, such as ongoing noise variability, transient overshoots and oscillations, that have so far escaped a common, principled theoretical account. We developed a unifying model for these phenomena by training a recurrent excitatory-inhibitory neural circuit model of a visual cortical hypercolumn to perform sampling-based probabilistic inference. The optimized network displayed several key biological properties, including divisive normalization and stimulus-modulated noise variability, inhibition-dominated transients at stimulus onset and strong gamma oscillations. These dynamical features had distinct functional roles in speeding up inferences and made predictions that we confirmed in novel analyses of recordings from awake monkeys. Our results suggest that the basic motifs of cortical dynamics emerge as a consequence of the efficient implementation of the same computational function-fast sampling-based inference-and predict further properties of these motifs that can be tested in future experiments.
Collapse
|
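The core computational idea above, neural dynamics as drawing samples from a posterior, has a minimal linear illustration: Langevin dynamics, which a linear stochastic recurrent network can implement. This is a toy stand-in for the trained excitatory-inhibitory circuits in the paper, and the target Gaussian posterior below is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
mu = np.array([1.0, -1.0])                  # posterior mean
Sigma = np.array([[1.0, 0.6], [0.6, 1.0]])  # posterior covariance
A = np.linalg.inv(Sigma)

dt, n_steps, burn = 0.01, 200_000, 10_000
x = np.zeros(2)
samples = np.empty((n_steps, 2))
for i in range(n_steps):
    drift = -A @ (x - mu)                   # pull toward the posterior mean
    x = x + dt * drift + np.sqrt(2.0 * dt) * rng.standard_normal(2)
    samples[i] = x

est_mean = samples[burn:].mean(axis=0)
est_cov = np.cov(samples[burn:], rowvar=False)
```

The stationary distribution of these dynamics is N(mu, Sigma), so the ongoing "noise variability" of the trajectory is itself the inference: its long-run mean and covariance approximate the posterior.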
129
|
Abstract
Humans and animals can effortlessly coordinate their movements with external stimuli. This capacity indicates that sensory inputs can rapidly and flexibly reconfigure the ongoing dynamics in the neural circuits that control movements. Here, we develop a circuit-level model that coordinates movement times with expected and unexpected temporal events. The model consists of two interacting modules, a motor planning module that controls movement times and a sensory anticipation module that anticipates external events. Both modules harbor a reservoir of latent dynamics, and their interaction forms a control system whose output is adjusted adaptively to minimize timing errors. We show that the model’s output matches human behavior in a range of tasks including time interval production, periodic production, synchronization/continuation, and Bayesian time interval reproduction. These results demonstrate how recurrent interactions in a simple and modular neural circuit could create the dynamics needed to control timing behavior.

We can flexibly coordinate our movements with external stimuli, but no circuit-level model exists to explain this ability. Inspired by fundamental concepts in control theory, the authors construct a modular neural circuit that captures human behavior in a wide range of temporal coordination tasks.
Collapse
|
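The error-minimizing adjustment at the heart of the control scheme above can be sketched, in drastically reduced form, as a trial-by-trial corrective update of an internal command (the gain and noise values are illustrative; the paper's model uses two interacting reservoir modules, not this scalar rule):

```python
import numpy as np

rng = np.random.default_rng(2)
target, alpha = 0.8, 0.5   # desired interval (s), correction gain
u = 0.4                    # initial internal command (deliberately miscalibrated)
produced = []
for _ in range(200):
    t = u + rng.normal(0.0, 0.02)   # noisy produced interval
    u -= alpha * (t - target)       # shrink the observed timing error
    produced.append(t)
```

Early productions are far too short; late trials hover near the target, the signature of a closed loop that adaptively adjusts its output to minimize timing errors.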
130
|
Ben Hadj Hassen S, Ben Hamed S. Functional and behavioural correlates of shared neuronal noise variability in vision and visual cognition. CURRENT OPINION IN PHYSIOLOGY 2020. [DOI: 10.1016/j.cophys.2020.07.015] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/21/2023]
|
131
|
Pollock E, Jazayeri M. Engineering recurrent neural networks from task-relevant manifolds and dynamics. PLoS Comput Biol 2020; 16:e1008128. [PMID: 32785228 PMCID: PMC7446915 DOI: 10.1371/journal.pcbi.1008128] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2019] [Revised: 08/24/2020] [Accepted: 07/08/2020] [Indexed: 12/11/2022] Open
Abstract
Many cognitive processes involve transformations of distributed representations in neural populations, creating a need for population-level models. Recurrent neural network models fulfill this need, but there are many open questions about how their connectivity gives rise to dynamics that solve a task. Here, we present a method for finding the connectivity of networks for which the dynamics are specified to solve a task in an interpretable way. We apply our method to a working memory task by synthesizing a network that implements a drift-diffusion process over a ring-shaped manifold. We also use our method to demonstrate how inputs can be used to control network dynamics for cognitive flexibility and explore the relationship between representation geometry and network capacity. Our work fits within the broader context of understanding neural computations as dynamics over relatively low-dimensional manifolds formed by correlated patterns of neurons.
Collapse
Affiliation(s)
- Eli Pollock
- Department of Brain & Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
| | - Mehrdad Jazayeri
- Department of Brain & Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
| |
Collapse
|
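A linear toy version of the engineering approach above: specify desired states on a ring manifold together with the desired flow along it, then solve for recurrent weights by least squares. Here the specified dynamics are steady rotation, chosen so a linear fit is exact, whereas the paper embeds drift-diffusion dynamics in nonlinear networks:

```python
import numpy as np

N, K, omega = 50, 200, 1.0
phi = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)    # preferred angles
theta = np.linspace(0.0, 2.0 * np.pi, K, endpoint=False)  # samples on the ring

R = np.cos(theta[None, :] - phi[:, None])                 # desired states r(theta)
Rdot = -omega * np.sin(theta[None, :] - phi[:, None])     # desired dr/dt on the ring

# dr/dt = -r + W r  requires  W R = R + Rdot; solve by least squares.
W = (R + Rdot) @ np.linalg.pinv(R)

# Simulate: the state should travel around the ring at angular speed omega.
r, dt = R[:, 0].copy(), 1e-3
for _ in range(int((np.pi / 2.0) / dt)):                  # a quarter revolution
    r = r + dt * (-r + W @ r)
```

Because both the states and the specified flow live in the two-dimensional subspace spanned by the cosine tuning curves, the least-squares fit reproduces the target dynamics exactly on the manifold while activity off the manifold simply decays.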
132
|
Bellmund JLS, Polti I, Doeller CF. Sequence Memory in the Hippocampal-Entorhinal Region. J Cogn Neurosci 2020; 32:2056-2070. [PMID: 32530378 DOI: 10.1162/jocn_a_01592] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Episodic memories are constructed from sequences of events. When recalling such a memory, we not only recall individual events, but we also retrieve information about how the sequence of events unfolded. Here, we focus on the role of the hippocampal-entorhinal region in processing and remembering sequences of events, which are thought to be stored in relational networks. We summarize evidence that temporal relations are a central organizational principle for memories in the hippocampus. Importantly, we incorporate novel insights from recent studies about the role of the adjacent entorhinal cortex in sequence memory. In rodents, the lateral entorhinal subregion carries temporal information during ongoing behavior. The human homologue is recruited during memory recall where its representations reflect the temporal relationships between events encountered in a sequence. We further introduce the idea that the hippocampal-entorhinal region might enable temporal scaling of sequence representations. Flexible changes of sequence progression speed could underlie the traversal of episodic memories and mental simulations at different paces. In conclusion, we describe how the entorhinal cortex and hippocampus contribute to remembering event sequences-a core component of episodic memory.
Collapse
Affiliation(s)
- Jacob L S Bellmund
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Ignacio Polti
- Kavli Institute for Systems Neuroscience, Centre for Neural Computation, The Egil and Pauline Braathen and Fred Kavli Centre for Cortical Microcircuits, Norwegian University of Science and Technology, Trondheim, Norway
| | - Christian F Doeller
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Kavli Institute for Systems Neuroscience, Centre for Neural Computation, The Egil and Pauline Braathen and Fred Kavli Centre for Cortical Microcircuits, Norwegian University of Science and Technology, Trondheim, Norway
| |
Collapse
|
133
|
Russo AA, Khajeh R, Bittner SR, Perkins SM, Cunningham JP, Abbott LF, Churchland MM. Neural Trajectories in the Supplementary Motor Area and Motor Cortex Exhibit Distinct Geometries, Compatible with Different Classes of Computation. Neuron 2020; 107:745-758.e6. [PMID: 32516573 DOI: 10.1016/j.neuron.2020.05.020] [Citation(s) in RCA: 89] [Impact Index Per Article: 17.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2019] [Revised: 12/25/2019] [Accepted: 05/11/2020] [Indexed: 12/21/2022]
Abstract
The supplementary motor area (SMA) is believed to contribute to higher order aspects of motor control. We considered a key higher order role: tracking progress throughout an action. We propose that doing so requires population activity to display low "trajectory divergence": situations with different future motor outputs should be distinct, even when present motor output is identical. We examined neural activity in SMA and primary motor cortex (M1) as monkeys cycled various distances through a virtual environment. SMA exhibited multiple response features that were absent in M1. At the single-neuron level, these included ramping firing rates and cycle-specific responses. At the population level, they included a helical population-trajectory geometry with shifts in the occupied subspace as movement unfolded. These diverse features all served to reduce trajectory divergence, which was much lower in SMA versus M1. Analogous population-trajectory geometry, also with low divergence, naturally arose in networks trained to internally guide multi-cycle movement.
Collapse
Affiliation(s)
- Abigail A Russo
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
| | - Ramin Khajeh
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA
| | - Sean R Bittner
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA
| | - Sean M Perkins
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Department of Biomedical Engineering, Columbia University, New York, NY 10027, USA
| | - John P Cunningham
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA; Department of Statistics, Columbia University, New York, NY 10027, USA
| | - L F Abbott
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA; Department of Physiology and Cellular Biophysics, Columbia University Medical Center, New York, NY 10032, USA; Kavli Institute for Brain Science, Columbia University, New York, NY 10027, USA
| | - Mark M Churchland
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10027, USA; Kavli Institute for Brain Science, Columbia University, New York, NY 10027, USA.
| |
Collapse
|
134
|
Wang Y, Zhang X, Xin Q, Hung W, Florman J, Huo J, Xu T, Xie Y, Alkema MJ, Zhen M, Wen Q. Flexible motor sequence generation during stereotyped escape responses. eLife 2020; 9:e56942. [PMID: 32501216 PMCID: PMC7338056 DOI: 10.7554/elife.56942] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2020] [Accepted: 06/05/2020] [Indexed: 01/15/2023] Open
Abstract
Complex animal behaviors arise from a flexible combination of stereotyped motor primitives. Here we use the escape responses of the nematode Caenorhabditis elegans to study how a nervous system dynamically explores the action space. The initiation of the escape responses is predictable: the animal moves away from a potential threat, a mechanical or thermal stimulus. But the motor sequence and the timing that follow are variable. We report that a feedforward excitation between neurons encoding distinct motor states underlies robust motor sequence generation, while mutual inhibition between these neurons controls the flexibility of timing in a motor sequence. Electrical synapses contribute to feedforward coupling whereas glutamatergic synapses contribute to inhibition. We conclude that C. elegans generates robust and flexible motor sequences by combining an excitatory coupling and a winner-take-all operation via mutual inhibition between motor modules.
Collapse
Affiliation(s)
- Yuan Wang
- Hefei National Laboratory for Physical Sciences at the Microscale, Center for Integrative Imaging, School of Life Sciences, University of Science and Technology of China, Hefei, China
- Chinese Academy of Sciences Key Laboratory of Brain Function and Disease, Hefei, China
| | - Xiaoqian Zhang
- Hefei National Laboratory for Physical Sciences at the Microscale, Center for Integrative Imaging, School of Life Sciences, University of Science and Technology of China, Hefei, China
- Chinese Academy of Sciences Key Laboratory of Brain Function and Disease, Hefei, China
| | - Qi Xin
- Hefei National Laboratory for Physical Sciences at the Microscale, Center for Integrative Imaging, School of Life Sciences, University of Science and Technology of China, Hefei, China
- Chinese Academy of Sciences Key Laboratory of Brain Function and Disease, Hefei, China
| | - Wesley Hung
- Samuel Lunenfeld Research Institute, Mount Sinai Hospital, Toronto, Canada
- University of Toronto, Toronto, Canada
| | - Jeremy Florman
- Department of Neurobiology, University of Massachusetts Medical School, Worcester, United States
| | - Jing Huo
- Hefei National Laboratory for Physical Sciences at the Microscale, Center for Integrative Imaging, School of Life Sciences, University of Science and Technology of China, Hefei, China
- Chinese Academy of Sciences Key Laboratory of Brain Function and Disease, Hefei, China
| | - Tianqi Xu
- Hefei National Laboratory for Physical Sciences at the Microscale, Center for Integrative Imaging, School of Life Sciences, University of Science and Technology of China, Hefei, China
- Chinese Academy of Sciences Key Laboratory of Brain Function and Disease, Hefei, China
| | - Yu Xie
- Hefei National Laboratory for Physical Sciences at the Microscale, Center for Integrative Imaging, School of Life Sciences, University of Science and Technology of China, Hefei, China
| | - Mark J Alkema
- Department of Neurobiology, University of Massachusetts Medical School, Worcester, United States
| | - Mei Zhen
- Samuel Lunenfeld Research Institute, Mount Sinai Hospital, Toronto, Canada
- University of Toronto, Toronto, Canada
| | - Quan Wen
- Hefei National Laboratory for Physical Sciences at the Microscale, Center for Integrative Imaging, School of Life Sciences, University of Science and Technology of China, Hefei, China
- Chinese Academy of Sciences Key Laboratory of Brain Function and Disease, Hefei, China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
| |
Collapse
|
135
|
Liu Y, Brincat SL, Miller EK, Hasselmo ME. A Geometric Characterization of Population Coding in the Prefrontal Cortex and Hippocampus during a Paired-Associate Learning Task. J Cogn Neurosci 2020; 32:1455-1465. [PMID: 32379002 DOI: 10.1162/jocn_a_01569] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Large-scale neuronal recording techniques have enabled discoveries of population-level mechanisms for neural computation. However, it is not clear how these mechanisms form by trial-and-error learning. In this article, we present an initial effort to characterize the population activity in monkey prefrontal cortex (PFC) and hippocampus (HPC) during the learning phase of a paired-associate task. To analyze the population data, we introduce the normalized distance, a dimensionless metric that describes the encoding of cognitive variables from the geometrical relationship among neural trajectories in state space. It is found that PFC exhibits a more sustained encoding of the visual stimuli, whereas HPC only transiently encodes the identity of the associate stimuli. Surprisingly, after learning, the neural activity is not reorganized to reflect the task structure, raising the possibility that learning is accompanied by some "silent" mechanism that does not explicitly change the neural representations. We did find partial evidence on the learning-dependent changes for some of the task variables. This study shows the feasibility of using normalized distance as a metric to characterize and compare population-level encoding of task variables and suggests further directions to explore learning-dependent changes in the neural circuits.
Collapse
|
136
|
Abstract
Perceiving, maintaining, and using time intervals in working memory are crucial for animals to anticipate or act correctly at the right time in the ever-changing world. Here, we systematically study the underlying neural mechanisms by training recurrent neural networks to perform temporal tasks or complex tasks in combination with spatial information processing and decision making. We found that neural networks perceive time through state evolution along stereotypical trajectories and produce time intervals by scaling evolution speed. Temporal and nontemporal information is jointly coded in a way that facilitates decoding generalizability. We also provided potential sources for the temporal signals observed in nontiming tasks. Our study revealed the computational principles of a number of experimental phenomena and provided several predictions. To maximize future rewards in this ever-changing world, animals must be able to discover the temporal structure of stimuli and then anticipate or act correctly at the right time. How do animals perceive, maintain, and use time intervals ranging from hundreds of milliseconds to multiseconds in working memory? How is temporal information processed concurrently with spatial information and decision making? Why are there strong neuronal temporal signals in tasks in which temporal information is not required? A systematic understanding of the underlying neural mechanisms is still lacking. Here, we addressed these problems using supervised training of recurrent neural network models. We revealed that neural networks perceive elapsed time through state evolution along stereotypical trajectory, maintain time intervals in working memory in the monotonic increase or decrease of the firing rates of interval-tuned neurons, and compare or produce time intervals by scaling state evolution speed. Temporal and nontemporal information is coded in subspaces orthogonal with each other, and the state trajectories with time at different nontemporal information are quasiparallel and isomorphic. Such coding geometry facilitates the decoding generalizability of temporal and nontemporal information across each other. The network structure exhibits multiple feedforward sequences that mutually excite or inhibit depending on whether their preferences of nontemporal information are similar or not. We identified four factors that facilitate strong temporal signals in nontiming tasks, including the anticipation of coming events. Our work discloses fundamental computational principles of temporal processing, and it is supported by and gives predictions to a number of experimental phenomena.
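The core mechanism described here, producing different intervals by traversing a fixed trajectory at different speeds, can be sketched directly. The 2-D sinusoidal path and the two interval values below are arbitrary illustrations, not taken from the paper:

```python
import numpy as np

# A stereotyped trajectory through a 2-D state space, parameterized by a
# phase variable in [0, 1]; the network state follows x(t) = traj(speed * t).
def traj(phase):
    return np.stack([np.sin(2 * np.pi * phase), np.cos(2 * np.pi * phase)], axis=-1)

endpoints = []
for interval in (0.8, 1.6):            # two produced intervals (arbitrary values)
    speed = 1.0 / interval             # evolution speed scales inversely with interval
    t = np.linspace(0.0, interval, 1000)
    states = traj(speed * t)           # same path, traversed at different speeds
    endpoints.append(states[-1])

# The terminal state is reached at different times but is the same point,
# so a fixed downstream readout can trigger the response for any interval.
assert np.allclose(endpoints[0], endpoints[1])
```

Scaling speed rather than changing the path is what makes trajectories for different intervals "quasiparallel and isomorphic" in the sense of the abstract.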
Collapse
|
137
|
Bondanelli G, Ostojic S. Coding with transient trajectories in recurrent neural networks. PLoS Comput Biol 2020; 16:e1007655. [PMID: 32053594 PMCID: PMC7043794 DOI: 10.1371/journal.pcbi.1007655] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2019] [Revised: 02/26/2020] [Accepted: 01/14/2020] [Indexed: 01/04/2023] Open
Abstract
Following a stimulus, the neural response typically strongly varies in time and across neurons before settling to a steady-state. While classical population coding theory disregards the temporal dimension, recent works have argued that trajectories of transient activity can be particularly informative about stimulus identity and may form the basis of computations through dynamics. Yet the dynamical mechanisms needed to generate a population code based on transient trajectories have not been fully elucidated. Here we examine transient coding in a broad class of high-dimensional linear networks of recurrently connected units. We start by reviewing a well-known result that leads to a distinction between two classes of networks: networks in which all inputs lead to weak, decaying transients, and networks in which specific inputs elicit amplified transient responses and are mapped onto output states during the dynamics. These two classes are simply distinguished based on the spectrum of the symmetric part of the connectivity matrix. For the second class of networks, which is a sub-class of non-normal networks, we provide a procedure to identify transiently amplified inputs and the corresponding readouts. We first apply these results to standard randomly-connected and two-population networks. We then build minimal, low-rank networks that robustly implement trajectories mapping a specific input onto a specific orthogonal output state. Finally, we demonstrate that the capacity of the obtained networks increases proportionally with their size.
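The spectral criterion mentioned in the abstract is easy to check numerically. For linear rate dynamics of the form dx/dt = -x + Wx, transient amplification of some input is possible exactly when the largest eigenvalue of the symmetric part (W + W^T)/2 exceeds 1. A minimal sketch (the Gaussian connectivity and its gain are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Random Gaussian connectivity for the linear dynamics dx/dt = -x + W x
w = rng.normal(0.0, 0.5 / np.sqrt(n), (n, n))

ws = (w + w.T) / 2.0                      # symmetric part of the connectivity
lam_max = float(np.linalg.eigvalsh(ws).max())

# Criterion: some inputs are transiently amplified iff lam_max > 1;
# at this gain the random network sits in the weak, decaying-transient class.
amplifying = lam_max > 1.0
```

Raising the gain of `w` until `lam_max` crosses 1 moves the network into the second class, without necessarily making the dynamics unstable (stability depends on the eigenvalues of `w` itself, not of `ws`).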
Collapse
Affiliation(s)
- Giulio Bondanelli
- Laboratoire de Neurosciences Cognitives et Computationelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
| | - Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
| |
Collapse
|
138
|
Gallego JA, Perich MG, Chowdhury RH, Solla SA, Miller LE. Long-term stability of cortical population dynamics underlying consistent behavior. Nat Neurosci 2020; 23:260-270. [PMID: 31907438 PMCID: PMC7007364 DOI: 10.1038/s41593-019-0555-4] [Citation(s) in RCA: 165] [Impact Index Per Article: 33.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2018] [Accepted: 11/11/2019] [Indexed: 01/08/2023]
Abstract
Animals readily execute learned behaviors in a consistent manner over long periods of time, and yet no equally stable neural correlate has been demonstrated. How does the cortex achieve this stable control? Using the sensorimotor system as a model of cortical processing, we investigated the hypothesis that the dynamics of neural latent activity, which captures the dominant co-variation patterns within the neural population, must be preserved across time. We recorded from populations of neurons in premotor, primary motor and somatosensory cortices as monkeys performed a reaching task, for up to 2 years. Intriguingly, despite a steady turnover in the recorded neurons, the low-dimensional latent dynamics remained stable. The stability allowed reliable decoding of behavioral features for the entire timespan, while fixed decoders based directly on the recorded neural activity degraded substantially. We posit that stable latent cortical dynamics within the manifold are the fundamental building blocks underlying consistent behavioral execution.
Collapse
Affiliation(s)
- Juan A Gallego
- Neural and Cognitive Engineering Group, Center for Automation and Robotics, Spanish National Research Council, Arganda del Rey, Spain.
- Department of Physiology, Northwestern University, Chicago, IL, USA.
- Department of Bioengineering, Imperial College London, London, UK.
| | - Matthew G Perich
- Department of Fundamental Neuroscience, Faculty of Medicine, University of Geneva, Geneva, Switzerland
| | - Raeed H Chowdhury
- Department of Biomedical Engineering, Northwestern University, Evanston, IL, USA
| | - Sara A Solla
- Department of Physiology, Northwestern University, Chicago, IL, USA
- Department of Physics and Astronomy, Northwestern University, Evanston, IL, USA
| | - Lee E Miller
- Department of Physiology, Northwestern University, Chicago, IL, USA.
- Department of Biomedical Engineering, Northwestern University, Evanston, IL, USA.
- Department of Physical Medicine and Rehabilitation, Northwestern University, and Shirley Ryan Ability Lab, Chicago, IL, USA.
| |
Collapse
|
139
|
Slayton MA, Romero-Sosa JL, Shore K, Buonomano DV, Viskontas IV. Musical expertise generalizes to superior temporal scaling in a Morse code tapping task. PLoS One 2020; 15:e0221000. [PMID: 31905200 PMCID: PMC6944339 DOI: 10.1371/journal.pone.0221000] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2019] [Accepted: 12/10/2019] [Indexed: 11/26/2022] Open
Abstract
A key feature of the brain’s ability to tell time and generate complex temporal patterns is its capacity to produce similar temporal patterns at different speeds. For example, humans can tie a shoe, type, or play an instrument at different speeds or tempi—a phenomenon referred to as temporal scaling. While it is well established that training improves timing precision and accuracy, it is not known whether expertise improves temporal scaling, and if so, whether it generalizes across skill domains. We quantified temporal scaling and timing precision in musicians and non-musicians as they learned to tap a Morse code sequence. We found that non-musicians improved significantly over the course of days of training at the standard speed. In contrast, musicians exhibited a high level of temporal precision on the first day, which did not improve significantly with training. Although there was no significant difference in performance at the end of training at the standard speed, musicians were significantly better at temporal scaling—i.e., at reproducing the learned Morse code pattern at faster and slower speeds. Interestingly, both musicians and non-musicians exhibited a Weber-speed effect, where temporal precision at the same absolute time was higher when producing patterns at the faster speed. These results are the first to establish that the ability to generate the same motor patterns at different speeds improves with extensive training and generalizes to non-musical domains.
Collapse
Affiliation(s)
- Matthew A. Slayton
- San Francisco Conservatory of Music, San Francisco, CA, United States of America
| | - Juan L. Romero-Sosa
- Department of Neurobiology, University of California Los Angeles, Los Angeles, CA, United States of America
- Neuroscience Interdepartmental Program, University of California Los Angeles, Los Angeles, CA, United States of America
| | - Katrina Shore
- San Francisco Conservatory of Music, San Francisco, CA, United States of America
| | - Dean V. Buonomano
- Department of Neurobiology, University of California Los Angeles, Los Angeles, CA, United States of America
- Neuroscience Interdepartmental Program, University of California Los Angeles, Los Angeles, CA, United States of America
- Department of Psychology, University of California Los Angeles, Los Angeles, CA, United States of America
| | - Indre V. Viskontas
- San Francisco Conservatory of Music, San Francisco, CA, United States of America
- Department of Psychology, University of San Francisco, San Francisco, CA, United States of America
| |
Collapse
|
140
|
Ritz H, Frömer R, Shenhav A. Bridging Motor and Cognitive Control: It's About Time! Trends Cogn Sci 2020; 24:6-8. [PMID: 31780248 PMCID: PMC6989175 DOI: 10.1016/j.tics.2019.11.005] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2019] [Accepted: 11/08/2019] [Indexed: 11/22/2022]
Abstract
Is how we control our thoughts similar to how we control our movements? Egger et al. show that the neural dynamics underlying the control of internal states exhibit algorithmic properties similar to those that control movements. This experiment reveals a promising connection between how we control our brain and our body.
Collapse
Affiliation(s)
- Harrison Ritz
- Brown University, 190 Thayer Street, Box 1821, Providence, RI 02912, USA
| | - Romy Frömer
- Brown University, 190 Thayer Street, Box 1821, Providence, RI 02912, USA
| | - Amitai Shenhav
- Brown University, 190 Thayer Street, Box 1821, Providence, RI 02912, USA.
| |
Collapse
|
141
|
Kao JC. Considerations in using recurrent neural networks to probe neural dynamics. J Neurophysiol 2019; 122:2504-2521. [PMID: 31619125 DOI: 10.1152/jn.00467.2018] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Recurrent neural networks (RNNs) are increasingly being used to model complex cognitive and motor tasks performed by behaving animals. RNNs are trained to reproduce animal behavior while also capturing key statistics of empirically recorded neural activity. In this manner, the RNN can be viewed as an in silico circuit whose computational elements share similar motifs with the cortical area it is modeling. Furthermore, because the RNN's governing equations and parameters are fully known, they can be analyzed to propose hypotheses for how neural populations compute. In this context, we present important considerations when using RNNs to model motor behavior in a delayed reach task. First, by varying the network's nonlinear activation and rate regularization, we show that RNNs reproducing single-neuron firing rate motifs may not adequately capture important population motifs. Second, we find that even when RNNs reproduce key neurophysiological features on both the single neuron and population levels, they can do so through distinctly different dynamical mechanisms. To distinguish between these mechanisms, we show that an RNN consistent with a previously proposed dynamical mechanism is more robust to input noise. Finally, we show that these dynamics are sufficient for the RNN to generalize to tasks it was not trained on. Together, these results emphasize important considerations when using RNN models to probe neural dynamics. NEW & NOTEWORTHY Artificial neurons in a recurrent neural network (RNN) may resemble empirical single-unit activity but not adequately capture important features on the neural population level. Dynamics of RNNs can be visualized in low-dimensional projections to provide insight into the RNN's dynamical mechanism. RNNs trained in different ways may reproduce neurophysiological motifs but do so with distinctly different mechanisms. RNNs trained to only perform a delayed reach task can generalize to perform tasks where the target is switched or the target location is changed.
Collapse
Affiliation(s)
- Jonathan C Kao
- Department of Electrical and Computer Engineering, University of California, Los Angeles, California; Neurosciences Program, University of California, Los Angeles, California
| |
Collapse
|
142
|
Maheswaranathan N, Williams AH, Golub MD, Ganguli S, Sussillo D. Universality and individuality in neural dynamics across large populations of recurrent networks. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 2019; 2019:15629-15641. [PMID: 32782422 PMCID: PMC7416639] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Task-based modeling with recurrent neural networks (RNNs) has emerged as a popular way to infer the computational function of different brain regions. These models are quantitatively assessed by comparing the low-dimensional neural representations of the model with the brain, for example using canonical correlation analysis (CCA). However, the nature of the detailed neurobiological inferences one can draw from such efforts remains elusive. For example, to what extent does training neural networks to solve common tasks uniquely determine the network dynamics, independent of modeling architectural choices? Or alternatively, are the learned dynamics highly sensitive to different model choices? Knowing the answer to these questions has strong implications for whether and how we should use task-based RNN modeling to understand brain dynamics. To address these foundational questions, we study populations of thousands of networks, with commonly used RNN architectures, trained to solve neuroscientifically motivated tasks and characterize their nonlinear dynamics. We find the geometry of the RNN representations can be highly sensitive to different network architectures, yielding a cautionary tale for measures of similarity that rely on representational geometry, such as CCA. Moreover, we find that while the geometry of neural dynamics can vary greatly across architectures, the underlying computational scaffold (the topological structure of fixed points, transitions between them, limit cycles, and linearized dynamics) often appears universal across all architectures.
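The "computational scaffold" referred to above is typically mapped with fixed-point analysis: locate states where the dynamics are (nearly) stationary, then linearize there. A minimal sketch for a discrete-time tanh RNN; random untrained weights stand in for a trained network, and simple iteration of the map replaces the speed-minimization search used in practice:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
w = rng.normal(0.0, 0.8 / np.sqrt(n), (n, n))  # stand-in for trained weights

def step(x):
    return np.tanh(w @ x)                      # one RNN update

# Find a stable fixed point by iterating the map from a random initial state
x = rng.normal(0.0, 0.5, n)
for _ in range(500):
    x = step(x)

# Linearize at the candidate fixed point: Jacobian of tanh(Wx).
# Its eigenvalues characterize the local dynamics (|eig| < 1 -> attracting).
jac = (1.0 - np.tanh(w @ x) ** 2)[:, None] * w
eigvals = np.linalg.eigvals(jac)
```

Comparing networks by the set of such fixed points, their stability, and the transitions between them is what allows the topological structure to look universal even when the representational geometry differs.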
Collapse
Affiliation(s)
| | | | | | - Surya Ganguli
- Stanford University and Google Brain, Stanford, CA and Mountain View, CA
| | | |
Collapse
|
143
|
Parthasarathy A, Tang C, Herikstad R, Cheong LF, Yen SC, Libedinsky C. Time-invariant working memory representations in the presence of code-morphing in the lateral prefrontal cortex. Nat Commun 2019; 10:4995. [PMID: 31676790 PMCID: PMC6825148 DOI: 10.1038/s41467-019-12841-y] [Citation(s) in RCA: 37] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2019] [Accepted: 09/27/2019] [Indexed: 11/09/2022] Open
Abstract
Maintenance of working memory is thought to involve the activity of prefrontal neuronal populations with strong recurrent connections. However, it was recently shown that distractors evoke a morphing of the prefrontal population code, even when memories are maintained throughout the delay. How can a morphing code maintain time-invariant memory information? We hypothesized that dynamic prefrontal activity contains time-invariant memory information within a subspace of neural activity. Using an optimization algorithm, we found a low-dimensional subspace that contains time-invariant memory information. This information was reduced in trials where the animals made errors in the task, and was also found in periods of the trial not used to find the subspace. A bump attractor model replicated these properties, and provided predictions that were confirmed in the neural data. Our results suggest that the high-dimensional responses of prefrontal cortex contain subspaces where different types of information can be simultaneously encoded with minimal interference.
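As a loose illustration of the idea of a time-invariant memory subspace inside morphing population activity (a simplified heuristic, not the authors' optimization algorithm): remove the shared, condition-independent time course, then take principal components of the time-averaged condition means. All names and values below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
n_cond, n_time, n_units = 4, 60, 50

# Synthetic activity: a condition-specific fixed pattern, plus a strong shared
# time-varying ("morphing") component, plus trial noise
cond_patterns = rng.normal(0.0, 1.0, (n_cond, 1, n_units))
shared_dynamics = (np.sin(np.linspace(0, 3 * np.pi, n_time))[None, :, None]
                   * rng.normal(0.0, 2.0, (1, 1, n_units)))
X = cond_patterns + shared_dynamics + rng.normal(0.0, 0.1, (n_cond, n_time, n_units))

# Heuristic memory subspace: strip the condition-averaged time course, then
# take principal axes of the time-averaged condition means
X_c = X - X.mean(axis=0, keepdims=True)   # removes the shared dynamics
M = X_c.mean(axis=1)                      # (n_cond, n_units), time-averaged
M = M - M.mean(axis=0)
_, _, vt = np.linalg.svd(M, full_matrices=False)
subspace = vt[: n_cond - 1]               # candidate time-invariant axes

# Projections separate conditions at every time point despite the morphing
proj = X_c @ subspace.T                   # (n_cond, n_time, n_cond - 1)
```

In this subspace the between-condition separation stays large and nearly constant over time, which is the signature the paper looks for (there via an explicit optimization, and validated on held-out trial periods).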
Collapse
Affiliation(s)
| | - Cheng Tang
- Institute of Molecular and Cell Biology, A*STAR, Singapore, Singapore
| | - Roger Herikstad
- The N.1 Institute for Health, National University of Singapore (NUS), Singapore, Singapore
| | - Loong Fah Cheong
- Department of Electrical and Computer Engineering, NUS, Singapore, Singapore
| | - Shih-Cheng Yen
- The N.1 Institute for Health, National University of Singapore (NUS), Singapore, Singapore.
- Innovation and Design Programme, Faculty of Engineering, NUS, Singapore, Singapore.
| | - Camilo Libedinsky
- Institute of Molecular and Cell Biology, A*STAR, Singapore, Singapore.
- The N.1 Institute for Health, National University of Singapore (NUS), Singapore, Singapore.
- Department of Psychology, NUS, Singapore, Singapore.
| |
Collapse
|
144
|
Nobre AC, Stokes MG. Premembering Experience: A Hierarchy of Time-Scales for Proactive Attention. Neuron 2019; 104:132-146. [PMID: 31600510 PMCID: PMC6873797 DOI: 10.1016/j.neuron.2019.08.030] [Citation(s) in RCA: 88] [Impact Index Per Article: 14.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2019] [Revised: 08/07/2019] [Accepted: 08/20/2019] [Indexed: 12/30/2022]
Abstract
Memories are about the past, but they serve the future. Memory research often emphasizes the former aspect: focusing on the functions that re-constitute (re-member) experience and elucidating the various types of memories and their interrelations, timescales, and neural bases. Here we highlight the prospective nature of memory in guiding selective attention, focusing on functions that use previous experience to anticipate the relevant events about to unfold-to "premember" experience. Memories of various types and timescales play a fundamental role in guiding perception and performance adaptively, proactively, and dynamically. Consonant with this perspective, memories are often recorded according to expected future demands. Using working memory as an example, we consider how mnemonic content is selected and represented for future use. This perspective moves away from the traditional representational account of memory toward a functional account in which forward-looking memory traces are informationally and computationally tuned for interacting with incoming sensory signals to guide adaptive behavior.
Collapse
Affiliation(s)
- Anna C Nobre
- Department of Experimental Psychology, University of Oxford, Oxford, UK; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK.
| | - Mark G Stokes
- Department of Experimental Psychology, University of Oxford, Oxford, UK; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
| |
Collapse
|
145
|
Internal models of sensorimotor integration regulate cortical dynamics. Nat Neurosci 2019; 22:1871-1882. [PMID: 31591558 PMCID: PMC6903408 DOI: 10.1038/s41593-019-0500-6] [Citation(s) in RCA: 31] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2018] [Accepted: 08/16/2019] [Indexed: 01/20/2023]
Abstract
Sensorimotor control during overt movements is characterized in terms of three building blocks: a controller, a simulator, and a state estimator. We asked whether the same framework could explain the control of internal states in the absence of movements. Recently, it was shown that the brain controls the timing of future movements by adjusting an internal speed command. We trained monkeys in a novel task in which the speed command had to be controlled dynamically based on the timing of a sequence of flashes. Recordings from the frontal cortex provided evidence that the brain updates the internal speed command after each flash based on the error between the timing of the flash and the anticipated timing of the flash derived from a simulated motor plan. These findings suggest that cognitive control of internal states may be understood in terms of the same computational principles as motor control.
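The abstract's core idea, updating an internal speed command from the error between an anticipated and an observed event time, can be illustrated with a minimal sketch. The function name, the one-dimensional phase variable, and the gain are assumptions made for illustration; they are not the paper's actual model.

```python
def update_speed_command(u, observed_interval, target_phase=1.0, gain=0.5):
    """Nudge an internal speed command u after one flash.

    A simulated plan anticipates the next flash when the internal phase
    (u * elapsed time) reaches target_phase. The command is adjusted in
    proportion to the relative timing error. Illustrative sketch only.
    """
    anticipated_interval = target_phase / u
    relative_error = (anticipated_interval - observed_interval) / anticipated_interval
    return u * (1.0 + gain * relative_error)

# Repeated flashes at a fixed tempo drive the command toward the
# speed that makes the anticipated and observed intervals match.
u = 0.5
for _ in range(50):
    u = update_speed_command(u, observed_interval=1.0)
```

With a 1.0 s flash interval and target phase 1.0, the fixed point of this update is u = 1.0, so iterating converges the command to the true tempo.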
Collapse
|
146
|
Neuroscience out of control: control-theoretic perspectives on neural circuit dynamics. Curr Opin Neurobiol 2019; 58:122-129. [DOI: 10.1016/j.conb.2019.09.001] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2019] [Revised: 07/16/2019] [Accepted: 09/03/2019] [Indexed: 12/19/2022]
|
147
|
Beer C, Barak O. One Step Back, Two Steps Forward: Interference and Learning in Recurrent Neural Networks. Neural Comput 2019; 31:1985-2003. [DOI: 10.1162/neco_a_01222] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Artificial neural networks, trained to perform cognitive tasks, have recently been used as models for neural recordings from animals performing these tasks. While some progress has been made in performing such comparisons, the evolution of network dynamics throughout learning remains unexplored. This is paralleled by an experimental focus on recording from trained animals, with few studies following neural activity throughout training. In this work, we address this gap in the realm of artificial networks by analyzing networks that are trained to perform memory and pattern generation tasks. The functional aspect of these tasks corresponds to dynamical objects in the fully trained network—a line attractor or a set of limit cycles for the two respective tasks. We use these dynamical objects as anchors to study the effect of learning on their emergence. We find that the sequential nature of learning—one trial at a time—has major consequences for the learning trajectory and its final outcome. Specifically, we show that least mean squares (LMS), a simple gradient descent suggested as a biologically plausible version of the FORCE algorithm, is constantly obstructed by forgetting, which is manifested as the destruction of dynamical objects from previous trials. The degree of interference is determined by the correlation between different trials. We show which specific ingredients of FORCE avoid this phenomenon. Overall, this difference results in convergence that is orders of magnitude slower for LMS. Learning implies accumulating information across multiple trials to form the overall concept of the task. Our results show that interference between trials can greatly affect learning in a learning-rule-dependent manner. These insights can help design experimental protocols that minimize such interference, and possibly infer underlying learning rules by observing behavior and neural activity throughout learning.
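The contrast the abstract draws, plain gradient (LMS) updates versus the recursive-least-squares machinery in FORCE, can be sketched for a single linear readout. This is a simplified stand-in (a static regression target rather than a recurrent network with attractors), and all names and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20
w_true = rng.standard_normal(N)  # readout weights to be recovered

def lms_step(w, r, err, lr=0.01):
    # plain gradient step: weight change proportional to error x activity
    return w + lr * err * r

def rls_step(w, P, r, err):
    # FORCE-style recursive least squares: P tracks the inverse
    # correlation matrix of the activity, decorrelating the updates
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)
    return w + err * k, P - np.outer(k, Pr)

w_lms = np.zeros(N)
w_rls = np.zeros(N)
P = np.eye(N)
for _ in range(200):
    r = rng.standard_normal(N)          # one "trial" of activity
    target = w_true @ r                  # noiseless teaching signal
    w_lms = lms_step(w_lms, r, target - w_lms @ r)
    w_rls, P = rls_step(w_rls, P, r, target - w_rls @ r)

err_lms = np.linalg.norm(w_lms - w_true)
err_rls = np.linalg.norm(w_rls - w_true)
```

Because the RLS update decorrelates successive trials, `w_rls` reaches the target weights far faster than `w_lms`, mirroring the slow, interference-limited convergence of LMS described in the abstract.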
Collapse
Affiliation(s)
- Chen Beer
- Viterbi Faculty of Electrical Engineering and Network Biology Research Laboratories, Technion Israel Institute of Technology, Haifa 320003, Israel
| | - Omri Barak
- Network Biology Research Laboratories and Rappaport Faculty of Medicine, Technion Israel Institute of Technology, Haifa 320003, Israel
| |
Collapse
|
148
|
Musall S, Urai AE, Sussillo D, Churchland AK. Harnessing behavioral diversity to understand neural computations for cognition. Curr Opin Neurobiol 2019; 58:229-238. [PMID: 31670073 PMCID: PMC6931281 DOI: 10.1016/j.conb.2019.09.011] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2019] [Revised: 08/28/2019] [Accepted: 09/11/2019] [Indexed: 11/28/2022]
Abstract
With the increasing acquisition of large-scale neural recordings comes the challenge of inferring the computations they perform and understanding how these give rise to behavior. Here, we review emerging conceptual and technological advances that begin to address this challenge, garnering insights from both biological and artificial neural networks. We argue that neural data should be recorded during rich behavioral tasks, to model cognitive processes and estimate latent behavioral variables. Careful quantification of animal movements can also provide a more complete picture of how movements shape neural dynamics and reflect changes in brain state, such as arousal or stress. Artificial neural networks (ANNs) could serve as artificial model organisms to connect neural dynamics and rich behavioral data. ANNs have already begun to reveal how a wide range of different behaviors can be implemented, generating hypotheses about how observed neural activity might drive behavior and explaining diversity in behavioral strategies.
Collapse
Affiliation(s)
- Simon Musall
- Cold Spring Harbor Laboratory, Neuroscience, Cold Spring Harbor, NY, USA
| | - Anne E Urai
- Cold Spring Harbor Laboratory, Neuroscience, Cold Spring Harbor, NY, USA
| | - David Sussillo
- Google AI, Google, Inc., Mountain View, CA, USA; Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Stanford Neurosciences Institute, Stanford University, Stanford, CA, USA
| | - Anne K Churchland
- Cold Spring Harbor Laboratory, Neuroscience, Cold Spring Harbor, NY, USA.
| |
Collapse
|
149
|
Whiteway MR, Butts DA. The quest for interpretable models of neural population activity. Curr Opin Neurobiol 2019; 58:86-93. [PMID: 31426024 DOI: 10.1016/j.conb.2019.07.004] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2018] [Accepted: 07/14/2019] [Indexed: 11/24/2022]
Abstract
Many aspects of brain function arise from the coordinated activity of large populations of neurons. Recent developments in neural recording technologies are providing unprecedented access to the activity of such populations during increasingly complex experimental contexts; however, extracting scientific insights from such recordings requires the concurrent development of analytical tools that relate this population activity to system-level function. This is a primary motivation for latent variable models, which seek to provide a low-dimensional description of population activity that can be related to experimentally controlled variables, as well as uncontrolled variables such as internal states (e.g. attention and arousal) and elements of behavior. While deriving an understanding of function from traditional latent variable methods relies on low-dimensional visualizations, new approaches are targeting more interpretable descriptions of the components underlying system-level function.
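The basic logic of latent variable models described above, recovering a low-dimensional description from high-dimensional population activity, can be sketched with PCA on simulated data. The dimensions, noise level, and PCA-via-SVD choice are illustrative assumptions, not any specific method from the review.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_time, n_latents = 100, 500, 3

latents = rng.standard_normal((n_time, n_latents))      # hidden low-D signals
loading = rng.standard_normal((n_neurons, n_latents))   # mixing onto neurons
activity = latents @ loading.T + 0.1 * rng.standard_normal((n_time, n_neurons))

# PCA via SVD of the mean-centered activity matrix
X = activity - activity.mean(axis=0)
_, s, _ = np.linalg.svd(X, full_matrices=False)
var_explained = s**2 / np.sum(s**2)

# the three planted latents should dominate the variance
top3 = float(var_explained[:3].sum())
```

Here the first three components capture nearly all the variance because the activity was generated from three latents plus weak noise; real recordings are messier, which is exactly why interpretability of the recovered components becomes the central issue.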
Collapse
Affiliation(s)
- Matthew R Whiteway
- Zuckerman Mind Brain Behavior Institute, Jerome L Greene Science Center, Columbia University, 3227 Broadway, 5th Floor, Quad D, New York, NY 10027, USA
| | - Daniel A Butts
- Department of Biology and Program in Neuroscience and Cognitive Science, University of Maryland, 1210 Biology-Psychology Bldg. #144, College Park, MD 20742, USA.
| |
Collapse
|
150
|
Sohn H, Narain D, Meirhaeghe N, Jazayeri M. Bayesian Computation through Cortical Latent Dynamics. Neuron 2019; 103:934-947.e5. [PMID: 31320220 DOI: 10.1016/j.neuron.2019.06.012] [Citation(s) in RCA: 120] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2018] [Revised: 04/15/2019] [Accepted: 06/13/2019] [Indexed: 10/26/2022]
Abstract
Statistical regularities in the environment create prior beliefs that we rely on to optimize our behavior when sensory information is uncertain. Bayesian theory formalizes how prior beliefs can be leveraged and has had a major impact on models of perception, sensorimotor function, and cognition. However, it is not known how recurrent interactions among neurons mediate Bayesian integration. By using a time-interval reproduction task in monkeys, we found that prior statistics warp neural representations in the frontal cortex, allowing the mapping of sensory inputs to motor outputs to incorporate prior statistics in accordance with Bayesian inference. Analysis of recurrent neural network models performing the task revealed that this warping was enabled by a low-dimensional curved manifold and allowed us to further probe the potential causal underpinnings of this computational strategy. These results uncover a simple and general principle whereby prior beliefs exert their influence on behavior by sculpting cortical latent dynamics.
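The behavioral signature of the Bayesian integration described above can be sketched with a Bayes-least-squares estimator for interval timing: a uniform prior combined with scalar (Weber-like) measurement noise pulls estimates toward the middle of the prior. The prior range, Weber fraction, and grid resolution below are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def bls_estimate(m, prior_support=(600.0, 1000.0), weber=0.1, n_grid=2001):
    """Bayes-least-squares estimate of an interval from a noisy
    measurement m (same units as the interval, e.g. ms).

    Assumes a uniform prior over prior_support and Gaussian measurement
    noise whose s.d. scales with the interval. Illustrative sketch only.
    """
    t = np.linspace(*prior_support, n_grid)
    sigma = weber * t
    likelihood = np.exp(-0.5 * ((m - t) / sigma) ** 2) / sigma
    posterior = likelihood / likelihood.sum()   # uniform prior cancels
    return float(np.sum(t * posterior))         # posterior mean (BLS)

short = bls_estimate(600.0)    # measurement at the short end of the prior
long_ = bls_estimate(1000.0)   # measurement at the long end
```

Measurements at the short end are estimated longer than measured and vice versa, i.e. the regression toward the prior mean that the warped neural representations in the paper are proposed to implement.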
Collapse
Affiliation(s)
- Hansem Sohn
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
| | - Devika Narain
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Erasmus Medical Center, Rotterdam 3015CN, the Netherlands
| | - Nicolas Meirhaeghe
- Harvard-MIT Division of Health Sciences and Technology, Cambridge, MA 02139, USA
| | - Mehrdad Jazayeri
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
| |
Collapse
|