1. Ruff DA, Markman SK, Kim JZ, Cohen MR. Linking neural population formatting to function. bioRxiv 2025:2025.01.03.631242. [PMID: 39803479; PMCID: PMC11722384; DOI: 10.1101/2025.01.03.631242]
Abstract
Animals capable of complex behaviors tend to have more distinct brain areas than simpler organisms, and artificial networks that perform many tasks tend to self-organize into modules (1-3). This suggests that different brain areas serve distinct functions supporting complex behavior. However, a common observation is that essentially anything that an animal senses, knows, or does can be decoded from neural activity in any brain area (4-6). If everything is everywhere, why have distinct areas? Here we show that the function of a brain area is more related to how different types of information are combined (formatted) in neural representations than merely whether that information is present. We compared two brain areas: the middle temporal area (MT), which is important for visual motion perception (7, 8), and the dorsolateral prefrontal cortex (dlPFC), which is linked to decision-making and reward expectation (9, 10). When monkeys based decisions on a combination of motion and reward information, both types of information were present in both areas. However, they were formatted differently: in MT, they were encoded separably, while in dlPFC, they were represented jointly in ways that reflected the monkeys' decision-making. A recurrent neural network (RNN) model that mirrored the information formatting in MT and dlPFC predicted that manipulating activity in these areas would differently affect decision-making. Consistent with model predictions, electrically stimulating MT biased choices midway between the visual motion stimulus and the preferred direction of the stimulated units (11), while stimulating dlPFC produced 'winner-take-all' decisions that sometimes reflected the visual motion stimulus and sometimes reflected the preference of the stimulated units, but never in between. These results are consistent with the tantalizing possibility that a modular structure enables complex behavior by flexibly reformatting information to accomplish behavioral goals.
Affiliation(s)
- Douglas A Ruff: Department of Neurobiology, University of Chicago, IL, USA
- Sol K Markman: Department of Neurobiology, University of Chicago, IL, USA; Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, MA, USA
- Jason Z Kim: Department of Physics, Cornell University, Ithaca, NY, USA
2. Bardella G, Franchini S, Pani P, Ferraina S. Lattice physics approaches for neural networks. iScience 2024; 27:111390. [PMID: 39679297; PMCID: PMC11638618; DOI: 10.1016/j.isci.2024.111390]
Abstract
Modern neuroscience has evolved into a frontier field that draws on numerous disciplines, resulting in the flourishing of novel conceptual frames primarily inspired by physics and complex systems science. Contributing in this direction, we recently introduced a mathematical framework to describe the spatiotemporal interactions of systems of neurons using lattice field theory, the reference paradigm for theoretical particle physics. In this note, we provide a concise summary of the basics of the theory, aiming to be intuitive to the interdisciplinary neuroscience community. We contextualize our methods, illustrating how to readily connect the parameters of our formulation to experimental variables using well-known renormalization procedures. This synopsis yields the key concepts needed to describe neural networks using lattice physics. Such classes of methods are attention-worthy in an era of blistering improvements in numerical computations, as they can facilitate relating the observation of neural activity to generative models underpinned by physical principles.
Affiliation(s)
- Giampiero Bardella: Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
- Simone Franchini: Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
- Pierpaolo Pani: Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
- Stefano Ferraina: Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
3. Yu H, Zhao Q. Brain-inspired multisensory integration neural network for cross-modal recognition through spatiotemporal dynamics and deep learning. Cogn Neurodyn 2024; 18:3615-3628. [PMID: 39712112; PMCID: PMC11655826; DOI: 10.1007/s11571-023-09932-4]
Abstract
The integration and interaction of cross-modal senses in brain neural networks can facilitate high-level cognitive functionalities. In this work, we proposed a bioinspired multisensory integration neural network (MINN) that integrates visual and audio senses for recognizing multimodal information across different sensory modalities. This deep learning-based model incorporates a cascading framework of parallel convolutional neural networks (CNNs) for extracting intrinsic features from visual and audio inputs, and a recurrent neural network (RNN) for multimodal information integration and interaction. The network was trained using synthetic training data generated for digital recognition tasks. It was revealed that the spatial and temporal features extracted from visual and audio inputs by the CNNs were encoded in subspaces orthogonal to each other. During the integration epoch, the network state evolved along quasi-rotation-symmetric trajectories, and a structural manifold with stable attractors formed in the RNN, supporting accurate cross-modal recognition. We further evaluated the robustness of the MINN algorithm with noisy inputs and asynchronous digital inputs. Experimental results demonstrated the superior performance of MINN for flexible integration and accurate recognition of multisensory information with distinct sense properties. The present results provide insights into the computational principles governing multisensory integration and a comprehensive neural network model for brain-inspired intelligence.
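The cascaded architecture this abstract describes (parallel CNN feature extractors whose outputs are integrated by an RNN) can be sketched in a few lines of PyTorch. This is a generic illustration of that class of model, not the authors' implementation; the module names, layer sizes, input shapes, and the choice of a GRU for the recurrent stage are all assumptions.

```python
import torch
import torch.nn as nn

class MultisensoryIntegrationNet(nn.Module):
    """Sketch of a visual-audio integration network: two parallel CNN
    encoders whose features are concatenated per time step and fed to an RNN."""
    def __init__(self, n_classes=10, feat_dim=64, hidden_dim=128):
        super().__init__()
        # Visual stream: small CNN over image frames (1 x 28 x 28 assumed)
        self.visual_cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, feat_dim),
        )
        # Audio stream: small CNN over spectrogram frames (1 x 32 x 32 assumed)
        self.audio_cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, feat_dim),
        )
        # Recurrent integration stage over the concatenated features
        self.rnn = nn.GRU(2 * feat_dim, hidden_dim, batch_first=True)
        self.readout = nn.Linear(hidden_dim, n_classes)

    def forward(self, visual_seq, audio_seq):
        # visual_seq: (batch, time, 1, 28, 28); audio_seq: (batch, time, 1, 32, 32)
        b, t = visual_seq.shape[:2]
        v = self.visual_cnn(visual_seq.flatten(0, 1)).view(b, t, -1)
        a = self.audio_cnn(audio_seq.flatten(0, 1)).view(b, t, -1)
        h, _ = self.rnn(torch.cat([v, a], dim=-1))  # integrate across time
        return self.readout(h[:, -1])               # classify from the final state

# Shape check with random inputs
net = MultisensoryIntegrationNet()
logits = net(torch.randn(4, 10, 1, 28, 28), torch.randn(4, 10, 1, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```

Keeping the two encoders separate until the recurrent stage mirrors the abstract's observation that visual and audio features occupy distinct subspaces before integration.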
Affiliation(s)
- Haitao Yu: School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, China
- Quanfa Zhao: School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, China
4. Yang J, Zhang H, Lim S. Sensory-memory interactions via modular structure explain errors in visual working memory. eLife 2024; 13:RP95160. [PMID: 39388221; PMCID: PMC11466453; DOI: 10.7554/elife.95160]
Abstract
Errors in stimulus estimation reveal how stimulus representation changes during cognitive processes. Repulsive bias and minimum variance observed near cardinal axes are well-known error patterns typically associated with visual orientation perception. Recent experiments suggest that these errors continuously evolve during working memory, posing a challenge that neither static sensory models nor traditional memory models can address. Here, we demonstrate that these evolving errors, maintaining characteristic shapes, require network interaction between two distinct modules. Each module fulfills efficient sensory encoding and memory maintenance, which cannot be achieved simultaneously in a single-module network. The sensory module exhibits heterogeneous tuning with strong inhibitory modulation reflecting natural orientation statistics. While the memory module, operating alone, supports homogeneous representation via continuous attractor dynamics, the fully connected network forms discrete attractors with moderate drift speed and nonuniform diffusion processes. Together, our work underscores the significance of sensory-memory interaction in continuously shaping stimulus representation during working memory.
Affiliation(s)
- Jun Yang: Weiyang College, Tsinghua University, Beijing, China
- Hanqi Zhang: Shanghai Frontiers Science Center of Artificial Intelligence and Deep Learning, Shanghai, China; Neural Science, Shanghai, China; NYU-ECNU Institute of Brain and Cognitive Science, Shanghai, China
- Sukbin Lim: Shanghai Frontiers Science Center of Artificial Intelligence and Deep Learning, Shanghai, China; Neural Science, Shanghai, China; NYU-ECNU Institute of Brain and Cognitive Science, Shanghai, China
5. Kikumoto A, Bhandari A, Shibata K, Badre D. A transient high-dimensional geometry affords stable conjunctive subspaces for efficient action selection. Nat Commun 2024; 15:8513. [PMID: 39353961; PMCID: PMC11445473; DOI: 10.1038/s41467-024-52777-6]
Abstract
Flexible action selection requires cognitive control mechanisms capable of mapping the same inputs to different output actions depending on the context. From a neural state-space perspective, this requires a control representation that separates similar input neural states by context. Additionally, for action selection to be robust and time-invariant, information must be stable in time, enabling efficient readout. Here, using EEG decoding methods, we investigate how the geometry and dynamics of control representations constrain flexible action selection in the human brain. Participants performed a context-dependent action selection task. A forced response procedure probed action selection at different states in neural trajectories. The results show that, before successful responses, there is a transient expansion of representational dimensionality that separates conjunctive subspaces. Further, the dynamics stabilizes in the same time window, with entry into this stable, high-dimensional state predictive of individual trial performance. These results establish the neural geometry and dynamics the human brain needs for flexible control over behavior.
Affiliation(s)
- Atsushi Kikumoto: Department of Cognitive and Psychological Sciences, Brown University, Rhode Island, US; RIKEN Center for Brain Science, Wako, Saitama, Japan
- Apoorva Bhandari: Department of Cognitive and Psychological Sciences, Brown University, Rhode Island, US
- David Badre: Department of Cognitive and Psychological Sciences, Brown University, Rhode Island, US; Carney Institute for Brain Science, Brown University, Providence, Rhode Island, US
6. Kikumoto A, Bhandari A, Shibata K, Badre D. A transient high-dimensional geometry affords stable conjunctive subspaces for efficient action selection. bioRxiv 2024:2023.06.09.544428. [PMID: 37333209; PMCID: PMC10274903; DOI: 10.1101/2023.06.09.544428]
Abstract
Flexible action selection requires cognitive control mechanisms capable of mapping the same inputs to different output actions depending on the context. From a neural state-space perspective, this requires a control representation that separates similar input neural states by context. Additionally, for action selection to be robust and time-invariant, information must be stable in time, enabling efficient readout. Here, using EEG decoding methods, we investigate how the geometry and dynamics of control representations constrain flexible action selection in the human brain. Participants performed a context-dependent action selection task. A forced response procedure probed action selection at different states in neural trajectories. The results show that, before successful responses, there is a transient expansion of representational dimensionality that separates conjunctive subspaces. Further, the dynamics stabilizes in the same time window, with entry into this stable, high-dimensional state predictive of individual trial performance. These results establish the neural geometry and dynamics the human brain needs for flexible control over behavior.
Affiliation(s)
- Atsushi Kikumoto: Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Rhode Island, US; RIKEN Center for Brain Science, Wako, Saitama, Japan
- Apoorva Bhandari: Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Rhode Island, US
- David Badre: Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Rhode Island, US; Carney Institute for Brain Science, Brown University, Providence, Rhode Island, US
7. Lin Z, Huang H. Spiking mode-based neural networks. Phys Rev E 2024; 110:024306. [PMID: 39295018; DOI: 10.1103/physreve.110.024306]
Abstract
Spiking neural networks play an important role in brainlike neuromorphic computations and in studying the working mechanisms of neural circuits. One drawback of training a large-scale spiking neural network is that updating all weights is quite expensive. Furthermore, after training, all information related to the computational task is hidden in the weight matrix, preventing a transparent understanding of circuit mechanisms. Therefore, in this work, we address these challenges by proposing a spiking mode-based training protocol, where the recurrent weight matrix is explained as a Hopfield-like multiplication of three matrices: input modes, output modes, and a score matrix. The first advantage is that the weights are interpreted through input and output modes and their associated scores characterizing the importance of each decomposition term. The number of modes is thus adjustable, allowing more degrees of freedom for modeling the experimental data. This significantly reduces the training cost because of the reduced space complexity of learning. Training spiking networks is thus carried out in the mode-score space. The second advantage is that one can project the high-dimensional neural activity (filtered spike train) in the state space onto the mode space, which is typically of low dimension, e.g., a few modes are sufficient to capture the shape of the underlying neural manifolds. We successfully apply our framework to two computational tasks: digit classification and selective sensory integration. Our method thus accelerates the training of spiking neural networks by a Hopfield-like decomposition, and moreover, this training leads to low-dimensional attractor structures of high-dimensional neural dynamics.
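The decomposition described here (recurrent weights written as a product of input modes, a score matrix, and output modes) is straightforward to write down; a minimal NumPy sketch is below. The matrix names, sizes, and normalization are assumptions made for illustration, and no spiking dynamics or training loop is included.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 500   # number of spiking units
M = 10    # number of modes (M << N), the trainable degrees of freedom

# Output modes (N x M), score matrix (M x M), input modes (M x N).
xi_out = rng.standard_normal((N, M)) / np.sqrt(N)
scores = rng.standard_normal((M, M))
xi_in = rng.standard_normal((M, N)) / np.sqrt(N)

# Recurrent weight matrix assembled from the three factors.
W = xi_out @ scores @ xi_in                 # shape (N, N), rank at most M
print(W.shape, np.linalg.matrix_rank(W))    # (500, 500) 10

# Projecting high-dimensional activity onto the mode space gives a
# low-dimensional description of the dynamics.
r = rng.standard_normal(N)                  # stand-in for filtered spike trains
r_modes = xi_in @ r                         # M-dimensional projection
print(r_modes.shape)                        # (10,)
```

Because only the modes and scores would be trained, the number of learnable parameters scales with N*M rather than with the N*N entries of the full weight matrix.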
8. Eissa TL, Kilpatrick ZP. Learning efficient representations of environmental priors in working memory. PLoS Comput Biol 2023; 19:e1011622. [PMID: 37943956; PMCID: PMC10662764; DOI: 10.1371/journal.pcbi.1011622]
Abstract
Experience shapes our expectations and helps us learn the structure of the environment. Inference models render such learning as a gradual refinement of the observer's estimate of the environmental prior. For instance, when retaining an estimate of an object's features in working memory, learned priors may bias the estimate in the direction of common feature values. Humans display such biases when retaining color estimates on short time intervals. We propose that these systematic biases emerge from modulation of synaptic connectivity in a neural circuit based on the experienced stimulus history, shaping the persistent and collective neural activity that encodes the stimulus estimate. Resulting neural activity attractors are aligned to common stimulus values. Using recently published human response data from a delayed-estimation task in which stimuli (colors) were drawn from a heterogeneous distribution that did not necessarily correspond with reported population biases, we confirm that most subjects' response distributions are better described by experience-dependent learning models than by models with fixed biases. This work suggests systematic limitations in working memory reflect efficient representations of inferred environmental structure, providing new insights into how humans integrate environmental knowledge into their cognitive strategies.
Affiliation(s)
- Tahra L. Eissa: Department of Applied Mathematics, University of Colorado Boulder, Boulder, Colorado, United States of America
- Zachary P. Kilpatrick: Department of Applied Mathematics, University of Colorado Boulder, Boulder, Colorado, United States of America; Institute of Cognitive Science, University of Colorado Boulder, Boulder, Colorado, United States of America
9. Langdon C, Genkin M, Engel TA. A unifying perspective on neural manifolds and circuits for cognition. Nat Rev Neurosci 2023; 24:363-377. [PMID: 37055616; PMCID: PMC11058347; DOI: 10.1038/s41583-023-00693-x]
Abstract
Two different perspectives have informed efforts to explain the link between the brain and behaviour. One approach seeks to identify neural circuit elements that carry out specific functions, emphasizing connectivity between neurons as a substrate for neural computations. Another approach centres on neural manifolds - low-dimensional representations of behavioural signals in neural population activity - and suggests that neural computations are realized by emergent dynamics. Although manifolds reveal an interpretable structure in heterogeneous neuronal activity, finding the corresponding structure in connectivity remains a challenge. We highlight examples in which establishing the correspondence between low-dimensional activity and connectivity has been possible, unifying the neural manifold and circuit perspectives. This relationship is conspicuous in systems in which the geometry of neural responses mirrors their spatial layout in the brain, such as the fly navigational system. Furthermore, we describe evidence that, in systems in which neural responses are heterogeneous, the circuit comprises interactions between activity patterns on the manifold via low-rank connectivity. We suggest that unifying the manifold and circuit approaches is important if we are to be able to causally test theories about the neural computations that underlie behaviour.
Affiliation(s)
- Christopher Langdon: Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA; Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
- Mikhail Genkin: Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
- Tatiana A Engel: Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA; Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
10. Soda T, Ahmadi A, Tani J, Honda M, Hanakawa T, Yamashita Y. Simulating developmental diversity: Impact of neural stochasticity on atypical flexibility and hierarchy. Front Psychiatry 2023; 14:1080668. [PMID: 37009124; PMCID: PMC10050443; DOI: 10.3389/fpsyt.2023.1080668]
Abstract
Introduction: Investigating the pathological mechanisms of developmental disorders is a challenge because the symptoms result from complex and dynamic factors such as neural networks, cognitive behavior, environment, and developmental learning. Recently, computational methods have started to provide a unified framework for understanding developmental disorders, enabling us to describe the interactions among the multiple factors underlying symptoms. However, this approach is still limited because most studies to date have focused on cross-sectional task performance and lacked the perspective of developmental learning. Here, we proposed a new research method for understanding the mechanisms underlying the acquisition of hierarchical Bayesian representations and its failures, using a state-of-the-art computational model referred to as the in silico neurodevelopment framework for atypical representation learning.
Methods: Simple simulation experiments were conducted using the proposed framework to examine whether manipulating the neural stochasticity and the noise levels in external environments during the learning process can lead to altered acquisition of hierarchical Bayesian representations and reduced flexibility.
Results: Networks with normal neural stochasticity acquired hierarchical representations that reflected the underlying probabilistic structures in the environment, including higher-order representations, and exhibited good behavioral and cognitive flexibility. When the neural stochasticity was high during learning, top-down generation using higher-order representations became atypical, although flexibility did not differ from that of the normal stochasticity settings. However, when the neural stochasticity was low during learning, the networks demonstrated reduced flexibility and altered hierarchical representations. Notably, this altered acquisition of higher-order representations and flexibility was ameliorated by increasing the level of noise in external stimuli.
Discussion: These results demonstrate that the proposed method assists in modeling developmental disorders by bridging multiple factors, such as the inherent characteristics of neural dynamics, the acquisition of hierarchical representations, flexible behavior, and the external environment.
Affiliation(s)
- Takafumi Soda: Department of Information Medicine, National Institute of Neuroscience, National Center of Neurology and Psychiatry, Kodaira, Japan; Department of NCNP Brain Physiology and Pathology, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, Tokyo, Japan
- Jun Tani: Cognitive Neurorobotics Research Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan
- Manabu Honda: Department of Information Medicine, National Institute of Neuroscience, National Center of Neurology and Psychiatry, Kodaira, Japan
- Takashi Hanakawa: Integrated Neuroanatomy and Neuroimaging, Kyoto University Graduate School of Medicine, Kyoto, Japan
- Yuichi Yamashita: Department of Information Medicine, National Institute of Neuroscience, National Center of Neurology and Psychiatry, Kodaira, Japan
11. DePasquale B, Sussillo D, Abbott LF, Churchland MM. The centrality of population-level factors to network computation is demonstrated by a versatile approach for training spiking networks. Neuron 2023; 111:631-649.e10. [PMID: 36630961; PMCID: PMC10118067; DOI: 10.1016/j.neuron.2022.12.007]
Abstract
Neural activity is often described in terms of population-level factors extracted from the responses of many neurons. Factors provide a lower-dimensional description with the aim of shedding light on network computations. Yet, mechanistically, computations are performed not by continuously valued factors but by interactions among neurons that spike discretely and variably. Models provide a means of bridging these levels of description. We developed a general method for training model networks of spiking neurons by leveraging factors extracted from either data or firing-rate-based networks. In addition to providing a useful model-building framework, this formalism illustrates how reliable and continuously valued factors can arise from seemingly stochastic spiking. Our framework establishes procedures for embedding this property in network models with different levels of realism. The relationship between spikes and factors in such networks provides a foundation for interpreting (and subtly redefining) commonly used quantities such as firing rates.
Affiliation(s)
- Brian DePasquale: Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA; Department of Neuroscience, Columbia University, New York, NY, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- David Sussillo: Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- L F Abbott: Department of Neuroscience, Columbia University, New York, NY, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Department of Physiology and Cellular Biophysics, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA
- Mark M Churchland: Department of Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
12. Beiran M, Meirhaeghe N, Sohn H, Jazayeri M, Ostojic S. Parametric control of flexible timing through low-dimensional neural manifolds. Neuron 2023; 111:739-753.e8. [PMID: 36640766; PMCID: PMC9992137; DOI: 10.1016/j.neuron.2022.12.016]
Abstract
Biological brains possess an unparalleled ability to adapt behavioral responses to changing stimuli and environments. How neural processes enable this capacity is a fundamental open question. Previous works have identified two candidate mechanisms: a low-dimensional organization of neural activity and a modulation by contextual inputs. We hypothesized that combining the two might facilitate generalization and adaptation in complex tasks. We tested this hypothesis in flexible timing tasks where dynamics play a key role. Examining trained recurrent neural networks, we found that confining the dynamics to a low-dimensional subspace allowed tonic inputs to parametrically control the overall input-output transform, enabling generalization to novel inputs and adaptation to changing conditions. Reverse-engineering and theoretical analyses demonstrated that this parametric control relies on a mechanism where tonic inputs modulate the dynamics along non-linear manifolds while preserving their geometry. Comparisons with data from behaving monkeys confirmed the behavioral and neural signatures of this mechanism.
Affiliation(s)
- Manuel Beiran: Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL University, 75005 Paris, France; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Nicolas Meirhaeghe: Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Institut de Neurosciences de la Timone (INT), UMR 7289, CNRS, Aix-Marseille Université, Marseille 13005, France
- Hansem Sohn: McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Mehrdad Jazayeri: McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Srdjan Ostojic: Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL University, 75005 Paris, France
13. Yoon HG, Kim P. STDP-based associative memory formation and retrieval. J Math Biol 2023; 86:49. [PMID: 36826758; DOI: 10.1007/s00285-023-01883-y]
Abstract
Spike-timing-dependent plasticity (STDP) is a biological process in which the precise order and timing of neuronal spikes affect the degree of synaptic modification. While there has been extensive research on the role of STDP in neural coding, the functional implications of STDP at the macroscopic level in the brain have not been fully explored yet. In this work, we propose a neurodynamical model based on STDP that enables the storage and retrieval of a group of associative memories. We showed that the function of STDP at the macroscopic level is to form a "memory plane" in the neural state space that dynamically encodes high-dimensional data. We derived the analytic relation between the input, the memory plane, and the induced macroscopic neural oscillations around the memory plane. Such a plane produces a limit cycle in reaction to a similar memory cue, which can be used for retrieval of the original input.
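For readers unfamiliar with the synaptic rule the abstract builds on, the standard pair-based STDP update is sketched below: potentiation when a presynaptic spike precedes a postsynaptic one, depression otherwise, each with an exponentially decaying time window. This is the textbook rule only, not the paper's neurodynamical model; the amplitudes and time constants are arbitrary choices.

```python
import numpy as np

def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012,
                 tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for one pre/post spike pair.

    dt = t_post - t_pre (ms). Pre-before-post (dt > 0) potentiates,
    post-before-pre (dt < 0) depresses, both with exponential windows.
    """
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * np.exp(-dt / tau_plus)
    elif dt < 0:
        return -a_minus * np.exp(dt / tau_minus)
    return 0.0

# Accumulate the weight change over all pre/post spike pairs of one synapse.
pre_spikes = np.array([10.0, 50.0, 90.0])     # ms, illustrative spike times
post_spikes = np.array([12.0, 45.0, 100.0])   # ms

dw = sum(stdp_delta_w(tp, tq) for tp in pre_spikes for tq in post_spikes)
print(f"net weight change: {dw:+.4f}")
```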
Affiliation(s)
- Hong-Gyu Yoon: Department of Mathematical Sciences, Ulsan National Institute of Science and Technology (UNIST), Ulsan Metropolitan City, 44919, Republic of Korea
- Pilwon Kim: Department of Mathematical Sciences, Ulsan National Institute of Science and Technology (UNIST), Ulsan Metropolitan City, 44919, Republic of Korea
14. Zhang X, Long X, Zhang SJ, Chen ZS. Excitatory-inhibitory recurrent dynamics produce robust visual grids and stable attractors. Cell Rep 2022; 41:111777. [PMID: 36516752; PMCID: PMC9805366; DOI: 10.1016/j.celrep.2022.111777]
Abstract
Spatially modulated grid cells have been recently found in the rat secondary visual cortex (V2) during active navigation. However, the computational mechanism and functional significance of V2 grid cells remain unknown. To address the knowledge gap, we train a biologically inspired excitatory-inhibitory recurrent neural network (RNN) to perform a two-dimensional spatial navigation task with multisensory input. We find grid-like responses in both excitatory and inhibitory RNN units, which are robust with respect to spatial cues, dimensionality of visual input, and activation function. Population responses reveal a low-dimensional, torus-like manifold and attractor. We find a link between functional grid clusters with similar receptive fields and structured excitatory-to-excitatory connections. Additionally, multistable torus-like attractors emerged with increasing sparsity in inter- and intra-subnetwork connectivity. Finally, irregular grid patterns are found in RNN units during a visual sequence recognition task. Together, our results suggest common computational mechanisms of V2 grid cells for spatial and non-spatial tasks.
Affiliation(s)
- Xiaohan Zhang: Department of Psychiatry, New York University Grossman School of Medicine, New York, NY, USA
- Xiaoyang Long: Department of Neurosurgery, Xinqiao Hospital, Chongqing, China
- Sheng-Jia Zhang: Department of Neurosurgery, Xinqiao Hospital, Chongqing, China
- Zhe Sage Chen: Department of Psychiatry, New York University Grossman School of Medicine, New York, NY, USA; Department of Neurosurgery, Xinqiao Hospital, Chongqing, China; Neuroscience Institute, New York University Grossman School of Medicine, New York, NY, USA
15. Valente A, Ostojic S, Pillow J. Probing the relationship between latent linear dynamical systems and low-rank recurrent neural network models. Neural Comput 2022; 34:1871-1892. [PMID: 35896161; DOI: 10.1162/neco_a_01522]
Abstract
A large body of work has suggested that neural populations exhibit low-dimensional dynamics during behavior. However, there are a variety of different approaches for modeling low-dimensional neural population activity. One approach involves latent linear dynamical system (LDS) models, in which population activity is described by a projection of low-dimensional latent variables with linear dynamics. A second approach involves low-rank recurrent neural networks (RNNs), in which population activity arises directly from a low-dimensional projection of past activity. Although these two modeling approaches have strong similarities, they arise in different contexts and tend to have different domains of application. Here we examine the precise relationship between latent LDS models and linear low-rank RNNs. When can one model class be converted to the other, and vice versa? We show that latent LDS models can only be converted to RNNs in specific limit cases, due to the non-Markovian property of latent LDS models. Conversely, we show that linear RNNs can be mapped onto LDS models, with latent dimensionality at most twice the rank of the RNN. A surprising consequence of our results is that a partially observed RNN is better represented by an LDS model than by an RNN consisting of only observed units.
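The RNN-to-LDS direction discussed in this abstract can be checked numerically in a simple case: in a linear rank-1 RNN, the projection of the population state onto the connectivity's input-selection vector evolves as a one-dimensional linear dynamical system. The sketch below verifies this; the discrete-time formulation, the leak term, and all variable names are assumptions made for the illustration, not the authors' notation.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 200, 50
leak = 0.9                          # leak factor of the linear dynamics, assumed

# Rank-1 recurrent connectivity W = m n^T / N
m = rng.standard_normal(N)
n = rng.standard_normal(N)
W = np.outer(m, n) / N

# Simulate the linear RNN: x_{t+1} = leak * x_t + W x_t
x = np.zeros((T, N))
x[0] = rng.standard_normal(N)
for t in range(T - 1):
    x[t + 1] = leak * x[t] + W @ x[t]

# Latent variable: projection of activity onto the input-selection vector n
z = x @ n / N                       # shape (T,)

# The latent obeys a 1-D linear dynamical system z_{t+1} = a * z_t
a = leak + (n @ m) / N
print(np.allclose(z[1:], a * z[:-1]))   # True: a latent LDS is recovered
```

With rank-R connectivity the same projection yields a latent system whose dimension grows with R rather than with the number of neurons, which is the regime the abstract compares against latent LDS models.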
Affiliation(s)
- Adrian Valente: Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure-PSL Research University, 75005 Paris, France
- Srdjan Ostojic: Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure-PSL Research University, 75005 Paris, France
- Jonathan Pillow: Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ 08544, USA
16. Darshan R, Rivkind A. Learning to represent continuous variables in heterogeneous neural networks. Cell Rep 2022; 39:110612. [PMID: 35385721; DOI: 10.1016/j.celrep.2022.110612]
Abstract
Animals must monitor continuous variables such as position or head direction. Manifold attractor networks, which enable a continuum of persistent neuronal states, provide a key framework to explain this monitoring ability. Neural networks with symmetric synaptic connectivity dominate this framework but are inconsistent with the diverse synaptic connectivity and neuronal representations observed in experiments. Here, we developed a theory for manifold attractors in trained neural networks, which approximates a continuum of persistent states, without assuming unrealistic symmetry. We exploit the theory to predict how asymmetries in the representation and heterogeneity in the connectivity affect the formation of the manifold via training, shape network response to stimulus, and govern mechanisms that possibly lead to destabilization of the manifold. Our work suggests that the functional properties of manifold attractors in the brain can be inferred from the overlooked asymmetries in connectivity and in the low-dimensional representation of the encoded variable.
Affiliation(s)
- Ran Darshan: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
17. Jazayeri M, Ostojic S. Interpreting neural computations by examining intrinsic and embedding dimensionality of neural activity. Curr Opin Neurobiol 2021; 70:113-120. [PMID: 34537579; PMCID: PMC8688220; DOI: 10.1016/j.conb.2021.08.002]
Abstract
The ongoing exponential rise in recording capacity calls for new approaches for analysing and interpreting neural data. Effective dimensionality has emerged as an important property of neural activity across populations of neurons, yet different studies rely on different definitions and interpretations of this quantity. Here, we focus on intrinsic and embedding dimensionality, and discuss how they might reveal computational principles from data. Reviewing recent works, we propose that the intrinsic dimensionality reflects information about the latent variables encoded in collective activity while embedding dimensionality reveals the manner in which this information is processed. We conclude by highlighting the role of network models as an ideal substrate for testing more specifically various hypotheses on the computational principles reflected through intrinsic and embedding dimensionality.
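A widely used proxy for the embedding dimensionality discussed in this review is the participation ratio of the population covariance spectrum, PR = (Σᵢ λᵢ)² / Σᵢ λᵢ²; intrinsic dimensionality generally requires nonlinear estimators instead. The snippet below computes the participation ratio on simulated data; it is a generic estimator, not a method introduced by this review, and the simulated population is arbitrary.

```python
import numpy as np

def participation_ratio(activity):
    """Effective (embedding) dimensionality of neural activity.

    activity: array of shape (n_timepoints, n_neurons).
    Returns (sum of covariance eigenvalues)^2 / sum of squared eigenvalues.
    """
    centered = activity - activity.mean(axis=0)
    cov = centered.T @ centered / (len(activity) - 1)
    eigvals = np.linalg.eigvalsh(cov)
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()

# Simulate 100 neurons driven by 3 shared latent signals plus weak noise:
# the estimated embedding dimensionality should come out close to 3.
rng = np.random.default_rng(2)
latents = rng.standard_normal((1000, 3))
mixing = rng.standard_normal((3, 100))
activity = latents @ mixing + 0.05 * rng.standard_normal((1000, 100))

print(round(participation_ratio(activity), 2))  # roughly 3
```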
Affiliation(s)
- Mehrdad Jazayeri: McGovern Institute for Brain Research, Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Srdjan Ostojic: Laboratoire de Neurosciences Cognitives, INSERM U960, École Normale Supérieure - PSL Research University, 75005, Paris, France
18. Rajakumar A, Rinzel J, Chen ZS. Stimulus-driven and spontaneous dynamics in excitatory-inhibitory recurrent neural networks for sequence representation. Neural Comput 2021; 33:2603-2645. [PMID: 34530451; PMCID: PMC8750453; DOI: 10.1162/neco_a_01418]
Abstract
Recurrent neural networks (RNNs) have been widely used to model sequential neural dynamics ("neural sequences") of cortical circuits in cognitive and motor tasks. Efforts to incorporate biological constraints and Dale's principle will help elucidate the neural representations and mechanisms of underlying circuits. We trained an excitatory-inhibitory RNN to learn neural sequences in a supervised manner and studied the representations and dynamic attractors of the trained network. The trained RNN robustly triggered the sequence in response to various input signals and interpolated time-warped inputs for sequence representation. Interestingly, a learned sequence repeated periodically when the RNN evolved beyond the duration of a single sequence. The eigenspectrum of the learned recurrent connectivity matrix with growing or damping modes, together with the RNN's nonlinearity, was adequate to generate a limit cycle attractor. We further examined the stability of dynamic attractors while training the RNN to learn two sequences. Together, our results provide a general framework for understanding neural sequence representation in the excitatory-inhibitory RNN.
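One standard way to build the excitatory-inhibitory constraint (Dale's principle) into a trainable RNN is to parameterize the recurrent matrix as a rectified free matrix multiplied by a diagonal sign matrix, so that each unit's outgoing weights are all positive or all negative. The sketch below shows only this parameterization, with an assumed 80/20 E/I split; it is not the training procedure used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100
n_exc = 80                               # assumed 80% excitatory, 20% inhibitory

# Diagonal sign matrix: +1 for excitatory units, -1 for inhibitory units.
signs = np.diag(np.concatenate([np.ones(n_exc), -np.ones(N - n_exc)]))

# Unconstrained trainable parameters (would be updated by gradient descent).
W_free = rng.standard_normal((N, N)) / np.sqrt(N)

# Effective recurrent weights: rectify magnitudes, then apply the column signs,
# so each presynaptic unit sends either all-positive or all-negative weights.
W_eff = np.maximum(W_free, 0.0) @ signs

# Check Dale's principle: each unit's outgoing (column) weights share one sign.
assert (W_eff[:, :n_exc] >= 0).all() and (W_eff[:, n_exc:] <= 0).all()
print("Dale's principle satisfied:", W_eff.shape)
```

In practice, gradients are typically taken through the rectification on the unconstrained matrix, so the sign structure is preserved throughout training.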
Affiliation(s)
- Alfred Rajakumar: Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, USA
- John Rinzel: Courant Institute of Mathematical Sciences and Center for Neural Science, New York University, New York, NY 10012, USA
- Zhe S Chen: Department of Psychiatry and Neuroscience Institute, New York University School of Medicine, New York, NY 10016, USA