1. Wu S, Huang H, Wang S, Chen G, Zhou C, Yang D. Neural heterogeneity enhances reliable neural information processing: Local sensitivity and globally input-slaved transient dynamics. Sci Adv 2025; 11:eadr3903. PMID: 40173217; PMCID: PMC11963962; DOI: 10.1126/sciadv.adr3903.
Abstract
Cortical neuronal activity varies over time and across repeated trials, yet consistently represents stimulus features. The dynamical mechanism underlying this reliable representation and computation remains elusive. This study uncovers a mechanism for reliable neural information processing, leveraging a biologically plausible network model incorporating neural heterogeneity. First, we investigate neuronal timescale diversity, revealing that it disrupts intrinsic coherent spatiotemporal patterns, induces firing rate heterogeneity, enhances local responsive sensitivity, and aligns network activity closely with input. The system exhibits globally input-slaved transient dynamics, essential for reliable neural information processing. Other neural heterogeneities, such as nonuniform input connections, spike threshold heterogeneity, and network in-degree heterogeneity, play similar roles, highlighting the importance of neural heterogeneity in shaping consistent stimulus representation. This mechanism offers a potentially general framework for understanding neural heterogeneity in reliable computation and informs the design of reservoir computing models endowed with liquid wave reservoirs for neuromorphic computing.
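To make the timescale-diversity manipulation concrete, here is a minimal sketch (a toy construction under our own assumptions, not the paper's network): a random rate network is driven by a shared input from two different initial states, and response reliability is scored as the correlation between the two runs. The only change between conditions is whether the per-neuron time constants are uniform or spread over roughly 5 to 200 ms.

```python
# Toy illustration of heterogeneous neuronal timescales (not the paper's model):
# reliability = correlation of responses to the same input from two initial states.
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 200, 2000, 1e-3
J = rng.normal(0, 1.5 / np.sqrt(N), (N, N))   # random recurrent coupling
u = rng.normal(0, 1, (T, N))                  # shared time-varying input

def simulate(tau, x0):
    x = x0.copy()
    trace = np.empty((T, N))
    for t in range(T):
        x += dt / tau * (-x + J @ np.tanh(x) + u[t])
        trace[t] = x
    return trace

def reliability(tau):
    a = simulate(tau, rng.normal(0, 1, N))    # "trial" 1
    b = simulate(tau, rng.normal(0, 1, N))    # "trial" 2, different initial state
    return np.corrcoef(a[T // 2:].ravel(), b[T // 2:].ravel())[0, 1]

tau_hom = np.full(N, 20e-3)                   # homogeneous 20 ms time constants
tau_het = 10 ** rng.uniform(-2.3, -0.7, N)    # heterogeneous, ~5-200 ms
print("homogeneous reliability :", reliability(tau_hom))
print("heterogeneous reliability:", reliability(tau_het))
```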
Affiliation(s)
- Shengdun Wu: Research Centre for Frontier Fundamental Studies, Zhejiang Lab, Hangzhou 311100, China
- Haiping Huang: PMI Lab, School of Physics, Sun Yat-sen University, Guangzhou 510275, China
- Shengjun Wang: Department of Physics, Shaanxi Normal University, Xi’an 710119, China
- Guozhang Chen: National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, China
- Changsong Zhou: Department of Physics, Hong Kong Baptist University, Kowloon Tong, Hong Kong, China
- Dongping Yang: Research Centre for Frontier Fundamental Studies, Zhejiang Lab, Hangzhou 311100, China
2. Gosztolai A, Peach RL, Arnaudon A, Barahona M, Vandergheynst P. MARBLE: interpretable representations of neural population dynamics using geometric deep learning. Nat Methods 2025; 22:612-620. PMID: 39962310; PMCID: PMC11903309; DOI: 10.1038/s41592-024-02582-2.
Abstract
The dynamics of neuron populations commonly evolve on low-dimensional manifolds. Thus, we need methods that learn the dynamical processes over neural manifolds to infer interpretable and consistent latent representations. We introduce a representation learning method, MARBLE, which decomposes on-manifold dynamics into local flow fields and maps them into a common latent space using unsupervised geometric deep learning. In simulated nonlinear dynamical systems, recurrent neural networks and experimental single-neuron recordings from primates and rodents, we discover emergent low-dimensional latent representations that parametrize high-dimensional neural dynamics during gain modulation, decision-making and changes in the internal state. These representations are consistent across neural networks and animals, enabling the robust comparison of cognitive computations. Extensive benchmarking demonstrates state-of-the-art within- and across-animal decoding accuracy of MARBLE compared to current representation learning approaches, with minimal user input. Our results suggest that a manifold structure provides a powerful inductive bias to develop decoding algorithms and assimilate data across experiments.
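The core step of MARBLE, decomposing measured trajectories into local flow fields before embedding them, can be caricatured in a few lines. In the sketch below, finite-difference flow vectors over k-nearest-neighbor patches stand in for the on-manifold flow fields, and PCA stands in for the unsupervised geometric deep-learning embedding; all names and parameters are illustrative, not the authors' API.

```python
# Conceptual stand-in for MARBLE's local-flow-field decomposition.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.decomposition import PCA

def local_flow_features(X, k=10):
    """X: (T, d) trajectory samples. One feature vector per point, built
    from the flow vectors of its k nearest neighbors (a local flow patch)."""
    V = np.gradient(X, axis=0)                 # finite-difference flow field
    _, idx = NearestNeighbors(n_neighbors=k).fit(X).kneighbors(X)
    return V[idx].reshape(len(X), -1)          # stack neighboring flow vectors

rng = np.random.default_rng(1)
t = np.linspace(0, 6 * np.pi, 400)
# two noisy 2D systems: a rotation and a contraction toward a point
rot = np.c_[np.cos(t), np.sin(t)] + 0.02 * rng.normal(size=(400, 2))
contr = np.exp(-0.1 * t)[:, None] * np.c_[np.cos(t), np.sin(t)]

# map local flow patches from both systems into one common latent space
Z = PCA(n_components=3).fit_transform(
    np.vstack([local_flow_features(rot), local_flow_features(contr)]))
print(Z.shape)   # the two dynamical regimes separate along the leading PCs
```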
Affiliation(s)
- Adam Gosztolai: Institute of Artificial Intelligence, Medical University of Vienna, Vienna, Austria
- Robert L Peach: Department of Neurology, University Hospital Würzburg, Würzburg, Germany; Department of Brain Sciences, Imperial College London, London, UK
- Alexis Arnaudon: Blue Brain Project, EPFL, Campus Biotech, Geneva, Switzerland
3. Abe ETT, Brunton BW. TiDHy: Timescale Demixing via Hypernetworks to learn simultaneous dynamics from mixed observations. bioRxiv [preprint] 2025:2025.01.28.635316. PMID: 39974964; PMCID: PMC11838317; DOI: 10.1101/2025.01.28.635316.
Abstract
Neural activity and behavior arise from multiple concurrent time-varying systems, including neuromodulation, neural state, and history; however, most current approaches model these data as one set of dynamics with a single timescale. Here we develop Timescale Demixing via Hypernetworks (TiDHy) as a new computational method to model spatiotemporal data, decomposing them into multiple simultaneous latent dynamical systems that may span timescales differing by orders of magnitude. Specifically, we train a hypernetwork to dynamically reweight linear combinations of latent dynamics. This approach enables accurate data reconstruction, converges to true latent dynamics, and captures multiple timescales of variation. We first demonstrate that TiDHy can demix dynamics and timescales from synthetic data comprising multiple independent switching linear dynamical systems, even when the observations are mixed. Next, with a simulated locomotion behavior dataset, we show that TiDHy accurately captures both the fast dynamics of movement kinematics and the slow dynamics of changing terrains. Finally, in an open-source multi-animal social behavior dataset, we show that the keypoint trajectory dynamics extracted with TiDHy can be used to accurately identify social behaviors of multiple mice. Taken together, TiDHy is a powerful new algorithm for demixing simultaneous latent dynamical systems with applications to diverse computational domains.
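The hypernetwork construction at the heart of TiDHy can be sketched as follows (a toy stand-in under our own naming, not the authors' implementation): a bank of K linear dynamics operators A_k is mixed by time-varying weights w_t emitted by a small network, so the effective one-step dynamics are x_{t+1} = (sum_k w_{t,k} A_k) x_t.

```python
# Toy sketch of hypernetwork-reweighted linear dynamics (illustrative only).
import torch
import torch.nn as nn

class TinyHyperDynamics(nn.Module):
    def __init__(self, dim=8, n_ops=3, hidden=32):
        super().__init__()
        self.ops = nn.Parameter(0.1 * torch.randn(n_ops, dim, dim))  # A_k bank
        self.hyper = nn.Sequential(                 # emits mixture weights w_t
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_ops), nn.Softmax(dim=-1))

    def forward(self, x):
        w = self.hyper(x)                                  # (batch, n_ops)
        A = torch.einsum('bk,kij->bij', w, self.ops)       # state-dependent operator
        return torch.einsum('bij,bj->bi', A, x), w

model = TinyHyperDynamics()
x = torch.randn(16, 8)
x_next, w = model(x)
print(x_next.shape, w.shape)   # torch.Size([16, 8]) torch.Size([16, 3])
```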
Affiliation(s)
- Elliott T. T. Abe: Biology Department, University of Washington, Seattle, Washington, USA; eScience Institute, University of Washington, Seattle, Washington, USA; Computational Neuroscience Center, University of Washington, Seattle, Washington, USA
- Bingni W. Brunton: Biology Department, University of Washington, Seattle, Washington, USA; eScience Institute, University of Washington, Seattle, Washington, USA; Computational Neuroscience Center, University of Washington, Seattle, Washington, USA
4. Rajalingham R, Sohn H, Jazayeri M. Dynamic tracking of objects in the macaque dorsomedial frontal cortex. Nat Commun 2025; 16:346. PMID: 39746908; PMCID: PMC11696028; DOI: 10.1038/s41467-024-54688-y.
Abstract
A central tenet of cognitive neuroscience is that humans build an internal model of the external world and use mental simulation of the model to perform physical inferences. Decades of human experiments have shown that behaviors in many physical reasoning tasks are consistent with predictions from the mental simulation theory. However, evidence for the defining feature of mental simulation - that neural population dynamics reflect simulations of physical states in the environment - is limited. We test the mental simulation hypothesis by combining a naturalistic ball-interception task, large-scale electrophysiology in non-human primates, and recurrent neural network modeling. We find that neurons in the monkeys' dorsomedial frontal cortex (DMFC) represent task-relevant information about the ball position in a multiplexed fashion. At a population level, the activity pattern in DMFC comprises a low-dimensional neural embedding that tracks the ball both when it is visible and invisible, serving as a neural substrate for mental simulation. A systematic comparison of different classes of task-optimized RNN models with the DMFC data provides further evidence supporting the mental simulation hypothesis. Our findings provide evidence that neural dynamics in the frontal cortex are consistent with internal simulation of external states in the environment.
Affiliation(s)
- Rishi Rajalingham: McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Reality Labs, Meta, 390 9th Ave, New York, NY, USA
- Hansem Sohn: Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon, Republic of Korea; Department of Biomedical Engineering, Sungkyunkwan University (SKKU), Suwon, Republic of Korea
- Mehrdad Jazayeri: McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; Howard Hughes Medical Institute, Massachusetts Institute of Technology, Cambridge, MA, USA
5. Serrano-Fernández L, Beirán M, Romo R, Parga N. Representation of a perceptual bias in the prefrontal cortex. Proc Natl Acad Sci U S A 2024; 121:e2312831121. PMID: 39636858; DOI: 10.1073/pnas.2312831121.
Abstract
Perception is influenced by sensory stimulation, prior knowledge, and contextual cues, which collectively contribute to the emergence of perceptual biases. However, the precise neural mechanisms underlying these biases remain poorly understood. This study aims to address this gap by analyzing neural recordings from the prefrontal cortex (PFC) of monkeys performing a vibrotactile frequency discrimination task. Our findings provide empirical evidence supporting the hypothesis that perceptual biases can be reflected in the neural activity of the PFC. We found that the state-space trajectories of PFC neuronal activity encoded a warped representation of the first frequency presented during the task. Remarkably, this distorted representation of the frequency aligned with the predictions of its Bayesian estimator. The identification of these neural correlates expands our understanding of the neural basis of perceptual biases and highlights the involvement of the PFC in shaping perceptual experiences. Similar analyses could be employed in other delayed comparison tasks and in various brain regions to explore where and how neural activity reflects perceptual biases during different stages of the trial.
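A worked toy example shows the kind of Bayesian estimator the abstract refers to; the Gaussian prior and noise values below are assumptions for illustration, not the paper's fitted model. With a Gaussian prior and Gaussian sensory noise, the posterior mean is a precision-weighted average that contracts the estimate toward the prior mean, which is exactly the warped frequency representation described above.

```python
# Toy Gaussian Bayesian estimator producing a contraction bias (assumed values).
import numpy as np

mu_prior, sigma_prior = 22.0, 6.0   # prior over frequencies (Hz), assumed
sigma_noise = 4.0                   # sensory noise, assumed

def bayes_estimate(f_true):
    w = sigma_prior**2 / (sigma_prior**2 + sigma_noise**2)  # shrinkage weight
    return w * f_true + (1 - w) * mu_prior                  # posterior mean

for f in [10, 18, 26, 34]:
    print(f"f1 = {f:>2} Hz -> estimate {bayes_estimate(f):5.2f} Hz")
# low frequencies are overestimated, high ones underestimated: contraction bias
```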
Affiliation(s)
- Luis Serrano-Fernández: Departamento de Física Teórica, Universidad Autónoma de Madrid, 28049 Madrid, Spain; Centro de Investigación Avanzada en Física Fundamental, Universidad Autónoma de Madrid, 28049 Madrid, Spain
- Manuel Beirán: Center for Theoretical Neuroscience, Department of Neuroscience, Zuckerman Institute, Columbia University, New York, NY 10027, USA
- Néstor Parga: Departamento de Física Teórica, Universidad Autónoma de Madrid, 28049 Madrid, Spain; Centro de Investigación Avanzada en Física Fundamental, Universidad Autónoma de Madrid, 28049 Madrid, Spain
6. Cao R, Bright IM, Howard MW. Ramping cells in the rodent medial prefrontal cortex encode time to past and future events via real Laplace transform. Proc Natl Acad Sci U S A 2024; 121:e2404169121. PMID: 39254998; PMCID: PMC11420195; DOI: 10.1073/pnas.2404169121.
Abstract
In interval reproduction tasks, animals must remember the event starting the interval and anticipate the time of the planned response to terminate the interval. The interval reproduction task thus allows for studying both memory for the past and anticipation of the future. We analyzed previously published recordings from the rodent medial prefrontal cortex [J. Henke et al., eLife 10, e71612 (2021)] during an interval reproduction task and identified two cell groups by modeling their temporal receptive fields using hierarchical Bayesian models. The firing in the "past cells" group peaked at the start of the interval and relaxed exponentially back to baseline. The firing in the "future cells" group increased exponentially and peaked right before the planned action at the end of the interval. Contrary to the previous assumption that timing information in the brain has one or two time scales for a given interval, we found strong evidence for a continuous distribution of the exponential rate constants for both past and future cell populations. The real Laplace transformation of time predicts exponential firing with a continuous distribution of rate constants across the population. Therefore, the firing pattern of the past cells can be identified with the Laplace transform of time since the past event while the firing pattern of the future cells can be identified with the Laplace transform of time until the planned future event.
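In compact form (the notation here is ours, not taken from the abstract): a past cell with rate constant $s$ triggered by an event at time $t_0$, and a future cell anticipating a planned action at time $t_f$, fire as

$$
f_{\mathrm{past}}(t;s) = e^{-s\,(t - t_{0})}, \qquad
f_{\mathrm{future}}(t;s) = e^{-s\,(t_{f} - t)},
$$

so reading out the population across a continuum of rate constants $s$ yields exactly the real Laplace transform of a delta function placed at the elapsed (or remaining) time $\tau$:

$$
F(s) = \int_{0}^{\infty} \delta(\tau' - \tau)\, e^{-s \tau'}\, d\tau' = e^{-s \tau}.
$$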
Affiliation(s)
- Rui Cao: Department of Psychological and Brain Sciences, Boston University, Boston, MA 02215, USA
- Ian M. Bright: Department of Psychological and Brain Sciences, Boston University, Boston, MA 02215, USA
- Marc W. Howard: Department of Psychological and Brain Sciences, Boston University, Boston, MA 02215, USA
7. Gillett M, Brunel N. Dynamic control of sequential retrieval speed in networks with heterogeneous learning rules. eLife 2024; 12:RP88805. PMID: 39197099; PMCID: PMC11357343; DOI: 10.7554/elife.88805.
Abstract
Temporal rescaling of sequential neural activity has been observed in multiple brain areas during behaviors involving time estimation and motor execution at variable speeds. Temporally asymmetric Hebbian rules have been used in network models to learn and retrieve sequential activity, with characteristics that are qualitatively consistent with experimental observations. However, in these models sequential activity is retrieved at a fixed speed. Here, we investigate the effects of a heterogeneity of plasticity rules on network dynamics. In a model in which neurons differ by the degree of temporal symmetry of their plasticity rule, we find that retrieval speed can be controlled by varying external inputs to the network. Neurons with temporally symmetric plasticity rules act as brakes and tend to slow down the dynamics, while neurons with temporally asymmetric rules act as accelerators of the dynamics. We also find that such networks can naturally generate separate 'preparatory' and 'execution' activity patterns with appropriate external inputs.
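One way to see the brake/accelerator intuition is a toy mixed-rule network (illustrative parameters, not the paper's model): the temporally symmetric Hebbian term stores within-pattern attraction, the asymmetric term couples each pattern to its successor, and a per-neuron mixing coefficient sets each neuron's degree of temporal symmetry.

```python
# Toy sequence-retrieval network mixing symmetric (brake) and asymmetric
# (accelerator) Hebbian terms, weighted per postsynaptic neuron.
import numpy as np

rng = np.random.default_rng(2)
N, P = 300, 12
xi = (rng.random((P, N)) < 0.5).astype(float) * 2 - 1   # random +-1 patterns

sym = rng.random(N)                                     # per-neuron symmetry degree
J_sym = np.einsum('pi,pj->ij', xi, xi) / N              # within-pattern term
J_asym = np.einsum('pi,pj->ij', xi[1:], xi[:-1]) / N    # pattern p -> p+1 term
J = sym[:, None] * J_sym + (1 - sym[:, None]) * J_asym
np.fill_diagonal(J, 0)

x = xi[0] + 0.1 * rng.normal(size=N)                    # start near pattern 0
overlaps = []
for t in range(60):
    x = np.tanh(J @ x)                  # discrete-time retrieval dynamics
    overlaps.append(xi @ x / N)         # overlap with each stored pattern
print(np.argmax(overlaps, axis=1)[:20]) # index of dominant pattern over time
```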
Affiliation(s)
- Maxwell Gillett: Department of Neurobiology, Duke University, Durham, United States
- Nicolas Brunel: Department of Neurobiology, Duke University, Durham, United States; Department of Physics, Duke University, Durham, United States
8. Lin Z, Huang H. Spiking mode-based neural networks. Phys Rev E 2024; 110:024306. PMID: 39295018; DOI: 10.1103/physreve.110.024306.
Abstract
Spiking neural networks play an important role in brainlike neuromorphic computations and in studying working mechanisms of neural circuits. One drawback of training a large-scale spiking neural network is that updating all weights is quite expensive. Furthermore, after training, all information related to the computational task is hidden in the weight matrix, precluding a transparent understanding of circuit mechanisms. Therefore, in this work, we address these challenges by proposing a spiking mode-based training protocol, where the recurrent weight matrix is explained as a Hopfield-like multiplication of three matrices: input modes, output modes, and a score matrix. The first advantage is that the weight is interpreted by input and output modes and their associated scores characterizing the importance of each decomposition term. The number of modes is thus adjustable, allowing more degrees of freedom for modeling the experimental data. This significantly reduces the training cost, owing to the reduced space complexity of learning. Training spiking networks is thus carried out in the mode-score space. The second advantage is that one can project the high-dimensional neural activity (filtered spike train) in the state space onto the mode space, which is typically of a low dimension; e.g., a few modes are sufficient to capture the shape of the underlying neural manifolds. We successfully apply our framework to two computational tasks: digit classification and selective sensory integration. Our method thus accelerates the training of spiking neural networks by a Hopfield-like decomposition, and this training leads to low-dimensional attractor structures of high-dimensional neural dynamics.
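In notation assumed here (not necessarily the authors'), the parameterization reads W = U S Vᵀ, with P input modes U, P output modes V, and a P x P score matrix S, so training touches O(NP + P²) parameters instead of N²:

```python
# Sketch of the mode-based weight parameterization W = U S V^T (notation ours).
import numpy as np

rng = np.random.default_rng(3)
N, P = 500, 8                                # neurons, modes (P << N)
U = rng.normal(0, 1 / np.sqrt(N), (N, P))    # input modes
V = rng.normal(0, 1 / np.sqrt(N), (N, P))    # output modes
S = rng.normal(0, 1, (P, P))                 # scores: importance of each term

W = U @ S @ V.T                              # full weights, assembled on demand
print(W.shape, "trained params:", U.size + S.size + V.size, "vs full:", N * N)

r = np.tanh(rng.normal(size=N))              # rate proxy for a filtered spike train
kappa = V.T @ r                              # projection onto the P-dim mode space
print("low-dimensional latent:", kappa.shape)
```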
9. Costacurta JC, Bhandarkar S, Zoltowski DM, Linderman SW. Structured flexibility in recurrent neural networks via neuromodulation. bioRxiv [preprint] 2024:2024.07.26.605315. PMID: 39091788; PMCID: PMC11291173; DOI: 10.1101/2024.07.26.605315.
Abstract
The goal of theoretical neuroscience is to develop models that help us better understand biological intelligence. Such models range broadly in complexity and biological detail. For example, task-optimized recurrent neural networks (RNNs) have generated hypotheses about how the brain may perform various computations, but these models typically assume a fixed weight matrix representing the synaptic connectivity between neurons. From decades of neuroscience research, we know that synaptic weights are constantly changing, controlled in part by chemicals such as neuromodulators. In this work we explore the computational implications of synaptic gain scaling, a form of neuromodulation, using task-optimized low-rank RNNs. In our neuromodulated RNN (NM-RNN) model, a neuromodulatory subnetwork outputs a low-dimensional neuromodulatory signal that dynamically scales the low-rank recurrent weights of an output-generating RNN. In empirical experiments, we find that the structured flexibility in the NM-RNN allows it to both train and generalize with a higher degree of accuracy than low-rank RNNs on a set of canonical tasks. Additionally, via theoretical analyses we show how neuromodulatory gain scaling endows networks with gating mechanisms commonly found in artificial RNNs. We end by analyzing the low-rank dynamics of trained NM-RNNs, to show how task computations are distributed.
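A toy version of the NM-RNN wiring (shapes and names are illustrative, not the authors' code): a small neuromodulatory subnetwork emits a bounded signal s_t that rescales each rank-1 component of the output RNN's low-rank recurrent weights, W_t = L diag(s_t) Rᵀ.

```python
# Minimal sketch of neuromodulatory gain scaling of low-rank recurrent weights.
import torch
import torch.nn as nn

class NMRNNCell(nn.Module):
    def __init__(self, n=64, rank=4, n_mod=16):
        super().__init__()
        self.L = nn.Parameter(torch.randn(n, rank) / n**0.5)  # left factors
        self.R = nn.Parameter(torch.randn(n, rank) / n**0.5)  # right factors
        self.mod = nn.GRUCell(n, n_mod)                       # neuromodulatory subnet
        self.readout = nn.Linear(n_mod, rank)                 # gain per rank-1 term

    def forward(self, x, h_mod):
        h_mod = self.mod(x, h_mod)
        s = torch.sigmoid(self.readout(h_mod)).squeeze(0)     # s_t in (0,1)^rank
        # effective recurrence: W_t x = L diag(s_t) R^T x
        Wx = self.L @ (s * (self.R.T @ x.squeeze(0)))
        return torch.tanh(Wx), h_mod

cell = NMRNNCell()
x, h = torch.randn(1, 64), torch.zeros(1, 16)
y, h = cell(x, h)
print(y.shape)   # torch.Size([64])
```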
Affiliation(s)
- Julia C Costacurta: Wu Tsai Neurosciences Institute, Stanford, CA, USA; Department of Electrical Engineering, Stanford, CA, USA
- David M Zoltowski: Wu Tsai Neurosciences Institute, Stanford, CA, USA; Department of Statistics, Stanford University, Stanford, CA, USA
- Scott W Linderman: Wu Tsai Neurosciences Institute, Stanford, CA, USA; Department of Statistics, Stanford University, Stanford, CA, USA
10. Serrano-Fernández L, Beirán M, Parga N. Emergent perceptual biases from state-space geometry in trained spiking recurrent neural networks. Cell Rep 2024; 43:114412. PMID: 38968075; DOI: 10.1016/j.celrep.2024.114412.
Abstract
A stimulus held in working memory is perceived as contracted toward the average stimulus. This contraction bias has been extensively studied in psychophysics, but little is known about its origin from neural activity. By training recurrent networks of spiking neurons to discriminate temporal intervals, we explored the causes of this bias and how behavior relates to population firing activity. We found that the trained networks exhibited animal-like behavior. Various geometric features of neural trajectories in state space encoded warped representations of the durations of the first interval modulated by sensory history. Formulating a normative model, we showed that these representations conveyed a Bayesian estimate of the interval durations, thus relating activity and behavior. Importantly, our findings demonstrate that Bayesian computations already occur during the sensory phase of the first stimulus and persist throughout its maintenance in working memory, until the time of stimulus comparison.
Affiliation(s)
- Luis Serrano-Fernández: Departamento de Física Teórica, Universidad Autónoma de Madrid, 28049 Madrid, Spain; Centro de Investigación Avanzada en Física Fundamental, Universidad Autónoma de Madrid, 28049 Madrid, Spain
- Manuel Beirán: Center for Theoretical Neuroscience, Zuckerman Institute, Columbia University, New York, NY, USA
- Néstor Parga: Departamento de Física Teórica, Universidad Autónoma de Madrid, 28049 Madrid, Spain; Centro de Investigación Avanzada en Física Fundamental, Universidad Autónoma de Madrid, 28049 Madrid, Spain
11. Bayones L, Zainos A, Alvarez M, Romo R, Franci A, Rossi-Pool R. Orthogonality of sensory and contextual categorical dynamics embedded in a continuum of responses from the second somatosensory cortex. Proc Natl Acad Sci U S A 2024; 121:e2316765121. PMID: 38990946; PMCID: PMC11260089; DOI: 10.1073/pnas.2316765121.
Abstract
How does the brain simultaneously process signals that bring complementary information, like raw sensory signals and their transformed counterparts, without any disruptive interference? Contemporary research underscores the brain's adeptness at using decorrelated responses to reduce such interference. Both neurophysiological findings and artificial neural networks support the notion of orthogonal representation for signal differentiation and parallel processing. Yet where and how raw sensory signals are transformed into more abstract representations remains unclear. Using a temporal pattern discrimination task in trained monkeys, we revealed that the second somatosensory cortex (S2) efficiently segregates faithful and transformed neural responses into orthogonal subspaces. Importantly, S2 population encoding of transformed signals, but not of faithful ones, disappeared during a nondemanding version of this task, which suggests that signal transformation, and its decoding by downstream areas, is only active on demand. A mechanistic computational model points to gain modulation as a possible biological mechanism for the observed context-dependent computation. Furthermore, the individual neural activities that underlie the orthogonal population representations exhibited a continuum of responses, with no well-determined clusters. These findings suggest that the brain, while employing a continuum of heterogeneous neural responses, splits population signals into orthogonal subspaces in a context-dependent fashion to enhance robustness and performance and to improve coding efficiency.
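A generic way to quantify the orthogonal-subspace claim (a standard analysis sketch, not the paper's exact pipeline) is to estimate a low-dimensional subspace per signal class by PCA and compute the principal angles between them; angles near 90 degrees indicate orthogonal population representations.

```python
# Generic orthogonality check between two population subspaces (synthetic data).
import numpy as np
from scipy.linalg import subspace_angles
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
n_neurons, n_samples = 80, 500

def subspace(X, d=3):
    """Top-d principal axes of (samples x neurons) activity, as columns."""
    return PCA(n_components=d).fit(X).components_.T

latent_a = rng.normal(size=(n_samples, 3))       # "faithful" signal latents
latent_b = rng.normal(size=(n_samples, 3))       # "transformed" signal latents
Qa, _ = np.linalg.qr(rng.normal(size=(n_neurons, 3)))   # random loadings,
Qb, _ = np.linalg.qr(rng.normal(size=(n_neurons, 3)))   # nearly orthogonal by chance

X_a = latent_a @ Qa.T + 0.1 * rng.normal(size=(n_samples, n_neurons))
X_b = latent_b @ Qb.T + 0.1 * rng.normal(size=(n_samples, n_neurons))

angles = np.degrees(subspace_angles(subspace(X_a), subspace(X_b)))
print("principal angles (deg):", np.round(angles, 1))   # near 90 => orthogonal
```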
Affiliation(s)
- Lucas Bayones: Instituto de Fisiología Celular, Departamento de Neurociencia Cognitiva, Universidad Nacional Autónoma de México, Mexico City 04510, Mexico
- Antonio Zainos: Instituto de Fisiología Celular, Departamento de Neurociencia Cognitiva, Universidad Nacional Autónoma de México, Mexico City 04510, Mexico
- Manuel Alvarez: Instituto de Fisiología Celular, Departamento de Neurociencia Cognitiva, Universidad Nacional Autónoma de México, Mexico City 04510, Mexico
- Alessio Franci: Departamento de Matemáticas, Facultad de Ciencias, Universidad Nacional Autónoma de México, Mexico City 04510, Mexico; Montefiore Institute, University of Liège, Liège 4000, Belgium; Wallon ExceLlence (WEL) Research Institute, Wavre 1300, Belgium
- Román Rossi-Pool: Instituto de Fisiología Celular, Departamento de Neurociencia Cognitiva, Universidad Nacional Autónoma de México, Mexico City 04510, Mexico; Centro de Ciencias de la Complejidad, Universidad Nacional Autónoma de México, Mexico City 04510, Mexico
12. Ostojic S, Fusi S. Computational role of structure in neural activity and connectivity. Trends Cogn Sci 2024; 28:677-690. PMID: 38553340; DOI: 10.1016/j.tics.2024.03.003.
Abstract
One major challenge of neuroscience is identifying structure in seemingly disorganized neural activity. Different types of structure have different computational implications that can help neuroscientists understand the functional role of a particular brain area. Here, we outline a unified approach to characterize structure by inspecting the representational geometry and the modularity properties of the recorded activity and show that a similar approach can also reveal structure in connectivity. We start by setting up a general framework for determining geometry and modularity in activity and connectivity and relating these properties with computations performed by the network. We then use this framework to review the types of structure found in recent studies of model networks performing three classes of computations.
Affiliation(s)
- Srdjan Ostojic: Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL Research University, 75005 Paris, France
- Stefano Fusi: Center for Theoretical Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Department of Neuroscience, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA
13. Zhou S, Buonomano DV. Unified control of temporal and spatial scales of sensorimotor behavior through neuromodulation of short-term synaptic plasticity. Sci Adv 2024; 10:eadk7257. PMID: 38701208; DOI: 10.1126/sciadv.adk7257.
Abstract
Neuromodulators have been shown to alter the temporal profile of short-term synaptic plasticity (STP); however, the computational function of this neuromodulation remains unexplored. Here, we propose that the neuromodulation of STP provides a general mechanism to scale neural dynamics and motor outputs in time and space. We trained recurrent neural networks that incorporated STP to produce complex motor trajectories-handwritten digits-with different temporal (speed) and spatial (size) scales. Neuromodulation of STP produced temporal and spatial scaling of the learned dynamics and enhanced temporal or spatial generalization compared to standard training of the synaptic weights in the absence of STP. The model also accounted for the results of two experimental studies involving flexible sensorimotor timing. Neuromodulation of STP provides a unified and biologically plausible mechanism to control the temporal and spatial scales of neural dynamics and sensorimotor behaviors.
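The underlying STP mechanism is commonly written in Tsodyks-Markram form; the sketch below applies a neuromodulatory gain g to the depression and facilitation time constants (our illustrative parameterization; the paper's specific formulation may differ) and shows how the same spike train produces a temporally stretched efficacy profile.

```python
# Tsodyks-Markram style STP with a neuromodulatory gain g on its time constants.
import numpy as np

def stp_response(spike_times, U=0.2, tau_d=0.2, tau_f=1.5, g=1.0):
    """Synaptic efficacy u*x at each spike; g rescales the STP timescales."""
    tau_d, tau_f = g * tau_d, g * tau_f
    x, u, t_prev, out = 1.0, U, 0.0, []
    for t in spike_times:
        dt = t - t_prev
        x = 1 - (1 - x) * np.exp(-dt / tau_d)   # depression recovers between spikes
        u = U + (u - U) * np.exp(-dt / tau_f)   # facilitation decays between spikes
        out.append(u * x)                       # efficacy transmitted by this spike
        x *= (1 - u)                            # resources consumed by the spike
        u += U * (1 - u)                        # facilitation increment
        t_prev = t
    return np.array(out)

spikes = np.arange(0.0, 2.0, 0.1)               # 10 Hz spike train
print(np.round(stp_response(spikes, g=1.0), 3)) # baseline efficacy profile
print(np.round(stp_response(spikes, g=2.0), 3)) # neuromodulated: slower STP
```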
Affiliation(s)
- Shanglin Zhou: Institute for Translational Brain Research, Fudan University, Shanghai, China; State Key Laboratory of Medical Neurobiology, Fudan University, Shanghai, China; MOE Frontiers Center for Brain Science, Fudan University, Shanghai, China; Zhongshan Hospital, Fudan University, Shanghai, China
- Dean V Buonomano: Department of Neurobiology, University of California, Los Angeles, Los Angeles, CA, USA; Department of Psychology, University of California, Los Angeles, Los Angeles, CA, USA
14. Cao R, Bright IM, Howard MW. Ramping cells in rodent mPFC encode time to past and future events via real Laplace transform. bioRxiv [preprint] 2024:2024.02.13.580170. PMID: 38405896; PMCID: PMC10888827; DOI: 10.1101/2024.02.13.580170.
Abstract
In interval reproduction tasks, animals must remember the event starting the interval and anticipate the time of the planned response to terminate the interval. The interval reproduction task thus allows for studying both memory for the past and anticipation of the future. We analyzed previously published recordings from rodent mPFC (Henke et al., 2021) during an interval reproduction task and identified two cell groups by modeling their temporal receptive fields using hierarchical Bayesian models. The firing in the "past cells" group peaked at the start of the interval and relaxed exponentially back to baseline. The firing in the "future cells" group increased exponentially and peaked right before the planned action at the end of the interval. Contrary to the previous assumption that timing information in the brain has one or two time scales for a given interval, we found strong evidence for a continuous distribution of the exponential rate constants for both past and future cell populations. The real Laplace transformation of time predicts exponential firing with a continuous distribution of rate constants across the population. Therefore, the firing pattern of the past cells can be identified with the Laplace transform of time since the past event while the firing pattern of the future cells can be identified with the Laplace transform of time until the planned future event.
Affiliation(s)
- Rui Cao: Department of Psychological and Brain Sciences, Boston University
- Ian M Bright: Department of Psychological and Brain Sciences, Boston University
- Marc W Howard: Department of Psychological and Brain Sciences, Boston University
15. Gort J. Emergence of Universal Computations Through Neural Manifold Dynamics. Neural Comput 2024; 36:227-270. PMID: 38101328; DOI: 10.1162/neco_a_01631.
Abstract
There is growing evidence that many forms of neural computation may be implemented by low-dimensional dynamics unfolding at the population scale. However, neither the connectivity structure nor the general capabilities of these embedded dynamical processes are currently understood. In this work, the two most common formalisms of firing-rate models are evaluated using tools from analysis, topology, and nonlinear dynamics in order to address these questions. It is shown that low-rank structured connectivities predict the formation of invariant and globally attracting manifolds in all these models. Regarding the dynamics arising in these manifolds, it is proved that they are topologically equivalent across the considered formalisms. This letter also shows that under the low-rank hypothesis, the flows emerging in neural manifolds, including input-driven systems, are universal, which broadens previous findings. It explores how low-dimensional orbits can support the production of continuous sets of muscular trajectories, the implementation of central pattern generators, and the storage of memory states. These dynamics can robustly simulate any Turing machine over arbitrary bounded memory strings, virtually endowing rate models with the power of universal computation. In addition, the letter shows how the low-rank hypothesis predicts the parsimonious correlation structure observed in cortical activity. Finally, it discusses how this theory could provide a useful tool from which to study neuropsychological phenomena using mathematical methods.
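The low-rank mechanism behind these results can be stated compactly in standard low-rank RNN notation (assumed here rather than quoted from the letter). For a rate model

$$
\dot{x} = -x + J\,\phi(x) + I(t), \qquad J = \frac{1}{N} \sum_{r=1}^{R} m^{(r)} n^{(r)\top},
$$

the recurrent drive $J\phi(x)$ always lies in $\mathrm{span}\{m^{(1)},\dots,m^{(R)}\}$. Writing $x = \sum_r \kappa_r m^{(r)} + x_\perp$ gives $\dot{x}_\perp = -x_\perp + I_\perp(t)$, so the orthogonal component decays (or is slaved to the input) and trajectories converge to an $R$-dimensional invariant manifold governed by

$$
\dot{\kappa}_r = -\kappa_r + \frac{1}{N}\, n^{(r)\top} \phi\Big( \sum_{s} \kappa_s\, m^{(s)} + x_\perp \Big),
$$

which is the globally attracting low-dimensional flow whose universality the letter establishes.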
Affiliation(s)
- Joan Gort: Facultat de Psicologia, Universitat Autònoma de Barcelona, 08193 Bellaterra, Barcelona, Spain
16. Rolando F, Kononowicz TW, Duhamel JR, Doyère V, Wirth S. Distinct neural adaptations to time demand in the striatum and the hippocampus. Curr Biol 2024; 34:156-170.e7. PMID: 38141617; DOI: 10.1016/j.cub.2023.11.066.
Abstract
How do neural codes adjust to track time across a range of resolutions, from milliseconds to multiple seconds, as a function of the temporal frequency at which events occur? To address this question, we studied time-modulated cells in the striatum and the hippocampus while macaques categorized three nested intervals within the sub-second or the supra-second range (up to 1, 2, 4, or 8 s), thereby modifying the temporal resolution needed to solve the task. Time-modulated cells carried more information for intervals with explicit timing demand than for any other interval. The striatum, particularly the caudate, supported the most accurate temporal prediction throughout all time ranges. Strikingly, its temporal readout adjusted nonlinearly to the time range, suggesting that the striatal resolution shifted from a precise millisecond scale to a coarse multi-second scale as a function of demand. This is in line with the monkeys' behavioral latencies, which indicated that they tracked time up to 2 s but employed a coarse categorization strategy for longer durations. By contrast, the hippocampus discriminated only the beginning from the end of intervals, regardless of the range. We propose that the hippocampus may provide only a coarse signal marking an event's beginning, whereas the striatum optimizes neural resources to process time throughout an interval, adapting to the ongoing timing demand.
Affiliation(s)
- Felipe Rolando: Institut des Sciences Cognitives Marc Jeannerod, CNRS, Université Lyon 1, 67 boulevard Pinel, 69500 Bron, France
- Tadeusz W Kononowicz: Institut des Sciences Cognitives Marc Jeannerod, CNRS, Université Lyon 1, 67 boulevard Pinel, 69500 Bron, France; Université Paris-Saclay, CNRS, Institut des Neurosciences Paris-Saclay (NeuroPSI), 91400 Saclay, France; Institute of Psychology, The Polish Academy of Sciences, ul. Jaracza 1, 00-378 Warsaw, Poland
- Jean-René Duhamel: Institut des Sciences Cognitives Marc Jeannerod, CNRS, Université Lyon 1, 67 boulevard Pinel, 69500 Bron, France
- Valérie Doyère: Université Paris-Saclay, CNRS, Institut des Neurosciences Paris-Saclay (NeuroPSI), 91400 Saclay, France
- Sylvia Wirth: Institut des Sciences Cognitives Marc Jeannerod, CNRS, Université Lyon 1, 67 boulevard Pinel, 69500 Bron, France
17. Durstewitz D, Koppe G, Thurm MI. Reconstructing computational system dynamics from neural data with recurrent neural networks. Nat Rev Neurosci 2023; 24:693-710. PMID: 37794121; DOI: 10.1038/s41583-023-00740-7.
Abstract
Computational models in neuroscience usually take the form of systems of differential equations. The behaviour of such systems is the subject of dynamical systems theory. Dynamical systems theory provides a powerful mathematical toolbox for analysing neurobiological processes and has been a mainstay of computational neuroscience for decades. Recently, recurrent neural networks (RNNs) have become a popular machine learning tool for studying the non-linear dynamics of neural and behavioural processes by emulating an underlying system of differential equations. RNNs have been routinely trained on similar behavioural tasks to those used for animal subjects to generate hypotheses about the underlying computational mechanisms. By contrast, RNNs can also be trained on the measured physiological and behavioural data, thereby directly inheriting their temporal and geometrical properties. In this way they become a formal surrogate for the experimentally probed system that can be further analysed, perturbed and simulated. This powerful approach is called dynamical system reconstruction. In this Perspective, we focus on recent trends in artificial intelligence and machine learning in this exciting and rapidly expanding field, which may be less well known in neuroscience. We discuss formal prerequisites, different model architectures and training approaches for RNN-based dynamical system reconstructions, ways to evaluate and validate model performance, how to interpret trained models in a neuroscience context, and current challenges.
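The generic dynamical system reconstruction recipe the Perspective discusses can be condensed to a few lines (a bare-bones illustration, not any specific method it reviews): train an RNN for one-step-ahead prediction on the measured time series with teacher forcing, then run it autonomously as a surrogate of the probed system.

```python
# Bare-bones dynamical system reconstruction: one-step prediction, then free run.
import torch
import torch.nn as nn

# toy "measured data": a noisy oscillation, shape (time, features)
t = torch.linspace(0, 20, 400)
data = torch.stack([torch.sin(t), torch.cos(t)], dim=1) + 0.02 * torch.randn(400, 2)

rnn = nn.RNN(input_size=2, hidden_size=32, batch_first=True)
head = nn.Linear(32, 2)
opt = torch.optim.Adam([*rnn.parameters(), *head.parameters()], lr=1e-2)

x, y = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)   # teacher-forced pairs
for epoch in range(300):
    opt.zero_grad()
    h_seq, _ = rnn(x)
    loss = ((head(h_seq) - y) ** 2).mean()
    loss.backward()
    opt.step()

# autonomous generation: feed the model's own predictions back as input
xt, h = data[:1].reshape(1, 1, 2), None
free_run = []
for _ in range(100):
    h_seq, h = rnn(xt, h)
    xt = head(h_seq)
    free_run.append(xt.squeeze().detach())
print(torch.stack(free_run)[:5])   # should continue the learned oscillation
```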
Affiliation(s)
- Daniel Durstewitz: Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany; Interdisciplinary Center for Scientific Computing, Heidelberg University, Heidelberg, Germany; Faculty of Physics and Astronomy, Heidelberg University, Heidelberg, Germany
- Georgia Koppe: Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany; Dept. of Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany; Hector Institute for Artificial Intelligence in Psychiatry, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Max Ingo Thurm: Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
18. Kawai Y, Park J, Tsuda I, Asada M. Learning long-term motor timing/patterns on an orthogonal basis in random neural networks. Neural Netw 2023; 163:298-311. PMID: 37087852; DOI: 10.1016/j.neunet.2023.04.006.
Abstract
The ability of the brain to generate complex spatiotemporal patterns with specific timings is essential for motor learning and temporal processing. Approaches that model this function using the spontaneous activity of a random neural network (RNN) are hampered by orbital instability. We propose a simple system that learns an arbitrary time series as the linear sum of stable trajectories produced by several small network modules. A new finding from our computer experiments is that the trajectories of the module outputs are orthogonal to each other. Together they create a dynamic orthogonal basis with high representational capacity, which enabled the system to learn the timing of extremely long intervals, such as tens of seconds for a millisecond computation unit, as well as the complex time series of Lorenz attractors. This self-sustained system satisfies the stability and orthogonality requirements and thus provides a new neurocomputing framework and a perspective on the neural mechanisms of motor learning.
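A toy rendering of the module-sum scheme (scale and parameters are ours, not the paper's): several small, stable random reservoirs run side by side, each weakly driven at a different period, and the target time series is learned as a ridge-regression readout over their concatenated outputs.

```python
# Toy module-sum reservoir: stable modules + linear readout (ridge regression).
import numpy as np

rng = np.random.default_rng(5)
T, n_mod, n = 3000, 8, 40

def run_module(seed):
    r = np.random.default_rng(seed)
    W = r.normal(0, 0.9 / np.sqrt(n), (n, n))   # spectral scale < 1: stable orbit
    x = r.normal(0, 1, n)
    out = np.empty((T, n))
    for t in range(T):
        # weak periodic drive, different period per module
        x = np.tanh(W @ x + 0.1 * np.sin(2 * np.pi * t / (50 * (seed + 1))))
        out[t] = x
    return out

X = np.hstack([run_module(s) for s in range(n_mod)])    # concatenated basis
target = np.sin(2 * np.pi * np.arange(T) / 1200)         # long-period timing signal
w = np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ target)
print("readout MSE:", np.mean((X @ w - target) ** 2))
```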
Affiliation(s)
- Yuji Kawai: Symbiotic Intelligent Systems Research Center, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, 1-1 Yamadaoka, Suita, Osaka 565-0871, Japan
- Jihoon Park: Symbiotic Intelligent Systems Research Center, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, 1-1 Yamadaoka, Suita, Osaka 565-0871, Japan; Center for Information and Neural Networks, National Institute of Information and Communications Technology, 1-4 Yamadaoka, Suita, Osaka 565-0871, Japan
- Ichiro Tsuda: Chubu University Academy of Emerging Sciences/Center for Mathematical Science and Artificial Intelligence, Chubu University, 1200 Matsumoto-cho, Kasugai, Aichi 487-8501, Japan
- Minoru Asada: Symbiotic Intelligent Systems Research Center, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, 1-1 Yamadaoka, Suita, Osaka 565-0871, Japan; Center for Information and Neural Networks, National Institute of Information and Communications Technology, 1-4 Yamadaoka, Suita, Osaka 565-0871, Japan; Chubu University Academy of Emerging Sciences/Center for Mathematical Science and Artificial Intelligence, Chubu University, 1200 Matsumoto-cho, Kasugai, Aichi 487-8501, Japan; International Professional University of Technology in Osaka, 3-3-1 Umeda, Kita-ku, Osaka 530-0001, Japan