1
Mattera A, Alfieri V, Granato G, Baldassarre G. Chaotic recurrent neural networks for brain modelling: A review. Neural Netw 2025; 184:107079. PMID: 39756119. DOI: 10.1016/j.neunet.2024.107079.
Abstract
Even in the absence of external stimuli, the brain is spontaneously active. Indeed, most cortical activity is internally generated by recurrence. Both theoretical and experimental studies suggest that chaotic dynamics characterize this spontaneous activity. While the precise function of brain chaotic activity is still puzzling, we know that chaos confers many advantages. From a computational perspective, chaos enhances the complexity of network dynamics. From a behavioural point of view, chaotic activity could generate the variability required for exploration. Furthermore, information storage and transfer are maximized at the critical border between order and chaos. Despite these benefits, many computational brain models avoid incorporating spontaneous chaotic activity due to the challenges it poses for learning algorithms. In recent years, however, multiple approaches have been proposed to overcome this limitation. As a result, many different algorithms have been developed, initially within the reservoir computing paradigm. Over time, the field has evolved to increase the biological plausibility and performance of the algorithms, sometimes going beyond the reservoir computing framework. In this review article, we examine the computational benefits of chaos and the unique properties of chaotic recurrent neural networks, with a particular focus on those typically utilized in reservoir computing. We also provide a detailed analysis of the algorithms designed to train chaotic RNNs, tracing their historical evolution and highlighting key milestones in their development. Finally, we explore the applications and limitations of chaotic RNNs for brain modelling, consider their potential broader impacts beyond neuroscience, and outline promising directions for future research.
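As an illustration of the chaotic recurrent networks this review surveys, here is a minimal sketch of the classic random rate network, in which the coupling gain `g` controls the transition to chaos. All names and parameter values are illustrative choices, not taken from the review:

```python
import numpy as np

def simulate_rate_rnn(n=200, g=1.5, t_steps=2000, dt=0.1, tau=1.0, seed=0):
    """Random rate RNN; coupling gain g > 1 typically yields chaotic activity."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, g / np.sqrt(n), size=(n, n))  # couplings, variance g^2 / n
    x = rng.normal(0.0, 0.5, size=n)                  # internal state
    traj = np.empty((t_steps, n))
    for t in range(t_steps):
        r = np.tanh(x)                                # firing rates
        x = x + dt / tau * (-x + W @ r)               # leaky recurrent integration
        traj[t] = r
    return traj

activity = simulate_rate_rnn()        # supercritical gain: self-sustained, irregular
quiescent = simulate_rate_rnn(g=0.5)  # subcritical gain: activity decays to rest
```

With no external input, the supercritical network generates its own irregular, spontaneous activity, which is the regime whose training algorithms the review examines.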
Affiliation(s)
- Andrea Mattera
- Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy
- Valerio Alfieri
- Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy; International School of Advanced Studies, Center for Neuroscience, University of Camerino, Via Gentile III Da Varano, 62032, Camerino, Italy
- Giovanni Granato
- Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy
- Gianluca Baldassarre
- Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy
2
Dernoncourt F, Avrillon S, Logtens T, Cattagni T, Farina D, Hug F. Flexible control of motor units: is the multidimensionality of motor unit manifolds a sufficient condition? J Physiol 2025; 603:2349-2368. PMID: 39964831. PMCID: PMC12013786. DOI: 10.1113/jp287857.
Abstract
Understanding flexibility in the neural control of movement requires identifying the distribution of common inputs to the motor units. In this study, we identified large samples of motor units from two lower limb muscles: the vastus lateralis (VL; up to 60 motor units per participant) and the gastrocnemius medialis (GM; up to 67 motor units per participant). First, we applied a linear dimensionality reduction method to assess the dimensionality of the manifolds underlying the motor unit activity. We subsequently investigated the flexibility in motor unit control under two conditions: sinusoidal contractions with torque feedback, and online control with visual feedback on motor unit firing rates. Overall, we found that the activity of GM motor units was effectively captured by a single latent factor defining a unidimensional manifold, whereas the VL motor units were better represented by three latent factors defining a multidimensional manifold. Despite this difference in dimensionality, the recruitment of motor units in the two muscles exhibited similarly low levels of flexibility. Using a spiking network model, we tested the hypothesis that dimensionality derived from factorization does not solely represent descending cortical commands but is also influenced by spinal circuitry. We demonstrated that a heterogeneous distribution of inputs to motor units, or specific configurations of recurrent inhibitory circuits, could produce a multidimensional manifold. This study clarifies an important debated issue, demonstrating that while the motor unit firings of a non-compartmentalized muscle can lie in a multidimensional manifold, the CNS may still have limited capacity for flexible control of these units.
KEY POINTS: To generate movement, the CNS distributes both excitatory and inhibitory inputs to the motor units. The level of flexibility in the neural control of these motor units remains a topic of debate, with significant implications for identifying the smallest unit of movement control. By combining experimental data and in silico models, we demonstrated that the activity of a large sample of motor units from a single muscle can be represented by a multidimensional linear manifold; however, these units show very limited flexibility in their recruitment. The dimensionality of the linear manifold may not directly reflect the dimensionality of descending inputs but could instead relate to the organization of local spinal circuits.
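The kind of linear dimensionality estimate at the heart of this study can be sketched with a generic variance-explained criterion applied to synthetic firing rates. This is only an illustration of the idea, not the authors' factorization pipeline, and the threshold and all names are arbitrary:

```python
import numpy as np

def manifold_dim(rates, var_threshold=0.8):
    """Smallest number of principal components explaining var_threshold of the variance."""
    centered = rates - rates.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)  # singular values
    var = s**2 / np.sum(s**2)                      # variance explained per component
    return int(np.searchsorted(np.cumsum(var), var_threshold) + 1)

# Synthetic example: 60 "motor units" driven by a single common drive
rng = np.random.default_rng(1)
drive = np.sin(np.linspace(0, 4 * np.pi, 500))     # one latent factor
gains = rng.uniform(0.5, 1.5, size=60)             # per-unit gain on the drive
rates = np.outer(drive, gains) + 0.05 * rng.normal(size=(500, 60))
dim = manifold_dim(rates)                          # one component dominates
```

A single shared drive yields a unidimensional manifold, as reported for the GM; adding independent latent factors, or structured inhibition as in the authors' spiking model, raises the estimated dimensionality.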
Affiliation(s)
- Simon Avrillon
- Université Côte d'Azur, LAMHESS, Nice, France
- Department of Bioengineering, Faculty of Engineering, Imperial College London, London, UK
- Thomas Cattagni
- Nantes Université, Laboratory 'Movement, Interactions, Performance' (UR 4334), Nantes, France
- Dario Farina
- Department of Bioengineering, Faculty of Engineering, Imperial College London, London, UK
- François Hug
- Université Côte d'Azur, LAMHESS, Nice, France
- The University of Queensland, School of Biomedical Sciences, Brisbane, Queensland, Australia
3
Perkins SM, Amematsro EA, Cunningham J, Wang Q, Churchland MM. An emerging view of neural geometry in motor cortex supports high-performance decoding. eLife 2025; 12:RP89421. PMID: 39898793. PMCID: PMC11790250. DOI: 10.7554/elife.89421.
Abstract
Decoders for brain-computer interfaces (BCIs) assume constraints on neural activity, chosen to reflect scientific beliefs while yielding tractable computations. Recent scientific advances suggest that the true constraints on neural activity, especially its geometry, may be quite different from those assumed by most decoders. We designed a decoder, MINT, to embrace statistical constraints that are potentially more appropriate. If those constraints are accurate, MINT should outperform standard methods that explicitly make different assumptions. Additionally, MINT should be competitive with expressive machine learning methods that can implicitly learn constraints from data. MINT performed well across tasks, suggesting its assumptions are well-matched to the data. MINT outperformed other interpretable methods in every comparison we made. MINT outperformed expressive machine learning methods in 37 of 42 comparisons. MINT's computations are simple, scale favorably with increasing neuron counts, and yield interpretable quantities such as data likelihoods. MINT's performance and simplicity suggest it may be a strong candidate for many BCI applications.
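The core idea here, decoding by comparing observed activity against a library of canonical neural trajectories, can be conveyed with a toy nearest-trajectory matcher. This is not the authors' MINT algorithm (which works with spike likelihoods and interpolation); every name and number below is illustrative:

```python
import numpy as np

def decode_by_trajectory_match(obs, library, behaviors):
    """Return the condition and behavior of the best-matching library trajectory.

    obs:       (T, N) observed neural activity
    library:   dict mapping condition -> (T, N) canonical neural trajectory
    behaviors: dict mapping condition -> (T, K) behavior tied to that trajectory
    """
    best = min(library, key=lambda c: np.sum((obs - library[c]) ** 2))
    return best, behaviors[best]

# Two-condition toy: a noisy observation of the "right" trajectory
T, N = 50, 10
rng = np.random.default_rng(0)
lib = {"left": rng.normal(size=(T, N)), "right": rng.normal(size=(T, N))}
beh = {"left": -np.ones((T, 1)), "right": np.ones((T, 1))}
obs = lib["right"] + 0.1 * rng.normal(size=(T, N))
cond, decoded = decode_by_trajectory_match(obs, lib, beh)
```

The statistical constraint being embraced is that neural activity lives near a sparse set of stereotyped trajectories, so decoding reduces to identifying which trajectory, and where along it, the observation lies.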
Affiliation(s)
- Sean M Perkins
- Department of Biomedical Engineering, Columbia University, New York, United States
- Zuckerman Institute, Columbia University, New York, United States
- Elom A Amematsro
- Zuckerman Institute, Columbia University, New York, United States
- Department of Neuroscience, Columbia University Medical Center, New York, United States
- John Cunningham
- Zuckerman Institute, Columbia University, New York, United States
- Department of Statistics, Columbia University, New York, United States
- Center for Theoretical Neuroscience, Columbia University Medical Center, New York, United States
- Grossman Center for the Statistics of Mind, Columbia University, New York, United States
- Qi Wang
- Department of Biomedical Engineering, Columbia University, New York, United States
- Mark M Churchland
- Zuckerman Institute, Columbia University, New York, United States
- Department of Neuroscience, Columbia University Medical Center, New York, United States
- Grossman Center for the Statistics of Mind, Columbia University, New York, United States
- Kavli Institute for Brain Science, Columbia University Medical Center, New York, United States
4
Schmutz V, Brea J, Gerstner W. Emergent Rate-Based Dynamics in Duplicate-Free Populations of Spiking Neurons. Phys Rev Lett 2025; 134:018401. PMID: 39913719. DOI: 10.1103/physrevlett.134.018401.
Abstract
Can spiking neural networks (SNNs) approximate the dynamics of recurrent neural networks? Arguments in classical mean-field theory based on laws of large numbers provide a positive answer when each neuron in the network has many "duplicates", i.e., other neurons with almost perfectly correlated inputs. Using a disordered network model that guarantees the absence of duplicates, we show that duplicate-free SNNs can converge to recurrent neural networks, thanks to the concentration of measure phenomenon. This result reveals a general mechanism underlying the emergence of rate-based dynamics in large SNNs.
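The law-of-large-numbers intuition behind this result can be illustrated with independent noisy leaky integrate-and-fire neurons: single spike trains are erratic, but the population-averaged activity concentrates around a smooth rate. This is only a hedged toy with illustrative parameters; the paper's duplicate-free disordered network and concentration-of-measure argument are far richer:

```python
import numpy as np

def lif_population(n=500, t_steps=5000, dt=0.1, tau=10.0, v_th=1.0,
                   mu=0.15, sigma=0.5, seed=0):
    """Independent noisy LIF neurons; returns a (t_steps, n) 0/1 spike array."""
    rng = np.random.default_rng(seed)
    v = np.zeros(n)
    spikes = np.zeros((t_steps, n))
    for t in range(t_steps):
        # Euler step of tau dv/dt = -v + mu*tau, plus white noise
        v += dt / tau * (-v + mu * tau) + sigma * np.sqrt(dt) * rng.normal(size=n)
        fired = v >= v_th
        spikes[t] = fired
        v[fired] = 0.0                    # reset after a spike
    return spikes

spikes = lif_population()
pop_rate = spikes.mean(axis=1)            # population-averaged activity per bin
single = spikes[:, 0]                     # one neuron's erratic spike train
```

The per-bin population rate fluctuates far less than any single spike train, which is the kind of concentration that lets large spiking populations behave like rate units.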
Affiliation(s)
- Valentin Schmutz
- École Polytechnique Fédérale de Lausanne, School of Life Sciences and School of Computer and Communication Sciences, 1015 Lausanne, Switzerland
- University College London, UCL Queen Square Institute of Neurology, WC1E 6BT London, United Kingdom
- Johanni Brea
- École Polytechnique Fédérale de Lausanne, School of Life Sciences and School of Computer and Communication Sciences, 1015 Lausanne, Switzerland
- Wulfram Gerstner
- École Polytechnique Fédérale de Lausanne, School of Life Sciences and School of Computer and Communication Sciences, 1015 Lausanne, Switzerland
5
Serrano-Fernández L, Beirán M, Romo R, Parga N. Representation of a perceptual bias in the prefrontal cortex. Proc Natl Acad Sci U S A 2024; 121:e2312831121. PMID: 39636858. DOI: 10.1073/pnas.2312831121.
Abstract
Perception is influenced by sensory stimulation, prior knowledge, and contextual cues, which collectively contribute to the emergence of perceptual biases. However, the precise neural mechanisms underlying these biases remain poorly understood. This study aims to address this gap by analyzing neural recordings from the prefrontal cortex (PFC) of monkeys performing a vibrotactile frequency discrimination task. Our findings provide empirical evidence supporting the hypothesis that perceptual biases can be reflected in the neural activity of the PFC. We found that the state-space trajectories of PFC neuronal activity encoded a warped representation of the first frequency presented during the task. Remarkably, this distorted representation of the frequency aligned with the predictions of its Bayesian estimator. The identification of these neural correlates expands our understanding of the neural basis of perceptual biases and highlights the involvement of the PFC in shaping perceptual experiences. Similar analyses could be employed in other delayed comparison tasks and in various brain regions to explore where and how neural activity reflects perceptual biases during different stages of the trial.
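The Bayesian-estimator prediction tested here, estimates of the first frequency "warped" toward the stimulus distribution's mean, follows from a simple posterior-mean computation. A Gaussian toy model with illustrative numbers:

```python
import numpy as np

def bayesian_estimate(f_obs, prior_mean, prior_var, noise_var):
    """Posterior mean under a Gaussian prior and Gaussian sensory noise."""
    w = prior_var / (prior_var + noise_var)   # weight on the noisy observation
    return w * f_obs + (1 - w) * prior_mean   # pulled toward the prior mean

# A 30 Hz stimulus is "warped" toward a 22 Hz prior mean: 0.8*30 + 0.2*22 = 28.4
estimate = bayesian_estimate(30.0, prior_mean=22.0, prior_var=16.0, noise_var=4.0)
```

The noisier the sensory evidence relative to the prior, the stronger the contraction toward the prior mean, which is the signature the authors report in the warped PFC state-space trajectories.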
Affiliation(s)
- Luis Serrano-Fernández
- Departamento de Física Teórica, Universidad Autónoma de Madrid, 28049 Madrid, Spain
- Centro de Investigación Avanzada en Física Fundamental, Universidad Autónoma de Madrid, 28049 Madrid, Spain
- Manuel Beirán
- Center for Theoretical Neuroscience, Department of Neuroscience, Zuckerman Institute, Columbia University, New York, NY 10027
- Néstor Parga
- Departamento de Física Teórica, Universidad Autónoma de Madrid, 28049 Madrid, Spain
- Centro de Investigación Avanzada en Física Fundamental, Universidad Autónoma de Madrid, 28049 Madrid, Spain
6
Lacal I, Das A, Logiaco L, Molano-Mazón M, Schwaner MJ, Trach JE. Emerging perspectives for the study of the neural basis of motor behaviour. Eur J Neurosci 2024; 60:6342-6356. PMID: 39364639. DOI: 10.1111/ejn.16553.
Abstract
The 33rd Annual Meeting of the Society for the Neural Control of Movement (NCM) brought together over 500 experts to discuss recent advancements in motor control. This article highlights key topics from the conference, including the foundational mechanisms of motor control, the ongoing debate over the context-dependency of feedforward and feedback processes, and the interplay between motor and cognitive functions in learning, memory, and decision-making. It also presents innovative methods for studying movement in complex, real-world environments.
Affiliation(s)
- Irene Lacal
- Sensorimotor Group, German Primate Center, Göttingen, Germany
- Leibniz ScienceCampus Primate Cognition, Göttingen, Germany
- Anwesha Das
- Faculty of Medicine, Otto-von-Guericke University Magdeburg, Magdeburg, Germany
- Department of Behavioral Neurology, Leibniz Institute for Neurobiology, Magdeburg, Germany
- Laureline Logiaco
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Boston, Massachusetts, USA
- M Janneke Schwaner
- Department of Movement Sciences, Katholieke Universiteit Leuven, Leuven, Belgium
- Juliana E Trach
- Department of Psychology, Yale University, New Haven, Connecticut, USA
7
Mathis MW, Perez Rotondo A, Chang EF, Tolias AS, Mathis A. Decoding the brain: From neural representations to mechanistic models. Cell 2024; 187:5814-5832. PMID: 39423801. PMCID: PMC11637322. DOI: 10.1016/j.cell.2024.08.051.
Abstract
A central principle in neuroscience is that neurons within the brain act in concert to produce perception, cognition, and adaptive behavior. Neurons are organized into specialized brain areas, dedicated to different functions to varying extents, and their function relies on distributed circuits to continuously encode relevant environmental and body-state features, enabling other areas to decode (interpret) these representations for computing meaningful decisions and executing precise movements. Thus, the distributed brain can be thought of as a series of computations that act to encode and decode information. In this perspective, we detail important concepts of neural encoding and decoding and highlight the mathematical tools used to measure them, including deep learning methods. We provide case studies where decoding concepts enable foundational and translational science in motor, visual, and language processing.
Affiliation(s)
- Mackenzie Weygandt Mathis
- Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland; Neuro-X Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
- Adriana Perez Rotondo
- Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland; Neuro-X Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
- Edward F Chang
- Department of Neurological Surgery, UCSF, San Francisco, CA, USA
- Andreas S Tolias
- Department of Ophthalmology, Byers Eye Institute, Stanford University, Stanford, CA, USA; Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Stanford BioX, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Alexander Mathis
- Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland; Neuro-X Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
8
Wu S, Huang C, Snyder AC, Smith MA, Doiron B, Yu BM. Automated customization of large-scale spiking network models to neuronal population activity. Nat Comput Sci 2024; 4:690-705. PMID: 39285002. PMCID: PMC12047676. DOI: 10.1038/s43588-024-00688-3.
Abstract
Understanding brain function is facilitated by constructing computational models that accurately reproduce aspects of brain activity. Networks of spiking neurons capture the underlying biophysics of neuronal circuits, yet their activity's dependence on model parameters is notoriously complex. As a result, heuristic methods have been used to configure spiking network models, which can lead to an inability to discover activity regimes complex enough to match large-scale neuronal recordings. Here we propose an automatic procedure, Spiking Network Optimization using Population Statistics (SNOPS), to customize spiking network models that reproduce the population-wide covariability of large-scale neuronal recordings. We first confirmed that SNOPS accurately recovers simulated neural activity statistics. Then, we applied SNOPS to recordings in macaque visual and prefrontal cortices and discovered previously unknown limitations of spiking network models. Taken together, SNOPS can guide the development of network models, thereby enabling deeper insight into how networks of neurons give rise to brain function.
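The customization loop described here (simulate, summarize activity statistics, adjust parameters until the statistics match a target) can be caricatured in a few lines. This is a toy grid search with a stand-in statistic; the actual SNOPS procedure is far more sophisticated, and every name and number below is illustrative:

```python
import numpy as np

def activity_stat(coupling, seed=0):
    """Stand-in for a full spiking simulation: one summary statistic per parameter."""
    rng = np.random.default_rng(seed)
    return np.tanh(coupling) + 0.005 * rng.normal()   # statistic + sampling noise

def fit_by_search(target, candidates, seed=0):
    """Pick the parameter whose simulated statistic best matches the target."""
    errs = [abs(activity_stat(c, seed) - target) for c in candidates]
    return candidates[int(np.argmin(errs))]

grid = np.linspace(0.0, 2.0, 41)                      # candidate coupling values
best = fit_by_search(target=np.tanh(1.0), candidates=grid)
```

In a real application the statistic would be a vector of population-wide covariability measures from recordings, the "simulation" an expensive spiking network run, and the search a sample-efficient optimizer rather than a grid.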
Affiliation(s)
- Shenghao Wu
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA
- Neural Basis of Cognition, Pittsburgh, PA, USA
- Chengcheng Huang
- Neural Basis of Cognition, Pittsburgh, PA, USA
- Department of Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA
- Adam C Snyder
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Matthew A Smith
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Neural Basis of Cognition, Pittsburgh, PA, USA
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Brent Doiron
- Department of Statistics, University of Chicago, Chicago, IL, USA
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL, USA
- Byron M Yu
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Neural Basis of Cognition, Pittsburgh, PA, USA
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
9
Colins Rodriguez A, Perich MG, Miller LE, Humphries MD. Motor Cortex Latent Dynamics Encode Spatial and Temporal Arm Movement Parameters Independently. J Neurosci 2024; 44:e1777232024. PMID: 39060178. PMCID: PMC11358606. DOI: 10.1523/jneurosci.1777-23.2024.
Abstract
The fluid movement of an arm requires multiple spatiotemporal parameters to be set independently. Recent studies have argued that arm movements are generated by the collective dynamics of neurons in motor cortex. An untested prediction of this hypothesis is that independent parameters of movement must map to independent components of the neural dynamics. Using a task where three male monkeys made a sequence of reaching movements to randomly placed targets, we show that the spatial and temporal parameters of arm movements are independently encoded in the low-dimensional trajectories of population activity in motor cortex: each movement's direction corresponds to a fixed neural trajectory through neural state space and its speed to how quickly that trajectory is traversed. Recurrent neural network models show that this coding allows independent control over the spatial and temporal parameters of movement by separate network parameters. Our results support a key prediction of the dynamical systems view of motor cortex, and also argue that not all parameters of movement are defined by different trajectories of population activity.
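The paper's central geometric claim, that a fixed trajectory encodes direction while traversal speed encodes timing, can be visualized with a toy latent trajectory (purely illustrative, not the authors' data or analysis):

```python
import numpy as np

def circle_path(t):
    """One fixed neural trajectory: a circle in a 2D latent space."""
    return np.stack([np.cos(t), np.sin(t)], axis=1)

def mean_speed(z):
    """Average distance traveled per time step along a latent trajectory."""
    return np.linalg.norm(np.diff(z, axis=0), axis=1).mean()

# The same path traversed at two speeds: direction = which path is taken,
# movement speed = how quickly the path is traversed
slow = circle_path(np.linspace(0, 2 * np.pi, 200, endpoint=False))
fast = circle_path(np.linspace(0, 2 * np.pi, 100, endpoint=False))
```

Both trajectories occupy the same points in state space (the same "direction" code), yet the second covers the path in half the time, so the two movement parameters can be set independently.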
Affiliation(s)
- Matt G Perich
- Département de neurosciences, Faculté de médecine, Université de Montréal, Montreal, Quebec H3T 1J4, Canada
- Québec Artificial Intelligence Institute (Mila), Montreal, Quebec H2S 3H1, Canada
- Lee E Miller
- Department of Biomedical Engineering, Northwestern University, Chicago, Illinois 60208
- Mark D Humphries
- School of Psychology, University of Nottingham, Nottingham NG7 2RD, United Kingdom
10
Lin Z, Huang H. Spiking mode-based neural networks. Phys Rev E 2024; 110:024306. PMID: 39295018. DOI: 10.1103/physreve.110.024306.
Abstract
Spiking neural networks play an important role in brainlike neuromorphic computation and in studying the working mechanisms of neural circuits. One drawback of training a large-scale spiking neural network is that updating all weights is quite expensive. Furthermore, after training, all information related to the computational task is hidden in the weight matrix, preventing a transparent understanding of circuit mechanisms. In this work, we address these challenges by proposing a spiking mode-based training protocol, in which the recurrent weight matrix is expressed as a Hopfield-like product of three matrices: input modes, output modes, and a score matrix. The first advantage is that the weights are interpreted in terms of input and output modes and their associated scores, which characterize the importance of each decomposition term. The number of modes is thus adjustable, allowing more degrees of freedom for modeling experimental data, and the reduced space complexity of learning substantially lowers the training cost. Training of spiking networks is thus carried out in the mode-score space. The second advantage is that one can project the high-dimensional neural activity (the filtered spike train) onto the mode space, which is typically low dimensional; e.g., a few modes are sufficient to capture the shape of the underlying neural manifolds. We successfully apply our framework to two computational tasks: digit classification and selective sensory integration. Our method thus accelerates the training of spiking neural networks via a Hopfield-like decomposition, and this training leads to low-dimensional attractor structures in the high-dimensional neural dynamics.
11
Bayones L, Zainos A, Alvarez M, Romo R, Franci A, Rossi-Pool R. Orthogonality of sensory and contextual categorical dynamics embedded in a continuum of responses from the second somatosensory cortex. Proc Natl Acad Sci U S A 2024; 121:e2316765121. PMID: 38990946. PMCID: PMC11260089. DOI: 10.1073/pnas.2316765121.
Abstract
How does the brain simultaneously process signals that carry complementary information, such as raw sensory signals and their transformed counterparts, without any disruptive interference? Contemporary research underscores the brain's adeptness at using decorrelated responses to reduce such interference. Both neurophysiological findings and artificial neural networks support the notion of orthogonal representations for signal differentiation and parallel processing. Yet where and how raw sensory signals are transformed into more abstract representations remains unclear. Using a temporal pattern discrimination task in trained monkeys, we revealed that the second somatosensory cortex (S2) efficiently segregates faithful and transformed neural responses into orthogonal subspaces. Importantly, S2 population encoding of transformed signals, but not of faithful ones, disappeared during a nondemanding version of this task, which suggests that signal transformation and its decoding by downstream areas are only active on demand. A mechanistic computational model points to gain modulation as a possible biological mechanism for the observed context-dependent computation. Furthermore, the individual neural activities that underlie the orthogonal population representations exhibited a continuum of responses, with no well-determined clusters. These findings advocate that the brain, while employing a continuum of heterogeneous neural responses, splits population signals into orthogonal subspaces in a context-dependent fashion to enhance robustness and performance and to improve coding efficiency.
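Orthogonality between two response subspaces of the kind reported here can be quantified with principal angles. A generic sketch, not the authors' analysis code; the subspaces below are hand-built for illustration:

```python
import numpy as np

def principal_angles_deg(A, B):
    """Principal angles (degrees) between the column spaces of A and B."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.degrees(np.arccos(np.clip(s, -1.0, 1.0)))

# "Faithful" and "transformed" responses confined to disjoint axes of a
# 6-dimensional population space are perfectly orthogonal (all angles 90)
faithful = np.eye(6)[:, :2]
transformed = np.eye(6)[:, 2:4]
angles = principal_angles_deg(faithful, transformed)
```

Angles near 90 degrees mean downstream readouts of one representation are insensitive to the other, which is how orthogonal subspaces avoid disruptive interference.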
Affiliation(s)
- Lucas Bayones
- Instituto de Fisiología Celular, Departamento de Neurociencia Cognitiva, Universidad Nacional Autónoma de México, Mexico City 04510, Mexico
- Antonio Zainos
- Instituto de Fisiología Celular, Departamento de Neurociencia Cognitiva, Universidad Nacional Autónoma de México, Mexico City 04510, Mexico
- Manuel Alvarez
- Instituto de Fisiología Celular, Departamento de Neurociencia Cognitiva, Universidad Nacional Autónoma de México, Mexico City 04510, Mexico
- Alessio Franci
- Departamento de Matemáticas, Facultad de Ciencias, Universidad Nacional Autónoma de México, Mexico City 04510, Mexico
- Montefiore Institute, University of Liège, Liège 4000, Belgium
- Wallon ExceLlence (WEL) Research Institute, Wavre 1300, Belgium
- Román Rossi-Pool
- Instituto de Fisiología Celular, Departamento de Neurociencia Cognitiva, Universidad Nacional Autónoma de México, Mexico City 04510, Mexico
- Centro de Ciencias de la Complejidad, Universidad Nacional Autónoma de México, Mexico City 04510, Mexico
12
Jiang H, Bu X, Sui X, Tang H, Pan X, Chen Y. Spike Neural Network of Motor Cortex Model for Arm Reaching Control. Annu Int Conf IEEE Eng Med Biol Soc 2024; 2024:1-4. PMID: 40039622. DOI: 10.1109/embc53108.2024.10781802.
Abstract
Motor cortex modeling is crucial for understanding movement planning and execution. While interconnected recurrent neural networks have successfully described the dynamics of neural population activity, most existing methods use continuous signal-based neural networks, which do not reflect biological spiking neural signals. To address this limitation, we propose a recurrent spiking neural network to simulate motor cortical activity during an arm-reaching task. Specifically, our model is built upon integrate-and-fire spiking neurons with conductance-based synapses. We carefully designed the interconnections of neurons with two different firing time scales: "fast" and "slow" neurons. Experimental results demonstrate the effectiveness of our method, with the model's neuronal activity in good agreement with the monkey's motor cortex data at both the single-cell and population levels. Quantitative analysis reveals a correlation coefficient of 0.89 between the model's output and the real data. These results suggest the possibility of multiple timescales in motor cortical control.
13
Rodriguez AC, Perich MG, Miller L, Humphries MD. Motor cortex latent dynamics encode spatial and temporal arm movement parameters independently. bioRxiv 2024: 2023.05.26.542452. PMID: 37292834. PMCID: PMC10246015. DOI: 10.1101/2023.05.26.542452.
Abstract
The fluid movement of an arm requires multiple spatiotemporal parameters to be set independently. Recent studies have argued that arm movements are generated by the collective dynamics of neurons in motor cortex. An untested prediction of this hypothesis is that independent parameters of movement must map to independent components of the neural dynamics. Using a task where monkeys made a sequence of reaching movements to randomly placed targets, we show that the spatial and temporal parameters of arm movements are independently encoded in the low-dimensional trajectories of population activity in motor cortex: Each movement's direction corresponds to a fixed neural trajectory through neural state space and its speed to how quickly that trajectory is traversed. Recurrent neural network models show this coding allows independent control over the spatial and temporal parameters of movement by separate network parameters. Our results support a key prediction of the dynamical systems view of motor cortex, but also argue that not all parameters of movement are defined by different trajectories of population activity.
Affiliation(s)
- Matthew G. Perich
- Département de neurosciences, Faculté de médecine, Université de Montréal, Montréal, Canada
- Québec Artificial Intelligence Institute (Mila), Québec, Canada
- Lee Miller
- Northwestern University, Department of Biomedical Engineering, Chicago, USA
- Mark D. Humphries
- School of Psychology, University of Nottingham, Nottingham, United Kingdom
14
Podlaski WF, Machens CK. Approximating Nonlinear Functions With Latent Boundaries in Low-Rank Excitatory-Inhibitory Spiking Networks. Neural Comput 2024; 36:803-857. PMID: 38658028. DOI: 10.1162/neco_a_01658.
Abstract
Deep feedforward and recurrent neural networks have become successful functional models of the brain, but they neglect obvious biological details such as spikes and Dale's law. Here we argue that these details are crucial in order to understand how real neural circuits operate. Towards this aim, we put forth a new framework for spike-based computation in low-rank excitatory-inhibitory spiking networks. By considering populations with rank-1 connectivity, we cast each neuron's spiking threshold as a boundary in a low-dimensional input-output space. We then show how the combined thresholds of a population of inhibitory neurons form a stable boundary in this space, and those of a population of excitatory neurons form an unstable boundary. Combining the two boundaries results in a rank-2 excitatory-inhibitory (EI) network with inhibition-stabilized dynamics at the intersection of the two boundaries. The computation of the resulting networks can be understood as the difference of two convex functions and is thereby capable of approximating arbitrary non-linear input-output mappings. We demonstrate several properties of these networks, including noise suppression and amplification, irregular activity and synaptic balance, as well as how they relate to rate network dynamics in the limit that the boundary becomes soft. Finally, while our work focuses on small networks (5-50 neurons), we discuss potential avenues for scaling up to much larger networks. Overall, our work proposes a new perspective on spiking networks that may serve as a starting point for a mechanistic understanding of biological spike-based computation.
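The "difference of two convex functions" idea can be made concrete with a toy example (illustrative only; the function names are invented here, and the affine pieces merely stand in for the paper's excitatory and inhibitory latent boundaries): a non-convex hat function written exactly as one convex affine part minus one convex max-affine part.

```python
def hat(x):
    """Target non-convex (hat-shaped) function."""
    return min(x + 1.0, 1.0 - x)

def convex_pos(x):
    """Convex part (affine), loosely analogous to one boundary's contribution."""
    return x + 1.0

def convex_neg(x):
    """Convex max-affine part, loosely analogous to the opposing boundary."""
    return max(0.0, 2.0 * x)

# hat == convex_pos - convex_neg everywhere: a difference of convex functions,
# even though hat itself is not convex.
xs = [i / 10.0 for i in range(-10, 11)]
print(all(abs(hat(x) - (convex_pos(x) - convex_neg(x))) < 1e-12 for x in xs))  # True
```

Since any piecewise-linear function decomposes this way, stacking enough affine "boundaries" in each convex part yields arbitrary non-linear input-output approximations, which is the capability the abstract claims for the rank-2 EI networks.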
Affiliation(s)
- William F Podlaski
- Champalimaud Neuroscience Programme, Champalimaud Foundation, 1400-038 Lisbon, Portugal
- Christian K Machens
- Champalimaud Neuroscience Programme, Champalimaud Foundation, 1400-038 Lisbon, Portugal
15
Churchland MM, Shenoy KV. Preparatory activity and the expansive null-space. Nat Rev Neurosci 2024; 25:213-236. [PMID: 38443626 DOI: 10.1038/s41583-024-00796-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/26/2024] [Indexed: 03/07/2024]
Abstract
The study of the cortical control of movement experienced a conceptual shift over recent decades, as the basic currency of understanding shifted from single-neuron tuning towards population-level factors and their dynamics. This transition was informed by a maturing understanding of recurrent networks, where mechanism is often characterized in terms of population-level factors. By estimating factors from data, experimenters could test network-inspired hypotheses. Central to such hypotheses are 'output-null' factors that do not directly drive motor outputs yet are essential to the overall computation. In this Review, we highlight how the hypothesis of output-null factors was motivated by the venerable observation that motor-cortex neurons are active during movement preparation, well before movement begins. We discuss how output-null factors then became similarly central to understanding neural activity during movement. We discuss how this conceptual framework provided key analysis tools, making it possible for experimenters to address long-standing questions regarding motor control. We highlight an intriguing trend: as experimental and theoretical discoveries accumulate, the range of computational roles hypothesized to be subserved by output-null factors continues to expand.
Affiliation(s)
- Mark M Churchland
- Department of Neuroscience, Columbia University, New York, NY, USA.
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA.
- Kavli Institute for Brain Science, Columbia University, New York, NY, USA.
- Krishna V Shenoy
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Department of Neurobiology, Stanford University, Stanford, CA, USA
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Bio-X Institute, Stanford University, Stanford, CA, USA
- Howard Hughes Medical Institute at Stanford University, Stanford, CA, USA
16
Zhang X, Dou Z, Kim SH, Upadhyay G, Havert D, Kang S, Kazemi K, Huang K, Aydin O, Huang R, Rahman S, Ellis‐Mohr A, Noblet HA, Lim KH, Chung HJ, Gritton HJ, Saif MTA, Kong HJ, Beggs JM, Gazzola M. Mind In Vitro Platforms: Versatile, Scalable, Robust, and Open Solutions to Interfacing with Living Neurons. Adv Sci (Weinh) 2024; 11:e2306826. [PMID: 38161217 PMCID: PMC10953569 DOI: 10.1002/advs.202306826] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/18/2023] [Revised: 12/12/2023] [Indexed: 01/03/2024]
Abstract
Motivated by the unexplored potential of in vitro neural systems for computing and by the corresponding need of versatile, scalable interfaces for multimodal interaction, an accurate, modular, fully customizable, and portable recording/stimulation solution that can be easily fabricated, robustly operated, and broadly disseminated is presented. This approach entails a reconfigurable platform that works across multiple industry standards and that enables a complete signal chain, from neural substrates sampled through micro-electrode arrays (MEAs) to data acquisition, downstream analysis, and cloud storage. Built-in modularity supports the seamless integration of electrical/optical stimulation and fluidic interfaces. Custom MEA fabrication leverages maskless photolithography, favoring the rapid prototyping of a variety of configurations, spatial topologies, and constitutive materials. Through a dedicated analysis and management software suite, the utility and robustness of this system are demonstrated across neural cultures and applications, including embryonic stem cell-derived and primary neurons, organotypic brain slices, 3D engineered tissue mimics, concurrent calcium imaging, and long-term recording. Overall, this technology, termed "mind in vitro" to underscore the computing inspiration, provides an end-to-end solution that can be widely deployed due to its affordable (>10× cost reduction) and open-source nature, catering to the expanding needs of both conventional and unconventional electrophysiology.
Affiliation(s)
- Xiaotian Zhang
- Carl R. Woese Institute for Genomic Biology, University of Illinois at Urbana–Champaign, Urbana, IL 61801, USA
- Zhi Dou
- Department of Mechanical Science and Engineering, University of Illinois at Urbana–Champaign, Urbana, IL 61801, USA
- Seung Hyun Kim
- Department of Mechanical Science and Engineering, University of Illinois at Urbana–Champaign, Urbana, IL 61801, USA
- Gaurav Upadhyay
- Department of Mechanical Science and Engineering, University of Illinois at Urbana–Champaign, Urbana, IL 61801, USA
- Daniel Havert
- Department of Physics, Indiana University Bloomington, Bloomington, IN 47405, USA
- Sehong Kang
- Department of Mechanical Science and Engineering, University of Illinois at Urbana–Champaign, Urbana, IL 61801, USA
- Kimia Kazemi
- Department of Mechanical Science and Engineering, University of Illinois at Urbana–Champaign, Urbana, IL 61801, USA
- Kai‐Yu Huang
- Department of Chemical and Biomolecular Engineering, University of Illinois at Urbana–Champaign, Urbana, IL 61801, USA
- Onur Aydin
- Carl R. Woese Institute for Genomic Biology, University of Illinois at Urbana–Champaign, Urbana, IL 61801, USA
- Raymond Huang
- Department of Mechanical Science and Engineering, University of Illinois at Urbana–Champaign, Urbana, IL 61801, USA
- Saeedur Rahman
- Department of Mechanical Science and Engineering, University of Illinois at Urbana–Champaign, Urbana, IL 61801, USA
- Austin Ellis‐Mohr
- Department of Electrical and Computer Engineering, University of Illinois at Urbana–Champaign, Urbana, IL 61801, USA
- Hayden A. Noblet
- Molecular and Integrative Physiology, University of Illinois at Urbana–Champaign, Urbana, IL 61801, USA
- Neuroscience Program, University of Illinois at Urbana–Champaign, Urbana, IL 61801, USA
- Ki H. Lim
- Molecular and Integrative Physiology, University of Illinois at Urbana–Champaign, Urbana, IL 61801, USA
- Hee Jung Chung
- Carl R. Woese Institute for Genomic Biology, University of Illinois at Urbana–Champaign, Urbana, IL 61801, USA
- Molecular and Integrative Physiology, University of Illinois at Urbana–Champaign, Urbana, IL 61801, USA
- Neuroscience Program, University of Illinois at Urbana–Champaign, Urbana, IL 61801, USA
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana–Champaign, Urbana, IL 61801, USA
- Howard J. Gritton
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana–Champaign, Urbana, IL 61801, USA
- Department of Comparative Biosciences, University of Illinois at Urbana–Champaign, Urbana, IL 61802, USA
- M. Taher A. Saif
- Carl R. Woese Institute for Genomic Biology, University of Illinois at Urbana–Champaign, Urbana, IL 61801, USA
- Department of Mechanical Science and Engineering, University of Illinois at Urbana–Champaign, Urbana, IL 61801, USA
- Hyun Joon Kong
- Carl R. Woese Institute for Genomic Biology, University of Illinois at Urbana–Champaign, Urbana, IL 61801, USA
- Department of Chemical and Biomolecular Engineering, University of Illinois at Urbana–Champaign, Urbana, IL 61801, USA
- John M. Beggs
- Department of Physics, Indiana University Bloomington, Bloomington, IN 47405, USA
- Mattia Gazzola
- Carl R. Woese Institute for Genomic Biology, University of Illinois at Urbana–Champaign, Urbana, IL 61801, USA
- Department of Mechanical Science and Engineering, University of Illinois at Urbana–Champaign, Urbana, IL 61801, USA
17
Maslennikov O, Perc M, Nekorkin V. Topological features of spike trains in recurrent spiking neural networks that are trained to generate spatiotemporal patterns. Front Comput Neurosci 2024; 18:1363514. [PMID: 38463243 PMCID: PMC10920356 DOI: 10.3389/fncom.2024.1363514] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2023] [Accepted: 02/06/2024] [Indexed: 03/12/2024] Open
Abstract
In this study, we focus on training recurrent spiking neural networks to generate spatiotemporal patterns in the form of closed two-dimensional trajectories. Spike trains in the trained networks are examined in terms of their dissimilarity using the Victor-Purpura distance. We apply algebraic topology methods to the matrices obtained by rank-ordering the entries of the distance matrices, specifically calculating the persistence barcodes and Betti curves. By comparing the features of different types of output patterns, we uncover the complex relations between low-dimensional target signals and the underlying multidimensional spike trains.
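For readers unfamiliar with it, the Victor-Purpura spike-train distance used above admits a compact dynamic-programming implementation (a standard textbook sketch, not the authors' code; the parameter q is the cost per unit of spike-time shift, while inserting or deleting a spike costs 1):

```python
def victor_purpura(t1, t2, q):
    """Victor-Purpura distance between two sorted spike-time lists."""
    n, m = len(t1), len(t2)
    # D[i][j] = distance between the first i spikes of t1 and first j of t2.
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        D[i][0] = float(i)          # delete all i spikes
    for j in range(m + 1):
        D[0][j] = float(j)          # insert all j spikes
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = min(
                D[i - 1][j] + 1.0,                                  # delete spike i
                D[i][j - 1] + 1.0,                                  # insert spike j
                D[i - 1][j - 1] + q * abs(t1[i - 1] - t2[j - 1]),   # shift spike
            )
    return D[n][m]

# Identical trains have distance 0; with q = 0 shifting is free, so the
# distance reduces to the difference in spike counts.
print(victor_purpura([0.1, 0.5], [0.1, 0.5], q=1.0))   # 0.0
print(victor_purpura([0.1, 0.5, 0.9], [0.2], q=0.0))   # 2.0
```

The study then rank-orders the entries of such pairwise distance matrices before computing persistence barcodes and Betti curves; the distance itself is the ingredient sketched here.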
Affiliation(s)
- Oleg Maslennikov
- Federal Research Center A.V. Gaponov-Grekhov Institute of Applied Physics of the Russian Academy of Sciences, Nizhny Novgorod, Russia
- Matjaž Perc
- Faculty of Natural Sciences and Mathematics, University of Maribor, Maribor, Slovenia
- Department of Medical Research, China Medical University Hospital, China Medical University, Taichung City, Taiwan
- Complexity Science Hub Vienna, Vienna, Austria
- Department of Physics, Kyung Hee University, Seoul, Republic of Korea
- Vladimir Nekorkin
- Federal Research Center A.V. Gaponov-Grekhov Institute of Applied Physics of the Russian Academy of Sciences, Nizhny Novgorod, Russia
18
Wheeler DW, Kopsick JD, Sutton N, Tecuatl C, Komendantov AO, Nadella K, Ascoli GA. Hippocampome.org 2.0 is a knowledge base enabling data-driven spiking neural network simulations of rodent hippocampal circuits. eLife 2024; 12:RP90597. [PMID: 38345923 PMCID: PMC10942544 DOI: 10.7554/elife.90597] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/15/2024] Open
Abstract
Hippocampome.org is a mature open-access knowledge base of the rodent hippocampal formation focusing on neuron types and their properties. Previously, Hippocampome.org v1.0 established a foundational classification system identifying 122 hippocampal neuron types based on their axonal and dendritic morphologies, main neurotransmitter, membrane biophysics, and molecular expression (Wheeler et al., 2015). Releases v1.1 through v1.12 furthered the aggregation of literature-mined data, including among others neuron counts, spiking patterns, synaptic physiology, in vivo firing phases, and connection probabilities. Those additional properties increased the online information content of this public resource over 100-fold, enabling numerous independent discoveries by the scientific community. Hippocampome.org v2.0, introduced here, besides incorporating over 50 new neuron types, now recenters its focus on extending the functionality to build real-scale, biologically detailed, data-driven computational simulations. In all cases, the freely downloadable model parameters are directly linked to the specific peer-reviewed empirical evidence from which they were derived. Possible research applications include quantitative, multiscale analyses of circuit connectivity and spiking neural network simulations of activity dynamics. These advances can help generate precise, experimentally testable hypotheses and shed light on the neural mechanisms underlying associative memory and spatial navigation.
Affiliation(s)
- Diek W Wheeler
- Center for Neural Informatics, Structures, & Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, United States
- Bioengineering Department and Center for Neural Informatics, Structures, & Plasticity, College of Engineering and Computing, George Mason University, Fairfax, United States
- Jeffrey D Kopsick
- Center for Neural Informatics, Structures, & Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, United States
- Interdisciplinary Program in Neuroscience, College of Science, George Mason University, Fairfax, United States
- Nate Sutton
- Center for Neural Informatics, Structures, & Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, United States
- Bioengineering Department and Center for Neural Informatics, Structures, & Plasticity, College of Engineering and Computing, George Mason University, Fairfax, United States
- Carolina Tecuatl
- Center for Neural Informatics, Structures, & Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, United States
- Bioengineering Department and Center for Neural Informatics, Structures, & Plasticity, College of Engineering and Computing, George Mason University, Fairfax, United States
- Alexander O Komendantov
- Center for Neural Informatics, Structures, & Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, United States
- Bioengineering Department and Center for Neural Informatics, Structures, & Plasticity, College of Engineering and Computing, George Mason University, Fairfax, United States
- Kasturi Nadella
- Center for Neural Informatics, Structures, & Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, United States
- Bioengineering Department and Center for Neural Informatics, Structures, & Plasticity, College of Engineering and Computing, George Mason University, Fairfax, United States
- Giorgio A Ascoli
- Center for Neural Informatics, Structures, & Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, United States
- Bioengineering Department and Center for Neural Informatics, Structures, & Plasticity, College of Engineering and Computing, George Mason University, Fairfax, United States
- Interdisciplinary Program in Neuroscience, College of Science, George Mason University, Fairfax, United States
19
Zimnik AJ, Cora Ames K, An X, Driscoll L, Lara AH, Russo AA, Susoy V, Cunningham JP, Paninski L, Churchland MM, Glaser JI. Identifying Interpretable Latent Factors with Sparse Component Analysis. bioRxiv 2024:2024.02.05.578988. [PMID: 38370650 PMCID: PMC10871230 DOI: 10.1101/2024.02.05.578988] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/20/2024]
Abstract
In many neural populations, the computationally relevant signals are posited to be a set of 'latent factors' - signals shared across many individual neurons. Understanding the relationship between neural activity and behavior requires the identification of factors that reflect distinct computational roles. Methods for identifying such factors typically require supervision, which can be suboptimal if one is unsure how (or whether) factors can be grouped into distinct, meaningful sets. Here, we introduce Sparse Component Analysis (SCA), an unsupervised method that identifies interpretable latent factors. SCA seeks factors that are sparse in time and occupy orthogonal dimensions. With these simple constraints, SCA facilitates surprisingly clear parcellations of neural activity across a range of behaviors. We applied SCA to motor cortex activity from reaching and cycling monkeys, single-trial imaging data from C. elegans, and activity from a multitask artificial network. SCA consistently identified sets of factors that were useful in describing network computations.
Affiliation(s)
- Andrew J Zimnik
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- K Cora Ames
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Xinyue An
- Department of Neurology, Northwestern University, Chicago, IL, USA
- Interdepartmental Neuroscience Program, Northwestern University, Chicago, IL, USA
- Laura Driscoll
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Allen Institute for Neural Dynamics, Allen Institute, Seattle, WA, USA
- Antonio H Lara
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Abigail A Russo
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Vladislav Susoy
- Department of Physics, Harvard University, Cambridge, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- John P Cunningham
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Department of Statistics, Columbia University, New York, NY, USA
- Liam Paninski
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Department of Statistics, Columbia University, New York, NY, USA
- Mark M Churchland
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Kavli Institute for Brain Science, Columbia University Medical Center, New York, NY, USA
- Joshua I Glaser
- Department of Neurology, Northwestern University, Chicago, IL, USA
- Department of Computer Science, Northwestern University, Evanston, IL, USA
20
Gort J. Emergence of Universal Computations Through Neural Manifold Dynamics. Neural Comput 2024; 36:227-270. [PMID: 38101328 DOI: 10.1162/neco_a_01631] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2023] [Accepted: 09/05/2023] [Indexed: 12/17/2023]
Abstract
There is growing evidence that many forms of neural computation may be implemented by low-dimensional dynamics unfolding at the population scale. However, neither the connectivity structure nor the general capabilities of these embedded dynamical processes are currently understood. In this work, the two most common formalisms of firing-rate models are evaluated using tools from analysis, topology, and nonlinear dynamics in order to provide plausible explanations for these problems. It is shown that low-rank structured connectivities predict the formation of invariant and globally attracting manifolds in all these models. Regarding the dynamics arising in these manifolds, it is proved they are topologically equivalent across the considered formalisms. This letter also shows that under the low-rank hypothesis, the flows emerging in neural manifolds, including input-driven systems, are universal, which broadens previous findings. It explores how low-dimensional orbits can bear the production of continuous sets of muscular trajectories, the implementation of central pattern generators, and the storage of memory states. These dynamics can robustly simulate any Turing machine over arbitrary bounded memory strings, virtually endowing rate models with the power of universal computation. In addition, the letter shows how the low-rank hypothesis predicts the parsimonious correlation structure observed in cortical activity. Finally, it discusses how this theory could provide a useful tool from which to study neuropsychological phenomena using mathematical methods.
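The core low-rank prediction, that low-rank connectivity confines activity to a low-dimensional, globally attracting manifold, can be checked numerically in a minimal rank-1 rate network (an illustrative sketch with arbitrary parameter values, not code from the letter):

```python
import math
import random

N = 4
m = [1.0, -0.5, 2.0, 0.25]   # arbitrary illustrative rank-1 output vector
n = [0.5, 0.5, 0.5, 0.5]     # arbitrary illustrative input-selection vector

random.seed(0)
x = [random.uniform(-1.0, 1.0) for _ in range(N)]  # random initial state

dt = 0.01
for _ in range(5000):
    # Euler step of dx/dt = -x + m * (n . tanh(x)): rank-1 recurrent rate dynamics.
    kappa = sum(n[j] * math.tanh(x[j]) for j in range(N))
    x = [x[i] + dt * (-x[i] + m[i] * kappa) for i in range(N)]

# After the transient, x is proportional to m: the component of activity
# orthogonal to m decays, so the state collapses onto the one-dimensional
# invariant manifold spanned by m.
ratios = [x[i] / m[i] for i in range(N)]
print(all(abs(r - ratios[0]) < 1e-6 for r in ratios))  # True
```

With rank-R connectivity the same argument confines activity to the span of the R output vectors, which is the invariant-manifold structure the letter analyzes in full generality.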
Affiliation(s)
- Joan Gort
- Facultat de Psicologia, Universitat Autònoma de Barcelona, 08193, Bellaterra, Barcelona, Spain
21
Wheeler DW, Kopsick JD, Sutton N, Tecuatl C, Komendantov AO, Nadella K, Ascoli GA. Hippocampome.org v2.0: a knowledge base enabling data-driven spiking neural network simulations of rodent hippocampal circuits. bioRxiv 2024:2023.05.12.540597. [PMID: 37425693 PMCID: PMC10327012 DOI: 10.1101/2023.05.12.540597] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/11/2023]
Abstract
Hippocampome.org is a mature open-access knowledge base of the rodent hippocampal formation focusing on neuron types and their properties. Hippocampome.org v1.0 established a foundational classification system identifying 122 hippocampal neuron types based on their axonal and dendritic morphologies, main neurotransmitter, membrane biophysics, and molecular expression. Releases v1.1 through v1.12 furthered the aggregation of literature-mined data, including among others neuron counts, spiking patterns, synaptic physiology, in vivo firing phases, and connection probabilities. Those additional properties increased the online information content of this public resource over 100-fold, enabling numerous independent discoveries by the scientific community. Hippocampome.org v2.0, introduced here, besides incorporating over 50 new neuron types, now recenters its focus on extending the functionality to build real-scale, biologically detailed, data-driven computational simulations. In all cases, the freely downloadable model parameters are directly linked to the specific peer-reviewed empirical evidence from which they were derived. Possible research applications include quantitative, multiscale analyses of circuit connectivity and spiking neural network simulations of activity dynamics. These advances can help generate precise, experimentally testable hypotheses and shed light on the neural mechanisms underlying associative memory and spatial navigation.
Affiliation(s)
- Diek W. Wheeler
- Center for Neural Informatics, Structures, & Plasticity; Krasnow Institute for Advanced Study; George Mason University, Fairfax, VA, USA
- Bioengineering Department and Center for Neural Informatics, Structures, & Plasticity; College of Engineering and Computing; George Mason University, Fairfax, VA, USA
- Jeffrey D. Kopsick
- Center for Neural Informatics, Structures, & Plasticity; Krasnow Institute for Advanced Study; George Mason University, Fairfax, VA, USA
- Interdisciplinary Program in Neuroscience; College of Science; George Mason University, Fairfax, VA, USA
- Nate Sutton
- Center for Neural Informatics, Structures, & Plasticity; Krasnow Institute for Advanced Study; George Mason University, Fairfax, VA, USA
- Bioengineering Department and Center for Neural Informatics, Structures, & Plasticity; College of Engineering and Computing; George Mason University, Fairfax, VA, USA
- Carolina Tecuatl
- Center for Neural Informatics, Structures, & Plasticity; Krasnow Institute for Advanced Study; George Mason University, Fairfax, VA, USA
- Bioengineering Department and Center for Neural Informatics, Structures, & Plasticity; College of Engineering and Computing; George Mason University, Fairfax, VA, USA
- Alexander O. Komendantov
- Center for Neural Informatics, Structures, & Plasticity; Krasnow Institute for Advanced Study; George Mason University, Fairfax, VA, USA
- Bioengineering Department and Center for Neural Informatics, Structures, & Plasticity; College of Engineering and Computing; George Mason University, Fairfax, VA, USA
- Kasturi Nadella
- Center for Neural Informatics, Structures, & Plasticity; Krasnow Institute for Advanced Study; George Mason University, Fairfax, VA, USA
- Bioengineering Department and Center for Neural Informatics, Structures, & Plasticity; College of Engineering and Computing; George Mason University, Fairfax, VA, USA
- Giorgio A. Ascoli
- Center for Neural Informatics, Structures, & Plasticity; Krasnow Institute for Advanced Study; George Mason University, Fairfax, VA, USA
- Interdisciplinary Program in Neuroscience; College of Science; George Mason University, Fairfax, VA, USA
- Bioengineering Department and Center for Neural Informatics, Structures, & Plasticity; College of Engineering and Computing; George Mason University, Fairfax, VA, USA
22
Zhai S, Cui Q, Simmons DV, Surmeier DJ. Distributed dopaminergic signaling in the basal ganglia and its relationship to motor disability in Parkinson's disease. Curr Opin Neurobiol 2023; 83:102798. [PMID: 37866012 PMCID: PMC10842063 DOI: 10.1016/j.conb.2023.102798] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2023] [Revised: 09/19/2023] [Accepted: 09/20/2023] [Indexed: 10/24/2023]
Abstract
The degeneration of mesencephalic dopaminergic neurons that innervate the basal ganglia is responsible for the cardinal motor symptoms of Parkinson's disease (PD). It has been thought that loss of dopaminergic signaling in one basal ganglia region - the striatum - was solely responsible for the network pathophysiology causing PD motor symptoms. While our understanding of dopamine (DA)'s role in modulating striatal circuitry has deepened in recent years, it also has become clear that it acts in other regions of the basal ganglia to influence movement. Underscoring this point, examination of a new progressive mouse model of PD shows that striatal DA depletion alone is not sufficient to induce parkinsonism and that restoration of extra-striatal DA signaling attenuates parkinsonian motor deficits once they appear. This review summarizes recent advances in the effort to understand basal ganglia circuitry, its modulation by DA, and how its dysfunction drives PD motor symptoms.
Affiliation(s)
- Shenyu Zhai
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA
- Qiaoling Cui
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA
- DeNard V Simmons
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA
- D James Surmeier
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA; Aligning Science Across Parkinson's (ASAP) Collaborative Research Network, Chevy Chase, MD 20815, USA.
23
Wu S, Huang C, Snyder A, Smith M, Doiron B, Yu B. Automated customization of large-scale spiking network models to neuronal population activity. bioRxiv 2023:2023.09.21.558920. [PMID: 37790533 PMCID: PMC10542160 DOI: 10.1101/2023.09.21.558920] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/05/2023]
Abstract
Understanding brain function is facilitated by constructing computational models that accurately reproduce aspects of brain activity. Networks of spiking neurons capture the underlying biophysics of neuronal circuits, yet the dependence of their activity on model parameters is notoriously complex. As a result, heuristic methods have been used to configure spiking network models, which can lead to an inability to discover activity regimes complex enough to match large-scale neuronal recordings. Here we propose an automatic procedure, Spiking Network Optimization using Population Statistics (SNOPS), to customize spiking network models that reproduce the population-wide covariability of large-scale neuronal recordings. We first confirmed that SNOPS accurately recovers simulated neural activity statistics. Then, we applied SNOPS to recordings in macaque visual and prefrontal cortices and discovered previously unknown limitations of spiking network models. Taken together, SNOPS can guide the development of network models and thereby enable deeper insight into how networks of neurons give rise to brain function.
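As a conceptual toy (far simpler than SNOPS itself, with an invented one-parameter model and invented function names), the idea of automatically tuning a stochastic spiking model until a simulated statistic matches a recorded target can be sketched as a one-dimensional search:

```python
import random

def mean_rate(drive, trials=2000, seed=1):
    """Mean spike probability of a toy neuron that fires when drive + noise > 1."""
    rng = random.Random(seed)  # fixed seed: same noise draws on every call
    spikes = sum(1 for _ in range(trials) if drive + rng.gauss(0.0, 0.5) > 1.0)
    return spikes / trials

def fit_drive(target, lo=0.0, hi=3.0, iters=40):
    """Bisect on the drive parameter until the simulated rate matches the target.

    Works because, with frozen noise, the simulated rate is monotone in drive."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_rate(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

drive = fit_drive(target=0.25)
print(abs(mean_rate(drive) - 0.25) < 0.02)  # True
```

SNOPS replaces this single parameter and single statistic with many network parameters and population-wide covariability statistics, where no such monotone structure exists, hence the need for the automated optimization the abstract describes.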
Affiliation(s)
- Shenghao Wu
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Chengcheng Huang
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Department of Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA
- Adam Snyder
- Department of Neuroscience, University of Rochester, Rochester, NY, USA
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Center for Visual Science, University of Rochester, Rochester, NY, USA
- Matthew Smith
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Brent Doiron
- Department of Neurobiology, University of Chicago, Chicago, IL, USA
- Department of Statistics, University of Chicago, Chicago, IL, USA
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL, USA
- Byron Yu
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
|
24
|
Cimeša L, Ciric L, Ostojic S. Geometry of population activity in spiking networks with low-rank structure. PLoS Comput Biol 2023; 19:e1011315. [PMID: 37549194 PMCID: PMC10461857 DOI: 10.1371/journal.pcbi.1011315] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2022] [Revised: 08/28/2023] [Accepted: 06/27/2023] [Indexed: 08/09/2023] Open
Abstract
Recurrent network models are instrumental in investigating how behaviorally relevant computations emerge from collective neural dynamics. A recently developed class of models based on low-rank connectivity provides an analytically tractable framework for understanding how connectivity structure determines the geometry of low-dimensional dynamics and the ensuing computations. Such models, however, lack some fundamental biological constraints; in particular, they represent individual neurons as abstract units that communicate through continuous firing rates rather than discrete action potentials. Here we examine how far the theoretical insights obtained from low-rank rate networks transfer to more biologically plausible networks of spiking neurons. Adding a low-rank structure on top of random excitatory-inhibitory connectivity, we systematically compare the geometry of activity in networks of integrate-and-fire neurons to rate networks with statistically equivalent low-rank connectivity. We show that the mean-field predictions of rate networks allow us to identify low-dimensional dynamics at constant population-average activity in spiking networks, as well as novel non-linear regimes of activity such as out-of-phase oscillations and slow manifolds. We finally exploit these results to directly build spiking networks that perform nonlinear computations.
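The construction the abstract builds on — a low-rank term added on top of random connectivity — is easy to write down in the rate-network setting that serves as the paper's comparison point. Below is a minimal sketch: a rank-one perturbation of a random matrix drives tanh rate dynamics, and a scalar collective variable (the overlap of the population activity with the rank-one structure) summarizes the low-dimensional dynamics. All parameter values are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
N, g = 500, 0.8

# Connectivity: random bulk (variance g^2/N) plus a rank-one term m n^T / N,
# the standard low-rank construction.
chi = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
m = rng.normal(0.0, 1.0, N)
n_vec = rng.normal(0.0, 1.0, N)
J = g * chi + np.outer(m, n_vec) / N

# Rate dynamics: tau dx/dt = -x + J phi(x), with phi = tanh.
dt, tau, steps = 0.05, 1.0, 400
x = rng.normal(0.0, 1.0, N)
kappa = []  # overlap of phi(x) with the input direction n of the rank-one term
for _ in range(steps):
    x += dt / tau * (-x + J @ np.tanh(x))
    kappa.append(n_vec @ np.tanh(x) / N)
```

In the mean-field picture, `kappa` is the latent variable that parameterizes activity along the direction `m`, so the N-dimensional network effectively reduces to dynamics on a low-dimensional manifold; the paper's question is how much of this picture survives when the rate units are replaced by integrate-and-fire neurons.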
Affiliation(s)
- Ljubica Cimeša
- Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
- Lazar Ciric
- Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
|
25
|
Surmeier DJ, Zhai S, Cui Q, Simmons DV. Rethinking the network determinants of motor disability in Parkinson's disease. Front Synaptic Neurosci 2023; 15:1186484. [PMID: 37448451 PMCID: PMC10336242 DOI: 10.3389/fnsyn.2023.1186484] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2023] [Accepted: 06/12/2023] [Indexed: 07/15/2023] Open
Abstract
For roughly the last 30 years, the notion that striatal dopamine (DA) depletion was the critical determinant of network pathophysiology underlying the motor symptoms of Parkinson's disease (PD) has dominated the field. While the basal ganglia circuit model underpinning this hypothesis has been of great heuristic value, the hypothesis itself has never been directly tested. Moreover, studies in the last couple of decades have made it clear that the network model underlying this hypothesis fails to incorporate key features of the basal ganglia, including the fact that DA acts throughout the basal ganglia, not just in the striatum. Underscoring this point, recent work using a progressive mouse model of PD has shown that striatal DA depletion alone is not sufficient to induce parkinsonism and that restoration of extra-striatal DA signaling attenuates parkinsonian motor deficits once they appear. Given the broad array of discoveries in the field, it is time for a new model of the network determinants of motor disability in PD.
Affiliation(s)
- Dalton James Surmeier
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, IL, United States
|