1. Zhang Y, Chen Y, Wang T, Cui H. Neural geometry from mixed sensorimotor selectivity for predictive sensorimotor control. eLife 2025; 13:RP100064. PMID: 40310450; PMCID: PMC12045623; DOI: 10.7554/elife.100064.
Abstract
Although recent studies suggest that activity in the motor cortex, in addition to generating motor outputs, receives substantial information regarding sensory inputs, it is still unclear how sensory context adjusts the motor commands. Here, we recorded population neural activity in the motor cortex via microelectrode arrays while monkeys performed flexible manual interceptions of moving targets. During this task, which requires predictive sensorimotor control, the activity of most neurons in the motor cortex encoding upcoming movements was influenced by ongoing target motion. Single-trial neural states at the movement onset formed staggered orbital geometries, suggesting that target motion modulates peri-movement activity in an orthogonal manner. This neural geometry was further evaluated with a representational model and recurrent neural networks (RNNs) with task-specific input-output mapping. We propose that the sensorimotor dynamics can be derived from neuronal mixed sensorimotor selectivity and dynamic interaction between modulations.
Affiliation(s)
- Yiheng Zhang
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
- Chinese Institute for Brain Research, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Yun Chen
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
- Chinese Institute for Brain Research, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Tianwei Wang
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
- Chinese Institute for Brain Research, Beijing, China
- He Cui
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
- Chinese Institute for Brain Research, Beijing, China
2. Smoulder AL, Marino PJ, Oby ER, Snyder SE, Batista AP, Chase SM. Reward influences movement vigor through multiple motor cortical mechanisms. bioRxiv [Preprint] 2025:2025.04.09.648001. PMID: 40291660; PMCID: PMC12027334; DOI: 10.1101/2025.04.09.648001.
Abstract
The prospect of greater rewards often invigorates movements. What neural mechanisms support this increase of movement vigor for greater rewards? We had three rhesus monkeys perform reaching movements to targets worth different magnitudes of reward. We recorded neural population activity from primary motor and dorsal premotor cortex, brain areas at the output of cortical processing for voluntary movements, and asked how neural activity mediated the translation of reward into increased vigor. We identified features of neural activity during movement preparation, initiation, and execution that were both correlated with vigor and modulated by reward. We also found that the neural metrics that correlate with different aspects of movement vigor exhibit only limited correlation with one another, suggesting that there are multiple mechanisms through which reward modulates vigor. Finally, we note that the majority of reward's modulation of motor cortical activity cannot be accounted for by reward-mediated vigor differences in behavior, indicating that reward modulations within motor cortex may serve roles in addition to affecting vigor. Overall, our results provide insight into the neural mechanisms that link reward-driven motivation to the modulation of the details of movement.
3. Veillette JP, Chao AF, Nith R, Lopes P, Nusbaum HC. Overlapping Cortical Substrate of Biomechanical Control and Subjective Agency. J Neurosci 2025; 45:e1673242025. PMID: 40127938; PMCID: PMC12044032; DOI: 10.1523/jneurosci.1673-24.2025.
Abstract
Every movement requires the nervous system to solve a complex biomechanical control problem, but this process is mostly veiled from one's conscious awareness. Simultaneously, we also have conscious experience of controlling our movements: our sense of agency (SoA). Whether SoA corresponds to those neural representations that implement actual neuromuscular control is an open question with ethical, medical, and legal implications. If SoA is the conscious experience of control, this predicts that SoA can be decoded from the same brain structures that implement the so-called "inverse dynamics" computations for planning movement. We correlated human (male and female) fMRI measurements during hand movements with the internal representations of a deep neural network (DNN) performing the same hand control task in a biomechanical simulation, revealing detailed cortical encodings of sensorimotor states, idiosyncratic to each subject. We then manipulated SoA by usurping control of participants' muscles via electrical stimulation, and found that the same voxels which were best explained by modeled inverse dynamics representations (which, strikingly, were located in canonically visual areas) also predicted SoA. Importantly, model-brain correspondences and robust SoA decoding could both be achieved within single subjects, enabling relationships between motor representations and awareness to be studied at the level of the individual.
Significance Statement: The inherent complexity of biomechanical control problems is belied by the seeming simplicity of directing movements in our subjective experience. This aspect of our experience suggests we have limited conscious access to the neural and mental representations involved in controlling the body, but of which of the many possible representations are we, in fact, aware? Understanding which motor control representations percolate into awareness has taken on increasing importance as emerging neural interface technologies push the boundaries of human autonomy. In our study, we leverage machine learning models that have learned to control simulated bodies to localize biomechanical control representations in the brain. Then, we show that these brain regions predict perceived agency over the musculature during functional electrical stimulation.
Affiliation(s)
- John P Veillette
- Department of Psychology, University of Chicago, Chicago, IL 60637
- Alfred F Chao
- Department of Psychology, University of Chicago, Chicago, IL 60637
- Romain Nith
- Department of Computer Science, University of Chicago, Chicago, IL 60637
- Pedro Lopes
- Department of Computer Science, University of Chicago, Chicago, IL 60637
- Howard C Nusbaum
- Department of Psychology, University of Chicago, Chicago, IL 60637
4. Alcolea PI, Ma X, Bodkin K, Miller LE, Danziger ZC. Less is more: selection from a small set of options improves BCI velocity control. J Neural Eng 2025; 22. PMID: 40043320; PMCID: PMC12051477; DOI: 10.1088/1741-2552/adbcd9.
Abstract
Objective. Decoding algorithms used in invasive brain-computer interfaces (iBCIs) typically convert neural activity into continuously varying velocity commands. We hypothesized that putting constraints on which decoded velocity commands are permissible could improve user performance. To test this hypothesis, we designed the discrete direction selection (DDS) decoder, which uses neural activity to select among a small menu of preset cursor velocities.
Approach. We tested DDS in a closed-loop cursor control task against many common continuous velocity decoders, both in a human-operated real-time iBCI simulator (the jaBCI) and in a monkey using an iBCI. In the jaBCI, we compared performance across four visits by each of 48 naïve, able-bodied human subjects using either DDS, direct regression with assist (an affine map from neural activity to cursor velocity, DR-A), ReFIT, or the velocity Kalman filter (vKF). In a follow-up study to verify the jaBCI results, we compared a monkey's performance using an iBCI with either DDS or the Wiener filter decoder (a direct regression decoder that includes time history, WF).
Main Result. In the jaBCI, DDS substantially outperformed all other decoders, with 93% mean targets hit per visit compared to 56%, 39%, and 26% for DR-A, ReFIT, and vKF, respectively. With the iBCI, the monkey achieved a 61% success rate with DDS and a 37% success rate with WF.
Significance. Discretizing the decoded velocity with DDS effectively traded high-resolution velocity commands for less tortuous, lower-noise trajectories, highlighting the potential benefits of discretization in simplifying online BCI control.
Affiliation(s)
- Pedro I Alcolea
- Department of Biomedical Engineering, Florida International University, Miami, FL 33199, United States of America
- Xuan Ma
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, United States of America
- Kevin Bodkin
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, United States of America
- Lee E Miller
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, United States of America
- Department of Physical Medicine and Rehabilitation, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, United States of America
- Department of Biomedical Engineering, McCormick School of Engineering, Northwestern University, Evanston, IL 60208, United States of America
- Shirley Ryan AbilityLab, Chicago, IL 60611, United States of America
- Zachary C Danziger
- Department of Biomedical Engineering, Florida International University, Miami, FL 33199, United States of America
- Department of Rehabilitation Medicine, Division of Physical Therapy, Emory University, Atlanta, GA 30322, United States of America
- W.H. Coulter Department of Biomedical Engineering, Emory University, Atlanta, GA 30322, United States of America
5. Perkins SM, Amematsro EA, Cunningham J, Wang Q, Churchland MM. An emerging view of neural geometry in motor cortex supports high-performance decoding. eLife 2025; 12:RP89421. PMID: 39898793; PMCID: PMC11790250; DOI: 10.7554/elife.89421.
Abstract
Decoders for brain-computer interfaces (BCIs) assume constraints on neural activity, chosen to reflect scientific beliefs while yielding tractable computations. Recent scientific advances suggest that the true constraints on neural activity, especially its geometry, may be quite different from those assumed by most decoders. We designed a decoder, MINT, to embrace statistical constraints that are potentially more appropriate. If those constraints are accurate, MINT should outperform standard methods that explicitly make different assumptions. Additionally, MINT should be competitive with expressive machine learning methods that can implicitly learn constraints from data. MINT performed well across tasks, suggesting its assumptions are well-matched to the data. MINT outperformed other interpretable methods in every comparison we made. MINT outperformed expressive machine learning methods in 37 of 42 comparisons. MINT's computations are simple, scale favorably with increasing neuron counts, and yield interpretable quantities such as data likelihoods. MINT's performance and simplicity suggest it may be a strong candidate for many BCI applications.
Affiliation(s)
- Sean M Perkins
- Department of Biomedical Engineering, Columbia University, New York, United States
- Zuckerman Institute, Columbia University, New York, United States
- Elom A Amematsro
- Zuckerman Institute, Columbia University, New York, United States
- Department of Neuroscience, Columbia University Medical Center, New York, United States
- John Cunningham
- Zuckerman Institute, Columbia University, New York, United States
- Department of Statistics, Columbia University, New York, United States
- Center for Theoretical Neuroscience, Columbia University Medical Center, New York, United States
- Grossman Center for the Statistics of Mind, Columbia University, New York, United States
- Qi Wang
- Department of Biomedical Engineering, Columbia University, New York, United States
- Mark M Churchland
- Zuckerman Institute, Columbia University, New York, United States
- Department of Neuroscience, Columbia University Medical Center, New York, United States
- Grossman Center for the Statistics of Mind, Columbia University, New York, United States
- Kavli Institute for Brain Science, Columbia University Medical Center, New York, United States
6. Oby ER, Degenhart AD, Grigsby EM, Motiwala A, McClain NT, Marino PJ, Yu BM, Batista AP. Dynamical constraints on neural population activity. Nat Neurosci 2025; 28:383-393. PMID: 39825141; PMCID: PMC11802451; DOI: 10.1038/s41593-024-01845-7.
Abstract
The manner in which neural activity unfolds over time is thought to be central to sensory, motor and cognitive functions in the brain. Network models have long posited that the brain's computations involve time courses of activity that are shaped by the underlying network. A prediction from this view is that the activity time courses should be difficult to violate. We leveraged a brain-computer interface to challenge monkeys to violate the naturally occurring time courses of neural population activity that we observed in the motor cortex. This included challenging animals to traverse the natural time course of neural activity in a time-reversed manner. Animals were unable to violate the natural time courses of neural activity when directly challenged to do so. These results provide empirical support for the view that activity time courses observed in the brain indeed reflect the underlying network-level computational mechanisms that they are believed to implement.
Affiliation(s)
- Emily R Oby
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Department of Biomedical and Molecular Sciences, Queen's University, Kingston, Ontario, Canada
- Alan D Degenhart
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Erinn M Grigsby
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, PA, USA
- Rehabilitation and Neural Engineering Laboratory, University of Pittsburgh, Pittsburgh, PA, USA
- Asma Motiwala
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Nicole T McClain
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Patrick J Marino
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Byron M Yu
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Aaron P Batista
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
7. Ruffini G, Castaldo F, Vohryzek J. Structured Dynamics in the Algorithmic Agent. Entropy (Basel) 2025; 27:90. PMID: 39851710; PMCID: PMC11765005; DOI: 10.3390/e27010090.
Abstract
In the Kolmogorov Theory of Consciousness, algorithmic agents utilize inferred compressive models to track coarse-grained data produced by simplified world models, capturing regularities that structure subjective experience and guide action planning. Here, we study the dynamical aspects of this framework by examining how the requirement of tracking natural data drives the structural and dynamical properties of the agent. We first formalize the notion of a generative model using the language of symmetry from group theory, specifically employing Lie pseudogroups to describe the continuous transformations that characterize invariance in natural data. Then, adopting a generic neural network as a proxy for the agent dynamical system and drawing parallels to Noether's theorem in physics, we demonstrate that data tracking forces the agent to mirror the symmetry properties of the generative world model. This dual constraint on the agent's constitutive parameters and dynamical repertoire enforces a hierarchical organization consistent with the manifold hypothesis in the neural network. Our findings bridge perspectives from algorithmic information theory (Kolmogorov complexity, compressive modeling), symmetry (group theory), and dynamics (conservation laws, reduced manifolds), offering insights into the neural correlates of agenthood and structured experience in natural systems, as well as the design of artificial intelligence and computational models of the brain.
Affiliation(s)
- Giulio Ruffini
- Brain Modeling Department, Neuroelectrics, 08035 Barcelona, Spain
- Jakub Vohryzek
- Computational Neuroscience Group, Universitat Pompeu Fabra, 08005 Barcelona, Spain
- Centre for Eudaimonia and Human Flourishing, Linacre College, Oxford OX3 9BX, UK
8. Schuessler F, Mastrogiuseppe F, Ostojic S, Barak O. Aligned and oblique dynamics in recurrent neural networks. eLife 2024; 13:RP93060. PMID: 39601404; DOI: 10.7554/elife.93060.
Abstract
The relation between neural activity and behaviorally relevant variables is at the heart of neuroscience research. When strong, this relation is termed a neural representation. There is increasing evidence, however, for partial dissociations between activity in an area and relevant external variables. While many explanations have been proposed, a theoretical framework for the relationship between external and internal variables is lacking. Here, we utilize recurrent neural networks (RNNs) to explore the question of when and how neural dynamics and the network's output are related from a geometrical point of view. We find that training RNNs can lead to two dynamical regimes: dynamics can either be aligned with the directions that generate output variables, or oblique to them. We show that the choice of readout weight magnitude before training can serve as a control knob between the regimes, similar to recent findings in feedforward networks. These regimes are functionally distinct. Oblique networks are more heterogeneous and suppress noise in their output directions. They are furthermore more robust to perturbations along the output directions. Crucially, the oblique regime is specific to recurrent (but not feedforward) networks, arising from dynamical stability considerations. Finally, we show that tendencies toward the aligned or the oblique regime can be dissociated in neural recordings. Altogether, our results open a new perspective for interpreting neural activity by relating network dynamics and their output.
Affiliation(s)
- Friedrich Schuessler
- Faculty of Electrical Engineering and Computer Science, Technical University of Berlin, Berlin, Germany
- Science of Intelligence, Research Cluster of Excellence, Berlin, Germany
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure-PSL Research University, Paris, France
- Omri Barak
- Rappaport Faculty of Medicine and Network Biology Research Laboratories, Technion - Israel Institute of Technology, Haifa, Israel
9. Mathis MW, Perez Rotondo A, Chang EF, Tolias AS, Mathis A. Decoding the brain: From neural representations to mechanistic models. Cell 2024; 187:5814-5832. PMID: 39423801; PMCID: PMC11637322; DOI: 10.1016/j.cell.2024.08.051.
Abstract
A central principle in neuroscience is that neurons within the brain act in concert to produce perception, cognition, and adaptive behavior. Neurons are organized into specialized brain areas, dedicated to different functions to varying extents, and their function relies on distributed circuits to continuously encode relevant environmental and body-state features, enabling other areas to decode (interpret) these representations for computing meaningful decisions and executing precise movements. Thus, the distributed brain can be thought of as a series of computations that act to encode and decode information. In this perspective, we detail important concepts of neural encoding and decoding and highlight the mathematical tools used to measure them, including deep learning methods. We provide case studies where decoding concepts enable foundational and translational science in motor, visual, and language processing.
Affiliation(s)
- Mackenzie Weygandt Mathis
- Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland; Neuro-X Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
- Adriana Perez Rotondo
- Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland; Neuro-X Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
- Edward F Chang
- Department of Neurological Surgery, UCSF, San Francisco, CA, USA
- Andreas S Tolias
- Department of Ophthalmology, Byers Eye Institute, Stanford University, Stanford, CA, USA; Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Stanford BioX, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Alexander Mathis
- Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland; Neuro-X Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
10. Churchland MM. When preparation pays off. eLife 2024; 13:e102187. PMID: 39311855; PMCID: PMC11419667; DOI: 10.7554/elife.102187.
Abstract
Computational principles shed light on why movement is preceded by preparatory activity within the neural networks that control muscles.
Affiliation(s)
- Mark M Churchland
- Grossman Center for the Statistics of Mind, Columbia University in the City of New York, New York, United States
- Kavli Institute for Brain Science, Columbia University in the City of New York, New York, United States
- Department of Neuroscience, Columbia University in the City of New York, New York, United States
11. Colins Rodriguez A, Perich MG, Miller LE, Humphries MD. Motor Cortex Latent Dynamics Encode Spatial and Temporal Arm Movement Parameters Independently. J Neurosci 2024; 44:e1777232024. PMID: 39060178; PMCID: PMC11358606; DOI: 10.1523/jneurosci.1777-23.2024.
Abstract
The fluid movement of an arm requires multiple spatiotemporal parameters to be set independently. Recent studies have argued that arm movements are generated by the collective dynamics of neurons in motor cortex. An untested prediction of this hypothesis is that independent parameters of movement must map to independent components of the neural dynamics. Using a task where three male monkeys made a sequence of reaching movements to randomly placed targets, we show that the spatial and temporal parameters of arm movements are independently encoded in the low-dimensional trajectories of population activity in motor cortex: each movement's direction corresponds to a fixed neural trajectory through neural state space and its speed to how quickly that trajectory is traversed. Recurrent neural network models show that this coding allows independent control over the spatial and temporal parameters of movement by separate network parameters. Our results support a key prediction of the dynamical systems view of motor cortex, and also argue that not all parameters of movement are defined by different trajectories of population activity.
Affiliation(s)
- Matt G Perich
- Département de neurosciences, Faculté de médecine, Université de Montréal, Montreal, Quebec H3T 1J4, Canada
- Québec Artificial Intelligence Institute (Mila), Montreal, Quebec H2S 3H1, Canada
- Lee E Miller
- Department of Biomedical Engineering, Northwestern University, Chicago, Illinois 60208
- Mark D Humphries
- School of Psychology, University of Nottingham, Nottingham NG7 2RD, United Kingdom
12. Kirk EA, Hope KT, Sober SJ, Sauerbrei BA. An output-null signature of inertial load in motor cortex. Nat Commun 2024; 15:7309. PMID: 39181866; PMCID: PMC11344817; DOI: 10.1038/s41467-024-51750-7.
Abstract
Coordinated movement requires the nervous system to continuously compensate for changes in mechanical load across different conditions. For voluntary movements like reaching, the motor cortex is a critical hub that generates commands to move the limbs and counteract loads. How does cortex contribute to load compensation when rhythmic movements are sequenced by a spinal pattern generator? Here, we address this question by manipulating the mass of the forelimb in unrestrained mice during locomotion. While load produces changes in motor output that are robust to inactivation of motor cortex, it also induces a profound shift in cortical dynamics. This shift is minimally affected by cerebellar perturbation and significantly larger than the load response in the spinal motoneuron population. This latent representation may enable motor cortex to generate appropriate commands when a voluntary movement must be integrated with an ongoing, spinally-generated rhythm.
Affiliation(s)
- Eric A Kirk
- Department of Neurosciences, Case Western Reserve University School of Medicine, Cleveland, OH, USA
- Keenan T Hope
- Department of Neurosciences, Case Western Reserve University School of Medicine, Cleveland, OH, USA
- Samuel J Sober
- Department of Biology, Emory University, Atlanta, GA, USA
- Britton A Sauerbrei
- Department of Neurosciences, Case Western Reserve University School of Medicine, Cleveland, OH, USA
13. Sabatini DA, Kaufman MT. Reach-dependent reorientation of rotational dynamics in motor cortex. Nat Commun 2024; 15:7007. PMID: 39143078; PMCID: PMC11325044; DOI: 10.1038/s41467-024-51308-7.
Abstract
During reaching, neurons in motor cortex exhibit complex, time-varying activity patterns. Though single-neuron activity correlates with movement parameters, movement correlations explain neural activity only partially. Neural responses also reflect population-level dynamics thought to generate outputs. These dynamics have previously been described as "rotational," such that activity orbits in neural state space. Here, we reanalyze reaching datasets from male Rhesus macaques and find two essential features that cannot be accounted for with standard dynamics models. First, the planes in which rotations occur differ for different reaches. Second, this variation in planes reflects the overall location of activity in neural state space. Our "location-dependent rotations" model fits nearly all motor cortex activity during reaching, and high-quality decoding of reach kinematics reveals a quasilinear relationship with spiking. Varying rotational planes allows motor cortex to produce richer outputs than possible under previous models. Finally, our model links representational and dynamical ideas: representation is present in the state space location, which dynamics then convert into time-varying command signals.
Affiliation(s)
- David A Sabatini
- Department of Organismal Biology and Anatomy, The University of Chicago, Chicago, IL, 60637, USA
- Neuroscience Institute, The University of Chicago, Chicago, IL, 60637, USA
- Matthew T Kaufman
- Department of Organismal Biology and Anatomy, The University of Chicago, Chicago, IL, 60637, USA
- Neuroscience Institute, The University of Chicago, Chicago, IL, 60637, USA
14. Marino PJ, Bahureksa L, Fisac CF, Oby ER, Smoulder AL, Motiwala A, Degenhart AD, Grigsby EM, Joiner WM, Chase SM, Yu BM, Batista AP. A posture subspace in primary motor cortex. bioRxiv [Preprint] 2024:2024.08.12.607361. PMID: 39185208; PMCID: PMC11343157; DOI: 10.1101/2024.08.12.607361.
Abstract
To generate movements, the brain must combine information about movement goal and body posture. Motor cortex (M1) is a key node for the convergence of these information streams. How are posture and goal information organized within M1's activity to permit the flexible generation of movement commands? To answer this question, we recorded M1 activity while monkeys performed a variety of tasks with the forearm in a range of postures. We found that posture- and goal-related components of neural population activity were separable and resided in nearly orthogonal subspaces. The posture subspace was stable across tasks. Within each task, neural trajectories for each goal had similar shapes across postures. Our results reveal a simpler organization of posture information in M1 than previously recognized. The compartmentalization of posture and goal information might allow the two to be flexibly combined in the service of our broad repertoire of actions.
Affiliation(s)
- Patrick J. Marino
- Dept. of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA
- Lindsay Bahureksa
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA
- Dept. of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Carmen Fernández Fisac
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA
- Dept. of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Emily R. Oby
- Dept. of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA
- Dept. of Biomedical and Molecular Sciences, Queen’s University, Kingston, Ontario K7L 3N6, Canada
- Adam L. Smoulder
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA
- Dept. of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Asma Motiwala
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA
- Dept. of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Alan D. Degenhart
- Dept. of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA
- Dept. of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Starfish Neuroscience, Bellevue, WA 98004, USA
- Erinn M. Grigsby
- Dept. of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA
- Dept. of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Rehab Neural Engineering Labs, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Wilsaan M. Joiner
- Dept. of Neurobiology, Physiology, and Behavior, University of California, Davis, Davis, CA 95616, USA
- Steven M. Chase
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA
- Dept. of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Senior author
- These authors contributed equally
- Byron M. Yu
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA
- Dept. of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Dept. of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Senior author
- These authors contributed equally
- Aaron P. Batista
- Dept. of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA
- Senior author
- These authors contributed equally
- Lead contact

15
Quass GL, Rogalla MM, Ford AN, Apostolides PF. Mixed Representations of Sound and Action in the Auditory Midbrain. J Neurosci 2024; 44:e1831232024. PMID: 38918064; PMCID: PMC11270520; DOI: 10.1523/jneurosci.1831-23.2024.
Abstract
Linking sensory input and its consequences is a fundamental brain operation. During behavior, the neural activity of neocortical and limbic systems often reflects dynamic combinations of sensory and task-dependent variables, and these "mixed representations" are suggested to be important for perception, learning, and plasticity. However, the extent to which such integrative computations might occur outside of the forebrain is less clear. Here, we conduct cellular-resolution two-photon Ca2+ imaging in the superficial "shell" layers of the inferior colliculus (IC), as head-fixed mice of either sex perform a reward-based psychometric auditory task. We find that the activity of individual shell IC neurons jointly reflects auditory cues, mice's actions, and behavioral trial outcomes, such that trajectories of neural population activity diverge depending on mice's behavioral choice. Consequently, simple classifier models trained on shell IC neuron activity can predict trial-by-trial outcomes, even when training data are restricted to neural activity occurring prior to mice's instrumental actions. Thus, in behaving mice, auditory midbrain neurons transmit a population code that reflects a joint representation of sound, actions, and task-dependent variables.
Affiliation(s)
- Gunnar L Quass
- Department of Otolaryngology-Head & Neck Surgery, Kresge Hearing Research Institute, University of Michigan Medical School, Ann Arbor, Michigan 48109
- Meike M Rogalla
- Department of Otolaryngology-Head & Neck Surgery, Kresge Hearing Research Institute, University of Michigan Medical School, Ann Arbor, Michigan 48109
- Alexander N Ford
- Department of Otolaryngology-Head & Neck Surgery, Kresge Hearing Research Institute, University of Michigan Medical School, Ann Arbor, Michigan 48109
- Pierre F Apostolides
- Department of Otolaryngology-Head & Neck Surgery, Kresge Hearing Research Institute, University of Michigan Medical School, Ann Arbor, Michigan 48109
- Department of Molecular and Integrative Physiology, University of Michigan Medical School, Ann Arbor, Michigan 48109

16
Driscoll LN, Shenoy K, Sussillo D. Flexible multitask computation in recurrent networks utilizes shared dynamical motifs. Nat Neurosci 2024; 27:1349-1363. PMID: 38982201; PMCID: PMC11239504; DOI: 10.1038/s41593-024-01668-6.
Abstract
Flexible computation is a hallmark of intelligent behavior. However, little is known about how neural networks contextually reconfigure for different computations. In the present work, we identified an algorithmic neural substrate for modular computation through the study of multitasking artificial recurrent neural networks. Dynamical systems analyses revealed learned computational strategies mirroring the modular subtask structure of the training task set. Dynamical motifs, which are recurring patterns of neural activity that implement specific computations through dynamics, such as attractors, decision boundaries and rotations, were reused across tasks. For example, tasks requiring memory of a continuous circular variable repurposed the same ring attractor. We showed that dynamical motifs were implemented by clusters of units when the unit activation function was restricted to be positive. Cluster lesions caused modular performance deficits. Motifs were reconfigured for fast transfer learning after an initial phase of learning. This work establishes dynamical motifs as a fundamental unit of compositional computation, intermediate between neuron and network. As whole-brain studies simultaneously record activity from multiple specialized systems, the dynamical motif framework will guide questions about specialization and generalization.
Affiliation(s)
- Laura N Driscoll
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Krishna Shenoy
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Department of Neurobiology, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Bio-X Institute, Stanford University, Stanford, CA, USA
- Howard Hughes Medical Institute at Stanford University, Stanford, CA, USA
- David Sussillo
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA

17
Gillon CJ, Baker C, Ly R, Balzani E, Brunton BW, Schottdorf M, Ghosh S, Dehghani N. Open Data In Neurophysiology: Advancements, Solutions & Challenges. arXiv 2024:arXiv:2407.00976v1. PMID: 39010879; PMCID: PMC11247910.
Abstract
Across the life sciences, an ongoing effort over the last 50 years has made data and methods more reproducible and transparent. This openness has led to transformative insights and vastly accelerated scientific progress [1,2]. For example, structural biology [3] and genomics [4,5] have undertaken systematic collection and publication of protein sequences and structures over the past half-century, and these data have led to scientific breakthroughs that were unthinkable when data collection first began (e.g., [6]). We believe that neuroscience is poised to follow the same path, and that principles of open data and open science will transform our understanding of the nervous system in ways that are impossible to predict at the moment. To this end, new social structures along with active and open scientific communities are essential [7] to facilitate and expand the still limited adoption of open science practices in our field [8]. Unified by shared values of openness, we set out to organize a symposium for Open Data in Neuroscience (ODIN) to strengthen our community and facilitate transformative neuroscience research at large. In this report, we share what we learned during this first ODIN event. We also lay out plans for how to grow this movement, document emerging conversations, and propose a path toward a better and more transparent science of tomorrow.
Affiliation(s)
- Colleen J Gillon
- These authors contributed equally to this paper
- Department of Bioengineering, Imperial College London, London, UK
- Cody Baker
- These authors contributed equally to this paper
- CatalystNeuro, Benicia, CA, USA
- Ryan Ly
- These authors contributed equally to this paper
- Scientific Data Division, Lawrence Berkeley National Laboratory, Berkeley, CA, USA
- Edoardo Balzani
- Center for Computational Neuroscience, Flatiron Institute, New York, NY, USA
- Bingni W Brunton
- Department of Biology, University of Washington, Seattle, WA, USA
- Manuel Schottdorf
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Satrajit Ghosh
- McGovern Institute for Brain Research, MIT, Cambridge, MA, USA
- Nima Dehghani
- McGovern Institute for Brain Research, MIT, Cambridge, MA, USA
- These authors contributed equally to this paper

18
Wimalasena LN, Pandarinath C, Yong NA. Spinal interneuron population dynamics underlying flexible pattern generation. bioRxiv 2024:2024.06.20.599927. PMID: 38948833; PMCID: PMC11213001; DOI: 10.1101/2024.06.20.599927.
Abstract
The mammalian spinal locomotor network is composed of diverse populations of interneurons that collectively orchestrate and execute a range of locomotor behaviors. Despite the identification of many classes of spinal interneurons constituting the locomotor network, it remains unclear how the network's collective activity computes and modifies locomotor output on a step-by-step basis. To investigate this, we analyzed lumbar interneuron population recordings and multi-muscle electromyography from spinalized cats performing air stepping and used artificial intelligence methods to uncover state space trajectories of spinal interneuron population activity on single step cycles and at millisecond timescales. Our analyses of interneuron population trajectories revealed that traversal of specific state space regions held millisecond-timescale correspondence to the timing adjustments of extensor-flexor alternation. Similarly, we found that small variations in the path of state space trajectories were tightly linked to single-step, microvolt-scale adjustments in the magnitude of muscle output.
One-sentence summary: Features of spinal interneuron state space trajectories capture variations in the timing and magnitude of muscle activations across individual step cycles, with precision on the scales of milliseconds and microvolts, respectively.
19
Alcolea P, Ma X, Bodkin K, Miller LE, Danziger ZC. Less is more: selection from a small set of options improves BCI velocity control. bioRxiv 2024:2024.06.03.596241. PMID: 38895473; PMCID: PMC11185569; DOI: 10.1101/2024.06.03.596241.
Abstract
We designed the discrete direction selection (DDS) decoder for intracortical brain computer interface (iBCI) cursor control and showed that it outperformed currently used decoders in a human-operated real-time iBCI simulator and in monkey iBCI use. Unlike virtually all existing decoders that map between neural activity and continuous velocity commands, DDS uses neural activity to select among a small menu of preset cursor velocities. We compared closed-loop cursor control across four visits by each of 48 naïve, able-bodied human subjects using either DDS or one of three common continuous velocity decoders: direct regression with assist (an affine map from neural activity to cursor velocity), ReFIT, and the velocity Kalman Filter. DDS outperformed all three by a substantial margin. Subsequently, a monkey using an iBCI also had substantially better performance with DDS than with the Wiener filter decoder (direct regression decoder that includes time history). Discretizing the decoded velocity with DDS effectively traded high resolution velocity commands for less tortuous and lower noise trajectories, highlighting the potential benefits of simplifying online iBCI control.
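The selection scheme described in this abstract can be caricatured in a few lines: instead of regressing neural activity onto a continuous velocity, the decoder picks the best-matching option from a small preset menu. This is a minimal sketch of that idea only; the one-hot templates, menu size, and nearest-template rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative DDS-style decoder: classify activity against a small menu of
# preset cursor velocities rather than decoding a continuous velocity.
angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
menu = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # 8 preset velocities
templates = np.eye(8)  # assumed neural "signature" for each menu option

def dds_decode(neural_vec):
    """Return the preset velocity whose template best matches the activity."""
    scores = templates @ neural_vec      # similarity to each menu option
    return menu[int(np.argmax(scores))]  # select, rather than regress

rng = np.random.default_rng(0)
activity = templates[3] + 0.1 * rng.normal(size=8)  # noisy activity near option 3
velocity = dds_decode(activity)
```

Because the output is always one of the eight menu entries, the decoded trajectory trades velocity resolution for lower noise, which is the trade-off the abstract highlights.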
Affiliation(s)
- Pedro Alcolea
- Department of Biomedical Engineering, Florida International University, Miami, FL 33199, USA
- Xuan Ma
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA
- Kevin Bodkin
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA
- Lee E. Miller
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA
- Department of Physical Medicine and Rehabilitation, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA
- Department of Biomedical Engineering, McCormick School of Engineering, Northwestern University, Evanston, IL 60208, USA
- Shirley Ryan AbilityLab, Chicago, IL 60611, USA
- Zachary C. Danziger
- Department of Biomedical Engineering, Florida International University, Miami, FL 33199, USA
- Department of Rehabilitation Medicine - Division of Physical Therapy, Emory University, Atlanta, GA 30322, USA
- W.H. Coulter Department of Biomedical Engineering, Emory University, Atlanta, GA 30322, USA

20
Rodriguez AC, Perich MG, Miller L, Humphries MD. Motor cortex latent dynamics encode spatial and temporal arm movement parameters independently. bioRxiv 2024:2023.05.26.542452. PMID: 37292834; PMCID: PMC10246015; DOI: 10.1101/2023.05.26.542452.
Abstract
The fluid movement of an arm requires multiple spatiotemporal parameters to be set independently. Recent studies have argued that arm movements are generated by the collective dynamics of neurons in motor cortex. An untested prediction of this hypothesis is that independent parameters of movement must map to independent components of the neural dynamics. Using a task where monkeys made a sequence of reaching movements to randomly placed targets, we show that the spatial and temporal parameters of arm movements are independently encoded in the low-dimensional trajectories of population activity in motor cortex: Each movement's direction corresponds to a fixed neural trajectory through neural state space and its speed to how quickly that trajectory is traversed. Recurrent neural network models show this coding allows independent control over the spatial and temporal parameters of movement by separate network parameters. Our results support a key prediction of the dynamical systems view of motor cortex, but also argue that not all parameters of movement are defined by different trajectories of population activity.
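As a toy illustration of the coding scheme this abstract reports (not the authors' model): fix one latent path per reach direction and let speed set only the traversal rate. The circular path below is an arbitrary stand-in for a neural trajectory; the point is that the fast trajectory visits the same states as the slow one, just sooner.

```python
import numpy as np

def latent_traj(direction_deg, speed, n_steps, dt=0.01):
    """Fixed-shape latent trajectory per direction; speed rescales time only."""
    phase = np.arange(n_steps) * dt * speed           # traversal phase
    theta = np.deg2rad(direction_deg)                 # direction picks the path
    return np.stack([np.cos(phase + theta), np.sin(phase + theta)], axis=1)

slow = latent_traj(90, speed=1.0, n_steps=100)
fast = latent_traj(90, speed=2.0, n_steps=50)
# Doubling speed traverses the identical path twice as fast:
same_path = np.allclose(fast, slow[::2])
```

In this factorization, direction and speed are controlled by separate parameters, mirroring the independence the study demonstrates in motor cortex population activity.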
Affiliation(s)
- Matthew G. Perich
- Département de neurosciences, Faculté de médecine, Université de Montréal, Montréal, Canada
- Québec Artificial Intelligence Institute (Mila), Québec, Canada
- Lee Miller
- Northwestern University, Department of Biomedical Engineering, Chicago, USA
- Mark D. Humphries
- School of Psychology, University of Nottingham, Nottingham, United Kingdom

21
Zhou S, Buonomano DV. Unified control of temporal and spatial scales of sensorimotor behavior through neuromodulation of short-term synaptic plasticity. Sci Adv 2024; 10:eadk7257. PMID: 38701208; DOI: 10.1126/sciadv.adk7257.
Abstract
Neuromodulators have been shown to alter the temporal profile of short-term synaptic plasticity (STP); however, the computational function of this neuromodulation remains unexplored. Here, we propose that the neuromodulation of STP provides a general mechanism to scale neural dynamics and motor outputs in time and space. We trained recurrent neural networks that incorporated STP to produce complex motor trajectories (handwritten digits) with different temporal (speed) and spatial (size) scales. Neuromodulation of STP produced temporal and spatial scaling of the learned dynamics and enhanced temporal or spatial generalization compared to standard training of the synaptic weights in the absence of STP. The model also accounted for the results of two experimental studies involving flexible sensorimotor timing. Neuromodulation of STP provides a unified and biologically plausible mechanism to control the temporal and spatial scales of neural dynamics and sensorimotor behaviors.
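A minimal caricature of the scaling idea, with a plain linear oscillator standing in for the trained RNN and two scalar gains standing in for neuromodulation of STP (both stand-ins are assumptions for illustration): one gain rescales the clock of the dynamics, the other the spatial size of the output.

```python
import numpy as np

def run_dynamics(g_time, g_space, n_steps=500, dt=0.01):
    """Toy stand-in for network dynamics: a 2-D oscillator whose clock is
    scaled by g_time and whose output amplitude is scaled by g_space."""
    A = np.array([[0.0, -1.0], [1.0, 0.0]])    # rotational dynamics
    x = np.array([1.0, 0.0])
    out = np.empty(n_steps)
    for k in range(n_steps):
        x = x + dt * g_time * (A @ x)          # g_time stretches or compresses time
        out[k] = g_space * x[0]                # g_space rescales the trajectory
    return out

base = run_dynamics(1.0, 1.0)
bigger = run_dynamics(1.0, 2.0)   # same timing, twice the spatial scale
faster = run_dynamics(2.0, 1.0)   # same shape, oscillation roughly twice as fast
```

Spatial scaling here is exact (the readout gain never enters the dynamics), while temporal scaling stretches the trajectory's clock, which is the separation of scales the abstract attributes to neuromodulated STP.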
Affiliation(s)
- Shanglin Zhou
- Institute for Translational Brain Research, Fudan University, Shanghai, China
- State Key Laboratory of Medical Neurobiology, Fudan University, Shanghai, China
- MOE Frontiers Center for Brain Science, Fudan University, Shanghai, China
- Zhongshan Hospital, Fudan University, Shanghai, China
- Dean V Buonomano
- Department of Neurobiology, University of California, Los Angeles, Los Angeles, CA, USA
- Department of Psychology, University of California, Los Angeles, Los Angeles, CA, USA

22
Terada Y, Toyoizumi T. Chaotic neural dynamics facilitate probabilistic computations through sampling. Proc Natl Acad Sci U S A 2024; 121:e2312992121. PMID: 38648479; PMCID: PMC11067032; DOI: 10.1073/pnas.2312992121.
Abstract
Cortical neurons exhibit highly variable responses over trials and time. Theoretical works posit that this variability arises potentially from chaotic network dynamics of recurrently connected neurons. Here, we demonstrate that chaotic neural dynamics, formed through synaptic learning, allow networks to perform sensory cue integration in a sampling-based implementation. We show that the emergent chaotic dynamics provide neural substrates for generating samples not only of a static variable but also of a dynamical trajectory, where generic recurrent networks acquire these abilities with a biologically plausible learning rule through trial and error. Furthermore, the networks generalize their experience in the stimulus-evoked samples to the inference without partial or all sensory information, which suggests a computational role of spontaneous activity as a representation of the priors as well as a tractable biological computation for marginal distributions. These findings suggest that chaotic neural dynamics may serve brain function as a Bayesian generative model.
Affiliation(s)
- Yu Terada
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Saitama 351-0198, Japan
- Department of Neurobiology, University of California, San Diego, La Jolla, CA 92093
- The Institute for Physics of Intelligence, The University of Tokyo, Tokyo 113-0033, Japan
- Taro Toyoizumi
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Saitama 351-0198, Japan
- Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo 113-8656, Japan

23
Rush ER, Heckman C, Jayaram K, Humbert JS. Neural dynamics of robust legged robots. Front Robot AI 2024; 11:1324404. PMID: 38699630; PMCID: PMC11063321; DOI: 10.3389/frobt.2024.1324404.
Abstract
Legged robot control has improved in recent years with the rise of deep reinforcement learning; however, many of the underlying neural mechanisms remain difficult to interpret. Our aim is to leverage bio-inspired methods from computational neuroscience to better understand the neural activity of robust robot locomotion controllers. Similar to past work, we observe that terrain-based curriculum learning improves agent stability. We study the biomechanical responses and neural activity within our neural network controller by simultaneously pairing physical disturbances with targeted neural ablations. We identify an agile hip reflex that enables the robot to regain its balance and recover from lateral perturbations. Model gradients are employed to quantify the relative degree that various sensory feedback channels drive this reflexive behavior. We also find recurrent dynamics are implicated in robust behavior, and utilize sampling-based ablation methods to identify these key neurons. Our framework combines model-based and sampling-based methods for drawing causal relationships between neural network activity and robust embodied robot behavior.
Affiliation(s)
- Eugene R. Rush
- Department of Mechanical Engineering, University of Colorado Boulder, Boulder, CO, United States
- Christoffer Heckman
- Department of Computer Science, University of Colorado Boulder, Boulder, CO, United States
- Kaushik Jayaram
- Department of Mechanical Engineering, University of Colorado Boulder, Boulder, CO, United States
- J. Sean Humbert
- Department of Mechanical Engineering, University of Colorado Boulder, Boulder, CO, United States

24
Medrano J, Friston K, Zeidman P. Linking fast and slow: The case for generative models. Netw Neurosci 2024; 8:24-43. PMID: 38562283; PMCID: PMC10861163; DOI: 10.1162/netn_a_00343.
Abstract
A pervasive challenge in neuroscience is testing whether neuronal connectivity changes over time due to specific causes, such as stimuli, events, or clinical interventions. Recent hardware innovations and falling data storage costs enable longer, more naturalistic neuronal recordings. The implicit opportunity for understanding the self-organised brain calls for new analysis methods that link temporal scales: from the order of milliseconds over which neuronal dynamics evolve, to the order of minutes, days, or even years over which experimental observations unfold. This review article demonstrates how hierarchical generative models and Bayesian inference help to characterise neuronal activity across different time scales. Crucially, these methods go beyond describing statistical associations among observations and enable inference about underlying mechanisms. We offer an overview of fundamental concepts in state-space modeling and suggest a taxonomy for these methods. Additionally, we introduce key mathematical principles that underscore a separation of temporal scales, such as the slaving principle, and review Bayesian methods that are being used to test hypotheses about the brain with multiscale data. We hope that this review will serve as a useful primer for experimental and computational neuroscientists on the state of the art and current directions of travel in the complex systems modelling literature.
Affiliation(s)
- Johan Medrano
- The Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, UK
- Karl Friston
- The Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, UK
- Peter Zeidman
- The Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, UK

25
Churchland MM, Shenoy KV. Preparatory activity and the expansive null-space. Nat Rev Neurosci 2024; 25:213-236. PMID: 38443626; DOI: 10.1038/s41583-024-00796-z.
Abstract
The study of the cortical control of movement experienced a conceptual shift over recent decades, as the basic currency of understanding shifted from single-neuron tuning towards population-level factors and their dynamics. This transition was informed by a maturing understanding of recurrent networks, where mechanism is often characterized in terms of population-level factors. By estimating factors from data, experimenters could test network-inspired hypotheses. Central to such hypotheses are 'output-null' factors that do not directly drive motor outputs yet are essential to the overall computation. In this Review, we highlight how the hypothesis of output-null factors was motivated by the venerable observation that motor-cortex neurons are active during movement preparation, well before movement begins. We discuss how output-null factors then became similarly central to understanding neural activity during movement. We discuss how this conceptual framework provided key analysis tools, making it possible for experimenters to address long-standing questions regarding motor control. We highlight an intriguing trend: as experimental and theoretical discoveries accumulate, the range of computational roles hypothesized to be subserved by output-null factors continues to expand.
Affiliation(s)
- Mark M Churchland
- Department of Neuroscience, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Kavli Institute for Brain Science, Columbia University, New York, NY, USA
- Krishna V Shenoy
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Department of Neurobiology, Stanford University, Stanford, CA, USA
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Bio-X Institute, Stanford University, Stanford, CA, USA
- Howard Hughes Medical Institute at Stanford University, Stanford, CA, USA

26
Temmar H, Willsey MS, Costello JT, Mender MJ, Cubillos LH, Lam JL, Wallace DM, Kelberman MM, Patil PG, Chestek CA. Artificial neural network for brain-machine interface consistently produces more naturalistic finger movements than linear methods. bioRxiv 2024:2024.03.01.583000. PMID: 38496403; PMCID: PMC10942378; DOI: 10.1101/2024.03.01.583000.
Abstract
Brain-machine interfaces (BMI) aim to restore function to persons living with spinal cord injuries by 'decoding' neural signals into behavior. Recently, nonlinear BMI decoders have outperformed previous state-of-the-art linear decoders, but few studies have investigated what specific improvements these nonlinear approaches provide. In this study, we compare how temporally convolved feedforward neural networks (tcFNNs) and linear approaches predict individuated finger movements in open and closed-loop settings. We show that nonlinear decoders generate more naturalistic movements, producing distributions of velocities 85.3% closer to true hand control than linear decoders. Addressing concerns that neural networks may come to inconsistent solutions, we find that regularization techniques improve the consistency of tcFNN convergence by 194.6%, along with improving average performance and training speed. Finally, we show that tcFNN can leverage training data from multiple task variations to improve generalization. The results of this study show that nonlinear methods produce more naturalistic movements and show potential for generalizing over less constrained tasks.
Teaser: A neural network decoder produces consistent naturalistic movements and shows potential for real-world generalization through task variations.
27
Banerjee A, Chen F, Druckmann S, Long MA. Temporal scaling of motor cortical dynamics reveals hierarchical control of vocal production. Nat Neurosci 2024; 27:527-535. PMID: 38291282; DOI: 10.1038/s41593-023-01556-5.
Abstract
Neocortical activity is thought to mediate voluntary control over vocal production, but the underlying neural mechanisms remain unclear. In a highly vocal rodent, the male Alston's singing mouse, we investigate neural dynamics in the orofacial motor cortex (OMC), a structure critical for vocal behavior. We first describe neural activity that is modulated by component notes (~100 ms), probably representing sensory feedback. At longer timescales, however, OMC neurons exhibit diverse and often persistent premotor firing patterns that stretch or compress with song duration (~10 s). Using computational modeling, we demonstrate that such temporal scaling, acting through downstream motor production circuits, can enable vocal flexibility. These results provide a framework for studying hierarchical control circuits, a common design principle across many natural and artificial systems.
Collapse
Affiliation(s)
- Arkarup Banerjee
- NYU Neuroscience Institute, New York University Langone Health, New York, NY, USA.
- Department of Otolaryngology, New York University Langone Health, New York, NY, USA.
- Center for Neural Science, New York University, New York, NY, USA.
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA.
| | - Feng Chen
- Department of Applied Physics, Stanford University, Stanford, CA, USA
| | - Shaul Druckmann
- Department of Neurobiology, Stanford University, Stanford, CA, USA
| | - Michael A Long
- NYU Neuroscience Institute, New York University Langone Health, New York, NY, USA.
- Department of Otolaryngology, New York University Langone Health, New York, NY, USA.
- Center for Neural Science, New York University, New York, NY, USA.
| |
Collapse
|
28
|
Almani MN, Lazzari J, Chacon A, Saxena S. μSim: A goal-driven framework for elucidating the neural control of movement through musculoskeletal modeling. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.02.02.578628. [PMID: 38405828 PMCID: PMC10888726 DOI: 10.1101/2024.02.02.578628] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/27/2024]
Abstract
How does the motor cortex (MC) produce purposeful and generalizable movements from the complex musculoskeletal system in a dynamic environment? To elucidate the underlying neural dynamics, we use a goal-driven approach to model MC by considering its goal as a controller driving the musculoskeletal system through desired states to achieve movement. Specifically, we formulate the MC as a recurrent neural network (RNN) controller producing muscle commands while receiving sensory feedback from biologically accurate musculoskeletal models. Given this real-time simulated feedback implemented in advanced physics simulation engines, we use deep reinforcement learning to train the RNN to achieve desired movements under specified neural and musculoskeletal constraints. Activity of the trained model can accurately decode experimentally recorded neural population dynamics and single-unit MC activity, while generalizing well to testing conditions significantly different from training. Simultaneous goal- and data- driven modeling in which we use the recorded neural activity as observed states of the MC further enhances direct and generalizable single-unit decoding. Finally, we show that this framework elucidates computational principles of how neural dynamics enable flexible control of movement and make this framework easy-to-use for future experiments.
Collapse
|
29
|
Zimnik AJ, Cora Ames K, An X, Driscoll L, Lara AH, Russo AA, Susoy V, Cunningham JP, Paninski L, Churchland MM, Glaser JI. Identifying Interpretable Latent Factors with Sparse Component Analysis. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.02.05.578988. [PMID: 38370650 PMCID: PMC10871230 DOI: 10.1101/2024.02.05.578988] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/20/2024]
Abstract
In many neural populations, the computationally relevant signals are posited to be a set of 'latent factors' - signals shared across many individual neurons. Understanding the relationship between neural activity and behavior requires the identification of factors that reflect distinct computational roles. Methods for identifying such factors typically require supervision, which can be suboptimal if one is unsure how (or whether) factors can be grouped into distinct, meaningful sets. Here, we introduce Sparse Component Analysis (SCA), an unsupervised method that identifies interpretable latent factors. SCA seeks factors that are sparse in time and occupy orthogonal dimensions. With these simple constraints, SCA facilitates surprisingly clear parcellations of neural activity across a range of behaviors. We applied SCA to motor cortex activity from reaching and cycling monkeys, single-trial imaging data from C. elegans, and activity from a multitask artificial network. SCA consistently identified sets of factors that were useful in describing network computations.
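The two constraints named in the abstract, temporal sparsity and orthogonal dimensions, can be illustrated with a toy projected-gradient version. This is a sketch under stated assumptions (an L1 penalty on the latents, QR retraction to keep columns orthonormal, a fixed step size), not the authors' SCA implementation.

```python
import numpy as np

def sca_sketch(X, k, lam=0.5, lr=1e-3, n_iter=300, seed=0):
    """Toy Sparse Component Analysis: find k orthonormal dimensions W whose
    latent time courses Z = X @ W capture variance while being sparse in
    time (L1 penalty). X is (time, neurons)."""
    rng = np.random.default_rng(seed)
    W, _ = np.linalg.qr(rng.standard_normal((X.shape[1], k)))
    for _ in range(n_iter):
        Z = X @ W
        # descend lam*||Z||_1 - ||Z||_F^2, then retract to orthonormal columns
        grad = X.T @ (lam * np.sign(Z) - 2.0 * Z)
        W, _ = np.linalg.qr(W - lr * grad)
    return W, X @ W
```

The QR retraction after each step is what enforces the "orthogonal dimensions" constraint; the L1 term pushes each factor's time course toward being active only in brief windows.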
Collapse
Affiliation(s)
- Andrew J Zimnik
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
| | - K Cora Ames
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
| | - Xinyue An
- Department of Neurology, Northwestern University, Chicago, IL, USA
- Interdepartmental Neuroscience Program, Northwestern University, Chicago, IL, USA
| | - Laura Driscoll
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Allen Institute for Neural Dynamics, Allen Institute, Seattle, WA, USA

| | - Antonio H Lara
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
| | - Abigail A Russo
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
| | - Vladislav Susoy
- Department of Physics, Harvard University, Cambridge, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
| | - John P Cunningham
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Department of Statistics, Columbia University, New York, NY, USA
| | - Liam Paninski
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Department of Statistics, Columbia University, New York, NY, USA
| | - Mark M Churchland
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Kavli Institute for Brain Science, Columbia University Medical Center, New York, NY, USA
| | - Joshua I Glaser
- Department of Neurology, Northwestern University, Chicago, IL, USA
- Department of Computer Science, Northwestern University, Evanston, IL, USA
| |
Collapse
|
30
|
Wan Y, Macias LH, Garcia LR. Unraveling the hierarchical structure of posture and muscle activity changes during mating of Caenorhabditis elegans. PNAS NEXUS 2024; 3:pgae032. [PMID: 38312221 PMCID: PMC10837012 DOI: 10.1093/pnasnexus/pgae032] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/24/2023] [Accepted: 01/16/2024] [Indexed: 02/06/2024]
Abstract
One goal of neurobiology is to explain how decision-making in neuromuscular circuits produces behaviors. However, two obstacles complicate such efforts: individual behavioral variability and the challenge of simultaneously assessing multiple neuronal activities during behavior. Here, we circumvent these obstacles by analyzing whole animal behavior from a library of Caenorhabditis elegans male mating recordings. The copulating males express the GCaMP calcium sensor in the muscles, allowing simultaneous recording of posture and muscle activities. Our library contains wild type and males with selective neuronal desensitization in serotonergic neurons, which include male-specific posterior cord motor/interneurons and sensory ray neurons that modulate mating behavior. Incorporating deep learning-enabled computer vision, we developed a software to automatically quantify posture and muscle activities. By modeling, the posture and muscle activity data are classified into stereotyped modules, with the behaviors represented by serial executions and transitions among the modules. Detailed analysis of the modules reveals previously unidentified subtypes of the male's copulatory spicule prodding behavior. We find that wild-type and serotonergic neurons-suppressed males had different usage preferences for those module subtypes, highlighting the requirement of serotonergic neurons in the coordinated function of some muscles. In the structure of the behavior, bi-module repeats coincide with most of the previously described copulation steps, suggesting a recursive "repeat until success/give up" program is used for each step during mating. On the other hand, the transition orders of the bi-module repeats reveal the sub-behavioral hierarchy males employ to locate and inseminate hermaphrodites.
Collapse
Affiliation(s)
- Yufeng Wan
- Department of Biology, Texas A&M University, 3258 TAMU, College Station, TX 77843, USA
| | - Luca Henze Macias
- Department of Biology, Texas A&M University, 3258 TAMU, College Station, TX 77843, USA
| | - Luis Rene Garcia
- Department of Biology, Texas A&M University, 3258 TAMU, College Station, TX 77843, USA
| |
Collapse
|
31
|
Oby ER, Degenhart AD, Grigsby EM, Motiwala A, McClain NT, Marino PJ, Yu BM, Batista AP. Dynamical constraints on neural population activity. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.01.03.573543. [PMID: 38260549 PMCID: PMC10802336 DOI: 10.1101/2024.01.03.573543] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/24/2024]
Abstract
The manner in which neural activity unfolds over time is thought to be central to sensory, motor, and cognitive functions in the brain. Network models have long posited that the brain's computations involve time courses of activity that are shaped by the underlying network. A prediction from this view is that the activity time courses should be difficult to violate. We leveraged a brain-computer interface (BCI) to challenge monkeys to violate the naturally-occurring time courses of neural population activity that we observed in motor cortex. This included challenging animals to traverse the natural time course of neural activity in a time-reversed manner. Animals were unable to violate the natural time courses of neural activity when directly challenged to do so. These results provide empirical support for the view that activity time courses observed in the brain indeed reflect the underlying network-level computational mechanisms that they are believed to implement.
Collapse
|
32
|
Abbaspourazad H, Erturk E, Pesaran B, Shanechi MM. Dynamical flexible inference of nonlinear latent factors and structures in neural population activity. Nat Biomed Eng 2024; 8:85-108. [PMID: 38082181 PMCID: PMC11735406 DOI: 10.1038/s41551-023-01106-1] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2022] [Accepted: 09/12/2023] [Indexed: 12/26/2023]
Abstract
Modelling the spatiotemporal dynamics in the activity of neural populations while also enabling their flexible inference is hindered by the complexity and noisiness of neural observations. Here we show that the lower-dimensional nonlinear latent factors and latent structures can be computationally modelled in a manner that allows for flexible inference causally, non-causally and in the presence of missing neural observations. To enable flexible inference, we developed a neural network that separates the model into jointly trained manifold and dynamic latent factors such that nonlinearity is captured through the manifold factors and the dynamics can be modelled in tractable linear form on this nonlinear manifold. We show that the model, which we named 'DFINE' (for 'dynamical flexible inference for nonlinear embeddings') achieves flexible inference in simulations of nonlinear dynamics and across neural datasets representing a diversity of brain regions and behaviours. Compared with earlier neural-network models, DFINE enables flexible inference, better predicts neural activity and behaviour, and better captures the latent neural manifold structure. DFINE may advance the development of neurotechnology and investigations in neuroscience.
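The "tractable linear form" of the dynamics is what buys causal, non-causal, and missing-data inference. For the linear stage alone, that flexibility can be illustrated with an ordinary Kalman filter that simply skips the update step when an observation is missing. This is a toy stand-in, assuming the nonlinear manifold stage has already mapped observations into this linear space; it is not the DFINE network itself.

```python
import numpy as np

def kalman_filter_missing(y, A, C, Q, R, x0, P0):
    """Causal filtering of latents x_t from observations y_t (T x obs_dim);
    rows of y containing NaN are treated as missing: predict only, no update."""
    T = y.shape[0]
    n = A.shape[0]
    xs = np.zeros((T, n))
    Ps = np.zeros((T, n, n))
    x, P = x0, P0
    for t in range(T):
        # predict through the linear dynamics
        x = A @ x
        P = A @ P @ A.T + Q
        if not np.any(np.isnan(y[t])):
            # update only when the observation is present
            S = C @ P @ C.T + R
            K = P @ C.T @ np.linalg.inv(S)
            x = x + K @ (y[t] - C @ x)
            P = (np.eye(n) - K @ C) @ P
        xs[t], Ps[t] = x, P
    return xs, Ps
```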
Collapse
Affiliation(s)
- Hamidreza Abbaspourazad
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
| | - Eray Erturk
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
| | - Bijan Pesaran
- Departments of Neurosurgery, Neuroscience, and Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
| | - Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA.
- Thomas Lord Department of Computer Science, Alfred E. Mann Department of Biomedical Engineering, Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, USA.
| |
Collapse
|
33
|
Kirk EA, Hope KT, Sober SJ, Sauerbrei BA. An output-null signature of inertial load in motor cortex. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.11.06.565869. [PMID: 37986810 PMCID: PMC10659339 DOI: 10.1101/2023.11.06.565869] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/22/2023]
Abstract
Coordinated movement requires the nervous system to continuously compensate for changes in mechanical load across different contexts. For voluntary movements like reaching, the motor cortex is a critical hub that generates commands to move the limbs and counteract loads. How does cortex contribute to load compensation when rhythmic movements are clocked by a spinal pattern generator? Here, we address this question by manipulating the mass of the forelimb in unrestrained mice during locomotion. While load produces changes in motor output that are robust to inactivation of motor cortex, it also induces a profound shift in cortical dynamics, which is minimally affected by cerebellar perturbation and significantly larger than the response in the spinal motoneuron population. This latent representation may enable motor cortex to generate appropriate commands when a voluntary movement must be integrated with an ongoing, spinally-generated rhythm.
Collapse
Affiliation(s)
- Eric A. Kirk
- Case Western Reserve University School of Medicine, Department of Neurosciences
| | - Keenan T. Hope
- Case Western Reserve University School of Medicine, Department of Neurosciences
| | | | | |
Collapse
|
34
|
Soo WWM, Goudar V, Wang XJ. Training biologically plausible recurrent neural networks on cognitive tasks with long-term dependencies. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.10.10.561588. [PMID: 37873445 PMCID: PMC10592728 DOI: 10.1101/2023.10.10.561588] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/25/2023]
Abstract
Training recurrent neural networks (RNNs) has become a go-to approach for generating and evaluating mechanistic neural hypotheses for cognition. The ease and efficiency of training RNNs with backpropagation through time and the availability of robustly supported deep learning libraries has made RNN modeling more approachable and accessible to neuroscience. Yet, a major technical hindrance remains. Cognitive processes such as working memory and decision making involve neural population dynamics over a long period of time within a behavioral trial and across trials. It is difficult to train RNNs to accomplish tasks where neural representations and dynamics have long temporal dependencies without gating mechanisms such as LSTMs or GRUs which currently lack experimental support and prohibit direct comparison between RNNs and biological neural circuits. We tackled this problem based on the idea of specialized skip-connections through time to support the emergence of task-relevant dynamics, and subsequently reinstitute biological plausibility by reverting to the original architecture. We show that this approach enables RNNs to successfully learn cognitive tasks that prove impractical if not impossible to learn using conventional methods. Over numerous tasks considered here, we achieve less training steps and shorter wall-clock times, particularly in tasks that require learning long-term dependencies via temporal integration over long timescales or maintaining a memory of past events in hidden-states. Our methods expand the range of experimental tasks that biologically plausible RNN models can learn, thereby supporting the development of theory for the emergent neural mechanisms of computations involving long-term dependencies.
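The skip-connection idea can be pictured with a plain numpy forward pass in which the hidden state additionally receives the state from several steps back through an extra weight matrix; dropping that matrix recovers the original vanilla architecture, as in the abstract's "revert" step. The tanh nonlinearity and all weights here are illustrative assumptions, not the paper's trained networks.

```python
import numpy as np

def rnn_forward(x, Wx, Wh, b, skip=0, Ws=None):
    """Vanilla RNN forward pass with an optional skip-connection through
    time: h_t = tanh(Wx x_t + Wh h_{t-1} [+ Ws h_{t-skip}]). With skip=0
    this reverts to the plain recurrent architecture."""
    T = x.shape[0]
    H = Wh.shape[0]
    hs = np.zeros((T + 1, H))              # hs[t] holds h_t, with h_0 = 0
    for t in range(1, T + 1):
        pre = Wx @ x[t - 1] + Wh @ hs[t - 1] + b
        if skip and t - skip >= 0:
            pre = pre + Ws @ hs[t - skip]  # state from `skip` steps back
        hs[t] = np.tanh(pre)
    return hs[1:]
```

During training the skip path gives gradients a shortcut across long temporal gaps; setting `Ws` to zero (or `skip=0`) leaves exactly the original recurrence, which is the sense in which biological plausibility can be reinstated afterward.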
Collapse
|
35
|
Quass GL, Rogalla MM, Ford AN, Apostolides PF. Mixed representations of sound and action in the auditory midbrain. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.09.19.558449. [PMID: 37786676 PMCID: PMC10541616 DOI: 10.1101/2023.09.19.558449] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/04/2023]
Abstract
Linking sensory input and its consequences is a fundamental brain operation. Accordingly, neural activity of neo-cortical and limbic systems often reflects dynamic combinations of sensory and behaviorally relevant variables, and these "mixed representations" are suggested to be important for perception, learning, and plasticity. However, the extent to which such integrative computations might occur in brain regions upstream of the forebrain is less clear. Here, we conduct cellular-resolution 2-photon Ca2+ imaging in the superficial "shell" layers of the inferior colliculus (IC), as head-fixed mice of either sex perform a reward-based psychometric auditory task. We find that the activity of individual shell IC neurons jointly reflects auditory cues and mice's actions, such that trajectories of neural population activity diverge depending on mice's behavioral choice. Consequently, simple classifier models trained on shell IC neuron activity can predict trial-by-trial outcomes, even when training data are restricted to neural activity occurring prior to mice's instrumental actions. Thus in behaving animals, auditory midbrain neurons transmit a population code that reflects a joint representation of sound and action.
Collapse
Affiliation(s)
- GL Quass
- Kresge Hearing Research Institute, Department of Otolaryngology – Head & Neck Surgery, University of Michigan Medical School, Ann Arbor, Michigan 48109, United States
| | - MM Rogalla
- Kresge Hearing Research Institute, Department of Otolaryngology – Head & Neck Surgery, University of Michigan Medical School, Ann Arbor, Michigan 48109, United States
| | - AN Ford
- Kresge Hearing Research Institute, Department of Otolaryngology – Head & Neck Surgery, University of Michigan Medical School, Ann Arbor, Michigan 48109, United States
| | - PF Apostolides
- Kresge Hearing Research Institute, Department of Otolaryngology – Head & Neck Surgery, University of Michigan Medical School, Ann Arbor, Michigan 48109, United States
- Department of Molecular and Integrative Physiology, University of Michigan Medical School, Ann Arbor, Michigan 48109, United States
| |
Collapse
|
36
|
Muscinelli SP, Wagner MJ, Litwin-Kumar A. Optimal routing to cerebellum-like structures. Nat Neurosci 2023; 26:1630-1641. [PMID: 37604889 PMCID: PMC10506727 DOI: 10.1038/s41593-023-01403-7] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2022] [Accepted: 07/12/2023] [Indexed: 08/23/2023]
Abstract
The vast expansion from mossy fibers to cerebellar granule cells (GrC) produces a neural representation that supports functions including associative and internal model learning. This motif is shared by other cerebellum-like structures and has inspired numerous theoretical models. Less attention has been paid to structures immediately presynaptic to GrC layers, whose architecture can be described as a 'bottleneck' and whose function is not understood. We therefore develop a theory of cerebellum-like structures in conjunction with their afferent pathways that predicts the role of the pontine relay to cerebellum and the glomerular organization of the insect antennal lobe. We highlight a new computational distinction between clustered and distributed neuronal representations that is reflected in the anatomy of these two brain structures. Our theory also reconciles recent observations of correlated GrC activity with theories of nonlinear mixing. More generally, it shows that structured compression followed by random expansion is an efficient architecture for flexible computation.
Collapse
Affiliation(s)
- Samuel P Muscinelli
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY, USA.
| | - Mark J Wagner
- National Institute of Neurological Disorders and Stroke, NIH, Bethesda, MD, USA
| | - Ashok Litwin-Kumar
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY, USA.
| |
Collapse
|
37
|
Pocratsky AM, Nascimento F, Özyurt MG, White IJ, Sullivan R, O’Callaghan BJ, Smith CC, Surana S, Beato M, Brownstone RM. Pathophysiology of Dyt1-Tor1a dystonia in mice is mediated by spinal neural circuit dysfunction. Sci Transl Med 2023; 15:eadg3904. [PMID: 37134150 PMCID: PMC7614689 DOI: 10.1126/scitranslmed.adg3904] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2022] [Accepted: 04/14/2023] [Indexed: 05/05/2023]
Abstract
Dystonia, a neurological disorder defined by abnormal postures and disorganized movements, is considered to be a neural circuit disorder with dysfunction arising within and between multiple brain regions. Given that spinal neural circuits constitute the final pathway for motor control, we sought to determine their contribution to this movement disorder. Focusing on the most common inherited form of dystonia in humans, DYT1-TOR1A, we generated a conditional knockout of the torsin family 1 member A (Tor1a) gene in the mouse spinal cord and dorsal root ganglia (DRG). We found that these mice recapitulated the phenotype of the human condition, developing early-onset generalized torsional dystonia. Motor signs emerged early in the mouse hindlimbs before spreading caudo-rostrally to affect the pelvis, trunk, and forelimbs throughout postnatal maturation. Physiologically, these mice bore the hallmark features of dystonia, including spontaneous contractions at rest and excessive and disorganized contractions, including cocontractions of antagonist muscle groups, during voluntary movements. Spontaneous activity, disorganized motor output, and impaired monosynaptic reflexes, all signs of human dystonia, were recorded from isolated mouse spinal cords from these conditional knockout mice. All components of the monosynaptic reflex arc were affected, including motor neurons. Given that confining the Tor1a conditional knockout to DRG did not lead to early-onset dystonia, we conclude that the pathophysiological substrate of this mouse model of dystonia lies in spinal neural circuits. Together, these data provide new insights into our current understanding of dystonia pathophysiology.
Collapse
Affiliation(s)
- Amanda M. Pocratsky
- Department of Neuromuscular Diseases, UCL Queen Square Institute of Neurology, University College London; London, WC1N 3BG, UK
| | - Filipe Nascimento
- Department of Neuromuscular Diseases, UCL Queen Square Institute of Neurology, University College London; London, WC1N 3BG, UK
| | - M. Görkem Özyurt
- Department of Neuromuscular Diseases, UCL Queen Square Institute of Neurology, University College London; London, WC1N 3BG, UK
| | - Ian J. White
- Laboratory for Molecular Cell Biology, University College London; London, WC1E 6BT, UK
| | - Roisin Sullivan
- Department of Molecular Neuroscience, UCL Queen Square Institute of Neurology, University College London; London, WC1N 3BG, UK
| | - Benjamin J. O’Callaghan
- Department of Molecular Neuroscience, UCL Queen Square Institute of Neurology, University College London; London, WC1N 3BG, UK
| | - Calvin C. Smith
- Department of Neuromuscular Diseases, UCL Queen Square Institute of Neurology, University College London; London, WC1N 3BG, UK
| | - Sunaina Surana
- Department of Neuromuscular Diseases, UCL Queen Square Institute of Neurology, University College London; London, WC1N 3BG, UK
- UK Dementia Research Institute, University College London; London, WC1E 6BT, UK
| | - Marco Beato
- Department of Neuroscience, Physiology, and Pharmacology, University College London; London, WC1E 6BT, UK
| | - Robert M. Brownstone
- Department of Neuromuscular Diseases, UCL Queen Square Institute of Neurology, University College London; London, WC1N 3BG, UK
| |
Collapse
|
38
|
Disse GD, Nandakumar B, Pauzin FP, Blumenthal GH, Kong Z, Ditterich J, Moxon KA. Neural ensemble dynamics in trunk and hindlimb sensorimotor cortex encode for the control of postural stability. Cell Rep 2023; 42:112347. [PMID: 37027302 DOI: 10.1016/j.celrep.2023.112347] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2022] [Revised: 02/09/2023] [Accepted: 03/21/2023] [Indexed: 04/08/2023] Open
Abstract
The cortex has a disputed role in monitoring postural equilibrium and intervening in cases of major postural disturbances. Here, we investigate the patterns of neural activity in the cortex that underlie neural dynamics during unexpected perturbations. In both the primary sensory (S1) and motor (M1) cortices of the rat, unique neuronal classes differentially covary their responses to distinguish different characteristics of applied postural perturbations; however, there is substantial information gain in M1, demonstrating a role for higher-order computations in motor control. A dynamical systems model of M1 activity and forces generated by the limbs reveals that these neuronal classes contribute to a low-dimensional manifold comprised of separate subspaces enabled by congruent and incongruent neural firing patterns that define different computations depending on the postural responses. These results inform how the cortex engages in postural control, directing work aiming to understand postural instability after neurological disease.
Collapse
Affiliation(s)
- Gregory D Disse
- Neuroscience Graduate Group, University of California, Davis, Davis, CA 95616, USA; Biomedical Engineering, University of California, Davis, Davis, CA 95616, USA
| | | | - Francois P Pauzin
- Biomedical Engineering, University of California, Davis, Davis, CA 95616, USA
| | - Gary H Blumenthal
- School of Biomedical Engineering Science and Health Systems, Drexel University, Philadelphia, PA 19104, USA
| | - Zhaodan Kong
- Mechanical and Aerospace Engineering, University of California, Davis, Davis, CA 95616, USA
| | - Jochen Ditterich
- Neuroscience Graduate Group, University of California, Davis, Davis, CA 95616, USA; Neurobiology, Physiology and Behavior, University of California, Davis, Davis, CA 95616, USA
| | - Karen A Moxon
- Neuroscience Graduate Group, University of California, Davis, Davis, CA 95616, USA; Biomedical Engineering, University of California, Davis, Davis, CA 95616, USA.
| |
Collapse
|
39
|
Marciniak Dg Agra K, Dg Agra P. F = ma. Is the macaque brain Newtonian? Cogn Neuropsychol 2023; 39:376-408. [PMID: 37045793 DOI: 10.1080/02643294.2023.2191843] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/14/2023]
Abstract
Intuitive Physics, the ability to anticipate how the physical events involving mass objects unfold in time and space, is a central component of intelligent systems. Intuitive physics is a promising tool for gaining insight into mechanisms that generalize across species because both humans and non-human primates are subject to the same physical constraints when engaging with the environment. Physical reasoning abilities are widely present within the animal kingdom, but monkeys, with acute 3D vision and a high level of dexterity, appreciate and manipulate the physical world in much the same way humans do.
Collapse
Affiliation(s)
- Karolina Marciniak Dg Agra
- The Rockefeller University, Laboratory of Neural Circuits, New York, NY, USA
- Center for Brain, Minds and Machines, Cambridge, MA, USA
| | - Pedro Dg Agra
- The Rockefeller University, Laboratory of Neural Circuits, New York, NY, USA
- Center for Brain, Minds and Machines, Cambridge, MA, USA
| |
Collapse
|
40
|
Recurrent networks endowed with structural priors explain suboptimal animal behavior. Curr Biol 2023; 33:622-638.e7. [PMID: 36657448 DOI: 10.1016/j.cub.2022.12.044] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2022] [Revised: 10/03/2022] [Accepted: 12/16/2022] [Indexed: 01/19/2023]
Abstract
The strategies found by animals facing a new task are determined both by individual experience and by structural priors evolved to leverage the statistics of natural environments. Rats quickly learn to capitalize on the trial sequence correlations of two-alternative forced choice (2AFC) tasks after correct trials but consistently deviate from optimal behavior after error trials. To understand this outcome-dependent gating, we first show that recurrent neural networks (RNNs) trained in the same 2AFC task outperform rats as they can readily learn to use across-trial information both after correct and error trials. We hypothesize that, although RNNs can optimize their behavior in the 2AFC task without any a priori restrictions, rats' strategy is constrained by a structural prior adapted to a natural environment in which rewarded and non-rewarded actions provide largely asymmetric information. When pre-training RNNs in a more ecological task with more than two possible choices, networks develop a strategy by which they gate off the across-trial evidence after errors, mimicking rats' behavior. Population analyses show that the pre-trained networks form an accurate representation of the sequence statistics independently of the outcome in the previous trial. After error trials, gating is implemented by a change in the network dynamics that temporarily decouple the categorization of the stimulus from the across-trial accumulated evidence. Our results suggest that the rats' suboptimal behavior reflects the influence of a structural prior that reacts to errors by isolating the network decision dynamics from the context, ultimately constraining the performance in a 2AFC laboratory task.
|
41
|
Banerjee A, Chen F, Druckmann S, Long MA. Neural dynamics in the rodent motor cortex enables flexible control of vocal timing. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.01.23.525252. [PMID: 36747850 PMCID: PMC9900850 DOI: 10.1101/2023.01.23.525252] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 01/25/2023]
Abstract
Neocortical activity is thought to mediate voluntary control over vocal production, but the underlying neural mechanisms remain unclear. In a highly vocal rodent, the Alston's singing mouse, we investigate neural dynamics in the orofacial motor cortex (OMC), a structure critical for vocal behavior. We first describe neural activity that is modulated by component notes (approx. 100 ms), likely representing sensory feedback. At longer timescales, however, OMC neurons exhibit diverse and often persistent premotor firing patterns that stretch or compress with song duration (approx. 10 s). Using computational modeling, we demonstrate that such temporal scaling, acting via downstream motor production circuits, can enable vocal flexibility. These results provide a framework for studying hierarchical control circuits, a common design principle across many natural and artificial systems.
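The temporal scaling described in this abstract, premotor firing patterns that stretch or compress with song duration, can be sketched as a rate profile that depends only on normalized phase t/T rather than absolute time (the Gaussian shape and its parameters are illustrative assumptions, not the paper's model).

```python
import math

def omc_rate(t, song_duration, peak_phase=0.3, width=0.1):
    """Firing-rate profile defined over normalized phase t/T, so the
    same pattern stretches or compresses with song duration T
    (hypothetical Gaussian bump; peak_phase and width are assumptions)."""
    phase = t / song_duration
    return math.exp(-((phase - peak_phase) ** 2) / (2 * width ** 2))
```

Because the profile is a function of phase alone, the rate at 30% of a 5 s song equals the rate at 30% of a 10 s song; absolute peak times shift proportionally with duration, which is the signature of temporal scaling.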
Affiliation(s)
- Arkarup Banerjee
- NYU Neuroscience Institute, New York University Langone Health, New York, NY 10016, USA
- Department of Otolaryngology, New York University Langone Health, New York, NY 10016, USA
- Center for Neural Science, New York University, New York, NY 10003, USA
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY 11724, USA
| | - Feng Chen
- Department of Applied Physics, Stanford University, Stanford, CA 94305, USA
| | - Shaul Druckmann
- Department of Neuroscience, Stanford University, Stanford, CA 94304, USA
| | - Michael A Long
- NYU Neuroscience Institute, New York University Langone Health, New York, NY 10016, USA
- Department of Otolaryngology, New York University Langone Health, New York, NY 10016, USA
- Center for Neural Science, New York University, New York, NY 10003, USA
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY 11724, USA
| |
|
42
|
Thura D, Cabana JF, Feghaly A, Cisek P. Integrated neural dynamics of sensorimotor decisions and actions. PLoS Biol 2022; 20:e3001861. [PMID: 36520685 PMCID: PMC9754259 DOI: 10.1371/journal.pbio.3001861] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2022] [Accepted: 09/29/2022] [Indexed: 12/23/2022] Open
Abstract
Recent theoretical models suggest that deciding about actions and executing them are not implemented by completely distinct neural mechanisms but are instead two modes of an integrated dynamical system. Here, we investigate this proposal by examining how neural activity unfolds during a dynamic decision-making task within the high-dimensional space defined by the activity of cells in monkey dorsal premotor (PMd), primary motor (M1), and dorsolateral prefrontal cortex (dlPFC) as well as the external and internal segments of the globus pallidus (GPe, GPi). Dimensionality reduction shows that the four strongest components of neural activity are functionally interpretable, reflecting a state transition between deliberation and commitment, the transformation of sensory evidence into a choice, and the baseline and slope of the rising urgency to decide. Analysis of the contribution of each population to these components shows meaningful differences between regions but no distinct clusters within each region, consistent with an integrated dynamical system. During deliberation, cortical activity unfolds on a two-dimensional "decision manifold" defined by sensory evidence and urgency and falls off this manifold at the moment of commitment into a choice-dependent trajectory leading to movement initiation. The structure of the manifold varies between regions: In PMd, it is curved; in M1, it is nearly perfectly flat; and in dlPFC, it is almost entirely confined to the sensory evidence dimension. In contrast, pallidal activity during deliberation is primarily defined by urgency. We suggest that these findings reveal the distinct functional contributions of different brain regions to an integrated dynamical system governing action selection and execution.
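The analysis style described in this abstract, reducing high-dimensional population activity to a few functionally interpretable components, can be sketched with PCA on synthetic data (all dimensions, latent signals, and noise levels are hypothetical; the two latents loosely stand in for the "sensory evidence" and "urgency" axes mentioned above).

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_timepoints = 50, 200
t = np.linspace(0, 1, n_timepoints)

# Two latent signals standing in for evidence- and urgency-like components
latents = np.stack([np.sin(2 * np.pi * t), t])
mixing = rng.normal(size=(n_neurons, 2))      # random neuron loadings
activity = mixing @ latents + 0.05 * rng.normal(size=(n_neurons, n_timepoints))

# PCA via SVD of the mean-centered neurons x time matrix
centered = activity - activity.mean(axis=1, keepdims=True)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)   # variance explained per component
pcs = vt[:2]                      # time courses of the two strongest components
```

Because the synthetic population is built from two latents plus weak noise, the top two components capture nearly all the variance; in real recordings, the analogous step is to check whether the strongest components carry interpretable task variables, as the study reports for evidence, urgency, and the deliberation-to-commitment transition.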
Affiliation(s)
- David Thura
- Groupe de recherche sur la signalisation neurale et la circuiterie, Department of Neuroscience, Université de Montréal, Montréal, Québec, Canada
| | - Jean-François Cabana
- Groupe de recherche sur la signalisation neurale et la circuiterie, Department of Neuroscience, Université de Montréal, Montréal, Québec, Canada
| | - Albert Feghaly
- Groupe de recherche sur la signalisation neurale et la circuiterie, Department of Neuroscience, Université de Montréal, Montréal, Québec, Canada
| | - Paul Cisek
- Groupe de recherche sur la signalisation neurale et la circuiterie, Department of Neuroscience, Université de Montréal, Montréal, Québec, Canada
| |
|