1. Ramot A, Taschbach FH, Yang YC, Hu Y, Chen Q, Morales BC, Wang XC, Wu A, Tye KM, Benna MK, Komiyama T. Motor learning refines thalamic influence on motor cortex. Nature 2025. DOI: 10.1038/s41586-025-08962-8. PMID: 40335698. Received 10 May 2024; accepted 1 Apr 2025.
Abstract
The primary motor cortex (M1) is central for the learning and execution of dexterous motor skills [1-3], and its superficial layer (layers 2 and 3; hereafter, L2/3) is a key locus of learning-related plasticity [1,4-6]. It remains unknown how motor learning shapes the way in which upstream regions activate M1 circuits to execute learned movements. Here, using longitudinal axonal imaging of the main inputs to M1 L2/3 in mice, we show that the motor thalamus is the key input source that encodes learned movements in experts (animals trained for two weeks). We then use optogenetics to identify the subset of M1 L2/3 neurons that are strongly driven by thalamic inputs before and after learning. We find that the thalamic influence on M1 changes with learning, such that the motor thalamus preferentially activates the M1 neurons that encode learned movements in experts. Inactivation of the thalamic inputs to M1 in experts impairs learned movements. Our study shows that motor learning reshapes the thalamic influence on M1 to enable the reliable execution of learned movements.
Affiliation(s)
- Assaf Ramot: Department of Neurobiology; Center for Neural Circuits and Behavior; Department of Neurosciences; Halıcıoğlu Data Science Institute, University of California San Diego, La Jolla, CA, USA
- Felix H Taschbach: Department of Neurobiology, University of California San Diego, La Jolla, CA, USA; Salk Institute for Biological Studies, Howard Hughes Medical Institute, La Jolla, CA, USA
- Yun C Yang: Department of Neurobiology; Center for Neural Circuits and Behavior; Department of Neurosciences; Halıcıoğlu Data Science Institute, University of California San Diego, La Jolla, CA, USA
- Yuxin Hu: Department of Neurobiology; Center for Neural Circuits and Behavior; Department of Neurosciences; Halıcıoğlu Data Science Institute, University of California San Diego, La Jolla, CA, USA
- Qiyu Chen: Department of Neurobiology; Center for Neural Circuits and Behavior; Department of Neurosciences; Halıcıoğlu Data Science Institute, University of California San Diego, La Jolla, CA, USA
- Bobbie C Morales: Department of Neurobiology; Center for Neural Circuits and Behavior; Department of Neurosciences; Halıcıoğlu Data Science Institute, University of California San Diego, La Jolla, CA, USA
- Xinyi C Wang: Department of Neurobiology; Center for Neural Circuits and Behavior; Department of Neurosciences; Halıcıoğlu Data Science Institute, University of California San Diego, La Jolla, CA, USA
- An Wu: Department of Neurobiology; Center for Neural Circuits and Behavior; Department of Neurosciences; Halıcıoğlu Data Science Institute, University of California San Diego, La Jolla, CA, USA
- Kay M Tye: Salk Institute for Biological Studies, Howard Hughes Medical Institute, La Jolla, CA, USA; Kavli Institute for the Brain and Mind, La Jolla, CA, USA
- Marcus K Benna: Department of Neurobiology, University of California San Diego, La Jolla, CA, USA
- Takaki Komiyama: Department of Neurobiology; Center for Neural Circuits and Behavior; Department of Neurosciences; Halıcıoğlu Data Science Institute, University of California San Diego, La Jolla, CA, USA; Kavli Institute for the Brain and Mind, La Jolla, CA, USA
2. Zheng C, Wang Q, Cui H. Continuous sensorimotor transformation enhances robustness of neural dynamics to perturbation in macaque motor cortex. Nat Commun 2025; 16:3213. DOI: 10.1038/s41467-025-58421-1. PMID: 40180984; PMCID: PMC11968799. Received 22 May 2024; accepted 20 Mar 2025. Open access.
Abstract
Neural activity in the motor cortex evolves dynamically to prepare and generate movement. Here, we investigate how motor cortical dynamics adapt to dynamic environments and whether these adaptations influence robustness against disruptions. We apply intracortical microstimulation (ICMS) in the motor cortex of monkeys performing delayed center-out reaches to either a static target (static condition) or a rotating target that required interception (moving condition). While ICMS prolongs reaction times (RTs) in the static condition, it does not increase RTs in the moving condition, a difference that correlates with faster recovery of neural population activity after the perturbation. The neural dynamics suggest that the moving condition involves ongoing sensorimotor transformations throughout the delay period, whereas motor planning in the static condition is completed quickly. A neural network model shows that continuous feedback input rapidly corrects perturbation-induced errors in the moving condition. We conclude that continuous sensorimotor transformations enhance the motor cortex's resilience to perturbations, facilitating timely movement execution.
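One simple way to quantify the "faster recovery of neural population activity" described in this abstract is to measure when the perturbed trajectory re-enters a baseline-calibrated corridor around the unperturbed one. The sketch below is purely illustrative: the function name, the threshold rule, and the toy data are our assumptions, not the paper's analysis.

```python
import numpy as np

def recovery_time(unperturbed, perturbed, t_stim, threshold_sd=2.0):
    """Samples after t_stim until the perturbed population trajectory returns
    within a baseline-derived distance of the unperturbed one.

    Both arrays have shape (time, n_neurons); the threshold is the mean plus
    threshold_sd standard deviations of the pre-stimulation distances.
    Returns np.inf if the trajectories never re-converge.
    """
    dist = np.linalg.norm(perturbed - unperturbed, axis=1)
    baseline = dist[:t_stim]
    thresh = baseline.mean() + threshold_sd * baseline.std()
    below = np.where(dist[t_stim:] <= thresh)[0]
    return float(below[0]) if below.size else np.inf

# Toy check: two noisy copies of one trajectory, plus a transient "ICMS" bump
# that decays with a 15-sample time constant after t_stim.
rng = np.random.default_rng(0)
T, N, t_stim = 200, 30, 100
traj = rng.standard_normal((T, N)).cumsum(axis=0) * 0.01
noise = 0.05 * rng.standard_normal((T, N))
bump = np.zeros((T, N))
bump[t_stim:, 0] = 5.0 * np.exp(-np.arange(T - t_stim) / 15.0)
rt = recovery_time(traj, traj + noise + bump, t_stim)
```

Comparing such recovery times between conditions (static vs. moving) is the kind of summary statistic the abstract's "faster recovery" claim implies.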
Affiliation(s)
- Cong Zheng: Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, 200031, China; Beijing Institute for Brain Research, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, 102206, China; Chinese Institute for Brain Research, Beijing, 102206, China
- Qifan Wang: Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, 200031, China; Beijing Institute for Brain Research, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, 102206, China; Chinese Institute for Brain Research, Beijing, 102206, China
- He Cui: Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, 200031, China; Beijing Institute for Brain Research, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, 102206, China; Chinese Institute for Brain Research, Beijing, 102206, China
3. Feulner B, Perich MG, Miller LE, Clopath C, Gallego JA. A neural implementation model of feedback-based motor learning. Nat Commun 2025; 16:1805. DOI: 10.1038/s41467-024-54738-5. PMID: 39979257; PMCID: PMC11842561. Received 21 Feb 2023; accepted 18 Nov 2024. Open access.
Abstract
Animals use feedback to rapidly correct ongoing movements in the presence of a perturbation. Repeated exposure to a predictable perturbation leads to behavioural adaptation that compensates for its effects. Here, we tested the hypothesis that all the processes necessary for motor adaptation may emerge as properties of a controller that adaptively updates its policy. We trained a recurrent neural network to control its own output through an error-based feedback signal, which allowed it to rapidly counteract external perturbations. Implementing a biologically plausible plasticity rule based on this same feedback signal enabled the network to learn to compensate for persistent perturbations through a trial-by-trial process. The network activity changes during learning matched those of neural populations in monkey primary motor cortex, a region known to mediate both movement correction and motor adaptation, during the same task. Furthermore, our model natively reproduced several key aspects of behavioural studies in humans and monkeys. Thus, key features of trial-by-trial motor adaptation can arise from the internal properties of a recurrent neural circuit that adaptively controls its output based on ongoing feedback.
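The core idea, a single error-based feedback signal that both reports output error and drives a local plasticity rule, can be sketched in a few lines. This is a deliberately minimal, non-recurrent caricature: the 30-degree output rotation, the feedback projection through the readout transpose, and the learning rate are our assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid = 5, 20
u = rng.standard_normal(n_in)                         # fixed motor command
W = 0.1 * rng.standard_normal((n_hid, n_in))          # plastic weights
C = rng.standard_normal((2, n_hid)) / np.sqrt(n_hid)  # fixed readout
theta = np.deg2rad(30.0)                              # persistent perturbation:
R = np.array([[np.cos(theta), -np.sin(theta)],        # outputs get rotated
              [np.sin(theta),  np.cos(theta)]])
y_star = np.array([0.5, -0.3])                        # desired output
lr = 0.05
errors = []
for trial in range(200):
    r = np.tanh(W @ u)                                # network activity
    e = y_star - R @ (C @ r)                          # feedback (error) signal
    errors.append(np.linalg.norm(e))
    # Local, feedback-driven plasticity: the error is projected back through
    # fixed weights (C.T) and gated by presynaptic activity.
    W += lr * np.outer((C.T @ e) * (1.0 - r**2), u)
```

Even though the feedback projection does not know about the rotation, it remains a descent-like direction for perturbations under 90 degrees, so the trial-by-trial error shrinks, which is the flavour of result the paper develops in a recurrent setting.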
Affiliation(s)
- Barbara Feulner: Department of Bioengineering, Imperial College London, London, UK
- Matthew G Perich: Département de neurosciences, Faculté de médecine, Université de Montréal, Montréal, QC, Canada; Mila (Quebec Artificial Intelligence Institute), Montréal, QC, Canada
- Lee E Miller: Department of Neuroscience, Northwestern University, Chicago, IL, USA; Department of Biomedical Engineering, Northwestern University, Evanston, IL, USA; Department of Physical Medicine and Rehabilitation, Northwestern University, and Shirley Ryan Ability Lab, Chicago, IL, USA
- Claudia Clopath: Department of Bioengineering, Imperial College London, London, UK
- Juan A Gallego: Department of Bioengineering, Imperial College London, London, UK
4. Versteeg C, McCart JD, Ostrow M, Zoltowski DM, Washington CB, Driscoll L, Codol O, Michaels JA, Linderman SW, Sussillo D, Pandarinath C. Computation-through-Dynamics Benchmark: Simulated datasets and quality metrics for dynamical models of neural activity. bioRxiv [preprint] 2025:2025.02.07.637062. DOI: 10.1101/2025.02.07.637062. PMID: 39975240; PMCID: PMC11839132.
Abstract
A primary goal of systems neuroscience is to discover how ensembles of neurons transform inputs into goal-directed behavior, a process known as neural computation. A powerful framework for understanding neural computation uses neural dynamics, the rules that describe the temporal evolution of neural activity, to explain how goal-directed input-output transformations occur. As dynamical rules are not directly observable, we need computational models that can infer neural dynamics from recorded neural activity. We typically validate such models using synthetic datasets with known ground-truth dynamics, but existing synthetic datasets do not reflect fundamental features of neural computation and are thus poor proxies for neural systems. Further, the field lacks validated metrics for quantifying the accuracy of the dynamics inferred by models. The Computation-through-Dynamics Benchmark (CtDB) fills these critical gaps by providing: 1) synthetic datasets that reflect computational properties of biological neural circuits, 2) interpretable metrics for quantifying model performance, and 3) a standardized pipeline for training and evaluating models with or without known external inputs. In this manuscript, we demonstrate how CtDB can help guide the development, tuning, and troubleshooting of neural dynamics models. In summary, CtDB provides a critical platform for model developers to better understand and characterize neural computation through the lens of dynamics.
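CtDB's actual metrics are not given in this abstract; as a generic illustration of what "quantifying the accuracy of inferred dynamics" can mean, the sketch below scores inferred latents by how well an affine map recovers the ground-truth latent state of a known system. The function name and the toy data are our assumptions.

```python
import numpy as np

def state_r2(true_latent, inferred_latent):
    """R^2 of the ground-truth latent state after an affine fit from the
    inferred latents (both arrays shaped (time, dims))."""
    X = np.column_stack([inferred_latent, np.ones(len(inferred_latent))])
    beta, *_ = np.linalg.lstsq(X, true_latent, rcond=None)
    resid = true_latent - X @ beta
    ss_tot = ((true_latent - true_latent.mean(axis=0)) ** 2).sum()
    return 1.0 - (resid ** 2).sum() / ss_tot

# Ground truth: a 2-D rotational dynamical system with known state.
rng = np.random.default_rng(0)
T, step = 300, 0.1
A = np.array([[np.cos(step), -np.sin(step)],
              [np.sin(step),  np.cos(step)]])
z = np.empty((T, 2)); z[0] = [1.0, 0.0]
for t in range(1, T):
    z[t] = A @ z[t - 1]
good = z @ rng.standard_normal((2, 2)) + 0.01 * rng.standard_normal((T, 2))
bad = rng.standard_normal((T, 2))                 # unrelated "latents"
r2_good, r2_bad = state_r2(z, good), state_r2(z, bad)
```

A faithful model's latents (here, a noisy linear mixing of the true state) score near 1, while unrelated latents score near 0, which is the discriminative behavior any such benchmark metric needs.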
Affiliation(s)
- Christopher Versteeg: Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Jonathan D McCart: Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA; Center for Machine Learning, Georgia Institute of Technology, Atlanta, GA, USA
- Mitchell Ostrow: Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- David M Zoltowski: Wu Tsai Neurosciences Institute, Stanford, CA, USA; Department of Statistics, Stanford University, Stanford, CA, USA
- Clayton B Washington: Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Laura Driscoll: Allen Institute for Neural Dynamics, Seattle, WA, USA; Department of Neurobiology & Biophysics, University of Washington, Seattle, WA, USA
- Olivier Codol: Département de Neurosciences, Faculté de Médecine, Université de Montréal, Montréal, Canada; MILA, Quebec Artificial Intelligence Institute, Montréal, Canada
- Jonathan A Michaels: School of Kinesiology and Health Science, Faculty of Health, York University, Toronto, ON, Canada
- Scott W Linderman: Wu Tsai Neurosciences Institute, Stanford, CA, USA; Department of Statistics, Stanford University, Stanford, CA, USA
- David Sussillo: Wu Tsai Neurosciences Institute, Stanford, CA, USA; Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Chethan Pandarinath: Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA; Department of Neurosurgery, Emory University, Atlanta, GA, USA
5. Oby ER, Degenhart AD, Grigsby EM, Motiwala A, McClain NT, Marino PJ, Yu BM, Batista AP. Dynamical constraints on neural population activity. Nat Neurosci 2025; 28:383-393. DOI: 10.1038/s41593-024-01845-7. PMID: 39825141; PMCID: PMC11802451. Received 22 Dec 2023; accepted 25 Oct 2024.
Abstract
The manner in which neural activity unfolds over time is thought to be central to sensory, motor and cognitive functions in the brain. Network models have long posited that the brain's computations involve time courses of activity that are shaped by the underlying network. A prediction from this view is that the activity time courses should be difficult to violate. We leveraged a brain-computer interface to challenge monkeys to violate the naturally occurring time courses of neural population activity that we observed in the motor cortex. This included challenging animals to traverse the natural time course of neural activity in a time-reversed manner. Animals were unable to violate the natural time courses of neural activity when directly challenged to do so. These results provide empirical support for the view that activity time courses observed in the brain indeed reflect the underlying network-level computational mechanisms that they are believed to implement.
Affiliation(s)
- Emily R Oby: Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Biomedical and Molecular Sciences, Queen's University, Kingston, Ontario, Canada
- Alan D Degenhart: Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Erinn M Grigsby: Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, PA, USA; Rehabilitation and Neural Engineering Laboratory, University of Pittsburgh, Pittsburgh, PA, USA
- Asma Motiwala: Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Nicole T McClain: Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Patrick J Marino: Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Byron M Yu: Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Aaron P Batista: Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
6. Kalidindi HT, Crevecoeur F. Task-dependent coarticulation of movement sequences. eLife 2024; 13:RP96854. DOI: 10.7554/elife.96854. PMID: 39331027; PMCID: PMC11434614. Open access.
Abstract
Combining individual actions into sequences is a hallmark of everyday activities. Classical theories propose that the motor system forms a single specification of the sequence as a whole, leading to the coarticulation of the different elements. In contrast, recent neural recordings challenge this idea and suggest independent execution of each element specified separately. Here, we show that separate or coarticulated sequences can result from the same task-dependent controller, without implying different representations in the brain. Simulations show that planning for multiple reaches simultaneously allows separate or coarticulated sequences depending on instructions about intermediate goals. Human experiments in a two-reach sequence task validated this model. Furthermore, in coarticulated sequences, the second goal influenced long-latency stretch responses to external loads applied during the first reach, demonstrating the involvement of the sensorimotor network supporting fast feedback control. Overall, our study establishes a computational framework for sequence production that highlights the importance of feedback control in this essential motor skill.
Affiliation(s)
- Hari Teja Kalidindi: Institute for Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), Université Catholique de Louvain, Louvain-la-Neuve, Belgium; Institute of Neuroscience (IoNS), Université Catholique de Louvain, Brussels, Belgium
- Frederic Crevecoeur: Institute for Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), Université Catholique de Louvain, Louvain-la-Neuve, Belgium; Institute of Neuroscience (IoNS), Université Catholique de Louvain, Brussels, Belgium; WEL Research Institute, Wavre, Belgium
7. Schimel M, Kao TC, Hennequin G. When and why does motor preparation arise in recurrent neural network models of motor control? eLife 2024; 12:RP89131. DOI: 10.7554/elife.89131. PMID: 39316044; PMCID: PMC11421851. Open access.
Abstract
During delayed ballistic reaches, motor areas consistently display movement-specific activity patterns prior to movement onset. It is unclear why these patterns arise: while they have been proposed to seed an initial neural state from which the movement unfolds, recent experiments have uncovered the presence and necessity of ongoing inputs during movement, which may lessen the need for careful initialization. Here, we modeled the motor cortex as an input-driven dynamical system and asked how this system should optimally be controlled to perform fast delayed reaches. We find that delay-period inputs consistently arise in an optimally controlled model of M1. By studying a variety of network architectures, we could dissect and predict the situations in which it is beneficial for a network to prepare. Finally, we show that optimal input-driven control of neural dynamics gives rise to multiple phases of preparation during reach sequences, providing a novel explanation for experimentally observed features of monkey M1 activity in double reaching.
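The benefit of delay-period ("preparatory") input can be illustrated with a much simpler linear-quadratic caricature of an input-driven network: allowing control before the go cue can only lower, and generically strictly lowers, the cost of reaching a movement-specific state on time. This is a toy sketch under our own assumptions (dimensions, cost weights, random dynamics), not the paper's model.

```python
import numpy as np

def optimal_inputs(A, B, x0, x_star, T, r=1e-2):
    """Inputs minimizing ||x_T - x_star||^2 + r * sum_t ||u_t||^2 for the
    linear network x_{t+1} = A x_t + B u_t (a ridge least-squares problem)."""
    n, m = B.shape
    # The effect of u_t on the final state x_T is A^(T-1-t) B; stack these
    # blocks into one linear map G acting on the stacked input vector.
    G = np.hstack([np.linalg.matrix_power(A, T - 1 - t) @ B for t in range(T)])
    d = x_star - np.linalg.matrix_power(A, T) @ x0
    u = np.linalg.solve(G.T @ G + r * np.eye(m * T), G.T @ d)
    cost = ((G @ u - d) ** 2).sum() + r * (u ** 2).sum()
    return u.reshape(T, m), cost

rng = np.random.default_rng(2)
n, m, delay, move = 8, 2, 10, 10
A = rng.standard_normal((n, n)) / np.sqrt(n) * 0.9   # random recurrence
B = rng.standard_normal((n, m))
x0 = rng.standard_normal(n)
x_star = rng.standard_normal(n)                      # movement-specific state

# Control allowed during the delay vs. only after the go cue (in the latter
# case the state first drifts autonomously for `delay` steps).
u_early, cost_early = optimal_inputs(A, B, x0, x_star, delay + move)
u_late, cost_late = optimal_inputs(A, B, np.linalg.matrix_power(A, delay) @ x0,
                                   x_star, move)
```

Because zeroing the delay-period inputs reproduces the go-cue-only solution, the early-control optimum can never cost more, and the optimizer in fact recruits nonzero delay-period input, a minimal analogue of preparation emerging from optimal control.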
Affiliation(s)
- Marine Schimel: Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Ta-Chu Kao: Meta Reality Labs, Burlingame, United States
- Guillaume Hennequin: Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, United Kingdom
8. Kirk EA, Hope KT, Sober SJ, Sauerbrei BA. An output-null signature of inertial load in motor cortex. Nat Commun 2024; 15:7309. DOI: 10.1038/s41467-024-51750-7. PMID: 39181866; PMCID: PMC11344817. Received 7 Dec 2023; accepted 15 Aug 2024. Open access.
Abstract
Coordinated movement requires the nervous system to continuously compensate for changes in mechanical load across different conditions. For voluntary movements like reaching, the motor cortex is a critical hub that generates commands to move the limbs and counteract loads. How does cortex contribute to load compensation when rhythmic movements are sequenced by a spinal pattern generator? Here, we address this question by manipulating the mass of the forelimb in unrestrained mice during locomotion. While load produces changes in motor output that are robust to inactivation of motor cortex, it also induces a profound shift in cortical dynamics. This shift is minimally affected by cerebellar perturbation and significantly larger than the load response in the spinal motoneuron population. This latent representation may enable motor cortex to generate appropriate commands when a voluntary movement must be integrated with an ongoing, spinally-generated rhythm.
Affiliation(s)
- Eric A Kirk: Department of Neurosciences, Case Western Reserve University School of Medicine, Cleveland, OH, USA
- Keenan T Hope: Department of Neurosciences, Case Western Reserve University School of Medicine, Cleveland, OH, USA
- Samuel J Sober: Department of Biology, Emory University, Atlanta, GA, USA
- Britton A Sauerbrei: Department of Neurosciences, Case Western Reserve University School of Medicine, Cleveland, OH, USA
9. Aldarondo D, Merel J, Marshall JD, Hasenclever L, Klibaite U, Gellis A, Tassa Y, Wayne G, Botvinick M, Ölveczky BP. A virtual rodent predicts the structure of neural activity across behaviours. Nature 2024; 632:594-602. DOI: 10.1038/s41586-024-07633-4. PMID: 38862024; PMCID: PMC12080270. Received 16 Mar 2023; accepted 30 May 2024.
Abstract
Animals have exquisite control of their bodies, allowing them to perform a diverse range of behaviours. How such control is implemented by the brain, however, remains unclear. Advancing our understanding requires models that can relate principles of control to the structure of neural activity in behaving animals. Here, to facilitate this, we built a 'virtual rodent', in which an artificial neural network actuates a biomechanically realistic model of the rat [1] in a physics simulator [2]. We used deep reinforcement learning [3-5] to train the virtual agent to imitate the behaviour of freely moving rats, thus allowing us to compare neural activity recorded in real rats to the network activity of a virtual rodent mimicking their behaviour. We found that neural activity in the sensorimotor striatum and motor cortex was better predicted by the virtual rodent's network activity than by any features of the real rat's movements, consistent with both regions implementing inverse dynamics [6]. Furthermore, the network's latent variability predicted the structure of neural variability across behaviours and afforded robustness in a way consistent with the minimal intervention principle of optimal feedback control [7]. These results demonstrate how physical simulation of biomechanically realistic virtual animals can help interpret the structure of neural activity across behaviour and relate it to theoretical principles of motor control.
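The model-comparison logic, asking whether neural activity is better predicted by a controller's internal state than by observable movement features, can be sketched with ridge regression on synthetic data. This is entirely illustrative; the paper's actual encoding-model pipeline differs, and all names and dimensions below are our assumptions.

```python
import numpy as np

def ridge_r2(X, Y, alpha=1.0):
    """In-sample R^2 of a ridge regression from features X to activity Y."""
    Xb = np.column_stack([X, np.ones(len(X))])
    beta = np.linalg.solve(Xb.T @ Xb + alpha * np.eye(Xb.shape[1]), Xb.T @ Y)
    resid = Y - Xb @ beta
    return 1.0 - (resid ** 2).sum() / ((Y - Y.mean(axis=0)) ** 2).sum()

# Toy world: neural activity is driven by a 4-D controller latent, while the
# observable movement reflects only 2 of those dimensions.
rng = np.random.default_rng(3)
T = 500
latent = rng.standard_normal((T, 4))                    # controller state
neural = latent @ rng.standard_normal((4, 6)) \
         + 0.05 * rng.standard_normal((T, 6))
movement = latent[:, :2] @ rng.standard_normal((2, 4))  # kinematic features
r2_latent, r2_movement = ridge_r2(latent, neural), ridge_r2(movement, neural)
```

When the controller's state carries information that never reaches the kinematics (e.g. output-null or feedback-related dimensions), latent-based prediction wins, which is the qualitative pattern the paper reports for striatum and motor cortex.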
Affiliation(s)
- Diego Aldarondo: Department of Organismic and Evolutionary Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA; Fauna Robotics, New York, NY, USA
- Josh Merel: DeepMind, Google, London, UK; Fauna Robotics, New York, NY, USA
- Jesse D Marshall: Department of Organismic and Evolutionary Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA; Reality Labs, Meta, New York, NY, USA
- Ugne Klibaite: Department of Organismic and Evolutionary Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA
- Amanda Gellis: Department of Organismic and Evolutionary Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA
- Matthew Botvinick: DeepMind, Google, London, UK; Gatsby Computational Neuroscience Unit, University College London, London, UK
- Bence P Ölveczky: Department of Organismic and Evolutionary Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA
10. Jiang H, Bu X, Sui X, Tang H, Pan X, Chen Y. Spike Neural Network of Motor Cortex Model for Arm Reaching Control. Annu Int Conf IEEE Eng Med Biol Soc 2024; 2024:1-4. DOI: 10.1109/embc53108.2024.10781802. PMID: 40039622.
Abstract
Motor cortex modeling is crucial for understanding movement planning and execution. While interconnected recurrent neural networks have successfully described the dynamics of neural population activity, most existing methods use continuous signal-based neural networks, which do not reflect the spiking nature of biological neural signals. To address this limitation, we propose a recurrent spiking neural network to simulate motor cortical activity during an arm-reaching task. Specifically, our model is built upon integrate-and-fire spiking neurons with conductance-based synapses. We carefully designed the interconnections of neurons with two different firing time scales: "fast" and "slow" neurons. Experimental results demonstrate the effectiveness of our method, with the model's neuronal activity in good agreement with monkey motor cortex data at both the single-cell and population levels. Quantitative analysis reveals a correlation coefficient of 0.89 between the model's activity and the real data. These results suggest the possibility of multiple timescales in motor cortical control.
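A minimal version of the model's building block, an integrate-and-fire neuron with a conductance-based excitatory synapse, can be written in a few lines of forward-Euler integration. The parameter values below are generic textbook constants, not those of the paper.

```python
import numpy as np

def run_lif(T=1.0, dt=1e-4, seed=0):
    """Leaky integrate-and-fire neuron with an exponentially decaying
    conductance-based excitatory synapse driven by Poisson input.
    Returns the list of spike times (seconds)."""
    rng = np.random.default_rng(seed)
    # Membrane and synapse parameters (illustrative textbook values)
    C, gL, EL = 200e-12, 10e-9, -70e-3      # capacitance, leak, rest (F, S, V)
    Vth, Vreset = -50e-3, -65e-3            # threshold and reset potentials
    Ee, tau_e, w_e = 0.0, 5e-3, 1.5e-9      # exc. reversal, decay, weight
    rate_in = 800.0                         # summed presynaptic rate (Hz)
    v, ge = EL, 0.0
    spikes = []
    for i in range(int(T / dt)):
        if rng.random() < rate_in * dt:     # Poisson input spike arrives
            ge += w_e
        ge -= dt * ge / tau_e               # conductance decays exponentially
        # Conductance-based synapse: current depends on driving force (Ee - v)
        v += dt * (gL * (EL - v) + ge * (Ee - v)) / C
        if v >= Vth:
            spikes.append(i * dt)
            v = Vreset
    return spikes

spike_times = run_lif()
```

With these constants the mean excitatory conductance (about 6 nS) pulls the steady-state voltage above threshold, so the neuron fires tens of spikes per second; a full network version would couple many such units with the "fast" and "slow" time scales the abstract describes.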
11. Parker Jones O, Mitchell AL, Yamada J, Merkt W, Geisert M, Havoutis I, Posner I. Oscillating latent dynamics in robot systems during walking and reaching. Sci Rep 2024; 14:11434. DOI: 10.1038/s41598-024-61610-5. PMID: 38763969; PMCID: PMC11102915. Received 14 Jul 2022; accepted 7 May 2024. Open access.
Abstract
Sensorimotor control of complex, dynamic systems such as humanoids or quadrupedal robots is notoriously difficult. While artificial systems traditionally employ hierarchical optimisation approaches or black-box policies, recent results in systems neuroscience suggest that complex behaviours such as locomotion and reaching are correlated with limit cycles in the primate motor cortex. One recent study showed that, when applied to a learned latent space, oscillating patterns of activation can be used to control locomotion in a physical robot. While reminiscent of limit cycles observed in primate motor cortex, these dynamics are unsurprising given the cyclic nature of the robot's behaviour (walking). In this preliminary investigation, we consider how a similar approach extends to a less obviously cyclic behaviour (reaching). This has been explored in prior work using computational simulations, but simulations necessarily make simplifying assumptions that do not always hold in reality, so their results do not trivially transfer to real robot platforms. Our primary contribution is to demonstrate that we can infer and control real robot states in a learnt representation using oscillatory dynamics during reaching tasks. We further show that the learned latent representation encodes interpretable movements in the robot's workspace. Compared to robot locomotion, the dynamics that we observe for reaching are not fully cyclic, as they do not begin and end at the same position in latent space. However, they do begin to trace out the shape of a cycle, and, by construction, they are driven by the same underlying oscillatory mechanics.
Affiliation(s)
- Oiwi Parker Jones: Applied AI Lab, Oxford Robotics Institute, University of Oxford, Oxford, UK
- Alexander L Mitchell: Applied AI Lab, Oxford Robotics Institute, University of Oxford, Oxford, UK; Dynamic Robot Systems Group, Oxford Robotics Institute, University of Oxford, Oxford, UK
- Jun Yamada: Applied AI Lab, Oxford Robotics Institute, University of Oxford, Oxford, UK
- Wolfgang Merkt: Dynamic Robot Systems Group, Oxford Robotics Institute, University of Oxford, Oxford, UK
- Mathieu Geisert: Dynamic Robot Systems Group, Oxford Robotics Institute, University of Oxford, Oxford, UK
- Ioannis Havoutis: Dynamic Robot Systems Group, Oxford Robotics Institute, University of Oxford, Oxford, UK
- Ingmar Posner: Applied AI Lab, Oxford Robotics Institute, University of Oxford, Oxford, UK
12. Sadeghi M, Sharif Razavian R, Bazzi S, Chowdhury RH, Batista AP, Loughlin PJ, Sternad D. Inferring control objectives in a virtual balancing task in humans and monkeys. eLife 2024; 12:RP88514. DOI: 10.7554/elife.88514. PMID: 38738986; PMCID: PMC11090506. Open access.
Abstract
Natural behaviors have redundancy, which implies that humans and animals can achieve their goals with different strategies. Given only observations of behavior, is it possible to infer the control objective that the subject is employing? This challenge is particularly acute in animal behavior because we cannot ask or instruct the subject to use a particular strategy. This study presents a three-pronged approach to infer an animal's control objective from behavior. First, both humans and monkeys performed a virtual balancing task for which different control strategies could be utilized. Under matched experimental conditions, corresponding behaviors were observed in humans and monkeys. Second, a generative model was developed that represented two main control objectives to achieve the task goal. Model simulations were used to identify aspects of behavior that could distinguish which control objective was being used. Third, these behavioral signatures allowed us to infer the control objective used by human subjects who had been instructed to use one control objective or the other. Based on this validation, we could then infer objectives from animal subjects. Being able to positively identify a subject's control objective from observed behavior can provide a powerful tool to neurophysiologists as they seek the neural mechanisms of sensorimotor coordination.
Affiliation(s)
- Mohsen Sadeghi
  - Department of Biology, Northeastern University, Boston, United States
- Reza Sharif Razavian
  - Department of Biology, Northeastern University, Boston, United States
  - Department of Electrical and Computer Engineering, Northeastern University, Boston, United States
  - Department of Mechanical Engineering, Northern Arizona University, Flagstaff, United States
- Salah Bazzi
  - Department of Biology, Northeastern University, Boston, United States
  - Department of Electrical and Computer Engineering, Northeastern University, Boston, United States
  - Institute for Experiential Robotics, Northeastern University, Boston, United States
- Raeed H Chowdhury
  - Department of Bioengineering, and Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, United States
- Aaron P Batista
  - Department of Bioengineering, and Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, United States
- Patrick J Loughlin
  - Department of Bioengineering, and Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, United States
- Dagmar Sternad
  - Department of Biology, Northeastern University, Boston, United States
  - Department of Electrical and Computer Engineering, Northeastern University, Boston, United States
  - Institute for Experiential Robotics, Northeastern University, Boston, United States
  - Department of Physics, Northeastern University, Boston, United States
13
Strohmer B, Najarro E, Ausborn J, Berg RW, Tolu S. Sparse Firing in a Hybrid Central Pattern Generator for Spinal Motor Circuits. Neural Comput 2024; 36:759-780. [PMID: 38658025] [DOI: 10.1162/neco_a_01660]
Abstract
Central pattern generators are circuits generating rhythmic movements, such as walking. The majority of existing computational models of these circuits produce antagonistic output where all neurons within a population spike with a broad burst at about the same neuronal phase with respect to network output. However, experimental recordings reveal that many neurons within these circuits fire sparsely, sometimes as rarely as once within a cycle. Here we address the sparse neuronal firing and develop a model to replicate the behavior of individual neurons within rhythm-generating populations to increase biological plausibility and facilitate new insights into the underlying mechanisms of rhythm generation. The developed network architecture is able to produce sparse firing of individual neurons, creating a novel implementation for exploring the contribution of network architecture on rhythmic output. Furthermore, the introduction of sparse firing of individual neurons within the rhythm-generating circuits is one of the factors that allows for a broad neuronal phase representation of firing at the population level. This moves the model toward recent experimental findings of evenly distributed neuronal firing across phases among individual spinal neurons. The network is tested by methodically iterating select parameters to gain an understanding of how connectivity and the interplay of excitation and inhibition influence the output. This knowledge can be applied in future studies to implement a biologically plausible rhythm-generating circuit for testing biological hypotheses.
Affiliation(s)
- Beck Strohmer
  - Department of Electrical and Photonics Engineering, Technical University of Denmark, 2800 Lyngby, Denmark
- Elias Najarro
  - Department of Digital Design, IT University of Copenhagen, DK-2300 Copenhagen, Denmark
- Jessica Ausborn
  - Department of Neurobiology and Anatomy, Drexel University College of Medicine, Philadelphia, PA, U.S.A.
- Rune W Berg
  - Department of Neuroscience, University of Copenhagen, DK-1165 Copenhagen, Denmark
- Silvia Tolu
  - Department of Electrical and Photonics Engineering, Technical University of Denmark, 2800 Lyngby, Denmark
14
Churchland MM, Shenoy KV. Preparatory activity and the expansive null-space. Nat Rev Neurosci 2024; 25:213-236. [PMID: 38443626] [DOI: 10.1038/s41583-024-00796-z]
Abstract
The study of the cortical control of movement experienced a conceptual shift over recent decades, as the basic currency of understanding shifted from single-neuron tuning towards population-level factors and their dynamics. This transition was informed by a maturing understanding of recurrent networks, where mechanism is often characterized in terms of population-level factors. By estimating factors from data, experimenters could test network-inspired hypotheses. Central to such hypotheses are 'output-null' factors that do not directly drive motor outputs yet are essential to the overall computation. In this Review, we highlight how the hypothesis of output-null factors was motivated by the venerable observation that motor-cortex neurons are active during movement preparation, well before movement begins. We discuss how output-null factors then became similarly central to understanding neural activity during movement. We discuss how this conceptual framework provided key analysis tools, making it possible for experimenters to address long-standing questions regarding motor control. We highlight an intriguing trend: as experimental and theoretical discoveries accumulate, the range of computational roles hypothesized to be subserved by output-null factors continues to expand.
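The central construct in this Review, output-null versus output-potent activity, reduces to a projection in linear algebra. A minimal sketch (the readout matrix `W` and all dimensions below are illustrative assumptions, not taken from the Review):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-output linear readout from a 10-neuron population.
W = rng.standard_normal((2, 10))
x = rng.standard_normal(10)          # population activity at one instant

# Projector onto the row space of W ("output-potent" directions):
P = W.T @ np.linalg.inv(W @ W.T) @ W
x_potent = P @ x
x_null = x - x_potent                # "output-null" component

assert np.allclose(W @ x_null, 0)          # null component drives no output
assert np.allclose(W @ x_potent, W @ x)    # potent component carries it all
```

Preparatory activity confined to the null space of the readout can evolve freely without producing movement, which is the mechanism the Review traces.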
Affiliation(s)
- Mark M Churchland
  - Department of Neuroscience, Columbia University, New York, NY, USA
  - Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
  - Kavli Institute for Brain Science, Columbia University, New York, NY, USA
- Krishna V Shenoy
  - Department of Electrical Engineering, Stanford University, Stanford, CA, USA
  - Department of Bioengineering, Stanford University, Stanford, CA, USA
  - Department of Neurobiology, Stanford University, Stanford, CA, USA
  - Department of Neurosurgery, Stanford University, Stanford, CA, USA
  - Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
  - Bio-X Institute, Stanford University, Stanford, CA, USA
  - Howard Hughes Medical Institute at Stanford University, Stanford, CA, USA
15
Almani MN, Lazzari J, Chacon A, Saxena S. μSim: A goal-driven framework for elucidating the neural control of movement through musculoskeletal modeling. bioRxiv 2024:2024.02.02.578628. [PMID: 38405828] [PMCID: PMC10888726] [DOI: 10.1101/2024.02.02.578628]
Abstract
How does the motor cortex (MC) produce purposeful and generalizable movements from the complex musculoskeletal system in a dynamic environment? To elucidate the underlying neural dynamics, we use a goal-driven approach to model MC by considering its goal as a controller driving the musculoskeletal system through desired states to achieve movement. Specifically, we formulate the MC as a recurrent neural network (RNN) controller producing muscle commands while receiving sensory feedback from biologically accurate musculoskeletal models. Given this real-time simulated feedback implemented in advanced physics simulation engines, we use deep reinforcement learning to train the RNN to achieve desired movements under specified neural and musculoskeletal constraints. Activity of the trained model can accurately decode experimentally recorded neural population dynamics and single-unit MC activity, while generalizing well to testing conditions significantly different from training. Simultaneous goal- and data-driven modeling in which we use the recorded neural activity as observed states of the MC further enhances direct and generalizable single-unit decoding. Finally, we show that this framework elucidates computational principles of how neural dynamics enable flexible control of movement and make this framework easy to use for future experiments.
16
Vahidi P, Sani OG, Shanechi MM. Modeling and dissociation of intrinsic and input-driven neural population dynamics underlying behavior. Proc Natl Acad Sci U S A 2024; 121:e2212887121. [PMID: 38335258] [PMCID: PMC10873612] [DOI: 10.1073/pnas.2212887121]
Abstract
Neural dynamics can reflect intrinsic dynamics or dynamic inputs, such as sensory inputs or inputs from other brain regions. To avoid misinterpreting temporally structured inputs as intrinsic dynamics, dynamical models of neural activity should account for measured inputs. However, incorporating measured inputs remains elusive in joint dynamical modeling of neural-behavioral data, which is important for studying neural computations of behavior. We first show how training dynamical models of neural activity while considering behavior but not input or input but not behavior may lead to misinterpretations. We then develop an analytical learning method for linear dynamical models that simultaneously accounts for neural activity, behavior, and measured inputs. The method provides the capability to prioritize the learning of intrinsic behaviorally relevant neural dynamics and dissociate them from both other intrinsic dynamics and measured input dynamics. In data from a simulated brain with fixed intrinsic dynamics that performs different tasks, the method correctly finds the same intrinsic dynamics regardless of the task while other methods can be influenced by the task. In neural datasets from three subjects performing two different motor tasks with task instruction sensory inputs, the method reveals low-dimensional intrinsic neural dynamics that are missed by other methods and are more predictive of behavior and/or neural activity. The method also uniquely finds that the intrinsic behaviorally relevant neural dynamics are largely similar across the different subjects and tasks, whereas the overall neural dynamics are not. These input-driven dynamical models of neural-behavioral data can uncover intrinsic dynamics that may otherwise be missed.
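The class of model this paper learns can be pictured with a toy simulation: a linear state-space model whose latent state is driven both by intrinsic dynamics and by a measured input, with separate neural and behavioral readouts. All matrices below are made-up placeholders for illustration, not the paper's fitted method:

```python
import numpy as np

rng = np.random.default_rng(1)

# x_{t+1} = A x_t + B u_t + w_t   (latent state; u_t is a measured input)
# y_t     = C x_t                 (recorded neural activity)
# z_t     = D x_t                 (behavior read out from the state)
A = np.array([[0.95, 0.05], [-0.05, 0.95]])  # intrinsic damped-rotational dynamics
B = np.array([[0.5], [0.0]])                 # how the measured input drives the state
C = rng.standard_normal((20, 2))             # readout into 20 recorded neurons
D = np.array([[1.0, 0.0]])                   # behavior readout

T = 200
u = np.sin(0.1 * np.arange(T))[:, None]      # e.g. a task-instruction input
x = np.zeros((T, 2))
for t in range(T - 1):
    x[t + 1] = A @ x[t] + B @ u[t] + 0.01 * rng.standard_normal(2)
y = x @ C.T   # neural observations
z = x @ D.T   # behavior
```

Fitting `A` without accounting for `u` would fold the input's temporal structure into the estimated intrinsic dynamics, which is the misinterpretation the paper's method is designed to avoid.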
Affiliation(s)
- Parsa Vahidi
  - Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Omid G. Sani
  - Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Maryam M. Shanechi
  - Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
  - Neuroscience Graduate Program, University of Southern California, Los Angeles, CA 90089
  - Thomas Lord Department of Computer Science and Alfred E. Mann Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
17
Kuzmina E, Kriukov D, Lebedev M. Neuronal travelling waves explain rotational dynamics in experimental datasets and modelling. Sci Rep 2024; 14:3566. [PMID: 38347042] [PMCID: PMC10861525] [DOI: 10.1038/s41598-024-53907-2]
Abstract
Spatiotemporal properties of neuronal population activity in cortical motor areas have been subjects of experimental and theoretical investigations, generating numerous interpretations regarding mechanisms for preparing and executing limb movements. Two competing models, representational and dynamical, strive to explain the relationship between movement parameters and neuronal activity. A dynamical model uses the jPCA method that holistically characterizes oscillatory activity in neuron populations by maximizing the data rotational dynamics. Different rotational dynamics interpretations revealed by the jPCA approach have been proposed. Yet, the nature of such dynamics remains poorly understood. We comprehensively analyzed several neuronal-population datasets and found rotational dynamics consistently accounted for by a traveling wave pattern. For quantifying rotation strength, we developed a complex-valued measure, the gyration number. Additionally, we identified parameters influencing rotation extent in the data. Our findings suggest that rotational dynamics and traveling waves are typically the same phenomena, so reevaluation of the previous interpretations where they were considered separate entities is needed.
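The paper's central point, that a traveling wave alone can produce apparent rotational dynamics, can be demonstrated in a few lines. A sketch under simplifying assumptions: a pure sinusoidal wave and an unconstrained least-squares dynamics fit, rather than jPCA's constrained skew-symmetric fit or the paper's gyration number:

```python
import numpy as np

# A traveling wave across 10 units: unit i is a phase-shifted sinusoid.
T, N = 500, 10
t = np.linspace(0, 10 * np.pi, T)
phase = np.linspace(0, 2 * np.pi, N, endpoint=False)
X = np.sin(t[:, None] - phase[None, :])   # (T, N) "population activity"

# Fit one-step linear dynamics X[t+1] ~ X[t] @ F by least squares.
F, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)

# Complex eigenvalues of F signal rotational dynamics: the wave alone is
# enough to produce them, with no rotation built in by hand.
eig = np.linalg.eigvals(F)
rotational = np.abs(eig.imag).max() > 1e-3   # True for this wave
```

The phase-shifted sinusoids span a two-dimensional plane in which the state literally rotates, so the fitted dynamics acquire a complex-conjugate eigenvalue pair.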
Affiliation(s)
- Ekaterina Kuzmina
  - Skolkovo Institute of Science and Technology, Vladimir Zelman Center for Neurobiology and Brain Rehabilitation, Moscow, Russia, 121205
  - Artificial Intelligence Research Institute (AIRI), Moscow, Russia
- Dmitrii Kriukov
  - Artificial Intelligence Research Institute (AIRI), Moscow, Russia
  - Skolkovo Institute of Science and Technology, Center for Molecular and Cellular Biology, Moscow, Russia, 121205
- Mikhail Lebedev
  - Faculty of Mechanics and Mathematics, Lomonosov Moscow State University, Moscow, Russia, 119992
  - Sechenov Institute of Evolutionary Physiology and Biochemistry of the Russian Academy of Sciences, Saint-Petersburg, Russia, 194223
18
Cross KP, Cook DJ, Scott SH. Rapid Online Corrections for Proprioceptive and Visual Perturbations Recruit Similar Circuits in Primary Motor Cortex. eNeuro 2024; 11:ENEURO.0083-23.2024. [PMID: 38238081] [PMCID: PMC10867723] [DOI: 10.1523/eneuro.0083-23.2024]
Abstract
An important aspect of motor function is our ability to rapidly generate goal-directed corrections for disturbances to the limb or behavioral goal. The primary motor cortex (M1) is a key region involved in processing feedback for rapid motor corrections, yet we know little about how M1 circuits are recruited by different sources of sensory feedback to make rapid corrections. We trained two male monkeys (Macaca mulatta) to make goal-directed reaches and on random trials introduced different sensory errors by either jumping the visual location of the goal (goal jump), jumping the visual location of the hand (cursor jump), or applying a mechanical load to displace the hand (proprioceptive feedback). Sensory perturbations evoked a broad response in M1 with ∼73% of neurons (n = 257) responding to at least one of the sensory perturbations. Feedback responses were also similar as response ranges between the goal and cursor jumps were highly correlated (range of r = [0.91, 0.97]) as were the response ranges between the mechanical loads and the visual perturbations (range of r = [0.68, 0.86]). Lastly, we identified the neural subspace each perturbation response resided in and found a strong overlap between the two visual perturbations (range of overlap index, 0.73-0.89) and between the mechanical loads and visual perturbations (range of overlap index, 0.36-0.47) indicating each perturbation evoked similar structure of activity at the population level. Collectively, our results indicate rapid responses to errors from different sensory sources target similar overlapping circuits in M1.
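The subspace-overlap analysis in this abstract can be illustrated with a generic overlap measure between two response subspaces. This sketch assumes orthonormal basis columns and uses one common choice of overlap index (mean squared projection), not necessarily the exact index used in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def subspace_overlap(U, V):
    """Overlap between two subspaces given orthonormal basis columns:
    1 for identical subspaces, 0 for mutually orthogonal ones."""
    return np.linalg.norm(V.T @ U, "fro") ** 2 / U.shape[1]

# Orthonormal bases for two hypothetical 3-D perturbation-response
# subspaces in a 50-dimensional neural population space.
A = np.linalg.qr(rng.standard_normal((50, 3)))[0]
B = np.linalg.qr(rng.standard_normal((50, 3)))[0]

same = subspace_overlap(A, A)   # identical subspaces -> 1.0
diff = subspace_overlap(A, B)   # random subspaces -> near dim/ambient, here ~3/50
```

High overlap between the subspaces evoked by different perturbations is what the authors interpret as different sensory errors targeting shared M1 circuits.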
Affiliation(s)
- Kevin P Cross
  - Neuroscience Center, University of North Carolina, Chapel Hill, North Carolina 27599
- Douglas J Cook
  - Department of Surgery, Queen's University, Kingston, Ontario K7L 3N6, Canada
  - Centre for Neuroscience Studies, Queen's University, Kingston, Ontario K7L 3N6, Canada
- Stephen H Scott
  - Centre for Neuroscience Studies, Queen's University, Kingston, Ontario K7L 3N6, Canada
  - Department of Biomedical and Molecular Sciences, Queen's University, Kingston, Ontario K7L 3N6, Canada
  - Department of Medicine, Queen's University, Kingston, Ontario K7L 3N6, Canada
19
Oby ER, Degenhart AD, Grigsby EM, Motiwala A, McClain NT, Marino PJ, Yu BM, Batista AP. Dynamical constraints on neural population activity. bioRxiv 2024:2024.01.03.573543. [PMID: 38260549] [PMCID: PMC10802336] [DOI: 10.1101/2024.01.03.573543]
Abstract
The manner in which neural activity unfolds over time is thought to be central to sensory, motor, and cognitive functions in the brain. Network models have long posited that the brain's computations involve time courses of activity that are shaped by the underlying network. A prediction from this view is that the activity time courses should be difficult to violate. We leveraged a brain-computer interface (BCI) to challenge monkeys to violate the naturally-occurring time courses of neural population activity that we observed in motor cortex. This included challenging animals to traverse the natural time course of neural activity in a time-reversed manner. Animals were unable to violate the natural time courses of neural activity when directly challenged to do so. These results provide empirical support for the view that activity time courses observed in the brain indeed reflect the underlying network-level computational mechanisms that they are believed to implement.
20
Abbaspourazad H, Erturk E, Pesaran B, Shanechi MM. Dynamical flexible inference of nonlinear latent factors and structures in neural population activity. Nat Biomed Eng 2024; 8:85-108. [PMID: 38082181] [PMCID: PMC11735406] [DOI: 10.1038/s41551-023-01106-1]
Abstract
Modelling the spatiotemporal dynamics in the activity of neural populations while also enabling their flexible inference is hindered by the complexity and noisiness of neural observations. Here we show that the lower-dimensional nonlinear latent factors and latent structures can be computationally modelled in a manner that allows for flexible inference causally, non-causally and in the presence of missing neural observations. To enable flexible inference, we developed a neural network that separates the model into jointly trained manifold and dynamic latent factors such that nonlinearity is captured through the manifold factors and the dynamics can be modelled in tractable linear form on this nonlinear manifold. We show that the model, which we named 'DFINE' (for 'dynamical flexible inference for nonlinear embeddings'), achieves flexible inference in simulations of nonlinear dynamics and across neural datasets representing a diversity of brain regions and behaviours. Compared with earlier neural-network models, DFINE enables flexible inference, better predicts neural activity and behaviour, and better captures the latent neural manifold structure. DFINE may advance the development of neurotechnology and investigations in neuroscience.
Affiliation(s)
- Hamidreza Abbaspourazad
  - Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Eray Erturk
  - Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Bijan Pesaran
  - Departments of Neurosurgery, Neuroscience, and Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Maryam M Shanechi
  - Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
  - Thomas Lord Department of Computer Science, Alfred E. Mann Department of Biomedical Engineering, Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, USA
21
Sadeghi M, Razavian RS, Bazzi S, Chowdhury R, Batista A, Loughlin P, Sternad D. Inferring control objectives in a virtual balancing task in humans and monkeys. bioRxiv 2023:2023.05.02.539055. [PMID: 37205497] [PMCID: PMC10187212] [DOI: 10.1101/2023.05.02.539055]
Abstract
Natural behaviors have redundancy, which implies that humans and animals can achieve their goals with different control objectives. Given only observations of behavior, is it possible to infer the control strategy that the subject is employing? This challenge is particularly acute in animal behavior because we cannot ask or instruct the subject to use a particular control strategy. This study presents a three-pronged approach to infer an animal's control strategy from behavior. First, both humans and monkeys performed a virtual balancing task for which different control objectives could be utilized. Under matched experimental conditions, corresponding behaviors were observed in humans and monkeys. Second, a generative model was developed that represented two main control strategies to achieve the task goal. Model simulations were used to identify aspects of behavior that could distinguish which control objective was being used. Third, these behavioral signatures allowed us to infer the control objective used by human subjects who had been instructed to use one control objective or the other. Based on this validation, we could then infer strategies from animal subjects. Being able to positively identify a subject's control objective from behavior can provide a powerful tool to neurophysiologists as they seek the neural mechanisms of sensorimotor coordination.
Affiliation(s)
- Mohsen Sadeghi
  - Department of Biology, Northeastern University
  - Department of Electrical and Computer Engineering, Northeastern University
- Salah Bazzi
  - Institute for Experiential Robotics, Northeastern University
- Raeed Chowdhury
  - Department of Bioengineering, and Center for the Neural Basis of Cognition, University of Pittsburgh, PA, USA
- Aaron Batista
  - Department of Bioengineering, and Center for the Neural Basis of Cognition, University of Pittsburgh, PA, USA
- Patrick Loughlin
  - Department of Bioengineering, and Center for the Neural Basis of Cognition, University of Pittsburgh, PA, USA
- Dagmar Sternad
  - Department of Biology, Northeastern University
  - Department of Electrical and Computer Engineering, Northeastern University
  - Institute for Experiential Robotics, Northeastern University
  - Department of Physics, Northeastern University
22
Soo WWM, Goudar V, Wang XJ. Training biologically plausible recurrent neural networks on cognitive tasks with long-term dependencies. bioRxiv 2023:2023.10.10.561588. [PMID: 37873445] [PMCID: PMC10592728] [DOI: 10.1101/2023.10.10.561588]
Abstract
Training recurrent neural networks (RNNs) has become a go-to approach for generating and evaluating mechanistic neural hypotheses for cognition. The ease and efficiency of training RNNs with backpropagation through time and the availability of robustly supported deep learning libraries have made RNN modeling more approachable and accessible to neuroscience. Yet, a major technical hindrance remains. Cognitive processes such as working memory and decision making involve neural population dynamics over a long period of time within a behavioral trial and across trials. It is difficult to train RNNs to accomplish tasks where neural representations and dynamics have long temporal dependencies without gating mechanisms such as LSTMs or GRUs, which currently lack experimental support and prohibit direct comparison between RNNs and biological neural circuits. We tackled this problem based on the idea of specialized skip-connections through time to support the emergence of task-relevant dynamics, and subsequently reinstituted biological plausibility by reverting to the original architecture. We show that this approach enables RNNs to successfully learn cognitive tasks that prove impractical if not impossible to learn using conventional methods. Over numerous tasks considered here, we achieve fewer training steps and shorter wall-clock times, particularly in tasks that require learning long-term dependencies via temporal integration over long timescales or maintaining a memory of past events in hidden states. Our methods expand the range of experimental tasks that biologically plausible RNN models can learn, thereby supporting the development of theory for the emergent neural mechanisms of computations involving long-term dependencies.
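The core idea, skip connections through time in an otherwise vanilla RNN, can be sketched as a forward pass in which the hidden state also receives input from `s` steps back, shortening the gradient path for long-term dependencies. Sizes, weights, and the skip interval `s` below are arbitrary illustrative choices, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Vanilla RNN forward pass plus a "skip-through-time" connection.
n, s, T = 16, 10, 50
W_rec = 0.9 * np.linalg.qr(rng.standard_normal((n, n)))[0]  # recurrent weights
W_skip = 0.1 * rng.standard_normal((n, n))                  # skip-through-time weights
W_in = rng.standard_normal((n, 1))                          # input weights

x = rng.standard_normal((T, 1))   # scalar input stream
h = np.zeros((T + 1, n))          # h[0] is the initial hidden state
for t in range(T):
    skip = h[t + 1 - s] if t + 1 >= s else np.zeros(n)
    h[t + 1] = np.tanh(h[t] @ W_rec.T + skip @ W_skip.T + W_in @ x[t])
```

In the authors' scheme the skip connections are a training scaffold: once task dynamics have emerged, the network is reverted to the original (skip-free) architecture to restore biological plausibility.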
23
Visser YF, Medendorp WP, Selen LPJ. Muscular reflex gains reflect changes of mind in reaching. J Neurophysiol 2023; 130:640-651. [PMID: 37584102] [DOI: 10.1152/jn.00197.2023]
Abstract
Decisions for action are accompanied by a continual processing of sensory information, sometimes resulting in a revision of the initial choice, called a change of mind (CoM). Although the motor system is tuned during the formation of a reach decision, it is unclear whether its preparatory state differs between CoM and non-CoM decisions. To test this, participants (n = 14) viewed a random-dot motion (RDM) stimulus of various coherence levels for a random viewing duration. At the onset of a mechanical perturbation that rapidly stretched the pectoralis muscle, they indicated the perceived motion direction by making a reaching movement to one of two targets. Using electromyography (EMG), we quantified the reflex gains of the pectoralis and posterior deltoid muscles. Results show that reflex gains scaled with both the coherence level and the viewing duration of the stimulus. We fit a drift diffusion model (DDM) to the behavioral choices. The decision variable (DV), derived from the DDM, correlated well with the measured reflex gain at the single-trial level. However, when matched on DV magnitude, reflex gains were significantly lower in CoM than non-CoM trials. We conclude that the internal state of the motor system, as measured by the spinal reflexes, reflects the continual deliberation on sensory evidence for action selection, including the postdecisional evidence that can lead to a change of mind.
NEW & NOTEWORTHY: Using behavioral findings, EMG, and computational modeling, we show that not only the perceptual decision outcome but also the accumulating evidence for that outcome is continuously sent to the relevant muscles. Moreover, we show that an upcoming change of mind can be detected in the motor periphery, suggesting that a correlate of the internal decision making process is being sent along.
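The drift-diffusion fit described above rests on a simple accumulate-to-bound process, which can be simulated directly. Parameters here are illustrative, not the fitted values from this study:

```python
import numpy as np

rng = np.random.default_rng(2)

def ddm_trial(drift, bound=1.0, dt=1e-3, noise=1.0, max_t=5.0):
    """One drift-diffusion trial: accumulate noisy evidence until a bound.
    Returns (choice, decision_time); choice 1 is correct for positive drift."""
    dv, t = 0.0, 0.0
    while abs(dv) < bound and t < max_t:
        dv += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if dv > 0 else 0), t

# Stronger evidence (higher motion coherence) -> a larger drift rate,
# hence more accurate and faster choices on average.
acc_hi = np.mean([ddm_trial(drift=3.0)[0] for _ in range(200)])
acc_lo = np.mean([ddm_trial(drift=0.3)[0] for _ in range(200)])
```

The instantaneous decision variable `dv` is the quantity the authors correlate with muscle reflex gains on single trials.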
Affiliation(s)
- Yvonne F Visser
  - Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- W Pieter Medendorp
  - Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Luc P J Selen
  - Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
24
Athalye VR, Khanna P, Gowda S, Orsborn AL, Costa RM, Carmena JM. Invariant neural dynamics drive commands to control different movements. Curr Biol 2023; 33:2962-2976.e15. [PMID: 37402376] [PMCID: PMC10527529] [DOI: 10.1016/j.cub.2023.06.027]
Abstract
It has been proposed that the nervous system has the capacity to generate a wide variety of movements because it reuses some invariant code. Previous work has identified that dynamics of neural population activity are similar during different movements, where dynamics refer to how the instantaneous spatial pattern of population activity changes in time. Here, we test whether invariant dynamics of neural populations are actually used to issue the commands that direct movement. Using a brain-machine interface (BMI) that transforms rhesus macaques' motor-cortex activity into commands for a neuroprosthetic cursor, we discovered that the same command is issued with different neural-activity patterns in different movements. However, these different patterns were predictable, as we found that the transitions between activity patterns are governed by the same dynamics across movements. These invariant dynamics are low dimensional, and critically, they align with the BMI, so that they predict the specific component of neural activity that actually issues the next command. We introduce a model of optimal feedback control (OFC) that shows that invariant dynamics can help transform movement feedback into commands, reducing the input that the neural population needs to control movement. Altogether our results demonstrate that invariant dynamics drive commands to control a variety of movements and show how feedback can be integrated with invariant dynamics to issue generalizable commands.
Affiliation(s)
- Vivek R Athalye
  - Zuckerman Mind Brain Behavior Institute, Departments of Neuroscience and Neurology, Columbia University, New York, NY 10027, USA
- Preeya Khanna
  - Department of Neurology, University of California, San Francisco, San Francisco, CA 94158, USA
- Suraj Gowda
  - Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA 94720, USA
- Amy L Orsborn
  - Departments of Bioengineering, Electrical and Computer Engineering, University of Washington, Seattle, WA 98195, USA
- Rui M Costa
  - Zuckerman Mind Brain Behavior Institute, Departments of Neuroscience and Neurology, Columbia University, New York, NY 10027, USA
- Jose M Carmena
  - Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA 94720, USA
  - Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA
  - UC Berkeley-UCSF Joint Graduate Program in Bioengineering, University of California, Berkeley, Berkeley, CA 94720, USA
25
Disse GD, Nandakumar B, Pauzin FP, Blumenthal GH, Kong Z, Ditterich J, Moxon KA. Neural ensemble dynamics in trunk and hindlimb sensorimotor cortex encode for the control of postural stability. Cell Rep 2023; 42:112347. [PMID: 37027302] [DOI: 10.1016/j.celrep.2023.112347]
Abstract
The cortex has a disputed role in monitoring postural equilibrium and intervening in cases of major postural disturbances. Here, we investigate the patterns of neural activity in the cortex that underlie neural dynamics during unexpected perturbations. In both the primary sensory (S1) and motor (M1) cortices of the rat, unique neuronal classes differentially covary their responses to distinguish different characteristics of applied postural perturbations; however, there is substantial information gain in M1, demonstrating a role for higher-order computations in motor control. A dynamical systems model of M1 activity and forces generated by the limbs reveals that these neuronal classes contribute to a low-dimensional manifold composed of separate subspaces, enabled by congruent and incongruent neural firing patterns, that define different computations depending on the postural responses. These results inform how the cortex engages in postural control, guiding work that aims to understand postural instability after neurological disease.
Collapse
Affiliation(s)
- Gregory D Disse
- Neuroscience Graduate Group, University of California, Davis, Davis, CA 95616, USA; Biomedical Engineering, University of California, Davis, Davis, CA 95616, USA
| | | | - Francois P Pauzin
- Biomedical Engineering, University of California, Davis, Davis, CA 95616, USA
| | - Gary H Blumenthal
- School of Biomedical Engineering Science and Health Systems, Drexel University, Philadelphia, PA 19104, USA
| | - Zhaodan Kong
- Mechanical and Aerospace Engineering, University of California, Davis, Davis, CA 95616, USA
| | - Jochen Ditterich
- Neuroscience Graduate Group, University of California, Davis, Davis, CA 95616, USA; Neurobiology, Physiology and Behavior, University of California, Davis, Davis, CA 95616, USA
| | - Karen A Moxon
- Neuroscience Graduate Group, University of California, Davis, Davis, CA 95616, USA; Biomedical Engineering, University of California, Davis, Davis, CA 95616, USA.
| |
Collapse
|
26
|
Abbaspourazad H, Erturk E, Pesaran B, Shanechi MM. Dynamical flexible inference of nonlinear latent structures in neural population activity. bioRxiv 2023:2023.03.13.532479. [PMID: 36993605 PMCID: PMC10054986 DOI: 10.1101/2023.03.13.532479] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Inferring complex spatiotemporal dynamics in neural population activity is critical for investigating neural mechanisms and developing neurotechnology. These activity patterns are noisy observations of lower-dimensional latent factors and their nonlinear dynamical structure. A major unaddressed challenge is to model this nonlinear structure, but in a manner that allows for flexible inference, whether causally, non-causally, or in the presence of missing neural observations. We address this challenge by developing DFINE, a new neural network that separates the model into dynamic and manifold latent factors, such that the dynamics can be modeled in tractable form. We show that DFINE achieves flexible nonlinear inference across diverse behaviors and brain regions. Further, despite enabling flexible inference unlike prior neural network models of population activity, DFINE also better predicts the behavior and neural activity, and better captures the latent neural manifold structure. DFINE can both enhance future neurotechnology and facilitate investigations across diverse domains of neuroscience.
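To make the "tractable dynamics enable flexible inference" idea concrete, here is a minimal sketch with a one-dimensional linear-Gaussian latent model (not DFINE's neural network; all parameter values are illustrative): once dynamics are in tractable form, a causal filter can simply propagate its prediction through stretches of missing neural observations.

```python
# Minimal sketch: linear-Gaussian latent dynamics (NOT the DFINE network).
# A causal Kalman filter skips its update step wherever observations are
# missing, illustrating flexible inference once dynamics are tractable.
import numpy as np

rng = np.random.default_rng(1)
A, C, Q, R, T = 0.95, 1.0, 0.05, 0.5, 400

# simulate a latent factor and noisy observations, with one missing block
x = np.zeros(T)
for t in range(1, T):
    x[t] = A * x[t - 1] + rng.normal(scale=np.sqrt(Q))
observed = np.ones(T, dtype=bool)
observed[150:200] = False
y = np.where(observed, C * x + rng.normal(scale=np.sqrt(R), size=T), np.nan)

xhat, P = np.zeros(T), np.ones(T)
for t in range(1, T):
    xp, Pp = A * xhat[t - 1], A**2 * P[t - 1] + Q        # predict
    if observed[t]:
        K = Pp * C / (C**2 * Pp + R)                     # Kalman gain
        xhat[t] = xp + K * (y[t] - C * xp)
        P[t] = (1 - K * C) * Pp
    else:                                                # missing observation:
        xhat[t], P[t] = xp, Pp                           # propagate dynamics only

err = np.mean((xhat - x) ** 2)
```

The posterior uncertainty grows during the missing block and shrinks again once observations resume, which is the kind of flexible causal and missing-data inference the abstract refers to.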
Collapse
|
27
|
Vahidi P, Sani OG, Shanechi MM. Modeling and dissociation of intrinsic and input-driven neural population dynamics underlying behavior. bioRxiv 2023:2023.03.14.532554. [PMID: 36993213 PMCID: PMC10055042 DOI: 10.1101/2023.03.14.532554] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Neural dynamics can reflect intrinsic dynamics or dynamic inputs, such as sensory inputs or inputs from other regions. To avoid misinterpreting temporally-structured inputs as intrinsic dynamics, dynamical models of neural activity should account for measured inputs. However, incorporating measured inputs remains elusive in joint dynamical modeling of neural-behavioral data, which is important for studying neural computations of a specific behavior. We first show how training dynamical models of neural activity while considering behavior but not input, or input but not behavior may lead to misinterpretations. We then develop a novel analytical learning method that simultaneously accounts for neural activity, behavior, and measured inputs. The method provides the new capability to prioritize the learning of intrinsic behaviorally relevant neural dynamics and dissociate them from both other intrinsic dynamics and measured input dynamics. In data from a simulated brain with fixed intrinsic dynamics that performs different tasks, the method correctly finds the same intrinsic dynamics regardless of task while other methods can be influenced by the change in task. In neural datasets from three subjects performing two different motor tasks with task instruction sensory inputs, the method reveals low-dimensional intrinsic neural dynamics that are missed by other methods and are more predictive of behavior and/or neural activity. The method also uniquely finds that the intrinsic behaviorally relevant neural dynamics are largely similar across the three subjects and two tasks whereas the overall neural dynamics are not. These input-driven dynamical models of neural-behavioral data can uncover intrinsic dynamics that may otherwise be missed.
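The abstract's opening point, that ignoring a measured, temporally structured input can misattribute input-driven structure to intrinsic dynamics, can be reproduced in a few lines with a toy linear model (illustrative parameters and plain least squares, not the authors' learning method):

```python
# Toy demonstration (not the paper's method): fitting dynamics while ignoring
# a temporally structured input inflates the apparent intrinsic dynamics.
import numpy as np

rng = np.random.default_rng(0)
A_true, B_true, T = 0.7, 1.0, 5000

u = np.zeros(T)                      # slow, autocorrelated "sensory" input
for t in range(1, T):
    u[t] = 0.95 * u[t - 1] + rng.normal(scale=0.1)

x = np.zeros(T)                      # neural state with intrinsic dynamics A_true
for t in range(1, T):
    x[t] = A_true * x[t - 1] + B_true * u[t - 1] + rng.normal(scale=0.05)

# fit x[t] = A x[t-1], ignoring the input (least squares)
A_no_input = (x[1:] @ x[:-1]) / (x[:-1] @ x[:-1])

# fit x[t] = A x[t-1] + B u[t-1], accounting for the measured input
Z = np.column_stack([x[:-1], u[:-1]])
A_with_input, B_with_input = np.linalg.lstsq(Z, x[1:], rcond=None)[0]
# the input-ignoring fit absorbs the input's persistence (0.95) into A
```

With the input included, the fit recovers the true intrinsic dynamics; without it, the estimated "intrinsic" dynamics drift toward the input's own timescale.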
Collapse
|
28
|
Kanwisher N, Khosla M, Dobs K. Using artificial neural networks to ask 'why' questions of minds and brains. Trends Neurosci 2023; 46:240-254. [PMID: 36658072 DOI: 10.1016/j.tins.2022.12.008] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2022] [Revised: 11/29/2022] [Accepted: 12/22/2022] [Indexed: 01/19/2023]
Abstract
Neuroscientists have long characterized the properties and functions of the nervous system, and are increasingly succeeding in answering how brains perform the tasks they do. But the question 'why' brains work the way they do is asked less often. The new ability to optimize artificial neural networks (ANNs) for performance on human-like tasks now enables us to approach these 'why' questions by asking when the properties of networks optimized for a given task mirror the behavioral and neural characteristics of humans performing the same task. Here we highlight the recent success of this strategy in explaining why the visual and auditory systems work the way they do, at both behavioral and neural levels.
Collapse
Affiliation(s)
- Nancy Kanwisher
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Meenakshi Khosla
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Katharina Dobs
- Department of Psychology, Justus Liebig University Giessen, Giessen, Germany; Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany.
| |
Collapse
|
29
|
Lisberger SG. Toward a Biomimetic Neural Circuit Model of Sensory-Motor Processing. Neural Comput 2023; 35:384-412. [PMID: 35671470 PMCID: PMC9971833 DOI: 10.1162/neco_a_01516] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2022] [Accepted: 03/31/2022] [Indexed: 11/04/2022]
Abstract
Computational models have been a mainstay of research on smooth pursuit eye movements in monkeys. Pursuit is a sensory-motor system that is driven by the visual motion of small targets. It creates a smooth eye movement that accelerates up to target speed and tracks the moving target essentially perfectly. In this review of my laboratory's research, I trace the development of computational models of pursuit eye movements from the early control-theory models to the most recent neural circuit models. I outline a combined experimental and computational plan to move the models to the next level. Finally, I explain why research on nonhuman primates is so critical to the development of the neural circuit models I think we need.
Collapse
Affiliation(s)
- Stephen G. Lisberger
- Department of Neurobiology, Duke University School of Medicine, Durham, NC 27710, U.S.A
| |
Collapse
|
30
|
Izawa J, Higo N, Murata Y. Accounting for the valley of recovery during post-stroke rehabilitation training via a model-based analysis of macaque manual dexterity. Front Rehabil Sci 2022; 3:1042912. [PMID: 36644290 PMCID: PMC9838193 DOI: 10.3389/fresc.2022.1042912] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/24/2022] [Accepted: 12/01/2022] [Indexed: 12/24/2022]
Abstract
Background True recovery, in which a stroke patient regains the same precise motor skills observed in prestroke conditions, is the fundamental goal of rehabilitation training. However, a transient drop in task performance during rehabilitation training after stroke, observed in human clinical outcomes as well as in both macaque and squirrel monkey retrieval data, might prevent smooth transitions during recovery. This drop, i.e., the recovery valley, often occurs during the transition from compensatory skill to precision skill. Here, we sought the computational mechanisms behind such transitions and recovery. Analogous to motor skill learning, we considered the motor recovery process to be composed of spontaneous recovery and training-induced recovery. Specifically, we hypothesized that the interaction of these multiple skill update processes might determine the profile of the recovery valley. Methods A computational model of motor recovery was developed based on a state-space model of motor learning that incorporates a retention factor and interaction terms for training-induced and spontaneous recovery. The model was fit to previously reported macaque motor recovery data in which the monkey practiced precision grip skills after a lesion in the sensorimotor cortex. Multiple computational models and the effects of each parameter were examined through model comparisons based on information criteria and sensitivity analyses of each parameter. Results Both training-induced and spontaneous recovery were necessary to explain the behavioral data. Because these two factors contributed following a logarithmic function, training-induced recovery was effective only after spontaneous biological recovery had developed. In the training-induced recovery component, practice of the compensatory skill also contributed to recovery of the precision grip skill, as if there were a significant generalization effect of learning between the two skills. In addition, a retention factor was critical to explain the recovery profiles. Conclusions We found that spontaneous recovery, training-induced recovery, retention factors, and interaction terms are all crucial to explain recovery and recovery valley profiles. This simulation-based examination of the model parameters suggests effective rehabilitation methods for preventing the recovery valley, such as plasticity-promoting medications, brain stimulation, and robotic rehabilitation technologies.
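As a rough illustration of the kind of state-space recovery model described (with invented parameter values, not the fitted macaque parameters), skill can be updated by a retention factor plus a training term gated by slow spontaneous recovery:

```python
# Illustrative state-space sketch: retention + training-induced learning gated
# by slow spontaneous biological recovery. All parameter values are invented.
import math

def simulate(days=60, retention=0.99, lr=0.05, tau=20.0):
    skill, history = 0.0, []
    for d in range(days):
        spontaneous = 1.0 - math.exp(-d / tau)   # slow biological recovery
        error = 1.0 - skill                      # gap to prestroke performance
        # training is effective only once spontaneous recovery has developed
        skill = retention * skill + lr * spontaneous * error
        history.append(skill)
    return history

h = simulate()
```

In this toy version, early training gains are small because the spontaneous term gates learning, echoing the paper's finding that training-induced recovery became effective only after spontaneous recovery had developed.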
Collapse
Affiliation(s)
- Jun Izawa
- Faculty of Engineering, Information and Systems, University of Tsukuba, Tsukuba, Japan. Correspondence: Jun Izawa, Yumi Murata
| | - Noriyuki Higo
- Neurorehabilitation Research Group, Human Informatics and Interaction Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan
| | - Yumi Murata
- Neurorehabilitation Research Group, Human Informatics and Interaction Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan. Correspondence: Jun Izawa, Yumi Murata
| |
Collapse
|
31
|
Cometa A, Falasconi A, Biasizzo M, Carpaneto J, Horn A, Mazzoni A, Micera S. Clinical neuroscience and neurotechnology: An amazing symbiosis. iScience 2022; 25:105124. [PMID: 36193050 PMCID: PMC9526189 DOI: 10.1016/j.isci.2022.105124] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
In the last decades, clinical neuroscience found a novel ally in neurotechnologies, devices able to record and stimulate electrical activity in the nervous system. These technologies improved the ability to diagnose and treat neural disorders. Neurotechnologies are concurrently enabling a deeper understanding of healthy and pathological dynamics of the nervous system through stimulation and recordings during brain implants. On the other hand, clinical neurosciences are not only driving neuroengineering toward the most relevant clinical issues, but are also shaping the neurotechnologies thanks to clinical advancements. For instance, understanding the etiology of a disease informs the location of a therapeutic stimulation, but also the way stimulation patterns should be designed to be more effective/naturalistic. Here, we describe cases of fruitful integration such as Deep Brain Stimulation and cortical interfaces to highlight how this symbiosis between clinical neuroscience and neurotechnology is closer to a novel integrated framework than to a simple interdisciplinary interaction.
Collapse
Affiliation(s)
- Andrea Cometa
- The Biorobotics Institute, Scuola Superiore Sant’Anna, 56127 Pisa, Italy
- Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, 56127 Pisa, Italy
| | - Antonio Falasconi
- Friedrich Miescher Institute for Biomedical Research, 4058 Basel, Switzerland
- Biozentrum, University of Basel, 4056 Basel, Switzerland
| | - Marco Biasizzo
- The Biorobotics Institute, Scuola Superiore Sant’Anna, 56127 Pisa, Italy
- Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, 56127 Pisa, Italy
| | - Jacopo Carpaneto
- The Biorobotics Institute, Scuola Superiore Sant’Anna, 56127 Pisa, Italy
- Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, 56127 Pisa, Italy
| | - Andreas Horn
- Center for Brain Circuit Therapeutics Department of Neurology Brigham & Women’s Hospital, Harvard Medical School, Boston, MA 02115, USA
- MGH Neurosurgery & Center for Neurotechnology and Neurorecovery (CNTR) at MGH Neurology Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA
- Movement Disorder and Neuromodulation Unit, Department of Neurology, Charité – Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, 10117 Berlin, Germany
| | - Alberto Mazzoni
- The Biorobotics Institute, Scuola Superiore Sant’Anna, 56127 Pisa, Italy
- Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, 56127 Pisa, Italy
| | - Silvestro Micera
- The Biorobotics Institute, Scuola Superiore Sant’Anna, 56127 Pisa, Italy
- Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, 56127 Pisa, Italy
- Translational Neural Engineering Lab, School of Engineering, École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
| |
Collapse
|
32
|
Small, correlated changes in synaptic connectivity may facilitate rapid motor learning. Nat Commun 2022; 13:5163. [PMID: 36056006 PMCID: PMC9440011 DOI: 10.1038/s41467-022-32646-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2021] [Accepted: 08/08/2022] [Indexed: 11/08/2022] Open
Abstract
Animals rapidly adapt their movements to external perturbations, a process paralleled by changes in neural activity in the motor cortex. Experimental studies suggest that these changes originate from altered inputs (Hinput) rather than from changes in local connectivity (Hlocal), as neural covariance is largely preserved during adaptation. Since measuring synaptic changes in vivo remains very challenging, we used a modular recurrent neural network to qualitatively test this interpretation. As expected, Hinput resulted in small activity changes and largely preserved covariance. Surprisingly, given the presumed dependence of stable covariance on preserved circuit connectivity, Hlocal led to only slightly larger changes in activity and covariance, still within the range of experimental recordings. This similarity arises because Hlocal requires only small, correlated connectivity changes for successful adaptation. Simulations of tasks that impose increasingly larger behavioural changes revealed a growing difference between Hinput and Hlocal, which could be exploited when designing future experiments.
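The closing claim, that adaptation can proceed through small but correlated connectivity changes, has a simple numerical intuition (a toy calculation, not the paper's recurrent-network simulations): a given shift in summed synaptic drive can be spread across many synapses, each changing only minutely.

```python
# Toy illustration: many small, correlated synaptic changes can sum to a
# sizeable change in downstream drive, whereas concentrating the same change
# on one synapse requires a large individual change. Numbers are invented.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
r = rng.normal(size=n) + 5.0          # presynaptic firing rates (arbitrary units)
target_shift = 10.0                   # required change in summed drive (dw @ r)

# distributed, correlated strategy: nudge every weight in proportion to its input
delta_corr = target_shift * r / (r @ r)

# concentrated strategy: achieve the same shift through a single synapse
delta_single = np.zeros(n)
delta_single[0] = target_shift / r[0]
```

Spread across 1,000 synapses, each individual weight change is orders of magnitude smaller than the single-synapse solution, which is one reason such connectivity changes would be hard to detect experimentally.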
Collapse
|
33
|
Saxena S, Russo AA, Cunningham J, Churchland MM. Motor cortex activity across movement speeds is predicted by network-level strategies for generating muscle activity. eLife 2022; 11:e67620. [PMID: 35621264 PMCID: PMC9197394 DOI: 10.7554/elife.67620] [Citation(s) in RCA: 42] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2021] [Accepted: 05/26/2022] [Indexed: 12/02/2022] Open
Abstract
Learned movements can be skillfully performed at different paces. What neural strategies produce this flexibility? Can they be predicted and understood by network modeling? We trained monkeys to perform a cycling task at different speeds, and trained artificial recurrent networks to generate the empirical muscle-activity patterns. Network solutions reflected the principle that smooth well-behaved dynamics require low trajectory tangling. Network solutions had a consistent form, which yielded quantitative and qualitative predictions. To evaluate predictions, we analyzed motor cortex activity recorded during the same task. Responses supported the hypothesis that the dominant neural signals reflect not muscle activity, but network-level strategies for generating muscle activity. Single-neuron responses were better accounted for by network activity than by muscle activity. Similarly, neural population trajectories shared their organization not with muscle trajectories, but with network solutions. Thus, cortical activity could be understood based on the need to generate muscle activity via dynamics that allow smooth, robust control over movement speed.
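The "low trajectory tangling" principle invoked above can be computed directly. This sketch implements the standard tangling metric Q(t), the maximum ratio of derivative difference to state difference across time pairs, on toy trajectories that are invented for illustration (not recorded neural or muscle data):

```python
# Trajectory tangling Q(t): nearby states moving in very different directions
# give high tangling. Toy 2-D trajectories, not recorded population activity.
import numpy as np

def tangling(X, eps=1e-6):
    """X: (T, N) population-state trajectory. Returns Q(t) for each time."""
    Xdot = np.gradient(X, axis=0)                              # per-step derivative
    dX = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)        # state distances
    dXdot = ((Xdot[:, None, :] - Xdot[None, :, :]) ** 2).sum(-1)
    return (dXdot / (dX + eps)).max(axis=1)

t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.column_stack([np.cos(t), np.sin(t)])       # smooth rotation
figure8 = np.column_stack([np.sin(t), np.sin(2 * t)])  # self-intersecting

q_circle, q_eight = tangling(circle), tangling(figure8)
# the self-intersection makes the figure-eight far more tangled than the circle
```

The self-intersecting trajectory visits the same state while heading in different directions, so its tangling spikes there; a smooth rotation keeps tangling uniformly low, the property network solutions in the paper are said to favor.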
Collapse
Affiliation(s)
- Shreya Saxena
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, United States
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
- Grossman Center for the Statistics of Mind, Columbia University, New York, United States
- Center for Theoretical Neuroscience, Columbia University, New York, United States
- Department of Statistics, Columbia University, New York, United States
| | - Abigail A Russo
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
- Department of Neuroscience, Columbia University, New York, United States
| | - John Cunningham
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
- Grossman Center for the Statistics of Mind, Columbia University, New York, United States
- Center for Theoretical Neuroscience, Columbia University, New York, United States
- Department of Statistics, Columbia University, New York, United States
| | - Mark M Churchland
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
- Grossman Center for the Statistics of Mind, Columbia University, New York, United States
- Department of Neuroscience, Columbia University, New York, United States
- Kavli Institute for Brain Science, Columbia University, New York, United States
| |
Collapse
|
34
|
Feldotto B, Eppler JM, Jimenez-Romero C, Bignamini C, Gutierrez CE, Albanese U, Retamino E, Vorobev V, Zolfaghari V, Upton A, Sun Z, Yamaura H, Heidarinejad M, Klijn W, Morrison A, Cruz F, McMurtrie C, Knoll AC, Igarashi J, Yamazaki T, Doya K, Morin FO. Deploying and Optimizing Embodied Simulations of Large-Scale Spiking Neural Networks on HPC Infrastructure. Front Neuroinform 2022; 16:884180. [PMID: 35662903 PMCID: PMC9160925 DOI: 10.3389/fninf.2022.884180] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2022] [Accepted: 04/19/2022] [Indexed: 12/20/2022] Open
Abstract
Simulating the brain-body-environment trinity in closed loop is an attractive proposal to investigate how perception, motor activity and interactions with the environment shape brain activity, and vice versa. The relevance of this embodied approach, however, hinges entirely on the modeled complexity of the various simulated phenomena. In this article, we introduce a software framework that is capable of simulating large-scale, biologically realistic networks of spiking neurons embodied in a biomechanically accurate musculoskeletal system that interacts with a physically realistic virtual environment. We deploy this framework on the high-performance computing resources of the EBRAINS research infrastructure and investigate its scaling performance by distributing computation across an increasing number of interconnected compute nodes. Our architecture is based on requested compute nodes as well as persistent virtual machines; this provides a high-performance simulation environment that is accessible to multi-domain users without expert knowledge, with a view to enabling users to instantiate and control simulations at custom scale via a web-based graphical user interface. Our simulation environment, entirely open source, is based on the Neurorobotics Platform developed in the context of the Human Brain Project, and on the NEST simulator. We characterize the capabilities of our parallelized architecture for large-scale embodied brain simulations through two benchmark experiments, investigating the effects of scaling compute resources on performance defined in terms of experiment runtime, brain instantiation time and simulation time. The first benchmark is based on a large-scale balanced network, while the second is a multi-region embodied brain simulation consisting of more than a million neurons and a billion synapses. Both benchmarks clearly show how scaling compute resources improves the aforementioned performance metrics in a near-linear fashion. The second benchmark in particular is indicative of both the potential and limitations of a highly distributed simulation in terms of a trade-off between computation speed and resource cost. Our simulation architecture is being prepared to be accessible for everyone as an EBRAINS service, thereby offering a community-wide tool with a unique workflow that should provide momentum to the investigation of closed-loop embodiment within the computational neuroscience community.
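The near-linear scaling described above is conventionally summarized as parallel speedup and efficiency; the sketch below computes both for hypothetical runtimes (the numbers are invented, not taken from the paper's benchmarks).

```python
# Parallel speedup S(n) = T(1)/T(n) and efficiency E(n) = S(n)/n.
# Runtimes below are hypothetical, for illustration only.
def speedup(t_single, t_parallel):
    return t_single / t_parallel

def efficiency(t_single, t_parallel, n_nodes):
    return speedup(t_single, t_parallel) / n_nodes

runtimes = {1: 800.0, 2: 420.0, 4: 230.0, 8: 130.0}   # seconds, hypothetical
report = {n: (speedup(runtimes[1], t), efficiency(runtimes[1], t, n))
          for n, t in runtimes.items()}
```

Near-linear scaling corresponds to efficiency staying close to 1 as nodes are added; the speed-versus-cost trade-off the abstract mentions shows up as efficiency gradually falling while absolute runtime keeps improving.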
Collapse
Affiliation(s)
- Benedikt Feldotto
- Robotics, Artificial Intelligence and Real-Time Systems, Faculty of Informatics, Technical University of Munich, Munich, Germany
| | - Jochen Martin Eppler
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
| | - Cristian Jimenez-Romero
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
| | | | - Carlos Enrique Gutierrez
- Neural Computation Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan
| | - Ugo Albanese
- Department of Excellence in Robotics and AI, The BioRobotics Institute, Scuola Superiore Sant'Anna, Pontedera, Italy
| | - Eloy Retamino
- Department of Computer Architecture and Technology, Research Centre for Information and Communication Technologies, University of Granada, Granada, Spain
| | - Viktor Vorobev
- Robotics, Artificial Intelligence and Real-Time Systems, Faculty of Informatics, Technical University of Munich, Munich, Germany
| | - Vahid Zolfaghari
- Robotics, Artificial Intelligence and Real-Time Systems, Faculty of Informatics, Technical University of Munich, Munich, Germany
| | - Alex Upton
- Swiss National Supercomputing Centre (CSCS), ETH Zurich, Lugano, Switzerland
| | - Zhe Sun
- Image Processing Research Team, Center for Advanced Photonics, RIKEN, Wako, Japan
- Computational Engineering Applications Unit, Head Office for Information Systems and Cybersecurity, RIKEN, Wako, Japan
| | - Hiroshi Yamaura
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
| | - Morteza Heidarinejad
- Computational Engineering Applications Unit, Head Office for Information Systems and Cybersecurity, RIKEN, Wako, Japan
| | - Wouter Klijn
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
| | - Abigail Morrison
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Jülich Research Centre, Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute I, Jülich, Germany
- Computer Science 3-Software Engineering, RWTH Aachen University, Aachen, Germany
| | - Felipe Cruz
- Swiss National Supercomputing Centre (CSCS), ETH Zurich, Lugano, Switzerland
| | - Colin McMurtrie
- Swiss National Supercomputing Centre (CSCS), ETH Zurich, Lugano, Switzerland
| | - Alois C. Knoll
- Robotics, Artificial Intelligence and Real-Time Systems, Faculty of Informatics, Technical University of Munich, Munich, Germany
| | - Jun Igarashi
- Computational Engineering Applications Unit, Head Office for Information Systems and Cybersecurity, RIKEN, Wako, Japan
- Center for Computational Science, RIKEN, Kobe, Japan
| | - Tadashi Yamazaki
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
| | - Kenji Doya
- Neural Computation Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan
| | - Fabrice O. Morin
- Robotics, Artificial Intelligence and Real-Time Systems, Faculty of Informatics, Technical University of Munich, Munich, Germany
| |
Collapse
|
35
|
Abstract
Investigating how an artificial network of neurons controls a simulated arm suggests that rotational patterns of activity in the motor cortex may rely on sensory feedback from the moving limb.
Collapse
Affiliation(s)
- Omid G Sani
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, United States
| | - Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, United States; Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, United States; Neuroscience Graduate Program, University of Southern California, Los Angeles, United States
| |
Collapse
|