1. Parker Jones O, Mitchell AL, Yamada J, Merkt W, Geisert M, Havoutis I, Posner I. Oscillating latent dynamics in robot systems during walking and reaching. Sci Rep 2024; 14:11434. PMID: 38763969; PMCID: PMC11102915; DOI: 10.1038/s41598-024-61610-5.
Abstract
Sensorimotor control of complex, dynamic systems such as humanoid or quadrupedal robots is notoriously difficult. While artificial systems traditionally employ hierarchical optimisation approaches or black-box policies, recent results in systems neuroscience suggest that complex behaviours such as locomotion and reaching are correlated with limit cycles in the primate motor cortex. A recent result suggests that, when applied to a learned latent space, oscillating patterns of activation can be used to control locomotion in a physical robot. While reminiscent of limit cycles observed in primate motor cortex, these dynamics are unsurprising given the cyclic nature of the robot's behaviour (walking). In this preliminary investigation, we consider how a similar approach extends to a less obviously cyclic behaviour (reaching). Prior work has explored this question in computational simulation, but simulations make simplifying assumptions that need not hold in reality, so their results do not transfer trivially to real robot platforms. Our primary contribution is to demonstrate that we can infer and control real robot states in a learned representation using oscillatory dynamics during reaching tasks. We further show that the learned latent representation encodes interpretable movements in the robot's workspace. Compared to robot locomotion, the dynamics that we observe for reaching are not fully cyclic, as they do not begin and end at the same position in latent space. However, they do begin to trace out the shape of a cycle and, by construction, they are driven by the same underlying oscillatory mechanics.
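The control scheme this abstract summarizes can be illustrated with a minimal sketch: a two-dimensional latent state advanced by a fixed rotation (a limit cycle) and mapped to joint commands through a linear decoder. Everything here is hypothetical (the decoder `W`, the latent dimensionality, the joint count); it shows the oscillator-in-latent-space idea, not the authors' implementation.

```python
import numpy as np

def rollout_latent_oscillator(freq_hz=1.0, dt=0.01, steps=200, n_joints=3, seed=0):
    """Drive a 2-D latent oscillator and decode it to joint commands."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_joints, 2))       # hypothetical latent-to-joint decoder
    theta = 2 * np.pi * freq_hz * dt
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])  # rotation = oscillation in 2-D
    z = np.array([1.0, 0.0])                     # initial latent state
    latents, commands = [], []
    for _ in range(steps):
        z = R @ z                                # advance along the limit cycle
        latents.append(z.copy())
        commands.append(W @ z)                   # decode to joint-space commands
    return np.array(latents), np.array(commands)

latents, commands = rollout_latent_oscillator()
```

Because the latent update is a pure rotation, the latent trajectory stays on a circle, and the decoded joint commands inherit the oscillation.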
Affiliation(s)
- Oiwi Parker Jones: Applied AI Lab, Oxford Robotics Institute, University of Oxford, Oxford, UK
- Alexander L Mitchell: Applied AI Lab, Oxford Robotics Institute, University of Oxford, Oxford, UK; Dynamic Robot Systems Group, Oxford Robotics Institute, University of Oxford, Oxford, UK
- Jun Yamada: Applied AI Lab, Oxford Robotics Institute, University of Oxford, Oxford, UK
- Wolfgang Merkt: Dynamic Robot Systems Group, Oxford Robotics Institute, University of Oxford, Oxford, UK
- Mathieu Geisert: Dynamic Robot Systems Group, Oxford Robotics Institute, University of Oxford, Oxford, UK
- Ioannis Havoutis: Dynamic Robot Systems Group, Oxford Robotics Institute, University of Oxford, Oxford, UK
- Ingmar Posner: Applied AI Lab, Oxford Robotics Institute, University of Oxford, Oxford, UK
2. Sadeghi M, Sharif Razavian R, Bazzi S, Chowdhury RH, Batista AP, Loughlin PJ, Sternad D. Inferring control objectives in a virtual balancing task in humans and monkeys. eLife 2024; 12:RP88514. PMID: 38738986; PMCID: PMC11090506; DOI: 10.7554/elife.88514.
Abstract
Natural behaviors have redundancy, which implies that humans and animals can achieve their goals with different strategies. Given only observations of behavior, is it possible to infer the control objective that the subject is employing? This challenge is particularly acute in animal behavior because we cannot ask or instruct the subject to use a particular strategy. This study presents a three-pronged approach to infer an animal's control objective from behavior. First, both humans and monkeys performed a virtual balancing task for which different control strategies could be utilized. Under matched experimental conditions, corresponding behaviors were observed in humans and monkeys. Second, a generative model was developed that represented two main control objectives to achieve the task goal. Model simulations were used to identify aspects of behavior that could distinguish which control objective was being used. Third, these behavioral signatures allowed us to infer the control objective used by human subjects who had been instructed to use one control objective or the other. Based on this validation, we could then infer objectives from animal subjects. Being able to positively identify a subject's control objective from observed behavior can provide a powerful tool to neurophysiologists as they seek the neural mechanisms of sensorimotor coordination.
Affiliation(s)
- Mohsen Sadeghi: Department of Biology, Northeastern University, Boston, United States
- Reza Sharif Razavian: Department of Biology, Northeastern University, Boston, United States; Department of Electrical and Computer Engineering, Northeastern University, Boston, United States; Department of Mechanical Engineering, Northern Arizona University, Flagstaff, United States
- Salah Bazzi: Department of Biology, Northeastern University, Boston, United States; Department of Electrical and Computer Engineering, Northeastern University, Boston, United States; Institute for Experiential Robotics, Northeastern University, Boston, United States
- Raeed H Chowdhury: Department of Bioengineering, and Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, United States
- Aaron P Batista: Department of Bioengineering, and Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, United States
- Patrick J Loughlin: Department of Bioengineering, and Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, United States
- Dagmar Sternad: Department of Biology, Northeastern University, Boston, United States; Department of Electrical and Computer Engineering, Northeastern University, Boston, United States; Institute for Experiential Robotics, Northeastern University, Boston, United States; Department of Physics, Northeastern University, Boston, United States
3. Strohmer B, Najarro E, Ausborn J, Berg RW, Tolu S. Sparse Firing in a Hybrid Central Pattern Generator for Spinal Motor Circuits. Neural Comput 2024; 36:759-780. PMID: 38658025; DOI: 10.1162/neco_a_01660.
Abstract
Central pattern generators are circuits generating rhythmic movements, such as walking. The majority of existing computational models of these circuits produce antagonistic output where all neurons within a population spike with a broad burst at about the same neuronal phase with respect to network output. However, experimental recordings reveal that many neurons within these circuits fire sparsely, sometimes as rarely as once within a cycle. Here we address this sparse neuronal firing and develop a model to replicate the behavior of individual neurons within rhythm-generating populations, to increase biological plausibility and facilitate new insights into the underlying mechanisms of rhythm generation. The developed network architecture is able to produce sparse firing of individual neurons, creating a novel implementation for exploring the contribution of network architecture to rhythmic output. Furthermore, the introduction of sparse firing of individual neurons within the rhythm-generating circuits is one of the factors that allows for a broad neuronal phase representation of firing at the population level. This moves the model toward recent experimental findings of evenly distributed neuronal firing across phases among individual spinal neurons. The network is tested by methodically iterating select parameters to gain an understanding of how connectivity and the interplay of excitation and inhibition influence the output. This knowledge can be applied in future studies to implement a biologically plausible rhythm-generating circuit for testing biological hypotheses.
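The rhythm-generation concept can be sketched at a much higher level of abstraction than the paper's spiking network: a half-centre modelled as two coupled phase oscillators whose coupling term makes alternating (antiphase, flexor/extensor-like) output the stable pattern. Parameters are illustrative, not fitted to the paper.

```python
import numpy as np

def simulate_half_centre(omega=2 * np.pi, k=2.0, dt=0.001, steps=5000,
                         phi0=(0.0, 1.0)):
    """Two phase oscillators with antiphase-stabilizing coupling."""
    phi = np.array(phi0, dtype=float)
    history = []
    for _ in range(steps):
        # sin(phi_other - phi_self - pi) pulls the pair toward antiphase
        d0 = omega + k * np.sin(phi[1] - phi[0] - np.pi)
        d1 = omega + k * np.sin(phi[0] - phi[1] - np.pi)
        phi += dt * np.array([d0, d1])   # forward-Euler step
        history.append(phi.copy())
    return np.array(history)

phases = simulate_half_centre()
lag = (phases[-1, 1] - phases[-1, 0]) % (2 * np.pi)  # settles near pi
```

The phase difference obeys d(psi)/dt = 2k sin(psi), whose only stable fixed point is psi = pi, so any non-synchronous start converges to alternation.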
Affiliation(s)
- Beck Strohmer: Department of Electrical and Photonics Engineering, Technical University of Denmark, 2800 Lyngby, Denmark
- Elias Najarro: Department of Digital Design, IT University of Copenhagen, DK-2300 Copenhagen, Denmark
- Jessica Ausborn: Department of Neurobiology and Anatomy, Drexel University College of Medicine, Philadelphia, PA, U.S.A.
- Rune W Berg: Department of Neuroscience, University of Copenhagen, DK-1165 Copenhagen, Denmark
- Silvia Tolu: Department of Electrical and Photonics Engineering, Technical University of Denmark, 2800 Lyngby, Denmark
4. Churchland MM, Shenoy KV. Preparatory activity and the expansive null-space. Nat Rev Neurosci 2024; 25:213-236. PMID: 38443626; DOI: 10.1038/s41583-024-00796-z.
Abstract
The study of the cortical control of movement experienced a conceptual shift over recent decades, as the basic currency of understanding shifted from single-neuron tuning towards population-level factors and their dynamics. This transition was informed by a maturing understanding of recurrent networks, where mechanism is often characterized in terms of population-level factors. By estimating factors from data, experimenters could test network-inspired hypotheses. Central to such hypotheses are 'output-null' factors that do not directly drive motor outputs yet are essential to the overall computation. In this Review, we highlight how the hypothesis of output-null factors was motivated by the venerable observation that motor-cortex neurons are active during movement preparation, well before movement begins. We discuss how output-null factors then became similarly central to understanding neural activity during movement. We discuss how this conceptual framework provided key analysis tools, making it possible for experimenters to address long-standing questions regarding motor control. We highlight an intriguing trend: as experimental and theoretical discoveries accumulate, the range of computational roles hypothesized to be subserved by output-null factors continues to expand.
Affiliation(s)
- Mark M Churchland: Department of Neuroscience, Columbia University, New York, NY, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA
- Krishna V Shenoy: Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Department of Bioengineering, Stanford University, Stanford, CA, USA; Department of Neurobiology, Stanford University, Stanford, CA, USA; Department of Neurosurgery, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA; Bio-X Institute, Stanford University, Stanford, CA, USA; Howard Hughes Medical Institute at Stanford University, Stanford, CA, USA
5. Almani MN, Lazzari J, Chacon A, Saxena S. μSim: A goal-driven framework for elucidating the neural control of movement through musculoskeletal modeling. bioRxiv [Preprint] 2024:2024.02.02.578628. PMID: 38405828; PMCID: PMC10888726; DOI: 10.1101/2024.02.02.578628.
Abstract
How does the motor cortex (MC) produce purposeful and generalizable movements from the complex musculoskeletal system in a dynamic environment? To elucidate the underlying neural dynamics, we use a goal-driven approach to model MC by considering its goal as a controller driving the musculoskeletal system through desired states to achieve movement. Specifically, we formulate the MC as a recurrent neural network (RNN) controller producing muscle commands while receiving sensory feedback from biologically accurate musculoskeletal models. Given this real-time simulated feedback implemented in advanced physics simulation engines, we use deep reinforcement learning to train the RNN to achieve desired movements under specified neural and musculoskeletal constraints. Activity of the trained model can accurately decode experimentally recorded neural population dynamics and single-unit MC activity, while generalizing well to testing conditions significantly different from training. Simultaneous goal- and data-driven modeling, in which we use the recorded neural activity as observed states of the MC, further enhances direct and generalizable single-unit decoding. Finally, we show that this framework elucidates computational principles of how neural dynamics enable flexible control of movement, and we make this framework easy to use for future experiments.
6. Vahidi P, Sani OG, Shanechi MM. Modeling and dissociation of intrinsic and input-driven neural population dynamics underlying behavior. Proc Natl Acad Sci U S A 2024; 121:e2212887121. PMID: 38335258; PMCID: PMC10873612; DOI: 10.1073/pnas.2212887121.
Abstract
Neural dynamics can reflect intrinsic dynamics or dynamic inputs, such as sensory inputs or inputs from other brain regions. To avoid misinterpreting temporally structured inputs as intrinsic dynamics, dynamical models of neural activity should account for measured inputs. However, incorporating measured inputs remains elusive in joint dynamical modeling of neural-behavioral data, which is important for studying neural computations of behavior. We first show how training dynamical models of neural activity while considering behavior but not input or input but not behavior may lead to misinterpretations. We then develop an analytical learning method for linear dynamical models that simultaneously accounts for neural activity, behavior, and measured inputs. The method provides the capability to prioritize the learning of intrinsic behaviorally relevant neural dynamics and dissociate them from both other intrinsic dynamics and measured input dynamics. In data from a simulated brain with fixed intrinsic dynamics that performs different tasks, the method correctly finds the same intrinsic dynamics regardless of the task while other methods can be influenced by the task. In neural datasets from three subjects performing two different motor tasks with task instruction sensory inputs, the method reveals low-dimensional intrinsic neural dynamics that are missed by other methods and are more predictive of behavior and/or neural activity. The method also uniquely finds that the intrinsic behaviorally relevant neural dynamics are largely similar across the different subjects and tasks, whereas the overall neural dynamics are not. These input-driven dynamical models of neural-behavioral data can uncover intrinsic dynamics that may otherwise be missed.
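The paper's central caution, that ignoring a measured input folds its temporal structure into the estimated intrinsic dynamics, can be reproduced in a toy linear system. This is a generic least-squares sketch with an assumed system (matrices `A`, `B` and a sinusoidal input), not the authors' learning method.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.3
A = 0.9 * np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])   # true intrinsic dynamics
B = np.array([[1.0], [0.0]])                            # input coupling

T = 20000
u = np.sin(0.05 * np.arange(T))[:, None]                # measured input (e.g. a task cue)
x = np.zeros((T, 2))
for t in range(T - 1):
    # neural state: intrinsic dynamics + input drive + process noise
    x[t + 1] = A @ x[t] + B @ u[t] + 0.01 * rng.standard_normal(2)

# Fit WITH the input: x[t+1] ~ [x[t], u[t]] recovers the intrinsic A
Z = np.hstack([x[:-1], u[:-1]])
coef, *_ = np.linalg.lstsq(Z, x[1:], rcond=None)
A_with_input = coef[:2].T

# Fit WITHOUT the input: x[t+1] ~ x[t] absorbs input structure into A
coef0, *_ = np.linalg.lstsq(x[:-1], x[1:], rcond=None)
A_no_input = coef0.T
```

The input-aware fit recovers the intrinsic dynamics; the input-blind fit misattributes the input's temporal structure to them.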
Affiliation(s)
- Parsa Vahidi: Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Omid G. Sani: Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Maryam M. Shanechi: Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089; Neuroscience Graduate Program, University of Southern California, Los Angeles, CA 90089; Thomas Lord Department of Computer Science and Alfred E. Mann Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
7. Kuzmina E, Kriukov D, Lebedev M. Neuronal travelling waves explain rotational dynamics in experimental datasets and modelling. Sci Rep 2024; 14:3566. PMID: 38347042; PMCID: PMC10861525; DOI: 10.1038/s41598-024-53907-2.
Abstract
Spatiotemporal properties of neuronal population activity in cortical motor areas have been subjects of experimental and theoretical investigations, generating numerous interpretations regarding mechanisms for preparing and executing limb movements. Two competing models, representational and dynamical, strive to explain the relationship between movement parameters and neuronal activity. A dynamical model uses the jPCA method, which holistically characterizes oscillatory activity in neuron populations by maximizing the rotational dynamics of the data. Different interpretations of the rotational dynamics revealed by the jPCA approach have been proposed, yet the nature of such dynamics remains poorly understood. We comprehensively analyzed several neuronal-population datasets and found that rotational dynamics were consistently accounted for by a traveling wave pattern. To quantify rotation strength, we developed a complex-valued measure, the gyration number. Additionally, we identified parameters influencing the extent of rotation in the data. Our findings suggest that rotational dynamics and traveling waves are typically the same phenomenon, so previous interpretations that treated them as separate entities need to be reevaluated.
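The decomposition underlying jPCA-style rotation measures can be sketched directly: fit linear dynamics dx/dt = Mx to a trajectory, split M into symmetric and skew-symmetric parts, and use the relative size of the skew part as a rotation-strength index. This is a generic illustration of the idea, not the authors' gyration-number code.

```python
import numpy as np

def rotation_strength(X, dt):
    """Fraction of fitted linear dynamics that is rotational (skew-symmetric)."""
    dX = np.diff(X, axis=0) / dt               # finite-difference velocities
    C, *_ = np.linalg.lstsq(X[:-1], dX, rcond=None)
    M = C.T                                     # dx/dt ~ M x
    K = 0.5 * (M - M.T)                         # skew-symmetric (rotational) part
    return np.linalg.norm(K) / np.linalg.norm(M)

# Purely rotating test trajectory: the index should be close to 1.
t = np.arange(0, 10, 0.01)
X = np.column_stack([np.cos(t), np.sin(t)])
strength = rotation_strength(X, dt=0.01)
```

Because the symmetric and skew parts are orthogonal under the Frobenius norm, the index lies in [0, 1], with 1 meaning purely rotational dynamics.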
Affiliation(s)
- Ekaterina Kuzmina: Skolkovo Institute of Science and Technology, Vladimir Zelman Center for Neurobiology and Brain Rehabilitation, Moscow, Russia, 121205; Artificial Intelligence Research Institute (AIRI), Moscow, Russia
- Dmitrii Kriukov: Artificial Intelligence Research Institute (AIRI), Moscow, Russia; Skolkovo Institute of Science and Technology, Center for Molecular and Cellular Biology, Moscow, Russia, 121205
- Mikhail Lebedev: Faculty of Mechanics and Mathematics, Lomonosov Moscow State University, Moscow, Russia, 119992; Sechenov Institute of Evolutionary Physiology and Biochemistry of the Russian Academy of Sciences, Saint-Petersburg, Russia, 194223
8. Cross KP, Cook DJ, Scott SH. Rapid Online Corrections for Proprioceptive and Visual Perturbations Recruit Similar Circuits in Primary Motor Cortex. eNeuro 2024; 11:ENEURO.0083-23.2024. PMID: 38238081; PMCID: PMC10867723; DOI: 10.1523/eneuro.0083-23.2024.
Abstract
An important aspect of motor function is our ability to rapidly generate goal-directed corrections for disturbances to the limb or behavioral goal. The primary motor cortex (M1) is a key region involved in processing feedback for rapid motor corrections, yet we know little about how M1 circuits are recruited by different sources of sensory feedback to make rapid corrections. We trained two male monkeys (Macaca mulatta) to make goal-directed reaches and on random trials introduced different sensory errors by either jumping the visual location of the goal (goal jump), jumping the visual location of the hand (cursor jump), or applying a mechanical load to displace the hand (proprioceptive feedback). Sensory perturbations evoked a broad response in M1, with ∼73% of neurons (n = 257) responding to at least one of the sensory perturbations. Feedback responses were also similar, as response ranges between the goal and cursor jumps were highly correlated (range of r = [0.91, 0.97]), as were the response ranges between the mechanical loads and the visual perturbations (range of r = [0.68, 0.86]). Lastly, we identified the neural subspace each perturbation response resided in and found a strong overlap between the two visual perturbations (range of overlap index, 0.73-0.89) and between the mechanical loads and visual perturbations (range of overlap index, 0.36-0.47), indicating that each perturbation evoked a similar structure of activity at the population level. Collectively, our results indicate that rapid responses to errors from different sensory sources target similar, overlapping circuits in M1.
Affiliation(s)
- Kevin P Cross: Neuroscience Center, University of North Carolina, Chapel Hill, North Carolina 27599
- Douglas J Cook: Department of Surgery, Queen's University, Kingston, Ontario K7L 3N6, Canada; Centre for Neuroscience Studies, Queen's University, Kingston, Ontario K7L 3N6, Canada
- Stephen H Scott: Centre for Neuroscience Studies, Queen's University, Kingston, Ontario K7L 3N6, Canada; Department of Biomedical and Molecular Sciences, Queen's University, Kingston, Ontario K7L 3N6, Canada; Department of Medicine, Queen's University, Kingston, Ontario K7L 3N6, Canada
9. Oby ER, Degenhart AD, Grigsby EM, Motiwala A, McClain NT, Marino PJ, Yu BM, Batista AP. Dynamical constraints on neural population activity. bioRxiv [Preprint] 2024:2024.01.03.573543. PMID: 38260549; PMCID: PMC10802336; DOI: 10.1101/2024.01.03.573543.
Abstract
The manner in which neural activity unfolds over time is thought to be central to sensory, motor, and cognitive functions in the brain. Network models have long posited that the brain's computations involve time courses of activity that are shaped by the underlying network. A prediction from this view is that the activity time courses should be difficult to violate. We leveraged a brain-computer interface (BCI) to challenge monkeys to violate the naturally-occurring time courses of neural population activity that we observed in motor cortex. This included challenging animals to traverse the natural time course of neural activity in a time-reversed manner. Animals were unable to violate the natural time courses of neural activity when directly challenged to do so. These results provide empirical support for the view that activity time courses observed in the brain indeed reflect the underlying network-level computational mechanisms that they are believed to implement.
10. Abbaspourazad H, Erturk E, Pesaran B, Shanechi MM. Dynamical flexible inference of nonlinear latent factors and structures in neural population activity. Nat Biomed Eng 2024; 8:85-108. PMID: 38082181; DOI: 10.1038/s41551-023-01106-1.
Abstract
Modelling the spatiotemporal dynamics in the activity of neural populations while also enabling their flexible inference is hindered by the complexity and noisiness of neural observations. Here we show that lower-dimensional nonlinear latent factors and latent structures can be computationally modelled in a manner that allows for flexible inference causally, non-causally and in the presence of missing neural observations. To enable flexible inference, we developed a neural network that separates the model into jointly trained manifold and dynamic latent factors, such that nonlinearity is captured through the manifold factors and the dynamics can be modelled in tractable linear form on this nonlinear manifold. We show that the model, which we named 'DFINE' (for 'dynamical flexible inference for nonlinear embeddings'), achieves flexible inference in simulations of nonlinear dynamics and across neural datasets representing a diversity of brain regions and behaviours. Compared with earlier neural-network models, DFINE enables flexible inference, better predicts neural activity and behaviour, and better captures the latent neural manifold structure. DFINE may advance the development of neurotechnology and investigations in neuroscience.
Affiliation(s)
- Hamidreza Abbaspourazad: Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Eray Erturk: Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Bijan Pesaran: Departments of Neurosurgery, Neuroscience, and Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Maryam M Shanechi: Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA; Thomas Lord Department of Computer Science, Alfred E. Mann Department of Biomedical Engineering, Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, USA
11. Sadeghi M, Razavian RS, Bazzi S, Chowdhury R, Batista A, Loughlin P, Sternad D. Inferring control objectives in a virtual balancing task in humans and monkeys. bioRxiv [Preprint] 2023:2023.05.02.539055. PMID: 37205497; PMCID: PMC10187212; DOI: 10.1101/2023.05.02.539055.
Abstract
Natural behaviors have redundancy, which implies that humans and animals can achieve their goals with different control objectives. Given only observations of behavior, is it possible to infer the control strategy that the subject is employing? This challenge is particularly acute in animal behavior because we cannot ask or instruct the subject to use a particular control strategy. This study presents a three-pronged approach to infer an animal's control strategy from behavior. First, both humans and monkeys performed a virtual balancing task for which different control objectives could be utilized. Under matched experimental conditions, corresponding behaviors were observed in humans and monkeys. Second, a generative model was developed that represented two main control strategies to achieve the task goal. Model simulations were used to identify aspects of behavior that could distinguish which control objective was being used. Third, these behavioral signatures allowed us to infer the control objective used by human subjects who had been instructed to use one control objective or the other. Based on this validation, we could then infer strategies from animal subjects. Being able to positively identify a subject's control objective from behavior can provide a powerful tool to neurophysiologists as they seek the neural mechanisms of sensorimotor coordination.
Affiliation(s)
- Mohsen Sadeghi: Department of Biology, Northeastern University; Department of Electrical and Computer Engineering, Northeastern University
- Salah Bazzi: Institute for Experiential Robotics, Northeastern University
- Raeed Chowdhury: Department of Bioengineering, and Center for the Neural Basis of Cognition, University of Pittsburgh, PA, USA
- Aaron Batista: Department of Bioengineering, and Center for the Neural Basis of Cognition, University of Pittsburgh, PA, USA
- Patrick Loughlin: Department of Bioengineering, and Center for the Neural Basis of Cognition, University of Pittsburgh, PA, USA
- Dagmar Sternad: Department of Biology, Northeastern University; Department of Electrical and Computer Engineering, Northeastern University; Institute for Experiential Robotics, Northeastern University; Department of Physics, Northeastern University
12. Soo WWM, Goudar V, Wang XJ. Training biologically plausible recurrent neural networks on cognitive tasks with long-term dependencies. bioRxiv [Preprint] 2023:2023.10.10.561588. PMID: 37873445; PMCID: PMC10592728; DOI: 10.1101/2023.10.10.561588.
Abstract
Training recurrent neural networks (RNNs) has become a go-to approach for generating and evaluating mechanistic neural hypotheses for cognition. The ease and efficiency of training RNNs with backpropagation through time and the availability of robustly supported deep learning libraries have made RNN modeling more approachable and accessible to neuroscience. Yet a major technical hindrance remains. Cognitive processes such as working memory and decision making involve neural population dynamics over a long period of time within a behavioral trial and across trials. It is difficult to train RNNs to accomplish tasks where neural representations and dynamics have long temporal dependencies without gating mechanisms such as LSTMs or GRUs, which currently lack experimental support and prohibit direct comparison between RNNs and biological neural circuits. We tackled this problem based on the idea of specialized skip-connections through time to support the emergence of task-relevant dynamics, subsequently reinstituting biological plausibility by reverting to the original architecture. We show that this approach enables RNNs to successfully learn cognitive tasks that prove impractical if not impossible to learn using conventional methods. Over the numerous tasks considered here, we achieve fewer training steps and shorter wall-clock times, particularly in tasks that require learning long-term dependencies via temporal integration over long timescales or maintaining a memory of past events in hidden states. Our methods expand the range of experimental tasks that biologically plausible RNN models can learn, thereby supporting the development of theory for the emergent neural mechanisms of computations involving long-term dependencies.
13. Visser YF, Medendorp WP, Selen LPJ. Muscular reflex gains reflect changes of mind in reaching. J Neurophysiol 2023; 130:640-651. PMID: 37584102; DOI: 10.1152/jn.00197.2023.
Abstract
Decisions for action are accompanied by a continual processing of sensory information, sometimes resulting in a revision of the initial choice, called a change of mind (CoM). Although the motor system is tuned during the formation of a reach decision, it is unclear whether its preparatory state differs between CoM and non-CoM decisions. To test this, participants (n = 14) viewed a random-dot motion (RDM) stimulus of various coherence levels for a random viewing duration. At the onset of a mechanical perturbation that rapidly stretched the pectoralis muscle, they indicated the perceived motion direction by making a reaching movement to one of two targets. Using electromyography (EMG), we quantified the reflex gains of the pectoralis and posterior deltoid muscles. Results show that reflex gains scaled with both the coherence level and the viewing duration of the stimulus. We fit a drift diffusion model (DDM) to the behavioral choices. The decision variable (DV), derived from the DDM, correlated well with the measured reflex gain at the single-trial level. However, when matched on DV magnitude, reflex gains were significantly lower in CoM than non-CoM trials. We conclude that the internal state of the motor system, as measured by the spinal reflexes, reflects the continual deliberation on sensory evidence for action selection, including the postdecisional evidence that can lead to a change of mind.

NEW & NOTEWORTHY Using behavioral findings, EMG, and computational modeling, we show that not only the perceptual decision outcome but also the accumulating evidence for that outcome is continuously sent to the relevant muscles. Moreover, we show that an upcoming change of mind can be detected in the motor periphery, suggesting that a correlate of the internal decision-making process is being sent along.
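The drift diffusion model the authors fit can be illustrated with a generative sketch: a decision variable accumulates noisy evidence until it hits a bound, the sign of the bound gives the choice, and a stronger drift (higher motion coherence) yields higher accuracy. Parameters here are illustrative, not the fitted values from the paper.

```python
import numpy as np

def simulate_ddm(drift, bound=1.0, sigma=1.0, dt=0.001, max_steps=10000,
                 n_trials=1000, seed=0):
    """Simulate bounded evidence accumulation; return choice accuracy."""
    rng = np.random.default_rng(seed)
    dv = np.zeros(n_trials)                  # decision variables, one per trial
    done = np.zeros(n_trials, dtype=bool)
    choice = np.zeros(n_trials)
    for _ in range(max_steps):
        active = ~done
        # drift toward the correct (upper) bound plus diffusion noise
        dv[active] += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(active.sum())
        hit = active & (np.abs(dv) >= bound)
        choice[hit] = np.sign(dv[hit])       # +1 correct, -1 error
        done |= hit
        if done.all():
            break
    return (choice > 0).mean()

acc_high = simulate_ddm(drift=2.0)   # high-coherence stimulus
acc_low = simulate_ddm(drift=0.2)    # low-coherence stimulus
```

The closed-form accuracy for this model is 1 / (1 + exp(-2 * bound * drift / sigma^2)), so the high-drift condition should sit near ceiling while the low-drift condition stays close to chance.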
Affiliation(s)
- Yvonne F Visser
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- W Pieter Medendorp
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Luc P J Selen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
14
Athalye VR, Khanna P, Gowda S, Orsborn AL, Costa RM, Carmena JM. Invariant neural dynamics drive commands to control different movements. Curr Biol 2023; 33:2962-2976.e15. PMID: 37402376. PMCID: PMC10527529. DOI: 10.1016/j.cub.2023.06.027.
Abstract
It has been proposed that the nervous system has the capacity to generate a wide variety of movements because it reuses some invariant code. Previous work has identified that dynamics of neural population activity are similar during different movements, where dynamics refer to how the instantaneous spatial pattern of population activity changes in time. Here, we test whether invariant dynamics of neural populations are actually used to issue the commands that direct movement. Using a brain-machine interface (BMI) that transforms rhesus macaques' motor-cortex activity into commands for a neuroprosthetic cursor, we discovered that the same command is issued with different neural-activity patterns in different movements. However, these different patterns were predictable, as we found that the transitions between activity patterns are governed by the same dynamics across movements. These invariant dynamics are low dimensional, and critically, they align with the BMI, so that they predict the specific component of neural activity that actually issues the next command. We introduce a model of optimal feedback control (OFC) that shows that invariant dynamics can help transform movement feedback into commands, reducing the input that the neural population needs to control movement. Altogether our results demonstrate that invariant dynamics drive commands to control a variety of movements and show how feedback can be integrated with invariant dynamics to issue generalizable commands.
Affiliation(s)
- Vivek R Athalye
- Zuckerman Mind Brain Behavior Institute, Departments of Neuroscience and Neurology, Columbia University, New York, NY 10027, USA.
- Preeya Khanna
- Department of Neurology, University of California, San Francisco, San Francisco, CA 94158, USA
- Suraj Gowda
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA 94720, USA
- Amy L Orsborn
- Departments of Bioengineering, Electrical and Computer Engineering, University of Washington, Seattle, WA 98195, USA
- Rui M Costa
- Zuckerman Mind Brain Behavior Institute, Departments of Neuroscience and Neurology, Columbia University, New York, NY 10027, USA
- Jose M Carmena
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA 94720, USA; Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA; UC Berkeley-UCSF Joint Graduate Program in Bioengineering, University of California, Berkeley, Berkeley, CA 94720, USA
15
Disse GD, Nandakumar B, Pauzin FP, Blumenthal GH, Kong Z, Ditterich J, Moxon KA. Neural ensemble dynamics in trunk and hindlimb sensorimotor cortex encode for the control of postural stability. Cell Rep 2023; 42:112347. PMID: 37027302. DOI: 10.1016/j.celrep.2023.112347.
Abstract
The cortex has a disputed role in monitoring postural equilibrium and intervening in cases of major postural disturbances. Here, we investigate the patterns of neural activity in the cortex that underlie neural dynamics during unexpected perturbations. In both the primary sensory (S1) and motor (M1) cortices of the rat, unique neuronal classes differentially covary their responses to distinguish different characteristics of applied postural perturbations; however, there is substantial information gain in M1, demonstrating a role for higher-order computations in motor control. A dynamical systems model of M1 activity and forces generated by the limbs reveals that these neuronal classes contribute to a low-dimensional manifold comprised of separate subspaces enabled by congruent and incongruent neural firing patterns that define different computations depending on the postural responses. These results inform how the cortex engages in postural control, directing work aiming to understand postural instability after neurological disease.
Affiliation(s)
- Gregory D Disse
- Neuroscience Graduate Group, University of California, Davis, Davis, CA 95616, USA; Biomedical Engineering, University of California, Davis, Davis, CA 95616, USA
- Francois P Pauzin
- Biomedical Engineering, University of California, Davis, Davis, CA 95616, USA
- Gary H Blumenthal
- School of Biomedical Engineering Science and Health Systems, Drexel University, Philadelphia, PA 19104, USA
- Zhaodan Kong
- Mechanical and Aerospace Engineering, University of California, Davis, Davis, CA 95616, USA
- Jochen Ditterich
- Neuroscience Graduate Group, University of California, Davis, Davis, CA 95616, USA; Neurobiology, Physiology and Behavior, University of California, Davis, Davis, CA 95616, USA
- Karen A Moxon
- Neuroscience Graduate Group, University of California, Davis, Davis, CA 95616, USA; Biomedical Engineering, University of California, Davis, Davis, CA 95616, USA
16
Abbaspourazad H, Erturk E, Pesaran B, Shanechi MM. Dynamical flexible inference of nonlinear latent structures in neural population activity. bioRxiv 2023:2023.03.13.532479. PMID: 36993605. PMCID: PMC10054986. DOI: 10.1101/2023.03.13.532479.
Abstract
Inferring complex spatiotemporal dynamics in neural population activity is critical for investigating neural mechanisms and developing neurotechnology. These activity patterns are noisy observations of lower-dimensional latent factors and their nonlinear dynamical structure. A major unaddressed challenge is to model this nonlinear structure, but in a manner that allows for flexible inference, whether causally, non-causally, or in the presence of missing neural observations. We address this challenge by developing DFINE, a new neural network that separates the model into dynamic and manifold latent factors, such that the dynamics can be modeled in tractable form. We show that DFINE achieves flexible nonlinear inference across diverse behaviors and brain regions. Further, despite enabling flexible inference unlike prior neural network models of population activity, DFINE also better predicts the behavior and neural activity, and better captures the latent neural manifold structure. DFINE can both enhance future neurotechnology and facilitate investigations across diverse domains of neuroscience.
17
Vahidi P, Sani OG, Shanechi MM. Modeling and dissociation of intrinsic and input-driven neural population dynamics underlying behavior. bioRxiv 2023:2023.03.14.532554. PMID: 36993213. PMCID: PMC10055042. DOI: 10.1101/2023.03.14.532554.
Abstract
Neural dynamics can reflect intrinsic dynamics or dynamic inputs, such as sensory inputs or inputs from other regions. To avoid misinterpreting temporally-structured inputs as intrinsic dynamics, dynamical models of neural activity should account for measured inputs. However, incorporating measured inputs remains elusive in joint dynamical modeling of neural-behavioral data, which is important for studying neural computations of a specific behavior. We first show how training dynamical models of neural activity while considering behavior but not input, or input but not behavior may lead to misinterpretations. We then develop a novel analytical learning method that simultaneously accounts for neural activity, behavior, and measured inputs. The method provides the new capability to prioritize the learning of intrinsic behaviorally relevant neural dynamics and dissociate them from both other intrinsic dynamics and measured input dynamics. In data from a simulated brain with fixed intrinsic dynamics that performs different tasks, the method correctly finds the same intrinsic dynamics regardless of task while other methods can be influenced by the change in task. In neural datasets from three subjects performing two different motor tasks with task instruction sensory inputs, the method reveals low-dimensional intrinsic neural dynamics that are missed by other methods and are more predictive of behavior and/or neural activity. The method also uniquely finds that the intrinsic behaviorally relevant neural dynamics are largely similar across the three subjects and two tasks whereas the overall neural dynamics are not. These input-driven dynamical models of neural-behavioral data can uncover intrinsic dynamics that may otherwise be missed.
18
Kanwisher N, Khosla M, Dobs K. Using artificial neural networks to ask 'why' questions of minds and brains. Trends Neurosci 2023; 46:240-254. PMID: 36658072. DOI: 10.1016/j.tins.2022.12.008.
Abstract
Neuroscientists have long characterized the properties and functions of the nervous system, and are increasingly succeeding in answering how brains perform the tasks they do. But the question 'why' brains work the way they do is asked less often. The new ability to optimize artificial neural networks (ANNs) for performance on human-like tasks now enables us to approach these 'why' questions by asking when the properties of networks optimized for a given task mirror the behavioral and neural characteristics of humans performing the same task. Here we highlight the recent success of this strategy in explaining why the visual and auditory systems work the way they do, at both behavioral and neural levels.
Affiliation(s)
- Nancy Kanwisher
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Meenakshi Khosla
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Katharina Dobs
- Department of Psychology, Justus Liebig University Giessen, Giessen, Germany; Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany
19
Lisberger SG. Toward a Biomimetic Neural Circuit Model of Sensory-Motor Processing. Neural Comput 2023; 35:384-412. PMID: 35671470. PMCID: PMC9971833. DOI: 10.1162/neco_a_01516.
Abstract
Computational models have been a mainstay of research on smooth pursuit eye movements in monkeys. Pursuit is a sensory-motor system that is driven by the visual motion of small targets. It creates a smooth eye movement that accelerates up to target speed and tracks the moving target essentially perfectly. In this review of my laboratory's research, I trace the development of computational models of pursuit eye movements from the early control-theory models to the most recent neural circuit models. I outline a combined experimental and computational plan to move the models to the next level. Finally, I explain why research on nonhuman primates is so critical to the development of the neural circuit models I think we need.
Affiliation(s)
- Stephen G. Lisberger
- Department of Neurobiology, Duke University School of Medicine, Durham, NC 27710, U.S.A
20
Izawa J, Higo N, Murata Y. Accounting for the valley of recovery during post-stroke rehabilitation training via a model-based analysis of macaque manual dexterity. Front Rehabil Sci 2022; 3:1042912. PMID: 36644290. PMCID: PMC9838193. DOI: 10.3389/fresc.2022.1042912.
Abstract
Background: True recovery, in which a stroke patient regains the same precise motor skills observed in prestroke conditions, is the fundamental goal of rehabilitation training. However, a transient drop in task performance during rehabilitation training after stroke, observed in human clinical outcomes as well as in macaque and squirrel monkey retrieval data, might prevent smooth transitions during recovery. This drop, i.e., the recovery valley, often occurs during the transition from compensatory skill to precision skill. Here, we sought the computational mechanisms behind such transitions and recovery. By analogy with motor skill learning, we considered the motor recovery process to be composed of spontaneous recovery and training-induced recovery. Specifically, we hypothesized that the interaction of these multiple skill-update processes might determine the profile of the recovery valley. Methods: A computational model of motor recovery was developed based on a state-space model of motor learning that incorporates a retention factor and interaction terms for training-induced recovery and spontaneous recovery. The model was fit to previously reported macaque motor recovery data in which the monkey practiced precision grip skills after a lesion in the sensorimotor cortex. Multiple computational models and the effects of each parameter were examined by model comparisons based on information criteria and sensitivity analyses of each parameter. Results: Both training-induced and spontaneous recovery were necessary to explain the behavioral data. Because these two factors contributed following logarithmic functions, training-induced recovery was effective only after spontaneous biological recovery had developed. Within the training-induced component, practice of the compensatory skill also contributed to recovery of the precision grip skill, as if there were a significant generalization effect of learning between the two skills. In addition, a retention factor was critical to explain the recovery profiles. Conclusions: We found that spontaneous recovery, training-induced recovery, retention factors, and interaction terms are crucial to explain recovery and recovery-valley profiles. This simulation-based examination of the model parameters provides suggestions for effective rehabilitation methods to prevent the recovery valley, such as plasticity-promoting medications, brain stimulation, and robotic rehabilitation technologies.
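The state-space idea described in the Methods can be sketched generically: a retention factor decays skill between sessions, and a training-induced update is gated by spontaneous recovery via an interaction term. All parameter values and the exact functional form below are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

def simulate_recovery(n_sessions=60, retention=0.99, lr=0.02, spont_rate=0.1):
    """State-space sketch of post-stroke skill recovery.

    skill[t] = retention * skill[t-1] + lr * spont[t] * error[t-1],
    where spont[t] (spontaneous biological recovery, saturating over
    sessions) gates the training-induced update, mimicking the
    interaction between the two recovery processes.
    """
    skill = np.zeros(n_sessions)
    spont = 1.0 - np.exp(-spont_rate * np.arange(n_sessions))
    for t in range(1, n_sessions):
        error = 1.0 - skill[t - 1]          # distance from full recovery
        skill[t] = retention * skill[t - 1] + lr * spont[t] * error
    return skill

skill = simulate_recovery()
# Early sessions improve slowly because training gains are gated by
# the still-developing spontaneous recovery.
```

The gating reproduces the qualitative finding that training is effective only once spontaneous recovery has developed; the full two-skill (compensatory vs. precision) generalization structure is omitted here.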
Affiliation(s)
- Jun Izawa
- Faculty of Engineering, Information and Systems, University of Tsukuba, Tsukuba, Japan
- Noriyuki Higo
- Neurorehabilitation Research Group, Human Informatics and Interaction Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan
- Yumi Murata
- Neurorehabilitation Research Group, Human Informatics and Interaction Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan
21
Cometa A, Falasconi A, Biasizzo M, Carpaneto J, Horn A, Mazzoni A, Micera S. Clinical neuroscience and neurotechnology: An amazing symbiosis. iScience 2022; 25:105124. PMID: 36193050. PMCID: PMC9526189. DOI: 10.1016/j.isci.2022.105124.
Abstract
In the last decades, clinical neuroscience found a novel ally in neurotechnologies, devices able to record and stimulate electrical activity in the nervous system. These technologies improved the ability to diagnose and treat neural disorders. Neurotechnologies are concurrently enabling a deeper understanding of healthy and pathological dynamics of the nervous system through stimulation and recordings during brain implants. On the other hand, clinical neurosciences are not only driving neuroengineering toward the most relevant clinical issues, but are also shaping the neurotechnologies thanks to clinical advancements. For instance, understanding the etiology of a disease informs the location of a therapeutic stimulation, but also the way stimulation patterns should be designed to be more effective/naturalistic. Here, we describe cases of fruitful integration such as Deep Brain Stimulation and cortical interfaces to highlight how this symbiosis between clinical neuroscience and neurotechnology is closer to a novel integrated framework than to a simple interdisciplinary interaction.
22
Small, correlated changes in synaptic connectivity may facilitate rapid motor learning. Nat Commun 2022; 13:5163. PMID: 36056006. PMCID: PMC9440011. DOI: 10.1038/s41467-022-32646-w.
Abstract
Animals rapidly adapt their movements to external perturbations, a process paralleled by changes in neural activity in the motor cortex. Experimental studies suggest that these changes originate from altered inputs (Hinput) rather than from changes in local connectivity (Hlocal), as neural covariance is largely preserved during adaptation. Since measuring synaptic changes in vivo remains very challenging, we used a modular recurrent neural network to qualitatively test this interpretation. As expected, Hinput resulted in small activity changes and largely preserved covariance. Surprisingly given the presumed dependence of stable covariance on preserved circuit connectivity, Hlocal led to only slightly larger changes in activity and covariance, still within the range of experimental recordings. This similarity is due to Hlocal only requiring small, correlated connectivity changes for successful adaptation. Simulations of tasks that impose increasingly larger behavioural changes revealed a growing difference between Hinput and Hlocal, which could be exploited when designing future experiments.
23
Saxena S, Russo AA, Cunningham J, Churchland MM. Motor cortex activity across movement speeds is predicted by network-level strategies for generating muscle activity. eLife 2022; 11:67620. PMID: 35621264. PMCID: PMC9197394. DOI: 10.7554/elife.67620.
Abstract
Learned movements can be skillfully performed at different paces. What neural strategies produce this flexibility? Can they be predicted and understood by network modeling? We trained monkeys to perform a cycling task at different speeds, and trained artificial recurrent networks to generate the empirical muscle-activity patterns. Network solutions reflected the principle that smooth well-behaved dynamics require low trajectory tangling. Network solutions had a consistent form, which yielded quantitative and qualitative predictions. To evaluate predictions, we analyzed motor cortex activity recorded during the same task. Responses supported the hypothesis that the dominant neural signals reflect not muscle activity, but network-level strategies for generating muscle activity. Single-neuron responses were better accounted for by network activity than by muscle activity. Similarly, neural population trajectories shared their organization not with muscle trajectories, but with network solutions. Thus, cortical activity could be understood based on the need to generate muscle activity via dynamics that allow smooth, robust control over movement speed.
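Trajectory tangling, the quantity invoked in the abstract, can be sketched with a standard pairwise definition: tangling is high wherever two similar states have dissimilar derivatives, which smooth well-behaved dynamics avoid. The normalization constant below is an assumed regularizer, and the exact numerical recipe is an illustration rather than the authors' analysis code.

```python
import numpy as np

def tangling(X, dt, eps=1e-6):
    """Tangling Q(t) of a trajectory X (time x dimensions):
    max over t' of |dX(t) - dX(t')|^2 / (|X(t) - X(t')|^2 + eps)."""
    dX = np.gradient(X, dt, axis=0)  # finite-difference state derivative
    d_state = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    d_deriv = ((dX[:, None, :] - dX[None, :, :]) ** 2).sum(-1)
    return (d_deriv / (d_state + eps)).max(axis=1)

# A smooth circle never revisits a state with a different velocity, so
# its tangling stays low; a self-intersecting figure-eight does not.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
dt = t[1] - t[0]
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
fig8 = np.stack([np.sin(t), np.sin(2 * t)], axis=1)
q_circle = tangling(circle, dt)
q_fig8 = tangling(fig8, dt)
```

The figure-eight's crossing point drives its maximum tangling orders of magnitude above the circle's, which is the sense in which low tangling constrains the network solutions.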
Affiliation(s)
- Shreya Saxena
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, United States; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States; Grossman Center for the Statistics of Mind, Columbia University, New York, United States; Center for Theoretical Neuroscience, Columbia University, New York, United States; Department of Statistics, Columbia University, New York, United States
- Abigail A Russo
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States; Department of Neuroscience, Columbia University, New York, United States
- John Cunningham
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States; Grossman Center for the Statistics of Mind, Columbia University, New York, United States; Center for Theoretical Neuroscience, Columbia University, New York, United States; Department of Statistics, Columbia University, New York, United States
- Mark M Churchland
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States; Grossman Center for the Statistics of Mind, Columbia University, New York, United States; Department of Neuroscience, Columbia University, New York, United States; Kavli Institute for Brain Science, Columbia University, New York, United States
24
Feldotto B, Eppler JM, Jimenez-Romero C, Bignamini C, Gutierrez CE, Albanese U, Retamino E, Vorobev V, Zolfaghari V, Upton A, Sun Z, Yamaura H, Heidarinejad M, Klijn W, Morrison A, Cruz F, McMurtrie C, Knoll AC, Igarashi J, Yamazaki T, Doya K, Morin FO. Deploying and Optimizing Embodied Simulations of Large-Scale Spiking Neural Networks on HPC Infrastructure. Front Neuroinform 2022; 16:884180. PMID: 35662903. PMCID: PMC9160925. DOI: 10.3389/fninf.2022.884180.
Abstract
Simulating the brain-body-environment trinity in closed loop is an attractive proposal to investigate how perception, motor activity and interactions with the environment shape brain activity, and vice versa. The relevance of this embodied approach, however, hinges entirely on the modeled complexity of the various simulated phenomena. In this article, we introduce a software framework that is capable of simulating large-scale, biologically realistic networks of spiking neurons embodied in a biomechanically accurate musculoskeletal system that interacts with a physically realistic virtual environment. We deploy this framework on the high performance computing resources of the EBRAINS research infrastructure and we investigate the scaling performance by distributing computation across an increasing number of interconnected compute nodes. Our architecture is based on requested compute nodes as well as persistent virtual machines; this provides a high-performance simulation environment that is accessible to multi-domain users without expert knowledge, with a view to enable users to instantiate and control simulations at custom scale via a web-based graphical user interface. Our simulation environment, entirely open source, is based on the Neurorobotics Platform developed in the context of the Human Brain Project, and the NEST simulator. We characterize the capabilities of our parallelized architecture for large-scale embodied brain simulations through two benchmark experiments, by investigating the effects of scaling compute resources on performance defined in terms of experiment runtime, brain instantiation and simulation time. The first benchmark is based on a large-scale balanced network, while the second one is a multi-region embodied brain simulation consisting of more than a million neurons and a billion synapses. Both benchmarks clearly show how scaling compute resources improves the aforementioned performance metrics in a near-linear fashion. The second benchmark in particular is indicative of both the potential and limitations of a highly distributed simulation in terms of a trade-off between computation speed and resource cost. Our simulation architecture is being prepared to be accessible for everyone as an EBRAINS service, thereby offering a community-wide tool with a unique workflow that should provide momentum to the investigation of closed-loop embodiment within the computational neuroscience community.
Affiliation(s)
- Benedikt Feldotto
- Robotics, Artificial Intelligence and Real-Time Systems, Faculty of Informatics, Technical University of Munich, Munich, Germany
- Jochen Martin Eppler
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Cristian Jimenez-Romero
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Carlos Enrique Gutierrez
- Neural Computation Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan
- Ugo Albanese
- Department of Excellence in Robotics and AI, The BioRobotics Institute, Scuola Superiore Sant'Anna, Pontedera, Italy
- Eloy Retamino
- Department of Computer Architecture and Technology, Research Centre for Information and Communication Technologies, University of Granada, Granada, Spain
- Viktor Vorobev
- Robotics, Artificial Intelligence and Real-Time Systems, Faculty of Informatics, Technical University of Munich, Munich, Germany
- Vahid Zolfaghari
- Robotics, Artificial Intelligence and Real-Time Systems, Faculty of Informatics, Technical University of Munich, Munich, Germany
- Alex Upton
- Swiss National Supercomputing Centre (CSCS), ETH Zurich, Lugano, Switzerland
- Zhe Sun
- Image Processing Research Team, Center for Advanced Photonics, RIKEN, Wako, Japan; Computational Engineering Applications Unit, Head Office for Information Systems and Cybersecurity, RIKEN, Wako, Japan
- Hiroshi Yamaura
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
- Morteza Heidarinejad
- Computational Engineering Applications Unit, Head Office for Information Systems and Cybersecurity, RIKEN, Wako, Japan
- Wouter Klijn
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Abigail Morrison
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany; Jülich Research Centre, Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute I, Jülich, Germany; Computer Science 3-Software Engineering, RWTH Aachen University, Aachen, Germany
- Felipe Cruz
- Swiss National Supercomputing Centre (CSCS), ETH Zurich, Lugano, Switzerland
- Colin McMurtrie
- Swiss National Supercomputing Centre (CSCS), ETH Zurich, Lugano, Switzerland
- Alois C. Knoll
- Robotics, Artificial Intelligence and Real-Time Systems, Faculty of Informatics, Technical University of Munich, Munich, Germany
- Jun Igarashi
- Computational Engineering Applications Unit, Head Office for Information Systems and Cybersecurity, RIKEN, Wako, Japan; Center for Computational Science, RIKEN, Kobe, Japan
- Tadashi Yamazaki
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
- Kenji Doya
- Neural Computation Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan
- Fabrice O. Morin
- Robotics, Artificial Intelligence and Real-Time Systems, Faculty of Informatics, Technical University of Munich, Munich, Germany
25
Abstract
Investigating how an artificial network of neurons controls a simulated arm suggests that rotational patterns of activity in the motor cortex may rely on sensory feedback from the moving limb.
Affiliation(s)
- Omid G Sani
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, United States
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, United States; Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, United States; Neuroscience Graduate Program, University of Southern California, Los Angeles, United States