1. De novo motor learning creates structure in neural activity that shapes adaptation. Nat Commun 2024; 15:4084. PMID: 38744847; PMCID: PMC11094149; DOI: 10.1038/s41467-024-48008-7.
Abstract
Animals can quickly adapt learned movements to external perturbations, and their existing motor repertoire likely influences their ease of adaptation. Long-term learning causes lasting changes in neural connectivity, which shapes the activity patterns that can be produced during adaptation. Here, we examined how a neural population's existing activity patterns, acquired through de novo learning, affect subsequent adaptation by modeling motor cortical neural population dynamics with recurrent neural networks. We trained networks on different motor repertoires comprising varying numbers of movements, which they acquired following various learning experiences. Networks with multiple movements had more constrained and robust dynamics, which were associated with more defined neural 'structure', that is, organization in the available population activity patterns. This structure facilitated adaptation, but only when the changes imposed by the perturbation were congruent with the organization of the inputs and the structure in neural activity acquired during de novo learning. These results highlight trade-offs in skill acquisition and demonstrate how different learning experiences can shape the geometrical properties of neural population activity and subsequent adaptation.
2. A theory of brain-computer interface learning via low-dimensional control. bioRxiv 2024:2024.04.18.589952. PMID: 38712193; PMCID: PMC11071278; DOI: 10.1101/2024.04.18.589952.
Abstract
A remarkable demonstration of the flexibility of mammalian motor systems is primates' ability to learn to control brain-computer interfaces (BCIs). This constitutes a completely novel motor behavior, yet primates are capable of learning to control BCIs under a wide range of conditions. BCIs with carefully calibrated decoders, for example, can be learned with only minutes to hours of practice. With a few weeks of practice, even BCIs with randomly constructed decoders can be learned. What are the biological substrates of this learning process? Here, we develop a theory based on a re-aiming strategy, whereby learning operates within a low-dimensional subspace of task-relevant inputs driving the local population of recorded neurons. Through comprehensive numerical and formal analysis, we demonstrate that this theory can provide a unifying explanation for disparate phenomena previously reported in three different BCI learning tasks, and we derive a novel experimental prediction that we verify with previously published data. By explicitly modeling the underlying neural circuitry, the theory reveals an interpretation of these phenomena in terms of biological constraints on neural activity.
3. Transitioning from global to local computational strategies during brain-machine interface learning. Front Neurosci 2024; 18:1371107. PMID: 38707591; PMCID: PMC11066153; DOI: 10.3389/fnins.2024.1371107.
Abstract
When learning to use a brain-machine interface (BMI), the brain modulates neuronal activity patterns, exploring and exploiting the state space defined by their neural manifold. Neurons directly involved in BMI control (i.e., direct neurons) can display marked changes in their firing patterns during BMI learning. However, the extent of firing pattern changes in neurons not directly involved in BMI control (i.e., indirect neurons) remains unclear. To clarify this issue, we localized direct and indirect neurons to separate hemispheres in a task designed to bilaterally engage these hemispheres while animals learned to control the position of a platform with their neural signals. As they became expert in the task, animals that learned to control the platform and improved their performance shifted from a global strategy, in which both direct and indirect neurons modified their firing patterns, to a local strategy, in which only direct neurons modified their firing rates. Animals that did not learn the BMI task did not shift from a global to a local strategy. These results provide important insights into what differentiates successful from unsuccessful BMI learning and into the computational mechanisms adopted by the neurons.
4. Learning leaves a memory trace in motor cortex. Curr Biol 2024; 34:1519-1531.e4. PMID: 38531360; PMCID: PMC11097210; DOI: 10.1016/j.cub.2024.03.003.
Abstract
How are we able to learn new behaviors without disrupting previously learned ones? To understand how the brain achieves this, we used a brain-computer interface (BCI) learning paradigm, which enables us to detect the presence of a memory of one behavior while performing another. We found that learning to use a new BCI map altered the neural activity that monkeys produced when they returned to using a familiar BCI map in a way that was specific to the learning experience. That is, learning left a "memory trace" in the primary motor cortex. This memory trace coexisted with proficient performance under the familiar map, primarily by altering neural activity in dimensions that did not impact behavior. Forming memory traces might be how the brain is able to provide for the joint learning of multiple behaviors without interference.
5. From innate to instructed: A new look at perceptual decision-making. Curr Opin Neurobiol 2024; 86:102871. PMID: 38569230; DOI: 10.1016/j.conb.2024.102871.
Abstract
Understanding how subjects perceive sensory stimuli in their environment and use this information to guide appropriate actions is a major challenge in neuroscience. To study perceptual decision-making in animals, researchers use tasks that either probe spontaneous responses to stimuli (often described as "naturalistic") or train animals to associate stimuli with experimenter-defined responses. Spontaneous decisions rely on animals' pre-existing knowledge, while trained tasks offer greater versatility, albeit often at the cost of extensive training. Here, we review emerging approaches to investigate perceptual decision-making using both spontaneous and trained behaviors, highlighting their strengths and limitations. Additionally, we propose how trained decision-making tasks could be improved to achieve faster learning and a more generalizable understanding of task rules.
6. Preparatory activity and the expansive null-space. Nat Rev Neurosci 2024; 25:213-236. PMID: 38443626; DOI: 10.1038/s41583-024-00796-z.
Abstract
The study of the cortical control of movement experienced a conceptual shift over recent decades, as the basic currency of understanding shifted from single-neuron tuning towards population-level factors and their dynamics. This transition was informed by a maturing understanding of recurrent networks, where mechanism is often characterized in terms of population-level factors. By estimating factors from data, experimenters could test network-inspired hypotheses. Central to such hypotheses are 'output-null' factors that do not directly drive motor outputs yet are essential to the overall computation. In this Review, we highlight how the hypothesis of output-null factors was motivated by the venerable observation that motor-cortex neurons are active during movement preparation, well before movement begins. We discuss how output-null factors then became similarly central to understanding neural activity during movement. We discuss how this conceptual framework provided key analysis tools, making it possible for experimenters to address long-standing questions regarding motor control. We highlight an intriguing trend: as experimental and theoretical discoveries accumulate, the range of computational roles hypothesized to be subserved by output-null factors continues to expand.
7. Adaptation and learning as strategies to maximize reward in neurofeedback tasks. Front Hum Neurosci 2024; 18:1368115. PMID: 38590363; PMCID: PMC11000125; DOI: 10.3389/fnhum.2024.1368115.
Abstract
Introduction: Adaptation and learning have been observed to contribute to the acquisition of new motor skills and are used as strategies to cope with changing environments. However, it is hard to determine the relative contribution of each when executing goal-directed motor tasks. This study explores the dynamics of neural activity during a center-out reaching task with continuous visual feedback under the influence of rotational perturbations. Methods: Results for a brain-computer interface (BCI) task performed by two non-human primate (NHP) subjects are compared to simulations from a reinforcement learning agent performing an analogous task. We characterized baseline activity and compared it to the activity after rotational perturbations of different magnitudes were introduced. We employed principal component analysis (PCA) to analyze the spiking activity driving the cursor in the NHP BCI task as well as the activation of the neural network of the reinforcement learning agent. Results and discussion: Our analyses reveal that for both the NHPs and the reinforcement learning agent, the task-relevant neural manifold is isomorphic with the task. However, for the NHPs the manifold is largely preserved for all rotational perturbations explored, and adaptation of neural activity occurs within this manifold as rotations are compensated by reassignment of regions of the neural space in an angular pattern that cancels these rotations. In contrast, retraining the reinforcement learning agent to reach the targets after rotation results in substantial modifications of the underlying neural manifold. Our findings demonstrate that NHPs adapt their existing neural dynamic repertoire in a quantitatively precise manner to account for perturbations of different magnitudes, and they do so in a way that obviates the need for extensive learning.
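The PCA step described in the Methods can be sketched in a few lines. The data here are synthetic, and the matrix shape (time bins × neurons) and preprocessing are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical firing-rate matrix: 200 time bins x 40 neurons, with
# low-dimensional structure embedded in the population activity.
latent = rng.standard_normal((200, 3))          # 3 latent factors
mixing = rng.standard_normal((3, 40))
rates = latent @ mixing + 0.1 * rng.standard_normal((200, 40))

# PCA via SVD of the mean-centered rate matrix
X = rates - rates.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
var_explained = s**2 / np.sum(s**2)

# The top 3 components capture nearly all of the variance,
# revealing the low-dimensional manifold underlying the population.
print(np.round(var_explained[:4], 3))
```

In this kind of analysis, the fraction of variance captured by the leading components is what indicates how low-dimensional the task-relevant manifold is.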
8. Assistive sensory-motor perturbations influence learned neural representations. bioRxiv 2024:2024.03.20.585972. PMID: 38562772; PMCID: PMC10983972; DOI: 10.1101/2024.03.20.585972.
Abstract
Task errors are used to learn and refine motor skills. We investigated how task assistance influences learned neural representations using Brain-Computer Interfaces (BCIs), which map neural activity into movement via a decoder. We analyzed motor cortex activity as monkeys practiced BCI with a decoder that adapted to improve or maintain performance over days. Population dimensionality remained constant or increased with learning, counter to trends with non-adaptive BCIs. Yet, over time, task information was contained in a smaller subset of neurons or population modes. Moreover, task information was ultimately stored in neural modes that occupied a small fraction of the population variance. An artificial neural network model suggests the adaptive decoders contribute to forming these compact neural representations. Our findings show that assistive decoders manipulate error information used for long-term learning computations, like credit assignment, which informs our understanding of motor learning and has implications for designing real-world BCIs.
9. Measuring instability in chronic human intracortical neural recordings towards stable, long-term brain-computer interfaces. bioRxiv 2024:2024.02.29.582733. PMID: 38496552; PMCID: PMC10942277; DOI: 10.1101/2024.02.29.582733.
Abstract
Intracortical brain-computer interfaces (iBCIs) enable people with tetraplegia to gain intuitive cursor control from movement intentions. To translate to practical use, iBCIs should provide reliable performance for extended periods of time. However, performance begins to degrade as the relationship between kinematic intention and recorded neural activity shifts compared to when the decoder was initially trained. In addition to developing decoders to better handle long-term instability, identifying when to recalibrate will also optimize performance. We propose a method to measure instability in neural data without needing to label user intentions. Longitudinal data were analyzed from two BrainGate2 participants with tetraplegia as they used fixed decoders to control a computer cursor spanning 142 days and 28 days, respectively. We demonstrate a measure of instability that correlates with changes in closed-loop cursor performance solely based on the recorded neural activity (Pearson r = 0.93 and 0.72, respectively). This result suggests a strategy to infer online iBCI performance from neural data alone and to determine when recalibration should take place for practical long-term use.
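The abstract does not specify the instability measure itself, so the following is only a plausible sketch, not the authors' method: an unsupervised distance between each day's neural feature statistics and those of the decoder-training day, correlated with closed-loop performance. The data are synthetic and constructed so that performance degrades with drift:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic multi-day recordings: per-channel mean activity drifts away
# from day 0 (the decoder-training day), and simulated closed-loop
# performance degrades with that drift, plus measurement noise.
n_days, n_ch = 30, 96
base = rng.standard_normal(n_ch)
direction = rng.standard_normal(n_ch)
means = np.array([base + 0.03 * d * direction
                  + 0.05 * rng.standard_normal(n_ch) for d in range(n_days)])

# Unsupervised instability: distance of each day's feature statistics
# from the training day -- no user-intention labels required.
instability = np.linalg.norm(means - means[0], axis=1)
performance = 1.0 / (1.0 + 0.1 * instability) + 0.02 * rng.standard_normal(n_days)

# Pearson correlation between the neural-only measure and performance
r = np.corrcoef(instability, performance)[0, 1]
print(round(r, 2))  # strongly negative: instability predicts degraded control
```

A strongly negative correlation of this kind is what would let an online system trigger recalibration from neural data alone.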
10. Multimodal subspace identification for modeling discrete-continuous spiking and field potential population activity. J Neural Eng 2024; 21:026001. PMID: 38016450; PMCID: PMC10913727; DOI: 10.1088/1741-2552/ad1053.
Abstract
Objective: Learning dynamical latent state models for multimodal spiking and field potential activity can reveal their collective low-dimensional dynamics and enable better decoding of behavior through multimodal fusion. Toward this goal, developing unsupervised learning methods that are computationally efficient is important, especially for real-time learning applications such as brain-machine interfaces (BMIs). However, efficient learning remains elusive for multimodal spike-field data due to their heterogeneous discrete-continuous distributions and different timescales. Approach: Here, we develop a multiscale subspace identification (multiscale SID) algorithm that enables computationally efficient learning for modeling and dimensionality reduction of multimodal discrete-continuous spike-field data. We describe the spike-field activity as combined Poisson and Gaussian observations, for which we derive a new analytical SID method. Importantly, we also introduce a novel constrained optimization approach to learn valid noise statistics, which is critical for multimodal statistical inference of the latent state, neural activity, and behavior. We validate the method using numerical simulations and with spiking and local field potential population activity recorded during a naturalistic reach and grasp behavior. Main results: We find that multiscale SID accurately learned dynamical models of spike-field signals and extracted low-dimensional dynamics from these multimodal signals. Further, it fused multimodal information, thus better identifying the dynamical modes and predicting behavior compared to using a single modality. Finally, compared to existing multiscale expectation-maximization learning for Poisson-Gaussian observations, multiscale SID had a much lower training time while being better at identifying the dynamical modes and having better or similar accuracy in predicting neural activity and behavior. Significance: Overall, multiscale SID is an accurate learning method that is particularly beneficial when efficient learning is of interest, such as for online adaptive BMIs to track non-stationary dynamics or for reducing offline training time in neuroscience investigations.
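For intuition about the subspace-identification family of methods, a classical stochastic SID step for the Gaussian-only case can be sketched as follows. This minimal Ho-Kalman-style version is not the authors' algorithm (multiscale SID additionally handles Poisson spike observations and constrains the noise statistics), only an illustration of how latent linear dynamics are recovered analytically from output autocovariances:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth latent linear-Gaussian dynamics: x_{t+1} = A x_t + w_t, y_t = C x_t + v_t
th = 0.3
A = 0.95 * np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
C = rng.standard_normal((5, 2))
T, n, p = 100_000, 2, 5

x = np.zeros(n)
Y = np.empty((T, p))
for t in range(T):
    x = A @ x + 0.5 * rng.standard_normal(n)
    Y[t] = C @ x + 0.5 * rng.standard_normal(p)

# Sample output autocovariances Lam_k = E[y_{t+k} y_t^T]
Y = Y - Y.mean(axis=0)
def lam(k):
    return (Y[k:].T @ Y[:T - k]) / (T - k)

# Block-Hankel matrix of autocovariances (Ho-Kalman / stochastic SID)
i = 3
H = np.block([[lam(r + c + 1) for c in range(i)] for r in range(i)])

# SVD gives the extended observability matrix; A follows from its
# shift-invariance, C from its first block of rows.
U, s, Vt = np.linalg.svd(H)
Obs = U[:, :n] * np.sqrt(s[:n])
A_hat = np.linalg.pinv(Obs[:-p]) @ Obs[p:]
C_hat = Obs[:p]

print(np.sort(np.abs(np.linalg.eigvals(A_hat))))  # both moduli near 0.95
```

The appeal of this analytical route, which the paper exploits, is that it avoids iterative likelihood maximization such as expectation-maximization, which is why training time drops so sharply.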
11. Category representation in primary visual cortex after visual perceptual learning. Cogn Neurodyn 2024; 18:23-35. PMID: 38406201; PMCID: PMC10881456; DOI: 10.1007/s11571-022-09926-8.
Abstract
Visual perceptual learning (VPL) leads to long-term enhancement of visual task performance. Subjects are often trained to link different visual stimuli to several response options, as in the widely used two-alternative forced choice (2AFC) task, which involves an implicit categorical decision. The enhancement of performance has been related to specific changes in neural activity, but few studies have investigated the effects of categorical responding on these changes. Here we investigated whether neural activity exhibits categorical characteristics when subjects are required to respond to visual stimuli in a categorical manner during VPL. We analyzed the neural activity of two monkeys in a contour-detection VPL task. We found that neural activity in primary visual cortex (V1) converges to one pattern when the monkey can detect the contour and to another pattern when it cannot, exhibiting a form of category learning in which the neural representations of detectable contours become less selective for the number of bars forming the contour and diverge from the representations of undetectable contours.
12. Rhesus monkeys learn to control a directional-key inspired brain machine interface via bio-feedback. PLoS One 2024; 19:e0286742. PMID: 38232123; DOI: 10.1371/journal.pone.0286742.
Abstract
Brain machine interfaces (BMI) connect brains directly to the outside world, bypassing natural neural systems and actuators. Neuronal-activity-to-motion transformation algorithms allow applications such as control of prosthetics or computer cursors. These algorithms lie within a spectrum between bio-mimetic control and bio-feedback control. The bio-mimetic approach relies on increasingly complex algorithms to decode neural activity by mimicking the natural relationship between the neural system and the actuator, with a focus on machine learning: the supervised fitting of decoder parameters. The bio-feedback approach, on the other hand, uses simple algorithms and relies primarily on user learning, which may take some time but can facilitate control of novel, non-biological appendages. An increasing amount of work has focused on the arguably more successful bio-mimetic approach. However, as chronic recordings have become more accessible and novel appendages such as computer cursors have become more universal, users can more easily spend the time learning in a bio-feedback control paradigm. We believe a simple approach that leverages user learning and makes few assumptions will provide users with good control ability. To test the feasibility of this idea, we implemented a simple firing-rate-to-motion correspondence rule, assigning groups of neurons to virtual "directional keys" for control of a 2D cursor. Though not strictly required, to facilitate initial control we selected neurons with similar preferred directions for each group. The groups of neurons were kept the same across multiple recording sessions to allow learning. Two Rhesus monkeys used this BMI to perform a center-out cursor movement task. After about a week of training, the monkeys performed the task better and neuronal signal patterns changed on a group basis, indicating learning. While our experiments did not compare this bio-feedback BMI to bio-mimetic BMIs, the results demonstrate the feasibility of our control paradigm and pave the way for further research in multi-dimensional bio-feedback BMIs.
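A firing-rate-to-key correspondence rule of the kind described can be illustrated with a minimal sketch. The group assignments, baseline rates, threshold, and gain below are hypothetical values chosen for illustration, not the paper's calibration:

```python
import numpy as np

# Hypothetical assignment of 12 recorded units to four virtual directional keys
key_groups = {"up": [0, 1, 2], "down": [3, 4, 5],
              "left": [6, 7, 8], "right": [9, 10, 11]}
key_vectors = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def cursor_step(rates, baseline, threshold=1.5, gain=1.0):
    """Press each 'key' whose group firing rate exceeds its baseline by a
    threshold factor, and move the cursor by the sum of pressed directions."""
    dx = dy = 0.0
    for key, units in key_groups.items():
        if rates[units].mean() > baseline[units].mean() * threshold:
            vx, vy = key_vectors[key]
            dx += gain * vx
            dy += gain * vy
    return dx, dy

baseline = np.full(12, 10.0)            # assumed 10 Hz resting rates
rates = baseline.copy()
rates[[0, 1, 2]] = 20.0                 # the "up" group fires strongly
rates[[9, 10, 11]] = 18.0               # ...and so does the "right" group
print(cursor_step(rates, baseline))     # cursor moves up and to the right
```

Because the rule is fixed and transparent, all improvement must come from the user modulating group rates, which is exactly the bio-feedback premise described above.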
13. Dynamical constraints on neural population activity. bioRxiv 2024:2024.01.03.573543. PMID: 38260549; PMCID: PMC10802336; DOI: 10.1101/2024.01.03.573543.
Abstract
The manner in which neural activity unfolds over time is thought to be central to sensory, motor, and cognitive functions in the brain. Network models have long posited that the brain's computations involve time courses of activity that are shaped by the underlying network. A prediction from this view is that the activity time courses should be difficult to violate. We leveraged a brain-computer interface (BCI) to challenge monkeys to violate the naturally-occurring time courses of neural population activity that we observed in motor cortex. This included challenging animals to traverse the natural time course of neural activity in a time-reversed manner. Animals were unable to violate the natural time courses of neural activity when directly challenged to do so. These results provide empirical support for the view that activity time courses observed in the brain indeed reflect the underlying network-level computational mechanisms that they are believed to implement.
14. Efficient, continual, and generalized learning in the brain - neural mechanism of Mental Schema 2.0. Rev Neurosci 2023; 34:839-868. PMID: 36960579; DOI: 10.1515/revneuro-2022-0137.
Abstract
There has been tremendous progress in artificial neural networks (ANNs) over the past decade; however, the gap between ANNs and the biological brain as a learning device remains large. With the goal of closing this gap, this paper reviews learning mechanisms in the brain by focusing on three important issues in ANN research: efficiency, continuity, and generalization. We first discuss how the brain uses a variety of self-organizing mechanisms to maximize learning efficiency, with a focus on the role of the brain's spontaneous activity in shaping synaptic connections to facilitate spatiotemporal learning and numerical processing. Then, we examine the neuronal mechanisms that enable lifelong continual learning, with a focus on memory replay during sleep and its implementation in brain-inspired ANNs. Finally, we explore how the brain generalizes learned knowledge to new situations, particularly from the mathematical perspective of topological generalization. Beyond a systematic comparison of learning mechanisms between the brain and ANNs, we propose "Mental Schema 2.0," a new computational property underlying the brain's unique learning ability that can be implemented in ANNs.
15. Exploring the Role of Neuroplasticity in Development, Aging, and Neurodegeneration. Brain Sci 2023; 13:1610. PMID: 38137058; PMCID: PMC10741468; DOI: 10.3390/brainsci13121610.
Abstract
Neuroplasticity refers to the ability of the brain to reorganize and modify its neural connections in response to environmental stimuli, experience, learning, injury, and disease processes. It encompasses a range of mechanisms, including changes in synaptic strength and connectivity, the formation of new synapses, alterations in the structure and function of neurons, and the generation of new neurons. Neuroplasticity plays a crucial role in developing and maintaining brain function, including learning and memory, as well as in recovery from brain injury and adaptation to environmental changes. In this review, we explore the vast potential of neuroplasticity in various aspects of brain function across the lifespan and in the context of disease. Changes in the aging brain and the significance of neuroplasticity in maintaining cognitive function later in life will also be reviewed. Finally, we will discuss common mechanisms associated with age-related neurodegenerative processes (including protein aggregation and accumulation, mitochondrial dysfunction, oxidative stress, and neuroinflammation) and how these processes can be mitigated, at least partially, by non-invasive and non-pharmacologic lifestyle interventions aimed at promoting and harnessing neuroplasticity.
16. Tracking the Dynamic Neural Connectivity via Conjugate Gradient Optimization. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. PMID: 38082697; DOI: 10.1109/embc40787.2023.10340664.
Abstract
Neural connectivity describes how neuronal populations coordinate to create cognitive and behavioral functions. This connectivity is dynamic: population spiking responses to stimuli or intentions change over time. Brain-machine interfaces (BMIs) provide a framework for studying dynamic neural connectivity. In BMI research, point-process modeling is a powerful technique for analyzing single-neuron tuning, and the generalized linear model (GLM), used as an encoding model, can incorporate both kinematic tuning and neural connectivity. Quantifying and tracking dynamic neural connectivity can contribute to a computational understanding of how brain functions are generated. However, most previous work has focused on single-neuron adaptation to kinematics. When a neuron is significantly modulated by other neurons in a task, the log-likelihood function of its observations can become narrow in some dimensions, and existing gradient-based methods cannot reach the optimum in a fast, adaptive manner. In this work, to maximize the likelihood of the observations and obtain the dynamic connectivity tuning parameters, we propose a conjugate gradient-based encoding model (CGE). We evaluate the CGE likelihood optimization on real experimental data recorded under manual control and brain control. The results show that the proposed CGE performs better at tracking dynamic connectivity tuning parameters and at modeling neural encoding. Clinical relevance: not directly related.
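The core computation, conjugate-gradient maximization of a GLM encoding model's likelihood, can be sketched on simulated data. This uses SciPy's generic CG optimizer on a Poisson GLM rather than the authors' CGE, so it is an illustration of the optimization principle, not their method:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Simulated Poisson GLM encoding: spike counts y_t ~ Poisson(exp(x_t . w)),
# where x_t could hold kinematic and connectivity covariates.
T, d = 5000, 4
X = rng.standard_normal((T, d)) * 0.5
w_true = np.array([0.8, -0.5, 0.3, 0.1])
y = rng.poisson(np.exp(X @ w_true))

def negloglik(w):
    # Negative Poisson log-likelihood (dropping the w-independent log y! term)
    eta = X @ w
    return np.sum(np.exp(eta) - y * eta)

def grad(w):
    return X.T @ (np.exp(X @ w) - y)

# Conjugate-gradient descent on the convex negative log-likelihood
res = minimize(negloglik, np.zeros(d), jac=grad, method="CG")
print(np.round(res.x, 2))  # close to w_true
```

Conjugate gradient is attractive here because the log-likelihood can be narrow in some dimensions, where plain gradient descent zig-zags; CG's conjugate search directions handle such elongated surfaces far more efficiently.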
17. Neural Plasticity in Sensorimotor Brain-Machine Interfaces. Annu Rev Biomed Eng 2023; 25:51-76. PMID: 36854262; PMCID: PMC10791144; DOI: 10.1146/annurev-bioeng-110220-110833.
Abstract
Brain-machine interfaces (BMIs) aim to treat sensorimotor neurological disorders by creating artificial motor and/or sensory pathways. Introducing artificial pathways creates new relationships between sensory input and motor output, which the brain must learn in order to gain dexterous control. This review highlights the role of learning in BMIs that restore movement and sensation, and discusses how BMI design may influence neural plasticity and performance. The close integration of plasticity in sensory and motor function influences the design of artificial pathways and will be an essential consideration for bidirectional devices that restore both sensory and motor function.
18. Multimodal subspace identification for modeling discrete-continuous spiking and field potential population activity. bioRxiv 2023:2023.05.26.542509. PMID: 37398400; PMCID: PMC10312539; DOI: 10.1101/2023.05.26.542509.
Abstract
Learning dynamical latent state models for multimodal spiking and field potential activity can reveal their collective low-dimensional dynamics and enable better decoding of behavior through multimodal fusion. Toward this goal, developing unsupervised learning methods that are computationally efficient is important, especially for real-time learning applications such as brain-machine interfaces (BMIs). However, efficient learning remains elusive for multimodal spike-field data due to their heterogeneous discrete-continuous distributions and different timescales. Here, we develop a multiscale subspace identification (multiscale SID) algorithm that enables computationally efficient modeling and dimensionality reduction for multimodal discrete-continuous spike-field data. We describe the spike-field activity as combined Poisson and Gaussian observations, for which we derive a new analytical subspace identification method. Importantly, we also introduce a novel constrained optimization approach to learn valid noise statistics, which is critical for multimodal statistical inference of the latent state, neural activity, and behavior. We validate the method using numerical simulations and spike-LFP population activity recorded during a naturalistic reach-and-grasp behavior. We find that multiscale SID accurately learned dynamical models of spike-field signals and extracted low-dimensional dynamics from these multimodal signals. Further, it fused multimodal information, thus identifying the dynamical modes and predicting behavior better than either modality alone. Finally, compared to existing multiscale expectation-maximization learning for Poisson-Gaussian observations, multiscale SID had a much lower computational cost while identifying the dynamical modes more accurately and predicting neural activity with better or similar accuracy. Overall, multiscale SID is an accurate learning method that is particularly beneficial when efficient learning is of interest.
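The abstract's central move, tying Poisson spike counts and Gaussian field potentials to one shared low-dimensional latent state, can be illustrated with a toy simulation. This is a minimal sketch of the classical subspace-identification idea (SVD of a lagged cross-covariance), not the authors' multiscale SID algorithm; all dimensions and parameters below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multimodal simulation: a 2-D latent state drives both Gaussian
# "field" channels and Poisson "spike" channels (dimensions invented).
T, nx, ny, ns = 20000, 2, 8, 6
A = np.array([[0.95, 0.1], [-0.1, 0.95]])      # latent dynamics
C = rng.standard_normal((ny, nx))               # field loading
D = 0.5 * rng.standard_normal((ns, nx))         # spike loading

x = np.zeros((T, nx))
for t in range(1, T):
    x[t] = A @ x[t - 1] + 0.3 * rng.standard_normal(nx)
fields = x @ C.T + 0.2 * rng.standard_normal((T, ny))
spikes = rng.poisson(np.exp(x @ D.T - 1.0))     # low spike counts

# Crude subspace step: square-root-transform the counts (variance
# stabilising), stack both modalities, and take the SVD of the lag-1
# cross-covariance, as in classical stochastic subspace identification.
obs = np.hstack([fields, np.sqrt(spikes)])
obs -= obs.mean(0)
H = obs[1:].T @ obs[:-1] / (T - 1)              # lag-1 cross-covariance
U, svals, _ = np.linalg.svd(H)
latent_basis = U[:, :nx]                        # estimated latent subspace

# Because all channels share a 2-D latent state, the top two singular
# values should dominate the spectrum.
dominance = svals[:nx].sum() / svals.sum()
```

The same low-rank dominance would fail to appear if the spike and field channels were driven by unrelated processes, which is the sense in which the shared subspace "fuses" the modalities.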
|
19
|
Self-organization of songbird neural sequences during social isolation. eLife 2023; 12:e77262. [PMID: 37252761 PMCID: PMC10229124 DOI: 10.7554/elife.77262]
Abstract
Behaviors emerge via a combination of experience and innate predispositions. As the brain matures, it undergoes major changes in cellular, network, and functional properties that can be due to sensory experience as well as developmental processes. In normal birdsong learning, neural sequences emerge to control song syllables learned from a tutor. Here, we disambiguate the role of tutor experience and development in neural sequence formation by delaying exposure to a tutor. Using functional calcium imaging, we observe neural sequences in the absence of tutoring, demonstrating that tutor experience is not necessary for the formation of sequences. However, after exposure to a tutor, pre-existing sequences can become tightly associated with new song syllables. Since we delayed tutoring, only half our birds learned new syllables following tutor exposure. The birds that failed to learn were the birds in which pre-tutoring neural sequences were most 'crystallized,' that is, already tightly associated with their (untutored) song.
|
20
|
De novo motor learning creates structure in neural activity space that shapes adaptation. bioRxiv 2023:2023.05.23.541925. [PMID: 37293081 PMCID: PMC10245862 DOI: 10.1101/2023.05.23.541925]
Abstract
Animals can quickly adapt learned movements in response to external perturbations. Motor adaptation is likely influenced by an animal's existing movement repertoire, but the nature of this influence is unclear. Long-term learning causes lasting changes in neural connectivity which determine the activity patterns that can be produced. Here, we sought to understand how a neural population's activity repertoire, acquired through long-term learning, affects short-term adaptation by modeling motor cortical neural population dynamics during de novo learning and subsequent adaptation using recurrent neural networks. We trained these networks on different motor repertoires comprising varying numbers of movements. Networks with multiple movements had more constrained and robust dynamics, which were associated with more defined neural 'structure'-organization created by the neural population activity patterns corresponding to each movement. This structure facilitated adaptation, but only when small changes in motor output were required, and when the structure of the network inputs, the neural activity space, and the perturbation were congruent. These results highlight trade-offs in skill acquisition and demonstrate how prior experience and external cues during learning can shape the geometrical properties of neural population activity as well as subsequent adaptation.
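The "more constrained" dynamics described here are commonly quantified by effective dimensionality. As one concrete metric (a reader-supplied choice, not necessarily the authors'), the participation ratio of the activity covariance distinguishes unstructured population activity from activity confined to a low-dimensional manifold:

```python
import numpy as np

def participation_ratio(activity: np.ndarray) -> float:
    """Effective dimensionality of a (time x neurons) activity matrix:
    PR = (sum of eigenvalues)^2 / sum of squared eigenvalues of the
    covariance. PR ~ n for isotropic activity; PR -> 1 when a single
    pattern dominates."""
    eig = np.linalg.eigvalsh(np.cov(activity, rowvar=False))
    return eig.sum() ** 2 / (eig ** 2).sum()

rng = np.random.default_rng(1)
n_neurons, T = 50, 5000

# Unstructured activity: each neuron fluctuates independently.
unstructured = rng.standard_normal((T, n_neurons))

# Structured activity: everything is driven by 3 shared latent patterns,
# as when a trained network confines itself to a low-dimensional manifold.
latents = rng.standard_normal((T, 3))
loadings = rng.standard_normal((3, n_neurons))
structured = latents @ loadings + 0.1 * rng.standard_normal((T, n_neurons))

pr_unstructured = participation_ratio(unstructured)  # near n_neurons
pr_structured = participation_ratio(structured)      # near 3
```

In this toy setting, "structure" shows up as a large gap between the two participation ratios; in the paper's networks the analogous gap emerges from training on multiple movements.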
|
21
|
Rapid adaptation of brain-computer interfaces to new neuronal ensembles or participants via generative modelling. Nat Biomed Eng 2023; 7:546-558. [PMID: 34795394 PMCID: PMC9114171 DOI: 10.1038/s41551-021-00811-z]
Abstract
For brain-computer interfaces (BCIs), obtaining sufficient training data for algorithms that map neural signals onto actions can be difficult, expensive or even impossible. Here we report the development and use of a generative model-a model that synthesizes a virtually unlimited number of new data distributions from a learned data distribution-that learns mappings between hand kinematics and the associated neural spike trains. The generative spike-train synthesizer is trained on data from one recording session with a monkey performing a reaching task and can be rapidly adapted to new sessions or monkeys by using limited additional neural data. We show that the model can be adapted to synthesize new spike trains, accelerating the training and improving the generalization of BCI decoders. The approach is fully data-driven and hence applicable to BCI applications beyond motor control.
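The kinematics-to-spikes mapping can be illustrated with a linear-nonlinear-Poisson encoder standing in for the learned synthesizer. This sketch is far simpler than the paper's generative model, and every parameter in it is an invented placeholder:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy kinematics-to-spikes synthesizer: a linear-nonlinear-Poisson (LNP)
# encoder. The tuning weights are random placeholders, not fit to data.
n_neurons, dt = 30, 0.01                   # 10 ms bins
W = rng.standard_normal((n_neurons, 2))    # per-neuron velocity tuning
b = np.log(10.0)                           # ~10 Hz baseline rate

def synthesize_spikes(velocity: np.ndarray) -> np.ndarray:
    """velocity: (T, 2) hand velocity -> (T, n_neurons) spike counts."""
    rates = np.exp(velocity @ W.T + b)     # Hz; cosine-like tuning
    return rng.poisson(rates * dt)         # Poisson counts per bin

# A circular reach: velocity traces a circle over 200 bins.
t = np.linspace(0, 2 * np.pi, 200)
velocity = 0.5 * np.stack([np.cos(t), np.sin(t)], axis=1)
counts = synthesize_spikes(velocity)
```

A decoder trained on such synthetic spike trains can then be fine-tuned on the limited real data from a new session, which is the adaptation strategy the abstract describes.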
|
22
|
|
23
|
High-performance neural population dynamics modeling enabled by scalable computational infrastructure. Journal of Open Source Software 2023; 8:5023. [PMID: 37520691 PMCID: PMC10374446 DOI: 10.21105/joss.05023]
|
24
|
The centrality of population-level factors to network computation is demonstrated by a versatile approach for training spiking networks. Neuron 2023; 111:631-649.e10. [PMID: 36630961 PMCID: PMC10118067 DOI: 10.1016/j.neuron.2022.12.007]
Abstract
Neural activity is often described in terms of population-level factors extracted from the responses of many neurons. Factors provide a lower-dimensional description with the aim of shedding light on network computations. Yet, mechanistically, computations are performed not by continuously valued factors but by interactions among neurons that spike discretely and variably. Models provide a means of bridging these levels of description. We developed a general method for training model networks of spiking neurons by leveraging factors extracted from either data or firing-rate-based networks. In addition to providing a useful model-building framework, this formalism illustrates how reliable and continuously valued factors can arise from seemingly stochastic spiking. Our framework establishes procedures for embedding this property in network models with different levels of realism. The relationship between spikes and factors in such networks provides a foundation for interpreting (and subtly redefining) commonly used quantities such as firing rates.
|
25
|
Parallel movement planning is achieved via an optimal preparatory state in motor cortex. Cell Rep 2023; 42:112136. [PMID: 36807145 DOI: 10.1016/j.celrep.2023.112136]
Abstract
How do patterns of neural activity in the motor cortex contribute to the planning of a movement? A recent theory developed for single movements proposes that the motor cortex acts as a dynamical system whose initial state is optimized during the preparatory phase of the movement. This theory makes important yet untested predictions about preparatory dynamics in more complex behavioral settings. Here, we analyze preparatory activity in non-human primates planning not one but two movements simultaneously. As predicted by the theory, we find that parallel planning is achieved by adjusting preparatory activity within an optimal subspace to an intermediate state reflecting a trade-off between the two movements. The theory quantitatively accounts for the relationship between this intermediate state and fluctuations in the animals' behavior down at the trial level. These results uncover a simple mechanism for planning multiple movements in parallel and further point to motor planning as a controlled dynamical process.
|
26
|
Foundations of human spatial problem solving. Sci Rep 2023; 13:1485. [PMID: 36707649 PMCID: PMC9883268 DOI: 10.1038/s41598-023-28834-3]
Abstract
Despite great strides in both machine learning and neuroscience, we do not know how the human brain solves problems in the general sense. We approach this question by drawing on the framework of engineering control theory. We demonstrate a computational neural model with only localist learning laws that is able to find solutions to arbitrary problems. The model and humans perform a multi-step task with arbitrary and changing starting and desired ending states. Using a combination of computational neural modeling, human fMRI, and representational similarity analysis, we show here that the roles of a number of brain regions can be reinterpreted as interacting mechanisms of a control theoretic system. The results suggest a new set of functional perspectives on the orbitofrontal cortex, hippocampus, basal ganglia, anterior temporal lobe, lateral prefrontal cortex, and visual cortex, as well as a new path toward artificial general intelligence.
|
27
|
Computational models of idling brain activity for memory processing. Neurosci Res 2022; 189:75-82. [PMID: 36592825 DOI: 10.1016/j.neures.2022.12.024]
Abstract
Studying the neural mechanisms underlying the brain's cognitive functions is one of the central questions in modern biology. Moreover, it has significantly impacted the development of novel technologies in artificial intelligence. Spontaneous activity is a unique feature of the brain and is currently lacking in many artificially constructed intelligent machines. Spontaneous activity may represent the brain's idling states, which are internally driven by neuronal networks and possibly participate in offline processing during awake, sleep, and resting states. Evidence is accumulating that the brain's spontaneous activity is not mere noise but part of the mechanisms for processing information about previous experiences. A substantial body of literature, using various methods in various animals, has shown how previous sensory and behavioral experiences influence subsequent patterns of brain activity. It seems, however, that the patterns of neural activity and their computational roles differ significantly from area to area and from function to function. In this article, I review the various forms of the brain's spontaneous activity, especially those observed during memory processing, and some attempts to model the generation mechanisms and computational roles of such activities.
|
28
|
De Novo Brain-Computer Interfacing Deforms Manifold of Populational Neural Activity Patterns in Human Cerebral Cortex. eNeuro 2022; 9:ENEURO.0145-22.2022. [PMID: 36376067 PMCID: PMC9721308 DOI: 10.1523/eneuro.0145-22.2022]
Abstract
Human brains are capable of modulating innate activities to adapt to novel environments and tasks; for the sensorimotor neural system, this means acquisition of a rich repertoire of activity patterns that improve behavioral performance. To directly map the process of acquiring the neural repertoire during tasks onto performance improvement, we analyzed net neural populational activity during the learning of its voluntary modulation by brain-computer interface (BCI) operation in female and male humans. The recorded whole-head high-density scalp electroencephalograms (EEGs) were subjected to a dimensionality reduction algorithm to capture changes in cortical activity patterns represented by the synchronization of neuronal oscillations during adaptation. Although the preserved variance of targeted features in the reduced dimensions was 20%, we found systematic interactions between the activity patterns and BCI classifiers that detected motor attempt; the neural manifold derived in the embedded space was stretched along motor-related features of the EEG by model-based fixed classifiers, but not by adaptive classifiers that were constantly recalibrated to user activity. Moreover, the manifold was deformed to be orthogonal to the decision boundary by de novo classifiers with a fixed boundary based on biologically unnatural features. Collectively, the flexibility of human cortical signaling patterns (i.e., neural plasticity) is induced only by operation of a BCI whose classifier requires fixed activities, and the adaptation can be induced even when the requirement is not consistent with biologically natural responses. These principles of neural adaptation at a macroscopic level may underlie the ability of humans to learn wide-ranging behavioral repertoires and adapt to novel environments.
|
29
|
Transition of distinct context-dependent ensembles from secondary to primary motor cortex in skilled motor performance. Cell Rep 2022; 41:111494. [DOI: 10.1016/j.celrep.2022.111494]
|
30
|
Estimating Intrinsic Manifold Dimensionality to Classify Task-Related Information in Human and Non-Human Primate Data. IEEE Biomedical Circuits and Systems Conference 2022; 2022:650-654. [PMID: 36820790 PMCID: PMC9942267 DOI: 10.1109/biocas54905.2022.9948604]
Abstract
Feature selection, or dimensionality reduction, has become a standard step in reducing large-scale neural datasets into usable signals for brain-machine interface and neurofeedback decoders. Current techniques in fMRI data reduce the number of voxels (features) by performing statistics on individual voxels or using traditional techniques that utilize linear combinations of features (e.g., principal component analysis (PCA)). However, these methods often do not account for the cross-correlations found across voxels and do not sufficiently reduce the feature space to support efficient real-time feedback. To overcome these limitations, we propose using factor analysis on fMRI data. This technique has become increasingly popular for extracting a minimal number of latent features to explain high-dimensional data in non-human primates (NHPs). Here, we demonstrate these methods in both NHP and human data. In NHP subjects (n=2), we reduced the number of features to an average of 26.86% and 14.86% of the total feature space to build our multinomial classifier. In one NHP subject, the average accuracy of classifying eight target locations over 64 sessions was 62.43% (+/-6.19%) compared to a PCA-based classifier with 60.26% (+/-6.02%). In healthy fMRI subjects, we reduced the feature space to an average of 0.33% of the initial space. Group average (n=5) accuracy of FA-based category classification was 74.33% (+/- 4.91%) compared to a PCA-based classifier with 68.42% (+/-4.79%). FA-based classifiers can maintain the performance fidelity observed with PCA-based decoders. Importantly, FA-based methods allow researchers to address specific hypotheses about how underlying neural activity relates to behavior.
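The FA-versus-PCA comparison can be sketched on synthetic data with scikit-learn (assuming `sklearn` is available; the data here are simulated, not fMRI). Factor analysis models a separate noise variance per feature, which is exactly the heteroscedastic-voxel case the abstract argues PCA handles poorly:

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(3)

# Synthetic "voxel" data: 4 shared latent factors plus voxel-specific
# noise of very different magnitudes. FA's per-feature noise model suits
# this case; plain PCA can be misled by high-variance noisy voxels.
T, n_voxels, n_factors = 1000, 60, 4
latents = rng.standard_normal((T, n_factors))
loadings = rng.standard_normal((n_factors, n_voxels))
noise_scale = rng.uniform(0.1, 3.0, size=n_voxels)   # heteroscedastic
data = latents @ loadings + noise_scale * rng.standard_normal((T, n_voxels))

fa = FactorAnalysis(n_components=n_factors).fit(data)
pca = PCA(n_components=n_factors).fit(data)

fa_features = fa.transform(data)     # (T, 4) latent estimates
pca_features = pca.transform(data)   # (T, 4) principal components
```

Either 4-column feature matrix can then feed a downstream multinomial classifier, mirroring the decoding pipeline described above with a 15x reduction of the 60-voxel feature space.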
|
31
|
Clinical neuroscience and neurotechnology: An amazing symbiosis. iScience 2022; 25:105124. [PMID: 36193050 PMCID: PMC9526189 DOI: 10.1016/j.isci.2022.105124]
Abstract
In the last decades, clinical neuroscience found a novel ally in neurotechnologies, devices able to record and stimulate electrical activity in the nervous system. These technologies improved the ability to diagnose and treat neural disorders. Neurotechnologies are concurrently enabling a deeper understanding of healthy and pathological dynamics of the nervous system through stimulation and recordings during brain implants. On the other hand, clinical neurosciences are not only driving neuroengineering toward the most relevant clinical issues, but are also shaping the neurotechnologies thanks to clinical advancements. For instance, understanding the etiology of a disease informs the location of a therapeutic stimulation, but also the way stimulation patterns should be designed to be more effective/naturalistic. Here, we describe cases of fruitful integration such as Deep Brain Stimulation and cortical interfaces to highlight how this symbiosis between clinical neuroscience and neurotechnology is closer to a novel integrated framework than to a simple interdisciplinary interaction.
|
32
|
Soft integration of a neural cells network and bionic interfaces. Front Bioeng Biotechnol 2022; 10:950235. [PMID: 36246365 PMCID: PMC9558115 DOI: 10.3389/fbioe.2022.950235]
Abstract
Both glial cells and neurons can be considered basic computational units in neural networks, and the brain–computer interface (BCI) can play a role in awakening latent portions of these networks, which are sensitive to positive feedback through learning. However, high-quality information gained from a BCI requires invasive approaches, such as microelectrodes implanted under the endocranium. As hard foreign objects in the aqueous microenvironment of the soft cerebral cortex, such implants subsequently give rise to chronic inflammation and scar tissue. To avoid the obvious defects caused by hard electrodes, this review focuses on the bioinspired neural interface, guiding and optimizing the implant system for better biocompatibility and accuracy. At the same time, the bionic techniques of signal reception and transmission interfaces are summarized, and structural units with functions similar to nerve cells are introduced. Multiple electrical and electromagnetic transmission schemes, as well as regulation of the secretion of neuromodulators or neurotransmitters via nanofluidic channels, have been flexibly applied. The accurate regulation of neural networks, from the nanoscale to the cellular reconstruction of protein pathways, will make the BCI an extension of the brain.
|
33
|
Selective modulation of cortical population dynamics during neuroprosthetic skill learning. Sci Rep 2022; 12:15948. [PMID: 36153356 PMCID: PMC9509316 DOI: 10.1038/s41598-022-20218-3]
Abstract
Brain-machine interfaces (BMIs) provide a framework for studying how cortical population dynamics evolve over learning in a task in which the mapping between neural activity and behavior is precisely defined. Learning to control a BMI is associated with the emergence of coordinated neural dynamics in populations of neurons whose activity serves as direct input to the BMI decoder (direct subpopulation). While previous work shows differential modification of firing rate modulation in this population relative to a population whose activity was not directly input to the BMI decoder (indirect subpopulation), little is known about how learning-related changes in cortical population dynamics within these groups compare. To investigate this, we monitored both direct and indirect subpopulations as two macaque monkeys learned to control a BMI. We found that while the combined population increased coordinated neural dynamics, this increase in coordination was primarily driven by changes in the direct subpopulation. These findings suggest that motor cortex refines cortical dynamics by increasing neural variance throughout the entire population during learning, with a more pronounced coordination of firing activity in subpopulations that are causally linked to behavior.
|
34
|
Volitional Generation of Reproducible, Efficient Temporal Patterns. Brain Sci 2022; 12:1269. [PMID: 36291203 PMCID: PMC9599309 DOI: 10.3390/brainsci12101269]
Abstract
One of the extraordinary characteristics of the biological brain is the low energy expense at which it implements a variety of biological functions and intelligence, compared to modern artificial intelligence (AI). Spike-based, energy-efficient temporal codes have long been suggested as one contributor to the brain's low energy expense. Although this code has been reported largely in the sensory cortex, whether it can be implemented in other brain areas to serve broader functions, and how it evolves throughout learning, have remained unaddressed. In this study, we designed a novel brain-machine interface (BMI) paradigm. Two macaques could volitionally generate reproducible, energy-efficient temporal patterns in the primary motor cortex (M1) by learning the BMI paradigm. Moreover, most neurons that were not directly assigned to control the BMI did not boost their excitability, and they demonstrated an overall energy-efficient manner of performing the task. Over the course of learning, we found that the firing rates and temporal precision of selected neurons co-evolved to generate the energy-efficient temporal patterns, suggesting that a cohesive rather than dissociable processing underlies the refinement of energy-efficient temporal patterns.
|
35
|
Beyond the brain-computer interface: Decoding brain activity as a tool to understand neuronal mechanisms subtending cognition and behavior. Front Neurosci 2022; 16:811736. [PMID: 36161174 PMCID: PMC9492914 DOI: 10.3389/fnins.2022.811736]
Abstract
One of the major challenges in systems neuroscience consists in developing techniques for estimating the cognitive information content in brain activity. This has enormous potential in different domains, spanning from clinical applications and cognitive enhancement to a better understanding of the neural bases of cognition. In this context, the inclusion of machine learning techniques to decode different aspects of human cognition and behavior, and their use to develop brain–computer interfaces for applications in neuroprosthetics, has supported a genuine revolution in the field. However, while these approaches have proven quite successful for the study of motor and sensory functions, success is still far from being reached when it comes to covert cognitive functions such as attention, motivation and decision making. While improvement in this field of BCIs is growing fast, a new research focus has emerged from the development of strategies for decoding neural activity. In this review, we explore how advances in decoding brain activity are becoming a major neuroscience tool, moving our understanding of brain functions forward and providing a robust theoretical framework to test predictions on the relationship between brain activity and cognition and behavior.
|
36
|
Small, correlated changes in synaptic connectivity may facilitate rapid motor learning. Nat Commun 2022; 13:5163. [PMID: 36056006 PMCID: PMC9440011 DOI: 10.1038/s41467-022-32646-w]
Abstract
Animals rapidly adapt their movements to external perturbations, a process paralleled by changes in neural activity in the motor cortex. Experimental studies suggest that these changes originate from altered inputs (Hinput) rather than from changes in local connectivity (Hlocal), as neural covariance is largely preserved during adaptation. Since measuring synaptic changes in vivo remains very challenging, we used a modular recurrent neural network to qualitatively test this interpretation. As expected, Hinput resulted in small activity changes and largely preserved covariance. Surprisingly given the presumed dependence of stable covariance on preserved circuit connectivity, Hlocal led to only slightly larger changes in activity and covariance, still within the range of experimental recordings. This similarity is due to Hlocal only requiring small, correlated connectivity changes for successful adaptation. Simulations of tasks that impose increasingly larger behavioural changes revealed a growing difference between Hinput and Hlocal, which could be exploited when designing future experiments.
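The covariance-preservation argument can be illustrated with a toy linear network (my construction, not the paper's modular RNN): solve for the steady-state activity covariance before and after connectivity changes of different sizes and compare their structure. All magnitudes here are illustrative.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

rng = np.random.default_rng(4)

# Toy linear recurrent network x_{t+1} = W x_t + noise. Its steady-state
# activity covariance P solves the Lyapunov equation P = W P W^T + I.
# We compare P under an "Hlocal"-style weight change of two sizes.
n = 40
W = 0.7 * rng.standard_normal((n, n)) / np.sqrt(n)   # stable random net

def activity_cov(W: np.ndarray) -> np.ndarray:
    # Closed-form covariance of the noise-driven linear dynamics.
    return solve_discrete_lyapunov(W, np.eye(len(W)))

def cov_similarity(c1: np.ndarray, c2: np.ndarray) -> float:
    """Normalised Frobenius inner product: 1 means identical structure."""
    return np.sum(c1 * c2) / (np.linalg.norm(c1) * np.linalg.norm(c2))

cov0 = activity_cov(W)
tiny_change = W + 0.01 * rng.standard_normal((n, n)) / np.sqrt(n)
big_change = W + 0.30 * rng.standard_normal((n, n)) / np.sqrt(n)

sim_tiny = cov_similarity(cov0, activity_cov(tiny_change))  # stays near 1
sim_big = cov_similarity(cov0, activity_cov(big_change))    # degrades more
```

The point mirrored from the abstract: small, appropriately scaled connectivity changes leave the activity covariance nearly intact, so preserved covariance alone cannot rule out local plasticity.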
|
37
|
Representational drift: Emerging theories for continual learning and experimental future directions. Curr Opin Neurobiol 2022; 76:102609. [PMID: 35939861 DOI: 10.1016/j.conb.2022.102609]
Abstract
Recent work has revealed that the neural activity patterns correlated with sensation, cognition, and action often are not stable and instead undergo large scale changes over days and weeks-a phenomenon called representational drift. Here, we highlight recent observations of drift, how drift is unlikely to be explained by experimental confounds, and how the brain can likely compensate for drift to allow stable computation. We propose that drift might have important roles in neural computation to allow continual learning, both for separating and relating memories that occur at distinct times. Finally, we present an outlook on future experimental directions that are needed to further characterize drift and to test emerging theories for drift's role in computation.
|
38
|
Understanding implicit and explicit sensorimotor learning through neural dynamics. Front Comput Neurosci 2022; 16:960569. [PMID: 35990367 PMCID: PMC9381967 DOI: 10.3389/fncom.2022.960569]
|
39
|
Cognitive experience alters cortical involvement in goal-directed navigation. eLife 2022; 11:e76051. [PMID: 35735909 PMCID: PMC9259027 DOI: 10.7554/elife.76051]
Abstract
Neural activity in the mammalian cortex has been studied extensively during decision tasks, and recent work aims to identify under what conditions cortex is actually necessary for these tasks. We discovered that mice with distinct cognitive experiences, beyond sensory and motor learning, use different cortical areas and neural activity patterns to solve the same navigation decision task, revealing past learning as a critical determinant of whether cortex is necessary for goal-directed navigation. We used optogenetics and calcium imaging to study the necessity and neural activity of multiple cortical areas in mice with different training histories. Posterior parietal cortex and retrosplenial cortex were mostly dispensable for accurate performance of a simple navigation task. In contrast, these areas were essential for the same simple task when mice were previously trained on complex tasks with delay periods or association switches. Multiarea calcium imaging showed that, in mice with complex-task experience, single-neuron activity had higher selectivity and neuron–neuron correlations were weaker, leading to codes with higher task information. Therefore, past experience is a key factor in determining whether cortical areas have a causal role in goal-directed navigation.
|
40
|
Preserved cortical somatotopic and motor representations in tetraplegic humans. Curr Opin Neurobiol 2022; 74:102547. [DOI: 10.1016/j.conb.2022.102547]
|
41
|
The science and engineering behind sensitized brain-controlled bionic hands. Physiol Rev 2022; 102:551-604. [PMID: 34541898 PMCID: PMC8742729 DOI: 10.1152/physrev.00034.2020]
Abstract
Advances in our understanding of brain function, along with the development of neural interfaces that allow for the monitoring and activation of neurons, have paved the way for brain-machine interfaces (BMIs), which harness neural signals to reanimate the limbs via electrical activation of the muscles or to control extracorporeal devices, thereby bypassing the muscles and senses altogether. BMIs consist of reading out motor intent from the neuronal responses monitored in motor regions of the brain and executing intended movements with bionic limbs, reanimated limbs, or exoskeletons. BMIs also allow for the restoration of the sense of touch by electrically activating neurons in somatosensory regions of the brain, thereby evoking vivid tactile sensations and conveying feedback about object interactions. In this review, we discuss the neural mechanisms of motor control and somatosensation in able-bodied individuals and describe approaches to use neuronal responses as control signals for movement restoration and to activate residual sensory pathways to restore touch. Although the focus of the review is on intracortical approaches, we also describe alternative signal sources for control and noninvasive strategies for sensory restoration.
Collapse
|
42
|
From Parametric Representation to Dynamical System: Shifting Views of the Motor Cortex in Motor Control. Neurosci Bull 2022; 38:796-808. [PMID: 35298779 PMCID: PMC9276910 DOI: 10.1007/s12264-022-00832-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2021] [Accepted: 11/29/2021] [Indexed: 11/01/2022] Open
Abstract
In contrast to traditional representational perspectives, in which the motor cortex is involved in motor control via neuronal preferences for kinetics and kinematics, a dynamical-system perspective that has emerged in the last decade views the motor cortex as a dynamical machine that generates motor commands by autonomous temporal evolution. In this review, we first look back at the history of the representational and dynamical perspectives and discuss their explanatory power and controversies from both empirical and computational points of view. We then aim to reconcile the two perspectives and evaluate their theoretical impact, future directions, and potential applications in brain-machine interfaces.
Collapse
|
43
|
Individual variability of neural computations in the primate retina. Neuron 2022; 110:698-708.e5. [PMID: 34932942 PMCID: PMC8857061 DOI: 10.1016/j.neuron.2021.11.026] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2021] [Revised: 08/10/2021] [Accepted: 11/20/2021] [Indexed: 12/28/2022]
Abstract
Variation in the neural code contributes to making each individual unique. We probed neural code variation using ∼100 population recordings from major ganglion cell types in the macaque retina, combined with an interpretable computational representation of individual variability. This representation captured variation and covariation in properties such as nonlinearity, temporal dynamics, and spatial receptive field size and preserved invariances such as asymmetries between On and Off cells. The covariation of response properties in different cell types was associated with the proximity of lamination of their synaptic input. Surprisingly, male retinas exhibited higher firing rates and faster temporal integration than female retinas. Exploiting data from previously recorded retinas enabled efficient characterization of a new macaque retina, and of a human retina. Simulations indicated that combining a large dataset of retinal recordings with behavioral feedback could reveal the neural code in a living human and thus improve vision restoration with retinal implants.
Collapse
|
44
|
Abstract
The brain's remarkable ability to learn and execute various motor behaviours harnesses the capacity of neural populations to generate a variety of activity patterns. Here we explore systematic changes in preparatory activity in motor cortex that accompany motor learning. We trained rhesus monkeys to learn an arm-reaching task [1] in a curl force field that elicited new muscle forces for some, but not all, movement directions [2,3]. We found that in a neural subspace predictive of hand forces, changes in preparatory activity tracked the learned behavioural modifications and reassociated [4] existing activity patterns with updated movements. Along a neural population dimension orthogonal to the force-predictive subspace, we discovered that preparatory activity shifted uniformly for all movement directions, including those unaltered by learning. During a washout period when the curl field was removed, preparatory activity gradually reverted in the force-predictive subspace, but the uniform shift persisted. These persistent preparatory activity patterns may retain a motor memory of the learned field [5,6] and support accelerated relearning of the same curl field. When a set of distinct curl fields was learned in sequence, we observed a corresponding set of field-specific uniform shifts which separated the associated motor memories in the neural state space [7-9]. The precise geometry of these uniform shifts in preparatory activity could serve to index motor memories, facilitating the acquisition, retention and retrieval of a broad motor repertoire.
Collapse
|
45
|
Neuronal population activity dynamics reveal a low-dimensional signature of operant learning in Aplysia. Commun Biol 2022; 5:90. [PMID: 35075264 PMCID: PMC8786933 DOI: 10.1038/s42003-022-03044-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2021] [Accepted: 01/07/2022] [Indexed: 11/24/2022] Open
Abstract
Learning engages a high-dimensional neuronal population space spanning multiple brain regions. However, it remains unknown whether it is possible to identify a low-dimensional signature associated with operant conditioning, a ubiquitous form of learning in which animals learn from the consequences of behavior. Using single-neuron resolution voltage imaging, here we identify two low-dimensional motor modules in the neuronal population underlying Aplysia feeding. Our findings point to a temporal shift in module recruitment as the primary signature of operant learning. Our findings can help guide characterization of learning signatures in systems in which only a smaller fraction of the relevant neuronal population can be monitored.
Collapse
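The low-dimensional module extraction described above can be illustrated with a generic sketch: power-iteration PCA applied to simulated population activity. This is an illustrative toy under assumed names (`loading`, `latent`), not the authors' analysis pipeline.

```python
import random
import math

random.seed(0)

# Toy population: 20 "neurons" driven by one shared latent module plus
# independent noise. The dominant principal component should recover it.
n_neurons, n_timepoints = 20, 200
loading = [math.sin(0.3 * i) for i in range(n_neurons)]    # fixed mixing weights
latent = [math.sin(0.1 * t) for t in range(n_timepoints)]  # shared module

data = [[loading[i] * latent[t] + 0.05 * random.gauss(0, 1)
         for t in range(n_timepoints)] for i in range(n_neurons)]

# Mean-center each neuron's activity
for row in data:
    m = sum(row) / len(row)
    for t in range(len(row)):
        row[t] -= m

# Neuron-by-neuron covariance matrix
cov = [[sum(data[i][t] * data[j][t] for t in range(n_timepoints)) / n_timepoints
        for j in range(n_neurons)] for i in range(n_neurons)]

# Power iteration: the leading eigenvector is the dominant population "module"
v = [1.0] * n_neurons
for _ in range(100):
    w = [sum(cov[i][j] * v[j] for j in range(n_neurons)) for i in range(n_neurons)]
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]

# The recovered module should align closely with the true loading pattern
norm_l = math.sqrt(sum(x * x for x in loading))
alignment = abs(sum(v[i] * loading[i] for i in range(n_neurons))) / norm_l
print(round(alignment, 2))
```

Because the shared latent dominates the noise, the leading eigenvector aligns almost perfectly with the true mixing weights; in real recordings the same machinery is typically run via SVD on the trial-averaged activity matrix.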
|
46
|
Remote cortical perturbation dynamically changes the network solutions to given tactile inputs in neocortical neurons. iScience 2022; 25:103557. [PMID: 34977509 PMCID: PMC8689199 DOI: 10.1016/j.isci.2021.103557] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2021] [Revised: 10/18/2021] [Accepted: 12/01/2021] [Indexed: 11/17/2022] Open
Abstract
The neocortex has a globally encompassing network structure, which for each given input constrains the possible combinations of neuronal activations across it. Hence, its network contains solutions. But in addition, the cortex has an ever-changing multidimensional internal state, causing each given input to result in a wide range of specific neuronal activations. Here we use intracellular recordings in somatosensory cortex (SI) neurons of anesthetized rats to show that remote, subthreshold intracortical electrical perturbation can impact such constraints on the responses to a set of spatiotemporal tactile input patterns. Whereas each given input pattern normally induces a wide set of preferred response states, when combined with cortical perturbation response states that did not otherwise occur were induced, and consequently other response states became less likely. The findings indicate that the physiological network structure can dynamically change as the state of any given cortical region changes, thereby enabling a rich, multifactorial, perceptual capability. Highlights: Tactile sensory input patterns evoke multi-structure cortical neuron responses. Multi-structure responses are impacted by remote cortical regions. Highly dynamic neuron responses reflect global cortical information integration. Perception hence depends on globally distributed activity at the time of input.
Collapse
|
47
|
Cortical Control of Virtual Self-Motion Using Task-Specific Subspaces. J Neurosci 2022; 42:220-239. [PMID: 34716229 PMCID: PMC8802935 DOI: 10.1523/jneurosci.2687-20.2021] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2020] [Revised: 09/18/2021] [Accepted: 10/17/2021] [Indexed: 11/21/2022] Open
Abstract
Brain-machine interfaces (BMIs) for reaching have enjoyed continued performance improvements, yet there remains a significant need for BMIs that control other movement classes. Recent scientific findings suggest that the intrinsic covariance structure of neural activity depends strongly on movement class, potentially necessitating different decode algorithms across classes. To address this possibility, we developed a self-motion BMI based on cortical activity as monkeys cycled a hand-held pedal to progress along a virtual track. Unlike during reaching, we found no high-variance dimensions that directly correlated with to-be-decoded variables. This was because no neurons had consistent correlations between their responses and kinematic variables. Yet we could decode a single variable, self-motion, by nonlinearly leveraging structure that spanned multiple high-variance neural dimensions. Resulting online BMI-control success rates approached those during manual control. These findings make two broad points regarding how to build decode algorithms that harmonize with the empirical structure of neural activity in motor cortex. First, even when decoding from the same cortical region (e.g., arm-related motor cortex), different movement classes may need to employ very different strategies. Although correlations between neural activity and hand velocity are prominent during reaching tasks, they are not a fundamental property of motor cortex and cannot be counted on to be present in general. Second, although one generally desires a low-dimensional readout, it can be beneficial to leverage a multidimensional high-variance subspace. Fully embracing this approach requires highly nonlinear approaches tailored to the task at hand, but can produce near-native levels of performance.
SIGNIFICANCE STATEMENT: Many brain-machine interface decoders have been constructed for controlling movements normally performed with the arm. Yet it is unclear how these will function beyond the reach-like scenarios where they were developed. Existing decoders implicitly assume that neural covariance structure, and correlations with to-be-decoded kinematic variables, will be largely preserved across tasks. We find that the correlation between neural activity and hand kinematics, a feature typically exploited when decoding reach-like movements, is essentially absent during another task performed with the arm: cycling through a virtual environment. Nevertheless, the use of a different strategy, one focused on leveraging the highest-variance neural signals, supported high-performance real-time brain-machine interface control.
Collapse
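The core idea above, that a variable absent from any single high-variance dimension can still be read out nonlinearly from several of them, can be illustrated with a toy sketch. The rotating two-dimensional "neural state" below is a hypothetical stand-in, not the authors' decoder.

```python
import math

# Toy "neural state" during steady cycling: activity traces a circle in a
# 2-D high-variance plane. Neither dimension alone tracks self-motion
# (each averages out over a cycle), but the *angle* of the state, a
# nonlinear combination of both dimensions, tracks cycling progress.
n_steps = 100
dim1 = [math.cos(2 * math.pi * t / n_steps) for t in range(n_steps)]
dim2 = [math.sin(2 * math.pi * t / n_steps) for t in range(n_steps)]

# A linear readout of either dimension averages to ~0 over the cycle
mean1 = sum(dim1) / n_steps

# Nonlinear readout: unwrap the phase angle to recover cumulative self-motion
phase = [math.atan2(dim2[t], dim1[t]) for t in range(n_steps)]
motion = [phase[0]]
for t in range(1, n_steps):
    d = phase[t] - phase[t - 1]
    if d < -math.pi:          # handle angle wraparound at +/- pi
        d += 2 * math.pi
    motion.append(motion[-1] + d)

# Decoded self-motion grows monotonically; one full cycle = 2*pi
print(round(motion[-1], 2))
```

The point of the sketch is the contrast: the linear statistic (`mean1`) carries no information about progress, while the unwrapped angle, which requires both high-variance dimensions at once, recovers it exactly.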
|
48
|
Abstract
Traditional brain-machine interfaces decode cortical motor commands to control external devices. These commands are the product of higher-level cognitive processes, occurring across a network of brain areas, that integrate sensory information, plan upcoming motor actions, and monitor ongoing movements. We review cognitive signals recently discovered in the human posterior parietal cortex during neuroprosthetic clinical trials. These signals are consistent with small regions of cortex having a diverse role in cognitive aspects of movement control and body monitoring, including sensorimotor integration, planning, trajectory representation, somatosensation, action semantics, learning, and decision making. These variables are encoded within the same population of cells using structured representations that bind related sensory and motor variables, an architecture termed partially mixed selectivity. Diverse cognitive signals provide complementary information to traditional motor commands to enable more natural and intuitive control of external devices.
Collapse
|
49
|
Recent Advances and Current Trends in Brain-Computer Interface (BCI) Research and Their Applications. Int J Dev Neurosci 2021; 82:107-123. [PMID: 34939217 DOI: 10.1002/jdn.10166] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2021] [Revised: 11/16/2021] [Accepted: 12/18/2021] [Indexed: 11/06/2022] Open
Abstract
Brain-Computer Interfaces (BCIs) provide direct communication between the brain and an external device. BCI systems have become a trendy field of research in recent years and can be used in a variety of applications to help both disabled and healthy people. Given the significant progress in BCI research, we may assume that these systems are not far from real-world application. This review takes into account current trends in BCI research. In this survey, the one hundred most-cited articles from the WOS database over the last four years were selected. The survey is divided into several sectors: Medicine, Communication and Control, Entertainment, and other BCI applications. The application area, recording method, signal acquisition types, and countries of origin have been identified in each article. This survey provides an overview of BCI articles published from 2016 to 2020 and their current trends and advances in different application areas.
Collapse
|
50
|
Drop, Swap, and Generate: A Self-Supervised Approach for Generating Neural Activity. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 2021; 34:10587-10599. [PMID: 36467015 PMCID: PMC9713686] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Subscribe] [Scholar Register] [Indexed: 06/17/2023]
Abstract
Meaningful and simplified representations of neural activity can yield insights into how and what information is being processed within a neural circuit. However, without labels, finding representations that reveal the link between the brain and behavior can be challenging. Here, we introduce a novel unsupervised approach for learning disentangled representations of neural activity called Swap-VAE. Our approach combines a generative modeling framework with an instance-specific alignment loss that tries to maximize the representational similarity between transformed views of the input (brain state). These transformed (or augmented) views are created by dropping out neurons and jittering samples in time, which intuitively should lead the network to a representation that maintains both temporal consistency and invariance to the specific neurons used to represent the neural state. Through evaluations on both synthetic data and neural recordings from hundreds of neurons in different primate brains, we show that it is possible to build representations that disentangle neural datasets along relevant latent dimensions linked to behavior.
Collapse
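The augmented-view construction described above (neuron dropout plus temporal jitter) and the alignment it is meant to preserve can be sketched generically. This is a toy illustration of the idea with assumed names (`augment`, `drop_prob`), not the Swap-VAE implementation.

```python
import random
import math

random.seed(1)

def augment(population_vector, drop_prob=0.2, jitter=0.05):
    """Create a transformed 'view': randomly drop neurons and jitter values."""
    view = []
    for rate in population_vector:
        if random.random() < drop_prob:
            view.append(0.0)                               # neuron dropout
        else:
            view.append(rate + random.gauss(0.0, jitter))  # small perturbation
    return view

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "brain state": firing rates for 50 neurons
state = [random.random() for _ in range(50)]

# Two independently augmented views of the same state remain similar; an
# alignment loss pushes the network's representations of them together,
# yielding invariance to which specific neurons represent the state.
view_a, view_b = augment(state), augment(state)
alignment = cosine_similarity(view_a, view_b)
print(alignment > 0.5)
```

In the actual method this similarity objective is applied to learned latent representations inside a generative model rather than to raw rate vectors, but the augmentation logic is the same.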
|