1. Srinath R, Ni AM, Marucci C, Cohen MR, Brainard DH. Orthogonal neural representations support perceptual judgements of natural stimuli. bioRxiv 2024:2024.02.14.580134. PMID: 38464018; PMCID: PMC10925131; DOI: 10.1101/2024.02.14.580134.
Abstract
In natural behavior, observers must separate relevant information from a barrage of irrelevant information. Many studies have investigated the neural underpinnings of this ability using artificial stimuli presented on simple backgrounds. Natural viewing, however, carries a set of challenges that are inaccessible using artificial stimuli, including neural responses to background objects that are task-irrelevant. An emerging body of evidence suggests that the visual abilities of humans and animals can be modeled through the linear decoding of task-relevant information from visual cortex. This idea suggests the hypothesis that irrelevant features of a natural scene should impair performance on a visual task only if their neural representations intrude on the linear readout of the task-relevant feature, as would occur if the representations of task-relevant and irrelevant features are not orthogonal in the underlying neural population. We tested this hypothesis using human psychophysics and monkey neurophysiology, in response to parametrically variable naturalistic stimuli. We demonstrate that 1) the neural representation of one feature (the position of a central object) in visual area V4 is orthogonal to those of several background features, 2) the ability of human observers to precisely judge object position was largely unaffected by task-irrelevant variation in those background features, and 3) many features of the object and the background are orthogonally represented by V4 neural responses. Our observations are consistent with the hypothesis that orthogonal neural representations can support stable perception of objects and features despite the tremendous richness of natural visual scenes.
Significance Statement
We studied how the structure of the mid-level neural representation of multiple visual features supports robust perceptual decisions. We combined array recording with parametrically controlled naturalistic images to demonstrate that the representation of a central object's position in monkey visual area V4 is orthogonal to that of several background features. In addition, we used human psychophysics with the same stimulus set to show that observers' ability to judge a central object's position is largely unaffected by variation in the same background features. This result supports the hypothesis that orthogonal neural representations can enable stable and robust perception in naturalistic visual environments and advances our understanding of how visual processing operates in the real world.
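The orthogonality hypothesis at the heart of this abstract can be illustrated with a toy linear readout: if the population axes encoding a task-relevant feature and a background feature are orthogonal, variation along one axis does not leak into the readout of the other. A minimal numpy sketch (the simulated population, axes, and trial structure are hypothetical, not the authors' data or analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 100

# Hypothetical encoding axes for two features in an n-neuron population:
# object position (task-relevant) and one background feature (irrelevant).
w_position = rng.normal(size=n_neurons)
w_background = rng.normal(size=n_neurons)
# Make the background axis exactly orthogonal to the position axis.
w_background -= (w_background @ w_position) / (w_position @ w_position) * w_position

# Simulate trials: responses are a sum of the two feature signals.
position = rng.uniform(-1, 1, size=500)
background = rng.uniform(-1, 1, size=500)
responses = np.outer(position, w_position) + np.outer(background, w_background)

# A linear readout along the position axis recovers position exactly,
# regardless of background variation, because the axes are orthogonal.
decoded = responses @ w_position / (w_position @ w_position)
print(np.allclose(decoded, position))  # True: background does not intrude
```

If the two axes were instead correlated, the same readout would mix background variation into the decoded position, which is the predicted signature of perceptual interference.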
2. Jarne C, Caruso M. Effect in the spectra of eigenvalues and dynamics of RNNs trained with excitatory-inhibitory constraint. Cogn Neurodyn 2024; 18:1323-1335. PMID: 38826641; PMCID: PMC11143133; DOI: 10.1007/s11571-023-09956-w.
Abstract
To comprehend and enhance models that describe various brain regions, it is important to study the dynamics of trained recurrent neural networks. Including Dale's law in such models usually presents several challenges. However, this is an important aspect that allows computational models to better capture the characteristics of the brain. Here we present a framework to train networks under such a constraint, and we use it to train them on simple decision-making tasks. We characterized the eigenvalue distributions of the recurrent weight matrices of such networks. Interestingly, we discovered that the non-dominant eigenvalues of the recurrent weight matrix are distributed in a circle with a radius less than 1 for networks whose initial condition before training was random normal, and in a ring for those whose initial condition was random orthogonal. In both cases, the radius depends neither on the fraction of excitatory and inhibitory units nor on the size of the network. The diminution of the radius, compared to networks trained without the constraint, has implications for the activity and dynamics that we discuss here. Supplementary information is available online at 10.1007/s11571-023-09956-w.
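The kind of spectrum described here can be previewed with an untrained random matrix obeying Dale's law: each unit's outgoing weights share a single sign. In this sketch (sizes, weight scale, and the 80/20 E/I split are arbitrary illustrative choices, not the paper's trained networks), the unbalanced excitatory-inhibitory mean produces one dominant real eigenvalue, while the non-dominant eigenvalues fill a circle, mirroring the dominant/non-dominant distinction in the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
n_exc = int(0.8 * n)  # illustrative fraction of excitatory units

# Dale's law: column j holds unit j's outgoing weights, all of one sign.
magnitudes = np.abs(rng.normal(scale=1.0 / np.sqrt(n), size=(n, n)))
signs = np.concatenate([np.ones(n_exc), -np.ones(n - n_exc)])
W = magnitudes * signs[np.newaxis, :]

eigvals = np.linalg.eigvals(W)
mags = np.sort(np.abs(eigvals))
dominant = mags[-1]     # large real outlier from the unbalanced E-I mean
bulk_radius = mags[-2]  # the non-dominant eigenvalues fill a circle
print(bulk_radius < 1.0 < dominant)  # True for this construction
```

Training under the constraint, as in the paper, reshapes this bulk; the sketch only shows how a Dale-constrained sign structure already organizes the spectrum into a dominant mode plus a circular bulk.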
Affiliation(s)
- Cecilia Jarne
- Departmento de Ciencia y Tecnología, Universidad Nacional de Quilmes, Bernal, Argentina
- Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- CONICET, Buenos Aires, Argentina
- Mariano Caruso
- Present Address: Fundación I+D del Software Libre–FIDESOL, Granada, Spain
- Universidad Internacional de La Rioja–UNIR, La Rioja, Spain
3. Beiran M, Litwin-Kumar A. Prediction of neural activity in connectome-constrained recurrent networks. bioRxiv 2024:2024.02.22.581667. PMID: 38854115; PMCID: PMC11160579; DOI: 10.1101/2024.02.22.581667.
Abstract
We develop a theory of connectome-constrained neural networks in which a "student" network is trained to reproduce the activity of a ground-truth "teacher," representing a neural system for which a connectome is available. Unlike standard paradigms with unconstrained connectivity, here the two networks have the same connectivity but different biophysical parameters, reflecting uncertainty in neuronal and synaptic properties. We find that a connectome is often insufficient to constrain the dynamics of networks that perform a specific task, illustrating the difficulty of inferring function from connectivity alone. However, recordings from a small subset of neurons can remove this degeneracy, producing dynamics in the student that agree with the teacher. Our theory can also prioritize which neurons to record from to most efficiently predict unmeasured network activity. Our analysis shows that the solution spaces of connectome-constrained and unconstrained models are qualitatively different and provides a framework to determine when such models yield consistent dynamics.
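The degeneracy described here has a simple linear analogue: two networks with an identical weight matrix but different single-neuron gains produce different activity from the same initial state. A toy sketch (the linear rate dynamics and every parameter choice below are hypothetical and far simpler than the paper's models):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50

# Shared "connectome": both networks use exactly the same synaptic weights.
W = rng.normal(scale=0.5 / np.sqrt(n), size=(n, n))

def simulate(gains, x0, steps=200, dt=0.1):
    """Euler-integrate linear rate dynamics dx/dt = -x + diag(gains) W x."""
    x, traj = x0.copy(), [x0.copy()]
    for _ in range(steps):
        x = x + dt * (-x + gains * (W @ x))
        traj.append(x.copy())
    return np.array(traj)

x0 = rng.normal(size=n)
teacher = simulate(np.ones(n), x0)                      # ground-truth gains
student = simulate(rng.uniform(0.5, 1.5, size=n), x0)   # same W, unknown gains

# Identical connectivity, identical initial state: the trajectories still
# disagree, so the connectome alone does not determine the dynamics.
print(np.linalg.norm(teacher - student) > 1e-3)  # True
```

Fitting the student's gains to recordings from a few teacher neurons is the kind of extra constraint the paper shows can remove this degeneracy.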
Affiliation(s)
- Manuel Beiran
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Ashok Litwin-Kumar
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
4. Rodriguez AC, Perich MG, Miller L, Humphries MD. Motor cortex latent dynamics encode spatial and temporal arm movement parameters independently. bioRxiv 2024:2023.05.26.542452. PMID: 37292834; PMCID: PMC10246015; DOI: 10.1101/2023.05.26.542452.
Abstract
The fluid movement of an arm requires multiple spatiotemporal parameters to be set independently. Recent studies have argued that arm movements are generated by the collective dynamics of neurons in motor cortex. An untested prediction of this hypothesis is that independent parameters of movement must map to independent components of the neural dynamics. Using a task where monkeys made a sequence of reaching movements to randomly placed targets, we show that the spatial and temporal parameters of arm movements are independently encoded in the low-dimensional trajectories of population activity in motor cortex: Each movement's direction corresponds to a fixed neural trajectory through neural state space and its speed to how quickly that trajectory is traversed. Recurrent neural network models show this coding allows independent control over the spatial and temporal parameters of movement by separate network parameters. Our results support a key prediction of the dynamical systems view of motor cortex, but also argue that not all parameters of movement are defined by different trajectories of population activity.
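The central claim (direction fixes which neural trajectory; speed fixes how quickly it is traversed) can be illustrated with a toy latent path traversed at two rates: both runs visit the same states, only the timing differs. The 2-D path below is hypothetical, not the recorded latent dynamics:

```python
import numpy as np

# A fixed latent trajectory for one movement direction, indexed by phase.
def trajectory(phase):
    return np.stack([np.cos(phase), np.sin(2 * phase)], axis=1)

t = np.linspace(0.0, 1.0, 400)
slow = trajectory(np.pi * t)                         # full interval to traverse
fast = trajectory(np.pi * np.minimum(2.0 * t, 1.0))  # same path, twice the rate

# Both traversals follow the same path and end in the same state;
# only the time course differs, so speed is carried by traversal rate alone.
print(np.allclose(slow[-1], fast[-1]), np.allclose(slow, fast))  # True False
```

Changing direction would swap in a different `trajectory` function entirely, while leaving the traversal-rate code untouched, which is the independence the paper tests.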
Affiliation(s)
- Matthew G. Perich
- Département de neurosciences, Faculté de médecine, Université de Montréal, Montréal, Canada
- Québec Artificial Intelligence Institute (Mila), Québec, Canada
- Lee Miller
- Northwestern University, Department of Biomedical Engineering, Chicago, USA
- Mark D. Humphries
- School of Psychology, University of Nottingham, Nottingham, United Kingdom
5. Breveglieri R, Brandolani R, Diomedi S, Lappe M, Galletti C, Fattori P. Modulation of reaching by spatial attention. Front Integr Neurosci 2024; 18:1393690. PMID: 38817775; PMCID: PMC11138159; DOI: 10.3389/fnint.2024.1393690.
Abstract
Attention is needed to perform goal-directed, vision-guided movements. We investigated whether the direction of covert attention modulates movement outcomes and dynamics. Right-handed and left-handed volunteers attended to a spatial location while planning a reach toward the same hemifield or the opposite one, or planned a reach without constraining attention. We measured behavioral variables as outcomes of ipsilateral and contralateral reaching, and the tangling of behavioral trajectories obtained through principal component analysis as a measure of the dynamics of motor control. We found that the direction of covert attention had significant effects on the dynamics of motor control, specifically during contralateral reaching. The data suggest that motor control was more feedback-driven when attention was directed leftward than when attention was directed rightward or was not constrained, irrespective of handedness. These results may help to better understand the neural bases of asymmetrical neurological diseases such as hemispatial neglect.
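Trajectory tangling is commonly quantified (following the definition popularized by Russo and colleagues) as q(t) = max over t' of ‖ẋ(t) − ẋ(t')‖² / (‖x(t) − x(t')‖² + ε): high tangling means similar states have very different derivatives, a signature of feedback-driven rather than autonomous dynamics. A sketch of that metric on toy trajectories (the paper's exact implementation and data may differ):

```python
import numpy as np

def tangling(X, dt=0.01, eps=None):
    """q(t) = max_t' ||dx_t - dx_t'||^2 / (||x_t - x_t'||^2 + eps).

    X: (timepoints, dimensions) trajectory, e.g. top principal components.
    """
    dX = np.gradient(X, dt, axis=0)
    if eps is None:
        eps = 0.1 * X.var()  # small softening constant
    # Pairwise squared distances between states and between derivatives.
    state_d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    deriv_d2 = ((dX[:, None, :] - dX[None, :, :]) ** 2).sum(-1)
    return (deriv_d2 / (state_d2 + eps)).max(axis=1)

# A smooth circle has low tangling; a figure-eight, whose crossing point
# pairs nearly identical states with very different derivatives, tangles more.
t = np.linspace(0, 2 * np.pi, 300)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
eight = np.stack([np.sin(t), np.sin(t) * np.cos(t)], axis=1)
print(tangling(circle).max() < tangling(eight).max())  # True
```

In the paper's framing, higher tangling of the behavioral trajectories indicates more feedback-driven motor control.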
Affiliation(s)
- Rossella Breveglieri
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Riccardo Brandolani
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Center for Neuroscience, University of Camerino, Camerino, Italy
- Stefano Diomedi
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Markus Lappe
- Department of Psychology, Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
- Claudio Galletti
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Patrizia Fattori
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
6. Menéndez JA, Hennig JA, Golub MD, Oby ER, Sadtler PT, Batista AP, Chase SM, Yu BM, Latham PE. A theory of brain-computer interface learning via low-dimensional control. bioRxiv 2024:2024.04.18.589952. PMID: 38712193; PMCID: PMC11071278; DOI: 10.1101/2024.04.18.589952.
Abstract
A remarkable demonstration of the flexibility of mammalian motor systems is primates' ability to learn to control brain-computer interfaces (BCIs). This constitutes a completely novel motor behavior, yet primates are capable of learning to control BCIs under a wide range of conditions. BCIs with carefully calibrated decoders, for example, can be learned with only minutes to hours of practice. With a few weeks of practice, even BCIs with randomly constructed decoders can be learned. What are the biological substrates of this learning process? Here, we develop a theory based on a re-aiming strategy, whereby learning operates within a low-dimensional subspace of task-relevant inputs driving the local population of recorded neurons. Through comprehensive numerical and formal analysis, we demonstrate that this theory can provide a unifying explanation for disparate phenomena previously reported in three different BCI learning tasks, and we derive a novel experimental prediction that we verify with previously published data. By explicitly modeling the underlying neural circuitry, the theory reveals an interpretation of these phenomena in terms of biological constraints on neural activity.
7. Chettih SN, Mackevicius EL, Hale S, Aronov D. Barcoding of episodic memories in the hippocampus of a food-caching bird. Cell 2024; 187:1922-1935.e20. PMID: 38554707; PMCID: PMC11015962; DOI: 10.1016/j.cell.2024.02.032.
Abstract
The hippocampus is critical for episodic memory. Although hippocampal activity represents place and other behaviorally relevant variables, it is unclear how it encodes numerous memories of specific events in life. To study episodic coding, we leveraged the specialized behavior of chickadees-food-caching birds that form memories at well-defined moments in time whenever they cache food for subsequent retrieval. Our recordings during caching revealed very sparse, transient barcode-like patterns of firing across hippocampal neurons. Each "barcode" uniquely represented a caching event and transiently reactivated during the retrieval of that specific cache. Barcodes co-occurred with the conventional activity of place cells but were uncorrelated even for nearby cache locations that had similar place codes. We propose that animals recall episodic memories by reactivating hippocampal barcodes. Similarly to computer hash codes, these patterns assign unique identifiers to different events and could be a mechanism for rapid formation and storage of many non-interfering memories.
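The hash-code analogy can be made concrete: sparse random patterns over a large population are nearly uncorrelated with one another, so each pattern can serve as a unique, non-interfering identifier for one event. A sketch with hypothetical numbers (not the recorded data; real barcodes are transient firing patterns, not static binary vectors):

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_events, sparsity = 2000, 200, 0.05

# Each caching event gets a sparse, random "barcode": a small random
# subset of neurons is active.
barcodes = (rng.random((n_events, n_neurons)) < sparsity).astype(float)

# Pairwise correlations between different events' barcodes are near zero,
# so reactivating one barcode points unambiguously to one specific event.
corr = np.corrcoef(barcodes)
off_diag = corr[~np.eye(n_events, dtype=bool)]
mean_overlap = float(np.abs(off_diag).mean())
print(mean_overlap < 0.1)  # True: barcodes barely interfere
```

This near-orthogonality is what allows many memories to be stored and retrieved without interference, the property the abstract attributes to hippocampal barcodes.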
Affiliation(s)
- Selmaan N Chettih
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Emily L Mackevicius
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Basis Research Institute, New York, NY 10027, USA
- Stephanie Hale
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Dmitriy Aronov
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
8. Dekleva BM, Chowdhury RH, Batista AP, Chase SM, Yu BM, Boninger ML, Collinger JL. Motor cortex retains and reorients neural dynamics during motor imagery. Nat Hum Behav 2024; 8:729-742. PMID: 38287177; PMCID: PMC11089477; DOI: 10.1038/s41562-023-01804-5.
Abstract
The most prominent characteristic of motor cortex is its activation during movement execution, but it is also active when we simply imagine movements in the absence of actual motor output. Despite decades of behavioural and imaging studies, it is unknown how the specific activity patterns and temporal dynamics in motor cortex during covert motor imagery relate to those during motor execution. Here we recorded intracortical activity from the motor cortex of two people who retain some residual wrist function following incomplete spinal cord injury as they performed both actual and imagined isometric wrist extensions. We found that we could decompose the population activity into three orthogonal subspaces, where one was similarly active during both action and imagery, and the others were active only during a single task type-action or imagery. Although they inhabited orthogonal neural dimensions, the action-unique and imagery-unique subspaces contained a strikingly similar set of dynamic features. Our results suggest that during motor imagery, motor cortex maintains the same overall population dynamics as during execution by reorienting the components related to motor output and/or feedback into a unique, output-null imagery subspace.
Affiliation(s)
- Brian M Dekleva
- Rehab Neural Engineering Labs, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Physical Medicine & Rehabilitation, University of Pittsburgh, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Raeed H Chowdhury
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Aaron P Batista
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Steven M Chase
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Byron M Yu
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Michael L Boninger
- Rehab Neural Engineering Labs, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Physical Medicine & Rehabilitation, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Jennifer L Collinger
- Rehab Neural Engineering Labs, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Physical Medicine & Rehabilitation, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
9. Churchland MM, Shenoy KV. Preparatory activity and the expansive null-space. Nat Rev Neurosci 2024; 25:213-236. PMID: 38443626; DOI: 10.1038/s41583-024-00796-z.
Abstract
The study of the cortical control of movement experienced a conceptual shift over recent decades, as the basic currency of understanding shifted from single-neuron tuning towards population-level factors and their dynamics. This transition was informed by a maturing understanding of recurrent networks, where mechanism is often characterized in terms of population-level factors. By estimating factors from data, experimenters could test network-inspired hypotheses. Central to such hypotheses are 'output-null' factors that do not directly drive motor outputs yet are essential to the overall computation. In this Review, we highlight how the hypothesis of output-null factors was motivated by the venerable observation that motor-cortex neurons are active during movement preparation, well before movement begins. We discuss how output-null factors then became similarly central to understanding neural activity during movement. We discuss how this conceptual framework provided key analysis tools, making it possible for experimenters to address long-standing questions regarding motor control. We highlight an intriguing trend: as experimental and theoretical discoveries accumulate, the range of computational roles hypothesized to be subserved by output-null factors continues to expand.
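The notion of output-null factors has a compact linear-algebra reading: given a linear readout W from population activity to motor output, any activity change lying in the null space of W leaves the output untouched, so preparatory activity can evolve there without causing movement. A minimal sketch (the random readout and population size are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)
n_neurons, n_outputs = 20, 2

# Hypothetical linear readout from neural activity to motor output.
W = rng.normal(size=(n_outputs, n_neurons))

# Rows of Vt beyond the readout's rank span the null space of W:
# the readout is blind to activity changes along those directions.
_, _, Vt = np.linalg.svd(W)
potent_basis = Vt[:n_outputs]   # output-potent directions
null_basis = Vt[n_outputs:]     # output-null directions

x = rng.normal(size=n_neurons)
x_prep = x + 5.0 * null_basis[0]  # large change confined to the null space

# Preparatory-like activity in the null space produces no motor output.
print(np.allclose(W @ x, W @ x_prep))  # True: output unchanged
```

With many more neurons than muscles or output variables, the null space is far larger than the potent space, which is the "expansive" null space of the title.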
Affiliation(s)
- Mark M Churchland
- Department of Neuroscience, Columbia University, New York, NY, USA.
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA.
- Kavli Institute for Brain Science, Columbia University, New York, NY, USA.
- Krishna V Shenoy
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Department of Neurobiology, Stanford University, Stanford, CA, USA
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Bio-X Institute, Stanford University, Stanford, CA, USA
- Howard Hughes Medical Institute at Stanford University, Stanford, CA, USA
10. Temmar H, Willsey MS, Costello JT, Mender MJ, Cubillos LH, Lam JL, Wallace DM, Kelberman MM, Patil PG, Chestek CA. Artificial neural network for brain-machine interface consistently produces more naturalistic finger movements than linear methods. bioRxiv 2024:2024.03.01.583000. PMID: 38496403; PMCID: PMC10942378; DOI: 10.1101/2024.03.01.583000.
Abstract
Brain-machine interfaces (BMIs) aim to restore function to people living with spinal cord injuries by 'decoding' neural signals into behavior. Recently, nonlinear BMI decoders have outperformed previous state-of-the-art linear decoders, but few studies have investigated what specific improvements these nonlinear approaches provide. In this study, we compare how temporally convolved feedforward neural networks (tcFNNs) and linear approaches predict individuated finger movements in open- and closed-loop settings. We show that nonlinear decoders generate more naturalistic movements, producing distributions of velocities 85.3% closer to true hand control than linear decoders. Addressing concerns that neural networks may come to inconsistent solutions, we find that regularization techniques improve the consistency of tcFNN convergence by 194.6%, along with improving average performance and training speed. Finally, we show that tcFNN can leverage training data from multiple task variations to improve generalization. The results of this study show that nonlinear methods produce more naturalistic movements and show potential for generalizing over less constrained tasks.
Teaser
A neural network decoder produces consistent naturalistic movements and shows potential for real-world generalization through task variations.
11. Monosov IE. Curiosity: primate neural circuits for novelty and information seeking. Nat Rev Neurosci 2024; 25:195-208. PMID: 38263217; DOI: 10.1038/s41583-023-00784-9.
Abstract
For many years, neuroscientists have investigated the behavioural, computational and neurobiological mechanisms that support value-based decisions, revealing how humans and animals make choices to obtain rewards. However, many decisions are influenced by factors other than the value of physical rewards or second-order reinforcers (such as money). For instance, animals (including humans) frequently explore novel objects that have no intrinsic value solely because they are novel and they exhibit the desire to gain information to reduce their uncertainties about the future, even if this information cannot lead to reward or assist them in accomplishing upcoming tasks. In this Review, I discuss how circuits in the primate brain responsible for detecting, predicting and assessing novelty and uncertainty regulate behaviour and give rise to these behavioural components of curiosity. I also briefly discuss how curiosity-related behaviours arise during postnatal development and point out some important reasons for the persistence of curiosity across generations.
Affiliation(s)
- Ilya E Monosov
- Department of Neuroscience, Washington University School of Medicine, St. Louis, MO, USA.
- Department of Electrical Engineering, Washington University, St. Louis, MO, USA.
- Department of Biomedical Engineering, Washington University, St. Louis, MO, USA.
- Department of Neurosurgery, Washington University, St. Louis, MO, USA.
- Pain Center, Washington University, St. Louis, MO, USA.
12. Zimnik AJ, Cora Ames K, An X, Driscoll L, Lara AH, Russo AA, Susoy V, Cunningham JP, Paninski L, Churchland MM, Glaser JI. Identifying Interpretable Latent Factors with Sparse Component Analysis. bioRxiv 2024:2024.02.05.578988. PMID: 38370650; PMCID: PMC10871230; DOI: 10.1101/2024.02.05.578988.
Abstract
In many neural populations, the computationally relevant signals are posited to be a set of 'latent factors' - signals shared across many individual neurons. Understanding the relationship between neural activity and behavior requires the identification of factors that reflect distinct computational roles. Methods for identifying such factors typically require supervision, which can be suboptimal if one is unsure how (or whether) factors can be grouped into distinct, meaningful sets. Here, we introduce Sparse Component Analysis (SCA), an unsupervised method that identifies interpretable latent factors. SCA seeks factors that are sparse in time and occupy orthogonal dimensions. With these simple constraints, SCA facilitates surprisingly clear parcellations of neural activity across a range of behaviors. We applied SCA to motor cortex activity from reaching and cycling monkeys, single-trial imaging data from C. elegans, and activity from a multitask artificial network. SCA consistently identified sets of factors that were useful in describing network computations.
Affiliation(s)
- Andrew J Zimnik
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- K Cora Ames
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Xinyue An
- Department of Neurology, Northwestern University, Chicago, IL, USA
- Interdepartmental Neuroscience Program, Northwestern University, Chicago, IL, USA
- Laura Driscoll
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Allen Institute for Neural Dynamics, Allen Institute, Seattle, WA, USA
- Antonio H Lara
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Abigail A Russo
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Vladislav Susoy
- Department of Physics, Harvard University, Cambridge, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- John P Cunningham
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Department of Statistics, Columbia University, New York, NY, USA
- Liam Paninski
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Department of Statistics, Columbia University, New York, NY, USA
- Mark M Churchland
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Kavli Institute for Brain Science, Columbia University Medical Center, New York, NY, USA
- Joshua I Glaser
- Department of Neurology, Northwestern University, Chicago, IL, USA
- Department of Computer Science, Northwestern University, Evanston, IL, USA
13. Dyballa L, Rudzite AM, Hoseini MS, Thapa M, Stryker MP, Field GD, Zucker SW. Population encoding of stimulus features along the visual hierarchy. Proc Natl Acad Sci U S A 2024; 121:e2317773121. PMID: 38227668; PMCID: PMC10823231; DOI: 10.1073/pnas.2317773121.
Abstract
The retina and primary visual cortex (V1) both exhibit diverse neural populations sensitive to diverse visual features. Yet it remains unclear how neural populations in each area partition stimulus space to span these features. One possibility is that neural populations are organized into discrete groups of neurons, with each group signaling a particular constellation of features. Alternatively, neurons could be continuously distributed across feature-encoding space. To distinguish these possibilities, we presented a battery of visual stimuli to the mouse retina and V1 while measuring neural responses with multi-electrode arrays. Using machine learning approaches, we developed a manifold embedding technique that captures how neural populations partition feature space and how visual responses correlate with physiological and anatomical properties of individual neurons. We show that retinal populations discretely encode features, while V1 populations provide a more continuous representation. Applying the same analysis approach to convolutional neural networks that model visual processing, we demonstrate that they partition features much more similarly to the retina, indicating they are more like big retinas than little brains.
Affiliation(s)
- Luciano Dyballa
- Department of Computer Science, Yale University, New Haven, CT 06511
- Mahmood S. Hoseini
- Department of Physiology, University of California, San Francisco, CA 94143
- Mishek Thapa
- Department of Neurobiology, Duke University, Durham, NC 27708
- Department of Ophthalmology, David Geffen School of Medicine, Stein Eye Institute, University of California, Los Angeles, CA 90095
- Michael P. Stryker
- Department of Physiology, University of California, San Francisco, CA 94143
- Kavli Institute for Fundamental Neuroscience, University of California, San Francisco, CA 94143
- Greg D. Field
- Department of Neurobiology, Duke University, Durham, NC 27708
- Department of Ophthalmology, David Geffen School of Medicine, Stein Eye Institute, University of California, Los Angeles, CA 90095
- Steven W. Zucker
- Department of Computer Science, Yale University, New Haven, CT 06511
- Department of Biomedical Engineering, Yale University, New Haven, CT 06511
14
Weber J, Solbakk AK, Blenkmann AO, Llorens A, Funderud I, Leske S, Larsson PG, Ivanovic J, Knight RT, Endestad T, Helfrich RF. Ramping dynamics and theta oscillations reflect dissociable signatures during rule-guided human behavior. Nat Commun 2024; 15:637. [PMID: 38245516 PMCID: PMC10799948 DOI: 10.1038/s41467-023-44571-7] [Received: 02/12/2022] [Accepted: 12/19/2023] [Indexed: 01/22/2024]
Abstract
Contextual cues and prior evidence guide human goal-directed behavior. The neurophysiological mechanisms that implement contextual priors to guide subsequent actions in the human brain remain unclear. Using intracranial electroencephalography (iEEG), we demonstrate that increasing uncertainty introduces a shift from a purely oscillatory to a mixed processing regime with an additional ramping component. Oscillatory and ramping dynamics reflect dissociable signatures, which likely differentially contribute to the encoding and transfer of different cognitive variables in a cue-guided motor task. The results support the idea that prefrontal activity encodes rules and ensuing actions in distinct coding subspaces, while theta oscillations synchronize the prefrontal-motor network, possibly to guide action execution. Collectively, our results reveal how two key features of large-scale neural population activity, namely continuous ramping dynamics and oscillatory synchrony, jointly support rule-guided human behavior.
Affiliation(s)
- Jan Weber
- Hertie Institute for Clinical Brain Research, Center for Neurology, University Medical Center Tübingen, Tübingen, Germany
- International Max Planck Research School for the Mechanisms of Mental Function and Dysfunction, University of Tübingen, Tübingen, Germany
- Anne-Kristin Solbakk
- Department of Psychology, University of Oslo, Oslo, Norway
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Department of Neurosurgery, Oslo University Hospital, Oslo, Norway
- Department of Neuropsychology, Helgeland Hospital, Mosjøen, Norway
- Alejandro O Blenkmann
- Department of Psychology, University of Oslo, Oslo, Norway
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Anais Llorens
- Department of Psychology, University of Oslo, Oslo, Norway
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Helen Wills Neuroscience Institute, UC Berkeley, Berkeley, CA, USA
- Ingrid Funderud
- Department of Psychology, University of Oslo, Oslo, Norway
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Department of Neuropsychology, Helgeland Hospital, Mosjøen, Norway
- Sabine Leske
- Department of Psychology, University of Oslo, Oslo, Norway
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Department of Musicology, University of Oslo, Oslo, Norway
- Robert T Knight
- Helen Wills Neuroscience Institute, UC Berkeley, Berkeley, CA, USA
- Department of Psychology, UC Berkeley, Berkeley, CA, USA
- Tor Endestad
- Department of Psychology, University of Oslo, Oslo, Norway
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Randolph F Helfrich
- Hertie Institute for Clinical Brain Research, Center for Neurology, University Medical Center Tübingen, Tübingen, Germany.
15
Gort J. Emergence of Universal Computations Through Neural Manifold Dynamics. Neural Comput 2024; 36:227-270. [PMID: 38101328 DOI: 10.1162/neco_a_01631] [Received: 04/07/2023] [Accepted: 09/05/2023] [Indexed: 12/17/2023]
Abstract
There is growing evidence that many forms of neural computation may be implemented by low-dimensional dynamics unfolding at the population scale. However, neither the connectivity structure nor the general capabilities of these embedded dynamical processes are currently understood. In this work, the two most common formalisms of firing-rate models are evaluated using tools from analysis, topology, and nonlinear dynamics in order to provide plausible explanations for these problems. It is shown that low-rank structured connectivities predict the formation of invariant and globally attracting manifolds in all these models. Regarding the dynamics arising in these manifolds, it is proved they are topologically equivalent across the considered formalisms. This letter also shows that under the low-rank hypothesis, the flows emerging in neural manifolds, including input-driven systems, are universal, which broadens previous findings. It explores how low-dimensional orbits can bear the production of continuous sets of muscular trajectories, the implementation of central pattern generators, and the storage of memory states. These dynamics can robustly simulate any Turing machine over arbitrary bounded memory strings, virtually endowing rate models with the power of universal computation. In addition, the letter shows how the low-rank hypothesis predicts the parsimonious correlation structure observed in cortical activity. Finally, it discusses how this theory could provide a useful tool from which to study neuropsychological phenomena using mathematical methods.
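The letter's central prediction, that low-rank connectivity confines rate dynamics to a globally attracting low-dimensional manifold, is easy to illustrate numerically. Below is a minimal sketch, not taken from the paper: network size, rank, and integration parameters are arbitrary choices, and the connectivity is the standard rank-2 outer-product form J = m nᵀ.

```python
import numpy as np

rng = np.random.default_rng(1)
N, rank = 200, 2

# Low-rank connectivity J = m n^T (rank 2 by construction)
m = rng.standard_normal((N, rank)) / np.sqrt(N)
n = rng.standard_normal((N, rank))
J = m @ n.T

# Euler-integrate the rate dynamics  tau dx/dt = -x + J tanh(x)
x = rng.standard_normal(N)          # random initial condition
dt, tau = 0.1, 1.0
for _ in range(500):
    x = x + (dt / tau) * (-x + J @ np.tanh(x))

# The recurrent input J tanh(x) always lies in span(m), so the component of
# x orthogonal to span(m) decays as e^{-t/tau}: after transients, activity
# is confined to a 2-D linear subspace of the N-dimensional state space
coeffs, *_ = np.linalg.lstsq(m, x, rcond=None)
residual = np.linalg.norm(x - m @ coeffs)
print(residual)  # ~0: activity confined to span(m)
```

The same argument holds for any saturating rate nonlinearity, which is why the manifold here is exactly linear; input-driven and higher-rank cases curve the manifold but preserve its dimensionality.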
Affiliation(s)
- Joan Gort
- Facultat de Psicologia, Universitat Autònoma de Barcelona, 08193, Bellaterra, Barcelona, Spain
16
Oby ER, Degenhart AD, Grigsby EM, Motiwala A, McClain NT, Marino PJ, Yu BM, Batista AP. Dynamical constraints on neural population activity. bioRxiv 2024:2024.01.03.573543. [PMID: 38260549 PMCID: PMC10802336 DOI: 10.1101/2024.01.03.573543] [Indexed: 01/24/2024]
Abstract
The manner in which neural activity unfolds over time is thought to be central to sensory, motor, and cognitive functions in the brain. Network models have long posited that the brain's computations involve time courses of activity that are shaped by the underlying network. A prediction from this view is that the activity time courses should be difficult to violate. We leveraged a brain-computer interface (BCI) to challenge monkeys to violate the naturally-occurring time courses of neural population activity that we observed in motor cortex. This included challenging animals to traverse the natural time course of neural activity in a time-reversed manner. Animals were unable to violate the natural time courses of neural activity when directly challenged to do so. These results provide empirical support for the view that activity time courses observed in the brain indeed reflect the underlying network-level computational mechanisms that they are believed to implement.
17
Jordan GA, Vishwanath A, Holguin G, Bartlett MJ, Tapia AK, Winter GM, Sexauer MR, Stopera CJ, Falk T, Cowen SL. Automated system for training and assessing reaching and grasping behaviors in rodents. J Neurosci Methods 2024; 401:109990. [PMID: 37866457 PMCID: PMC10731814 DOI: 10.1016/j.jneumeth.2023.109990] [Received: 08/11/2023] [Revised: 09/27/2023] [Accepted: 10/13/2023] [Indexed: 10/24/2023]
Abstract
BACKGROUND: Reaching, grasping, and pulling behaviors are studied across species to investigate motor control and problem solving. String pulling is a distinct reaching and grasping behavior that is rapidly learned, requires bimanual coordination, is ethologically grounded, and has been applied across species and disease conditions.
NEW METHOD: Here we describe the PANDA system (Pulling And Neural Data Analysis), a hardware and software system that integrates a continuous string loop connected to a rotary encoder, feeder, microcontroller, high-speed camera, and analysis software for the assessment and training of reaching, grasping, and pulling behaviors and synchronization with neural data.
RESULTS: We demonstrate this system in rats implanted with electrodes in motor cortex and hippocampus and show how it can be used to assess relationships between reaching, pulling, and grasping movements and single-unit and local-field activity. Furthermore, we found that automating the shaping procedure significantly improved performance over manual training, with rats pulling > 100 m during a 15-minute session.
COMPARISON WITH EXISTING METHODS: String-pulling is typically shaped by tying food reward to the string and visually scoring behavior. The system described here automates training, streamlines video assessment with deep learning, and automatically segments reaching movements into distinct reach/pull phases. No system, to our knowledge, exists for the automated shaping and assessment of this behavior.
CONCLUSIONS: This system will be of general use to researchers investigating motor control, motivation, sensorimotor integration, and motor disorders such as Parkinson's disease and stroke.
Affiliation(s)
- Gianna A Jordan
- Biomedical Engineering, University of Arizona, Tucson, AZ, USA
- Andrew K Tapia
- Biomedical Engineering, University of Arizona, Tucson, AZ, USA
- Torsten Falk
- Neurology, University of Arizona, Tucson, AZ, USA; Pharmacology, University of Arizona, Tucson, AZ, USA
18
Abbaspourazad H, Erturk E, Pesaran B, Shanechi MM. Dynamical flexible inference of nonlinear latent factors and structures in neural population activity. Nat Biomed Eng 2024; 8:85-108. [PMID: 38082181 DOI: 10.1038/s41551-023-01106-1] [Received: 04/21/2022] [Accepted: 09/12/2023] [Indexed: 12/26/2023]
Abstract
Modelling the spatiotemporal dynamics in the activity of neural populations while also enabling their flexible inference is hindered by the complexity and noisiness of neural observations. Here we show that lower-dimensional nonlinear latent factors and latent structures can be computationally modelled in a manner that allows for flexible inference causally, non-causally and in the presence of missing neural observations. To enable flexible inference, we developed a neural network that separates the model into jointly trained manifold and dynamic latent factors such that nonlinearity is captured through the manifold factors and the dynamics can be modelled in tractable linear form on this nonlinear manifold. We show that the model, which we named 'DFINE' (for 'dynamical flexible inference for nonlinear embeddings'), achieves flexible inference in simulations of nonlinear dynamics and across neural datasets representing a diversity of brain regions and behaviours. Compared with earlier neural-network models, DFINE enables flexible inference, better predicts neural activity and behaviour, and better captures the latent neural manifold structure. DFINE may advance the development of neurotechnology and investigations in neuroscience.
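One ingredient of this recipe, tractable linear latent dynamics whose inference stays well-defined when samples are missing, can be illustrated with an ordinary Kalman filter that simply skips the update step on unobserved time points. This is a hand-written sketch of that one idea only: the jointly trained manifold network is omitted, the readout here is linear, and every matrix is an arbitrary toy value rather than anything from the paper.

```python
import numpy as np

# Linear latent dynamics x_t = A x_{t-1} + w_t with a linear readout C;
# a stand-in for the "dynamic factor" stage of a DFINE-style model
A = np.array([[0.99, -0.06], [0.06, 0.99]])          # slow, stable rotation
C = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])   # 3 observed channels
Q, R = 0.01 * np.eye(2), 0.1 * np.eye(3)

rng = np.random.default_rng(4)
T = 300
x = np.zeros((T, 2)); x[0] = [1.0, 0.0]
for t in range(1, T):
    x[t] = A @ x[t - 1] + rng.multivariate_normal(np.zeros(2), Q)
y = x @ C.T + rng.multivariate_normal(np.zeros(3), R, size=T)
missing = rng.random(T) < 0.3          # drop ~30% of samples at random

# Causal Kalman filter: predict every step, update only when observed
m, P = np.zeros(2), np.eye(2)
est = np.zeros((T, 2))
for t in range(T):
    m, P = A @ m, A @ P @ A.T + Q                    # predict
    if not missing[t]:                               # update if observed
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        m = m + K @ (y[t] - C @ m)
        P = (np.eye(2) - K @ C) @ P
    est[t] = m

err = np.mean((est - x) ** 2)   # should beat the trivial estimate of zero
```

Running the same recursion backward over stored predictions would give the non-causal (smoothed) estimate; the point of the sketch is that missing observations cost accuracy but never break the inference.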
Affiliation(s)
- Hamidreza Abbaspourazad
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Eray Erturk
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Bijan Pesaran
- Departments of Neurosurgery, Neuroscience, and Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA.
- Thomas Lord Department of Computer Science, Alfred E. Mann Department of Biomedical Engineering, Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, USA.
19
Hatsopoulos N, Moore D, MacLean J, Walker J. A dynamic subset of network interactions underlies tuning to natural movements in marmoset sensorimotor cortex. Research Square 2023:rs.3.rs-3750312. [PMID: 38234779 PMCID: PMC10793486 DOI: 10.21203/rs.3.rs-3750312/v1] [Indexed: 01/19/2024]
Abstract
Mechanisms of computation in sensorimotor cortex must be flexible and robust to support skilled motor behavior. Patterns of neuronal coactivity emerge as a result of computational processes. Pairwise spike-time statistical relationships, across the population, can be summarized as a functional network (FN) which retains single-unit properties. We record populations of single-unit neural activity in forelimb sensorimotor cortex during prey-capture and spontaneous behavior and use an encoding model incorporating kinematic trajectories and network features to predict single-unit activity during forelimb movements. The contribution of network features depends on structured connectivity within strongly connected functional groups. We identify a context-specific functional group that is highly tuned to kinematics and reorganizes its connectivity between spontaneous and prey-capture movements. In the remaining context-invariant group, interactions are comparatively stable across behaviors and units are less tuned to kinematics. This suggests different roles in producing natural forelimb movements and contextualizes single-unit tuning properties within population dynamics.
20
Lin XX, Nieder A, Jacob SN. The neuronal implementation of representational geometry in primate prefrontal cortex. Sci Adv 2023; 9:eadh8685. [PMID: 38091404 PMCID: PMC10848744 DOI: 10.1126/sciadv.adh8685] [Received: 03/20/2023] [Accepted: 11/14/2023] [Indexed: 12/18/2023]
Abstract
Modern neuroscience has seen the rise of a population doctrine that represents cognitive variables using geometrical structures in activity space. Representational geometry does not, however, account for how individual neurons implement these representations. Leveraging the principle of sparse coding, we present a framework to dissect representational geometry into biologically interpretable components that retain links to single neurons. Applied to extracellular recordings from the primate prefrontal cortex in a working memory task with interference, the identified components revealed disentangled and sequential memory representations, including the recovery of memory content after distraction, signals hidden to conventional analyses. Each component was contributed by small subpopulations of neurons with distinct spiking properties and response dynamics. Modeling showed that such sparse implementations are supported by recurrently connected circuits as in prefrontal cortex. The perspective of neuronal implementation links representational geometries to their cellular constituents, providing mechanistic insights into how neural systems encode and process information.
Affiliation(s)
- Xiao-Xiong Lin
- Translational Neurotechnology Laboratory, Department of Neurosurgery, Klinikum rechts der Isar, Technical University of Munich, Germany
- Graduate School of Systemic Neurosciences, Ludwig-Maximilians-University Munich, Germany
- Simon N. Jacob
- Translational Neurotechnology Laboratory, Department of Neurosurgery, Klinikum rechts der Isar, Technical University of Munich, Germany
21
Shinn M. Phantom oscillations in principal component analysis. Proc Natl Acad Sci U S A 2023; 120:e2311420120. [PMID: 37988465 PMCID: PMC10691246 DOI: 10.1073/pnas.2311420120] [Received: 07/10/2023] [Accepted: 10/18/2023] [Indexed: 11/23/2023]
Abstract
Principal component analysis (PCA) is a dimensionality reduction method that is known for being simple and easy to interpret. Principal components are often interpreted as low-dimensional patterns in high-dimensional space. However, this simple interpretation fails for timeseries, spatial maps, and other continuous data. In these cases, nonoscillatory data may have oscillatory principal components. Here, we show that two common properties of data cause oscillatory principal components: smoothness and shifts in time or space. These two properties implicate almost all neuroscience data. We show how the oscillations produced by PCA, which we call "phantom oscillations," impact data analysis. We also show that traditional cross-validation does not detect phantom oscillations, so we suggest procedures that do. Our findings are supported by a collection of mathematical proofs. Collectively, our work demonstrates that patterns which emerge from high-dimensional data analysis may not faithfully represent the underlying data.
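The core phenomenon is easy to reproduce: smooth but nonoscillatory trials yield sinusoid-like principal components. The sketch below is illustrative only and not taken from the paper; the boxcar smoothing, trial counts, and component count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_time = 200, 300

# Smooth, nonoscillatory data: white noise low-pass filtered per trial
raw = rng.standard_normal((n_trials, n_time))
kernel = np.ones(25) / 25.0
data = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, raw)

# PCA via SVD of the trial-mean-centered data matrix
centered = data - data.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
pcs = Vt[:4]                     # leading component time courses

# Smoothing makes the time-lagged covariance nearly translation-invariant,
# so its eigenvectors approximate Fourier modes: the "phantom oscillations"
# show up as regular zero crossings in the PC time courses
def zero_crossings(pc):
    return int(np.sum(np.diff(np.sign(pc)) != 0))

print([zero_crossings(pc) for pc in pcs])
```

Plotting `pcs` makes the effect obvious: each successive component resembles a sinusoid of increasing frequency, even though no individual trial oscillates.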
Affiliation(s)
- Maxwell Shinn
- University College London (UCL) Queen Square Institute of Neurology, University College London, London WC1E 6BT, United Kingdom
22
Kirk EA, Hope KT, Sober SJ, Sauerbrei BA. An output-null signature of inertial load in motor cortex. bioRxiv 2023:2023.11.06.565869. [PMID: 37986810 PMCID: PMC10659339 DOI: 10.1101/2023.11.06.565869] [Indexed: 11/22/2023]
Abstract
Coordinated movement requires the nervous system to continuously compensate for changes in mechanical load across different contexts. For voluntary movements like reaching, the motor cortex is a critical hub that generates commands to move the limbs and counteract loads. How does cortex contribute to load compensation when rhythmic movements are clocked by a spinal pattern generator? Here, we address this question by manipulating the mass of the forelimb in unrestrained mice during locomotion. While load produces changes in motor output that are robust to inactivation of motor cortex, it also induces a profound shift in cortical dynamics, which is minimally affected by cerebellar perturbation and significantly larger than the response in the spinal motoneuron population. This latent representation may enable motor cortex to generate appropriate commands when a voluntary movement must be integrated with an ongoing, spinally-generated rhythm.
Affiliation(s)
- Eric A. Kirk
- Case Western Reserve University School of Medicine, Department of Neurosciences
- Keenan T. Hope
- Case Western Reserve University School of Medicine, Department of Neurosciences
23
Jarne C, Laje R. Exploring weight initialization, diversity of solutions, and degradation in recurrent neural networks trained for temporal and decision-making tasks. J Comput Neurosci 2023; 51:407-431. [PMID: 37561278 DOI: 10.1007/s10827-023-00857-9] [Received: 10/25/2022] [Revised: 05/26/2023] [Accepted: 06/27/2023] [Indexed: 08/11/2023]
Abstract
Recurrent Neural Networks (RNNs) are frequently used to model aspects of brain function and structure. In this work, we trained small fully-connected RNNs to perform temporal and flow control tasks with time-varying stimuli. Our results show that different RNNs can solve the same task by converging to different underlying dynamics, and also show how performance gracefully degrades as network size is decreased, interval duration is increased, or connectivity damage is induced. For the considered tasks, we explored how robust the network obtained after training can be to task parameterization. In the process, we developed a framework that can be useful for parameterizing other tasks of interest in computational neuroscience. Our results help quantify different aspects of these models, which are normally used as black boxes and need to be understood in order to model the biological response of cerebral cortex areas.
Affiliation(s)
- Cecilia Jarne
- Universidad Nacional de Quilmes, Departamento de Ciencia y Tecnología, Bernal, Buenos Aires, Argentina.
- CONICET, Buenos Aires, Argentina.
- Center for Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark.
- Rodrigo Laje
- Universidad Nacional de Quilmes, Departamento de Ciencia y Tecnología, Bernal, Buenos Aires, Argentina
- CONICET, Buenos Aires, Argentina
24
Wang S, Falcone R, Richmond B, Averbeck BB. Attractor dynamics reflect decision confidence in macaque prefrontal cortex. Nat Neurosci 2023; 26:1970-1980. [PMID: 37798412 DOI: 10.1038/s41593-023-01445-x] [Received: 12/11/2022] [Accepted: 08/31/2023] [Indexed: 10/07/2023]
Abstract
Decisions are made with different degrees of consistency, and this consistency can be linked to the confidence that the best choice has been made. Theoretical work suggests that attractor dynamics in networks can account for choice consistency, but how this is implemented in the brain remains unclear. Here we provide evidence that the energy landscape around attractor basins in population neural activity in the prefrontal cortex reflects choice consistency. We trained two rhesus monkeys to make accept/reject decisions based on pretrained visual cues that signaled reward offers with different magnitudes and delays to reward. Monkeys made consistent decisions for very good and very bad offers, but decisions were less consistent for intermediate offers. Analysis of neural data showed that the attractor basins around patterns of activity reflecting decisions had steeper landscapes for offers that led to consistent decisions. Therefore, we provide neural evidence that energy landscapes predict decision consistency, which reflects decision confidence.
Affiliation(s)
- Siyu Wang
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Rossella Falcone
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Leo M. Davidoff Department of Neurological Surgery, Albert Einstein College of Medicine Montefiore Medical Center, Bronx, NY, USA
- Barry Richmond
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Bruno B Averbeck
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA.
25
Durstewitz D, Koppe G, Thurm MI. Reconstructing computational system dynamics from neural data with recurrent neural networks. Nat Rev Neurosci 2023; 24:693-710. [PMID: 37794121 DOI: 10.1038/s41583-023-00740-7] [Accepted: 08/18/2023] [Indexed: 10/06/2023]
Abstract
Computational models in neuroscience usually take the form of systems of differential equations. The behaviour of such systems is the subject of dynamical systems theory. Dynamical systems theory provides a powerful mathematical toolbox for analysing neurobiological processes and has been a mainstay of computational neuroscience for decades. Recently, recurrent neural networks (RNNs) have become a popular machine learning tool for studying the non-linear dynamics of neural and behavioural processes by emulating an underlying system of differential equations. RNNs have been routinely trained on similar behavioural tasks to those used for animal subjects to generate hypotheses about the underlying computational mechanisms. By contrast, RNNs can also be trained on the measured physiological and behavioural data, thereby directly inheriting their temporal and geometrical properties. In this way they become a formal surrogate for the experimentally probed system that can be further analysed, perturbed and simulated. This powerful approach is called dynamical system reconstruction. In this Perspective, we focus on recent trends in artificial intelligence and machine learning in this exciting and rapidly expanding field, which may be less well known in neuroscience. We discuss formal prerequisites, different model architectures and training approaches for RNN-based dynamical system reconstructions, ways to evaluate and validate model performance, how to interpret trained models in a neuroscience context, and current challenges.
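A minimal flavour of data-driven dynamical system reconstruction can be given with an echo-state-style network, in which the recurrent weights stay fixed and only a linear readout is fit to predict the next sample of a measured signal. This is a deliberately simplified stand-in for the fully trained RNN architectures the Perspective surveys; the sine-wave "data", reservoir size, and regularisation strength are all arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Target system: a clean sine, standing in for a measured 1-D signal
T = 1000
t = np.arange(T) * 0.05
x = np.sin(t)

# Fixed random reservoir with spectral radius ~0.9 (echo-state property);
# only the linear readout below is learned
N = 300
W = rng.standard_normal((N, N)) * (0.9 / np.sqrt(N))
win = rng.standard_normal(N) * 0.5
h = np.zeros(N)
H = np.zeros((T, N))
for k in range(T):
    h = np.tanh(W @ h + win * x[k])
    H[k] = h

# Ridge-regression readout that maps the current state to the next sample,
# i.e. a one-step surrogate of the underlying dynamics
ridge = 1e-6
X, Y = H[:-1], x[1:]
w_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ Y)
pred = X @ w_out
mse = np.mean((pred - Y) ** 2)
```

Closing the loop, feeding `pred` back in place of `x[k]`, would turn this one-step predictor into a generative surrogate that can be simulated and perturbed, which is the sense in which the trained model "inherits" the dynamics of the measured system.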
Affiliation(s)
- Daniel Durstewitz
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany.
- Interdisciplinary Center for Scientific Computing, Heidelberg University, Heidelberg, Germany.
- Faculty of Physics and Astronomy, Heidelberg University, Heidelberg, Germany.
- Georgia Koppe
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Dept. of Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Hector Institute for Artificial Intelligence in Psychiatry, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Max Ingo Thurm
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
26
Tsuda B, Richmond BJ, Sejnowski TJ. Exploring strategy differences between humans and monkeys with recurrent neural networks. PLoS Comput Biol 2023; 19:e1011618. [PMID: 37983250 PMCID: PMC10695363 DOI: 10.1371/journal.pcbi.1011618] [Received: 02/19/2023] [Revised: 12/04/2023] [Accepted: 10/19/2023] [Indexed: 11/22/2023]
Abstract
Animal models are used to understand principles of human biology. Within cognitive neuroscience, non-human primates are considered the premier model for studying decision-making behaviors in which direct manipulation experiments are still possible. Some prominent studies have brought to light major discrepancies between monkey and human cognition, highlighting problems with unverified extrapolation from monkey to human. Here, we use a parallel model system, artificial neural networks (ANNs), to investigate a well-established discrepancy identified between monkeys and humans with a working memory task, in which monkeys appear to use a recency-based strategy while humans use a target-selective strategy. We find that ANNs trained on the same task exhibit a progression of behavior from random behavior (untrained) to recency-like behavior (partially trained) and finally to selective behavior (further trained), suggesting monkeys and humans may occupy different points in the same overall learning progression. Surprisingly, what appears to be recency-like behavior in the ANN is in fact an emergent non-recency-based property of the organization of the neural network's state space during its development through training. We find that explicit encouragement of recency behavior during training has a dual effect, not only causing an accentuated recency-like behavior, but also speeding up the learning process altogether, resulting in an efficient shaping mechanism to achieve the optimal strategy. Our results suggest a new explanation for the discrepancy observed between monkeys and humans and reveal that what can appear to be a recency-based strategy in some cases may not be recency at all.
Affiliation(s)
- Ben Tsuda
- Computational Neurobiology Laboratory, The Salk Institute for Biological Studies, La Jolla, California, United States of America
- Neurosciences Graduate Program, University of California San Diego, La Jolla, California, United States of America
- Medical Scientist Training Program, University of California San Diego, La Jolla, California, United States of America
- Barry J. Richmond
- Section on Neural Coding and Computation, National Institute of Mental Health, Bethesda, Maryland, United States of America
- Terrence J. Sejnowski
- Computational Neurobiology Laboratory, The Salk Institute for Biological Studies, La Jolla, California, United States of America
- Institute for Neural Computation, University of California San Diego, La Jolla, California, United States of America
- Division of Biological Sciences, University of California San Diego, La Jolla, California, United States of America
27
Betancourt A, Pérez O, Gámez J, Mendoza G, Merchant H. Amodal population clock in the primate medial premotor system for rhythmic tapping. Cell Rep 2023; 42:113234. [PMID: 37838944 DOI: 10.1016/j.celrep.2023.113234] [Received: 12/29/2022] [Revised: 08/09/2023] [Accepted: 09/24/2023] [Indexed: 10/17/2023]
Abstract
The neural substrate for beat extraction and response entrainment to rhythms is not fully understood. Here we analyze the activity of medial premotor neurons in monkeys performing isochronous tapping guided by brief flashing stimuli or auditory tones. The population dynamics shared the following properties across modalities: the circular dynamics of the neural trajectories form a regenerating loop for every produced interval; the trajectories converge to a similar state-space region at tapping times, resetting the clock; and the tempo of the synchronized tapping is encoded in the trajectories by a combination of amplitude modulation and temporal scaling. Notably, the modality induces a displacement of the neural trajectories into auditory and visual subspaces without greatly altering the time-keeping mechanism. These results suggest that the interaction between the medial premotor cortex's amodal internal representation of pulse and a modality-specific external input generates a neural rhythmic clock whose dynamics govern rhythmic tapping execution across senses.
Affiliation(s)
- Abraham Betancourt
- Instituto de Neurobiología, UNAM, Campus Juriquilla, Boulevard Juriquilla No. 3001, Querétaro, Qro 76230, México
| | - Oswaldo Pérez
- Escuela Nacional de Estudios Superiores, Unidad Juriquilla, UNAM, Boulevard Juriquilla No. 3001, Querétaro, Qro 76230, México
- Jorge Gámez
- Instituto de Neurobiología, UNAM, Campus Juriquilla, Boulevard Juriquilla No. 3001, Querétaro, Qro 76230, México
- Germán Mendoza
- Instituto de Neurobiología, UNAM, Campus Juriquilla, Boulevard Juriquilla No. 3001, Querétaro, Qro 76230, México
- Hugo Merchant
- Instituto de Neurobiología, UNAM, Campus Juriquilla, Boulevard Juriquilla No. 3001, Querétaro, Qro 76230, México.
28
Verhein JR, Vyas S, Shenoy KV. Methylphenidate modulates motor cortical dynamics and behavior. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.10.15.562405. [PMID: 37905157 PMCID: PMC10614820 DOI: 10.1101/2023.10.15.562405] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/02/2023]
Abstract
Methylphenidate (MPH; brand name Ritalin) is a common stimulant used both medically and non-medically. Though typically prescribed for its cognitive effects, MPH also affects movement. While it is known that MPH noncompetitively blocks the reuptake of catecholamines through inhibition of dopamine and norepinephrine transporters, a critical step in exploring how it affects behavior is to understand how MPH directly affects neural activity. This would establish an electrophysiological mechanism of action for MPH. Since we now have biologically grounded network-level hypotheses regarding how populations of motor cortical neurons plan and execute movements, there is a unique opportunity to make testable predictions regarding how systemic MPH administration - a pharmacological perturbation - might affect neural activity in motor cortex. To that end, we administered clinically relevant doses of MPH to rhesus monkeys as they performed an instructed-delay reaching task. Concomitantly, we measured neural activity from dorsal premotor and primary motor cortex. Consistent with our predictions, we found dose-dependent and significant effects on reaction time, trial-by-trial variability, and movement speed. We confirmed our hypotheses that changes in reaction time and variability were accompanied by previously established population-level changes in motor cortical preparatory activity and the condition-independent signal that precedes movements. We expected changes in speed to be a result of changes in the amplitude of motor cortical dynamics and/or a translation of those dynamics in activity space. Instead, our data are consistent with a mechanism whereby the neuromodulatory effect of MPH is to increase the gain and/or the signal-to-noise of motor cortical dynamics during reaching.
Continued work in this domain to better understand the brain-wide electrophysiological mechanism of action of MPH and other psychoactive drugs could facilitate more targeted treatments for a host of cognitive-motor disorders.
Affiliation(s)
- Jessica R Verhein
- Medical Scientist Training Program, Stanford School of Medicine, Stanford University, Stanford, CA
- Neurosciences Graduate Program, Stanford School of Medicine, Stanford University, Stanford, CA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA
- Current affiliations: Psychiatry Research Residency Training Program, University of California, San Francisco, San Francisco, CA
- Saurabh Vyas
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA
- Department of Bioengineering, Stanford University, Stanford, CA
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY
- Krishna V Shenoy
- Neurosciences Graduate Program, Stanford School of Medicine, Stanford University, Stanford, CA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA
- Department of Bioengineering, Stanford University, Stanford, CA
- Department of Electrical Engineering, Stanford University, Stanford, CA
- Howard Hughes Medical Institute at Stanford University, Stanford, CA
- Department of Neurobiology, Stanford University, Stanford, CA
- Bio-X Program, Stanford University, Stanford, CA
29
De A, Chaudhuri R. Common population codes produce extremely nonlinear neural manifolds. Proc Natl Acad Sci U S A 2023; 120:e2305853120. [PMID: 37733742 PMCID: PMC10523500 DOI: 10.1073/pnas.2305853120] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2023] [Accepted: 08/03/2023] [Indexed: 09/23/2023] Open
Abstract
Populations of neurons represent sensory, motor, and cognitive variables via patterns of activity distributed across the population. The size of the population used to encode a variable is typically much greater than the dimension of the variable itself, and thus, the corresponding neural population activity occupies lower-dimensional subsets of the full set of possible activity states. Given population activity data with such lower-dimensional structure, a fundamental question asks how close the low-dimensional data lie to a linear subspace. The linearity or nonlinearity of the low-dimensional structure reflects important computational features of the encoding, such as robustness and generalizability. Moreover, identifying such linear structure underlies common data analysis methods such as Principal Component Analysis (PCA). Here, we show that for data drawn from many common population codes, the resulting point clouds and manifolds are exceedingly nonlinear, with the dimension of the best-fitting linear subspace growing at least exponentially with the true dimension of the data. Consequently, linear methods like PCA fail dramatically at identifying the true underlying structure, even in the limit of arbitrarily many data points and no noise.
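The failure mode described in this abstract can be illustrated with a short numpy sketch (the synthetic population and all parameters here are my own, not taken from the paper): a population of von Mises-tuned neurons encodes a single circular variable, yet PCA needs several components to capture 95% of the response variance.

```python
import numpy as np

n_neurons, n_stim, kappa = 50, 1000, 5.0
theta = np.linspace(0, 2 * np.pi, n_stim, endpoint=False)     # 1-D circular latent variable
prefs = np.linspace(0, 2 * np.pi, n_neurons, endpoint=False)  # preferred angles

# Von Mises tuning curves: a common population code for a circular variable
X = np.exp(kappa * (np.cos(theta[:, None] - prefs[None, :]) - 1.0))

# PCA via SVD of the mean-centered responses
s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
var_explained = np.cumsum(s**2) / np.sum(s**2)
n95 = int(np.searchsorted(var_explained, 0.95) + 1)
print(n95)  # several PCs are needed even though the true latent dimension is 1
```

Sharpening the tuning (larger `kappa`) inflates the linear dimension further, while the underlying manifold remains a one-dimensional ring.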
Affiliation(s)
- Anandita De
- Center for Neuroscience, University of California, Davis, CA 95618
- Department of Physics, University of California, Davis, CA 95616
- Rishidev Chaudhuri
- Center for Neuroscience, University of California, Davis, CA 95618
- Department of Neurobiology, Physiology and Behavior, University of California, Davis, CA 95616
- Department of Mathematics, University of California, Davis, CA 95616
30
Wang S, Falcone R, Richmond B, Averbeck BB. Attractor dynamics reflect decision confidence in macaque prefrontal cortex. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.09.17.558139. [PMID: 37886489 PMCID: PMC10602028 DOI: 10.1101/2023.09.17.558139] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/28/2023]
Abstract
Decisions are made with different degrees of consistency, and this consistency can be linked to the confidence that the best choice has been made. Theoretical work suggests that attractor dynamics in networks can account for choice consistency, but how this is implemented in the brain remains unclear. Here, we provide evidence that the energy landscape around attractor basins in population neural activity in prefrontal cortex reflects choice consistency. We trained two rhesus monkeys to make accept/reject decisions based on pretrained visual cues that signaled reward offers with different magnitudes and delays-to-reward. Monkeys made consistent decisions for very good and very bad offers, but decisions were less consistent for intermediate offers. Analysis of neural data showed that the attractor basins around patterns of activity reflecting decisions had steeper landscapes for offers that led to consistent decisions. Therefore, we provide neural evidence that energy landscapes predict decision consistency, which reflects decision confidence.
31
Stephen EP, Li Y, Metzger S, Oganian Y, Chang EF. Latent neural dynamics encode temporal context in speech. Hear Res 2023; 437:108838. [PMID: 37441880 PMCID: PMC11182421 DOI: 10.1016/j.heares.2023.108838] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/29/2022] [Revised: 06/15/2023] [Accepted: 07/03/2023] [Indexed: 07/15/2023]
Abstract
Direct neural recordings from human auditory cortex have demonstrated encoding for acoustic-phonetic features of consonants and vowels. Neural responses also encode distinct acoustic amplitude cues related to timing, such as those that occur at the onset of a sentence after a silent period or the onset of the vowel in each syllable. Here, we used a group reduced rank regression model to show that distributed cortical responses support a low-dimensional latent state representation of temporal context in speech. The timing cues each capture more unique variance than all other phonetic features and exhibit rotational or cyclical dynamics in latent space from activity that is widespread over the superior temporal gyrus. We propose that these spatially distributed timing signals could serve to provide temporal context for, and possibly bind across time, the concurrent processing of individual phonetic features, to compose higher-order phonological (e.g. word-level) representations.
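A minimal numpy sketch of plain reduced rank regression (the paper uses a group variant; this simplified version and its synthetic data are illustrative only): fit ordinary least squares, then project the fitted values onto their top-r principal directions to obtain a low-rank coefficient matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, q, r = 500, 20, 30, 2   # samples, predictors, responses, target rank

# Ground truth: responses driven through an r-dimensional bottleneck, plus noise
X = rng.standard_normal((n, p))
Y = X @ rng.standard_normal((p, r)) @ rng.standard_normal((r, q)) \
    + 0.1 * rng.standard_normal((n, q))

# Step 1: ordinary least squares for each response column
B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Step 2: keep only the top-r principal directions of the fitted values
_, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
V_r = Vt[:r].T
B_rrr = B_ols @ V_r @ V_r.T   # rank-r coefficient matrix

rel_err = np.linalg.norm(Y - X @ B_rrr) / np.linalg.norm(Y)
print(round(rel_err, 3))      # small: the rank-r fit captures almost all variance
```

The rank constraint is what yields the shared low-dimensional "latent state" interpretation: all responses are predicted through the same r directions.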
Affiliation(s)
- Emily P Stephen
- Department of Neurological Surgery, University of California San Francisco, San Francisco, CA 94143, United States; Department of Mathematics and Statistics, Boston University, Boston, MA 02215, United States
- Yuanning Li
- Department of Neurological Surgery, University of California San Francisco, San Francisco, CA 94143, United States; School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Sean Metzger
- Department of Neurological Surgery, University of California San Francisco, San Francisco, CA 94143, United States
- Yulia Oganian
- Department of Neurological Surgery, University of California San Francisco, San Francisco, CA 94143, United States; Center for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Edward F Chang
- Department of Neurological Surgery, University of California San Francisco, San Francisco, CA 94143, United States.
32
Muscinelli SP, Wagner MJ, Litwin-Kumar A. Optimal routing to cerebellum-like structures. Nat Neurosci 2023; 26:1630-1641. [PMID: 37604889 PMCID: PMC10506727 DOI: 10.1038/s41593-023-01403-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2022] [Accepted: 07/12/2023] [Indexed: 08/23/2023]
Abstract
The vast expansion from mossy fibers to cerebellar granule cells (GrC) produces a neural representation that supports functions including associative and internal model learning. This motif is shared by other cerebellum-like structures and has inspired numerous theoretical models. Less attention has been paid to structures immediately presynaptic to GrC layers, whose architecture can be described as a 'bottleneck' and whose function is not understood. We therefore develop a theory of cerebellum-like structures in conjunction with their afferent pathways that predicts the role of the pontine relay to cerebellum and the glomerular organization of the insect antennal lobe. We highlight a new computational distinction between clustered and distributed neuronal representations that is reflected in the anatomy of these two brain structures. Our theory also reconciles recent observations of correlated GrC activity with theories of nonlinear mixing. More generally, it shows that structured compression followed by random expansion is an efficient architecture for flexible computation.
Affiliation(s)
- Samuel P Muscinelli
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY, USA.
- Mark J Wagner
- National Institute of Neurological Disorders and Stroke, NIH, Bethesda, MD, USA
- Ashok Litwin-Kumar
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY, USA.
33
Jordan GA, Vishwanath A, Holguin G, Bartlett MJ, Tapia AK, Winter GM, Sexauer MR, Stopera CJ, Falk T, Cowen SL. Automated system for training and assessing string-pulling behaviors in rodents. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.07.02.547431. [PMID: 37461637 PMCID: PMC10349952 DOI: 10.1101/2023.07.02.547431] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 07/25/2023]
Abstract
String-pulling tasks have been used for centuries to study coordinated bimanual motor behavior and problem solving. String pulling is rapidly learned, ethologically grounded, and has been applied to many species and disease conditions. Typically, training of string-pulling behaviors is achieved through manual shaping and baiting. Furthermore, behavioral assessment of reaching, grasping, and pulling is often performed through labor-intensive manual video scoring. No system, to our knowledge, currently exists for the automated shaping and assessment of string-pulling behaviors. Here we describe the PANDA system (Pulling And Neural Data Analysis), an inexpensive hardware and software system that utilizes a continuous string loop connected to a rotary encoder, feeder, microcontroller, high-speed camera, and analysis software for assessment and training of string-pulling behaviors and synchronization with neural recording data. We demonstrate this system in unimplanted rats and rats implanted with electrodes in motor cortex and hippocampus and show how the PANDA system can be used to assess relationships between paw movements and single-unit and local-field activity. We also found that automating the shaping procedure significantly improved overall performance, with rats regularly pulling >100 meters during a 15-minute session. In conclusion, the PANDA system will be of general use to researchers investigating motor control, motivation, and motor disorders such as Parkinson's disease, Huntington's disease, and stroke. It will also support the investigation of neural mechanisms involved in sensorimotor integration.
Affiliation(s)
- Andrew K. Tapia
- Biomedical Engineering, University of Arizona, Tucson Arizona
- Torsten Falk
- Neurology, University of Arizona, Tucson Arizona
- Pharmacology, University of Arizona, Tucson Arizona
34
Athalye VR, Khanna P, Gowda S, Orsborn AL, Costa RM, Carmena JM. Invariant neural dynamics drive commands to control different movements. Curr Biol 2023; 33:2962-2976.e15. [PMID: 37402376 PMCID: PMC10527529 DOI: 10.1016/j.cub.2023.06.027] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2022] [Revised: 04/24/2023] [Accepted: 06/09/2023] [Indexed: 07/06/2023]
Abstract
It has been proposed that the nervous system has the capacity to generate a wide variety of movements because it reuses some invariant code. Previous work has identified that dynamics of neural population activity are similar during different movements, where dynamics refer to how the instantaneous spatial pattern of population activity changes in time. Here, we test whether invariant dynamics of neural populations are actually used to issue the commands that direct movement. Using a brain-machine interface (BMI) that transforms rhesus macaques' motor-cortex activity into commands for a neuroprosthetic cursor, we discovered that the same command is issued with different neural-activity patterns in different movements. However, these different patterns were predictable, as we found that the transitions between activity patterns are governed by the same dynamics across movements. These invariant dynamics are low dimensional, and critically, they align with the BMI, so that they predict the specific component of neural activity that actually issues the next command. We introduce a model of optimal feedback control (OFC) that shows that invariant dynamics can help transform movement feedback into commands, reducing the input that the neural population needs to control movement. Altogether our results demonstrate that invariant dynamics drive commands to control a variety of movements and show how feedback can be integrated with invariant dynamics to issue generalizable commands.
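The core claim that one dynamics matrix can govern state transitions across different movements can be sketched in numpy (a noise-free linear toy of my own construction, not the paper's BMI analysis): fit x[t+1] ≈ A x[t] from two movements, then predict a third, unseen movement.

```python
import numpy as np

# Shared ("invariant") linear dynamics: a slowly decaying rotation
angle = 0.3
A = 0.98 * np.array([[np.cos(angle), -np.sin(angle)],
                     [np.sin(angle),  np.cos(angle)]])

def trajectory(x0, T=50):
    xs = [x0]
    for _ in range(T - 1):
        xs.append(A @ xs[-1])
    return np.array(xs)

# Two "movements" = two different initial states, same underlying dynamics
traj1 = trajectory(np.array([1.0, 0.0]))
traj2 = trajectory(np.array([-0.5, 2.0]))

# Fit one dynamics matrix from both movements: x[t+1] ~ A_hat x[t]
X_past = np.vstack([traj1[:-1], traj2[:-1]])
X_next = np.vstack([traj1[1:], traj2[1:]])
B, *_ = np.linalg.lstsq(X_past, X_next, rcond=None)
A_hat = B.T

# The fitted dynamics generalize to a third, unseen movement
traj3 = trajectory(np.array([0.3, -1.2]))
pred = (A_hat @ traj3[:-1].T).T
err = np.abs(pred - traj3[1:]).max()
print(err < 1e-6)  # True: one matrix predicts transitions for all movements
```

In this toy the different movements occupy different activity patterns (different initial states), yet a single transition rule explains them all, which is the sense of "invariant dynamics" in the abstract.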
Affiliation(s)
- Vivek R Athalye
- Zuckerman Mind Brain Behavior Institute, Departments of Neuroscience and Neurology, Columbia University, New York, NY 10027, USA.
- Preeya Khanna
- Department of Neurology, University of California, San Francisco, San Francisco, CA 94158, USA.
- Suraj Gowda
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA 94720, USA
- Amy L Orsborn
- Departments of Bioengineering, Electrical and Computer Engineering, University of Washington, Seattle, Seattle, WA 98195, USA
- Rui M Costa
- Zuckerman Mind Brain Behavior Institute, Departments of Neuroscience and Neurology, Columbia University, New York, NY 10027, USA.
- Jose M Carmena
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA 94720, USA; Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA; UC Berkeley-UCSF Joint Graduate Program in Bioengineering, University of California, Berkeley, Berkeley, CA 94720, USA.
35
Mitskopoulos L, Onken A. Discovering Low-Dimensional Descriptions of Multineuronal Dependencies. ENTROPY (BASEL, SWITZERLAND) 2023; 25:1026. [PMID: 37509973 PMCID: PMC10378554 DOI: 10.3390/e25071026] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/28/2023] [Revised: 06/12/2023] [Accepted: 07/04/2023] [Indexed: 07/30/2023]
Abstract
Coordinated activity in neural populations is crucial for information processing. Shedding light on the multivariate dependencies that shape multineuronal responses is important to understand neural codes. However, existing approaches based on pairwise linear correlations are inadequate at capturing complicated interaction patterns and miss features that shape aspects of the population function. Copula-based approaches address these shortcomings by extracting the dependence structures in the joint probability distribution of population responses. In this study, we aimed to dissect neural dependencies with a C-Vine copula approach coupled with normalizing flows for estimating copula densities. While this approach allows for more flexibility compared to fitting parametric copulas, drawing insights on the significance of these dependencies from large sets of copula densities is challenging. To alleviate this challenge, we used a weighted non-negative matrix factorization procedure to leverage shared latent features in neural population dependencies. We validated the method on simulated data and applied it on copulas we extracted from recordings of neurons in the mouse visual cortex as well as in the macaque motor cortex. Our findings reveal that neural dependencies occupy low-dimensional subspaces, but distinct modules are synergistically combined to give rise to diverse interaction patterns that may serve the population function.
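The factorization step in this pipeline can be illustrated with a small numpy implementation of standard (unweighted) non-negative matrix factorization via Lee-Seung multiplicative updates; the synthetic data here stand in for the extracted copula-density features and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, r = 40, 60, 3

# Synthetic non-negative data built from r latent "dependence modules"
V = rng.random((m, r)) @ rng.random((r, n))

# Lee-Seung multiplicative updates for || V - W H ||_F^2
W = rng.random((m, r))
H = rng.random((r, n))
eps = 1e-12   # guards against division by zero
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(round(rel_err, 4))  # small: r modules reconstruct the data well
```

The multiplicative form keeps `W` and `H` non-negative throughout, which is what lets the recovered modules be read as additive, interpretable parts.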
Affiliation(s)
- Arno Onken
- School of Informatics, University of Edinburgh, Edinburgh EH8 9AB, UK
36
Price BH, Jensen CM, Khoudary AA, Gavornik JP. Expectation violations produce error signals in mouse V1. Cereb Cortex 2023; 33:8803-8820. [PMID: 37183176 PMCID: PMC10321125 DOI: 10.1093/cercor/bhad163] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2023] [Revised: 04/22/2023] [Accepted: 04/25/2023] [Indexed: 05/16/2023] Open
Abstract
Repeated exposure to visual sequences changes the form of evoked activity in the primary visual cortex (V1). Predictive coding theory provides a potential explanation for this, namely that plasticity shapes cortical circuits to encode spatiotemporal predictions and that subsequent responses are modulated by the degree to which actual inputs match these expectations. Here we use a recently developed statistical modeling technique called Model-Based Targeted Dimensionality Reduction (MbTDR) to study visually evoked dynamics in mouse V1 in the context of an experimental paradigm called "sequence learning." We report that evoked spiking activity changed significantly with training, in a manner generally consistent with the predictive coding framework. Neural responses to expected stimuli were suppressed in a late window (100-150 ms) after stimulus onset following training, whereas responses to novel stimuli were not. Substituting a novel stimulus for a familiar one led to increases in firing that persisted for at least 300 ms. Omitting predictable stimuli in trained animals also led to increased firing at the expected time of stimulus onset. Finally, we show that spiking data can be used to accurately decode time within the sequence. Our findings are consistent with the idea that plasticity in early visual circuits is involved in coding spatiotemporal information.
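Decoding "time within the sequence" from spiking, as in the final result above, can be sketched with a nearest-centroid decoder on simulated Poisson counts (all parameters here are invented for illustration and do not reflect the paper's MbTDR analysis):

```python
import numpy as np

rng = np.random.default_rng(6)
n_neurons, n_bins, n_trials = 30, 8, 40

# Each neuron fires preferentially at a different time bin within the sequence
pref = rng.integers(0, n_bins, n_neurons)
rates = 2.0 + 20.0 * np.exp(-0.5 * (np.arange(n_bins)[:, None] - pref[None, :]) ** 2)

# Poisson spike counts: trials x bins x neurons
train = rng.poisson(rates[None, :, :], size=(n_trials, n_bins, n_neurons))
test = rng.poisson(rates[None, :, :], size=(n_trials, n_bins, n_neurons))

# Nearest-centroid decoder: assign each test vector to the closest bin template
centroids = train.mean(axis=0)                                     # bins x neurons
dists = ((test[:, :, None, :] - centroids[None, None, :, :]) ** 2).sum(-1)
decoded = dists.argmin(-1)                                         # trials x bins
accuracy = (decoded == np.arange(n_bins)[None, :]).mean()
print(round(accuracy, 2))  # well above the 1/8 chance level
```

Time is decodable here because the population response pattern differs systematically across bins, the same property the abstract reports for trained V1.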
Affiliation(s)
- Byron H Price
- Center for Systems Neuroscience, Department of Biology, Boston University, Boston, MA 02215, USA
- Graduate Program in Neuroscience, Boston University, Boston, MA 02215, USA
- Cambria M Jensen
- Center for Systems Neuroscience, Department of Biology, Boston University, Boston, MA 02215, USA
- Anthony A Khoudary
- Center for Systems Neuroscience, Department of Biology, Boston University, Boston, MA 02215, USA
- Jeffrey P Gavornik
- Center for Systems Neuroscience, Department of Biology, Boston University, Boston, MA 02215, USA
- Graduate Program in Neuroscience, Boston University, Boston, MA 02215, USA
37
Li X, Wang S. Toward a computational theory of manifold untangling: from global embedding to local flattening. Front Comput Neurosci 2023; 17:1197031. [PMID: 37324172 PMCID: PMC10264604 DOI: 10.3389/fncom.2023.1197031] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2023] [Accepted: 05/11/2023] [Indexed: 06/17/2023] Open
Abstract
It has been hypothesized that the ventral stream processing for object recognition is based on a mechanism called cortically local subspace untangling. A mathematical abstraction of object recognition by the visual cortex is how to untangle the manifolds associated with different object categories. Such a manifold untangling problem is closely related to the celebrated kernel trick in metric space. In this paper, we conjecture that there is a more general solution to manifold untangling in the topological space without artificially defining any distance metric. Geometrically, we can either embed a manifold in a higher-dimensional space to promote selectivity or flatten a manifold to promote tolerance. General strategies of both global manifold embedding and local manifold flattening are presented and connected with existing work on the untangling of image, audio, and language data. We also discuss the implications of manifold untangling for motor control and internal representations.
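The embedding strategy can be demonstrated in a few lines of numpy (a textbook toy, not the paper's method): two classes tangled as concentric rings in 2-D become linearly separable after adding the squared radius as a third coordinate, which is the geometric picture behind the kernel trick the abstract invokes.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200

# Two classes tangled in 2-D: concentric rings (not linearly separable)
angles = rng.uniform(0, 2 * np.pi, n)
radii = np.where(np.arange(n) < n // 2, 1.0, 2.0) + 0.05 * rng.standard_normal(n)
X = np.c_[radii * np.cos(angles), radii * np.sin(angles)]
y = (np.arange(n) < n // 2).astype(int)   # 1 = inner ring, 0 = outer ring

# In 2-D, any linear projection mixes the two classes
w = rng.standard_normal(2)
proj = X @ w
overlap_2d = (proj[y == 1].max() > proj[y == 0].min()) and \
             (proj[y == 0].max() > proj[y == 1].min())

# Embed in 3-D with the squared radius as an extra coordinate: a plane now separates
Z = np.c_[X, (X ** 2).sum(axis=1)]
threshold = 0.5 * (Z[y == 1, 2].max() + Z[y == 0, 2].min())
separable_3d = (Z[y == 1, 2] < threshold).all() and (Z[y == 0, 2] > threshold).all()

print(overlap_2d, separable_3d)  # True True
```

Embedding "untangles" by adding dimensions that promote selectivity; flattening, the complementary strategy in the abstract, instead removes curvature to promote tolerance.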
Affiliation(s)
- Xin Li
- Lane Department of Computer Science and Electrical Engineering (CSEE), West Virginia University, Morgantown, WV, United States
- Shuo Wang
- Department of Radiology, Washington University at St. Louis, St. Louis, MO, United States
| |
Collapse
|
38
|
Bachschmid-Romano L, Hatsopoulos NG, Brunel N. Interplay between external inputs and recurrent dynamics during movement preparation and execution in a network model of motor cortex. eLife 2023; 12:77690. [PMID: 37166452 PMCID: PMC10174693 DOI: 10.7554/elife.77690] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Accepted: 03/09/2023] [Indexed: 05/12/2023] Open
Abstract
The primary motor cortex has been shown to coordinate movement preparation and execution through computations in approximately orthogonal subspaces. The underlying network mechanisms, and the roles played by external and recurrent connectivity, are central open questions that need to be answered to understand the neural substrates of motor control. We develop a recurrent neural network model that recapitulates the temporal evolution of neuronal activity recorded from the primary motor cortex of a macaque monkey during an instructed delayed-reach task. In particular, it reproduces the observed dynamic patterns of covariation between neural activity and the direction of motion. We explore the hypothesis that the observed dynamics emerges from a synaptic connectivity structure that depends on the preferred directions of neurons in both preparatory and movement-related epochs, and we constrain the strength of both synaptic connectivity and external input parameters from data. While the model can reproduce neural activity for multiple combinations of the feedforward and recurrent connections, the solution that requires minimum external inputs is one where the observed patterns of covariance are shaped by external inputs during movement preparation, while they are dominated by strong direction-specific recurrent connectivity during movement execution. Our model also demonstrates that the way in which single-neuron tuning properties change over time can explain the level of orthogonality of preparatory and movement-related subspaces.
Affiliation(s)
- Nicholas G Hatsopoulos
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, United States
- Committee on Computational Neuroscience, University of Chicago, Chicago, United States
- Nicolas Brunel
- Department of Neurobiology, Duke University, Durham, United States
- Department of Physics, Duke University, Durham, United States
- Duke Institute for Brain Sciences, Duke University, Durham, United States
- Center for Cognitive Neuroscience, Duke University, Durham, United States
39
Abbaspourazad H, Erturk E, Pesaran B, Shanechi MM. Dynamical flexible inference of nonlinear latent structures in neural population activity. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.03.13.532479. [PMID: 36993605 PMCID: PMC10054986 DOI: 10.1101/2023.03.13.532479] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Inferring complex spatiotemporal dynamics in neural population activity is critical for investigating neural mechanisms and developing neurotechnology. These activity patterns are noisy observations of lower-dimensional latent factors and their nonlinear dynamical structure. A major unaddressed challenge is to model this nonlinear structure, but in a manner that allows for flexible inference, whether causally, non-causally, or in the presence of missing neural observations. We address this challenge by developing DFINE, a new neural network that separates the model into dynamic and manifold latent factors, such that the dynamics can be modeled in tractable form. We show that DFINE achieves flexible nonlinear inference across diverse behaviors and brain regions. Further, despite enabling flexible inference unlike prior neural network models of population activity, DFINE also better predicts the behavior and neural activity, and better captures the latent neural manifold structure. DFINE can both enhance future neurotechnology and facilitate investigations across diverse domains of neuroscience.
40
DePasquale B, Sussillo D, Abbott LF, Churchland MM. The centrality of population-level factors to network computation is demonstrated by a versatile approach for training spiking networks. Neuron 2023; 111:631-649.e10. [PMID: 36630961 PMCID: PMC10118067 DOI: 10.1016/j.neuron.2022.12.007] [Citation(s) in RCA: 12] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2020] [Revised: 06/17/2022] [Accepted: 12/05/2022] [Indexed: 01/12/2023]
Abstract
Neural activity is often described in terms of population-level factors extracted from the responses of many neurons. Factors provide a lower-dimensional description with the aim of shedding light on network computations. Yet, mechanistically, computations are performed not by continuously valued factors but by interactions among neurons that spike discretely and variably. Models provide a means of bridging these levels of description. We developed a general method for training model networks of spiking neurons by leveraging factors extracted from either data or firing-rate-based networks. In addition to providing a useful model-building framework, this formalism illustrates how reliable and continuously valued factors can arise from seemingly stochastic spiking. Our framework establishes procedures for embedding this property in network models with different levels of realism. The relationship between spikes and factors in such networks provides a foundation for interpreting (and subtly redefining) commonly used quantities such as firing rates.
Affiliation(s)
- Brian DePasquale
- Princeton Neuroscience Institute, Princeton University, Princeton NJ, USA; Department of Neuroscience, Columbia University, New York, NY, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY, USA.
- David Sussillo
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- L F Abbott
- Department of Neuroscience, Columbia University, New York, NY, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Department of Physiology and Cellular Biophysics, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA
- Mark M Churchland
- Department of Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
41
Dekleva BM, Chowdhury RH, Batista AP, Chase SM, Yu BM, Boninger ML, Collinger JL. Motor cortex retains and reorients neural dynamics during motor imagery. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.01.17.524394. [PMID: 36711675 PMCID: PMC9882181 DOI: 10.1101/2023.01.17.524394] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/21/2023]
Abstract
The most prominent role of motor cortex is generating patterns of neural activity that lead to movement, but it is also active when we simply imagine movements in the absence of actual motor output. Despite decades of behavioral and imaging studies, it is unknown how the specific activity patterns and temporal dynamics within motor cortex during covert motor imagery relate to those during motor execution. Here we recorded intracortical activity from the motor cortex of two people with residual wrist function following incomplete spinal cord injury as they performed both actual and imagined isometric wrist extensions. We found that we could decompose the population-level activity into orthogonal subspaces such that one set of components was similarly active during both action and imagery, and others were only active during a single task type (action or imagery). Although they inhabited orthogonal neural dimensions, the action-unique and imagery-unique subspaces contained a strikingly similar set of dynamical features. Our results suggest that during motor imagery, motor cortex maintains the same overall population dynamics as during execution by recreating the missing components related to motor output and/or feedback within a unique imagery-only subspace.
42
Neural manifold analysis of brain circuit dynamics in health and disease. J Comput Neurosci 2023; 51:1-21. [PMID: 36522604] [PMCID: PMC9840597] [DOI: 10.1007/s10827-022-00839-3]
Abstract
Recent developments in experimental neuroscience make it possible to simultaneously record the activity of thousands of neurons. However, the development of analysis approaches for such large-scale neural recordings has been slower than that of approaches applicable to single-cell experiments. One approach that has gained recent popularity is neural manifold learning. This approach takes advantage of the fact that often, even though neural datasets may be very high dimensional, the dynamics of neural activity tends to traverse a much lower-dimensional space. The topological structures formed by these low-dimensional neural subspaces are referred to as "neural manifolds", and may potentially provide insight linking neural circuit dynamics with cognitive function and behavioral performance. In this paper we review a number of linear and non-linear approaches to neural manifold learning, including principal component analysis (PCA), multi-dimensional scaling (MDS), Isomap, locally linear embedding (LLE), Laplacian eigenmaps (LEM), t-SNE, and uniform manifold approximation and projection (UMAP). We outline these methods under a common mathematical nomenclature, and compare their advantages and disadvantages with respect to their use for neural data analysis. We apply them to a number of datasets from published literature, comparing the manifolds that result from their application to hippocampal place cells, motor cortical neurons during a reaching task, and prefrontal cortical neurons during a multi-behavior task. We find that in many circumstances linear algorithms produce similar results to non-linear methods, although in particular cases where the behavioral complexity is greater, non-linear methods tend to find lower-dimensional manifolds, at the possible expense of interpretability.
We demonstrate that these methods are applicable to the study of neurological disorders through simulation of a mouse model of Alzheimer's Disease, and speculate that neural manifold analysis may help us to understand the circuit-level consequences of molecular and cellular neuropathology.
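The review's central premise, that high-dimensional recordings often trace a much lower-dimensional space, can be sketched with the simplest of the listed methods, PCA. In this hedged toy example the latent dimensionality, neuron count, and noise level are invented for illustration:

```python
import numpy as np

# Activity of 100 "neurons" secretly generated from a 2-D latent trajectory.
rng = np.random.default_rng(1)
latent = rng.standard_normal((500, 2))            # hidden 2-D dynamics
mixing = rng.standard_normal((2, 100))            # embedding into 100 neurons
activity = latent @ mixing + 0.01 * rng.standard_normal((500, 100))

# Linear manifold estimate via PCA: variance captured by the top 2 components.
centered = activity - activity.mean(axis=0)
eigvals = np.linalg.svd(centered, compute_uv=False) ** 2
explained = eigvals[:2].sum() / eigvals.sum()
print(f"variance explained by a 2-D linear manifold: {explained:.3f}")
```

Here a 2-D linear manifold captures nearly all the variance; the review's nonlinear methods (Isomap, LLE, t-SNE, UMAP) become important when the latent structure is curved rather than planar.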
43
Brennan C, Aggarwal A, Pei R, Sussillo D, Proekt A. One dimensional approximations of neuronal dynamics reveal computational strategy. PLoS Comput Biol 2023; 19:e1010784. [PMID: 36607933] [PMCID: PMC9821456] [DOI: 10.1371/journal.pcbi.1010784]
Abstract
The relationship between neuronal activity and computations embodied by it remains an open question. We develop a novel methodology that condenses observed neuronal activity into a quantitatively accurate, simple, and interpretable model and validate it on diverse systems and scales from single neurons in C. elegans to fMRI in humans. The model treats neuronal activity as collections of interlocking 1-dimensional trajectories. Despite their simplicity, these models accurately predict future neuronal activity and future decisions made by human participants. Moreover, the structure formed by interconnected trajectories (a scaffold) is closely related to the computational strategy of the system. We use these scaffolds to compare the computational strategy of primates and artificial systems trained on the same task to identify specific conditions under which the artificial agent learns the same strategy as the primate. The computational strategy extracted using our methodology predicts specific errors on novel stimuli. These results show that our methodology is a powerful tool for studying the relationship between computation and neuronal activity across diverse systems.
Affiliation(s)
- Connor Brennan
- Department of Neuroscience, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Adeeti Aggarwal
- Department of Neuroscience, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Rui Pei
- Department of Psychology, Stanford University, Palo Alto, California, United States of America
- David Sussillo
- Stanford Neurosciences Institute, Stanford University, Palo Alto, California, United States of America
- Department of Electrical Engineering, Stanford University, Palo Alto, California, United States of America
- Alex Proekt
- Department of Anesthesiology and Critical Care, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
44
Xu W, De Carvalho F, Jackson A. Conserved Population Dynamics in the Cerebro-Cerebellar System between Waking and Sleep. J Neurosci 2022; 42:9415-9425. [PMID: 36384678] [PMCID: PMC9794372] [DOI: 10.1523/jneurosci.0807-22.2022]
Abstract
Despite the importance of the cerebellum for motor learning, and the recognized role of sleep in motor memory consolidation, surprisingly little is known about neural activity in the sleeping cerebro-cerebellar system. Here, we used wireless recording from primary motor cortex (M1) and the cerebellum in three female monkeys to examine the relationship between patterns of single-unit spiking activity observed during waking behavior and in natural sleep. Across the population of recorded units, we observed similarities in the timing of firing relative to local field potential features associated with both movements during waking and up states during sleep. We also observed a consistent pattern of asymmetry in pairwise cross-correlograms, indicative of preserved sequential firing in both wake and sleep at low frequencies. Despite the overall similarity in population dynamics between wake and sleep, there was a global change in the timing of cerebellar activity relative to motor cortex, from contemporaneous in the awake state to motor cortex preceding the cerebellum in sleep. We speculate that similar population dynamics in waking and sleep may imply that cerebellar internal models are activated in both states, despite the absence of movement when asleep. Moreover, spindle frequency coherence between the cerebellum and motor cortex may provide a mechanism for cerebellar computations to influence sleep-dependent learning processes in the motor cortex.
SIGNIFICANCE STATEMENT: It is well known that sleep can lead to improved motor performance. One possibility is that off-line learning results from neural activity during sleep in brain areas responsible for the control of movement. In this study we show for the first time that neuronal patterns in the cerebro-cerebellar system are conserved during both movements and sleep up-states, albeit with a shift in the relative timing between areas.
Additionally, we show the presence of simultaneous M1-cerebellar spike coherence at spindle frequencies associated with up-state replay and postulate that this is a mechanism whereby a cerebellar internal model can shape plasticity in neocortical circuits during sleep.
Affiliation(s)
- Wei Xu
- Center for Discovery Brain Sciences, Edinburgh University, Edinburgh EH16 4SB, United Kingdom
- Felipe De Carvalho
- Biosciences Institute, Newcastle University, Newcastle upon Tyne NE2 4HH, United Kingdom
- Andrew Jackson
- Biosciences Institute, Newcastle University, Newcastle upon Tyne NE2 4HH, United Kingdom
45
Melbaum S, Russo E, Eriksson D, Schneider A, Durstewitz D, Brox T, Diester I. Conserved structures of neural activity in sensorimotor cortex of freely moving rats allow cross-subject decoding. Nat Commun 2022; 13:7420. [PMID: 36456557] [PMCID: PMC9715555] [DOI: 10.1038/s41467-022-35115-6]
Abstract
Our knowledge about neuronal activity in the sensorimotor cortex relies primarily on stereotyped movements that are strictly controlled in experimental settings. It remains unclear how results can be carried over to less constrained behavior like that of freely moving subjects. Toward this goal, we developed a self-paced behavioral paradigm that encouraged rats to engage in different movement types. We employed bilateral electrophysiological recordings across the entire sensorimotor cortex and simultaneous paw tracking. These techniques revealed behavioral coupling of neurons with lateralization and an anterior-posterior gradient from the premotor to the primary sensory cortex. The structure of population activity patterns was conserved across animals despite the severe under-sampling of the total number of neurons and variations in electrode positions across individuals. We demonstrated cross-subject and cross-session generalization in a decoding task through alignments of low-dimensional neural manifolds, providing evidence of a conserved neuronal code.
Affiliation(s)
- Svenja Melbaum
- Computer Vision Group, Department of Computer Science, University of Freiburg, 79110 Freiburg, Germany
- IMBIT//BrainLinks-BrainTools, University of Freiburg, Georges-Köhler-Allee 201, 79110 Freiburg, Germany
- Eleonora Russo
- Department of Psychiatry and Psychotherapy, University Medical Center, Johannes Gutenberg University, 55131 Mainz, Germany
- Department of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, 68159 Mannheim, Germany
- David Eriksson
- IMBIT//BrainLinks-BrainTools, University of Freiburg, Georges-Köhler-Allee 201, 79110 Freiburg, Germany
- Optophysiology Lab, Faculty of Biology, University of Freiburg, 79110 Freiburg, Germany
- Artur Schneider
- IMBIT//BrainLinks-BrainTools, University of Freiburg, Georges-Köhler-Allee 201, 79110 Freiburg, Germany
- Optophysiology Lab, Faculty of Biology, University of Freiburg, 79110 Freiburg, Germany
- Daniel Durstewitz
- Department of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, 68159 Mannheim, Germany
- Thomas Brox
- Computer Vision Group, Department of Computer Science, University of Freiburg, 79110 Freiburg, Germany
- IMBIT//BrainLinks-BrainTools, University of Freiburg, Georges-Köhler-Allee 201, 79110 Freiburg, Germany
- Ilka Diester
- IMBIT//BrainLinks-BrainTools, University of Freiburg, Georges-Köhler-Allee 201, 79110 Freiburg, Germany
- Optophysiology Lab, Faculty of Biology, University of Freiburg, 79110 Freiburg, Germany
- Bernstein Center Freiburg, University of Freiburg, 79104 Freiburg, Germany
46
Marshall NJ, Glaser JI, Trautmann EM, Amematsro EA, Perkins SM, Shadlen MN, Abbott LF, Cunningham JP, Churchland MM. Flexible neural control of motor units. Nat Neurosci 2022; 25:1492-1504. [PMID: 36216998] [PMCID: PMC9633430] [DOI: 10.1038/s41593-022-01165-8]
Abstract
Voluntary movement requires communication from cortex to the spinal cord, where a dedicated pool of motor units (MUs) activates each muscle. The canonical description of MU function rests upon two foundational tenets. First, cortex cannot control MUs independently but supplies each pool with a common drive. Second, MUs are recruited in a rigid fashion that largely accords with Henneman's size principle. Although this paradigm has considerable empirical support, a direct test requires simultaneous observations of many MUs across diverse force profiles. In this study, we developed an isometric task that allowed stable MU recordings, in a rhesus macaque, even during rapidly changing forces. Patterns of MU activity were surprisingly behavior-dependent and could be accurately described only by assuming multiple drives. Consistent with flexible descending control, microstimulation of neighboring cortical sites recruited different MUs. Furthermore, the cortical population response displayed sufficient degrees of freedom to potentially exert fine-grained control. Thus, MU activity is flexibly controlled to meet task demands, and cortex may contribute to this ability.
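The abstract's core claim, that motor unit activity could only be described by assuming multiple drives, has a simple rank-based intuition that can be caricatured in a toy test (an assumed setup, not the study's analysis; the drive signals, gains, and unit counts below are invented):

```python
import numpy as np

# If all motor units (MUs) shared one common drive, the MU activity matrix
# would be roughly rank 1; a second independent drive raises the rank.
rng = np.random.default_rng(4)
drive1 = np.abs(rng.standard_normal(500))          # common excitatory drive
drive2 = np.abs(rng.standard_normal(500))          # hypothetical second drive
gains1 = rng.uniform(0.2, 1.0, 12)                 # per-MU gain for drive 1
gains2 = rng.uniform(0.2, 1.0, 12)                 # per-MU gain for drive 2

one_drive = np.outer(drive1, gains1)               # canonical common-drive model
two_drives = np.outer(drive1, gains1) + np.outer(drive2, gains2)

def effective_rank(x, tol=1e-6):
    """Number of singular values above tol relative to the largest."""
    s = np.linalg.svd(x, compute_uv=False)
    return int((s > tol * s[0]).sum())

print(effective_rank(one_drive), effective_rank(two_drives))
```

A common-drive pool yields a rank-1 activity matrix regardless of the force profile; observing higher effective rank across behaviors is the kind of evidence that motivates a multiple-drive description.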
Affiliation(s)
- Najja J Marshall
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Joshua I Glaser
- Zuckerman Institute, Columbia University, New York, NY, USA
- Department of Statistics, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University Medical Center, New York, NY, USA
- Eric M Trautmann
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Elom A Amematsro
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Sean M Perkins
- Zuckerman Institute, Columbia University, New York, NY, USA
- Department of Biomedical Engineering, Columbia University, New York, NY, USA
- Michael N Shadlen
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Kavli Institute for Brain Science, Columbia University Medical Center, New York, NY, USA
- Howard Hughes Medical Institute, Columbia University, New York, NY, USA
- L F Abbott
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University Medical Center, New York, NY, USA
- Kavli Institute for Brain Science, Columbia University Medical Center, New York, NY, USA
- Department of Physiology and Cellular Biophysics, Columbia University Medical Center, New York, NY, USA
- John P Cunningham
- Zuckerman Institute, Columbia University, New York, NY, USA
- Department of Statistics, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University Medical Center, New York, NY, USA
- Mark M Churchland
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA.
- Zuckerman Institute, Columbia University, New York, NY, USA.
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA.
- Kavli Institute for Brain Science, Columbia University Medical Center, New York, NY, USA.
47
Movement is governed by rotational neural dynamics in spinal motor networks. Nature 2022; 610:526-531. [PMID: 36224394] [DOI: 10.1038/s41586-022-05293-w]
Abstract
Although the generation of movements is a fundamental function of the nervous system, the underlying neural principles remain unclear. As flexor and extensor muscle activities alternate during rhythmic movements such as walking, it is often assumed that the responsible neural circuitry is similarly exhibiting alternating activity [1]. Here we present ensemble recordings of neurons in the lumbar spinal cord that indicate that, rather than alternating, the population is performing a low-dimensional 'rotation' in neural space, in which the neural activity is cycling through all phases continuously during the rhythmic behaviour. The radius of rotation correlates with the intended muscle force, and a perturbation of the low-dimensional trajectory can modify the motor behaviour. As existing models of spinal motor control do not offer an adequate explanation of rotation [1,2], we propose a theory of neural generation of movements from which this and other unresolved issues, such as speed regulation, force control and multifunctionalism, are readily explained.
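The rotational picture described above, including the radius scaling with drive, can be reproduced in a toy population whose units are tuned to different phases of a rhythmic cycle (a hedged sketch with all parameters assumed; "drive" stands in for the intended muscle force):

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 4 * np.pi, 400)          # two rhythmic cycles
phases = rng.uniform(0, 2 * np.pi, 30)      # one preferred phase per neuron

def rotation_radius(drive):
    """Mean distance from the origin in the top-2 PC plane."""
    x = drive * np.cos(t[:, None] - phases[None, :])   # phase-tuned population
    x = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    z = x @ vt[:2].T                        # project onto the rotational plane
    return np.linalg.norm(z, axis=1).mean()

r1, r2 = rotation_radius(1.0), rotation_radius(2.0)
print(f"rotation radius at drive 1: {r1:.2f}, at drive 2: {r2:.2f}")
```

A population tiling all phases traces a closed loop in the top-2 PC plane rather than a back-and-forth alternation, and in this linear toy model doubling the drive exactly doubles the rotation radius.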
48
Rajalingham R, Piccato A, Jazayeri M. Recurrent neural networks with explicit representation of dynamic latent variables can mimic behavioral patterns in a physical inference task. Nat Commun 2022; 13:5865. [PMID: 36195614] [PMCID: PMC9532407] [DOI: 10.1038/s41467-022-33581-6]
Abstract
Primates can richly parse sensory inputs to infer latent information. This ability is hypothesized to rely on establishing mental models of the external world and running mental simulations of those models. However, evidence supporting this hypothesis is limited to behavioral models that do not emulate neural computations. Here, we test this hypothesis by directly comparing the behavior of primates (humans and monkeys) in a ball interception task to that of a large set of recurrent neural network (RNN) models with or without the capacity to dynamically track the underlying latent variables. Humans and monkeys exhibit similar behavioral patterns. This primate behavioral pattern is best captured by RNNs endowed with dynamic inference, consistent with the hypothesis that the primate brain uses dynamic inferences to support flexible physical predictions. Moreover, our work highlights a general strategy for using model neural systems to test computational hypotheses of higher brain function.
Affiliation(s)
- Rishi Rajalingham
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Building 46, 43 Vassar St., Cambridge, MA, 02139, USA
- Aída Piccato
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Building 46, 43 Vassar St., Cambridge, MA, 02139, USA
- Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Building 46, 43 Vassar St., Cambridge, MA, 02139-4307, USA
- Mehrdad Jazayeri
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Building 46, 43 Vassar St., Cambridge, MA, 02139, USA
- Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Building 46, 43 Vassar St., Cambridge, MA, 02139-4307, USA
49
Qiao H, Chen J, Huang X. A Survey of Brain-Inspired Intelligent Robots: Integration of Vision, Decision, Motion Control, and Musculoskeletal Systems. IEEE Trans Cybern 2022; 52:11267-11280. [PMID: 33909584] [DOI: 10.1109/tcyb.2021.3071312]
Abstract
Current robotic studies are focused on the performance of specific tasks. However, such tasks cannot be generalized, and some special tasks, such as compliant and precise manipulation, fast and flexible response, and deep collaboration between humans and robots, cannot be realized. Brain-inspired intelligent robots imitate humans and animals, from inner mechanisms to external structures, through an integration of visual cognition, decision making, motion control, and musculoskeletal systems. This kind of robot is more likely to realize the functions that current robots cannot and to become a partner to humans. With the focus on the development of brain-inspired intelligent robots, this article reviews cutting-edge research in the areas of brain-inspired visual cognition, decision making, musculoskeletal robots, motion control, and their integration. It aims to provide greater insight into brain-inspired intelligent robots and to attract more attention to this field from the global research community.
50
Kadmon Harpaz N, Hardcastle K, Ölveczky BP. Learning-induced changes in the neural circuits underlying motor sequence execution. Curr Opin Neurobiol 2022; 76:102624. [PMID: 36030613] [PMCID: PMC11125547] [DOI: 10.1016/j.conb.2022.102624]
Abstract
As the old adage goes: practice makes perfect. Yet, the neural mechanisms by which rote repetition transforms a halting behavior into a fluid, effortless, and "automatic" action are not well understood. Here we consider the possibility that well-practiced motor sequences, which initially rely on higher-level decision-making circuits, become wholly specified in lower-level control circuits. We review studies informing this idea, discuss the constraints on such shift in control, and suggest approaches to pinpoint circuit-level changes associated with motor sequence learning.
Affiliation(s)
- Naama Kadmon Harpaz
- Department of Organismic and Evolutionary Biology and Center for Brain Science, Harvard University
- Kiah Hardcastle
- Department of Organismic and Evolutionary Biology and Center for Brain Science, Harvard University
- Bence P Ölveczky
- Department of Organismic and Evolutionary Biology and Center for Brain Science, Harvard University