51. Vahidi P, Sani OG, Shanechi MM. Modeling and dissociation of intrinsic and input-driven neural population dynamics underlying behavior. Proc Natl Acad Sci U S A 2024; 121:e2212887121. PMID: 38335258; PMCID: PMC10873612; DOI: 10.1073/pnas.2212887121.
Abstract
Neural dynamics can reflect intrinsic dynamics or dynamic inputs, such as sensory inputs or inputs from other brain regions. To avoid misinterpreting temporally structured inputs as intrinsic dynamics, dynamical models of neural activity should account for measured inputs. However, incorporating measured inputs remains elusive in joint dynamical modeling of neural-behavioral data, which is important for studying neural computations of behavior. We first show how training dynamical models of neural activity while considering behavior but not input or input but not behavior may lead to misinterpretations. We then develop an analytical learning method for linear dynamical models that simultaneously accounts for neural activity, behavior, and measured inputs. The method provides the capability to prioritize the learning of intrinsic behaviorally relevant neural dynamics and dissociate them from both other intrinsic dynamics and measured input dynamics. In data from a simulated brain with fixed intrinsic dynamics that performs different tasks, the method correctly finds the same intrinsic dynamics regardless of the task while other methods can be influenced by the task. In neural datasets from three subjects performing two different motor tasks with task instruction sensory inputs, the method reveals low-dimensional intrinsic neural dynamics that are missed by other methods and are more predictive of behavior and/or neural activity. The method also uniquely finds that the intrinsic behaviorally relevant neural dynamics are largely similar across the different subjects and tasks, whereas the overall neural dynamics are not. These input-driven dynamical models of neural-behavioral data can uncover intrinsic dynamics that may otherwise be missed.
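As a concrete illustration of the model class this abstract describes (a sketch only, not the authors' analytical learning method), the snippet below simulates a linear dynamical system whose latent state is driven by both intrinsic dynamics and a measured input, with separate neural and behavioral readouts; all dimensions and parameter values are arbitrary.

```python
import numpy as np

# Sketch of an input-driven linear dynamical model with neural and behavioral readouts.
# All dimensions and parameter values are arbitrary, chosen only for illustration.
rng = np.random.default_rng(0)
nx, nu, ny, nz, T = 4, 2, 30, 3, 1000

A = 0.95 * np.linalg.qr(rng.standard_normal((nx, nx)))[0]  # stable intrinsic dynamics
B = 0.5 * rng.standard_normal((nx, nu))                    # coupling to the measured input
C = rng.standard_normal((ny, nx))                          # neural readout
D = rng.standard_normal((nz, nx))                          # behavioral readout

u = rng.standard_normal((T, nu))                           # measured input (e.g., task cues)
x = np.zeros(nx)
neural, behavior = [], []
for t in range(T):
    x = A @ x + B @ u[t] + 0.1 * rng.standard_normal(nx)   # latent state update
    neural.append(C @ x + 0.5 * rng.standard_normal(ny))   # observed neural activity
    behavior.append(D @ x + 0.2 * rng.standard_normal(nz)) # observed behavior
neural, behavior = np.array(neural), np.array(behavior)
```

Fitting A, B, C and D jointly to neural activity, behavior and measured inputs is what allows intrinsic behaviorally relevant dynamics to be dissociated from input-driven ones.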
Affiliation(s)
- Parsa Vahidi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Omid G. Sani
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Maryam M. Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Neuroscience Graduate Program, University of Southern California, Los Angeles, CA 90089
- Thomas Lord Department of Computer Science and Alfred E. Mann Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089

52. Bush NE, Ramirez JM. Latent neural population dynamics underlying breathing, opioid-induced respiratory depression and gasping. Nat Neurosci 2024; 27:259-271. PMID: 38182835; PMCID: PMC10849970; DOI: 10.1038/s41593-023-01520-3.
Abstract
Breathing is vital and must be concurrently robust and flexible. This rhythmic behavior is generated and maintained within a rostrocaudally aligned set of medullary nuclei called the ventral respiratory column (VRC). The rhythmic properties of individual VRC nuclei are well known, yet technical challenges have limited the interrogation of the entire VRC population simultaneously. Here we characterize over 15,000 medullary units using high-density electrophysiology, opto-tagging and histological reconstruction. Population dynamics analysis reveals consistent rotational trajectories through a low-dimensional neural manifold. These rotations are robust and maintained even during opioid-induced respiratory depression. During severe hypoxia-induced gasping, the low-dimensional dynamics of the VRC reconfigure from rotational to all-or-none, ballistic efforts. Thus, latent dynamics provide a unifying lens onto the activities of large, heterogeneous populations of neurons involved in the simple, yet vital, behavior of breathing, and well describe how these populations respond to a variety of perturbations.
Affiliation(s)
- Nicholas Edward Bush
- Center for Integrative Brain Research, Seattle Children's Research Institute, Seattle, WA, USA
- Jan-Marino Ramirez
- Center for Integrative Brain Research, Seattle Children's Research Institute, Seattle, WA, USA
- Department of Pediatrics, University of Washington, Seattle, WA, USA
- Department of Neurological Surgery, University of Washington, Seattle, WA, USA

53. Weng G, Clark K, Akbarian A, Noudoost B, Nategh N. Time-varying generalized linear models: characterizing and decoding neuronal dynamics in higher visual areas. Front Comput Neurosci 2024; 18:1273053. PMID: 38348287; PMCID: PMC10859875; DOI: 10.3389/fncom.2024.1273053.
Abstract
To create a behaviorally relevant representation of the visual world, neurons in higher visual areas exhibit dynamic response changes to account for the time-varying interactions between external (e.g., visual input) and internal (e.g., reward value) factors. The resulting high-dimensional representational space poses challenges for precisely quantifying individual factors' contributions to the representation and readout of sensory information during a behavior. The widely used point process generalized linear model (GLM) approach provides a powerful framework for a quantitative description of neuronal processing as a function of various sensory and non-sensory inputs (encoding) as well as linking particular response components to particular behaviors (decoding), at the level of single trials and individual neurons. However, most existing variations of GLMs assume the neural systems to be time-invariant, making them inadequate for modeling nonstationary characteristics of neuronal sensitivity in higher visual areas. In this review, we summarize some of the existing GLM variations, with a focus on time-varying extensions. We highlight their applications to understanding neural representations in higher visual areas and decoding transient neuronal sensitivity as well as linking physiology to behavior through manipulation of model components. This time-varying class of statistical models provides valuable insights into the neural basis of various visual behaviors in higher visual areas and holds significant potential for uncovering the fundamental computational principles that govern neuronal processing underlying various behaviors in different regions of the brain.
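As a toy illustration of the time-varying GLM idea reviewed here (my simplification, not any of the reviewed implementations), the sketch below lets a Poisson stimulus kernel change across time bins by fitting an independent Poisson regression per bin.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

# Toy time-varying Poisson GLM: the stimulus kernel is allowed to change over the trial
# by fitting an independent Poisson regression in every time bin. Data are synthetic.
rng = np.random.default_rng(1)
n_trials, n_bins, n_features = 300, 20, 5

X = rng.standard_normal((n_trials, n_features))                 # per-trial stimulus features
true_w = 0.5 * np.sin(np.linspace(0, np.pi, n_bins))[:, None] * rng.standard_normal(n_features)
counts = rng.poisson(np.exp(X @ true_w.T))                      # (trials, bins) spike counts

w_hat = np.zeros((n_bins, n_features))
for b in range(n_bins):
    w_hat[b] = PoissonRegressor(alpha=1e-3).fit(X, counts[:, b]).coef_
# w_hat[b] traces how the estimated stimulus sensitivity evolves across the trial.
```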
Affiliation(s)
- Geyu Weng
- Department of Biomedical Engineering, University of Utah, Salt Lake City, UT, United States
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Kelsey Clark
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Amir Akbarian
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Behrad Noudoost
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Neda Nategh
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Department of Electrical and Computer Engineering, University of Utah, Salt Lake City, UT, United States

54. Weber J, Solbakk AK, Blenkmann AO, Llorens A, Funderud I, Leske S, Larsson PG, Ivanovic J, Knight RT, Endestad T, Helfrich RF. Ramping dynamics and theta oscillations reflect dissociable signatures during rule-guided human behavior. Nat Commun 2024; 15:637. PMID: 38245516; PMCID: PMC10799948; DOI: 10.1038/s41467-023-44571-7.
Abstract
Contextual cues and prior evidence guide human goal-directed behavior. The neurophysiological mechanisms that implement contextual priors to guide subsequent actions in the human brain remain unclear. Using intracranial electroencephalography (iEEG), we demonstrate that increasing uncertainty introduces a shift from a purely oscillatory to a mixed processing regime with an additional ramping component. Oscillatory and ramping dynamics reflect dissociable signatures, which likely differentially contribute to the encoding and transfer of different cognitive variables in a cue-guided motor task. The results support the idea that prefrontal activity encodes rules and ensuing actions in distinct coding subspaces, while theta oscillations synchronize the prefrontal-motor network, possibly to guide action execution. Collectively, our results reveal how two key features of large-scale neural population activity, namely continuous ramping dynamics and oscillatory synchrony, jointly support rule-guided human behavior.
Affiliation(s)
- Jan Weber
- Hertie Institute for Clinical Brain Research, Center for Neurology, University Medical Center Tübingen, Tübingen, Germany
- International Max Planck Research School for the Mechanisms of Mental Function and Dysfunction, University of Tübingen, Tübingen, Germany
- Anne-Kristin Solbakk
- Department of Psychology, University of Oslo, Oslo, Norway
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Department of Neurosurgery, Oslo University Hospital, Oslo, Norway
- Department of Neuropsychology, Helgeland Hospital, Mosjøen, Norway
- Alejandro O Blenkmann
- Department of Psychology, University of Oslo, Oslo, Norway
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Anais Llorens
- Department of Psychology, University of Oslo, Oslo, Norway
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Helen Wills Neuroscience Institute, UC Berkeley, Berkeley, CA, USA
- Ingrid Funderud
- Department of Psychology, University of Oslo, Oslo, Norway
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Department of Neuropsychology, Helgeland Hospital, Mosjøen, Norway
- Sabine Leske
- Department of Psychology, University of Oslo, Oslo, Norway
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Department of Musicology, University of Oslo, Oslo, Norway
- Robert T Knight
- Helen Wills Neuroscience Institute, UC Berkeley, Berkeley, CA, USA
- Department of Psychology, UC Berkeley, Berkeley, CA, USA
- Tor Endestad
- Department of Psychology, University of Oslo, Oslo, Norway
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Randolph F Helfrich
- Hertie Institute for Clinical Brain Research, Center for Neurology, University Medical Center Tübingen, Tübingen, Germany

55. Gort J. Emergence of Universal Computations Through Neural Manifold Dynamics. Neural Comput 2024; 36:227-270. PMID: 38101328; DOI: 10.1162/neco_a_01631.
Abstract
There is growing evidence that many forms of neural computation may be implemented by low-dimensional dynamics unfolding at the population scale. However, neither the connectivity structure nor the general capabilities of these embedded dynamical processes are currently understood. In this work, the two most common formalisms of firing-rate models are evaluated using tools from analysis, topology, and nonlinear dynamics in order to provide plausible explanations for these problems. It is shown that low-rank structured connectivities predict the formation of invariant and globally attracting manifolds in all these models. Regarding the dynamics arising in these manifolds, it is proved they are topologically equivalent across the considered formalisms. This letter also shows that under the low-rank hypothesis, the flows emerging in neural manifolds, including input-driven systems, are universal, which broadens previous findings. It explores how low-dimensional orbits can bear the production of continuous sets of muscular trajectories, the implementation of central pattern generators, and the storage of memory states. These dynamics can robustly simulate any Turing machine over arbitrary bounded memory strings, virtually endowing rate models with the power of universal computation. In addition, the letter shows how the low-rank hypothesis predicts the parsimonious correlation structure observed in cortical activity. Finally, it discusses how this theory could provide a useful tool from which to study neuropsychological phenomena using mathematical methods.
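The low-rank-connectivity argument summarized above can be illustrated numerically; the sketch below (my construction, with arbitrary parameters) simulates a firing-rate network with rank-one connectivity and checks that activity relaxes onto the one-dimensional manifold spanned by the connectivity vector.

```python
import numpy as np

# Rank-one firing-rate network: activity orthogonal to the connectivity vector m decays,
# so the population state relaxes onto a one-dimensional manifold spanned by m.
rng = np.random.default_rng(2)
N, steps, dt, g = 500, 2000, 0.01, 2.0

m = rng.standard_normal(N)
J = g * np.outer(m, m) / N                       # rank-one connectivity

x = rng.standard_normal(N)                       # start well off the manifold
for _ in range(steps):
    x = x + dt * (-x + J @ np.tanh(x))           # dx/dt = -x + J * phi(x)

kappa = (m @ x) / (m @ m)                        # latent coordinate along m
off_manifold = np.linalg.norm(x - kappa * m) / np.linalg.norm(x)
print(f"kappa = {kappa:.3f}, off-manifold fraction = {off_manifold:.3f}")
```

With rank-R connectivity the same argument yields manifolds of dimension up to R, which is the setting in which the universality results discussed in the abstract apply.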
Affiliation(s)
- Joan Gort
- Facultat de Psicologia, Universitat Autònoma de Barcelona, 08193, Bellaterra, Barcelona, Spain

56. Love K, Cao D, Chang JC, Dal'Bello LR, Ma X, O'Shea DJ, Schone HR, Shahbazi M, Smoulder A. Highlights from the 32nd Annual Meeting of the Society for the Neural Control of Movement. J Neurophysiol 2024; 131:75-87. PMID: 38057264; DOI: 10.1152/jn.00428.2023.
Affiliation(s)
- Kassia Love
- Massachusetts Eye and Ear, Boston, Massachusetts, United States
- Di Cao
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, Maryland, United States
- Center for Movement Studies, Kennedy Krieger Institute, Baltimore, Maryland, United States
- Joanna C Chang
- Department of Bioengineering, Imperial College London, London, United Kingdom
- Lucas R Dal'Bello
- Laboratory of Neuromotor Physiology, IRCCS Fondazione Santa Lucia, Rome, Italy
- Xuan Ma
- Department of Neuroscience, Northwestern University, Chicago, Illinois, United States
- Daniel J O'Shea
- Department of Bioengineering, Stanford University, Stanford, California, United States
- Hunter R Schone
- Rehabilitation and Neural Engineering Laboratory, University of Pittsburgh, Pittsburgh, Pennsylvania, United States
- Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, Pennsylvania, United States
- Mahdiyar Shahbazi
- Western Institute for Neuroscience, Western University, London, Ontario, Canada
- Adam Smoulder
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
- Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania, United States

57. Abbaspourazad H, Erturk E, Pesaran B, Shanechi MM. Dynamical flexible inference of nonlinear latent factors and structures in neural population activity. Nat Biomed Eng 2024; 8:85-108. PMID: 38082181; PMCID: PMC11735406; DOI: 10.1038/s41551-023-01106-1.
Abstract
Modelling the spatiotemporal dynamics in the activity of neural populations while also enabling their flexible inference is hindered by the complexity and noisiness of neural observations. Here we show that the lower-dimensional nonlinear latent factors and latent structures can be computationally modelled in a manner that allows for flexible inference causally, non-causally and in the presence of missing neural observations. To enable flexible inference, we developed a neural network that separates the model into jointly trained manifold and dynamic latent factors such that nonlinearity is captured through the manifold factors and the dynamics can be modelled in tractable linear form on this nonlinear manifold. We show that the model, which we named 'DFINE' (for 'dynamical flexible inference for nonlinear embeddings') achieves flexible inference in simulations of nonlinear dynamics and across neural datasets representing a diversity of brain regions and behaviours. Compared with earlier neural-network models, DFINE enables flexible inference, better predicts neural activity and behaviour, and better captures the latent neural manifold structure. DFINE may advance the development of neurotechnology and investigations in neuroscience.
Affiliation(s)
- Hamidreza Abbaspourazad
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Eray Erturk
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Bijan Pesaran
- Departments of Neurosurgery, Neuroscience, and Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Thomas Lord Department of Computer Science, Alfred E. Mann Department of Biomedical Engineering, Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, USA

58. Elmoznino E, Bonner MF. High-performing neural network models of visual cortex benefit from high latent dimensionality. PLoS Comput Biol 2024; 20:e1011792. PMID: 38198504; PMCID: PMC10805290; DOI: 10.1371/journal.pcbi.1011792.
Abstract
Geometric descriptions of deep neural networks (DNNs) have the potential to uncover core representational principles of computational models in neuroscience. Here we examined the geometry of DNN models of visual cortex by quantifying the latent dimensionality of their natural image representations. A popular view holds that optimal DNNs compress their representations onto low-dimensional subspaces to achieve invariance and robustness, which suggests that better models of visual cortex should have lower dimensional geometries. Surprisingly, we found a strong trend in the opposite direction: neural networks with high-dimensional image subspaces tended to have better generalization performance when predicting cortical responses to held-out stimuli in both monkey electrophysiology and human fMRI data. Moreover, we found that high dimensionality was associated with better performance when learning new categories of stimuli, suggesting that higher dimensional representations are better suited to generalize beyond their training domains. These findings suggest a general principle whereby high-dimensional geometry confers computational benefits to DNN models of visual cortex.
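One common way to quantify the latent dimensionality discussed above is the participation ratio of the covariance eigenspectrum; the sketch below shows that computation on synthetic activations (the paper's exact estimator may differ).

```python
import numpy as np

# Participation ratio of the covariance eigenspectrum, a common latent-dimensionality measure.
def effective_dimensionality(features: np.ndarray) -> float:
    """features: (n_stimuli, n_units) activations for a set of images."""
    eigvals = np.linalg.eigvalsh(np.cov(features, rowvar=False))
    eigvals = np.clip(eigvals, 0.0, None)
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()

rng = np.random.default_rng(3)
low_d = rng.standard_normal((1000, 5)) @ rng.standard_normal((5, 200))  # ~5-dimensional geometry
high_d = rng.standard_normal((1000, 200))                               # ~200-dimensional geometry
print(effective_dimensionality(low_d), effective_dimensionality(high_d))
```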
Affiliation(s)
- Eric Elmoznino
- Department of Cognitive Science, Johns Hopkins University, Baltimore, Maryland, United States of America
- Michael F. Bonner
- Department of Cognitive Science, Johns Hopkins University, Baltimore, Maryland, United States of America

59. Safaie M, Chang JC, Park J, Miller LE, Dudman JT, Perich MG, Gallego JA. Preserved neural dynamics across animals performing similar behaviour. Nature 2023; 623:765-771. PMID: 37938772; PMCID: PMC10665198; DOI: 10.1038/s41586-023-06714-0.
Abstract
Animals of the same species exhibit similar behaviours that are advantageously adapted to their body and environment. These behaviours are shaped at the species level by selection pressures over evolutionary timescales. Yet, it remains unclear how these common behavioural adaptations emerge from the idiosyncratic neural circuitry of each individual. The overall organization of neural circuits is preserved across individuals1 because of their common evolutionarily specified developmental programme2-4. Such organization at the circuit level may constrain neural activity5-8, leading to low-dimensional latent dynamics across the neural population9-11. Accordingly, here we suggested that the shared circuit-level constraints within a species would lead to suitably preserved latent dynamics across individuals. We analysed recordings of neural populations from monkey and mouse motor cortex to demonstrate that neural dynamics in individuals from the same species are surprisingly preserved when they perform similar behaviour. Neural population dynamics were also preserved when animals consciously planned future movements without overt behaviour12 and enabled the decoding of planned and ongoing movement across different individuals. Furthermore, we found that preserved neural dynamics extend beyond cortical regions to the dorsal striatum, an evolutionarily older structure13,14. Finally, we used neural network models to demonstrate that behavioural similarity is necessary but not sufficient for this preservation. We posit that these emergent dynamics result from evolutionary constraints on brain development and thus reflect fundamental properties of the neural basis of behaviour.
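A typical pipeline for this kind of across-individual comparison (an assumed sketch, not the authors' exact code) reduces each animal's population activity to a few latent dimensions and then aligns the two sets of latent trajectories with canonical correlation analysis (CCA):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

# Two synthetic "animals" whose neural activity is driven by shared latent dynamics.
rng = np.random.default_rng(4)
T = 600                                          # time points (e.g., concatenated trials)
shared = rng.standard_normal((T, 8))             # latent dynamics assumed shared across animals

neural_a = shared @ rng.standard_normal((8, 120)) + 0.5 * rng.standard_normal((T, 120))
neural_b = shared @ rng.standard_normal((8, 90)) + 0.5 * rng.standard_normal((T, 90))

latents_a = PCA(n_components=8).fit_transform(neural_a)   # each animal's low-dimensional dynamics
latents_b = PCA(n_components=8).fit_transform(neural_b)

u, v = CCA(n_components=8, max_iter=1000).fit(latents_a, latents_b).transform(latents_a, latents_b)
ccs = [np.corrcoef(u[:, i], v[:, i])[0, 1] for i in range(8)]
print("canonical correlations:", np.round(ccs, 2))         # high values = well-preserved dynamics
```

High canonical correlations across animals, relative to appropriate controls, are the kind of evidence for preserved latent dynamics described in the abstract.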
Affiliation(s)
- Mostafa Safaie
- Department of Bioengineering, Imperial College London, London, UK
- Joanna C Chang
- Department of Bioengineering, Imperial College London, London, UK
- Junchol Park
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Lee E Miller
- Departments of Physiology, Biomedical Engineering and Physical Medicine and Rehabilitation, Northwestern University and Shirley Ryan Ability Lab, Chicago, IL, USA
- Joshua T Dudman
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Matthew G Perich
- Département de Neurosciences, Faculté de Médecine, Université de Montréal, Montreal, Quebec, Canada
- Mila, Quebec Artificial Intelligence Institute, Montreal, Quebec, Canada
- Juan A Gallego
- Department of Bioengineering, Imperial College London, London, UK

60. Jajcay N, Hlinka J. Towards a dynamical understanding of microstate analysis of M/EEG data. Neuroimage 2023; 281:120371. PMID: 37716592; DOI: 10.1016/j.neuroimage.2023.120371.
Abstract
One of the interesting aspects of EEG data is the presence of temporally stable and spatially coherent patterns of activity, known as microstates, which have been linked to various cognitive and clinical phenomena. However, there is still no general agreement on the interpretation of microstate analysis. Various clustering algorithms have been used for microstate computation, and multiple studies suggest that the microstate time series may provide insight into the neural activity of the brain in the resting state. This study addresses two gaps in the literature. Firstly, by applying several state-of-the-art microstate algorithms to a large dataset of EEG recordings, we aim to characterise and describe various microstate algorithms. We demonstrate and discuss why the three "classically" used algorithms ((T)AAHC and modified K-Means) yield virtually the same results, while HMM algorithm generates the most dissimilar results. Secondly, we aim to test the hypothesis that dynamical microstate properties might be, to a large extent, determined by the linear characteristics of the underlying EEG signal, in particular, by the cross-covariance and autocorrelation structure of the EEG data. To this end, we generated a Fourier transform surrogate of the EEG signal to compare microstate properties. Here, we found that these are largely similar, thus hinting that microstate properties depend to a very high degree on the linear covariance and autocorrelation structure of the underlying EEG data. Finally, we treated the EEG data as a vector autoregression process, estimated its parameters, and generated surrogate stationary and linear data from fitted VAR. We observed that such a linear model generates microstates highly comparable to those estimated from real EEG data, supporting the conclusion that a linear EEG model can help with the methodological and clinical interpretation of both static and dynamic human brain microstate properties.
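The Fourier-transform surrogate test mentioned above can be sketched in a few lines: randomize the phase spectrum while preserving the amplitude spectrum, which preserves the linear autocorrelation (and, with a phase common to all channels, the cross-covariance) of the signal. Details of the surrogate variant used in the paper may differ.

```python
import numpy as np

# Fourier-transform surrogate: keep each channel's amplitude spectrum (hence autocorrelation),
# randomize phases; using one shared phase vector across channels also preserves cross-covariance.
def ft_surrogate(eeg: np.ndarray, seed: int = 5) -> np.ndarray:
    """eeg: (n_samples, n_channels) real-valued signal."""
    rng = np.random.default_rng(seed)
    n = eeg.shape[0]
    spectrum = np.fft.rfft(eeg, axis=0)
    phases = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(spectrum.shape[0], 1)))
    phases[0] = 1.0                          # keep the DC component unchanged
    if n % 2 == 0:
        phases[-1] = 1.0                     # keep the Nyquist component real
    return np.fft.irfft(spectrum * phases, n=n, axis=0)

surrogate = ft_surrogate(np.random.default_rng(6).standard_normal((10000, 32)))
```

Running microstate segmentation on such surrogates and comparing the resulting statistics to those of the real EEG is the test the abstract refers to.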
Affiliation(s)
- Nikola Jajcay
- Center for Advanced Studies of Brain and Consciousness, National Institute of Mental Health, Klecany, 250 67, Czech Republic; Department of Complex Systems, Institute of Computer Science, Czech Academy of Sciences, Prague, 182 07, Czech Republic
- Jaroslav Hlinka
- Center for Advanced Studies of Brain and Consciousness, National Institute of Mental Health, Klecany, 250 67, Czech Republic; Department of Complex Systems, Institute of Computer Science, Czech Academy of Sciences, Prague, 182 07, Czech Republic

61. Barbosa J, Proville R, Rodgers CC, DeWeese MR, Ostojic S, Boubenec Y. Early selection of task-relevant features through population gating. Nat Commun 2023; 14:6837. PMID: 37884507; PMCID: PMC10603060; DOI: 10.1038/s41467-023-42519-5.
Abstract
Brains can gracefully weed out irrelevant stimuli to guide behavior. This feat is believed to rely on a progressive selection of task-relevant stimuli across the cortical hierarchy, but the specific across-area interactions enabling stimulus selection are still unclear. Here, we propose that population gating, occurring within primary auditory cortex (A1) but controlled by top-down inputs from prelimbic region of medial prefrontal cortex (mPFC), can support across-area stimulus selection. Examining single-unit activity recorded while rats performed an auditory context-dependent task, we found that A1 encoded relevant and irrelevant stimuli along a common dimension of its neural space. Yet, the relevant stimulus encoding was enhanced along an extra dimension. In turn, mPFC encoded only the stimulus relevant to the ongoing context. To identify candidate mechanisms for stimulus selection within A1, we reverse-engineered low-rank RNNs trained on a similar task. Our analyses predicted that two context-modulated neural populations gated their preferred stimulus in opposite contexts, which we confirmed in further analyses of A1. Finally, we show in a two-region RNN how population gating within A1 could be controlled by top-down inputs from PFC, enabling flexible across-area communication despite fixed inter-areal connectivity.
Affiliation(s)
- Joao Barbosa
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL Research University, 75005, Paris, France
- Rémi Proville
- Tailored Data Solutions, 192 Cours Gambetta, 84300, Cavaillon, France
- Chris C Rodgers
- Department of Neurosurgery, Emory University, Atlanta, GA, 30033, USA
- Michael R DeWeese
- Department of Physics, Helen Wills Neuroscience Institute, and Redwood Center for Theoretical Neuroscience, University of California, Berkeley, CA, USA
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL Research University, 75005, Paris, France
- Yves Boubenec
- Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, École Normale Supérieure PSL Research University, CNRS, Paris, France

62. Pei R, Courtney AL, Ferguson I, Brennan C, Zaki J. A neural signature of social support mitigates negative emotion. Sci Rep 2023; 13:17293. PMID: 37828064; PMCID: PMC10570303; DOI: 10.1038/s41598-023-43273-w.
Abstract
Social support can mitigate the impact of distressing events. Such stress buffering elicits activity in many brain regions, but it remains unclear (1) whether this activity constitutes a stable brain signature, and (2) whether brain activity can predict buffering across people. Here, we developed a neural signature that predicted social buffering of negative emotion in response to real life stressors. During neuroimaging, participants (n = 95) responded to stressful autobiographical memories either naturally, or by imagining a conversation with a peer. Using supervised dimensionality reduction and machine learning techniques, we identified a spatio-temporal neural signature that distinguished between these two trials. Activation of this signature was associated with less negative affect across trials, and people who most activated the signature reported more supportive social connections and lower loneliness outside the lab. Together, this work provides a behaviorally relevant neurophysiological marker for social support that underlies stress buffering.
Affiliation(s)
- Rui Pei
- Department of Psychology, Stanford University, Stanford, USA
- Ian Ferguson
- Department of Psychology, Stanford University, Stanford, USA
- Jamil Zaki
- Department of Psychology, Stanford University, Stanford, USA

63. Mang J, Xu Z, Qi Y, Zhang T. Favoring the cognitive-motor process in the closed-loop of BCI mediated post stroke motor function recovery: challenges and approaches. Front Neurorobot 2023; 17:1271967. PMID: 37881517; PMCID: PMC10595019; DOI: 10.3389/fnbot.2023.1271967.
Abstract
Brain-computer interface (BCI)-mediated rehabilitation is emerging as a solution to restore motor skills in paretic patients after stroke. In the human brain, cortical motor neurons not only fire when actions are carried out but are also activated in a wired manner through many cognitive processes related to movement, such as imagining, perceiving, and observing the actions. Moreover, the recruitment of motor cortexes can usually be regulated by environmental conditions, forming a closed loop through neurofeedback. However, this cognitive-motor control loop is often interrupted by the impairment of stroke. The requirement to bridge the stroke-induced gap in the motor control loop is promoting the evolution of the BCI-based motor rehabilitation system and, notably, posing many challenges regarding the disease-specific process of post stroke motor function recovery. This review aimed to map the current literature surrounding the new progress in BCI-mediated post stroke motor function recovery involving cognitive aspects, particularly in how it refired and rewired the neural circuit of motor control through motor learning along with the BCI-centric closed loop.
Affiliation(s)
- Jing Mang
- Department of Neurology, China-Japan Union Hospital of Jilin University, Changchun, China
- Zhuo Xu
- Department of Rehabilitation, China-Japan Union Hospital of Jilin University, Changchun, China
- YingBin Qi
- Department of Neurology, Jilin Province People's Hospital, Changchun, China
- Ting Zhang
- Rehabilitation Therapeutics, School of Nursing, Jilin University, Changchun, China

64. Lim SC, Fusi S, Hen R. Ventral CA1 Population Codes for Anxiety. bioRxiv [Preprint] 2023:2023.09.25.559358. PMID: 37808689; PMCID: PMC10557595; DOI: 10.1101/2023.09.25.559358.
Abstract
The ventral hippocampus is a critical node in the distributed brain network that controls anxiety. Using miniature microscopy and calcium imaging, we recorded ventral CA1 (vCA1) neurons in freely moving mice as they explored variants of classic behavioral assays for anxiety. Unsupervised behavioral segmentation revealed clusters of behavioral motifs that corresponded to exploratory and vigilance-like states. We discovered multiple vCA1 population codes that represented the anxiogenic features of the environment, such as bright light and openness, as well as the moment-to-moment anxiety state of the animals. These population codes possessed distinct generalization properties: neural representations of anxiogenic features were different for open field and elevated plus/zero maze tasks, while neural representations of moment-to-moment anxiety state were similar across both experimental contexts. Our results suggest that anxiety is not tied to the aversive compartments of these mazes but is rather defined by a behavioral state and its corresponding population code that generalizes across environments.
65. Arce-McShane FI, Sessle BJ, Ram Y, Ross CF, Hatsopoulos NG. Multiple regions of sensorimotor cortex encode bite force and gape. Front Syst Neurosci 2023; 17:1213279. PMID: 37808467; PMCID: PMC10556252; DOI: 10.3389/fnsys.2023.1213279.
Abstract
The precise control of bite force and gape is vital for safe and effective breakdown and manipulation of food inside the oral cavity during feeding. Yet, the role of the orofacial sensorimotor cortex (OSMcx) in the control of bite force and gape is still largely unknown. The aim of this study was to elucidate how individual neurons and populations of neurons in multiple regions of OSMcx differentially encode bite force and static gape when subjects (Macaca mulatta) generated different levels of bite force at varying gapes. We examined neuronal activity recorded simultaneously from three microelectrode arrays implanted chronically in the primary motor (MIo), primary somatosensory (SIo), and cortical masticatory (CMA) areas of OSMcx. We used generalized linear models to evaluate encoding properties of individual neurons and utilized dimensionality reduction techniques to decompose population activity into components related to specific task parameters. Individual neurons encoded bite force more strongly than gape in all three OSMCx areas although bite force was a better predictor of spiking activity in MIo vs. SIo. Population activity differentiated between levels of bite force and gape while preserving task-independent temporal modulation across the behavioral trial. While activation patterns of neuronal populations were comparable across OSMCx areas, the total variance explained by task parameters was context-dependent and differed across areas. These findings suggest that the cortical control of static gape during biting may rely on computations at the population level whereas the strong encoding of bite force at the individual neuron level allows for the precise and rapid control of bite force.
Affiliation(s)
- Fritzie I. Arce-McShane
- Department of Oral Health Sciences, School of Dentistry, University of Washington, Seattle, WA, United States
- Graduate Program in Neuroscience, University of Washington, Seattle, WA, United States
- Barry J. Sessle
- Faculty of Dentistry and Department of Physiology, Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Yasheshvini Ram
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL, United States
- Callum F. Ross
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL, United States
- Nicholas G. Hatsopoulos
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL, United States

66. Ye J, Collinger JL, Wehbe L, Gaunt R. Neural Data Transformer 2: Multi-context Pretraining for Neural Spiking Activity. bioRxiv [Preprint] 2023:2023.09.18.558113. PMID: 37781630; PMCID: PMC10541112; DOI: 10.1101/2023.09.18.558113.
Abstract
The neural population spiking activity recorded by intracortical brain-computer interfaces (iBCIs) contains rich structure. Current models of such spiking activity are largely prepared for individual experimental contexts, restricting data volume to that collectable within a single session and limiting the effectiveness of deep neural networks (DNNs). The purported challenge in aggregating neural spiking data is the pervasiveness of context-dependent shifts in the neural data distributions. However, large scale unsupervised pretraining by nature spans heterogeneous data, and has proven to be a fundamental recipe for successful representation learning across deep learning. We thus develop Neural Data Transformer 2 (NDT2), a spatiotemporal Transformer for neural spiking activity, and demonstrate that pretraining can leverage motor BCI datasets that span sessions, subjects, and experimental tasks. NDT2 enables rapid adaptation to novel contexts in downstream decoding tasks and opens the path to deployment of pretrained DNNs for iBCI control. Code: https://github.com/joel99/context_general_bci.
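A toy sketch of the masked-pretraining recipe behind such models (my own simplification; the actual NDT2 architecture and objective live in the repository linked above): mask a subset of time bins of binned spike counts and train a small Transformer encoder to reconstruct them under a Poisson loss.

```python
import torch
import torch.nn as nn

# Toy masked-modelling objective on binned spike counts; shapes and hyperparameters are arbitrary.
n_neurons, d_model, T, batch = 64, 128, 100, 8
embed = nn.Linear(n_neurons, d_model)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
readout = nn.Linear(d_model, n_neurons)
params = list(embed.parameters()) + list(encoder.parameters()) + list(readout.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

spikes = torch.poisson(torch.rand(batch, T, n_neurons) * 3)   # stand-in for binned recordings
mask = torch.rand(batch, T, 1) < 0.25                         # hide 25% of the time bins
inputs = torch.where(mask, torch.zeros_like(spikes), spikes)

log_rates = readout(encoder(embed(inputs)))                   # predict log firing rates everywhere
sel = mask.expand_as(spikes)
loss = nn.PoissonNLLLoss(log_input=True)(log_rates[sel], spikes[sel])  # score only the masked bins
loss.backward()
opt.step()
```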
Affiliation(s)
- Joel Ye
- Rehab Neural Engineering Labs, University of Pittsburgh
- Neuroscience Institute, Carnegie Mellon University
- Center for the Neural Basis of Cognition, Pittsburgh
- Jennifer L. Collinger
- Rehab Neural Engineering Labs, University of Pittsburgh
- Center for the Neural Basis of Cognition, Pittsburgh
- Department of Physical Medicine and Rehabilitation, University of Pittsburgh
- Department of Bioengineering, University of Pittsburgh
- Department of Biomedical Engineering, Carnegie Mellon University
- Leila Wehbe
- Neuroscience Institute, Carnegie Mellon University
- Center for the Neural Basis of Cognition, Pittsburgh
- Machine Learning Department, Carnegie Mellon University
- Robert Gaunt
- Rehab Neural Engineering Labs, University of Pittsburgh
- Center for the Neural Basis of Cognition, Pittsburgh
- Department of Physical Medicine and Rehabilitation, University of Pittsburgh
- Department of Bioengineering, University of Pittsburgh
- Department of Biomedical Engineering, Carnegie Mellon University

67. Li HH, Curtis CE. Neural population dynamics of human working memory. Curr Biol 2023; 33:3775-3784.e4. PMID: 37595590; PMCID: PMC10528783; DOI: 10.1016/j.cub.2023.07.067.
Abstract
The activity of neurons in macaque prefrontal cortex (PFC) persists during working memory (WM) delays, providing a mechanism for memory.1,2,3,4,5,6,7,8,9,10,11 Although theory,11,12 including formal network models,13,14 assumes that WM codes are stable over time, PFC neurons exhibit dynamics inconsistent with these assumptions.15,16,17,18,19 Recently, multivariate reanalyses revealed the coexistence of both stable and dynamic WM codes in macaque PFC.20,21,22,23 Human EEG studies also suggest that WM might contain dynamics.24,25 Nonetheless, how WM dynamics vary across the cortical hierarchy and which factors drive dynamics remain unknown. To elucidate WM dynamics in humans, we decoded WM content from fMRI responses across multiple cortical visual field maps.26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48 We found coexisting stable and dynamic neural representations of WM during a memory-guided saccade task. Geometric analyses of neural subspaces revealed that early visual cortex exhibited stronger dynamics than high-level visual and frontoparietal cortex. Leveraging models of population receptive fields, we visualized and made the neural dynamics interpretable. We found that during WM delays, V1 population initially encoded a narrowly tuned bump of activation centered on the peripheral memory target. Remarkably, this bump then spread inward toward foveal locations, forming a vector along the trajectory of the forthcoming memory-guided saccade. In other words, the neural code transformed into an abstraction of the stimulus more proximal to memory-guided behavior. Therefore, theories of WM must consider both sensory features and their task-relevant abstractions because changes in the format of memoranda naturally drive neural dynamics.
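One standard way to make the stable-versus-dynamic distinction concrete (an illustration of the general idea, not the authors' analysis code) is to compare the subspaces occupied by the population code at two time points using principal angles:

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(8)
early = rng.standard_normal((100, 3))                   # (units/voxels, code dims) early in the delay
rotation = np.linalg.qr(rng.standard_normal((100, 100)))[0]
late_stable = early                                      # same code later in the delay
late_dynamic = rotation @ early                          # code rotated into new dimensions

print(np.degrees(subspace_angles(early, late_stable)))   # ~0 degrees: stable code
print(np.degrees(subspace_angles(early, late_dynamic)))  # generally large: dynamic code
```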
Affiliation(s)
- Hsin-Hung Li
- Department of Psychology, New York University, New York, NY 10003, USA; Center for Neural Science, New York University, New York, NY 10003, USA
- Clayton E Curtis
- Department of Psychology, New York University, New York, NY 10003, USA; Center for Neural Science, New York University, New York, NY 10003, USA

68. Bounds HA, Sadahiro M, Hendricks WD, Gajowa M, Gopakumar K, Quintana D, Tasic B, Daigle TL, Zeng H, Oldenburg IA, Adesnik H. All-optical recreation of naturalistic neural activity with a multifunctional transgenic reporter mouse. Cell Rep 2023; 42:112909. PMID: 37542722; PMCID: PMC10755854; DOI: 10.1016/j.celrep.2023.112909.
Abstract
Determining which features of the neural code drive behavior requires the ability to simultaneously read out and write in neural activity patterns with high precision across many neurons. All-optical systems that combine two-photon calcium imaging and targeted photostimulation enable the activation of specific, functionally defined groups of neurons. However, these techniques are unable to test how patterns of activity across a population contribute to computation because of an inability to both read and write cell-specific firing rates. To overcome this challenge, we make two advances: first, we introduce a genetic line of mice for Cre-dependent co-expression of a calcium indicator and a potent soma-targeted microbial opsin. Second, using this line, we develop a method for read-out and write-in of precise population vectors of neural activity by calibrating the photostimulation to each cell. These advances offer a powerful and convenient platform for investigating the neural codes of computation and behavior.
Affiliation(s)
- Hayley A Bounds
- Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, CA, USA; The Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA
- Masato Sadahiro
- Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, CA, USA
- William D Hendricks
- Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, CA, USA
- Marta Gajowa
- Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, CA, USA
- Karthika Gopakumar
- Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, CA, USA
- Daniel Quintana
- Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, CA, USA
- Hongkui Zeng
- Allen Institute for Brain Science, Seattle, WA, USA
- Ian Antón Oldenburg
- Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, CA, USA
- Hillel Adesnik
- Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, CA, USA; The Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA; Chan Zuckerberg Biohub, San Francisco, CA 94158, USA

69. Kirchherr S, Mildiner Moraga S, Coudé G, Bimbi M, Ferrari PF, Aarts E, Bonaiuto JJ. Bayesian multilevel hidden Markov models identify stable state dynamics in longitudinal recordings from macaque primary motor cortex. Eur J Neurosci 2023; 58:2787-2806. PMID: 37382060; DOI: 10.1111/ejn.16065.
Abstract
Neural populations, rather than single neurons, may be the fundamental unit of cortical computation. Analysing chronically recorded neural population activity is challenging not only because of the high dimensionality of activity but also because of changes in the signal that may or may not be due to neural plasticity. Hidden Markov models (HMMs) are a promising technique for analysing such data in terms of discrete latent states, but previous approaches have not considered the statistical properties of neural spiking data, have not been adaptable to longitudinal data, or have not modelled condition-specific differences. We present a multilevel Bayesian HMM that addresses these shortcomings by incorporating multivariate Poisson log-normal emission probability distributions, multilevel parameter estimation and trial-specific condition covariates. We applied this framework to multi-unit neural spiking data recorded using chronically implanted multi-electrode arrays from macaque primary motor cortex during a cued reaching, grasping and placing task. We show that, in line with previous work, the model identifies latent neural population states which are tightly linked to behavioural events, despite the model being trained without any information about event timing. The association between these states and corresponding behaviour is consistent across multiple days of recording. Notably, this consistency is not observed in the case of a single-level HMM, which fails to generalise across distinct recording sessions. The utility and stability of this approach are demonstrated using a previously learned task, but this multilevel Bayesian HMM framework would be especially suited for future studies of long-term plasticity in neural populations.
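For orientation, the sketch below shows the simpler, single-level version of the idea: decoding discrete latent states from multi-unit spike counts with a Poisson-emission HMM whose parameters are assumed known (the paper's model additionally uses multivariate Poisson log-normal emissions, multilevel estimation and condition covariates).

```python
import numpy as np
from scipy.stats import poisson

def viterbi_poisson(counts, log_A, log_pi, rates):
    """Most likely state path for an HMM with independent Poisson emissions.
    counts: (T, n_units) spike counts; log_A: (K, K) log transition matrix;
    log_pi: (K,) log initial probabilities; rates: (K, n_units) per-state firing rates."""
    T, K = counts.shape[0], rates.shape[0]
    log_emis = np.stack([poisson.logpmf(counts, rates[k]).sum(axis=1) for k in range(K)], axis=1)
    delta = np.zeros((T, K))
    back = np.zeros((T, K), dtype=int)
    delta[0] = log_pi + log_emis[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A     # scores[i, j]: best path ending in i, then i -> j
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_emis[t]
    states = np.zeros(T, dtype=int)
    states[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        states[t] = back[t + 1, states[t + 1]]
    return states

rng = np.random.default_rng(7)
rates = np.array([[2.0, 8.0, 1.0], [6.0, 1.0, 5.0]])   # two hidden states, three units
true_states = (np.arange(200) // 50) % 2                # alternating 50-sample blocks
counts = rng.poisson(rates[true_states])
decoded = viterbi_poisson(counts, np.log([[0.95, 0.05], [0.05, 0.95]]), np.log([0.5, 0.5]), rates)
print("decoding accuracy:", (decoded == true_states).mean())
```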
Affiliation(s)
- Sebastien Kirchherr
- Institut des Sciences Cognitives Marc Jeannerod, CNRS UMR 5229, Bron, France
- Université Claude Bernard Lyon 1, Université de Lyon, France
- Gino Coudé
- Institut des Sciences Cognitives Marc Jeannerod, CNRS UMR 5229, Bron, France
- Université Claude Bernard Lyon 1, Université de Lyon, France
- Inovarion, Paris, France
- Marco Bimbi
- Institut des Sciences Cognitives Marc Jeannerod, CNRS UMR 5229, Bron, France
- Université Claude Bernard Lyon 1, Université de Lyon, France
- Pier F Ferrari
- Institut des Sciences Cognitives Marc Jeannerod, CNRS UMR 5229, Bron, France
- Université Claude Bernard Lyon 1, Université de Lyon, France
- Emmeke Aarts
- Department of Methodology and Statistics, Universiteit Utrecht, Utrecht, Netherlands
- James J Bonaiuto
- Institut des Sciences Cognitives Marc Jeannerod, CNRS UMR 5229, Bron, France
- Université Claude Bernard Lyon 1, Université de Lyon, France

70. Wu S, Wang Y. Applying Neural Manifold Constraint on Point Process Model for Neural Spike Prediction. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. PMID: 38083695; DOI: 10.1109/embc40787.2023.10340489.
Abstract
Neural prostheses can compensate for functional losses caused by blocked neural pathways by modeling neural activities among cortical areas. Existing methods generally utilize point process models to predict neural spikes from one area to another, and optimize the model by maximizing the log-likelihood between model predictions and recorded activities of individual neurons. However, single-neuron recordings can be distorted, while neuron population activity tends to reside within a stable subspace called the neural manifold, which reflects the connectivity and correlation among output neurons. This paper proposes a neural manifold constraint to modify the loss function for model training. The constraint term minimizes the distance from model predictions to the empirical manifold to amend the model predictions from distorted recordings. We test our methods on synthetic data with distortion on output spike trains and evaluate the similarity between model predictions and original output spike trains by the Kolmogorov-Smirnov test. The results show that the models trained with the constraint have higher goodness-of-fit than those trained without it, which indicates a potentially better approach for neural prostheses in noisy environments.
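The abstract describes adding a manifold-distance penalty to a point-process likelihood; a simplified version of such an objective (my assumed formulation, not necessarily the paper's) could look like this, with the empirical manifold approximated by a PCA subspace of the recorded activity:

```python
import numpy as np

def manifold_constrained_loss(pred_log_rates, spikes, pca_components, pca_mean, lam=1.0):
    """Poisson negative log-likelihood plus a penalty on the distance between predicted
    population activity and a linear 'manifold' (PCA subspace) fit to recorded activity.
    pred_log_rates, spikes: (T, n_neurons); pca_components: (k, n_neurons) orthonormal rows."""
    rates = np.exp(pred_log_rates)
    nll = np.sum(rates - spikes * pred_log_rates)              # Poisson NLL up to a constant
    centered = rates - pca_mean
    projected = centered @ pca_components.T @ pca_components   # reconstruction within the subspace
    manifold_dist = np.sum((centered - projected) ** 2)        # squared distance to the manifold
    return nll + lam * manifold_dist
```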
71. Fine JM, Maisson DJN, Yoo SBM, Cash-Padgett TV, Wang MZ, Zimmermann J, Hayden BY. Abstract Value Encoding in Neural Populations But Not Single Neurons. J Neurosci 2023; 43:4650-4663. PMID: 37208178; PMCID: PMC10286943; DOI: 10.1523/jneurosci.1954-22.2023.
Abstract
An important open question in neuroeconomics is how the brain represents the value of offers in a way that is both abstract (allowing for comparison) and concrete (preserving the details of the factors that influence value). Here, we examine neuronal responses to risky and safe options in five brain regions that putatively encode value in male macaques. Surprisingly, we find no detectable overlap in the neural codes used for risky and safe options, even when the options have identical subjective values (as revealed by preference) in any of the regions. Indeed, responses are weakly correlated and occupy distinct (semi-orthogonal) encoding subspaces. Notably, however, these subspaces are linked through a linear transform of their constituent encodings, a property that allows for comparison of dissimilar option types. This encoding scheme allows these regions to multiplex decision-related processes: they can encode the detailed factors that influence offer value (here, risk and safety) but also directly compare dissimilar offer types. Together these results suggest a neuronal basis for the qualitatively different psychological properties of risky and safe options and highlight the power of population geometry to resolve outstanding problems in neural coding.

SIGNIFICANCE STATEMENT: To make economic choices, we must have some mechanism for comparing dissimilar offers. We propose that the brain uses distinct neural codes for risky and safe offers, but that these codes are linearly transformable. This encoding scheme has the dual advantage of allowing for comparison across offer types while preserving information about offer type, which in turn allows for flexibility in changing circumstances. We show that responses to risky and safe offers exhibit these predicted properties in five different reward-sensitive regions. Together, these results highlight the power of population coding principles for solving representation problems in economic choice.
Affiliation(s)
- Justin M Fine
- Department of Neuroscience and Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, Minnesota 55455
- David J-N Maisson
- Department of Neuroscience and Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, Minnesota 55455
- Seng Bum Michael Yoo
- Department of Neuroscience and Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, Minnesota 55455
- Tyler V Cash-Padgett
- Department of Neuroscience and Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, Minnesota 55455
- Maya Zhe Wang
- Department of Neuroscience and Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, Minnesota 55455
- Jan Zimmermann
- Department of Neuroscience and Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, Minnesota 55455
- Benjamin Y Hayden
- Department of Neuroscience and Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, Minnesota 55455

72. Chang JC, Perich MG, Miller LE, Gallego JA, Clopath C. De novo motor learning creates structure in neural activity space that shapes adaptation. bioRxiv [Preprint] 2023:2023.05.23.541925. PMID: 37293081; PMCID: PMC10245862; DOI: 10.1101/2023.05.23.541925.
Abstract
Animals can quickly adapt learned movements in response to external perturbations. Motor adaptation is likely influenced by an animal's existing movement repertoire, but the nature of this influence is unclear. Long-term learning causes lasting changes in neural connectivity which determine the activity patterns that can be produced. Here, we sought to understand how a neural population's activity repertoire, acquired through long-term learning, affects short-term adaptation by modeling motor cortical neural population dynamics during de novo learning and subsequent adaptation using recurrent neural networks. We trained these networks on different motor repertoires comprising varying numbers of movements. Networks with multiple movements had more constrained and robust dynamics, which were associated with more defined neural 'structure': organization created by the neural population activity patterns corresponding to each movement. This structure facilitated adaptation, but only when small changes in motor output were required, and when the structure of the network inputs, the neural activity space, and the perturbation were congruent. These results highlight trade-offs in skill acquisition and demonstrate how prior experience and external cues during learning can shape the geometrical properties of neural population activity as well as subsequent adaptation.
Collapse
Affiliation(s)
- Joanna C. Chang
- Department of Bioengineering, Imperial College London, London, UK
| | - Matthew G. Perich
- Département de neurosciences, Université de Montréal, Montréal, Canada
| | - Lee E. Miller
- Department of Neuroscience, Northwestern University, USA
- Department of Biomedical Engineering, Northwestern University, Evanston, IL, USA
- Department of Physical Medicine and Rehabilitation, Northwestern University, and Shirley Ryan Ability Lab, Chicago, IL, USA
| | - Juan A. Gallego
- Department of Bioengineering, Imperial College London, London, UK
| | - Claudia Clopath
- Department of Bioengineering, Imperial College London, London, UK
| |
Collapse
|
73
|
Bachschmid-Romano L, Hatsopoulos NG, Brunel N. Interplay between external inputs and recurrent dynamics during movement preparation and execution in a network model of motor cortex. eLife 2023; 12:77690. [PMID: 37166452 PMCID: PMC10174693 DOI: 10.7554/elife.77690] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Accepted: 03/09/2023] [Indexed: 05/12/2023] Open
Abstract
The primary motor cortex has been shown to coordinate movement preparation and execution through computations in approximately orthogonal subspaces. The underlying network mechanisms, and the roles played by external and recurrent connectivity, are central open questions that need to be answered to understand the neural substrates of motor control. We develop a recurrent neural network model that recapitulates the temporal evolution of neuronal activity recorded from the primary motor cortex of a macaque monkey during an instructed delayed-reach task. In particular, it reproduces the observed dynamic patterns of covariation between neural activity and the direction of motion. We explore the hypothesis that the observed dynamics emerge from a synaptic connectivity structure that depends on the preferred directions of neurons in both preparatory and movement-related epochs, and we constrain the strength of both synaptic connectivity and external input parameters from data. While the model can reproduce neural activity for multiple combinations of feedforward and recurrent connections, the solution that requires minimal external input is one in which the observed patterns of covariance are shaped by external inputs during movement preparation but are dominated by strong direction-specific recurrent connectivity during movement execution. Our model also demonstrates that the way in which single-neuron tuning properties change over time can explain the level of orthogonality of the preparatory and movement-related subspaces.
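The core mechanism described here, direction-tuned recurrent connectivity that amplifies weak external drive, can be illustrated with a toy linear rate network. The sketch below is not the paper's fitted model; the network size, coupling strength, and input amplitude are assumed values chosen only to show the effect.

```python
import numpy as np

rng = np.random.default_rng(2)
N, dt, T = 200, 0.01, 5.0
theta = rng.uniform(0, 2 * np.pi, N)            # preferred directions (illustrative)

# Rank-two recurrent coupling shaped by preferred directions (strength A is assumed)
A = 1.5
J = (A / N) * np.cos(theta[:, None] - theta[None, :])

def steady_state(J, input_amp, move_dir=0.0):
    """Euler-integrate the linear rate dynamics  dx/dt = -x + J x + input."""
    x = np.zeros(N)
    ext = input_amp * np.cos(theta - move_dir)   # direction-tuned external drive
    for _ in range(int(T / dt)):
        x = x + dt * (-x + J @ x + ext)
    return x

def pattern_amplitude(x):
    return np.abs(np.mean(x * np.exp(1j * theta)))

# The same weak input builds a much larger direction-specific pattern
# when strong recurrent structure is present.
print("no recurrence  :", pattern_amplitude(steady_state(0 * J, input_amp=0.1)))
print("with recurrence:", pattern_amplitude(steady_state(J, input_amp=0.1)))
```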
Collapse
Affiliation(s)
| | - Nicholas G Hatsopoulos
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, United States
- Committee on Computational Neuroscience, University of Chicago, Chicago, United States
| | - Nicolas Brunel
- Department of Neurobiology, Duke University, Durham, United States
- Department of Physics, Duke University, Durham, United States
- Duke Institute for Brain Sciences, Duke University, Durham, United States
- Center for Cognitive Neuroscience, Duke University, Durham, United States
| |
Collapse
|
74
|
Schneider S, Lee JH, Mathis MW. Learnable latent embeddings for joint behavioural and neural analysis. Nature 2023; 617:360-368. [PMID: 37138088 PMCID: PMC10172131 DOI: 10.1038/s41586-023-06031-6] [Citation(s) in RCA: 85] [Impact Index Per Article: 42.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Accepted: 03/28/2023] [Indexed: 05/05/2023]
Abstract
Mapping behavioural actions to neural activity is a fundamental goal of neuroscience. As our ability to record large-scale neural and behavioural data increases, there is growing interest in modelling neural dynamics during adaptive behaviours to probe neural representations1-3. In particular, although neural latent embeddings can reveal underlying correlates of behaviour, we lack nonlinear techniques that can explicitly and flexibly leverage joint behaviour and neural data to uncover neural dynamics3-5. Here, we fill this gap with a new encoding method, CEBRA, that jointly uses behavioural and neural data in a (supervised) hypothesis- or (self-supervised) discovery-driven manner to produce both consistent and high-performance latent spaces. We show that consistency can be used as a metric for uncovering meaningful differences, and the inferred latents can be used for decoding. We validate its accuracy and demonstrate our tool's utility for both calcium and electrophysiology datasets, across sensory and motor tasks and in simple or complex behaviours across species. It allows leveraging of single- and multi-session datasets for hypothesis testing or can be used label-free. Lastly, we show that CEBRA can be used for the mapping of space and the uncovering of complex kinematic features, produces consistent latent spaces across two-photon and Neuropixels data, and can provide rapid, high-accuracy decoding of natural videos from visual cortex.
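A minimal usage sketch is shown below, assuming the authors' open-source `cebra` Python package and its scikit-learn-style fit/transform interface; the file names and parameter values are placeholders, not part of the study.

```python
import numpy as np
from cebra import CEBRA  # assumes the published cebra package is installed

neural = np.load("neural_rates.npy")        # (time, neurons), hypothetical file
behavior = np.load("behavior_labels.npy")   # (time, behaviour), hypothetical file

model = CEBRA(output_dimension=3,   # dimensionality of the latent embedding
              max_iterations=5000,
              batch_size=512)

# Hypothesis-driven (supervised) mode: contrastive learning conditioned on behaviour
model.fit(neural, behavior)
embedding = model.transform(neural)         # (time, 3) consistent latent space

# Discovery-driven (self-supervised) mode would instead call model.fit(neural)
```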
Collapse
Affiliation(s)
- Steffen Schneider
- Brain Mind Institute & Neuro X Institute, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
| | - Jin Hwa Lee
- Brain Mind Institute & Neuro X Institute, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
| | - Mackenzie Weygandt Mathis
- Brain Mind Institute & Neuro X Institute, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland.
| |
Collapse
|
75
|
Affiliation(s)
- Max Dabagia
- School of Computer Science, Georgia Institute of Technology, Atlanta, GA, USA
| | - Konrad P Kording
- Department of Biomedical Engineering, University of Pennsylvania, Philadelphia, PA, USA
| | - Eva L Dyer
- Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA, USA.
| |
Collapse
|
76
|
Zou W, Li C, Huang H. Ensemble perspective for understanding temporal credit assignment. Phys Rev E 2023; 107:024307. [PMID: 36932505 DOI: 10.1103/physreve.107.024307] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2021] [Accepted: 01/24/2023] [Indexed: 06/18/2023]
Abstract
Recurrent neural networks are widely used for modeling spatiotemporal sequences in both natural language processing and neural population dynamics. However, understanding temporal credit assignment in these networks is hard. Here, we propose that each individual connection in the recurrent computation is modeled by a spike-and-slab distribution, rather than by a precise weight value. We then derive a mean-field algorithm to train the network at the ensemble level. The method is applied to classifying handwritten digits when pixels are read in sequence, and to the multisensory integration task that is a fundamental cognitive function of animals. Our model reveals important connections that determine the overall performance of the network. The model also shows how spatiotemporal information is processed through the hyperparameters of the distribution, and moreover reveals distinct types of emergent neural selectivity. To provide a mechanistic analysis of the ensemble learning, we first derive an analytic solution of the learning in the limit of an infinitely large network. We then carry out a low-dimensional projection of both neural and synaptic dynamics, analyze symmetry breaking in the parameter space, and finally demonstrate the role of stochastic plasticity in the recurrent computation. Our study thus sheds light on how weight uncertainty impacts temporal credit assignment in recurrent neural networks from the ensemble perspective.
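The ensemble view of connections can be made concrete with a short sketch of spike-and-slab weight sampling (the per-connection "on" probabilities and slab parameters are what the paper's mean-field algorithm learns; here they are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100                                      # recurrent units (illustrative)

# Spike-and-slab ensemble: with probability pi a connection is "on" and drawn
# from a Gaussian slab; otherwise it is exactly zero (the spike).
pi = rng.uniform(0.1, 0.9, size=(n, n))      # per-connection "on" probabilities
m = rng.normal(0.0, 0.1, size=(n, n))        # slab means
s = 0.05 * np.ones((n, n))                   # slab standard deviations

def sample_weights():
    on = rng.random((n, n)) < pi
    return on * rng.normal(m, s)

# Two samples from the same ensemble share the same statistical structure
W1, W2 = sample_weights(), sample_weights()
print("correlation between two weight samples:", np.corrcoef(W1.ravel(), W2.ravel())[0, 1])
```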
Collapse
Affiliation(s)
- Wenxuan Zou
- PMI Lab, School of Physics, Sun Yat-sen University, Guangzhou 510275, People's Republic of China
| | - Chan Li
- PMI Lab, School of Physics, Sun Yat-sen University, Guangzhou 510275, People's Republic of China
| | - Haiping Huang
- PMI Lab, School of Physics, Sun Yat-sen University, Guangzhou 510275, People's Republic of China
- Guangdong Provincial Key Laboratory of Magnetoelectric Physics and Devices, Sun Yat-sen University, Guangzhou 510275, People's Republic of China
| |
Collapse
|
77
|
Bordelon B, Pehlevan C. Population codes enable learning from few examples by shaping inductive bias. eLife 2022; 11:e78606. [PMID: 36524716 PMCID: PMC9839349 DOI: 10.7554/elife.78606] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2022] [Accepted: 12/15/2022] [Indexed: 12/23/2022] Open
Abstract
Learning from a limited number of experiences requires suitable inductive biases. To identify how inductive biases are implemented in and shaped by neural codes, we analyze sample-efficient learning of arbitrary stimulus-response maps from arbitrary neural codes with biologically-plausible readouts. We develop an analytical theory that predicts the generalization error of the readout as a function of the number of observed examples. Our theory illustrates in a mathematically precise way how the structure of population codes shapes inductive bias, and how a match between the code and the task is crucial for sample-efficient learning. It elucidates a bias to explain observed data with simple stimulus-response maps. Using recordings from the mouse primary visual cortex, we demonstrate the existence of an efficiency bias towards low-frequency orientation discrimination tasks for grating stimuli and low spatial frequency reconstruction tasks for natural images. We reproduce the discrimination bias in a simple model of primary visual cortex, and further show how invariances in the code to certain stimulus variations alter learning performance. We extend our methods to time-dependent neural codes and predict the sample efficiency of readouts from recurrent networks. We observe that many different codes can support the same inductive bias. By analyzing recordings from the mouse primary visual cortex, we demonstrate that biological codes have lower total activity than other codes with identical bias. Finally, we discuss implications of our theory in the context of recent developments in neuroscience and artificial intelligence. Overall, our study provides a concrete method for elucidating inductive biases of the brain and promotes sample-efficient learning as a general normative coding principle.
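The central object here, the generalization error of a readout as a function of the number of examples, can be estimated empirically with a few lines of Python. The sketch below uses a synthetic smooth population code and ridge readouts (the sizes and the two target functions are assumptions for illustration, not the paper's theory or data):

```python
import numpy as np

rng = np.random.default_rng(4)
n_neurons, n_stim = 200, 1000

# Illustrative smooth population code over a circular stimulus
stim = np.linspace(0, 2 * np.pi, n_stim, endpoint=False)
centers = rng.uniform(0, 2 * np.pi, n_neurons)
code = np.exp((np.cos(stim[:, None] - centers[None, :]) - 1) / 0.2)   # (stim, neurons)

def learning_curve(target, sample_sizes, n_repeats=20, ridge=1e-3):
    """Empirical generalization error of a ridge readout vs. number of examples."""
    errs = []
    for P in sample_sizes:
        e = 0.0
        for _ in range(n_repeats):
            idx = rng.choice(n_stim, size=P, replace=False)
            X, y = code[idx], target[idx]
            w = np.linalg.solve(X.T @ X + ridge * np.eye(n_neurons), X.T @ y)
            e += np.mean((code @ w - target) ** 2)
        errs.append(e / n_repeats)
    return errs

sizes = [4, 8, 16, 32, 64, 128]
print("low-frequency task :", learning_curve(np.cos(stim), sizes))
print("high-frequency task:", learning_curve(np.cos(8 * stim), sizes))
# The smooth code learns the low-frequency task from far fewer examples,
# illustrating the code-task match that shapes inductive bias.
```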
Collapse
Affiliation(s)
- Blake Bordelon
- John A Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, United States
- Center for Brain Science, Harvard University, Cambridge, United States
| | - Cengiz Pehlevan
- John A Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, United States
- Center for Brain Science, Harvard University, Cambridge, United States
| |
Collapse
|
78
|
Thura D, Cabana JF, Feghaly A, Cisek P. Integrated neural dynamics of sensorimotor decisions and actions. PLoS Biol 2022; 20:e3001861. [PMID: 36520685 PMCID: PMC9754259 DOI: 10.1371/journal.pbio.3001861] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2022] [Accepted: 09/29/2022] [Indexed: 12/23/2022] Open
Abstract
Recent theoretical models suggest that deciding about actions and executing them are not implemented by completely distinct neural mechanisms but are instead two modes of an integrated dynamical system. Here, we investigate this proposal by examining how neural activity unfolds during a dynamic decision-making task within the high-dimensional space defined by the activity of cells in monkey dorsal premotor (PMd), primary motor (M1), and dorsolateral prefrontal cortex (dlPFC) as well as the external and internal segments of the globus pallidus (GPe, GPi). Dimensionality reduction shows that the four strongest components of neural activity are functionally interpretable, reflecting a state transition between deliberation and commitment, the transformation of sensory evidence into a choice, and the baseline and slope of the rising urgency to decide. Analysis of the contribution of each population to these components shows meaningful differences between regions but no distinct clusters within each region, consistent with an integrated dynamical system. During deliberation, cortical activity unfolds on a two-dimensional "decision manifold" defined by sensory evidence and urgency and falls off this manifold at the moment of commitment into a choice-dependent trajectory leading to movement initiation. The structure of the manifold varies between regions: In PMd, it is curved; in M1, it is nearly perfectly flat; and in dlPFC, it is almost entirely confined to the sensory evidence dimension. In contrast, pallidal activity during deliberation is primarily defined by urgency. We suggest that these findings reveal the distinct functional contributions of different brain regions to an integrated dynamical system governing action selection and execution.
Collapse
Affiliation(s)
- David Thura
- Groupe de recherche sur la signalisation neurale et la circuiterie, Department of Neuroscience, Université de Montréal, Montréal, Québec, Canada
| | - Jean-François Cabana
- Groupe de recherche sur la signalisation neurale et la circuiterie, Department of Neuroscience, Université de Montréal, Montréal, Québec, Canada
| | - Albert Feghaly
- Groupe de recherche sur la signalisation neurale et la circuiterie, Department of Neuroscience, Université de Montréal, Montréal, Québec, Canada
| | - Paul Cisek
- Groupe de recherche sur la signalisation neurale et la circuiterie, Department of Neuroscience, Université de Montréal, Montréal, Québec, Canada
| |
Collapse
|
79
|
Xing D, Truccolo W, Borton DA. Emergence of Distinct Neural Subspaces in Motor Cortical Dynamics during Volitional Adjustments of Ongoing Locomotion. J Neurosci 2022; 42:9142-9157. [PMID: 36283830 PMCID: PMC9761674 DOI: 10.1523/jneurosci.0746-22.2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2022] [Revised: 10/10/2022] [Accepted: 10/12/2022] [Indexed: 01/07/2023] Open
Abstract
The ability to modulate ongoing walking gait with precise, voluntary adjustments is what allows animals to navigate complex terrains. However, how the nervous system generates the signals to precisely control the limbs while simultaneously maintaining locomotion is poorly understood. One potential strategy is to distribute the neural activity related to these two functions into distinct cortical activity coactivation subspaces so that both may be conducted simultaneously without disruptive interference. To investigate this hypothesis, we recorded the activity of primary motor cortex in male nonhuman primates during obstacle avoidance on a treadmill. We found that the same neural population was active during both basic unobstructed locomotion and volitional obstacle avoidance movements. We identified the neural modes spanning the subspace of the low-dimensional dynamics in primary motor cortex and found a subspace that consistently maintains the same cyclic activity throughout obstacle stepping, despite large changes in the movement itself. All of the variance corresponding to this large change in movement during the obstacle avoidance was confined to its own distinct subspace. Furthermore, neural decoders built for ongoing locomotion did not generalize to decoding obstacle avoidance during locomotion. Our findings suggest that separate underlying subspaces emerge during complex locomotion that coordinates ongoing locomotor-related neural dynamics with volitional gait adjustments. These findings may have important implications for the development of brain-machine interfaces. SIGNIFICANCE STATEMENT Locomotion and precise, goal-directed movements are two distinct movement modalities with known differing requirements of motor cortical input. Previous studies have characterized the cortical activity during obstacle avoidance while walking in rodents and felines, but, to date, no such studies have been completed in primates. Additionally, in any animal model, it is unknown how these two movements are represented in primary motor cortex (M1) low-dimensional dynamics when both activities are performed at the same time, such as during obstacle avoidance. We developed a novel obstacle avoidance paradigm in freely moving nonhuman primates and discovered that the rhythmic locomotion-related dynamics and the voluntary, gait-adjustment movement separate into distinct subspaces in M1 cortical activity. Our analysis of decoding generalization may also have important implications for the development of brain-machine interfaces.
Collapse
Affiliation(s)
- David Xing
- School of Engineering, Brown University, Providence, Rhode Island 02912
| | - Wilson Truccolo
- Department of Neuroscience, Brown University, Providence, Rhode Island 02912
- Carney Institute for Brain Science, Brown University, Providence, Rhode Island 02912
| | - David A Borton
- School of Engineering, Brown University, Providence, Rhode Island 02912
- Carney Institute for Brain Science, Brown University, Providence, Rhode Island 02912
- Center for Neurorestoration & Neurotechnology, Rehabilitation Research and Development Service, Department of Veterans Affairs, Providence, Rhode Island 02908
| |
Collapse
|
80
|
Melbaum S, Russo E, Eriksson D, Schneider A, Durstewitz D, Brox T, Diester I. Conserved structures of neural activity in sensorimotor cortex of freely moving rats allow cross-subject decoding. Nat Commun 2022; 13:7420. [PMID: 36456557 PMCID: PMC9715555 DOI: 10.1038/s41467-022-35115-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2021] [Accepted: 11/17/2022] [Indexed: 12/04/2022] Open
Abstract
Our knowledge about neuronal activity in the sensorimotor cortex relies primarily on stereotyped movements that are strictly controlled in experimental settings. It remains unclear how results can be carried over to less constrained behavior like that of freely moving subjects. Toward this goal, we developed a self-paced behavioral paradigm that encouraged rats to engage in different movement types. We employed bilateral electrophysiological recordings across the entire sensorimotor cortex and simultaneous paw tracking. These techniques revealed behavioral coupling of neurons with lateralization and an anterior-posterior gradient from the premotor to the primary sensory cortex. The structure of population activity patterns was conserved across animals despite the severe under-sampling of the total number of neurons and variations in electrode positions across individuals. We demonstrated cross-subject and cross-session generalization in a decoding task through alignments of low-dimensional neural manifolds, providing evidence of a conserved neuronal code.
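Cross-subject decoding via alignment of low-dimensional manifolds can be sketched with PCA followed by CCA, as below. The surrogate "two subjects" share a latent process by construction; the dimensions, noise, and the PCA/CCA choices are illustrative assumptions rather than the paper's exact pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

# Surrogate "two subjects": same latent behavioural dynamics, different neurons
T, d = 600, 4
latent = rng.normal(size=(T, d))
labels = (latent[:, 0] > 0).astype(int)                    # behaviour class per time bin
rates_a = latent @ rng.normal(size=(d, 90)) + 0.5 * rng.normal(size=(T, 90))
rates_b = latent @ rng.normal(size=(d, 70)) + 0.5 * rng.normal(size=(T, 70))

# Low-dimensional manifolds per subject, then CCA alignment between them
Za = PCA(n_components=d).fit_transform(rates_a)
Zb = PCA(n_components=d).fit_transform(rates_b)
cca = CCA(n_components=d).fit(Za, Zb)
Za_c, Zb_c = cca.transform(Za, Zb)

# A decoder trained on subject A transfers to subject B after alignment
clf = LogisticRegression().fit(Za_c, labels)
print("cross-subject accuracy (aligned):", clf.score(Zb_c, labels))
```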
Collapse
Affiliation(s)
- Svenja Melbaum
- Computer Vision Group, Department of Computer Science, University of Freiburg, 79110, Freiburg, Germany
- IMBIT//BrainLinks-BrainTools, University of Freiburg, Georges-Köhler-Allee 201, 79110, Freiburg, Germany
| | - Eleonora Russo
- Department of Psychiatry and Psychotherapy, University Medical Center, Johannes Gutenberg University, 55131, Mainz, Germany
- Department of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, 68159, Mannheim, Germany
| | - David Eriksson
- IMBIT//BrainLinks-BrainTools, University of Freiburg, Georges-Köhler-Allee 201, 79110, Freiburg, Germany
- Optophysiology Lab, Faculty of Biology, University of Freiburg, 79110, Freiburg, Germany
| | - Artur Schneider
- IMBIT//BrainLinks-BrainTools, University of Freiburg, Georges-Köhler-Allee 201, 79110, Freiburg, Germany
- Optophysiology Lab, Faculty of Biology, University of Freiburg, 79110, Freiburg, Germany
| | - Daniel Durstewitz
- Department of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, 68159, Mannheim, Germany
| | - Thomas Brox
- Computer Vision Group, Department of Computer Science, University of Freiburg, 79110, Freiburg, Germany
- IMBIT//BrainLinks-BrainTools, University of Freiburg, Georges-Köhler-Allee 201, 79110, Freiburg, Germany
| | - Ilka Diester
- IMBIT//BrainLinks-BrainTools, University of Freiburg, Georges-Köhler-Allee 201, 79110, Freiburg, Germany.
- Optophysiology Lab, Faculty of Biology, University of Freiburg, 79110, Freiburg, Germany.
- Bernstein Center Freiburg, University of Freiburg, 79104, Freiburg, Germany.
| |
Collapse
|
81
|
De Novo Brain-Computer Interfacing Deforms Manifold of Populational Neural Activity Patterns in Human Cerebral Cortex. eNeuro 2022; 9:ENEURO.0145-22.2022. [PMID: 36376067 PMCID: PMC9721308 DOI: 10.1523/eneuro.0145-22.2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Revised: 10/27/2022] [Accepted: 11/02/2022] [Indexed: 11/15/2022] Open
Abstract
Human brains are capable of modulating innate activities to adapt to novel environments and tasks; for the sensorimotor neural system, this means acquiring a rich repertoire of activity patterns that improve behavioral performance. To directly map the process of acquiring this neural repertoire during tasks onto performance improvement, we analyzed net neural population activity during the learning of its voluntary modulation by brain-computer interface (BCI) operation in female and male humans. The recorded whole-head high-density scalp electroencephalograms (EEGs) were subjected to a dimensionality reduction algorithm to capture changes in cortical activity patterns represented by the synchronization of neuronal oscillations during adaptation. Although the preserved variance of targeted features in the reduced dimensions was 20%, we found systematic interactions between the activity patterns and the BCI classifiers that detected motor attempts; the neural manifold derived in the embedded space was stretched along motor-related features of the EEG by model-based fixed classifiers, but not by adaptive classifiers that were constantly recalibrated to user activity. Moreover, the manifold was deformed to be orthogonal to the decision boundary by de novo classifiers with a fixed boundary based on biologically unnatural features. Collectively, the flexibility of human cortical signaling patterns (i.e., neural plasticity) is induced only by operation of a BCI whose classifier requires fixed activities, and the adaptation can be induced even when the requirement is not consistent with biologically natural responses. These principles of neural adaptation at a macroscopic level may underlie the ability of humans to learn wide-ranging behavioral repertoires and adapt to novel environments.
Collapse
|
82
|
Khona M, Fiete IR. Attractor and integrator networks in the brain. Nat Rev Neurosci 2022; 23:744-766. [DOI: 10.1038/s41583-022-00642-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/22/2022] [Indexed: 11/06/2022]
|
83
|
Cometa A, Falasconi A, Biasizzo M, Carpaneto J, Horn A, Mazzoni A, Micera S. Clinical neuroscience and neurotechnology: An amazing symbiosis. iScience 2022; 25:105124. [PMID: 36193050 PMCID: PMC9526189 DOI: 10.1016/j.isci.2022.105124] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
In the last decades, clinical neuroscience found a novel ally in neurotechnologies, devices able to record and stimulate electrical activity in the nervous system. These technologies improved the ability to diagnose and treat neural disorders. Neurotechnologies are concurrently enabling a deeper understanding of healthy and pathological dynamics of the nervous system through stimulation and recordings during brain implants. On the other hand, clinical neurosciences are not only driving neuroengineering toward the most relevant clinical issues, but are also shaping the neurotechnologies thanks to clinical advancements. For instance, understanding the etiology of a disease informs the location of a therapeutic stimulation, but also the way stimulation patterns should be designed to be more effective/naturalistic. Here, we describe cases of fruitful integration such as Deep Brain Stimulation and cortical interfaces to highlight how this symbiosis between clinical neuroscience and neurotechnology is closer to a novel integrated framework than to a simple interdisciplinary interaction.
Collapse
Affiliation(s)
- Andrea Cometa
- The Biorobotics Institute, Scuola Superiore Sant’Anna, 56127 Pisa, Italy
- Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, 56127 Pisa, Italy
| | - Antonio Falasconi
- Friedrich Miescher Institute for Biomedical Research, 4058 Basel, Switzerland
- Biozentrum, University of Basel, 4056 Basel, Switzerland
| | - Marco Biasizzo
- The Biorobotics Institute, Scuola Superiore Sant’Anna, 56127 Pisa, Italy
- Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, 56127 Pisa, Italy
| | - Jacopo Carpaneto
- The Biorobotics Institute, Scuola Superiore Sant’Anna, 56127 Pisa, Italy
- Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, 56127 Pisa, Italy
| | - Andreas Horn
- Center for Brain Circuit Therapeutics Department of Neurology Brigham & Women’s Hospital, Harvard Medical School, Boston, MA 02115, USA
- MGH Neurosurgery & Center for Neurotechnology and Neurorecovery (CNTR) at MGH Neurology Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA
- Movement Disorder and Neuromodulation Unit, Department of Neurology, Charité – Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt- Universität zu Berlin, Department of Neurology, 10117 Berlin, Germany
| | - Alberto Mazzoni
- The Biorobotics Institute, Scuola Superiore Sant’Anna, 56127 Pisa, Italy
- Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, 56127 Pisa, Italy
| | - Silvestro Micera
- The Biorobotics Institute, Scuola Superiore Sant’Anna, 56127 Pisa, Italy
- Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, 56127 Pisa, Italy
- Translational Neural Engineering Lab, School of Engineering, École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
| |
Collapse
|
84
|
Awasthi P, Lin TH, Bae J, Miller LE, Danziger ZC. Validation of a non-invasive, real-time, human-in-the-loop model of intracortical brain-computer interfaces. J Neural Eng 2022; 19:056038. [PMID: 36198278 PMCID: PMC9855658 DOI: 10.1088/1741-2552/ac97c3] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2022] [Accepted: 10/05/2022] [Indexed: 01/26/2023]
Abstract
Objective. Despite the tremendous promise of invasive brain-computer interfaces (iBCIs), the associated study costs, risks, and ethical considerations limit the opportunity to develop and test the algorithms that decode neural activity into a user's intentions. Our goal was to address this challenge by designing an iBCI model capable of testing many human subjects in closed-loop. Approach. We developed an iBCI model that uses artificial neural networks (ANNs) to translate human finger movements into realistic motor cortex firing patterns, which can then be decoded in real time. We call the model the joint angle BCI, or jaBCI. jaBCI allows readily recruited, healthy subjects to perform closed-loop iBCI tasks using any neural decoder, preserving subjects' control-relevant short-latency error correction and learning dynamics. Main results. We validated jaBCI offline through emulated neuron firing statistics, confirming that emulated neural signals have firing rates, low-dimensional PCA geometry, and rotational jPCA dynamics that are quite similar to the actual neurons (recorded in monkey M1) on which we trained the ANN. We also tested jaBCI in closed-loop experiments, our single study examining roughly as many subjects as have been tested world-wide with iBCIs (n = 25). Performance was consistent with that of the paralyzed, human iBCI users with implanted intracortical electrodes. jaBCI allowed us to imitate the experimental protocols (e.g. the same velocity Kalman filter decoder and center-out task) and compute the same seven behavioral measures used in three critical studies. Significance. These encouraging results suggest the jaBCI's real-time firing rate emulation is a useful means to provide statistically robust sample sizes for rapid prototyping and optimization of decoding algorithms, the study of bi-directional learning in iBCIs, and improving iBCI control.
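For readers unfamiliar with the velocity Kalman filter decoder mentioned above, here is a generic, self-contained sketch of that class of decoder on synthetic data (a textbook-style implementation, not the jaBCI code; variable names and the toy data are assumptions):

```python
import numpy as np

class VelocityKalmanDecoder:
    """Minimal velocity Kalman filter of the kind used in iBCI studies (sketch)."""

    def fit(self, rates, velocity):
        # Observation model: rates_t ~ H @ velocity_t + noise
        self.H = np.linalg.lstsq(velocity, rates, rcond=None)[0].T
        self.Q = np.cov((rates - velocity @ self.H.T).T)
        # State model: velocity_t ~ A @ velocity_{t-1} + noise
        self.A = np.linalg.lstsq(velocity[:-1], velocity[1:], rcond=None)[0].T
        self.W = np.cov((velocity[1:] - velocity[:-1] @ self.A.T).T)
        return self

    def decode(self, rates):
        d = self.A.shape[0]
        x, P, out = np.zeros(d), np.eye(d), []
        for y in rates:
            x, P = self.A @ x, self.A @ P @ self.A.T + self.W        # predict
            S = self.H @ P @ self.H.T + self.Q
            K = P @ self.H.T @ np.linalg.pinv(S)                     # Kalman gain
            x = x + K @ (y - self.H @ x)                             # update
            P = (np.eye(d) - K @ self.H) @ P
            out.append(x.copy())
        return np.array(out)

# Tiny synthetic check: 2-D "velocity" driving 40 noisy channels
rng = np.random.default_rng(6)
vel = np.cumsum(rng.normal(size=(500, 2)), axis=0) * 0.01
rates = vel @ rng.normal(size=(2, 40)) + 0.1 * rng.normal(size=(500, 40))
dec = VelocityKalmanDecoder().fit(rates, vel)
print("decoded vs. true velocity, r =", np.corrcoef(dec.decode(rates)[:, 0], vel[:, 0])[0, 1])
```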
Collapse
Affiliation(s)
- Peeyush Awasthi
- Department of Biomedical Engineering, Florida International University, Miami, FL, United States of America
| | - Tzu-Hsiang Lin
- Department of Biomedical Engineering, Florida International University, Miami, FL, United States of America
| | - Jihye Bae
- Department of Electrical and Computer Engineering, University of Kentucky, Lexington, KY, United States
| | - Lee E Miller
- Department of Neuroscience, Physical Medicine, and Rehabilitation, Northwestern University, Chicago, IL, United States
| | - Zachary C Danziger
- Department of Biomedical Engineering, Florida International University, Miami, FL, United States of America. Author to whom any correspondence should be addressed
| |
Collapse
|
85
|
Vaccari FE, Diomedi S, Filippini M, Hadjidimitrakis K, Fattori P. New insights on single-neuron selectivity in the era of population-level approaches. Front Integr Neurosci 2022; 16:929052. [PMID: 36249900 PMCID: PMC9554653 DOI: 10.3389/fnint.2022.929052] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2022] [Accepted: 09/02/2022] [Indexed: 11/13/2022] Open
Abstract
In the past, neuroscience focused on individual neurons, seen as the functional units of the nervous system, but over time this approach fell short of accounting for new experimental evidence, especially concerning associative and motor cortices. For this reason, and thanks to great technological advances, a part of modern research has shifted its focus from the responses of single neurons to the activity of neural ensembles, now considered the real functional units of the system. However, on a microscale, individual neurons remain the computational components of these networks, so the study of population dynamics cannot be separated from the study of the individual neurons that form its natural substrate. In this new framework, ideas such as the capability of single cells to encode a specific stimulus (neural selectivity) may become obsolete and need to be profoundly revised. One step in this direction was made by introducing the concept of “mixed selectivity,” the capacity of single cells to integrate multiple variables in a flexible way, allowing individual neurons to participate in different networks. In this review, we outline the most important features of mixed selectivity and present recent work demonstrating its presence in the associative areas of the posterior parietal cortex. Finally, in discussing these findings, we present some open questions that could be addressed by future studies.
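A common operational test for the nonlinear mixed selectivity discussed in this review is whether an interaction term between task variables improves a single neuron's tuning model. Below is a minimal sketch on a synthetic neuron (all variables and coefficients are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
n_trials = 400

# Two task variables (e.g., target identity and a continuous reach parameter)
var_a = rng.choice([0.0, 1.0], n_trials)
var_b = rng.uniform(-1, 1, n_trials)

# A synthetic neuron with nonlinear mixed selectivity: the interaction term matters
rate = 2.0 + 1.5 * var_a + 0.8 * var_b + 2.5 * var_a * var_b \
       + 0.5 * rng.normal(size=n_trials)

def r_squared(predictors, y):
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print("linear model      R^2:", r_squared([var_a, var_b], rate))
print("with interaction  R^2:", r_squared([var_a, var_b, var_a * var_b], rate))
# A large gain from the interaction term is the signature of nonlinear mixed selectivity.
```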
Collapse
Affiliation(s)
| | - Stefano Diomedi
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
| | - Matteo Filippini
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Alma Mater Research Institute for Human-Centered Artificial Intelligence, University of Bologna, Bologna, Italy
- *Correspondence: Patrizia Fattori
| | | | - Patrizia Fattori
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Alma Mater Research Institute for Human-Centered Artificial Intelligence, University of Bologna, Bologna, Italy
| |
Collapse
|
86
|
Loriette C, Amengual JL, Ben Hamed S. Beyond the brain-computer interface: Decoding brain activity as a tool to understand neuronal mechanisms subtending cognition and behavior. Front Neurosci 2022; 16:811736. [PMID: 36161174 PMCID: PMC9492914 DOI: 10.3389/fnins.2022.811736] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2021] [Accepted: 08/23/2022] [Indexed: 11/13/2022] Open
Abstract
One of the major challenges in systems neuroscience consists in developing techniques for estimating the cognitive information content of brain activity. This has enormous potential in domains spanning clinical applications and cognitive enhancement to a better understanding of the neural bases of cognition. In this context, the inclusion of machine learning techniques to decode different aspects of human cognition and behavior, and their use to develop brain-computer interfaces for applications in neuroprosthetics, has supported a genuine revolution in the field. However, while these approaches have proven quite successful for the study of motor and sensory functions, success is still far from being reached when it comes to covert cognitive functions such as attention, motivation and decision making. While improvement in this field of BCIs is growing fast, a new research focus has emerged from the development of strategies for decoding neural activity. In this review, we explore how advances in the decoding of brain activity are becoming a major neuroscience tool, moving forward our understanding of brain functions and providing a robust theoretical framework to test predictions on the relationship between brain activity and cognition and behavior.
Collapse
Affiliation(s)
- Célia Loriette
- Institut des Sciences Cognitives Marc Jeannerod, CNRS UMR 5229, Université Claude Bernard Lyon 1, Bron, France
| | | | - Suliann Ben Hamed
- Institut des Sciences Cognitives Marc Jeannerod, CNRS UMR 5229, Université Claude Bernard Lyon 1, Bron, France
| |
Collapse
|
87
|
Zhao C, Zhan L, Thompson PM, Huang H. Predicting Spatio-Temporal Human Brain Response Using fMRI. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2022; 13431:336-345. [PMID: 39051032 PMCID: PMC11267033 DOI: 10.1007/978-3-031-16431-6_32] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/27/2024]
Abstract
The transformation and transmission of brain stimuli reflect dynamical brain activity in space and time. Compared with functional magnetic resonance imaging (fMRI), magneto- or electroencephalography (M/EEG) couples rapidly to neural activity through the generated electromagnetic fields. However, the MEG signal is inhomogeneous across the brain, being affected by the signal-to-noise ratio and by sensor location and distance. Current non-invasive neuroimaging modalities such as fMRI and M/EEG offer high resolution in space or in time, but not in both. To address the main limitations of current techniques for recording brain activity, we propose a novel recurrent memory optimization approach to predict internal behavioral states in space and time. The proposed method uses Optimal Polynomial Projections to capture long temporal history with robust online compression. The training process takes pairs of fMRI and MEG data as inputs and predicts the recurrent brain states through a Siamese network. In the testing process, the framework uses only fMRI data to generate the corresponding neural response in space and time. Experimental results on the Human Connectome Project (HCP) show that the predicted signal reflects neural activity with spatial resolution comparable to fMRI and temporal resolution comparable to MEG. These results demonstrate for the first time that the proposed method can predict the brain response at both millisecond and millimeter scales using only the fMRI signal.
Collapse
Affiliation(s)
- Chongyue Zhao
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA, USA
| | - Liang Zhan
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA, USA
| | - Paul M. Thompson
- Imaging Genetics Center, University of Southern California, Los Angeles, CA, USA
| | - Heng Huang
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA, USA
| |
Collapse
|
88
|
Driscoll LN, Duncker L, Harvey CD. Representational drift: Emerging theories for continual learning and experimental future directions. Curr Opin Neurobiol 2022; 76:102609. [PMID: 35939861 DOI: 10.1016/j.conb.2022.102609] [Citation(s) in RCA: 53] [Impact Index Per Article: 17.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Revised: 06/08/2022] [Accepted: 06/23/2022] [Indexed: 11/03/2022]
Abstract
Recent work has revealed that the neural activity patterns correlated with sensation, cognition, and action often are not stable and instead undergo large-scale changes over days and weeks, a phenomenon called representational drift. Here, we highlight recent observations of drift, how drift is unlikely to be explained by experimental confounds, and how the brain can likely compensate for drift to allow stable computation. We propose that drift might have important roles in neural computation to allow continual learning, both for separating and relating memories that occur at distinct times. Finally, we present an outlook on future experimental directions that are needed to further characterize drift and to test emerging theories for drift's role in computation.
Collapse
Affiliation(s)
- Laura N Driscoll
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA.
| | - Lea Duncker
- Howard Hughes Medical Institute, Stanford University, Stanford, CA, USA.
| | | |
Collapse
|
89
|
Inagaki HK, Chen S, Daie K, Finkelstein A, Fontolan L, Romani S, Svoboda K. Neural Algorithms and Circuits for Motor Planning. Annu Rev Neurosci 2022; 45:249-271. [PMID: 35316610 DOI: 10.1146/annurev-neuro-092021-121730] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
The brain plans and executes volitional movements. The underlying patterns of neural population activity have been explored in the context of movements of the eyes, limbs, tongue, and head in nonhuman primates and rodents. How do networks of neurons produce the slow neural dynamics that prepare specific movements and the fast dynamics that ultimately initiate these movements? Recent work exploits rapid and calibrated perturbations of neural activity to test specific dynamical systems models that are capable of producing the observed neural activity. These joint experimental and computational studies show that cortical dynamics during motor planning reflect fixed points of neural activity (attractors). Subcortical control signals reshape and move attractors over multiple timescales, causing commitment to specific actions and rapid transitions to movement execution. Experiments in rodents are beginning to reveal how these algorithms are implemented at the level of brain-wide neural circuits.
Collapse
Affiliation(s)
| | - Susu Chen
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, USA
| | - Kayvon Daie
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, USA
- Allen Institute for Neural Dynamics, Seattle, Washington, USA
| | - Arseny Finkelstein
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, USA
- Department of Physiology and Pharmacology, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv-Yafo, Israel
| | - Lorenzo Fontolan
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, USA
| | - Sandro Romani
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, USA
| | - Karel Svoboda
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, USA
- Allen Institute for Neural Dynamics, Seattle, Washington, USA
| |
Collapse
|
90
|
Hu Y, Sompolinsky H. The spectrum of covariance matrices of randomly connected recurrent neuronal networks with linear dynamics. PLoS Comput Biol 2022; 18:e1010327. [PMID: 35862445 PMCID: PMC9345493 DOI: 10.1371/journal.pcbi.1010327] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2021] [Revised: 08/02/2022] [Accepted: 06/24/2022] [Indexed: 11/18/2022] Open
Abstract
A key question in theoretical neuroscience is the relation between the connectivity structure and the collective dynamics of a network of neurons. Here we study the connectivity-dynamics relation as reflected in the distribution of eigenvalues of the covariance matrix of the dynamic fluctuations of the neuronal activities, which is closely related to the network dynamics’ Principal Component Analysis (PCA) and the associated effective dimensionality. We consider the spontaneous fluctuations around a steady state in a randomly connected recurrent network of stochastic neurons. An exact analytical expression for the covariance eigenvalue distribution in the large-network limit can be obtained using results from random matrices. The distribution has a finitely supported smooth bulk spectrum and exhibits an approximate power-law tail for coupling matrices near the critical edge. We generalize the results to include second-order connectivity motifs and discuss extensions to excitatory-inhibitory networks. The theoretical results are compared with those from finite-size networks and the effects of temporal and spatial sampling are studied. Preliminary application to whole-brain imaging data is presented. Using simple connectivity models, our work provides theoretical predictions for the covariance spectrum, a fundamental property of recurrent neuronal dynamics, that can be compared with experimental data. Here we study the distribution of eigenvalues, or spectrum, of the neuron-to-neuron covariance matrix in recurrently connected neuronal networks. The covariance spectrum is an important global feature of neuron population dynamics that requires simultaneous recordings of neurons. The spectrum is essential to the widely used Principal Component Analysis (PCA) and generalizes the dimensionality measure of population dynamics. We use a simple model to emulate the complex connections between neurons, where all pairs of neurons interact linearly at a strength specified randomly and independently. We derive a closed-form expression of the covariance spectrum, revealing an interesting long tail of large eigenvalues following a power law as the connection strength increases. To incorporate connectivity features important to biological neural circuits, we generalize the result to networks with an additional low-rank connectivity component that could come from learning and networks consisting of sparsely connected excitatory and inhibitory neurons. To facilitate comparing the theoretical results to experimental data, we derive the precise modifications needed to account for the effect of limited time samples and having unobserved neurons. Preliminary applications to large-scale calcium imaging data suggest our model can well capture the high dimensional population activity of neurons.
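The covariance spectrum of a randomly connected linear rate network can be computed numerically and compared against such analytical predictions. The sketch below uses a standard linear stochastic rate model and a Lyapunov-equation solution for the stationary covariance (the network size, coupling strength g, and noise power are assumed values; the paper's exact model may be parameterized differently):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(8)
N, g, sigma2 = 400, 0.8, 1.0         # network size, coupling strength, noise power

# Random recurrent coupling; g < 1 keeps the linear dynamics stable
W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
A = -np.eye(N) + g * W               # dx/dt = A x + noise

# Stationary covariance solves the Lyapunov equation  A C + C A^T + 2*sigma2*I = 0
C = solve_continuous_lyapunov(A, -2.0 * sigma2 * np.eye(N))
C = 0.5 * (C + C.T)                  # symmetrize against numerical error

eigs = np.sort(np.linalg.eigvalsh(C))[::-1]
dim = eigs.sum() ** 2 / (eigs ** 2).sum()        # participation-ratio dimensionality
print("largest / median covariance eigenvalue:", eigs[0] / np.median(eigs))
print("effective dimensionality:", dim)
# Pushing g toward 1 stretches the spectrum into the heavy tail the theory predicts.
```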
Collapse
Affiliation(s)
- Yu Hu
- Department of Mathematics and Division of Life Science, The Hong Kong University of Science and Technology, Hong Kong SAR, China
- * E-mail: (YH); (HS)
| | - Haim Sompolinsky
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Center for Brain Science, Harvard University, Cambridge, Massachusetts, United States of America
- * E-mail: (YH); (HS)
| |
Collapse
|
91
|
Abstract
The solutions neural networks find to solve a task are often inscrutable. We have had little insight into why particular structure emerges in a network. By reverse-engineering neural networks from dynamical principles, Dubreuil, Valente et al. reveal how neural population structure enables computational flexibility.
Collapse
Affiliation(s)
| | - Siyan Zhou
- Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
| | - Kanaka Rajan
- Icahn School of Medicine at Mount Sinai, New York, NY, USA.
| |
Collapse
|
92
|
Masset P, Qin S, Zavatone-Veth JA. Drifting neuronal representations: Bug or feature? BIOLOGICAL CYBERNETICS 2022; 116:253-266. [PMID: 34993613 DOI: 10.1007/s00422-021-00916-3] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/25/2021] [Accepted: 11/17/2021] [Indexed: 06/14/2023]
Abstract
The brain displays a remarkable ability to sustain stable memories, allowing animals to execute precise behaviors or recall stimulus associations years after they were first learned. Yet, recent long-term recording experiments have revealed that single-neuron representations continuously change over time, contravening the classical assumption that learned features remain static. How do unstable neural codes support robust perception, memories, and actions? Here, we review recent experimental evidence for such representational drift across brain areas, as well as dissections of its functional characteristics and underlying mechanisms. We emphasize theoretical proposals for how drift need not only be a form of noise for which the brain must compensate. Rather, it can emerge from computationally beneficial mechanisms in hierarchical networks performing robust probabilistic computations.
Collapse
Affiliation(s)
- Paul Masset
- Center for Brain Science, Harvard University, Cambridge, MA, USA.
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, USA.
| | - Shanshan Qin
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
| | - Jacob A Zavatone-Veth
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Department of Physics, Harvard University, Cambridge, MA, USA
| |
Collapse
|
93
|
Amini E, Yusof A, Riek S, Selvanayagam VS. Interaction of hand orientations during familiarization of a goal-directed aiming task. Hum Mov Sci 2022; 83:102955. [PMID: 35487099 DOI: 10.1016/j.humov.2022.102955] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2021] [Revised: 12/21/2021] [Accepted: 04/18/2022] [Indexed: 11/17/2022]
Abstract
The purpose of the present study was to examine errors in an isometric goal-directed aiming task during familiarization at different hand orientations. The interaction between neutral and pronated hand orientations, with and without directional feedback, provides insight into short-term adaptations and the nature of control. In this study, 30 healthy right-handed adults (age, 22.7 ± 3.1 years; weight, 69.4 ± 16.6 kg; height, 166.7 ± 7.9 cm) were randomly assigned to neutral or pronated hand orientation conditions. To assess familiarization, participants performed ten sets (16 targets/set) of a goal-directed aiming task with continuous visual feedback towards targets symmetrically distributed about the origin. Following familiarization, participants completed eight sets: four with and four without directional feedback, in alternating order. For both hand orientations, directional errors were reduced in the first two sets (p < 0.05), suggesting only three sets were required for familiarization. Additionally, the learning rate was similar for both hand orientations. Following familiarization, aiming errors without feedback were significantly higher than with feedback, while no change between sets was observed, regardless of hand orientation. Aiming errors were reduced in the early phase with and without visual feedback; however, in the late phase, errors were corrected when visual feedback was provided. This suggests that hand orientation does not affect familiarization, and that mechanisms similar to rapid learning may be involved. It is probable that learning is consolidated during familiarization along with feedforward input to maintain performance. In addition, proprioceptive feedback plays a role in reducing errors early, while online visual feedback plays a role in reducing errors later, independent of hand orientation.
Collapse
Affiliation(s)
- Elaheh Amini
- Centre for Sport and Exercise Sciences, University of Malaya, 50603 Kuala Lumpur, Malaysia
| | - Ashril Yusof
- Centre for Sport and Exercise Sciences, University of Malaya, 50603 Kuala Lumpur, Malaysia
| | - Stephan Riek
- Graduate Research School, University of the Sunshine Coast, Locked Bag 4, Maroochydore DC 4558, Queensland, Australia; School of Human Movement and Nutrition Science, The University of Queensland, St Lucia 4072, Australia
| | | |
Collapse
|
94
|
Sheng J, Zhang L, Liu C, Liu J, Feng J, Zhou Y, Hu H, Xue G. Higher-dimensional neural representations predict better episodic memory. SCIENCE ADVANCES 2022; 8:eabm3829. [PMID: 35442734 PMCID: PMC9020666 DOI: 10.1126/sciadv.abm3829] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/14/2021] [Accepted: 03/03/2022] [Indexed: 06/14/2023]
Abstract
Episodic memory enables humans to encode and later vividly retrieve information about our rich experiences, yet the neural representations that support this mental capacity are poorly understood. Using a large fMRI dataset (n = 468) of face-name associative memory tasks and principal component analysis to examine neural representational dimensionality (RD), we found that the human brain maintained a high-dimensional representation of faces through hierarchical representation within and beyond the face-selective regions. Critically, greater RD was associated with better subsequent memory performance both within and across participants, and this association was specific to episodic memory but not general cognitive abilities. Furthermore, the frontoparietal activities could suppress the shared low-dimensional fluctuations and reduce the correlations of local neural responses, resulting in greater RD. RD was not associated with the degree of item-specific pattern similarity, and it made complementary contributions to episodic memory. These results provide a mechanistic understanding of the role of RD in supporting accurate episodic memory.
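Representational dimensionality estimates of this kind are often based on the spectrum of a PCA decomposition of response patterns. Below is a hedged sketch using the participation ratio of PCA eigenvalues (an illustrative estimator on synthetic patterns; the paper's exact RD measure may differ):

```python
import numpy as np
from sklearn.decomposition import PCA

def representational_dimensionality(patterns):
    """Participation ratio of PCA eigenvalues: one common dimensionality estimate.

    `patterns` is (n_items, n_features), e.g. trial-wise response patterns
    in a region of interest.
    """
    lam = PCA().fit(patterns).explained_variance_
    return lam.sum() ** 2 / (lam ** 2).sum()

# Illustrative comparison: responses confined to few vs. many latent dimensions
rng = np.random.default_rng(9)
low = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 100))
high = rng.normal(size=(200, 40)) @ rng.normal(size=(40, 100))
print("low-dimensional patterns :", representational_dimensionality(low))
print("high-dimensional patterns:", representational_dimensionality(high))
```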
Collapse
|
95
|
Liu Y, Caracoglia J, Sen S, Freud E, Striem-Amit E. Are reaching and grasping effector-independent? Similarities and differences in reaching and grasping kinematics between the hand and foot. Exp Brain Res 2022; 240:1833-1848. [PMID: 35426511 PMCID: PMC9142431 DOI: 10.1007/s00221-022-06359-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2021] [Accepted: 03/24/2022] [Indexed: 11/30/2022]
Abstract
While reaching and grasping are highly prevalent manual actions, neuroimaging studies provide evidence that their neural representations may be shared between different body parts, i.e., effectors. If these actions are guided by effector-independent mechanisms, similar kinematics should be observed when the action is performed by the hand or by a cortically remote and less experienced effector, such as the foot. We tested this hypothesis with two characteristic components of action: the initial ballistic stage of reaching, and the preshaping of the digits during grasping based on object size. We examined if these kinematic features reflect effector-independent mechanisms by asking participants to reach toward and to grasp objects of different widths with their hand and foot. First, during both reaching and grasping, the velocity profile up to peak velocity matched between the hand and the foot, indicating a shared ballistic acceleration phase. Second, maximum grip aperture and time of maximum grip aperture of grasping increased with object size for both effectors, indicating encoding of object size during transport. Differences between the hand and foot were found in the deceleration phase and time of maximum grip aperture, likely due to biomechanical differences and the participants’ inexperience with foot actions. These findings provide evidence for effector-independent visuomotor mechanisms of reaching and grasping that generalize across body parts.
Collapse
Affiliation(s)
- Yuqi Liu
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, 20057, USA.
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Sciences and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China.
| | - James Caracoglia
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, 20057, USA
- Division of Graduate Medical Sciences, Boston University Medical Center, Boston, MA, 02215, USA
| | - Sriparna Sen
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, 20057, USA
| | - Erez Freud
- Department of Psychology, York University, Toronto, ON, M3J 1P3, Canada
- Centre for Vision Research, York University, Toronto, ON, M3J 1P3, Canada
| | - Ella Striem-Amit
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, 20057, USA.
| |
Collapse
|
96
|
Waaga T, Agmon H, Normand VA, Nagelhus A, Gardner RJ, Moser MB, Moser EI, Burak Y. Grid-cell modules remain coordinated when neural activity is dissociated from external sensory cues. Neuron 2022; 110:1843-1856.e6. [PMID: 35385698 PMCID: PMC9235855 DOI: 10.1016/j.neuron.2022.03.011] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2021] [Revised: 01/25/2022] [Accepted: 03/09/2022] [Indexed: 11/30/2022]
Abstract
The representation of an animal’s position in the medial entorhinal cortex (MEC) is distributed across several modules of grid cells, each characterized by a distinct spatial scale. The population activity within each module is tightly coordinated and preserved across environments and behavioral states. Little is known, however, about the coordination of activity patterns across modules. We analyzed the joint activity patterns of hundreds of grid cells simultaneously recorded in animals that were foraging either in the light, when sensory cues could stabilize the representation, or in darkness, when such stabilization was disrupted. We found that the states of different modules are tightly coordinated, even in darkness, when the internal representation of position within the MEC deviates substantially from the true position of the animal. These findings suggest that internal brain mechanisms dynamically coordinate the representation of position in different modules, ensuring that they jointly encode a coherent and smooth trajectory. Hundreds of grid cells were recorded simultaneously from multiple grid modules. Coordination between grid modules was assessed in rats that foraged in darkness. Coordination persists despite relative drift of the represented versus true position. This suggests that internal network mechanisms maintain inter-module coordination.
Affiliation(s)
- Torgeir Waaga
- Kavli Institute for Systems Neuroscience and Centre for Neural Computation, Norwegian University of Science and Technology, Trondheim, Norway
| | - Haggai Agmon
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel.
| | - Valentin A Normand
- Kavli Institute for Systems Neuroscience and Centre for Neural Computation, Norwegian University of Science and Technology, Trondheim, Norway
| | - Anne Nagelhus
- Kavli Institute for Systems Neuroscience and Centre for Neural Computation, Norwegian University of Science and Technology, Trondheim, Norway
| | - Richard J Gardner
- Kavli Institute for Systems Neuroscience and Centre for Neural Computation, Norwegian University of Science and Technology, Trondheim, Norway
| | - May-Britt Moser
- Kavli Institute for Systems Neuroscience and Centre for Neural Computation, Norwegian University of Science and Technology, Trondheim, Norway
| | - Edvard I Moser
- Kavli Institute for Systems Neuroscience and Centre for Neural Computation, Norwegian University of Science and Technology, Trondheim, Norway.
| | - Yoram Burak
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Racah Institute of Physics, The Hebrew University of Jerusalem, Jerusalem, Israel
| |
|
97
|
Wang T, Chen Y, Cui H. From Parametric Representation to Dynamical System: Shifting Views of the Motor Cortex in Motor Control. Neurosci Bull 2022; 38:796-808. [PMID: 35298779 PMCID: PMC9276910 DOI: 10.1007/s12264-022-00832-x] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2021] [Accepted: 11/29/2021] [Indexed: 11/01/2022] Open
Abstract
In contrast to traditional representational perspectives, in which the motor cortex controls movement through neuronal preferences for kinetic and kinematic parameters, a dynamical-system perspective that has emerged over the last decade views the motor cortex as a dynamical machine that generates motor commands through autonomous temporal evolution. In this review, we first look back at the history of the representational and dynamical perspectives and discuss their explanatory power and the controversies surrounding them from both empirical and computational points of view. We then aim to reconcile the two perspectives and evaluate their theoretical impact, future directions, and potential applications in brain-machine interfaces.
Affiliation(s)
- Tianwei Wang
- Center for Excellence in Brain Science and Intelligent Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, 200031, China
- Shanghai Center for Brain and Brain-inspired Intelligence Technology, Shanghai, 200031, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
| | - Yun Chen
- Center for Excellence in Brain Science and Intelligent Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, 200031, China
- Shanghai Center for Brain and Brain-inspired Intelligence Technology, Shanghai, 200031, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
| | - He Cui
- Center for Excellence in Brain Science and Intelligent Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, 200031, China
- Shanghai Center for Brain and Brain-inspired Intelligence Technology, Shanghai, 200031, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
| |
|
98
|
Yu H, Zhao Q, Li S, Li K, Liu C, Wang J. Decoding Digital Visual Stimulation From Neural Manifold With Fuzzy Leaning on Cortical Oscillatory Dynamics. Front Comput Neurosci 2022; 16:852281. [PMID: 35360527 PMCID: PMC8961731 DOI: 10.3389/fncom.2022.852281] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2022] [Accepted: 02/03/2022] [Indexed: 11/13/2022] Open
Abstract
A crucial question in neuroscience is how to correctly decode cognitive information from brain dynamics for motion control and neural rehabilitation. However, because electroencephalogram (EEG) recordings are unstable and high-dimensional, it is difficult to obtain such information directly from the raw data. In this work, we design visual experiments and propose a decoding method based on the neural manifold of cortical activity to extract the critical visual information. First, we examined four major EEG frequency bands and found that alpha-band (8–15 Hz) responses to visual stimuli in the frontal and occipital lobes are the most prominent. The essential features of the alpha-band EEG are then mined with two manifold learning methods. Connecting temporally consecutive brain states in the t-distributed stochastic neighbor embedding (t-SNE) map on a trial-by-trial level, we find that the brain-state dynamics form a cyclic manifold, with different tasks forming distinct loops. We further show that the latent factors of brain activity estimated by t-SNE support more accurate decoding and reveal a stable neural manifold. Taking the latent factors of the manifold as independent inputs, a Takagi-Sugeno-Kang (TSK) fuzzy model is established and trained to identify the visual EEG signals. The combination of t-SNE and fuzzy learning raises the accuracy of visual cognitive decoding to 81.98%. Moreover, feature optimization shows that the combination of the frontal, parietal, and occipital lobes is most effective for visual decoding, reaching 83.05% accuracy. This work provides a potential tool for decoding visual EEG signals from low-dimensional manifold dynamics, with applications to brain–computer interface (BCI) control, brain function research, and neural rehabilitation.
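The general pipeline described above (alpha-band filtering, t-SNE embedding of trial-wise activity, classification of the latent factors) can be sketched as follows. This is a minimal illustration under assumed data shapes and filter settings, using synthetic placeholder EEG; the TSK fuzzy classifier of the paper is replaced by a generic SVM stand-in, so it is not the authors' implementation.

```python
# Sketch: alpha-band power -> t-SNE latent factors -> classifier (stand-in for TSK).
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.manifold import TSNE
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

fs = 250.0                                       # assumed sampling rate (Hz)
n_trials, n_channels, n_samples = 120, 32, 500

rng = np.random.default_rng(0)
eeg = rng.standard_normal((n_trials, n_channels, n_samples))   # placeholder EEG
labels = rng.integers(0, 4, size=n_trials)                     # placeholder stimulus labels

# 1) Restrict to the alpha band (8-15 Hz), where the visual response was strongest.
b, a = butter(4, [8.0, 15.0], btype="bandpass", fs=fs)
alpha = filtfilt(b, a, eeg, axis=-1)

# 2) Summarise each trial by its per-channel log alpha power, then embed with
#    t-SNE to obtain low-dimensional latent factors of the trial-by-trial brain state.
alpha_power = np.log(np.mean(alpha ** 2, axis=-1))             # (trials, channels)
latent = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(alpha_power)

# 3) Decode the visual condition from the latent factors.
scores = cross_val_score(SVC(kernel="rbf"), latent, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

Note that t-SNE has no out-of-sample transform, so this sketch embeds all trials jointly before cross-validation purely for illustration; a deployable decoder would need a parametric embedding or an embedding fit on training trials only.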
|
99
|
Li K, Wang J, Li S, Deng B, Yu H. Latent characteristics and neural manifold of brain functional network under acupuncture. IEEE Trans Neural Syst Rehabil Eng 2022; 30:758-769. [PMID: 35271443 DOI: 10.1109/tnsre.2022.3157380] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Acupuncture can modulate cognition and brain function, and different needle manipulations are key to realizing its therapeutic effect on the human body. It is therefore important to distinguish and monitor different acupuncture manipulations automatically. In this brief, to enhance the robustness of electroencephalogram (EEG)-based detection against noise and interference, we propose an acupuncture-manipulation detection framework based on supervised ISOMAP and a recurrent neural network (RNN). First, the low-dimensional neural manifold of the dynamic brain functional network is extracted via reconstructed geodesic distances, revealing a strong acupuncture-specific reconfiguration of the brain network. We also show that the distance traveled along this manifold correlates strongly with changes in acupuncture manipulation. Across all subjects, the low-dimensional brain topology shows a crescent-like shape during acupuncture at the Zusanli acupoint, and its fixed points vary across manipulation methods. A Takagi-Sugeno-Kang (TSK) classifier is then adopted to identify acupuncture manipulations from the nonlinear characteristics of the neural manifold. Compared with other classifiers, TSK further improves the identification accuracy to 96.71%. These results demonstrate the effectiveness of our model in detecting acupuncture manipulations and may provide neural biomarkers for acupuncture physicians.
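The manifold-based detection idea described above can be sketched as follows, using scikit-learn's (unsupervised) Isomap in place of the supervised ISOMAP and a k-nearest-neighbour classifier in place of the TSK fuzzy system; the connectivity features, segment sizes, and synthetic data are illustrative assumptions rather than the authors' pipeline.

```python
# Sketch: functional-connectivity features -> Isomap manifold -> manipulation classifier.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_segments, n_channels, n_samples = 200, 16, 1000
eeg = rng.standard_normal((n_segments, n_channels, n_samples))   # placeholder EEG segments
labels = rng.integers(0, 3, size=n_segments)                     # placeholder manipulation labels

# 1) Functional network per segment: channel-by-channel correlation matrix,
#    keeping only the upper triangle as a feature vector.
iu = np.triu_indices(n_channels, k=1)
features = np.stack([np.corrcoef(seg)[iu] for seg in eeg])       # (segments, channel pairs)

# 2) Low-dimensional neural manifold of the network features via geodesic
#    distances (Isomap).
embedding = Isomap(n_neighbors=10, n_components=3).fit_transform(features)

# 3) Identify the manipulation from position on the manifold.
scores = cross_val_score(KNeighborsClassifier(n_neighbors=5), embedding, labels, cv=5)
print(f"manipulation classification accuracy: {scores.mean():.2f}")
```

Correlation matrices are used here only as a simple stand-in for the brain functional network; any connectivity estimate that yields one feature vector per segment could be substituted.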
|
100
|
Pinotsis DA, Miller EK. Beyond dimension reduction: Stable electric fields emerge from and allow representational drift. Neuroimage 2022; 253:119058. [PMID: 35272022 DOI: 10.1016/j.neuroimage.2022.119058] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2022] [Revised: 03/03/2022] [Accepted: 03/03/2022] [Indexed: 01/18/2023] Open
Abstract
It is known that the exact neurons maintaining a given memory (the neural ensemble) change from trial to trial. This raises the question of how the brain achieves stability in the face of this representational drift. Here, we demonstrate that this stability emerges at the level of the electric fields that arise from neural activity. We show that electric fields carry information about working memory content. The electric fields, in turn, can act as "guard rails" that funnel higher-dimensional, variable neural activity along stable lower-dimensional routes. We obtained the latent space associated with each memory and then confirmed the stability of the electric field by mapping the latent space to different cortical patches (that comprise a neural ensemble) and reconstructing the information flow between patches. Stable electric fields can allow latent states to be transferred between brain areas, in accord with modern engram theory.
Affiliation(s)
- Dimitris A Pinotsis
- Centre for Mathematical Neuroscience and Psychology and Department of Psychology, City-University of London, London EC1V 0HB, United Kingdom; The Picower Institute for Learning and Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
| | - Earl K Miller
- The Picower Institute for Learning and Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
| |
|