1
Kulkarni S, Bassett DS. Toward Principles of Brain Network Organization and Function. Annu Rev Biophys 2025; 54:353-378. PMID: 39952667. DOI: 10.1146/annurev-biophys-030722-110624.
Abstract
The brain is immensely complex, with diverse components and dynamic interactions building upon one another to orchestrate a wide range of behaviors. Understanding patterns of these complex interactions and how they are coordinated to support collective neural function is critical for parsing human and animal behavior, treating mental illness, and developing artificial intelligence. Rapid experimental advances in imaging, recording, and perturbing neural systems across various species now provide opportunities to distill underlying principles of brain organization and function. Here, we take stock of recent progress and review methods used in the statistical analysis of brain networks, drawing from fields of statistical physics, network theory, and information theory. Our discussion is organized by scale, starting with models of individual neurons and extending to large-scale networks mapped across brain regions. We then examine organizing principles and constraints that shape the biological structure and function of neural circuits. We conclude with an overview of several critical frontiers, including expanding current models, fostering tighter feedback between theory and experiment, and leveraging perturbative approaches to understand neural systems. Alongside these efforts, we highlight the importance of contextualizing their contributions by linking them to formal accounts of explanation and causation.
Affiliation(s)
- Suman Kulkarni
- Department of Physics & Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania, USA;
- Dani S Bassett
- Department of Bioengineering, Department of Electrical & Systems Engineering, Department of Neurology, and Department of Psychiatry, University of Pennsylvania, Philadelphia, Pennsylvania, USA;
- Santa Fe Institute, Santa Fe, New Mexico, USA
- Department of Physics & Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania, USA;
- Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada
2
Schmitt O. Relationships and representations of brain structures, connectivity, dynamics and functions. Prog Neuropsychopharmacol Biol Psychiatry 2025; 138:111332. PMID: 40147809. DOI: 10.1016/j.pnpbp.2025.111332.
Abstract
The review explores the complex interplay between brain structures and their associated functions, presenting a diversity of hierarchical models that enhances our understanding of these relationships. Central to this approach are structure-function flow diagrams, which offer a visual representation of how specific neuroanatomical structures are linked to their functional roles. These diagrams are instrumental in mapping the intricate connections between different brain regions, providing a clearer understanding of how functions emerge from the underlying neural architecture. The study details innovative attempts to develop new functional hierarchies that integrate structural and functional data. These efforts leverage recent advancements in neuroimaging techniques such as fMRI, EEG, MEG, and PET, as well as computational models that simulate neural dynamics. By combining these approaches, the study seeks to create a more refined and dynamic hierarchy that can accommodate the brain's complexity, including its capacity for plasticity and adaptation. A significant focus is placed on the overlap of structures and functions within the brain. The manuscript acknowledges that many brain regions are multifunctional, contributing to different cognitive and behavioral processes depending on the context. This overlap highlights the need for a flexible, non-linear hierarchy that can capture the brain's intricate functional landscape. Moreover, the study examines the interdependence of these functions, emphasizing how the loss or impairment of one function can impact others. Another crucial aspect discussed is the brain's ability to compensate for functional deficits following neurological diseases or injuries. The investigation explores how the brain reorganizes itself, often through the recruitment of alternative neural pathways or the enhancement of existing ones, to maintain functionality despite structural damage. 
This compensatory mechanism underscores the brain's remarkable plasticity, demonstrating its ability to adapt and reconfigure itself in response to injury, thereby ensuring the continuation of essential functions. In conclusion, the study presents a system of brain functions that integrates structural, functional, and dynamic perspectives. It offers a robust framework for understanding how the brain's complex network of structures supports a wide range of cognitive and behavioral functions, with significant implications for both basic neuroscience and clinical applications.
Affiliation(s)
- Oliver Schmitt
- Medical School Hamburg - University of Applied Sciences and Medical University - Institute for Systems Medicine, Am Kaiserkai 1, Hamburg 20457, Germany; University of Rostock, Department of Anatomy, Gertrudenstr. 9, Rostock, 18055 Rostock, Germany.
3
Huang C, Englitz B, Reznik A, Zeldenrust F, Celikel T. Information transfer and recovery for the sense of touch. Cereb Cortex 2025; 35:bhaf073. PMID: 40197640. PMCID: PMC11976729. DOI: 10.1093/cercor/bhaf073.
Abstract
Transformation of postsynaptic potentials into action potentials is the rate-limiting step of communication in neural networks. The efficiency of this intracellular information transfer also powerfully shapes stimulus representations in sensory cortices. Using whole-cell recordings and information-theoretic measures, we show herein that somatic postsynaptic potentials accurately represent stimulus location on a trial-by-trial basis in single neurons, even 4 synapses away from the sensory periphery in the whisker system. This information is largely lost during action potential generation but can be rapidly (<20 ms) recovered using complementary information in local populations in a cell-type-specific manner. These results show that as sensory information is transferred from one neural locus to another, the circuits reconstruct the stimulus with high fidelity so that sensory representations of single neurons faithfully represent the stimulus in the periphery, but only in their postsynaptic potentials, resulting in lossless information processing for the sense of touch in the primary somatosensory cortex.
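The information-theoretic comparison behind such trial-by-trial claims reduces, in its simplest form, to a plug-in mutual-information estimate between stimulus location and a discretized response. A toy sketch (synthetic data; the jittered and thresholded readouts below are hypothetical stand-ins for graded PSPs and lossy spikes, not the recordings used in this study):

```python
import numpy as np

rng = np.random.default_rng(3)

def mutual_info(x, y):
    """Plug-in mutual information (bits) from paired discrete samples."""
    joint = np.zeros((x.max() + 1, y.max() + 1))
    for xi, yi in zip(x, y):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

stim = rng.integers(0, 8, 4000)                # 8 stimulus locations
psp = (stim + rng.integers(-1, 2, 4000)) % 8   # graded, mildly noisy readout
spikes = psp // 4                              # thresholded, lossy readout

# The graded readout retains more stimulus information than the
# thresholded one, mirroring the PSP-vs-spike comparison.
mi_psp = mutual_info(stim, psp)
mi_spikes = mutual_info(stim, spikes)
```

With 8 equiprobable locations the stimulus carries 3 bits, so both estimates are bounded above by 3; thresholding discards most of what the graded readout preserves.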
Affiliation(s)
- Chao Huang
- Department of Neurophysiology, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Heyendaalseweg 135, 6525 AJ Nijmegen, the Netherlands
- Laboratory of Neural Circuits and Plasticity, University of Southern California, 3616 Watt Way, Los Angeles, CA 90089, United States
- Bernhard Englitz
- Department of Neurophysiology, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Heyendaalseweg 135, 6525 AJ Nijmegen, the Netherlands
- Andrey Reznik
- Laboratory of Neural Circuits and Plasticity, University of Southern California, 3616 Watt Way, Los Angeles, CA 90089, United States
- Fleur Zeldenrust
- Department of Neurophysiology, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Heyendaalseweg 135, 6525 AJ Nijmegen, the Netherlands
- Tansu Celikel
- Department of Neurophysiology, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Heyendaalseweg 135, 6525 AJ Nijmegen, the Netherlands
- School of Psychology, Georgia Institute of Technology, 654 Cherry Street, Atlanta, GA 30332-0170, United States
4
Chen H, Kunimatsu J, Oya T, Imaizumi Y, Hori Y, Matsumoto M, Tsubo Y, Hikosaka O, Minamimoto T, Naya Y, Yamada H. Formation of brain-wide neural geometry during visual item recognition in monkeys. iScience 2025; 28:111936. PMID: 40034850. PMCID: PMC11875189. DOI: 10.1016/j.isci.2025.111936.
Abstract
Neural dynamics are thought to reflect computations that relay and transform information in the brain. Previous studies have identified the neural population dynamics in many individual brain regions as a trajectory geometry, preserving a common computational motif. However, whether these populations share particular geometric patterns across brain-wide neural populations remains unclear. Here, by mapping neural dynamics widely across temporal/frontal/limbic regions in the cortical and subcortical structures of monkeys, we show that 10 neural populations, including 2,500 neurons, propagate visual item information in a stochastic manner. We found that visual inputs predominantly evoked rotational dynamics in the higher-order visual area, TE, and its downstream striatum tail, while curvy/straight dynamics appeared frequently downstream in the orbitofrontal/hippocampal network. These geometric changes were not deterministic but rather stochastic according to their respective emergence rates. Our meta-analysis results indicate that visual information propagates as a heterogeneous mixture of stochastic neural population signals in the brain.
Affiliation(s)
- He Chen
- School of Psychological and Cognitive Sciences, Peking University, No. 52, Haidian Road, Haidian District, Beijing 100805, China
- Department of Physiology and Biophysics, Washington National Primate Research Center, University of Washington, Seattle, WA 98195, USA
- Jun Kunimatsu
- Division of Biomedical Science, Institute of Medicine, University of Tsukuba, 1-1-1 Tenno-dai, Tsukuba, Ibaraki 305-8577, Japan
- Transborder Medical Research Center, University of Tsukuba, 1-1-1 Tenno-dai, Tsukuba, Ibaraki 305-8577, Japan
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Tomomichi Oya
- Western Institute for Neuroscience, University of Western Ontario, London, ON N6A3K7, Canada
- Department of Physiology and Pharmacology, University of Western Ontario, London N6A 3K7, Canada
- Yuri Imaizumi
- College of Medical Sciences, University of Tsukuba, 1-1-1 Tenno-dai, Tsukuba, Ibaraki 305-8577, Japan
- Yukiko Hori
- Advanced Neuroimaging Center, National Institutes for Quantum Science and Technology, 4-9-1 Anagawa, Inage-ku, Chiba 263-8555, Japan
- Masayuki Matsumoto
- Division of Biomedical Science, Institute of Medicine, University of Tsukuba, 1-1-1 Tenno-dai, Tsukuba, Ibaraki 305-8577, Japan
- Transborder Medical Research Center, University of Tsukuba, 1-1-1 Tenno-dai, Tsukuba, Ibaraki 305-8577, Japan
- Yasuhiro Tsubo
- College of Information Science and Engineering, Ritsumeikan University, 2-150 Iwakura-cho, Ibaraki, Osaka 567-8570, Japan
- Okihide Hikosaka
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Takafumi Minamimoto
- Advanced Neuroimaging Center, National Institutes for Quantum Science and Technology, 4-9-1 Anagawa, Inage-ku, Chiba 263-8555, Japan
- Yuji Naya
- School of Psychological and Cognitive Sciences, Peking University, No. 52, Haidian Road, Haidian District, Beijing 100805, China
- IDG/McGovern Institute for Brain Research at Peking University, No. 52, Haidian Road, Haidian District, Beijing 100805, China
- Beijing Key Laboratory of Behavior and Mental Health, Peking University, No. 52, Haidian Road, Haidian District, Beijing 100805, China
- Hiroshi Yamada
- Division of Biomedical Science, Institute of Medicine, University of Tsukuba, 1-1-1 Tenno-dai, Tsukuba, Ibaraki 305-8577, Japan
- Transborder Medical Research Center, University of Tsukuba, 1-1-1 Tenno-dai, Tsukuba, Ibaraki 305-8577, Japan
5
Barzon G, Busiello DM, Nicoletti G. Excitation-Inhibition Balance Controls Information Encoding in Neural Populations. Phys Rev Lett 2025; 134:068403. PMID: 40021162. DOI: 10.1103/physrevlett.134.068403.
Abstract
Understanding how the complex connectivity structure of the brain shapes its information-processing capabilities is a long-standing question. By focusing on a paradigmatic architecture, we study how the neural activity of excitatory and inhibitory populations encodes information on external signals. We show that at long times information is maximized at the edge of stability, where inhibition balances excitation, both in linear and nonlinear regimes. In the presence of multiple external signals, this maximum corresponds to the entropy of the input dynamics. By analyzing the case of a prolonged stimulus, we find that stronger inhibition is instead needed to maximize the instantaneous sensitivity, revealing an intrinsic tradeoff between short-time responses and long-time accuracy. In agreement with recent experimental findings, our results pave the way for a deeper information-theoretic understanding of how the balance between excitation and inhibition controls optimal information-processing in neural populations.
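The long-time result can be caricatured with a scalar rate model: the closer the net excitation sits to the stability boundary, the longer the input is integrated, and the more information survives a fixed readout noise. A minimal sketch (the parameterization lam = 1 - (w_e - w_i) and the noise levels are illustrative assumptions, not this Letter's model):

```python
import numpy as np

def long_time_info(w_e, w_i, sig_in=1.0, sig_noise=0.1):
    """Bits of information about a Gaussian input carried by the stationary
    activity of dx/dt = -lam*x + input, read out through additive noise,
    with effective decay lam = 1 - (w_e - w_i).  A scalar caricature of
    the excitatory/inhibitory population pair."""
    lam = 1.0 - (w_e - w_i)
    if lam <= 0:
        return float("nan")                        # unstable: no stationary encoding
    snr = sig_in**2 / (2.0 * lam) / sig_noise**2   # input integrated over ~1/lam
    return 0.5 * float(np.log2(1.0 + snr))

# With excitation fixed, information grows as inhibition is lowered toward
# the balance point w_i -> w_e - 1, i.e. toward the edge of stability.
w_e = 2.0
sweep = {w_i: long_time_info(w_e, w_i) for w_i in (1.1, 1.5, 3.0)}
```

The caricature reproduces the qualitative claim only: information is monotonically larger the closer inhibition balances excitation, and is undefined past the stability boundary.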
Affiliation(s)
- Giacomo Barzon
- University of Padova, Padova Neuroscience Center, Padova, Italy
- Daniel Maria Busiello
- Max Planck Institute for the Physics of Complex Systems, Dresden, Germany
- University of Padova, Department of Physics and Astronomy "G. Galilei", Padova, Italy
- Giorgio Nicoletti
- École Polytechnique Fédérale de Lausanne, ECHO Laboratory, Lausanne, Switzerland
- The Abdus Salam International Center for Theoretical Physics (ICTP), Quantitative Life Sciences section, Trieste, Italy
6
Perkins SM, Amematsro EA, Cunningham J, Wang Q, Churchland MM. An emerging view of neural geometry in motor cortex supports high-performance decoding. eLife 2025; 12:RP89421. PMID: 39898793. PMCID: PMC11790250. DOI: 10.7554/elife.89421.
Abstract
Decoders for brain-computer interfaces (BCIs) assume constraints on neural activity, chosen to reflect scientific beliefs while yielding tractable computations. Recent scientific advances suggest that the true constraints on neural activity, especially its geometry, may be quite different from those assumed by most decoders. We designed a decoder, MINT, to embrace statistical constraints that are potentially more appropriate. If those constraints are accurate, MINT should outperform standard methods that explicitly make different assumptions. Additionally, MINT should be competitive with expressive machine learning methods that can implicitly learn constraints from data. MINT performed well across tasks, suggesting its assumptions are well-matched to the data. MINT outperformed other interpretable methods in every comparison we made. MINT outperformed expressive machine learning methods in 37 of 42 comparisons. MINT's computations are simple, scale favorably with increasing neuron counts, and yield interpretable quantities such as data likelihoods. MINT's performance and simplicity suggest it may be a strong candidate for many BCI applications.
Affiliation(s)
- Sean M Perkins
- Department of Biomedical Engineering, Columbia University, New York, United States
- Zuckerman Institute, Columbia University, New York, United States
- Elom A Amematsro
- Zuckerman Institute, Columbia University, New York, United States
- Department of Neuroscience, Columbia University Medical Center, New York, United States
- John Cunningham
- Zuckerman Institute, Columbia University, New York, United States
- Department of Statistics, Columbia University, New York, United States
- Center for Theoretical Neuroscience, Columbia University Medical Center, New York, United States
- Grossman Center for the Statistics of Mind, Columbia University, New York, United States
- Qi Wang
- Department of Biomedical Engineering, Columbia University, New York, United States
- Mark M Churchland
- Zuckerman Institute, Columbia University, New York, United States
- Department of Neuroscience, Columbia University Medical Center, New York, United States
- Grossman Center for the Statistics of Mind, Columbia University, New York, United States
- Kavli Institute for Brain Science, Columbia University Medical Center, New York, United States
7
Osuna-Orozco R, Castillo E, Harris KD, Santacruz SR. Identification of recurrent dynamics in distributed neural populations. PLoS Comput Biol 2025; 21:e1012816. PMID: 39913891. PMCID: PMC11838891. DOI: 10.1371/journal.pcbi.1012816.
Abstract
Large-scale recordings of neural activity over broad anatomical areas with high spatial and temporal resolution are increasingly common in modern experimental neuroscience. Recently, recurrent switching dynamical systems have been used to tackle the scale and complexity of these data. However, an important challenge remains in providing insights into the existence and structure of recurrent linear dynamics in neural time series data. Here we test a scalable approach to time-varying autoregression with low-rank tensors to recover the recurrent dynamics in stochastic neural mass models with multiple stable attractors. We demonstrate that the parsimonious representation of time-varying system matrices in terms of temporal modes can recover the attractor structure of simple systems via clustering. We then consider simulations based on a human brain connectivity matrix in high and low global connection strength regimes, and reveal the hierarchical clustering structure of the dynamics. Finally, we explain the impact of the forecast time delay on the estimation of the underlying rank and temporal variability of the time series dynamics. This study illustrates that prediction error minimization is not sufficient to recover meaningful dynamic structure and that it is crucial to account for the three key timescales arising from dynamics, noise processes, and attractor switching.
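The core idea of recovering recurrent linear dynamics can be illustrated in its simplest form: fit an autoregressive coefficient per time window, then cluster the per-window coefficients to expose regimes. A toy sketch (scalar AR(1) with hand-picked switching regimes; the study itself uses low-rank tensor decompositions of full time-varying system matrices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy series alternating between two linear regimes ("attractors"):
# x[t+1] = a*x[t] + noise, with a switching every 200 steps.
a_regimes, T, win = (0.95, -0.6), 1200, 100
x = np.zeros(T)
for t in range(T - 1):
    a = a_regimes[(t // 200) % 2]
    x[t + 1] = a * x[t] + 0.1 * rng.standard_normal()

# Fit one AR(1) coefficient per non-overlapping window (least squares).
coefs = []
for s in range(0, T - win, win):
    xs, ys = x[s:s + win - 1], x[s + 1:s + win]
    coefs.append(float(xs @ ys / (xs @ xs)))
coefs = np.array(coefs)

# Clustering the per-window coefficients (here: a simple mean threshold)
# recovers which regime generated each window.
labels = coefs > coefs.mean()
```

Window boundaries here happen to align with the regime switches; with misaligned or short windows the estimated coefficients blur across regimes, which is one face of the timescale interactions the paper analyzes.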
Affiliation(s)
- Rodrigo Osuna-Orozco
- Department of Biomedical Engineering, University of Texas at Austin, Austin, Texas, United States of America
- Edward Castillo
- Department of Biomedical Engineering, University of Texas at Austin, Austin, Texas, United States of America
- Kameron Decker Harris
- Department of Computer Science, Western Washington University, Bellingham, Washington, United States of America
- Samantha R. Santacruz
- Department of Biomedical Engineering, University of Texas at Austin, Austin, Texas, United States of America
8
Menéndez JA, Hennig JA, Golub MD, Oby ER, Sadtler PT, Batista AP, Chase SM, Yu BM, Latham PE. A theory of brain-computer interface learning via low-dimensional control. bioRxiv [Preprint] 2025:2024.04.18.589952. PMID: 38712193. PMCID: PMC11071278. DOI: 10.1101/2024.04.18.589952.
Abstract
A remarkable demonstration of the flexibility of mammalian motor systems is primates' ability to learn to control brain-computer interfaces (BCIs). This constitutes a completely novel motor behavior, yet primates are capable of learning to control BCIs under a wide range of conditions. BCIs with carefully calibrated decoders, for example, can be learned with only minutes to hours of practice. With a few weeks of practice, even BCIs with randomly constructed decoders can be learned. What are the biological substrates of this learning process? Here, we develop a theory based on a re-aiming strategy, whereby learning operates within a low-dimensional subspace of task-relevant inputs driving the local population of recorded neurons. Through comprehensive numerical and formal analysis, we demonstrate that this theory can provide a unifying explanation for disparate phenomena previously reported in three different BCI learning tasks, and we derive a novel experimental prediction that we verify with previously published data. By explicitly modeling the underlying neural circuitry, the theory reveals an interpretation of these phenomena in terms of biological constraints on neural activity.
Affiliation(s)
- J. A. Menéndez
- Gatsby Computational Neuroscience Unit, University College London
- P. E. Latham
- Gatsby Computational Neuroscience Unit, University College London
9
Zheng J, Meister M. The unbearable slowness of being: Why do we live at 10 bits/s? Neuron 2025; 113:192-204. PMID: 39694032. PMCID: PMC11758279. DOI: 10.1016/j.neuron.2024.11.008.
Abstract
This article is about the neural conundrum behind the slowness of human behavior. The information throughput of a human being is about 10 bits/s. In comparison, our sensory systems gather data at ∼10⁹ bits/s. The stark contrast between these numbers remains unexplained and touches on fundamental aspects of brain function: what neural substrate sets this speed limit on the pace of our existence? Why does the brain need billions of neurons to process 10 bits/s? Why can we only think about one thing at a time? The brain seems to operate in two distinct modes: the "outer" brain handles fast high-dimensional sensory and motor signals, whereas the "inner" brain processes the reduced few bits needed to control behavior. Plausible explanations exist for the large neuron numbers in the outer brain, but not for the inner brain, and we propose new research directions to remedy this.
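The ∼10 bits/s figure is easy to reproduce from a familiar behavior such as typing. A back-of-the-envelope check (all numbers are round estimates; the sensory figure is the one quoted in the abstract):

```python
# Behavioral throughput from typing speed and English-text entropy.
words_per_min = 120        # fast typist
chars_per_word = 5         # English average
bits_per_char = 1.0        # Shannon's entropy estimate for English text
chars_per_sec = words_per_min * chars_per_word / 60.0
throughput = chars_per_sec * bits_per_char   # behavioral bits/s

# Contrast with the sensory intake rate quoted above.
sensory_rate = 1e9                           # ~10^9 bits/s gathered by the senses
ratio = sensory_rate / throughput            # compression factor
```

The eight-orders-of-magnitude gap between intake and output is exactly the "unbearable slowness" the article sets out to explain.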
Affiliation(s)
- Jieyu Zheng
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA.
- Markus Meister
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA.
10
Agudelo-Toro A, Michaels JA, Sheng WA, Scherberger H. Accurate neural control of a hand prosthesis by posture-related activity in the primate grasping circuit. Neuron 2024; 112:4115-4129.e8. PMID: 39419024. DOI: 10.1016/j.neuron.2024.09.018.
Abstract
Brain-computer interfaces (BCIs) have the potential to restore hand movement for people with paralysis, but current devices still lack the fine control required to interact with objects of daily living. Following our understanding of cortical activity during arm reaches, hand BCI studies have focused primarily on velocity control. However, mounting evidence suggests that posture, and not velocity, dominates in hand-related areas. To explore whether this signal can causally control a prosthesis, we developed a BCI training paradigm centered on the reproduction of posture transitions. Monkeys trained with this protocol were able to control a multidimensional hand prosthesis with high accuracy, including execution of the very intricate precision grip. Analysis revealed that the posture signal in the target grasping areas was the main contributor to control. We present, for the first time, neural posture control of a multidimensional hand prosthesis, opening the door for future interfaces to leverage this additional information channel.
Affiliation(s)
- Andres Agudelo-Toro
- Neurobiology Laboratory, Deutsches Primatenzentrum GmbH, Göttingen 37077, Germany.
- Jonathan A Michaels
- Neurobiology Laboratory, Deutsches Primatenzentrum GmbH, Göttingen 37077, Germany; School of Kinesiology and Health Science, Faculty of Health, York University, Toronto, ON M3J 1P3, Canada
- Wei-An Sheng
- Neurobiology Laboratory, Deutsches Primatenzentrum GmbH, Göttingen 37077, Germany; Institute of Biomedical Sciences, Academia Sinica, Taipei 115, Taiwan
- Hansjörg Scherberger
- Neurobiology Laboratory, Deutsches Primatenzentrum GmbH, Göttingen 37077, Germany; Faculty of Biology and Psychology, University of Göttingen, Göttingen 37073, Germany.
11
Moore DD, MacLean JN, Walker JD, Hatsopoulos NG. A dynamic subset of network interactions underlies tuning to natural movements in marmoset sensorimotor cortex. Nat Commun 2024; 15:10517. PMID: 39627212. PMCID: PMC11615226. DOI: 10.1038/s41467-024-54343-6.
Abstract
Mechanisms of computation in sensorimotor cortex must be flexible and robust to support skilled motor behavior. Patterns of neuronal coactivity emerge as a result of computational processes. Pairwise spike-time statistical relationships, across the population, can be summarized as a functional network (FN) which retains single-unit properties. We record populations of single-unit neural activity in marmoset forelimb sensorimotor cortex during prey capture and spontaneous behavior and use an encoding model incorporating kinematic trajectories and network features to predict single-unit activity during forelimb movements. The contribution of network features depends on structured connectivity within strongly connected functional groups. We identify a context-specific functional group that is highly tuned to kinematics and reorganizes its connectivity between spontaneous and prey capture movements. In the remaining context-invariant group, interactions are comparatively stable across behaviors and units are less tuned to kinematics. This suggests different roles in producing natural forelimb movements and contextualizes single-unit tuning properties within population dynamics.
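A functional network of the kind described here can be sketched, in its simplest form, as a pairwise-correlation matrix over binned spike trains, in which a strongly connected functional group appears as a block of elevated edge weights. A toy sketch (synthetic spikes and plain Pearson correlation; the paper derives edge weights from finer pairwise spike-time statistical relationships):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy population: 20 units x 5000 bins of binary spiking; units 0-4 share
# a common drive, forming one strongly connected functional group.
drive = rng.random(5000) < 0.3
spikes = rng.random((20, 5000)) < 0.05
spikes[:5] |= drive & (rng.random((5, 5000)) < 0.5)

# Functional network (FN) as the pairwise correlation of binned spike
# counts, with self-edges zeroed out.
fn = np.corrcoef(spikes.astype(float))
np.fill_diagonal(fn, 0.0)

# The driven units show up as a block of strong within-group edges.
within = fn[:5, :5][np.triu_indices(5, 1)].mean()
across = fn[:5, 5:].mean()
```

Thresholding or clustering such a matrix is one standard way to delineate the strongly connected functional groups whose context-dependence the study examines.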
Affiliation(s)
- Dalton D Moore
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL, 60637, USA
- Jason N MacLean
- Committee on Computational Neuroscience, University of Chicago, Chicago, IL, 60637, USA
- Department of Neurobiology, University of Chicago, Chicago, IL, 60637, USA
- University of Chicago Neuroscience Institute, Chicago, IL, 60637, USA
- Jeffrey D Walker
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL, 60637, USA
- Nicholas G Hatsopoulos
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL, 60637, USA.
- Committee on Computational Neuroscience, University of Chicago, Chicago, IL, 60637, USA.
- University of Chicago Neuroscience Institute, Chicago, IL, 60637, USA.
12
Bazzi S, Stansfield S, Hogan N, Sternad D. Simplified internal models in human control of complex objects. PLoS Comput Biol 2024; 20:e1012599. PMID: 39556590. PMCID: PMC11723638. DOI: 10.1371/journal.pcbi.1012599.
Abstract
Humans are skillful at manipulating objects that possess nonlinear underactuated dynamics, such as clothes or containers filled with liquids. Several studies suggested that humans implement a predictive model-based strategy to control such objects. However, these studies only considered unconstrained reaching without any object involved or, at most, linear mass-spring systems with relatively simple dynamics. It is not clear what internal model humans develop of more complex objects, and what level of granularity is represented. To answer these questions, this study examined a task where participants physically interacted with a nonlinear underactuated system mimicking a cup of sloshing coffee: a cup with a ball rolling inside. The cup and ball were simulated in a virtual environment and subjects interacted with the system via a haptic robotic interface. Participants were instructed to move the system and arrive at a target region with both cup and ball at rest, 'zeroing out' residual oscillations of the ball. This challenging task affords a solution known as 'input shaping', whereby a series of pulses moves the dynamic object to the target leaving no residual oscillations. Since the timing and amplitude of these pulses depend on the controller's internal model of the object, input shaping served as a tool to identify the subjects' internal representation of the cup-and-ball. Five simulations with different internal models were compared against the human data. Results showed that the features in the data were correctly predicted by a simple internal model that represented the cup-and-ball as a single rigid mass coupled to the hand impedance. These findings provide evidence that humans use simplified internal models along with mechanical impedance to manipulate complex objects.
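Input shaping has a closed form for a second-order oscillator: two impulses whose amplitudes and spacing depend on the assumed natural frequency and damping ratio of the object. A textbook zero-vibration (ZV) sketch (generic parameters, not values fitted to the cup-and-ball system or to any participant's internal model):

```python
import math

def zv_shaper(omega_n, zeta):
    """Two-impulse zero-vibration (ZV) input shaper for a second-order
    oscillator with natural frequency omega_n and damping ratio zeta."""
    wd = omega_n * math.sqrt(1.0 - zeta**2)          # damped frequency
    K = math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta**2))
    amps = (1.0 / (1.0 + K), K / (1.0 + K))          # impulse amplitudes
    times = (0.0, math.pi / wd)                      # half a damped period apart
    return amps, times

# Convolving a command with these impulses leaves no residual oscillation,
# provided the assumed (omega_n, zeta) match the true object -- which is
# why the impulse timing and amplitudes reveal the controller's internal model.
amps, times = zv_shaper(omega_n=2.0, zeta=0.0)
```

For an undamped object the two impulses are equal and spaced half an oscillation period apart; a mismatch between the assumed and true object parameters leaves residual sloshing, which is the behavioral signature the study exploits.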
Affiliation(s)
- Salah Bazzi
- Institute for Experiential Robotics, Northeastern University, Boston, Massachusetts, United States of America
- Stephan Stansfield
- Department of Mechanical Engineering, MIT, Cambridge, Massachusetts, United States of America
- Neville Hogan
- Department of Mechanical Engineering, MIT, Cambridge, Massachusetts, United States of America
- Department of Brain and Cognitive Sciences, MIT, Cambridge, Massachusetts, United States of America
- Dagmar Sternad
- Institute for Experiential Robotics, Northeastern University, Boston, Massachusetts, United States of America
- Department of Biology, Northeastern University, Boston, Massachusetts, United States of America
- Department of Electrical and Computer Engineering, Northeastern University, Boston, Massachusetts, United States of America
- Department of Physics, Northeastern University, Boston, Massachusetts, United States of America
| |
Collapse
|
13
|
Zhu J, Wei B, Tian J, Jiang F, Yi C. An Adaptively Weighted Averaging Method for Regional Time Series Extraction of fMRI-Based Brain Decoding. IEEE J Biomed Health Inform 2024; 28:5984-5995. [PMID: 38990750 DOI: 10.1109/jbhi.2024.3426930] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/13/2024]
Abstract
Brain decoding, which classifies cognitive states from the functional fluctuations of the brain, can provide insightful information for understanding the brain mechanisms of cognitive functions. In the common pipeline for decoding cognitive states from functional magnetic resonance imaging (fMRI), the time series of each brain region is traditionally extracted after parcellation by averaging across the voxels within that region. This neglects the spatial information among the voxels and the information needs of downstream tasks. In this study, we propose a fully connected neural network, jointly trained with the brain decoder, that performs an adaptively weighted average across the voxels within each brain region. We perform extensive evaluations by cognitive state decoding, manifold learning, and interpretability analysis on the Human Connectome Project (HCP) dataset. The performance comparison of cognitive state decoding shows an accuracy increase of up to 5% and stable improvement under different time window sizes, resampling sizes, and training data sizes. The results of manifold learning show that our method yields considerable separability among cognitive states and largely excludes subject-specific information. The interpretability analysis shows that our method can identify reasonable brain regions corresponding to each cognitive state. Our study can thus aid the improvement of the basic fMRI processing pipeline.
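The core idea, replacing the uniform voxel average with learned, positive, normalized weights, can be sketched in a few lines (a toy numpy illustration on random data; the paper trains the weights jointly with the decoder rather than fixing them):

```python
import numpy as np

rng = np.random.default_rng(0)

T, V = 100, 30                      # time points, voxels in one parcel
X = rng.normal(size=(T, V))         # toy voxel time series for one region

def region_signal(X, w_logits):
    """Collapse voxels to one regional time series using adaptive weights.
    A softmax keeps the weights positive and summing to 1; the plain
    voxel average is the special case of equal logits."""
    w = np.exp(w_logits - w_logits.max())
    w /= w.sum()
    return X @ w                    # (T,) weighted regional time series

# Equal logits reproduce the traditional uniform average; training the
# logits jointly with the decoder (as in the paper) tilts the weights
# toward informative voxels.
uniform = region_signal(X, np.zeros(V))
```

With equal logits the weighted signal reduces exactly to the traditional mean, so the adaptive version strictly generalizes the standard pipeline.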
Collapse
|
14
|
Urbaniak R, Xie M, Mackevicius E. Linking cognitive strategy, neural mechanism, and movement statistics in group foraging behaviors. Sci Rep 2024; 14:21770. [PMID: 39294261 PMCID: PMC11411083 DOI: 10.1038/s41598-024-71931-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2023] [Accepted: 09/02/2024] [Indexed: 09/20/2024] Open
Abstract
Foraging for food is a rich and ubiquitous animal behavior that involves complex cognitive decisions and interactions between different individuals and species. There has been exciting recent progress in understanding multi-agent foraging behavior from cognitive, neuroscience, and statistical perspectives, but integrating these perspectives can be elusive. This paper seeks to unify them, allowing statistical analysis of observational animal movement data to shed light on the viability of cognitive models of foraging strategies. We start with cognitive agents whose internal preferences are expressed as value functions, and implement them both in a biologically plausible neural network and in an equivalent statistical model, where statistical predictors of the agents' movements correspond to the components of the value functions. We test this framework by simulating foraging agents and using Bayesian statistical modeling to correctly identify the factors that best predict the agents' behavior. As further validation, we use this framework to analyze an open-source locust foraging dataset. Finally, we collect new multi-agent real-world bird foraging data and apply this method to analyze the preferences of different species. Together, this work provides an initial roadmap for integrating cognitive, neuroscience, and statistical approaches to reasoning about animal foraging in complex multi-agent environments.
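The correspondence between value-function components and statistical predictors of movement can be illustrated with a toy recovery experiment (the feature names and parameter values are hypothetical, and plain maximum-likelihood gradient ascent stands in for the paper's Bayesian inference):

```python
import numpy as np

rng = np.random.default_rng(1)

# Each step offers two candidate patches, described by hypothetical
# value-function components (e.g., distance cost, food density, company).
n_steps, n_feat = 2000, 3
true_w = np.array([-1.5, 2.0, 0.5])               # generating preferences
F = rng.normal(size=(n_steps, 2, n_feat))         # features of options A, B

# The agent picks option B with probability sigmoid(w . (f_B - f_A)).
d = F[:, 1] - F[:, 0]
p_B = 1.0 / (1.0 + np.exp(-d @ true_w))
choice = (rng.random(n_steps) < p_B).astype(float)

# Recover the preferences by maximum likelihood on the same predictors.
w_hat = np.zeros(n_feat)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-d @ w_hat))
    w_hat += 0.1 * d.T @ (choice - p) / n_steps   # logistic-regression gradient
```

Recovering weights close to the generating ones is the kind of simulation-based check the authors use to validate the framework before turning to the locust and bird data.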
Collapse
Affiliation(s)
| | - Marjorie Xie
- Basis Research Institute, New York, 10026, USA
- Arizona State University, School for the Future of Innovation in Society, Tempe, 85287, USA
- New York Academy of Sciences, New York, 10006, USA
- Columbia University, New York, 10027, USA
| | - Emily Mackevicius
- Basis Research Institute, New York, 10026, USA.
- Columbia University, New York, 10027, USA.
| |
Collapse
|
15
|
Seo S, Bharmauria V, Schütz A, Yan X, Wang H, Crawford JD. Multiunit Frontal Eye Field Activity Codes the Visuomotor Transformation, But Not Gaze Prediction or Retrospective Target Memory, in a Delayed Saccade Task. eNeuro 2024; 11:ENEURO.0413-23.2024. [PMID: 39054056 PMCID: PMC11373882 DOI: 10.1523/eneuro.0413-23.2024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2023] [Revised: 07/16/2024] [Accepted: 07/18/2024] [Indexed: 07/27/2024] Open
Abstract
Single-unit (SU) activity (action potentials isolated from one neuron) has traditionally been employed to relate neuronal activity to behavior. However, recent investigations have shown that multiunit (MU) activity (ensemble neural activity recorded within the vicinity of one microelectrode) may also contain accurate estimations of task-related neural population dynamics. Here, using an established model-fitting approach, we compared the spatial codes of SU response fields with corresponding MU response fields recorded from the frontal eye fields (FEFs) in head-unrestrained monkeys (Macaca mulatta) during a memory-guided saccade task. Overall, both SU and MU populations showed a simple visuomotor transformation: the visual response coded target-in-eye coordinates, transitioning progressively during the delay toward a future gaze-in-eye code in the saccade motor response. However, the SU population showed additional secondary codes, including a predictive gaze code in the visual response and retention of a target code in the motor response. Further, when SUs were separated into regular/fast spiking neurons, these cell types showed different spatial code progressions during the late delay period, only converging toward gaze coding during the final saccade motor response. Finally, reconstructing MU populations (by summing SU data within the same sites) failed to replicate either the SU or MU pattern. These results confirm the theoretical and practical potential of MU activity recordings as a biomarker for fundamental sensorimotor transformations (e.g., target-to-gaze coding in the oculomotor system), while also highlighting the importance of SU activity for coding more subtle (e.g., predictive/memory) aspects of sensorimotor behavior.
Collapse
Affiliation(s)
- Serah Seo
- Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario M3J 1P3, Canada
| | - Vishal Bharmauria
- Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario M3J 1P3, Canada
- Department of Neurosurgery and Brain Repair, Morsani College of Medicine, University of South Florida, Tampa, Florida 33606
| | - Adrian Schütz
- Department of Neurophysics, Philipps-Universität Marburg, 35032 Marburg, Germany
- Center for Mind, Brain, and Behavior - CMBB, Philipps-Universität Marburg, 35032 Marburg, and Justus-Liebig-Universität Giessen, Giessen, Germany
| | - Xiaogang Yan
- Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario M3J 1P3, Canada
| | - Hongying Wang
- Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario M3J 1P3, Canada
| | - J Douglas Crawford
- Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario M3J 1P3, Canada
- Departments of Psychology, Biology, Kinesiology & Health Sciences, York University, Toronto, Ontario M3J 1P3, Canada
| |
Collapse
|
16
|
Zheng C, Tang E. A topological mechanism for robust and efficient global oscillations in biological networks. Nat Commun 2024; 15:6453. [PMID: 39085205 PMCID: PMC11291491 DOI: 10.1038/s41467-024-50510-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2023] [Accepted: 07/11/2024] [Indexed: 08/02/2024] Open
Abstract
Long and stable timescales are often observed in complex biochemical networks, such as in emergent oscillations. How these robust dynamics persist remains unclear, given the many stochastic reactions and shorter time scales demonstrated by underlying components. We propose a topological model that produces long oscillations around the network boundary, reducing the system dynamics to a lower-dimensional current in a robust manner. Using this to model KaiC, which regulates the circadian rhythm in cyanobacteria, we compare the coherence of oscillations to that in other KaiC models. Our topological model localizes currents on the system edge, with an efficient regime of simultaneously increased precision and decreased cost. Further, we introduce a new predictor of coherence from the analysis of spectral gaps, and show that our model saturates a global thermodynamic bound. Our work presents a new mechanism and parsimonious description for robust emergent oscillations in complex biological networks.
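The link between a spectral gap and oscillation coherence can be seen in a minimal ring-shaped master equation (a generic biased random walk on a cycle, not the authors' topological KaiC model): the slowest nonzero eigenvalue's imaginary part sets the oscillation frequency and its real part the decay rate, so their ratio counts coherent periods.

```python
import numpy as np

N, kf, kb = 12, 1.0, 0.1           # ring states, forward / backward rates
W = np.zeros((N, N))               # generator of the master equation dp/dt = W p
for i in range(N):
    W[(i + 1) % N, i] += kf        # hop forward around the ring
    W[(i - 1) % N, i] += kb        # hop backward
    W[i, i] -= kf + kb             # probability conservation

lam = np.linalg.eigvals(W)
lam = lam[np.argsort(-lam.real)]   # lam[0] ~ 0 is the steady state
lead = lam[1]                      # slowest decaying, oscillating mode

# Number of coherent periods before the oscillation decays away:
R = abs(lead.imag) / (2 * np.pi * abs(lead.real))
```

Stronger forward bias or localized boundary currents (as in the paper's topological mechanism) widen this ratio, yielding more coherent emergent oscillations.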
Collapse
Affiliation(s)
- Chongbin Zheng
- Center for Theoretical Biological Physics, Rice University, Houston, TX, 77005, USA
- Department of Physics and Astronomy, Rice University, Houston, TX, 77005, USA
| | - Evelyn Tang
- Center for Theoretical Biological Physics, Rice University, Houston, TX, 77005, USA.
- Department of Physics and Astronomy, Rice University, Houston, TX, 77005, USA.
| |
Collapse
|
17
|
Morales-Gregorio A, Kurth AC, Ito J, Kleinjohann A, Barthélemy FV, Brochier T, Grün S, van Albada SJ. Neural manifolds in V1 change with top-down signals from V4 targeting the foveal region. Cell Rep 2024; 43:114371. [PMID: 38923458 DOI: 10.1016/j.celrep.2024.114371] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2023] [Revised: 03/25/2024] [Accepted: 05/31/2024] [Indexed: 06/28/2024] Open
Abstract
High-dimensional brain activity is often organized into lower-dimensional neural manifolds. However, the neural manifolds of the visual cortex remain understudied. Here, we study large-scale multi-electrode electrophysiological recordings of macaque (Macaca mulatta) areas V1, V4, and DP with a high spatiotemporal resolution. We find that the population activity of V1 contains two separate neural manifolds, which correlate strongly with eye closure (eyes open/closed) and have distinct dimensionalities. Moreover, we find strong top-down signals from V4 to V1, particularly to the foveal region of V1, which are significantly stronger during the eyes-open periods. Finally, in silico simulations of a balanced spiking neuron network qualitatively reproduce the experimental findings. Taken together, our analyses and simulations suggest that top-down signals modulate the population activity of V1. We postulate that the top-down modulation during the eyes-open periods prepares V1 for fast and efficient visual responses, resulting in a type of visual stand-by state.
Collapse
Affiliation(s)
- Aitor Morales-Gregorio
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany; Institute of Zoology, University of Cologne, Cologne, Germany.
| | - Anno C Kurth
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany; RWTH Aachen University, Aachen, Germany
| | - Junji Ito
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany
| | - Alexander Kleinjohann
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany; Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
| | - Frédéric V Barthélemy
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany; Institut de Neurosciences de la Timone (INT), CNRS and Aix-Marseille Université, Marseille, France
| | - Thomas Brochier
- Institut de Neurosciences de la Timone (INT), CNRS and Aix-Marseille Université, Marseille, France
| | - Sonja Grün
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany; Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany; JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
| | - Sacha J van Albada
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany; Institute of Zoology, University of Cologne, Cologne, Germany
| |
Collapse
|
18
|
Tye KM, Miller EK, Taschbach FH, Benna MK, Rigotti M, Fusi S. Mixed selectivity: Cellular computations for complexity. Neuron 2024; 112:2289-2303. [PMID: 38729151 PMCID: PMC11257803 DOI: 10.1016/j.neuron.2024.04.017] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2023] [Revised: 03/08/2024] [Accepted: 04/12/2024] [Indexed: 05/12/2024]
Abstract
The property of mixed selectivity has been discussed at a computational level and offers a strategy to maximize computational power by adding versatility to the functional role of each neuron. Here, we offer a biologically grounded implementational-level mechanistic explanation for mixed selectivity in neural circuits. We define pure, linear, and nonlinear mixed selectivity and discuss how these response properties can be obtained in simple neural circuits. Neurons that respond to multiple, statistically independent variables display mixed selectivity. If their activity can be expressed as a weighted sum, then they exhibit linear mixed selectivity; otherwise, they exhibit nonlinear mixed selectivity. Neural representations based on diverse nonlinear mixed selectivity are high dimensional; hence, they confer enormous flexibility to a simple downstream readout neural circuit. However, a simple neural circuit cannot possibly encode all possible mixtures of variables simultaneously, as this would require a combinatorially large number of mixed selectivity neurons. Gating mechanisms like oscillations and neuromodulation can solve this problem by dynamically selecting which variables are mixed and transmitted to the readout.
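The definitions of pure, linear mixed, and nonlinear mixed selectivity translate directly into a small dimensionality check (a toy numpy sketch with two binary task variables; the specific response profiles are illustrative):

```python
import numpy as np

# Two binary task variables give four task conditions.
a, b = np.meshgrid([0.0, 1.0], [0.0, 1.0])
a, b = a.ravel(), b.ravel()                  # a = [0,1,0,1], b = [0,0,1,1]

def dimensionality(neurons):
    """Rank of the (conditions x neurons) response matrix."""
    return np.linalg.matrix_rank(np.column_stack(neurons))

pure = [a, b]                                # each neuron codes one variable
linear_mixed = [a + b, 2 * a - b]            # weighted sums of both variables
nonlinear_mixed = [a * b, a + b, a]          # includes a multiplicative term

# The extra dimension lets a *linear* readout compute XOR(a, b):
xor = (a + b) - 2 * (a * b)
```

Only the nonlinear-mixed population spans a third dimension, which is what allows a simple downstream linear readout to compute functions such as XOR that pure or linearly mixed populations cannot provide.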
Collapse
Affiliation(s)
- Kay M Tye
- Salk Institute for Biological Studies, La Jolla, CA, USA; Howard Hughes Medical Institute, La Jolla, CA; Department of Neurobiology, School of Biological Sciences, University of California, San Diego, La Jolla, CA 92093, USA; Kavli Institute for Brain and Mind, San Diego, CA, USA.
| | - Earl K Miller
- The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
| | - Felix H Taschbach
- Salk Institute for Biological Studies, La Jolla, CA, USA; Biological Science Graduate Program, University of California, San Diego, La Jolla, CA 92093, USA; Department of Neurobiology, School of Biological Sciences, University of California, San Diego, La Jolla, CA 92093, USA.
| | - Marcus K Benna
- Department of Neurobiology, School of Biological Sciences, University of California, San Diego, La Jolla, CA 92093, USA.
| | | | - Stefano Fusi
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Department of Neuroscience, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA.
| |
Collapse
|
19
|
Hermansen E, Klindt DA, Dunn BA. Uncovering 2-D toroidal representations in grid cell ensemble activity during 1-D behavior. Nat Commun 2024; 15:5429. [PMID: 38926360 PMCID: PMC11208534 DOI: 10.1038/s41467-024-49703-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2023] [Accepted: 06/13/2024] [Indexed: 06/28/2024] Open
Abstract
Minimal experiments, such as head-fixed wheel-running and sleep, offer experimental advantages but restrict the amount of observable behavior, making it difficult to classify functional cell types. Arguably, the grid cell, and its striking periodicity, would not have been discovered without the perspective provided by free behavior in an open environment. Here, we show that by shifting the focus from single neurons to populations, we change the minimal experimental complexity required. We identify grid cell modules and show that the activity covers a similar, stable toroidal state space during wheel running as in open field foraging. Trajectories on grid cell tori correspond to single trial runs in virtual reality and path integration in the dark, and the alignment of the representation rapidly shifts with changes in experimental conditions. Thus, we provide a methodology to discover and study complex internal representations in even the simplest of experiments.
Collapse
Affiliation(s)
- Erik Hermansen
- Department of Mathematical Sciences, NTNU, Trondheim, Norway.
| | - David A Klindt
- Department of Mathematical Sciences, NTNU, Trondheim, Norway
- Cold Spring Harbor Laboratory, Cold Spring Harbor, Laurel Hollow, New York, USA
| | - Benjamin A Dunn
- Department of Mathematical Sciences, NTNU, Trondheim, Norway.
| |
Collapse
|
20
|
Cisek P, Green AM. Toward a neuroscience of natural behavior. Curr Opin Neurobiol 2024; 86:102859. [PMID: 38583263 DOI: 10.1016/j.conb.2024.102859] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2023] [Accepted: 03/04/2024] [Indexed: 04/09/2024]
Abstract
One of the most exciting new developments in systems neuroscience is the progress being made toward neurophysiological experiments that move beyond simplified laboratory settings and address the richness of natural behavior. This is enabled by technological advances such as wireless recording in freely moving animals, automated quantification of behavior, and new methods for analyzing large data sets. Beyond new empirical methods and data, however, there is also a need for new theories and concepts to interpret that data. Such theories need to address the particular challenges of natural behavior, which often differ significantly from the scenarios studied in traditional laboratory settings. Here, we discuss some strategies for developing such novel theories and concepts and some example hypotheses being proposed.
Collapse
Affiliation(s)
- Paul Cisek
- Department of Neuroscience, University of Montréal, Montréal, Québec, Canada.
| | - Andrea M Green
- Department of Neuroscience, University of Montréal, Montréal, Québec, Canada
| |
Collapse
|
21
|
Manley J, Lu S, Barber K, Demas J, Kim H, Meyer D, Traub FM, Vaziri A. Simultaneous, cortex-wide dynamics of up to 1 million neurons reveal unbounded scaling of dimensionality with neuron number. Neuron 2024; 112:1694-1709.e5. [PMID: 38452763 PMCID: PMC11098699 DOI: 10.1016/j.neuron.2024.02.011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2022] [Revised: 05/18/2023] [Accepted: 02/14/2024] [Indexed: 03/09/2024]
Abstract
The brain's remarkable properties arise from the collective activity of millions of neurons. Widespread application of dimensionality reduction to multi-neuron recordings implies that neural dynamics can be approximated by low-dimensional "latent" signals reflecting neural computations. However, can such low-dimensional representations truly explain the vast range of brain activity, and if not, what is the appropriate resolution and scale of recording to capture them? Imaging neural activity at cellular resolution and near-simultaneously across the mouse cortex, we demonstrate an unbounded scaling of dimensionality with neuron number in populations up to 1 million neurons. Although half of the neural variance is contained within sixteen dimensions correlated with behavior, our discovered scaling of dimensionality corresponds to an ever-increasing number of neuronal ensembles without immediate behavioral or sensory correlates. The activity patterns underlying these higher dimensions are fine grained and cortex wide, highlighting that large-scale, cellular-resolution recording is required to uncover the full substrates of neuronal computations.
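The scaling claim can be illustrated with a toy population in which a few strong shared dimensions sit on top of fine-grained private variance (illustrative numpy sketch; dimensionality is counted here as principal components needed for 90% of variance, one of several estimators and not necessarily the paper's):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy population: a handful of strong shared dimensions plus weak,
# fine-grained private variance in every neuron.
T, N, K = 2000, 400, 5
latents = rng.normal(size=(K, T))                        # shared signals
X = rng.normal(size=(N, K)) @ latents + rng.normal(size=(N, T))

def n_dims(X, frac=0.90):
    """Principal components needed to capture `frac` of the variance."""
    lam = np.sort(np.linalg.eigvalsh(np.cov(X)))[::-1]
    return int(np.searchsorted(np.cumsum(lam) / lam.sum(), frac) + 1)

# Estimated dimensionality keeps growing as more neurons are sampled.
dims = [n_dims(X[:n]) for n in (50, 100, 200, 400)]
```

In the paper the higher dimensions are reliable, cortex-wide ensembles rather than noise; the toy only shows how the estimator keeps climbing when variance is spread across many weak dimensions instead of saturating at the few shared ones.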
Collapse
Affiliation(s)
- Jason Manley
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA; The Kavli Neural Systems Institute, The Rockefeller University, New York, NY 10065, USA
| | - Sihao Lu
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
| | - Kevin Barber
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
| | - Jeffrey Demas
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA; The Kavli Neural Systems Institute, The Rockefeller University, New York, NY 10065, USA
| | - Hyewon Kim
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
| | - David Meyer
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
| | - Francisca Martínez Traub
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
| | - Alipasha Vaziri
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA; The Kavli Neural Systems Institute, The Rockefeller University, New York, NY 10065, USA.
| |
Collapse
|
22
|
Fontenele AJ, Sooter JS, Norman VK, Gautam SH, Shew WL. Low-dimensional criticality embedded in high-dimensional awake brain dynamics. SCIENCE ADVANCES 2024; 10:eadj9303. [PMID: 38669340 PMCID: PMC11051676 DOI: 10.1126/sciadv.adj9303] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/24/2023] [Accepted: 03/26/2024] [Indexed: 04/28/2024]
Abstract
Whether cortical neurons operate in a strongly or weakly correlated dynamical regime determines fundamental information processing capabilities and has fueled decades of debate. We offer a resolution of this debate; we show that two important dynamical regimes, typically considered incompatible, can coexist in the same local cortical circuit by separating them into two different subspaces. In awake mouse motor cortex, we find a low-dimensional subspace with large fluctuations consistent with criticality, a dynamical regime with moderate correlations and multi-scale information capacity and transmission. Orthogonal to this critical subspace, we find a high-dimensional subspace containing a desynchronized dynamical regime, which may optimize input discrimination. The critical subspace is apparent only at long timescales, which explains discrepancies among some previous studies. Using a computational model, we show that the emergence of a low-dimensional critical subspace at large timescales agrees with established theory of critical dynamics. Our results suggest that the cortex leverages its high dimensionality to multiplex dynamical regimes across different subspaces.
Collapse
|
23
|
Meng R, Bouchard KE. Bayesian inference of structured latent spaces from neural population activity with the orthogonal stochastic linear mixing model. PLoS Comput Biol 2024; 20:e1011975. [PMID: 38669271 PMCID: PMC11078355 DOI: 10.1371/journal.pcbi.1011975] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2022] [Revised: 05/08/2024] [Accepted: 03/07/2024] [Indexed: 04/28/2024] Open
Abstract
The brain produces diverse functions, from perceiving sounds to producing arm reaches, through the collective activity of populations of many neurons. Determining if and how the features of these exogenous variables (e.g., sound frequency, reach angle) are reflected in population neural activity is important for understanding how the brain operates. Often, high-dimensional neural population activity is confined to low-dimensional latent spaces. However, many current methods fail to extract latent spaces that are clearly structured by exogenous variables. This has contributed to a debate about whether brains should be thought of as dynamical systems or representational systems. Here, we developed a new latent process Bayesian regression framework, the orthogonal stochastic linear mixing model (OSLMM), which introduces an orthogonality constraint among time-varying mixture coefficients, and we provide Markov chain Monte Carlo inference procedures. We demonstrate superior performance of OSLMM on latent trajectory recovery in synthetic experiments and show superior computational efficiency and prediction performance on several real-world benchmark data sets. We primarily focus on demonstrating the utility of OSLMM in two neural data sets: μECoG recordings from rat auditory cortex during presentation of pure tones and multiple single-unit recordings from monkey motor cortex during complex arm reaching. We show that OSLMM achieves superior or comparable predictive accuracy of neural data and decoding of external variables (e.g., reach velocity). Most importantly, in both experimental contexts, we demonstrate that OSLMM latent trajectories directly reflect features of the sounds and reaches, demonstrating that neural dynamics are structured by neural representations. Together, these results demonstrate that OSLMM will be useful for the analysis of diverse, large-scale biological time-series datasets.
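The distinguishing ingredient of OSLMM, an orthogonality constraint on the mixing weights, can be imposed with a standard polar-decomposition projection (a generic sketch of one way to enforce such a constraint; the paper's actual MCMC procedure is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(4)

def nearest_orthonormal(B):
    """Project a mixing matrix onto the set of matrices with orthonormal
    columns (the polar-decomposition / Procrustes step)."""
    U, _, Vt = np.linalg.svd(B, full_matrices=False)
    return U @ Vt

B = rng.normal(size=(20, 3))       # channels x latent mixing weights
Q = nearest_orthonormal(B)         # same column space, orthonormal columns
```

Orthonormal mixing keeps the recovered latent trajectories from being conflated by the mixing step, which is one intuition for why the constrained model yields latent spaces structured by external variables.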
Collapse
Affiliation(s)
- Rui Meng
- Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, California, United States of America
| | - Kristofer E. Bouchard
- Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, California, United States of America
- Scientific Data Division, Lawrence Berkeley National Laboratory, Berkeley, California, United States of America
- Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, California, United States of America
- Redwood Center for Theoretical Neuroscience, University of California Berkeley, Berkeley, California, United States of America
| |
Collapse
|
24
|
Churchland MM, Shenoy KV. Preparatory activity and the expansive null-space. Nat Rev Neurosci 2024; 25:213-236. [PMID: 38443626 DOI: 10.1038/s41583-024-00796-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/26/2024] [Indexed: 03/07/2024]
Abstract
The study of the cortical control of movement experienced a conceptual shift over recent decades, as the basic currency of understanding shifted from single-neuron tuning towards population-level factors and their dynamics. This transition was informed by a maturing understanding of recurrent networks, where mechanism is often characterized in terms of population-level factors. By estimating factors from data, experimenters could test network-inspired hypotheses. Central to such hypotheses are 'output-null' factors that do not directly drive motor outputs yet are essential to the overall computation. In this Review, we highlight how the hypothesis of output-null factors was motivated by the venerable observation that motor-cortex neurons are active during movement preparation, well before movement begins. We discuss how output-null factors then became similarly central to understanding neural activity during movement. We discuss how this conceptual framework provided key analysis tools, making it possible for experimenters to address long-standing questions regarding motor control. We highlight an intriguing trend: as experimental and theoretical discoveries accumulate, the range of computational roles hypothesized to be subserved by output-null factors continues to expand.
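The notion of output-null factors is concrete linear algebra: activity components in the null space of the readout matrix leave the output untouched. A minimal numpy sketch (the dimensions and the random readout are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

N, M = 10, 2                       # neurons, output (e.g., muscle) channels
W = rng.normal(size=(M, N))        # linear readout: output = W @ neural_state

# Basis for the output-null space: the last N - M right singular vectors.
null_basis = np.linalg.svd(W)[2][M:].T          # (N, N - M)

# "Preparatory" activity built from null directions drives no output...
prep = null_basis @ rng.normal(size=(N - M,))
# ...while a generic population state does.
move = rng.normal(size=(N,))
```

This is the sense in which motor cortex can be vigorously active during preparation without causing movement: preparatory factors can occupy output-null dimensions of the readout.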
Collapse
Affiliation(s)
- Mark M Churchland
- Department of Neuroscience, Columbia University, New York, NY, USA.
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA.
- Kavli Institute for Brain Science, Columbia University, New York, NY, USA.
| | - Krishna V Shenoy
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Department of Neurobiology, Stanford University, Stanford, CA, USA
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Bio-X Institute, Stanford University, Stanford, CA, USA
- Howard Hughes Medical Institute at Stanford University, Stanford, CA, USA
| |
Collapse
|
25
|
Bush NE, Ramirez JM. Latent neural population dynamics underlying breathing, opioid-induced respiratory depression and gasping. Nat Neurosci 2024; 27:259-271. [PMID: 38182835 PMCID: PMC10849970 DOI: 10.1038/s41593-023-01520-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Accepted: 11/06/2023] [Indexed: 01/07/2024]
Abstract
Breathing is vital and must be concurrently robust and flexible. This rhythmic behavior is generated and maintained within a rostrocaudally aligned set of medullary nuclei called the ventral respiratory column (VRC). The rhythmic properties of individual VRC nuclei are well known, yet technical challenges have limited the interrogation of the entire VRC population simultaneously. Here we characterize over 15,000 medullary units using high-density electrophysiology, opto-tagging and histological reconstruction. Population dynamics analysis reveals consistent rotational trajectories through a low-dimensional neural manifold. These rotations are robust and maintained even during opioid-induced respiratory depression. During severe hypoxia-induced gasping, the low-dimensional dynamics of the VRC reconfigure from rotational to all-or-none, ballistic efforts. Thus, latent dynamics provide a unifying lens onto the activities of large, heterogeneous populations of neurons involved in the simple, yet vital, behavior of breathing, and well describe how these populations respond to a variety of perturbations.
Collapse
Affiliation(s)
- Nicholas Edward Bush
- Center for Integrative Brain Research, Seattle Children's Research Institute, Seattle, WA, USA
| | - Jan-Marino Ramirez
- Center for Integrative Brain Research, Seattle Children's Research Institute, Seattle, WA, USA.
- Department of Pediatrics, University of Washington, Seattle, WA, USA.
- Department of Neurological Surgery, University of Washington, Seattle, WA, USA.
| |
Collapse
|
26
|
Manley J, Demas J, Kim H, Traub FM, Vaziri A. Simultaneous, cortex-wide and cellular-resolution neuronal population dynamics reveal an unbounded scaling of dimensionality with neuron number. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.01.15.575721. [PMID: 38293036 PMCID: PMC10827059 DOI: 10.1101/2024.01.15.575721] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2024]
Abstract
The brain's remarkable properties arise from collective activity of millions of neurons. Widespread application of dimensionality reduction to multi-neuron recordings implies that neural dynamics can be approximated by low-dimensional "latent" signals reflecting neural computations. However, what would be the biological utility of such a redundant and metabolically costly encoding scheme and what is the appropriate resolution and scale of neural recording to understand brain function? Imaging the activity of one million neurons at cellular resolution and near-simultaneously across mouse cortex, we demonstrate an unbounded scaling of dimensionality with neuron number. While half of the neural variance lies within sixteen behavior-related dimensions, we find this unbounded scaling of dimensionality to correspond to an ever-increasing number of internal variables without immediate behavioral correlates. The activity patterns underlying these higher dimensions are fine-grained and cortex-wide, highlighting that large-scale recording is required to uncover the full neural substrates of internal and potentially cognitive processes.
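The scaling result described here can be illustrated with a toy participation-ratio calculation, a standard effective-dimensionality measure: estimated dimensionality keeps growing as more neurons are sampled. This is a minimal sketch on synthetic data with illustrative parameters, not the authors' recordings or analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def participation_ratio(X):
    """Effective dimensionality of a data matrix X (time x neurons),
    computed from the eigenvalue spectrum of the covariance."""
    lam = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    lam = np.clip(lam, 0, None)
    return lam.sum() ** 2 / (lam ** 2).sum()

# Toy population: many latent dimensions with slowly decaying variance,
# mixed into N neurons plus private noise (parameters are illustrative).
T, N, K = 2000, 500, 400
latents = rng.standard_normal((T, K)) * (np.arange(1, K + 1) ** -0.5)
mixing = rng.standard_normal((K, N)) / np.sqrt(K)
activity = latents @ mixing + 0.1 * rng.standard_normal((T, N))

# The dimensionality estimate keeps growing as more neurons are sampled,
# echoing the abstract's unbounded-scaling observation.
for n in (50, 200, 500):
    print(n, round(participation_ratio(activity[:, :n]), 1))
```

With a fixed-dimensional latent signal the estimate would saturate; here the weak high-order dimensions keep contributing as the sampled population grows.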
Affiliation(s)
- Jason Manley
  - Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
  - The Kavli Neural Systems Institute, The Rockefeller University, New York, NY 10065, USA
- Jeffrey Demas
  - Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
  - The Kavli Neural Systems Institute, The Rockefeller University, New York, NY 10065, USA
- Hyewon Kim
  - Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- Francisca Martínez Traub
  - Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- Alipasha Vaziri
  - Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
  - The Kavli Neural Systems Institute, The Rockefeller University, New York, NY 10065, USA
  - Lead Contact

27
Idesis S, Geli S, Faskowitz J, Vohryzek J, Sanz Perl Y, Pieper F, Galindo-Leon E, Engel AK, Deco G. Functional hierarchies in brain dynamics characterized by signal reversibility in ferret cortex. PLoS Comput Biol 2024; 20:e1011818. [PMID: 38241383] [PMCID: PMC10836715] [DOI: 10.1371/journal.pcbi.1011818]
Abstract
Brain signal irreversibility has been shown to be a promising approach to study neural dynamics. Nevertheless, its relation to cortical hierarchy and the influence of different electrophysiological features are not completely understood. In this study, we recorded local field potentials (LFPs) during spontaneous behavior, including awake and sleep periods, using custom micro-electrocorticographic (μECoG) arrays implanted in ferrets. In contrast to humans, ferrets spend less time in each state across the sleep-wake cycle. We deployed a diverse set of metrics to measure the complexity of the different behavioral states. In particular, brain irreversibility, a signature of non-equilibrium dynamics captured by the arrow of time of the signal, revealed the hierarchical organization of the ferret's cortex. We found different signatures of irreversibility and functional hierarchy of large-scale dynamics in three brain states (active awake, quiet awake, and deep sleep), with a lower level of irreversibility in deep sleep than in the other two. Irreversibility also allowed us to disentangle the influence of different cortical areas and frequency bands in this process, showing a predominance of the parietal cortex and the theta band. Furthermore, when inspecting the embedded dynamics with a Hidden Markov Model, the deep sleep stage showed a lower switching rate and lower entropy production. These results suggest a functional hierarchy of organization that can be revealed through thermodynamic features and information-theoretic metrics.
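The "arrow of time" idea can be made concrete with a toy asymmetry index: compare a lagged cross-correlation computed forward in time with the same quantity on time-reversed signals. This is a simplified sketch in the spirit of the irreversibility measure described above, with synthetic signals and illustrative parameters, not the authors' exact pipeline.

```python
import numpy as np

def irreversibility(x, y, lag=1):
    """Toy arrow-of-time index: squared difference between the lagged
    correlation of the forward signals and that of their time-reversed
    versions. Nonzero values indicate temporal asymmetry, a signature
    of non-equilibrium dynamics."""
    def lagged_corr(a, b):
        return np.corrcoef(a[:-lag], b[lag:])[0, 1]
    forward = lagged_corr(x, y)
    backward = lagged_corr(x[::-1], y[::-1])
    return (forward - backward) ** 2

rng = np.random.default_rng(1)
T = 5000
x = np.convolve(rng.standard_normal(T), np.ones(5) / 5, mode="same")
y = np.roll(x, 2) + 0.1 * rng.standard_normal(T)  # delayed, noisy copy

print(irreversibility(x, y))  # clearly nonzero: x drives y
print(irreversibility(x, x))  # near zero: autocorrelation is symmetric
```

A signal paired with a delayed copy of itself is temporally asymmetric (the index is nonzero), whereas any signal against itself is symmetric under time reversal, so the index vanishes.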
Affiliation(s)
- Sebastian Idesis
  - Center for Brain and Cognition (CBC), Department of Information Technologies and Communications (DTIC), Pompeu Fabra University, Edifici Mercè Rodoreda, Barcelona, Catalonia, Spain
- Sebastián Geli
  - Center for Brain and Cognition (CBC), Department of Information Technologies and Communications (DTIC), Pompeu Fabra University, Edifici Mercè Rodoreda, Barcelona, Catalonia, Spain
- Joshua Faskowitz
  - Department of Psychological and Brain Sciences, Indiana University Bloomington, Bloomington, Indiana, United States of America
- Jakub Vohryzek
  - Center for Brain and Cognition (CBC), Department of Information Technologies and Communications (DTIC), Pompeu Fabra University, Edifici Mercè Rodoreda, Barcelona, Catalonia, Spain
  - Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, United Kingdom
- Yonatan Sanz Perl
  - Center for Brain and Cognition (CBC), Department of Information Technologies and Communications (DTIC), Pompeu Fabra University, Edifici Mercè Rodoreda, Barcelona, Catalonia, Spain
  - National Scientific and Technical Research Council, Buenos Aires, Argentina
  - Institut du Cerveau et de la Moelle épinière, ICM, Paris, France
- Florian Pieper
  - Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Edgar Galindo-Leon
  - Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Andreas K. Engel
  - Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Gustavo Deco
  - Center for Brain and Cognition (CBC), Department of Information Technologies and Communications (DTIC), Pompeu Fabra University, Edifici Mercè Rodoreda, Barcelona, Catalonia, Spain
  - Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Catalonia, Spain

28
Elmoznino E, Bonner MF. High-performing neural network models of visual cortex benefit from high latent dimensionality. PLoS Comput Biol 2024; 20:e1011792. [PMID: 38198504] [PMCID: PMC10805290] [DOI: 10.1371/journal.pcbi.1011792]
Abstract
Geometric descriptions of deep neural networks (DNNs) have the potential to uncover core representational principles of computational models in neuroscience. Here we examined the geometry of DNN models of visual cortex by quantifying the latent dimensionality of their natural image representations. A popular view holds that optimal DNNs compress their representations onto low-dimensional subspaces to achieve invariance and robustness, which suggests that better models of visual cortex should have lower dimensional geometries. Surprisingly, we found a strong trend in the opposite direction: neural networks with high-dimensional image subspaces tended to have better generalization performance when predicting cortical responses to held-out stimuli in both monkey electrophysiology and human fMRI data. Moreover, we found that high dimensionality was associated with better performance when learning new categories of stimuli, suggesting that higher dimensional representations are better suited to generalize beyond their training domains. These findings suggest a general principle whereby high-dimensional geometry confers computational benefits to DNN models of visual cortex.
Affiliation(s)
- Eric Elmoznino
  - Department of Cognitive Science, Johns Hopkins University, Baltimore, Maryland, United States of America
- Michael F. Bonner
  - Department of Cognitive Science, Johns Hopkins University, Baltimore, Maryland, United States of America

29
Hatsopoulos N, Moore D, MacLean J, Walker J. A dynamic subset of network interactions underlies tuning to natural movements in marmoset sensorimotor cortex. Research Square 2023:rs.3.rs-3750312. [PMID: 38234779] [PMCID: PMC10793486] [DOI: 10.21203/rs.3.rs-3750312/v1]
Abstract
Mechanisms of computation in sensorimotor cortex must be flexible and robust to support skilled motor behavior. Patterns of neuronal coactivity emerge as a result of computational processes. Pairwise spike-time statistical relationships, across the population, can be summarized as a functional network (FN) which retains single-unit properties. We record populations of single-unit neural activity in forelimb sensorimotor cortex during prey-capture and spontaneous behavior and use an encoding model incorporating kinematic trajectories and network features to predict single-unit activity during forelimb movements. The contribution of network features depends on structured connectivity within strongly connected functional groups. We identify a context-specific functional group that is highly tuned to kinematics and reorganizes its connectivity between spontaneous and prey-capture movements. In the remaining context-invariant group, interactions are comparatively stable across behaviors and units are less tuned to kinematics. This suggests different roles in producing natural forelimb movements and contextualizes single-unit tuning properties within population dynamics.
30
Capouskova K, Zamora-López G, Kringelbach ML, Deco G. Integration and segregation manifolds in the brain ensure cognitive flexibility during tasks and rest. Hum Brain Mapp 2023; 44:6349-6363. [PMID: 37846551] [PMCID: PMC10681658] [DOI: 10.1002/hbm.26511]
Abstract
Adapting to a constantly changing environment requires the human brain to flexibly switch among many demanding cognitive tasks, processing both specialized and integrated information associated with the activity in functional networks over time. In this study, we investigated the nature of the temporal alternation between segregated and integrated states in the brain during rest and six cognitive tasks using functional MRI. We employed a deep autoencoder to explore the 2D latent space associated with the segregated and integrated states. Our results show that the integrated state occupies less space in the latent space manifold compared to the segregated states. Moreover, the integrated state is characterized by lower entropy of occupancy than the segregated state, suggesting that integration plays a consolidating role, while segregation may support cognitive specialization. Comparing rest and the tasks, we found that rest exhibits higher entropy of occupancy, indicating a more random wandering of the mind compared to the expected focus during task performance. Our study demonstrates that both transient, short-lived integrated and segregated states are present during rest and task performance, flexibly switching between them, with integration serving as information compression and segregation related to information specialization.
Affiliation(s)
- Katerina Capouskova
  - Center for Brain and Cognition, Computational Neuroscience Group, DTIC, Universitat Pompeu Fabra, Barcelona, Spain
- Gorka Zamora-López
  - Center for Brain and Cognition, Computational Neuroscience Group, DTIC, Universitat Pompeu Fabra, Barcelona, Spain
- Morten L. Kringelbach
  - Department of Psychiatry, University of Oxford, Oxford, United Kingdom
  - Center for Music in the Brain, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
  - Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, United Kingdom
- Gustavo Deco
  - Center for Brain and Cognition, Computational Neuroscience Group, DTIC, Universitat Pompeu Fabra, Barcelona, Spain
  - Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain

31
Rizzoglio F, Altan E, Ma X, Bodkin KL, Dekleva BM, Solla SA, Kennedy A, Miller LE. From monkeys to humans: observation-based EMG brain-computer interface decoders for humans with paralysis. J Neural Eng 2023; 20:056040. [PMID: 37844567] [PMCID: PMC10618714] [DOI: 10.1088/1741-2552/ad038e]
Abstract
Objective. Intracortical brain-computer interfaces (iBCIs) aim to enable individuals with paralysis to control the movement of virtual limbs and robotic arms. Because patients' paralysis prevents training a direct neural activity to limb movement decoder, most iBCIs rely on 'observation-based' decoding in which the patient watches a moving cursor while mentally envisioning making the movement. However, this reliance on observed target motion for decoder development precludes its application to the prediction of unobservable motor output like muscle activity. Here, we ask whether recordings of muscle activity from a surrogate individual performing the same movement as the iBCI patient can be used as the target for an iBCI decoder. Approach. We test two possible approaches, each using data from a human iBCI user and a monkey, both performing similar motor actions. In one approach, we trained a decoder to predict the electromyographic (EMG) activity of a monkey from neural signals recorded from a human. We then contrast this with a second approach, based on the hypothesis that the low-dimensional 'latent' neural representations of motor behavior, known to be preserved across time for a given behavior, might also be preserved across individuals. We 'transferred' an EMG decoder trained solely on monkey data to the human iBCI user after using Canonical Correlation Analysis to align the human latent signals to those of the monkey. Main results. We found that both direct and transfer decoding approaches allowed accurate EMG predictions between two monkeys and from a monkey to a human. Significance. Our findings suggest that these latent representations of behavior are consistent across animals and even primate species. These methods are an important initial step in the development of iBCI decoders that generate EMG predictions that could serve as signals for a biomimetic decoder controlling motion and impedance of a prosthetic arm, or even muscle force directly through functional electrical stimulation.
Affiliation(s)
- Fabio Rizzoglio
  - Department of Neuroscience, Northwestern University, Chicago, IL, United States of America
- Ege Altan
  - Department of Neuroscience, Northwestern University, Chicago, IL, United States of America
  - Department of Biomedical Engineering, Northwestern University, Evanston, IL, United States of America
- Xuan Ma
  - Department of Neuroscience, Northwestern University, Chicago, IL, United States of America
- Kevin L Bodkin
  - Department of Neuroscience, Northwestern University, Chicago, IL, United States of America
- Brian M Dekleva
  - Rehab Neural Engineering Labs, Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, PA, United States of America
- Sara A Solla
  - Department of Neuroscience, Northwestern University, Chicago, IL, United States of America
  - Department of Physics and Astronomy, Northwestern University, Evanston, IL, United States of America
- Ann Kennedy
  - Department of Neuroscience, Northwestern University, Chicago, IL, United States of America
- Lee E Miller
  - Department of Neuroscience, Northwestern University, Chicago, IL, United States of America
  - Department of Biomedical Engineering, Northwestern University, Evanston, IL, United States of America
  - Shirley Ryan AbilityLab, Chicago, IL, United States of America
  - Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL, United States of America

32
De A, Chaudhuri R. Common population codes produce extremely nonlinear neural manifolds. Proc Natl Acad Sci U S A 2023; 120:e2305853120. [PMID: 37733742] [PMCID: PMC10523500] [DOI: 10.1073/pnas.2305853120]
Abstract
Populations of neurons represent sensory, motor, and cognitive variables via patterns of activity distributed across the population. The size of the population used to encode a variable is typically much greater than the dimension of the variable itself, and thus, the corresponding neural population activity occupies lower-dimensional subsets of the full set of possible activity states. Given population activity data with such lower-dimensional structure, a fundamental question asks how close the low-dimensional data lie to a linear subspace. The linearity or nonlinearity of the low-dimensional structure reflects important computational features of the encoding, such as robustness and generalizability. Moreover, identifying such linear structure underlies common data analysis methods such as Principal Component Analysis (PCA). Here, we show that for data drawn from many common population codes the resulting point clouds and manifolds are exceedingly nonlinear, with the dimension of the best-fitting linear subspace growing at least exponentially with the true dimension of the data. Consequently, linear methods like PCA fail dramatically at identifying the true underlying structure, even in the limit of arbitrarily many data points and no noise.
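The core claim, that linear methods can drastically overestimate the dimension of a nonlinear code, is easy to reproduce with the classic example of a 1D ring code: neurons with bump tuning to a circular variable. The construction below is a toy with illustrative parameters, not the paper's analysis, but it shows PCA requiring many components for data whose intrinsic dimension is 1.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 5000
prefs = np.linspace(0, 2 * np.pi, N, endpoint=False)  # preferred angles
theta = rng.uniform(0, 2 * np.pi, T)                  # encoded variable

# Von Mises (bump) tuning; narrow width makes the manifold strongly
# nonlinear even though it is intrinsically one-dimensional.
width = 0.1
activity = np.exp((np.cos(theta[:, None] - prefs[None, :]) - 1) / width)

# Number of principal components needed to capture 90% of the variance.
lam = np.linalg.eigvalsh(np.cov(activity, rowvar=False))[::-1]
frac = np.cumsum(lam) / lam.sum()
linear_dim = int(np.searchsorted(frac, 0.90)) + 1
print(linear_dim)  # well above the intrinsic dimension of 1
```

Narrowing `width` further pushes variance into higher Fourier harmonics of the ring, inflating the best-fitting linear subspace while the true latent variable remains a single angle.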
Affiliation(s)
- Anandita De
  - Center for Neuroscience, University of California, Davis, CA 95618
  - Department of Physics, University of California, Davis, CA 95616
- Rishidev Chaudhuri
  - Center for Neuroscience, University of California, Davis, CA 95618
  - Department of Neurobiology, Physiology and Behavior, University of California, Davis, CA 95616
  - Department of Mathematics, University of California, Davis, CA 95616

33
Clark DG, Abbott LF, Litwin-Kumar A. Dimension of Activity in Random Neural Networks. Phys Rev Lett 2023; 131:118401. [PMID: 37774280] [DOI: 10.1103/PhysRevLett.131.118401]
Abstract
Neural networks are high-dimensional nonlinear dynamical systems that process information through the coordinated activity of many connected units. Understanding how biological and machine-learning networks function and learn requires knowledge of the structure of this coordinated activity, information contained, for example, in cross covariances between units. Self-consistent dynamical mean field theory (DMFT) has elucidated several features of random neural networks, in particular that they can generate chaotic activity; however, a calculation of cross covariances using this approach has not been provided. Here, we calculate cross covariances self-consistently via a two-site cavity DMFT. We use this theory to probe spatiotemporal features of activity coordination in a classic random-network model with independent and identically distributed (i.i.d.) couplings, showing an extensive but fractionally low effective dimension of activity and a long population-level timescale. Our formulas apply to a wide range of single-unit dynamics and generalize to non-i.i.d. couplings. As an example of the latter, we analyze the case of partially symmetric couplings.
Affiliation(s)
- David G Clark
  - Zuckerman Institute, Department of Neuroscience, Columbia University, New York, New York 10027, USA
- L F Abbott
  - Zuckerman Institute, Department of Neuroscience, Columbia University, New York, New York 10027, USA
- Ashok Litwin-Kumar
  - Zuckerman Institute, Department of Neuroscience, Columbia University, New York, New York 10027, USA

34
Kass RE, Bong H, Olarinre M, Xin Q, Urban KN. Identification of interacting neural populations: methods and statistical considerations. J Neurophysiol 2023; 130:475-496. [PMID: 37465897] [PMCID: PMC10642974] [DOI: 10.1152/jn.00131.2023]
Abstract
As improved recording technologies have created new opportunities for neurophysiological investigation, emphasis has shifted from individual neurons to multiple populations that form circuits, and it has become important to provide evidence of cross-population coordinated activity. We review various methods for doing so, placing them in six major categories while avoiding technical descriptions and instead focusing on high-level motivations and concerns. Our aim is to indicate what the methods can achieve and the circumstances under which they are likely to succeed. Toward this end, we include a discussion of four cross-cutting issues: the definition of neural populations, trial-to-trial variability and Poisson-like noise, time-varying dynamics, and causality.
Affiliation(s)
- Robert E Kass
  - Machine Learning Department, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
  - Neuroscience Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
  - Department of Statistics & Data Science, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
- Heejong Bong
  - Department of Statistics & Data Science, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
- Motolani Olarinre
  - Machine Learning Department, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
  - Department of Statistics & Data Science, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
- Qi Xin
  - Neuroscience Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
  - Department of Statistics & Data Science, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
- Konrad N Urban
  - Neuroscience Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
  - Department of Statistics & Data Science, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States

35
Pinotsis DA, Miller EK. In vivo ephaptic coupling allows memory network formation. Cereb Cortex 2023; 33:9877-9895. [PMID: 37420330] [PMCID: PMC10472500] [DOI: 10.1093/cercor/bhad251]
Abstract
It is increasingly clear that memories are distributed across multiple brain areas. Such "engram complexes" are important features of memory formation and consolidation. Here, we test the hypothesis that engram complexes are formed in part by bioelectric fields that sculpt and guide the neural activity and tie together the areas that participate in engram complexes. Like the conductor of an orchestra, the fields influence each musician or neuron and orchestrate the output, the symphony. Our results use the theory of synergetics, machine learning, and data from a spatial delayed saccade task and provide evidence for in vivo ephaptic coupling in memory representations.
Affiliation(s)
- Dimitris A Pinotsis
  - Department of Psychology, Centre for Mathematical Neuroscience and Psychology, University of London, London EC1V 0HB, United Kingdom
  - The Picower Institute for Learning & Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, United States
- Earl K Miller
  - The Picower Institute for Learning & Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, United States

36
Chien JM, Wallis JD, Rich EL. Abstraction of Reward Context Facilitates Relative Reward Coding in Neural Populations of the Macaque Anterior Cingulate Cortex. J Neurosci 2023; 43:5944-5962. [PMID: 37495383] [PMCID: PMC10436688] [DOI: 10.1523/jneurosci.0292-23.2023]
Abstract
The anterior cingulate cortex (ACC) is believed to be involved in many cognitive processes, including linking goals to actions and tracking decision-relevant contextual information. ACC neurons robustly encode expected outcomes, but how this relates to putative functions of ACC remains unknown. Here, we approach this question from the perspective of population codes by analyzing neural spiking data in the ventral and dorsal banks of the ACC in two male monkeys trained to perform a stimulus-motor mapping task to earn rewards or avoid losses. We found that neural populations favor a low dimensional representational geometry that emphasizes the valence of potential outcomes while also facilitating the independent, abstract representation of multiple task-relevant variables. Valence encoding persisted throughout the trial, and realized outcomes were primarily encoded in a relative sense, such that cue valence acted as a context for outcome encoding. This suggests that the population coding we observe could be a mechanism that allows feedback to be interpreted in a context-dependent manner. Together, our results point to a prominent role for ACC in context setting and relative interpretation of outcomes, facilitated by abstract, or untangled, representations of task variables. SIGNIFICANCE STATEMENT: The ability to interpret events in light of the current context is a critical facet of higher-order cognition. The ACC is suggested to be important for tracking contextual information, whereas alternate views hold that its function is more related to the motor system and linking goals to appropriate actions. We evaluated these possibilities by analyzing geometric properties of neural population activity in monkey ACC when contexts were determined by the valence of potential outcomes and found that this information was represented as a dominant, abstract concept. Ensuing outcomes were then coded relative to these contexts, suggesting an important role for these representations in context-dependent evaluation. Such mechanisms may be critical for the abstract reasoning and generalization characteristic of biological intelligence.
Affiliation(s)
- Jonathan M Chien
  - Nash Family Department of Neuroscience and Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, New York 10029
  - Department of Neuroscience and Physiology, New York University Grossman School of Medicine, New York, New York 10016
- Joni D Wallis
  - Department of Psychology and Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, California 94720
- Erin L Rich
  - Nash Family Department of Neuroscience and Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, New York 10029

37
Forró C, Musall S, Montes VR, Linkhorst J, Walter P, Wessling M, Offenhäusser A, Ingebrandt S, Weber Y, Lampert A, Santoro F. Toward the Next Generation of Neural Iontronic Interfaces. Adv Healthc Mater 2023; 12:e2301055. [PMID: 37434349] [PMCID: PMC11468917] [DOI: 10.1002/adhm.202301055]
Abstract
Neural interfaces are evolving at a rapid pace owing to advances in material science and fabrication, reduced cost of scalable complementary metal oxide semiconductor (CMOS) technologies, and highly interdisciplinary teams of researchers and engineers that span a large range from basic to applied and clinical sciences. This study outlines currently established technologies, defined as instruments and biological study systems that are routinely used in neuroscientific research. After identifying the shortcomings of current technologies, such as a lack of biocompatibility, topological optimization, low bandwidth, and lack of transparency, it maps out promising directions along which progress should be made to achieve the next generation of symbiotic and intelligent neural interfaces. Lastly, it proposes novel applications that can be achieved by these developments, ranging from the understanding and reproduction of synaptic learning to lifelong multimodal measurements to monitor and treat various neuronal disorders.
Affiliation(s)
- Csaba Forró
  - Institute for Biological Information Processing - Bioelectronics IBI-3, Wilhelm-Johnen-Straße, 52428 Jülich, Germany
  - Institute of Materials in Electrical Engineering 1, RWTH Aachen, Sommerfeldstr. 24, 52074 Aachen, Germany
- Simon Musall
  - Institute for Biological Information Processing - Bioelectronics IBI-3, Wilhelm-Johnen-Straße, 52428 Jülich, Germany
  - Institute for Zoology, RWTH Aachen University, Worringerweg 3, 52074 Aachen, Germany
- Viviana Rincón Montes
  - Institute for Biological Information Processing - Bioelectronics IBI-3, Wilhelm-Johnen-Straße, 52428 Jülich, Germany
- John Linkhorst
  - Chemical Process Engineering, RWTH Aachen, Forckenbeckstr. 51, 52074 Aachen, Germany
- Peter Walter
  - Department of Ophthalmology, University Hospital RWTH Aachen, Pauwelsstr. 30, 52074 Aachen, Germany
- Matthias Wessling
  - Chemical Process Engineering, RWTH Aachen, Forckenbeckstr. 51, 52074 Aachen, Germany
  - DWI Leibniz Institute for Interactive Materials, RWTH Aachen, Forckenbeckstr. 50, 52074 Aachen, Germany
- Andreas Offenhäusser
  - Institute for Biological Information Processing - Bioelectronics IBI-3, Wilhelm-Johnen-Straße, 52428 Jülich, Germany
- Sven Ingebrandt
  - Institute of Materials in Electrical Engineering 1, RWTH Aachen, Sommerfeldstr. 24, 52074 Aachen, Germany
- Yvonne Weber
  - Department of Epileptology, Neurology, RWTH Aachen, Pauwelsstr. 30, 52074 Aachen, Germany
- Angelika Lampert
  - Institute of Neurophysiology, Uniklinik RWTH Aachen, Pauwelsstrasse 30, 52074 Aachen, Germany
- Francesca Santoro
  - Institute for Biological Information Processing - Bioelectronics IBI-3, Wilhelm-Johnen-Straße, 52428 Jülich, Germany
  - Institute of Materials in Electrical Engineering 1, RWTH Aachen, Sommerfeldstr. 24, 52074 Aachen, Germany

38
Fontenele AJ, Sooter JS, Norman VK, Gautam SH, Shew WL. Low dimensional criticality embedded in high dimensional awake brain dynamics. bioRxiv 2023:2023.01.05.522896. [PMID: 37546833] [PMCID: PMC10401950] [DOI: 10.1101/2023.01.05.522896]
Abstract
Whether cortical neurons operate in a strongly or weakly correlated dynamical regime determines fundamental information processing capabilities and has fueled decades of debate. Here we offer a resolution of this debate; we show that two important dynamical regimes, typically considered incompatible, can coexist in the same local cortical circuit by separating them into two different subspaces. In awake mouse motor cortex, we find a low-dimensional subspace with large fluctuations consistent with criticality - a dynamical regime with moderate correlations and multi-scale information capacity and transmission. Orthogonal to this critical subspace, we find a high-dimensional subspace containing a desynchronized dynamical regime, which may optimize input discrimination. The critical subspace is apparent only at long timescales, which explains discrepancies among some previous studies. Using a computational model, we show that the emergence of a low-dimensional critical subspace at large timescale agrees with established theory of critical dynamics. Our results suggest that cortex leverages its high dimensionality to multiplex dynamical regimes across different subspaces.
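The notion of criticality invoked in this abstract can be illustrated with the standard toy model from the theory it cites: a branching process whose branching ratio sigma = 1 separates heavy-tailed "avalanches" from quickly dying subcritical activity. The sketch below is purely illustrative (the paper analyzes recorded cortical dynamics, not this model), and all parameters are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def avalanche_size(sigma, rng, cap=10_000):
    """Total activity of one avalanche in a Poisson branching process."""
    active, size = 1, 1
    while active and size < cap:
        active = rng.poisson(sigma * active)  # offspring of current generation
        size += active
    return size

# Critical (sigma = 1) vs subcritical (sigma = 0.7) dynamics.
crit = np.array([avalanche_size(1.0, rng) for _ in range(2000)])
sub = np.array([avalanche_size(0.7, rng) for _ in range(2000)])

# At criticality the avalanche-size distribution has a power-law tail, so
# large events are common; subcritical cascades die out exponentially fast.
frac_large_crit = np.mean(crit >= 100)
frac_large_sub = np.mean(sub >= 100)
```

The contrast between the two fractions is the multi-scale signature the paper looks for in the low-dimensional subspace of awake cortical activity.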
Collapse
Affiliation(s)
- Antonio J. Fontenele
- UA Integrative Systems Neuroscience Group, Department of Physics, University of Arkansas, Fayetteville, AR, USA, 72701
| | - J. Samuel Sooter
- UA Integrative Systems Neuroscience Group, Department of Physics, University of Arkansas, Fayetteville, AR, USA, 72701
| | - V. Kindler Norman
- UA Integrative Systems Neuroscience Group, Department of Physics, University of Arkansas, Fayetteville, AR, USA, 72701
| | - Shree Hari Gautam
- UA Integrative Systems Neuroscience Group, Department of Physics, University of Arkansas, Fayetteville, AR, USA, 72701
| | - Woodrow L. Shew
- UA Integrative Systems Neuroscience Group, Department of Physics, University of Arkansas, Fayetteville, AR, USA, 72701
| |
Collapse
|
39
|
Naik S, Dehaene-Lambertz G, Battaglia D. Repairing Artifacts in Neural Activity Recordings Using Low-Rank Matrix Estimation. Sensors (Basel) 2023; 23:4847. [PMID: 37430760 PMCID: PMC10220667 DOI: 10.3390/s23104847] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/11/2023] [Revised: 05/09/2023] [Accepted: 05/10/2023] [Indexed: 07/12/2023]
Abstract
Electrophysiology recordings are frequently affected by artifacts (e.g., subject motion or eye movements), which reduces the number of available trials and affects the statistical power. When artifacts are unavoidable and data are scarce, signal reconstruction algorithms that allow for the retention of sufficient trials become crucial. Here, we present one such algorithm that makes use of large spatiotemporal correlations in neural signals and solves the low-rank matrix completion problem, to fix artifactual entries. The method uses a gradient descent algorithm in lower dimensions to learn the missing entries and provide faithful reconstruction of signals. We carried out numerical simulations to benchmark the method and estimate optimal hyperparameters for actual EEG data. The fidelity of reconstruction was assessed by detecting event-related potentials (ERP) from a highly artifacted EEG time series from human infants. The proposed method significantly improved the standardized error of the mean in ERP group analysis and a between-trial variability analysis compared to a state-of-the-art interpolation technique. This improvement increased the statistical power and revealed significant effects that would have been deemed insignificant without reconstruction. The method can be applied to any time-continuous neural signal where artifacts are sparse and spread out across epochs and channels, increasing data retention and statistical power.
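The core idea, exploiting large spatiotemporal correlations so that artifactual entries become a low-rank matrix completion problem, can be sketched in a few lines of NumPy. The paper learns the missing entries by gradient descent in a lower-dimensional space; this toy substitutes a simpler hard-impute iteration (alternately enforcing the clean entries and projecting onto a low-rank approximation), and the channel counts, rank, and 10% artifact rate are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "EEG": 20 channels x 200 samples with rank-3 spatiotemporal
# structure, mimicking the cross-channel correlations the method exploits.
n_ch, n_t, rank = 20, 200, 3
truth = rng.standard_normal((n_ch, rank)) @ rng.standard_normal((rank, n_t))

# Artifact mask: ~10% of entries flagged as bad (False = artifactual).
clean = rng.random((n_ch, n_t)) > 0.10
X = np.where(clean, truth, 0.0)

# Iterative low-rank completion: trust the clean entries, then project the
# full matrix onto its best rank-3 approximation, and repeat.
X_hat = X.copy()
for _ in range(100):
    X_hat = np.where(clean, X, X_hat)             # keep observed entries
    U, s, Vt = np.linalg.svd(X_hat, full_matrices=False)
    X_hat = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # low-rank projection

# Relative reconstruction error on the repaired (artifactual) entries only.
bad = ~clean
err = np.linalg.norm(X_hat[bad] - truth[bad]) / np.linalg.norm(truth[bad])
```

Because the artifacts are sparse and scattered while the signal is low rank, the repaired entries converge to the underlying signal, which is what lets the full set of trials be retained for the ERP analysis.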
Collapse
Affiliation(s)
- Shruti Naik
- Cognitive Neuroimaging Unit, Centre National de la Recherche Scientifique (CNRS), Institut National de la Santé et de la Recherche Médicale (INSERM), CEA, Université Paris-Saclay, NeuroSpin Center, F-91190 Gif-sur-Yvette, France
| | - Ghislaine Dehaene-Lambertz
- Cognitive Neuroimaging Unit, Centre National de la Recherche Scientifique (CNRS), Institut National de la Santé et de la Recherche Médicale (INSERM), CEA, Université Paris-Saclay, NeuroSpin Center, F-91190 Gif-sur-Yvette, France
| | - Demian Battaglia
- Institut de Neurosciences des Systèmes, U1106, Centre National de la Recherche Scientifique (CNRS) Aix-Marseille Université, F-13005 Marseille, France
- Institute for Advanced Studies, University of Strasbourg, (USIAS), F-67000 Strasbourg, France
| |
Collapse
|
40
|
Marciniak Dg Agra K, Dg Agra P. F = ma. Is the macaque brain Newtonian? Cogn Neuropsychol 2023; 39:376-408. [PMID: 37045793 DOI: 10.1080/02643294.2023.2191843] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/14/2023]
Abstract
Intuitive physics, the ability to anticipate how physical events involving objects with mass unfold in time and space, is a central component of intelligent systems. Intuitive physics is a promising tool for gaining insight into mechanisms that generalize across species because both humans and non-human primates are subject to the same physical constraints when engaging with the environment. Physical reasoning abilities are widely present within the animal kingdom, but monkeys, with acute 3D vision and a high level of dexterity, appreciate and manipulate the physical world in much the same way humans do.
Collapse
Affiliation(s)
- Karolina Marciniak Dg Agra
- The Rockefeller University, Laboratory of Neural Circuits, New York, NY, USA
- Center for Brain, Minds and Machines, Cambridge, MA, USA
| | - Pedro Dg Agra
- The Rockefeller University, Laboratory of Neural Circuits, New York, NY, USA
- Center for Brain, Minds and Machines, Cambridge, MA, USA
| |
Collapse
|
41
|
Safavi S, Panagiotaropoulos TI, Kapoor V, Ramirez-Villegas JF, Logothetis NK, Besserve M. Uncovering the organization of neural circuits with Generalized Phase Locking Analysis. PLoS Comput Biol 2023; 19:e1010983. [PMID: 37011110 PMCID: PMC10109521 DOI: 10.1371/journal.pcbi.1010983] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2022] [Revised: 04/17/2023] [Accepted: 02/27/2023] [Indexed: 04/05/2023] Open
Abstract
Despite the considerable progress of in vivo neural recording techniques, inferring the biophysical mechanisms underlying large-scale coordination of brain activity from neural data remains challenging. One obstacle is the difficulty of linking high-dimensional functional connectivity measures to mechanistic models of network activity. We address this issue by investigating spike-field coupling (SFC) measurements, which quantify the synchronization between, on the one hand, the action potentials produced by neurons and, on the other hand, mesoscopic "field" signals reflecting subthreshold activity at possibly multiple recording sites. As the number of recording sites grows, the collection of pairwise SFC measurements becomes overwhelming to interpret. We develop Generalized Phase Locking Analysis (GPLA) as an interpretable dimensionality reduction of this multivariate SFC. GPLA describes the dominant coupling between field activity and neural ensembles across space and frequencies. We show that GPLA features are biophysically interpretable when used in conjunction with appropriate network models, such that we can identify the influence of underlying circuit properties on these features. We demonstrate the statistical benefits and interpretability of this approach in various computational models and Utah array recordings. The results suggest that GPLA, used jointly with biophysical modeling, can help uncover the contribution of recurrent microcircuits to the spatio-temporal dynamics observed in multi-channel experimental recordings.
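The abstract describes GPLA as an interpretable dimensionality reduction of the matrix of pairwise spike-field couplings. A minimal sketch of that idea, assuming a phase-locking-value (PLV) style coupling matrix and a plain SVD as the reduction (the paper's actual estimator and normalizations differ in detail), looks like this:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: spike phases of 30 units relative to 16 LFP channels. Ground
# truth: each unit locks with a phase offset that is the sum of a
# unit-specific and a channel-specific phase (a rank-1 coupling pattern).
n_units, n_chan, n_spikes = 30, 16, 400
unit_phase = rng.uniform(0, 2 * np.pi, n_units)
chan_phase = rng.uniform(0, 2 * np.pi, n_chan)

# Pairwise coupling matrix: mean of exp(i * phase difference) over spikes.
C = np.zeros((n_units, n_chan), dtype=complex)
for u in range(n_units):
    for c in range(n_chan):
        jitter = rng.vonmises(0.0, 4.0, n_spikes)  # imperfect locking
        C[u, c] = np.mean(np.exp(1j * (unit_phase[u] - chan_phase[c] + jitter)))

# GPLA-style reduction: the leading singular triplet summarizes the dominant
# spike-ensemble / field coupling pattern across all unit-channel pairs.
Uc, s, Vh = np.linalg.svd(C)
gpla_strength = s[0] / s.sum()  # fraction of coupling in the top mode
spike_vector = Uc[:, 0]         # per-unit coupling weights and phases
field_vector = Vh[0].conj()     # per-channel coupling weights and phases
```

When the coupling has a single dominant spatial organization, as constructed here, one mode captures most of the pairwise SFC structure, which is the compression that makes the multivariate measurements interpretable.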
Collapse
Affiliation(s)
- Shervin Safavi
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- IMPRS for Cognitive and Systems Neuroscience, University of Tübingen, Tübingen, Germany
| | - Theofanis I. Panagiotaropoulos
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, Université Paris-Saclay, NeuroSpin center, 91191 Gif/Yvette, France
| | - Vishal Kapoor
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- International Center for Primate Brain Research (ICPBR), Center for Excellence in Brain Science and Intelligence Technology (CEBSIT), Chinese Academy of Sciences (CAS), Shanghai 201602, China
| | - Juan F. Ramirez-Villegas
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Institute of Science and Technology Austria (IST Austria), Klosterneuburg, Austria
| | - Nikos K. Logothetis
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- International Center for Primate Brain Research (ICPBR), Center for Excellence in Brain Science and Intelligence Technology (CEBSIT), Chinese Academy of Sciences (CAS), Shanghai 201602, China
- Centre for Imaging Sciences, Biomedical Imaging Institute, The University of Manchester, Manchester, United Kingdom
| | - Michel Besserve
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Department of Empirical Inference, Max Planck Institute for Intelligent Systems and MPI-ETH Center for Learning Systems, Tübingen, Germany
| |
Collapse
|
42
|
Tang W, Shin JD, Jadhav SP. Geometric transformation of cognitive maps for generalization across hippocampal-prefrontal circuits. Cell Rep 2023; 42:112246. [PMID: 36924498 PMCID: PMC10124109 DOI: 10.1016/j.celrep.2023.112246] [Citation(s) in RCA: 25] [Impact Index Per Article: 12.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2022] [Revised: 01/09/2023] [Accepted: 02/26/2023] [Indexed: 03/17/2023] Open
Abstract
The ability to abstract information to guide decisions during navigation across changing environments is essential for adaptation and requires the integrity of the hippocampal-prefrontal circuitry. The hippocampus encodes navigational information in a cognitive map, but it remains unclear how cognitive maps are transformed across hippocampal-prefrontal circuits to support abstraction and generalization. Here, we simultaneously record hippocampal-prefrontal ensembles as rats generalize navigational rules across distinct environments. We find that, whereas hippocampal representational maps maintain specificity of separate environments, prefrontal maps generalize across environments. Furthermore, while both maps are structured within a neural manifold of population activity, they have distinct representational geometries. Prefrontal geometry enables abstraction of rule-informative variables, a representational format that generalizes to novel conditions of existing variable classes. Hippocampal geometry lacks such abstraction. Together, these findings elucidate how cognitive maps are structured into distinct geometric representations to support abstraction and generalization while maintaining memory specificity.
Collapse
Affiliation(s)
- Wenbo Tang
- Neuroscience Program, Department of Psychology, and Volen National Center for Complex Systems, Brandeis University, Waltham, MA 02453, USA.
| | - Justin D Shin
- Neuroscience Program, Department of Psychology, and Volen National Center for Complex Systems, Brandeis University, Waltham, MA 02453, USA
| | - Shantanu P Jadhav
- Neuroscience Program, Department of Psychology, and Volen National Center for Complex Systems, Brandeis University, Waltham, MA 02453, USA.
| |
Collapse
|
43
|
Flesch T, Saxe A, Summerfield C. Continual task learning in natural and artificial agents. Trends Neurosci 2023; 46:199-210. [PMID: 36682991 PMCID: PMC10914671 DOI: 10.1016/j.tins.2022.12.006] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2022] [Revised: 12/07/2022] [Accepted: 12/15/2022] [Indexed: 01/21/2023]
Abstract
How do humans and other animals learn new tasks? A wave of brain recording studies has investigated how neural representations change during task learning, with a focus on how tasks can be acquired and coded in ways that minimise mutual interference. We review recent work that has explored the geometry and dimensionality of neural task representations in neocortex, and computational models that have exploited these findings to understand how the brain may partition knowledge between tasks. We discuss how ideas from machine learning, including those that combine supervised and unsupervised learning, are helping neuroscientists understand how natural tasks are learned and coded in biological brains.
Collapse
Affiliation(s)
- Timo Flesch
- Department of Experimental Psychology, University of Oxford, Oxford, UK
| | - Andrew Saxe
- Gatsby Computational Neuroscience Unit & Sainsbury Wellcome Centre, UCL, London, UK.
| | | |
Collapse
|
44
|
Beiran M, Meirhaeghe N, Sohn H, Jazayeri M, Ostojic S. Parametric control of flexible timing through low-dimensional neural manifolds. Neuron 2023; 111:739-753.e8. [PMID: 36640766 PMCID: PMC9992137 DOI: 10.1016/j.neuron.2022.12.016] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2021] [Revised: 09/23/2022] [Accepted: 12/08/2022] [Indexed: 01/15/2023]
Abstract
Biological brains possess an unparalleled ability to adapt behavioral responses to changing stimuli and environments. How neural processes enable this capacity is a fundamental open question. Previous works have identified two candidate mechanisms: a low-dimensional organization of neural activity and a modulation by contextual inputs. We hypothesized that combining the two might facilitate generalization and adaptation in complex tasks. We tested this hypothesis in flexible timing tasks where dynamics play a key role. Examining trained recurrent neural networks, we found that confining the dynamics to a low-dimensional subspace allowed tonic inputs to parametrically control the overall input-output transform, enabling generalization to novel inputs and adaptation to changing conditions. Reverse-engineering and theoretical analyses demonstrated that this parametric control relies on a mechanism where tonic inputs modulate the dynamics along non-linear manifolds while preserving their geometry. Comparisons with data from behaving monkeys confirmed the behavioral and neural signatures of this mechanism.
Collapse
Affiliation(s)
- Manuel Beiran
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL University, 75005 Paris, France; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
| | - Nicolas Meirhaeghe
- Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Institut de Neurosciences de la Timone (INT), UMR 7289, CNRS, Aix-Marseille Université, Marseille 13005, France
| | - Hansem Sohn
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
| | - Mehrdad Jazayeri
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
| | - Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL University, 75005 Paris, France.
| |
Collapse
|
45
|
Rouleau N, Murugan NJ, Kaplan DL. Functional bioengineered models of the central nervous system. NATURE REVIEWS BIOENGINEERING 2023; 1:252-270. [PMID: 37064657 PMCID: PMC9903289 DOI: 10.1038/s44222-023-00027-7] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 01/16/2023] [Indexed: 02/10/2023]
Abstract
The functional complexity of the central nervous system (CNS) is unparalleled in living organisms. Its nested cells, circuits and networks encode memories, move bodies and generate experiences. Neural tissues can be engineered to assemble model systems that recapitulate essential features of the CNS and to investigate neurodevelopment, delineate pathophysiology, improve regeneration and accelerate drug discovery. In this Review, we discuss essential structure-function relationships of the CNS and examine materials and design considerations, including composition, scale, complexity and maturation, of cell biology-based and engineering-based CNS models. We highlight region-specific CNS models that can emulate functions of the cerebral cortex, hippocampus, spinal cord, neural-X interfaces and other regions, and investigate a range of applications for CNS models, including fundamental and clinical research. We conclude with an outlook to future possibilities of CNS models, highlighting the engineering challenges that remain to be overcome.
Collapse
Affiliation(s)
- Nicolas Rouleau
- Department of Health Sciences, Wilfrid Laurier University, Waterloo, Ontario Canada
- Department of Biomedical Engineering, Tufts University, Medford, MA USA
| | - Nirosha J. Murugan
- Department of Health Sciences, Wilfrid Laurier University, Waterloo, Ontario Canada
- Department of Biomedical Engineering, Tufts University, Medford, MA USA
| | - David L. Kaplan
- Department of Biomedical Engineering, Tufts University, Medford, MA USA
| |
Collapse
|
46
|
Mitchell-Heggs R, Prado S, Gava GP, Go MA, Schultz SR. Neural manifold analysis of brain circuit dynamics in health and disease. J Comput Neurosci 2023; 51:1-21. [PMID: 36522604 PMCID: PMC9840597 DOI: 10.1007/s10827-022-00839-3] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2022] [Revised: 08/30/2022] [Accepted: 10/29/2022] [Indexed: 12/23/2022]
Abstract
Recent developments in experimental neuroscience make it possible to simultaneously record the activity of thousands of neurons. However, the development of analysis approaches for such large-scale neural recordings has been slower than for those applicable to single-cell experiments. One approach that has gained recent popularity is neural manifold learning. This approach takes advantage of the fact that often, even though neural datasets may be very high dimensional, the dynamics of neural activity tends to traverse a much lower-dimensional space. The topological structures formed by these low-dimensional neural subspaces are referred to as "neural manifolds", and may potentially provide insight linking neural circuit dynamics with cognitive function and behavioral performance. In this paper we review a number of linear and non-linear approaches to neural manifold learning, including principal component analysis (PCA), multi-dimensional scaling (MDS), Isomap, locally linear embedding (LLE), Laplacian eigenmaps (LEM), t-SNE, and uniform manifold approximation and projection (UMAP). We outline these methods under a common mathematical nomenclature, and compare their advantages and disadvantages with respect to their use for neural data analysis. We apply them to a number of datasets from published literature, comparing the manifolds that result from their application to hippocampal place cells, motor cortical neurons during a reaching task, and prefrontal cortical neurons during a multi-behavior task. We find that in many circumstances linear algorithms produce similar results to non-linear methods, although in particular cases where the behavioral complexity is greater, non-linear methods tend to find lower-dimensional manifolds, at the possible expense of interpretability. We demonstrate that these methods are applicable to the study of neurological disorders through simulation of a mouse model of Alzheimer's disease, and speculate that neural manifold analysis may help us to understand the circuit-level consequences of molecular and cellular neuropathology.
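As a concrete example of the linear end of this toolbox, the sketch below applies PCA (via SVD) to a synthetic hippocampal place-cell population of the kind the review analyzes: position tuning on a circular track traces out a ring-shaped manifold that concentrates in two principal components. The population size, tuning width, and noise level are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic population: 100 "place cells" with Gaussian tuning on a circular
# track; rates over 1000 timepoints trace out a ring-shaped neural manifold.
n_cells, n_time, sigma = 100, 1000, 0.8
pos = np.linspace(0, 2 * np.pi, n_time, endpoint=False)          # position
centers = rng.uniform(0, 2 * np.pi, n_cells)                     # field centers
dist = np.angle(np.exp(1j * (pos[:, None] - centers[None, :])))  # circular distance
rates = np.exp(-dist**2 / (2 * sigma**2))
rates += 0.05 * rng.standard_normal((n_time, n_cells))           # noise

# Linear manifold learning (PCA): SVD of the mean-centered activity matrix.
X = rates - rates.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
var = s**2 / np.sum(s**2)

# Although the ambient space has 100 dimensions, the ring needs only ~2
# linear dimensions: the embedding gives coordinates on the manifold.
embedding = X @ Vt[:2].T
top2_var = var[:2].sum()
```

A non-linear method such as Isomap or UMAP would recover the same ring, typically in fewer dimensions for more complex behaviors, which is the linear-versus-non-linear trade-off the review quantifies.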
Collapse
Affiliation(s)
- Rufus Mitchell-Heggs
- Department of Bioengineering and Centre for Neurotechnology, Imperial College London, London, SW7 2AZ United Kingdom
- Centre for Discovery Brain Sciences, The University of Edinburgh, Edinburgh, EH8 9XD United Kingdom
| | - Seigfred Prado
- Department of Bioengineering and Centre for Neurotechnology, Imperial College London, London, SW7 2AZ United Kingdom
- Department of Electronics Engineering, University of Santo Tomas, Manila, Philippines
| | - Giuseppe P. Gava
- Department of Bioengineering and Centre for Neurotechnology, Imperial College London, London, SW7 2AZ United Kingdom
| | - Mary Ann Go
- Department of Bioengineering and Centre for Neurotechnology, Imperial College London, London, SW7 2AZ United Kingdom
| | - Simon R. Schultz
- Department of Bioengineering and Centre for Neurotechnology, Imperial College London, London, SW7 2AZ United Kingdom
| |
Collapse
|
47
|
Reversible Inactivation of Ferret Auditory Cortex Impairs Spatial and Nonspatial Hearing. J Neurosci 2023; 43:749-763. [PMID: 36604168 PMCID: PMC9899081 DOI: 10.1523/jneurosci.1426-22.2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2022] [Revised: 11/16/2022] [Accepted: 11/29/2022] [Indexed: 01/06/2023] Open
Abstract
A key question in auditory neuroscience is to what extent brain regions are functionally specialized for processing specific sound features, such as location and identity. In auditory cortex, correlations between neural activity and sounds support both the specialization of distinct cortical subfields and the encoding of multiple sound features within individual cortical areas. However, few studies have tested the contribution of auditory cortex to hearing in multiple contexts. Here we determined the role of ferret primary auditory cortex in both spatial and nonspatial hearing by reversibly inactivating the middle ectosylvian gyrus during behavior using cooling (n = 2 females) or optogenetics (n = 1 female). Optogenetic experiments used the mDLx promoter to express Channelrhodopsin-2 in GABAergic interneurons, and we confirmed both viral expression (n = 2 females) and light-driven suppression of spiking activity in auditory cortex, recorded using Neuropixels under anesthesia (n = 465 units from 2 additional untrained female ferrets). Cortical inactivation via cooling or optogenetics impaired vowel discrimination in colocated noise. Ferrets implanted with cooling loops were tested in additional conditions that revealed no deficit when identifying vowels in clean conditions, or when the temporally coincident vowel and noise were spatially separated by 180 degrees. These animals did, however, show impaired sound localization when inactivating the same auditory cortical region implicated in vowel discrimination in noise. Our results demonstrate that, as a brain region showing mixed selectivity for spatial and nonspatial features of sound, primary auditory cortex contributes to multiple forms of hearing.SIGNIFICANCE STATEMENT Neurons in primary auditory cortex are often sensitive to the location and identity of sounds. Here we inactivated auditory cortex during spatial and nonspatial listening tasks using cooling or optogenetics. Auditory cortical inactivation impaired multiple behaviors, demonstrating a role in both the analysis of sound location and identity and confirming a functional contribution of mixed selectivity observed in neural activity. Parallel optogenetic experiments in two additional untrained ferrets linked behavior to physiology by demonstrating that expression of Channelrhodopsin-2 permitted rapid light-driven suppression of auditory cortical activity recorded under anesthesia.
Collapse
|
48
|
Nogueira R, Rodgers CC, Bruno RM, Fusi S. The geometry of cortical representations of touch in rodents. Nat Neurosci 2023; 26:239-250. [PMID: 36624277 DOI: 10.1038/s41593-022-01237-9] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2021] [Accepted: 11/16/2022] [Indexed: 01/11/2023]
Abstract
Neurons often encode highly heterogeneous non-linear functions of multiple task variables, a signature of a high-dimensional geometry. We studied the representational geometry in the somatosensory cortex of mice trained to report the curvature of objects touched by their whiskers. High-speed videos of the whiskers revealed that the task can be solved by linearly integrating multiple whisker contacts over time. However, the neural activity in somatosensory cortex reflects non-linear integration of spatio-temporal features of the sensory inputs. Although the responses at first appeared disorganized, we identified an interesting structure in the representational geometry: different whisker contacts are disentangled variables represented in approximately, but not fully, orthogonal subspaces of the neural activity space. This geometry allows linear readouts to perform a broad class of tasks of different complexities without compromising the ability to generalize to novel situations.
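The claim of "approximately, but not fully, orthogonal subspaces" suggests a standard geometric check: the principal angles between the coding subspaces of different variables. The sketch below demonstrates the measure on random subspaces (illustrative only, not the paper's analysis; the dimensions are arbitrary assumptions).

```python
import numpy as np

rng = np.random.default_rng(3)

# Two 3-dimensional coding subspaces in a 50-neuron activity space, e.g. the
# directions along which two different whisker contacts modulate activity.
n_neurons, k = 50, 3
A = np.linalg.qr(rng.standard_normal((n_neurons, k)))[0]  # orthonormal basis 1
B = np.linalg.qr(rng.standard_normal((n_neurons, k)))[0]  # orthonormal basis 2

# Principal angles: the singular values of A^T B are the cosines of the
# angles between the subspaces; 90 degrees everywhere = fully orthogonal
# ("disentangled") coding, 0 degrees = fully shared dimensions.
cosines = np.linalg.svd(A.T @ B, compute_uv=False)
angles_deg = np.degrees(np.arccos(np.clip(cosines, -1.0, 1.0)))
min_angle = angles_deg.min()  # worst-case overlap between the two codes
```

Random low-dimensional subspaces of a high-dimensional space are close to, but not exactly, orthogonal, mirroring the "approximately orthogonal" geometry that lets linear readouts separate variables while still permitting some interaction.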
Collapse
Affiliation(s)
- Ramon Nogueira
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA.
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA.
- Department of Neuroscience, Columbia University, New York, NY, USA.
| | - Chris C Rodgers
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Department of Neuroscience, Columbia University, New York, NY, USA
- Kavli Institute for Brain Science, Columbia University, New York, NY, USA
- Department of Neurosurgery, Emory University, Atlanta, GA, USA
| | - Randy M Bruno
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Department of Neuroscience, Columbia University, New York, NY, USA
- Kavli Institute for Brain Science, Columbia University, New York, NY, USA
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, UK
| | - Stefano Fusi
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA.
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA.
- Department of Neuroscience, Columbia University, New York, NY, USA.
- Kavli Institute for Brain Science, Columbia University, New York, NY, USA.
| |
Collapse
|
49
|
Sorscher B, Mel GC, Ocko SA, Giocomo LM, Ganguli S. A unified theory for the computational and mechanistic origins of grid cells. Neuron 2023; 111:121-137.e13. [PMID: 36306779 DOI: 10.1016/j.neuron.2022.10.003] [Citation(s) in RCA: 23] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2020] [Revised: 05/05/2022] [Accepted: 10/03/2022] [Indexed: 02/05/2023]
Abstract
The discovery of entorhinal grid cells has generated considerable interest in how and why hexagonal firing fields might emerge in a generic manner from neural circuits, and what their computational significance might be. Here, we forge a link between the problem of path integration and the existence of hexagonal grids, by demonstrating that such grids arise in neural networks trained to path integrate under simple biologically plausible constraints. Moreover, we develop a unifying theory for why hexagonal grids are ubiquitous in path-integrator circuits. Such trained networks also yield powerful mechanistic hypotheses, exhibiting realistic levels of biological variability not captured by hand-designed models. We furthermore develop methods to analyze the connectome and activity maps of our networks to elucidate fundamental mechanisms underlying path integration. These methods provide a road map to go from connectomic and physiological measurements to conceptual understanding in a manner that could generalize to other settings.
Collapse
Affiliation(s)
- Ben Sorscher
- Department of Applied Physics, Stanford University, Stanford, CA 94305, USA
| | - Gabriel C Mel
- Neurosciences PhD Program, Stanford University, Stanford, CA 94305, USA.
| | - Samuel A Ocko
- Department of Applied Physics, Stanford University, Stanford, CA 94305, USA
| | - Lisa M Giocomo
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA 94305, USA
| | - Surya Ganguli
- Department of Applied Physics, Stanford University, Stanford, CA 94305, USA; Department of Neurobiology, Stanford University School of Medicine, Stanford, CA 94305, USA
| |
Collapse
|
50
|
Koh TH, Bishop WE, Kawashima T, Jeon BB, Srinivasan R, Mu Y, Wei Z, Kuhlman SJ, Ahrens MB, Chase SM, Yu BM. Dimensionality reduction of calcium-imaged neuronal population activity. Nat Comput Sci 2023; 3:71-85. [PMID: 37476302 PMCID: PMC10358781 DOI: 10.1038/s43588-022-00390-2] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/10/2022] [Accepted: 12/05/2022] [Indexed: 07/22/2023]
Abstract
Calcium imaging has been widely adopted for its ability to record from large neuronal populations. To summarize the time course of neural activity, dimensionality reduction methods, which have been applied extensively to population spiking activity, may be particularly useful. However, it is unclear if the dimensionality reduction methods applied to spiking activity are appropriate for calcium imaging. We thus carried out a systematic study of design choices based on standard dimensionality reduction methods. We also developed a method to perform deconvolution and dimensionality reduction simultaneously (Calcium Imaging Linear Dynamical System, CILDS). CILDS most accurately recovered the single-trial, low-dimensional time courses from simulated calcium imaging data. CILDS also outperformed the other methods on calcium imaging recordings from larval zebrafish and mice. More broadly, this study represents a foundation for summarizing calcium imaging recordings of large neuronal populations using dimensionality reduction in diverse experimental settings.
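To see what performing deconvolution and dimensionality reduction separately looks like (the two-step baseline that CILDS is designed to improve on by doing both jointly), here is a linear toy: AR(1) calcium dynamics, kernel inversion, then PCA. The AR coefficient, population size, and latent structure are assumptions for illustration; this is not CILDS itself.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy calcium imaging: 2 latent time courses drive 40 neurons; the neural
# activity is smoothed by an exponentially decaying calcium kernel (AR(1)).
n_neurons, n_time, gamma = 40, 500, 0.95
latents = np.cumsum(rng.standard_normal((2, n_time)), axis=1) * 0.1
activity = rng.standard_normal((n_neurons, 2)) @ latents

calcium = np.zeros((n_neurons, n_time))
calcium[:, 0] = activity[:, 0]
for t in range(1, n_time):
    calcium[:, t] = gamma * calcium[:, t - 1] + activity[:, t]
calcium += 0.05 * rng.standard_normal(calcium.shape)  # imaging noise

# Two-step pipeline:
# 1. invert the AR(1) kernel to estimate underlying activity,
deconv = calcium.copy()
deconv[:, 1:] -= gamma * calcium[:, :-1]
# 2. then reduce dimensionality with PCA on the deconvolved traces.
X = deconv - deconv.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
low_d = Vt[:2] * s[:2, None]                 # 2 recovered latent time courses
var_top2 = (s[:2] ** 2).sum() / (s ** 2).sum()
```

In this noiseless linear setting the two steps suffice; the paper's point is that with realistic noise and trial structure, estimating the deconvolution and the low-dimensional dynamics jointly recovers single-trial time courses more accurately.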
Collapse
Affiliation(s)
- Tze Hui Koh
- Department of Biomedical Engineering, Carnegie Mellon University, PA
- Center for the Neural Basis of Cognition, PA
| | - William E. Bishop
- Center for the Neural Basis of Cognition, PA
- Department of Machine Learning, Carnegie Mellon University, PA
- Janelia Research Campus, Howard Hughes Medical Institute, VA
| | - Takashi Kawashima
- Janelia Research Campus, Howard Hughes Medical Institute, VA
- Department of Brain Sciences, Weizmann Institute of Science, Israel
| | - Brian B. Jeon
- Department of Biomedical Engineering, Carnegie Mellon University, PA
- Center for the Neural Basis of Cognition, PA
| | - Ranjani Srinivasan
- Department of Biomedical Engineering, Carnegie Mellon University, PA
- Department of Electrical and Computer Engineering, Johns Hopkins University, MD
| | - Yu Mu
- Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, China
| | - Ziqiang Wei
- Janelia Research Campus, Howard Hughes Medical Institute, VA
| | - Sandra J. Kuhlman
- Carnegie Mellon Neuroscience Institute, Carnegie Mellon University, PA
- Department of Biological Sciences, Carnegie Mellon University, PA
| | - Misha B. Ahrens
- Janelia Research Campus, Howard Hughes Medical Institute, VA
| | - Steven M. Chase
- Department of Biomedical Engineering, Carnegie Mellon University, PA
- Carnegie Mellon Neuroscience Institute, Carnegie Mellon University, PA
| | - Byron M. Yu
- Department of Biomedical Engineering, Carnegie Mellon University, PA
- Carnegie Mellon Neuroscience Institute, Carnegie Mellon University, PA
- Department of Electrical and Computer Engineering, Carnegie Mellon University, PA
| |
Collapse
|