1. Peters LM, Roadarmel A, Overton JA, Stickle MP, Lin JJ, Kong Z, Saez I, Moxon KA. Neural dynamics encoding risky choices during deliberation reveal separate choice subspaces. Prog Neurobiol 2025:102776. PMID: 40345520. DOI: 10.1016/j.pneurobio.2025.102776.
Abstract
Human decision-making involves the coordinated activity of multiple brain areas acting in concert to enable choices. Most decisions are made under uncertainty, where the desired outcome may not be achieved if the wrong choice is made; in these cases, humans deliberate before choosing. The neural dynamics underlying deliberation are unknown, and intracranial recordings in clinical settings present a unique opportunity to record high-temporal-resolution electrophysiological data from many (hundreds of) brain locations during behavior. Combined with dynamical systems modeling, these recordings allow identification of latent brain states that describe the neural dynamics during decision-making, providing insight into the underlying neural computations. Results show that the neural dynamics underlying risky decisions, but not decisions without risk, converge to separate subspaces depending on the subject's preferred choice, and that the degree of overlap between these subspaces declines as the choice approaches, suggesting a network-level representation of evidence accumulation. These results bridge the gap between regression analyses and data-driven models of latent states and suggest that, during risky decisions, deliberation and evidence accumulation toward a final decision are represented by the same neural dynamics, providing novel insight into the neural computations underlying human choice.
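A minimal sketch of the subspace-overlap idea described in this abstract (synthetic data and principal angles as the overlap measure are assumptions here, not the authors' pipeline): after fitting a latent model, one could quantify how separated two choice-related subspaces are.

```python
# Illustrative sketch (not the authors' method): measure overlap between two
# choice-related subspaces of a high-dimensional neural state space using the
# principal angles between their column spaces.
import numpy as np

def subspace_overlap(A, B):
    """Mean squared cosine of principal angles between the column spaces
    of A and B (close to 1 = near-identical subspaces, 0 = orthogonal)."""
    Qa, _ = np.linalg.qr(A)  # orthonormal basis for span(A)
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)  # cosines of principal angles
    return float(np.mean(s**2))

rng = np.random.default_rng(0)
# Two 3-D subspaces of a 50-D "neural" state space: one shared axis, two distinct.
shared = rng.standard_normal((50, 1))
A = np.hstack([shared, rng.standard_normal((50, 2))])
B = np.hstack([shared, rng.standard_normal((50, 2))])
print(subspace_overlap(A, A))  # ~1.0 for identical subspaces
print(subspace_overlap(A, B))  # < 1.0: only one direction is shared
```

A declining value of such an overlap measure over time would correspond to the reported separation of choice subspaces as the choice approaches.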
Affiliation(s)
- Jacqueline A Overton
- Dept. of Neuroscience, Icahn School of Medicine at Mount Sinai; Dept. of Psychiatry, Icahn School of Medicine at Mount Sinai
- Zhaodan Kong
- Dept. of Mechanical and Aerospace Engineering, UC Davis
- Ignacio Saez
- Dept. of Neuroscience, Icahn School of Medicine at Mount Sinai; Dept. of Neurosurgery, Icahn School of Medicine at Mount Sinai; Dept. of Neurology, Icahn School of Medicine at Mount Sinai
- Karen Anne Moxon
- Dept. of Biomedical Engineering, UC Davis; Dept. of Neurological Surgery, UC Davis
2. Ceccarelli F, Londei F, Arena G, Genovesio A, Ferrucci L. Home-Cage Training for Non-Human Primates: An Opportunity to Reduce Stress and Study Natural Behavior in Neurophysiology Experiments. Animals (Basel) 2025; 15:1340. PMID: 40362154. PMCID: PMC12071079. DOI: 10.3390/ani15091340.
Abstract
Research involving non-human primates remains a cornerstone in fields such as biomedical research and systems neuroscience. However, the daily routines of laboratory work can induce stress in these animals, potentially compromising their well-being and the reliability of experimental outcomes. To address this, many laboratories have adopted home-cage training protocols to mitigate stress caused by routine procedures such as transport and restraint, a factor that can impact both macaque physiology and experimental validity. This review explores the primary methods and experimental setups employed in home-cage training, highlighting their potential not only to address ethical concerns surrounding animal welfare but also to reduce training time and risks to researchers. Furthermore, combining home-cage training with wireless recordings expands research opportunities in behavioral neurophysiology with non-human primates. This approach enables the study of various cognitive processes in more naturalistic settings, thereby increasing the ecological validity of scientific findings through innovative experimental designs that thoroughly investigate the complexity of the animals' natural behavior.
Affiliation(s)
- Francesco Ceccarelli
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Fabrizio Londei
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Giulia Arena
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Institute of Biochemistry and Cell Biology (IBBC), National Research Council of Italy (CNR), Via Ramarini 32, Monterotondo Scalo, 00015 Rome, Italy
- Behavioral Neuroscience PhD Program, Sapienza University, 00185 Rome, Italy
- Aldo Genovesio
- Department of Pharmaceutical Sciences, University of Piemonte Orientale, 28100 Novara, Italy
- Lorenzo Ferrucci
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
3. Kulkarni S, Bassett DS. Toward Principles of Brain Network Organization and Function. Annu Rev Biophys 2025; 54:353-378. PMID: 39952667. DOI: 10.1146/annurev-biophys-030722-110624.
Abstract
The brain is immensely complex, with diverse components and dynamic interactions building upon one another to orchestrate a wide range of behaviors. Understanding patterns of these complex interactions and how they are coordinated to support collective neural function is critical for parsing human and animal behavior, treating mental illness, and developing artificial intelligence. Rapid experimental advances in imaging, recording, and perturbing neural systems across various species now provide opportunities to distill underlying principles of brain organization and function. Here, we take stock of recent progress and review methods used in the statistical analysis of brain networks, drawing from fields of statistical physics, network theory, and information theory. Our discussion is organized by scale, starting with models of individual neurons and extending to large-scale networks mapped across brain regions. We then examine organizing principles and constraints that shape the biological structure and function of neural circuits. We conclude with an overview of several critical frontiers, including expanding current models, fostering tighter feedback between theory and experiment, and leveraging perturbative approaches to understand neural systems. Alongside these efforts, we highlight the importance of contextualizing their contributions by linking them to formal accounts of explanation and causation.
Affiliation(s)
- Suman Kulkarni
- Department of Physics & Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Dani S Bassett
- Department of Bioengineering, Department of Electrical & Systems Engineering, Department of Neurology, and Department of Psychiatry, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Santa Fe Institute, Santa Fe, New Mexico, USA
- Department of Physics & Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada
4. Lee J, Mun J, Choo M, Park SM. Predictive modeling of hemodynamics during viscerosensory neurostimulation via neural computation mechanism in the brainstem. NPJ Digit Med 2025; 8:220. PMID: 40269082. PMCID: PMC12019394. DOI: 10.1038/s41746-025-01635-w.
Abstract
Neurostimulation for cardiovascular control faces challenges due to the lack of predictive modeling for stimulus-driven dynamic responses, which is crucial for precise neuromodulation via quality feedback. We address this by employing a digital twin approach that leverages computational mechanisms underlying neuro-hemodynamic responses during neurostimulation. Our results emphasize the computational role of the nucleus tractus solitarius (NTS) in the brainstem in determining these responses. The intrinsic neural circuit within the NTS harbors collective dynamics residing in a low-dimensional latent space, which effectively captures stimulus-driven hemodynamic perturbations. Building on this, we developed a digital twin framework for individually optimized predictive modeling of neuromodulatory outcomes. This framework potentially enables the design of closed-loop neurostimulation systems for precise hemodynamic control. Consequently, our digital twin based on neural computation mechanisms marks an advancement in the artificial regulation of internal organs, paving the way for precise translational medicine to treat chronic diseases.
Affiliation(s)
- Jiho Lee
- Department of Convergence IT Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Junseung Mun
- Department of Convergence IT Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Minhye Choo
- Department of Convergence IT Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Sung-Min Park
- Department of Convergence IT Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Department of Electrical Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Department of Mechanical Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
- Institute of Convergence Science, Yonsei University, Seoul, Republic of Korea
5. Beau M, Herzfeld DJ, Naveros F, Hemelt ME, D'Agostino F, Oostland M, Sánchez-López A, Chung YY, Maibach M, Kyranakis S, Stabb HN, Martínez Lopera MG, Lajko A, Zedler M, Ohmae S, Hall NJ, Clark BA, Cohen D, Lisberger SG, Kostadinov D, Hull C, Häusser M, Medina JF. A deep learning strategy to identify cell types across species from high-density extracellular recordings. Cell 2025; 188:2218-2234.e22. PMID: 40023155. DOI: 10.1016/j.cell.2025.01.041.
Abstract
High-density probes allow electrophysiological recordings from many neurons simultaneously across entire brain circuits but fail to reveal cell type. Here, we develop a strategy to identify cell types from extracellular recordings in awake animals and reveal the computational roles of neurons with distinct functional, molecular, and anatomical properties. We combine optogenetics and pharmacology using the cerebellum as a testbed to generate a curated ground-truth library of electrophysiological properties for Purkinje cells, molecular layer interneurons, Golgi cells, and mossy fibers. We train a semi-supervised deep learning classifier that predicts cell types with greater than 95% accuracy based on the waveform, discharge statistics, and layer of the recorded neuron. The classifier's predictions agree with expert classification on recordings using different probes, in different laboratories, from functionally distinct cerebellar regions, and across species. Our classifier extends the power of modern dynamical systems analyses by revealing the unique contributions of simultaneously recorded cell types during behavior.
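A toy sketch of the general approach (the feature values, class names, and nearest-class-mean rule here are illustrative assumptions, not the paper's semi-supervised deep network): classify units into cell types from simple electrophysiological features learned from a ground-truth library.

```python
# Toy sketch (assumed features and a nearest-class-mean rule, not the paper's
# classifier): label units by spike width (ms) and firing rate (Hz).
import numpy as np

rng = np.random.default_rng(1)

def make_units(n, width_ms, rate_hz, label):
    # Gaussian cloud around class-typical spike width and firing rate.
    X = np.column_stack([
        rng.normal(width_ms, 0.05, n),
        rng.normal(rate_hz, 5.0, n),
    ])
    return X, np.full(n, label)

# Hypothetical feature statistics for two cell classes.
Xa, ya = make_units(200, 0.25, 60.0, 0)  # class 0: narrow spikes, fast firing
Xb, yb = make_units(200, 0.60, 10.0, 1)  # class 1: broad spikes, slow firing
X, y = np.vstack([Xa, Xb]), np.concatenate([ya, yb])

# Nearest-class-mean classifier on z-scored features.
mu, sd = X.mean(0), X.std(0)
Z = (X - mu) / sd
means = np.array([Z[y == k].mean(0) for k in (0, 1)])

def predict(x):
    z = (x - mu) / sd
    return int(np.argmin(((means - z) ** 2).sum(1)))

acc = np.mean([predict(x) == t for x, t in zip(X, y)])
print(f"training accuracy: {acc:.2f}")
```

The real classifier additionally uses full waveforms, discharge statistics, and layer information, which is what pushes accuracy above 95% across probes and species.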
Affiliation(s)
- Maxime Beau
- Wolfson Institute for Biomedical Research, University College London, London, UK
- David J Herzfeld
- Department of Neurobiology, Duke University School of Medicine, Durham, NC, USA
- Francisco Naveros
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA; Department of Computer Engineering, Automation and Robotics, Research Centre for Information and Communication Technologies, University of Granada, Granada, Spain
- Marie E Hemelt
- Department of Neurobiology, Duke University School of Medicine, Durham, NC, USA
- Federico D'Agostino
- Wolfson Institute for Biomedical Research, University College London, London, UK
- Marlies Oostland
- Wolfson Institute for Biomedical Research, University College London, London, UK; Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, the Netherlands
- Young Yoon Chung
- Wolfson Institute for Biomedical Research, University College London, London, UK
- Michael Maibach
- Wolfson Institute for Biomedical Research, University College London, London, UK
- Stephen Kyranakis
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Hannah N Stabb
- Wolfson Institute for Biomedical Research, University College London, London, UK
- Agoston Lajko
- Wolfson Institute for Biomedical Research, University College London, London, UK
- Marie Zedler
- Wolfson Institute for Biomedical Research, University College London, London, UK
- Shogo Ohmae
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Nathan J Hall
- Department of Neurobiology, Duke University School of Medicine, Durham, NC, USA
- Beverley A Clark
- Wolfson Institute for Biomedical Research, University College London, London, UK
- Dana Cohen
- The Leslie and Susan Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat-Gan, Israel
- Stephen G Lisberger
- Department of Neurobiology, Duke University School of Medicine, Durham, NC, USA
- Dimitar Kostadinov
- Wolfson Institute for Biomedical Research, University College London, London, UK; Centre for Developmental Neurobiology, King's College London, London, UK
- Court Hull
- Department of Neurobiology, Duke University School of Medicine, Durham, NC, USA
- Michael Häusser
- Wolfson Institute for Biomedical Research, University College London, London, UK; School of Biomedical Sciences, The University of Hong Kong, Hong Kong, China
- Javier F Medina
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
6. Marin-Llobet A, Manasanch A, Dalla Porta L, Torao-Angosto M, Sanchez-Vives MV. Neural models for detection and classification of brain states and transitions. Commun Biol 2025; 8:599. PMID: 40211025. PMCID: PMC11986132. DOI: 10.1038/s42003-025-07991-3.
Abstract
Exploring natural or pharmacologically induced brain dynamics, such as sleep, wakefulness, or anesthesia, provides rich functional models for studying brain states. These models allow detailed examination of unique spatiotemporal neural activity patterns that reveal brain function. However, assessing transitions between brain states remains computationally challenging. Here we introduce a pipeline to detect brain states and their transitions in the cerebral cortex using a dual-model Convolutional Neural Network (CNN) and a self-supervised autoencoder-based multimodal clustering algorithm. This approach distinguishes brain states such as slow oscillations, microarousals, and wakefulness with high confidence. Using chronic local field potential recordings from rats, our method achieved a global accuracy of 91%, with up to 96% accuracy for certain states. For transitions, we report an average accuracy of 74%. Our models were trained using a leave-one-out methodology, allowing for broad applicability across subjects and the deployment of pre-trained models. The pipeline also features a confidence parameter, ensuring that only highly certain cases are automatically classified and leaving ambiguous cases for the multimodal unsupervised classifier or further expert review. Our approach presents a reliable and efficient tool for brain state labeling and analysis, with applications in basic and clinical neuroscience.
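The confidence-gating idea can be sketched in a few lines (the state names come from the abstract; the probabilities and the 0.8 threshold are assumed for illustration, not the authors' trained CNN): auto-label only when the model's top probability clears the confidence parameter, otherwise defer.

```python
# Minimal sketch of confidence-gated brain-state labeling (assumed threshold,
# not the authors' model): classify automatically only when confident.
import numpy as np

STATES = ["slow_oscillation", "microarousal", "wakefulness"]

def gate(probs, threshold=0.8):
    """Return a state label if the top class probability clears the
    confidence threshold, else None (defer to clustering/expert review)."""
    probs = np.asarray(probs, dtype=float)
    k = int(np.argmax(probs))
    return STATES[k] if probs[k] >= threshold else None

print(gate([0.05, 0.03, 0.92]))  # confident -> 'wakefulness'
print(gate([0.40, 0.35, 0.25]))  # ambiguous -> None (deferred)
```

Deferred cases would then flow to the unsupervised multimodal classifier or to an expert, as the abstract describes.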
Affiliation(s)
- Arnau Marin-Llobet
- Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Roselló 149-153, 08036, Barcelona, Spain
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Boston, MA, 02138, USA
- Arnau Manasanch
- Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Roselló 149-153, 08036, Barcelona, Spain
- Faculty of Medicine and Health Sciences, University of Barcelona, 08036, Barcelona, Spain
- Leonardo Dalla Porta
- Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Roselló 149-153, 08036, Barcelona, Spain
- Melody Torao-Angosto
- Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Roselló 149-153, 08036, Barcelona, Spain
- Maria V Sanchez-Vives
- Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Roselló 149-153, 08036, Barcelona, Spain
- ICREA, Passeig Lluís Companys 23, 08010, Barcelona, Spain
7. Dernoncourt F, Avrillon S, Logtens T, Cattagni T, Farina D, Hug F. Flexible control of motor units: is the multidimensionality of motor unit manifolds a sufficient condition? J Physiol 2025; 603:2349-2368. PMID: 39964831. PMCID: PMC12013786. DOI: 10.1113/jp287857.
Abstract
Understanding flexibility in the neural control of movement requires identifying the distribution of common inputs to the motor units. In this study, we identified large samples of motor units from two lower limb muscles: the vastus lateralis (VL; up to 60 motor units per participant) and the gastrocnemius medialis (GM; up to 67 motor units per participant). First, we applied a linear dimensionality reduction method to assess the dimensionality of the manifolds underlying the motor unit activity. We subsequently investigated the flexibility in motor unit control under two conditions: sinusoidal contractions with torque feedback, and online control with visual feedback on motor unit firing rates. Overall, we found that the activity of GM motor units was effectively captured by a single latent factor defining a unidimensional manifold, whereas the VL motor units were better represented by three latent factors defining a multidimensional manifold. Despite this difference in dimensionality, the recruitment of motor units in the two muscles exhibited similarly low levels of flexibility. Using a spiking network model, we tested the hypothesis that dimensionality derived from factorization does not solely represent descending cortical commands but is also influenced by spinal circuitry. We demonstrated that a heterogeneous distribution of inputs to motor units, or specific configurations of recurrent inhibitory circuits, could produce a multidimensional manifold. This study clarifies an important debated issue, demonstrating that while motor unit firings of a non-compartmentalized muscle can lie in a multidimensional manifold, the CNS may still have limited capacity for flexible control of these units.
KEY POINTS:
- To generate movement, the CNS distributes both excitatory and inhibitory inputs to the motor units.
- The level of flexibility in the neural control of these motor units remains a topic of debate, with significant implications for identifying the smallest unit of movement control.
- By combining experimental data and in silico models, we demonstrated that the activity of a large sample of motor units from a single muscle can be represented by a multidimensional linear manifold; however, these units show very limited flexibility in their recruitment.
- The dimensionality of the linear manifold may not directly reflect the dimensionality of descending inputs but could instead relate to the organization of local spinal circuits.
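A sketch of the dimensionality estimate described above (synthetic firing rates and a PCA variance threshold are assumptions here; the study's exact factorization method may differ): count how many latent factors are needed to explain most of the motor-unit activity variance.

```python
# Sketch (synthetic data, assumed 90% variance criterion): estimate manifold
# dimensionality of motor-unit activity via PCA on smoothed firing rates.
import numpy as np

def manifold_dim(rates, var_explained=0.9):
    """Number of principal components covering `var_explained` of variance."""
    X = rates - rates.mean(0)
    ev = np.linalg.svd(X, compute_uv=False) ** 2   # per-component variance
    frac = np.cumsum(ev) / ev.sum()
    return int(np.searchsorted(frac, var_explained) + 1)

rng = np.random.default_rng(2)
T, n_units = 500, 40
# "GM-like" population: one common drive shared by all units (+ small noise).
drive = rng.standard_normal((T, 1))
gm = drive @ rng.standard_normal((1, n_units)) + 0.05 * rng.standard_normal((T, n_units))
# "VL-like" population: three latent factors.
latents = rng.standard_normal((T, 3))
vl = latents @ rng.standard_normal((3, n_units)) + 0.05 * rng.standard_normal((T, n_units))

print(manifold_dim(gm), manifold_dim(vl))  # typically 1 and 3
```

As the abstract stresses, such a count reflects the manifold's dimensionality, not necessarily the dimensionality of descending commands.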
Affiliation(s)
- Simon Avrillon
- Université Côte d'Azur, LAMHESS, Nice, France
- Department of Bioengineering, Faculty of Engineering, Imperial College London, London, UK
- Thomas Cattagni
- Nantes Université, Laboratory 'Movement, Interactions, Performance' (UR 4334), Nantes, France
- Dario Farina
- Department of Bioengineering, Faculty of Engineering, Imperial College London, London, UK
- François Hug
- Université Côte d'Azur, LAMHESS, Nice, France
- School of Biomedical Sciences, The University of Queensland, Brisbane, Queensland, Australia
8. Humphries MD. The Computational Bottleneck of Basal Ganglia Output (and What to Do About it). eNeuro 2025; 12:ENEURO.0431-23.2024. PMID: 40274408. PMCID: PMC12039478. DOI: 10.1523/eneuro.0431-23.2024.
Abstract
What the basal ganglia do is an oft-asked question; answers range from the selection of actions to the specification of movement to the estimation of time. Here, I argue that how the basal ganglia do what they do is a less-asked but equally important question. I show that the output regions of the basal ganglia create a stringent computational bottleneck, both structurally, because they have far fewer neurons than do their target regions, and dynamically, because of their tonic, inhibitory output. My proposed solution to this bottleneck is that the activity of an output neuron sets the weight of a basis function, a function defined by that neuron's synaptic contacts. I illustrate how this may work in practice, allowing basal ganglia output to shift cortical dynamics and control eye movements via the superior colliculus. This solution can account for troubling issues in our understanding of the basal ganglia: why we see output neurons increasing their activity during behavior, rather than only decreasing as predicted by theories based on disinhibition, and why the output of the basal ganglia seems to have so many codes squashed into such a tiny region of the brain.
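The basis-function proposal can be illustrated numerically (the Gaussian basis shapes, centers, and weights below are hypothetical numbers for illustration, not values from the paper): a few output neurons weight basis functions over a downstream map, so a small population shapes a much larger activity profile.

```python
# Worked sketch of the basis-function idea (hypothetical numbers): each of a
# few "output neurons" weights one basis function over a downstream map, and
# the weighted sum sets the downstream activity profile.
import numpy as np

x = np.linspace(-1.0, 1.0, 200)        # downstream "map" coordinate
centers = np.array([-0.5, 0.0, 0.5])   # one basis function per output neuron

def basis(x, c, width=0.3):
    return np.exp(-0.5 * ((x - c) / width) ** 2)

Phi = np.stack([basis(x, c) for c in centers])  # (3 neurons, 200 targets)

# Output firing rates act as weights; for simplicity the sign of the
# (inhibitory) contribution is folded into the weights here.
w = np.array([0.2, 1.0, 0.1])
profile = w @ Phi

peak = x[np.argmax(profile)]
print(f"profile peaks near {peak:+.2f}")  # dominated by the heavily weighted center
```

This captures the key point: three output neurons, far fewer than the 200 downstream targets, still determine the full downstream profile through their basis-function weights.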
9. Chou CN, Kim R, Arend LA, Yang YY, Mensh BD, Shim WM, Perich MG, Chung S. Geometry Linked to Untangling Efficiency Reveals Structure and Computation in Neural Populations. bioRxiv [preprint] 2025:2024.02.26.582157. PMID: 40236228. PMCID: PMC11996410. DOI: 10.1101/2024.02.26.582157.
Abstract
From an eagle spotting a fish in shimmering water to a scientist extracting patterns from noisy data, many cognitive tasks require untangling overlapping signals. Neural circuits achieve this by transforming complex sensory inputs into distinct, separable representations that guide behavior. Data-visualization techniques convey the geometry of these transformations, and decoding approaches quantify performance efficiency. However, we lack a framework for linking these two key aspects. Here we address this gap by introducing a data-driven analysis framework, which we call Geometry Linked to Untangling Efficiency (GLUE), built on manifold capacity theory, that links changes in the geometrical properties of neural activity patterns to representational untangling at the computational level. We applied GLUE to over seven neuroscience datasets, spanning multiple organisms, tasks, and recording techniques, and found that task-relevant representations untangle in many domains, including along the cortical hierarchy, through learning, and over the course of intrinsic neural dynamics. Furthermore, GLUE can characterize the underlying geometric mechanisms of representational untangling and explain how it facilitates efficient and robust computation. Beyond neuroscience, GLUE provides a powerful framework for quantifying information organization in data-intensive fields such as structural genomics and interpretable AI, where analyzing high-dimensional representations remains a fundamental challenge.
10. Koh N, Ma Z, Sarup A, Kristl AC, Agrios M, Young M, Miri A. Selective direct motor cortical influence during naturalistic climbing in mice. bioRxiv [preprint] 2025:2023.06.18.545509. PMID: 39229015. PMCID: PMC11370436. DOI: 10.1101/2023.06.18.545509.
Abstract
It remains poorly resolved when and how motor cortical output directly influences limb muscle activity through descending projections, which impedes mechanistic understanding of motor control. Here we addressed this in mice performing an ethologically inspired climbing behavior. We quantified the direct influence of forelimb primary motor cortex (caudal forelimb area, CFA) on muscles across the muscle activity states expressed during climbing. We found that CFA instructs the pattern of muscle activity by selectively activating certain muscles, while less frequently activating or suppressing their antagonists. From Neuropixels recordings, we identified linear combinations (components) of motor cortical activity that covary with these effects. These components differ partially from those that covary with muscle activity and differ almost completely from those that covary with kinematics. Collectively, our results reveal an instructive direct motor cortical influence on limb muscles that is selective within a motor behavior and reliant on a distinct neural activity subspace.
11. Hu B, Temiz NZ, Chou CN, Rupprecht P, Meissner-Bernard C, Titze B, Chung S, Friedrich RW. Representational learning by optimization of neural manifolds in an olfactory memory network. Research Square [preprint] 2025:rs.3.rs-6155477. PMID: 40195987. PMCID: PMC11975023. DOI: 10.21203/rs.3.rs-6155477/v1.
Abstract
Cognitive brain functions rely on experience-dependent internal representations of relevant information. Such representations are organized by attractor dynamics or other mechanisms that constrain population activity onto "neural manifolds". Quantitative analyses of representational manifolds are complicated by their potentially complex geometry, particularly in the absence of attractor states. Here we trained juvenile and adult zebrafish in an odor discrimination task and measured neuronal population activity to analyze representations of behaviorally relevant odors in telencephalic area pDp, the homolog of piriform cortex. No obvious signatures of attractor dynamics were detected. However, olfactory discrimination training selectively enhanced the separation of neural manifolds representing task-relevant odors from other representations, consistent with predictions of autoassociative network models endowed with precise synaptic balance. Analytical approaches using the framework of manifold capacity revealed multiple geometrical modifications of representational manifolds that supported the classification of task-relevant sensory information. Manifold capacity predicted odor discrimination across individuals better than other descriptors of population activity, indicating a close link between manifold geometry and behavior. Hence, pDp and possibly related recurrent networks store information in the geometry of representational manifolds, resulting in joint sensory and semantic maps that may support distributed learning processes.
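The manifold-separation effect described above can be sketched with synthetic response clouds (the Gaussian clouds, the centroid-distance metric, and the gain-based "training" below are illustrative assumptions, not the zebrafish analysis, which used manifold capacity): training-like changes that enlarge the distance between odor representations relative to their spread improve their separability.

```python
# Sketch (synthetic clouds, not the study's data or capacity measure): compare
# the separation of two odor "manifolds" before and after a training-like
# transformation that amplifies discriminative dimensions.
import numpy as np

rng = np.random.default_rng(4)

def separation(A, B):
    # Distance between manifold centroids, in units of mean within-cloud spread.
    d = np.linalg.norm(A.mean(0) - B.mean(0))
    spread = 0.5 * (A.std(0).mean() + B.std(0).mean())
    return d / spread

A = rng.standard_normal((100, 30)) + 0.5   # odor 1 responses (100 trials, 30 cells)
B = rng.standard_normal((100, 30)) - 0.5   # odor 2 responses

# A "training" that scales a few discriminative dimensions enlarges separation.
gain = np.ones(30)
gain[:5] = 3.0
before = separation(A, B)
after = separation(A * gain, B * gain)
print(f"separation before training-like gain: {before:.2f}, after: {after:.2f}")
```

Manifold capacity generalizes this intuition by accounting for the full geometry (radius, dimension, correlations) of each manifold rather than just centroid distance.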
Affiliation(s)
- Bo Hu
- Friedrich Miescher Institute for Biomedical Research, Fabrikstrasse 24, 4056 Basel, Switzerland
- University of Basel, 4003 Basel, Switzerland
- Nesibe Z. Temiz
- Friedrich Miescher Institute for Biomedical Research, Fabrikstrasse 24, 4056 Basel, Switzerland
- University of Basel, 4003 Basel, Switzerland
- Chi-Ning Chou
- Center for Computational Neuroscience, Flatiron Institute, New York, NY, USA
- Peter Rupprecht
- Friedrich Miescher Institute for Biomedical Research, Fabrikstrasse 24, 4056 Basel, Switzerland
- Neuroscience Center Zurich, 8057 Zurich, Switzerland
- Brain Research Institute, University of Zurich, 8057 Zurich, Switzerland
- Claire Meissner-Bernard
- Friedrich Miescher Institute for Biomedical Research, Fabrikstrasse 24, 4056 Basel, Switzerland
- Benjamin Titze
- Friedrich Miescher Institute for Biomedical Research, Fabrikstrasse 24, 4056 Basel, Switzerland
- SueYeon Chung
- Center for Computational Neuroscience, Flatiron Institute, New York, NY, USA
- Center for Neural Science, New York University, New York, NY, USA
- Rainer W. Friedrich
- Friedrich Miescher Institute for Biomedical Research, Fabrikstrasse 24, 4056 Basel, Switzerland
- University of Basel, 4003 Basel, Switzerland
12. Blanco Malerba S, Pieropan M, Burak Y, Azeredo da Silveira R. Random compressed coding with neurons. Cell Rep 2025; 44:115412. PMID: 40111998. DOI: 10.1016/j.celrep.2025.115412.
Abstract
Classical models of efficient coding in neurons assume simple mean responses ("tuning curves"), such as bell-shaped or monotonic functions of a stimulus feature. Real neurons, however, can be more complex: grid cells, for example, exhibit periodic responses that impart the neural population code with high accuracy. But do highly accurate codes require fine-tuning of the response properties? We address this question with the use of a simple model: a population of neurons with random, spatially extended, and irregular tuning curves. Irregularity enhances the local resolution of the code but gives rise to catastrophic, global errors. For optimal smoothness of the tuning curves, when local and global errors balance out, the neural population compresses information about a continuous stimulus into a low-dimensional representation, and the resulting distributed code achieves exponential accuracy. An analysis of recordings from monkey motor cortex points to such "compressed efficient coding." Efficient codes do not require a finely tuned design: they emerge robustly from irregularity or randomness.
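The random-tuning-curve model can be sketched as follows (the smoothing kernel, population size, noise level, and template-matching decoder are illustrative assumptions, not the paper's analytical setup): smooth random curves support fine local discrimination, while distant stimuli with coincidentally similar population responses are the source of the global errors the abstract describes.

```python
# Simplified sketch (toy decoder, assumed parameters): random, smooth,
# irregular tuning curves decoded by nearest-template matching.
import numpy as np

rng = np.random.default_rng(3)
stim = np.linspace(0.0, 1.0, 400)  # stimulus grid

def smooth_random_tuning(n_neurons, n_stim, smooth=15):
    # Smoothing Gaussian noise along the stimulus axis yields spatially
    # extended, irregular tuning curves.
    raw = rng.standard_normal((n_neurons, n_stim))
    kernel = np.exp(-0.5 * (np.arange(-40, 41) / smooth) ** 2)
    kernel /= kernel.sum()
    return np.array([np.convolve(r, kernel, mode="same") for r in raw])

F = smooth_random_tuning(n_neurons=20, n_stim=stim.size)  # (neurons, stimuli)

def decode(response):
    # Nearest population-response template over the stimulus grid.
    return stim[np.argmin(((F.T - response) ** 2).sum(1))]

true = 0.3
idx = np.argmin(np.abs(stim - true))
noisy = F[:, idx] + 0.05 * rng.standard_normal(F.shape[0])
print(f"decoded {decode(noisy):.3f} for true stimulus {true:.3f}")
```

With higher noise or fewer neurons, the same decoder occasionally lands on a distant stimulus whose population response happens to be similar, which is the catastrophic global error mode that trades off against local resolution.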
Affiliation(s)
- Simone Blanco Malerba
- Laboratoire de Physique de l'Ecole Normale Supérieure, ENS, Université PSL, CNRS, Sorbonne Université, Université de Paris, 75005 Paris, France; Institute for Neural Information Processing, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf, 20251 Hamburg, Germany
- Mirko Pieropan
- Laboratoire de Physique de l'Ecole Normale Supérieure, ENS, Université PSL, CNRS, Sorbonne Université, Université de Paris, 75005 Paris, France
- Yoram Burak
- Racah Institute of Physics, Hebrew University of Jerusalem, Jerusalem 9190401, Israel; Edmond and Lily Safra Center for Brain Sciences, Hebrew University of Jerusalem, Jerusalem 9190401, Israel
- Rava Azeredo da Silveira
- Laboratoire de Physique de l'Ecole Normale Supérieure, ENS, Université PSL, CNRS, Sorbonne Université, Université de Paris, 75005 Paris, France; Institute of Molecular and Clinical Ophthalmology Basel, 4031 Basel, Switzerland; Faculty of Science, University of Basel, 4056 Basel, Switzerland; Department of Economics, University of Zurich, 8001 Zurich, Switzerland
13
Kudryashova N, Hurwitz C, Perich MG, Hennig MH. BAND: Behavior-Aligned Neural Dynamics is all you need to capture motor corrections. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2025:2025.03.21.644350. [PMID: 40196470 PMCID: PMC11974739 DOI: 10.1101/2025.03.21.644350] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/09/2025]
Abstract
Neural activity in motor cortical areas is well explained by latent neural population dynamics: the motor preparation phase sets the initial condition for the movement, while the dynamics that unfold during the motor execution phase orchestrate the sequence of muscle activations. While preparatory activity explains a large fraction of both neural and behavioral variability during the execution of a planned movement, it cannot account for corrections and adjustments during movements, as these require sensory feedback not available during planning. Therefore, accounting for unplanned, sensory-guided movement requires knowledge of the relevant inputs to the motor cortex from other brain areas. Here, we provide evidence that these inputs cause transient deviations from an autonomous neural population trajectory, and show that these dynamics cannot be found by unsupervised inference methods. We introduce the Behavior-Aligned Neural Dynamics (BAND) model, which exploits semi-supervised learning to predict both planned and unplanned movements from motor cortical activity, including components that unsupervised inference methods can miss. Our analysis using BAND suggests that 1) transient motor corrections are encoded in small neural variability; 2) motor corrections are encoded in a sparse sub-population of primary motor cortex (M1) neurons; and 3) combining latent dynamical modeling with behavior supervision captures both the movement plan and corrections.
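The core claim, that behavior supervision recovers small-variance neural components that unsupervised methods miss, can be illustrated with a linear toy model (an assumed setup for illustration, not the BAND architecture): a high-variance "plan" signal dominates the top principal component, while a low-variance "correction" signal drives behavior.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 500, 40

# Toy population: a high-variance "planned movement" signal and a
# low-variance "correction" signal, each projected onto n neurons.
plan = rng.standard_normal(T)
corr = rng.standard_normal(T)
a, b = rng.standard_normal(n), rng.standard_normal(n)
spikes = np.outer(plan, a) + 0.05 * np.outer(corr, b) \
         + 0.01 * rng.standard_normal((T, n))
behavior = corr                                  # corrections drive behavior

def r2(pred, y):
    return 1 - np.var(y - pred) / np.var(y)

# Unsupervised: the top principal component is dominated by the plan signal,
# so it predicts the correction-driven behavior poorly.
X = spikes - spikes.mean(0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = X @ Vt[0]
coef = np.polyfit(pc1, behavior, 1)
r2_unsup = r2(np.polyval(coef, pc1), behavior)

# Supervised: ridge-regress behavior on the full population, which
# recovers the small-variance correction component.
W = np.linalg.solve(X.T @ X + 1e-3 * np.eye(n), X.T @ behavior)
r2_sup = r2(X @ W, behavior)
print(f"R^2 from top PC: {r2_unsup:.2f}   R^2 with supervision: {r2_sup:.2f}")
```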
Affiliation(s)
- Nina Kudryashova
- School of Informatics, University of Edinburgh; Informatics Forum, 10 Crichton St, Newington, Edinburgh EH8 9AB, United Kingdom
- Cole Hurwitz
- Zuckerman Institute, Columbia University; 3227 Broadway, New York, NY 10027, United States
- Matthew G Perich
- Département de neurosciences, Faculté de médecine, Université de Montréal; Pavillon Roger-Gaudry, 2900 Edouard Montpetit Blvd, Montreal, Quebec H3T 1J4, Canada
- Mila, Quebec Artificial Intelligence Institute; 6666 Rue Saint-Urbain, Montréal, QC H2S 3H1, Canada
- Matthias H Hennig
- School of Informatics, University of Edinburgh; Informatics Forum, 10 Crichton St, Newington, Edinburgh EH8 9AB, United Kingdom
14
Shymkiv Y, Hamm JP, Escola S, Yuste R. Slow cortical dynamics generate context processing and novelty detection. Neuron 2025; 113:847-857.e8. [PMID: 39933524 PMCID: PMC11925667 DOI: 10.1016/j.neuron.2025.01.011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2024] [Revised: 11/08/2024] [Accepted: 01/15/2025] [Indexed: 02/13/2025]
Abstract
The cortex amplifies responses to novel stimuli while suppressing redundant ones. Novelty detection is necessary to efficiently process sensory information and build predictive models of the environment, and it is also altered in schizophrenia. To investigate the circuit mechanisms underlying novelty detection, we used an auditory "oddball" paradigm and two-photon calcium imaging to measure responses to simple and complex stimuli across mouse auditory cortex. Stimulus statistics and complexity generated specific responses across auditory areas. Neuronal ensembles reliably encoded auditory features and temporal context. Interestingly, stimulus-evoked population responses were particularly long-lasting, reflecting stimulus history and affecting future responses. These slow cortical dynamics encoded stimulus temporal context and generated stronger responses to novel stimuli. Recurrent neural network models trained on the oddball task also exhibited slow network dynamics and recapitulated the biological data. We conclude that the slow dynamics of recurrent cortical networks underlie context processing and novelty detection.
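A minimal way to see how slow dynamics yield oddball novelty responses is an adaptation trace with a long time constant (an assumed toy model, not the paper's recurrent network): each stimulus leaves a slowly decaying trace that suppresses later responses to the same stimulus, so a rare deviant evokes a larger response than the repeated standard.

```python
import numpy as np

tau = 10.0                          # slow timescale, in units of trials
decay = np.exp(-1.0 / tau)

def run_oddball(seq):
    """Return per-trial responses under slow stimulus-specific adaptation."""
    adapt = {s: 0.0 for s in set(seq)}
    responses = []
    for s in seq:
        for k in adapt:
            adapt[k] *= decay       # slow decay of stimulus history
        responses.append(1.0 / (1.0 + adapt[s]))  # suppressed by history
        adapt[s] += 1.0             # the stimulus leaves a trace
    return np.array(responses)

# Standard tone 'A' with a rare deviant 'B' in the middle of the sequence.
seq = ['A'] * 9 + ['B'] + ['A'] * 10
r = run_oddball(seq)
print("deviant response:", r[9], " preceding standard response:", r[8])
```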
Affiliation(s)
- Yuriy Shymkiv
- Neurotechnology Center, Department of Biological Sciences, Columbia University, New York, NY, USA.
- Jordan P Hamm
- Neurotechnology Center, Department of Biological Sciences, Columbia University, New York, NY, USA
- Sean Escola
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Department of Psychiatry, Columbia University, New York, NY, USA
- Rafael Yuste
- Neurotechnology Center, Department of Biological Sciences, Columbia University, New York, NY, USA
15
Zhu T, Areshenkoff CN, De Brouwer AJ, Nashed JY, Flanagan JR, Gallivan JP. Contractions in human cerebellar-cortical manifold structure underlie motor reinforcement learning. J Neurosci 2025; 45:e2158242025. [PMID: 40101964 PMCID: PMC12044045 DOI: 10.1523/jneurosci.2158-24.2025] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2024] [Revised: 02/12/2025] [Accepted: 03/06/2025] [Indexed: 03/20/2025] Open
Abstract
How the brain learns new motor commands through reinforcement involves distributed neural circuits beyond known frontal-striatal pathways, yet a comprehensive understanding of this broader neural architecture remains elusive. Here, using human functional MRI (N = 46, 27 females) and manifold learning techniques, we identified a low-dimensional neural space that captured the dynamic changes in whole-brain functional organization during a reward-based trajectory learning task. By quantifying participants' learning rates through an Actor-Critic model, we discovered that periods of accelerated learning were characterized by significant manifold contractions across multiple brain regions, including areas of limbic and hippocampal cortex, as well as the cerebellum. This contraction reflected enhanced network integration, with notably stronger connectivity between several of these regions and the sensorimotor cerebellum correlating with higher learning rates. These findings challenge the traditional view of the cerebellum as solely involved in error-based learning, supporting the emerging view that it coordinates with other brain regions during reinforcement learning.
Significance Statement: This study reveals how distributed brain systems, including the cerebellum and hippocampus, alter their functional connectivity to support motor learning through reinforcement. Using advanced manifold learning techniques on functional MRI data, we examined changes in regional connectivity during reward-based learning and their relationship to learning rate. For several brain regions, we found that periods of heightened learning were associated with increased cerebellar connectivity, suggesting a key role for the cerebellum in reward-based motor learning. These findings challenge the traditional view of the cerebellum as solely involved in supervised (error-based) learning and add to a growing rodent literature supporting a role for cerebellar circuits in reward-driven learning.
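The Actor-Critic model used to quantify learning rates can be sketched in its simplest form: a softmax actor and a scalar critic on a two-choice task, updated by the reward prediction error (parameter values here are illustrative, not those fit to participants).

```python
import numpy as np

rng = np.random.default_rng(2)

p_reward = {0: 0.2, 1: 0.8}        # action 1 is the better choice
theta = np.zeros(2)                # actor: action preferences
v = 0.0                            # critic: value estimate (baseline)
alpha_actor, alpha_critic = 0.2, 0.1

choices = []
for t in range(500):
    p = np.exp(theta) / np.exp(theta).sum()          # softmax policy
    a = rng.choice(2, p=p)
    r = float(rng.random() < p_reward[a])            # Bernoulli reward
    delta = r - v                                    # reward prediction error
    v += alpha_critic * delta                        # critic update
    theta[a] += alpha_actor * delta * (1 - p[a])     # policy-gradient update
    theta[1 - a] -= alpha_actor * delta * p[1 - a]
    choices.append(a)

print("P(better action) over last 100 trials:", np.mean(choices[-100:]))
```

In the study, a model of this family is fit to behavior, and the fitted trial-by-trial learning rate is what gets related to manifold contraction.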
Affiliation(s)
- Tianyao Zhu
- Center for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada.
- Corson N Areshenkoff
- Center for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Department of Psychology, Queen's University, Kingston, Ontario, Canada
- Anouk J De Brouwer
- Center for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Joseph Y Nashed
- Center for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- J Randall Flanagan
- Center for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Department of Psychology, Queen's University, Kingston, Ontario, Canada
- Jason P Gallivan
- Center for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Department of Psychology, Queen's University, Kingston, Ontario, Canada
- Department of Biomedical and Molecular Sciences, Queen's University, Kingston, Ontario, Canada
16
Chiossi HSC, Nardin M, Tkačik G, Csicsvari J. Learning reshapes the hippocampal representation hierarchy. Proc Natl Acad Sci U S A 2025; 122:e2417025122. [PMID: 40063792 PMCID: PMC11929462 DOI: 10.1073/pnas.2417025122] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2024] [Accepted: 01/27/2025] [Indexed: 03/25/2025] Open
Abstract
A key feature of biological and artificial neural networks is the progressive refinement of their neural representations with experience. In neuroscience, this fact has inspired several recent studies in sensory and motor systems. However, less is known about how higher associational cortical areas, such as the hippocampus, modify representations throughout the learning of complex tasks. Here, we focus on associative learning, a process that requires forming a connection between the representations of different variables for appropriate behavioral response. We trained rats in a space-context associative task and monitored hippocampal neural activity throughout the entire learning period, over several days. This allowed us to assess changes in the representations of context, movement direction, and position, as well as their relationship to behavior. We identified a hierarchical representational structure in the encoding of these three task variables that was preserved throughout learning. Nevertheless, we also observed changes at the lower levels of the hierarchy where context was encoded. These changes were local in neural activity space and restricted to physical positions where context identification was necessary for correct decision-making, supporting better context decoding and contextual code compression. Our results demonstrate that the hippocampal code not only accommodates hierarchical relationships between different variables but also enables efficient learning through minimal changes in neural activity space. Beyond the hippocampus, our work reveals a representation learning mechanism that might be implemented in other biological and artificial networks performing similar tasks.
Affiliation(s)
- Gašper Tkačik
- Institute of Science and Technology Austria, Klosterneuburg AT-3400, Austria
- Jozsef Csicsvari
- Institute of Science and Technology Austria, Klosterneuburg AT-3400, Austria
17
Israely S, Ninou H, Rajchert O, Elmaleh L, Harel R, Mawase F, Kadmon J, Prut Y. Cerebellar output shapes cortical preparatory activity during motor adaptation. Nat Commun 2025; 16:2574. [PMID: 40089504 PMCID: PMC11910607 DOI: 10.1038/s41467-025-57832-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2024] [Accepted: 03/05/2025] [Indexed: 03/17/2025] Open
Abstract
The cerebellum plays a key role in motor adaptation by driving trial-to-trial recalibration of movements based on previous errors. In primates, cortical correlates of adaptation are encoded already in the pre-movement motor plan, but these early cortical signals could be driven by a cerebellar-to-cortical information flow or evolve independently through intracortical mechanisms. To address this question, we trained female macaque monkeys to reach against a viscous force field (FF) while blocking cerebellar outflow. The cerebellar block led to impaired FF adaptation and a compensatory, re-aiming-like shift in motor cortical preparatory activity. In the null-field conditions, the cerebellar block altered neural preparatory activity by increasing task-representation dimensionality and impeding generalization. A computational model indicated that low-dimensional (cerebellar-like) feedback is sufficient to replicate these findings. We conclude that cerebellar signals carry task structure information that constrains the dimensionality of the cortical preparatory manifold and promotes generalization. In the absence of these signals, cortical mechanisms are harnessed to partially restore adaptation.
Affiliation(s)
- Sharon Israely
- The Edmond and Lily Safra Center For Brain Sciences, The Hebrew University, Jerusalem, Israel
- Hugo Ninou
- The Edmond and Lily Safra Center For Brain Sciences, The Hebrew University, Jerusalem, Israel
- Département d'Etudes Cognitives, Ecole Normale Supérieure, Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, PSL University, Paris, France
- Laboratoire de Physique de l'Ecole Normale Supérieure, Ecole Normale Supérieure, PSL University, Paris, France
- Ori Rajchert
- Faculty of Biomedical Engineering, Technion - Israel Institute of Technology, Haifa, Israel
- Lee Elmaleh
- The Edmond and Lily Safra Center For Brain Sciences, The Hebrew University, Jerusalem, Israel
- Ran Harel
- Department of Neurosurgery, Sheba Medical Center, Tel Aviv, Israel
- Firas Mawase
- Faculty of Biomedical Engineering, Technion - Israel Institute of Technology, Haifa, Israel
- Jonathan Kadmon
- The Edmond and Lily Safra Center For Brain Sciences, The Hebrew University, Jerusalem, Israel
- Yifat Prut
- The Edmond and Lily Safra Center For Brain Sciences, The Hebrew University, Jerusalem, Israel
18
Magnasco MO. Input-driven circuit reconfiguration in critical recurrent neural networks. Proc Natl Acad Sci U S A 2025; 122:e2418818122. [PMID: 40053358 PMCID: PMC11912373 DOI: 10.1073/pnas.2418818122] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2024] [Accepted: 01/09/2025] [Indexed: 03/19/2025] Open
Abstract
Changing a circuit dynamically, without actually changing the hardware itself, is called reconfiguration, and is of great importance due to its manifold technological applications. Circuit reconfiguration appears to be a feature of the cerebral cortex, so understanding the dynamical principles underlying self-reconfiguration may prove important for elucidating brain function. We present a very simple example of dynamical reconfiguration: a family of networks whose signal pathways can be switched on the fly, only through use of their inputs, with no changes to their synaptic weights. These are single-layer convolutional recurrent networks with local unitary synaptic weights and a smooth sigmoidal activation function. We generate traveling waves using the high spatiotemporal frequencies of the input, and we use the low spatiotemporal frequencies of the input to landscape the ongoing activity, channeling said traveling waves through an input-specified spatial pattern. This mechanism uses inherent properties of marginally stable, dynamically critical systems, which are a direct consequence of their unitary convolution kernels: every network in the family can do this. We show these networks solve the classical connectedness detection problem, by allowing signal propagation only along the regions to be evaluated for connectedness, and forbidding it elsewhere.
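A discrete analogue of this gated-propagation idea (an assumed simplification; the paper uses continuous traveling waves, not breadth-first search) shows how restricting signal flow to an input-specified region solves connectedness detection: activity injected at a seed spreads only through the permitted region, so two sites are connected exactly when the activity reaches from one to the other.

```python
import numpy as np
from collections import deque

def connected(mask, src, dst):
    """Propagate activity from src only through True cells of mask."""
    H, W = mask.shape
    active = {src}
    frontier = deque([src])
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (r + dr, c + dc)
            if (0 <= nb[0] < H and 0 <= nb[1] < W
                    and mask[nb] and nb not in active):
                active.add(nb)              # the "wave" reaches this cell
                frontier.append(nb)
    return dst in active

# Two regions separated by an all-zero column: propagation cannot cross it.
mask = np.array([[1, 1, 0, 1],
                 [0, 1, 0, 1],
                 [0, 1, 0, 1]], dtype=bool)
print(connected(mask, (0, 0), (2, 1)))   # same component
print(connected(mask, (0, 0), (0, 3)))   # blocked by the gap
```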
Affiliation(s)
- Marcelo O. Magnasco
- Laboratory of Integrative Neuroscience, Rockefeller University, New York, NY 10065
19
Clark DG, Beiran M. Structure of activity in multiregion recurrent neural networks. Proc Natl Acad Sci U S A 2025; 122:e2404039122. [PMID: 40053363 PMCID: PMC11912375 DOI: 10.1073/pnas.2404039122] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2024] [Accepted: 02/07/2025] [Indexed: 03/12/2025] Open
Abstract
Neural circuits comprise multiple interconnected regions, each with complex dynamics. The interplay between local and global activity is thought to underlie computational flexibility, yet the structure of multiregion neural activity and its origins in synaptic connectivity remain poorly understood. We investigate recurrent neural networks with multiple regions, each containing neurons with random and structured connections. Inspired by experimental evidence of communication subspaces, we use low-rank connectivity between regions to enable selective activity routing. These networks exhibit high-dimensional fluctuations within regions and low-dimensional signal transmission between them. Using dynamical mean-field theory, with cross-region currents as order parameters, we show that regions act as both generators and transmitters of activity, roles that are often in tension. Taming within-region activity can be crucial for effective signal routing. Unlike previous models that suppressed neural activity to control signal flow, our model achieves routing by exciting different high-dimensional activity patterns through connectivity structure and nonlinear dynamics. Our analysis of this disordered system offers insights into multiregion neural data and trained neural networks.
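The routing structure, high-dimensional activity within a region but low-rank transmission between regions, can be sketched directly (a toy two-region network; the scale, gain, and noise level are assumed, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
N, T = 200, 200

# Region 1: random recurrent connectivity, driven by noise.
g = 0.8
J11 = g * rng.standard_normal((N, N)) / np.sqrt(N)
# Region 1 -> region 2: rank-1 "communication subspace" u v^T / N.
u = rng.standard_normal(N)
v = rng.standard_normal(N)

x1 = rng.standard_normal(N)
states1 = np.empty((T, N))
inputs_to_2 = np.empty((T, N))
for t in range(T):
    x1 = np.tanh(J11 @ x1 + 0.5 * rng.standard_normal(N))
    states1[t] = x1
    inputs_to_2[t] = u * (v @ x1) / N     # current sent to region 2

def top_var_fraction(X):
    """Fraction of variance captured by the top dimension of X."""
    s = np.linalg.svd(X - X.mean(0), compute_uv=False)
    return s[0] ** 2 / (s ** 2).sum()

print("within-region top-dim variance fraction:", top_var_fraction(states1))
print("transmitted  top-dim variance fraction:", top_var_fraction(inputs_to_2))
```

Whatever the within-region fluctuations do, the transmitted current is confined to the single dimension spanned by `u`.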
Affiliation(s)
- David G. Clark
- Zuckerman Institute, Columbia University, New York, NY 10027
- Kavli Institute for Brain Science, Columbia University, New York, NY 10027
- Manuel Beiran
- Zuckerman Institute, Columbia University, New York, NY 10027
- Kavli Institute for Brain Science, Columbia University, New York, NY 10027
20
Ianni GR, Vázquez Y, Rouse AG, Schieber MH, Prut Y, Freiwald WA. Facial gestures are enacted via a cortical hierarchy of dynamic and stable codes. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2025:2025.03.03.641159. [PMID: 40161717 PMCID: PMC11952350 DOI: 10.1101/2025.03.03.641159] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/02/2025]
Abstract
Successful communication requires the generation and perception of a shared set of signals. Facial gestures are one fundamental set of communicative behaviors in primates, generated through the dynamic arrangement of dozens of fine muscles. While much progress has been made uncovering the neural mechanisms of face perception, little is known about those controlling facial gesture production. Commensurate with the importance of facial gestures in daily social life, anatomical work has shown that facial muscles are under direct control from multiple cortical regions, including primary and premotor areas in lateral frontal cortex, and cingulate areas in medial frontal cortex. Furthermore, neuropsychological evidence from focal lesion patients has suggested that lateral cortex controls voluntary movements, and medial cortex emotional expressions. Here we show that lateral and medial cortical face motor regions encode both types of gestures. They do so through unique temporal activity patterns, distinguishable well prior to movement onset. During gesture production, cortical regions encoded facial kinematics in a context-dependent manner. Our results show how cortical regions projecting in parallel downstream, but each situated at a different level of a posterior-anterior hierarchy, form a continuum of gesture coding from dynamic to temporally stable, producing context-related, coherent motor outputs during social communication.
21
Natraj N, Seko S, Abiri R, Miao R, Yan H, Graham Y, Tu-Chan A, Chang EF, Ganguly K. Sampling representational plasticity of simple imagined movements across days enables long-term neuroprosthetic control. Cell 2025; 188:1208-1225.e32. [PMID: 40054446 PMCID: PMC11932800 DOI: 10.1016/j.cell.2025.02.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2023] [Revised: 01/26/2025] [Accepted: 02/03/2025] [Indexed: 03/26/2025]
Abstract
The nervous system needs to balance the stability of neural representations with plasticity. It is unclear how stable the representations of simple, well-rehearsed actions are, particularly in humans, and how adaptable they are to new contexts. Using an electrocorticography brain-computer interface (BCI) in tetraplegic participants, we found that the low-dimensional manifold and relative representational distances for a repertoire of simple imagined movements were remarkably stable. The manifold's absolute location, however, demonstrated constrained day-to-day drift. Strikingly, neural statistics, especially variance, could be flexibly regulated to increase representational distances during BCI control without somatotopic changes. Discernability strengthened with practice and was BCI-specific, demonstrating contextual specificity. Sampling representational plasticity and drift across days subsequently uncovered a meta-representational structure with generalizable decision boundaries for the repertoire; this allowed long-term neuroprosthetic control of a robotic arm and hand for reaching and grasping. Our study offers insights into mesoscale representational statistics that also enable long-term complex neuroprosthetic control.
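The picture of a stable manifold geometry with drifting absolute location can be illustrated with synthetic latents (toy data, not the participants' recordings): if day 2 is an orthogonally transformed and shifted copy of day 1, relative representational distances are untouched, and a Procrustes alignment absorbs the drift.

```python
import numpy as np

rng = np.random.default_rng(4)
n_moves, dim = 6, 3

day1 = rng.standard_normal((n_moves, dim))            # day-1 movement centroids
Q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))  # random orthogonal drift
day2 = day1 @ Q + 0.5                                 # transformed + shifted copy

def pairwise(X):
    """Matrix of Euclidean distances between rows of X."""
    d = X[:, None] - X[None, :]
    return np.sqrt((d ** 2).sum(-1))

# Relative representational distances survive the drift...
print("max distance change:", np.abs(pairwise(day1) - pairwise(day2)).max())

# ...and an orthogonal Procrustes alignment absorbs the drift entirely.
A = day1 - day1.mean(0)
B = day2 - day2.mean(0)
U, _, Vt = np.linalg.svd(B.T @ A)
aligned = B @ (U @ Vt) + day1.mean(0)
print("residual after alignment:", np.linalg.norm(aligned - day1))
```

With real recordings the residual would of course be nonzero; its size is one way to quantify how much of the day-to-day change is mere drift versus genuine representational plasticity.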
Affiliation(s)
- Nikhilesh Natraj
- Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA; VA San Francisco Healthcare System, San Francisco, CA, USA
- Sarah Seko
- Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA
- Reza Abiri
- Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA
- Runfeng Miao
- Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA
- Hongyi Yan
- Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA
- Yasmin Graham
- Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA
- Adelyn Tu-Chan
- Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA
- Edward F Chang
- Department of Neurological Surgery, Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- Karunesh Ganguly
- Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA; VA San Francisco Healthcare System, San Francisco, CA, USA
22
Potter H, Mitchell K. Beyond Mechanism-Extending Our Concepts of Causation in Neuroscience. Eur J Neurosci 2025; 61:e70064. [PMID: 40075160 PMCID: PMC11903913 DOI: 10.1111/ejn.70064] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2024] [Revised: 02/24/2025] [Accepted: 03/02/2025] [Indexed: 03/14/2025]
Abstract
In neuroscience, the search for the causes of behaviour is often just taken to be the search for neural mechanisms. This view typically involves three forms of causal reduction: first, from the ontological level of cognitive processes to that of neural mechanisms; second, from the activity of the whole brain to that of isolated parts; and third, from a consideration of temporally extended, historical processes to a focus on synchronic states. While modern neuroscience has made impressive progress in identifying synchronic neural mechanisms, providing unprecedented real-time control of behaviour, we contend that this does not amount to a full causal explanation. In particular, there is an attendant danger of eliminating the cognitive from our explanatory framework, and even eliminating the organism itself. To fully understand the causes of behaviour, we need to understand not just what happens when different neurons are activated, but why those things happen. In this paper, we introduce a range of well-developed, non-reductive, and temporally extended notions of causality from philosophy, which neuroscientists may be able to draw on in order to build more complete causal explanations of behaviour. These include concepts of criterial causation, triggering versus structuring causes, constraints, macroscopic causation, historicity, and semantic causation, all of which, we argue, can be used to undergird a naturalistic understanding of mental causation and agent causation. These concepts can, collectively, help bring cognition and the organism itself back into the picture, as a causal agent unto itself, while still grounding causation in respectable scientific terms.
Affiliation(s)
- Henry D. Potter
- Smurfit Institute of Genetics and Institute of Neuroscience, Trinity College Dublin, Dublin 2, Ireland
- Kevin J. Mitchell
- Smurfit Institute of Genetics and Institute of Neuroscience, Trinity College Dublin, Dublin 2, Ireland
23
Gosztolai A, Peach RL, Arnaudon A, Barahona M, Vandergheynst P. MARBLE: interpretable representations of neural population dynamics using geometric deep learning. Nat Methods 2025; 22:612-620. [PMID: 39962310 PMCID: PMC11903309 DOI: 10.1038/s41592-024-02582-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2023] [Revised: 09/27/2024] [Accepted: 11/26/2024] [Indexed: 03/14/2025]
Abstract
The dynamics of neuron populations commonly evolve on low-dimensional manifolds. Thus, we need methods that learn the dynamical processes over neural manifolds to infer interpretable and consistent latent representations. We introduce a representation learning method, MARBLE, which decomposes on-manifold dynamics into local flow fields and maps them into a common latent space using unsupervised geometric deep learning. In simulated nonlinear dynamical systems, recurrent neural networks and experimental single-neuron recordings from primates and rodents, we discover emergent low-dimensional latent representations that parametrize high-dimensional neural dynamics during gain modulation, decision-making and changes in the internal state. These representations are consistent across neural networks and animals, enabling the robust comparison of cognitive computations. Extensive benchmarking demonstrates state-of-the-art within- and across-animal decoding accuracy of MARBLE compared to current representation learning approaches, with minimal user input. Our results suggest that a manifold structure provides a powerful inductive bias to develop decoding algorithms and assimilate data across experiments.
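The first step of the approach, describing on-manifold dynamics as local flow fields, reduces in the simplest case to pairing sampled states with their finite-difference velocities (a simplified sketch under assumed toy dynamics; the actual method then embeds these local fields with geometric deep learning).

```python
import numpy as np

rng = np.random.default_rng(5)

def vector_field(x):
    """Toy 2D spiral dynamics standing in for latent neural dynamics."""
    A = np.array([[-0.1, -1.0], [1.0, -0.1]])
    return x @ A.T

# Sample short trajectories of the system from random initial conditions.
dt, T = 0.05, 100
trajs = []
for _ in range(20):
    x = rng.standard_normal(2)
    traj = [x]
    for _ in range(T):
        x = x + dt * vector_field(x)    # Euler step
        traj.append(x)
    trajs.append(np.array(traj))

# Represent the dynamics as a field of local flow vectors anchored at the
# sampled states: velocity = finite difference along each trajectory.
points = np.concatenate([tr[:-1] for tr in trajs])
flows = np.concatenate([np.diff(tr, axis=0) for tr in trajs]) / dt

err = np.linalg.norm(flows - vector_field(points), axis=1).mean()
print("mean flow-field estimation error:", err)
```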
Affiliation(s)
- Adam Gosztolai
- Institute of Artificial Intelligence, Medical University of Vienna, Vienna, Austria.
- Robert L Peach
- Department of Neurology, University Hospital Würzburg, Würzburg, Germany
- Department of Brain Sciences, Imperial College London, London, UK
- Alexis Arnaudon
- Blue Brain Project, EPFL, Campus Biotech, Geneva, Switzerland
24
Rudroff T. Decoding thoughts, encoding ethics: A narrative review of the BCI-AI revolution. Brain Res 2025; 1850:149423. [PMID: 39719191 DOI: 10.1016/j.brainres.2024.149423] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2024] [Revised: 12/19/2024] [Accepted: 12/20/2024] [Indexed: 12/26/2024]
Abstract
OBJECTIVES: This narrative review aims to analyze mechanisms underlying Brain-Computer Interface (BCI) and Artificial Intelligence (AI) integration, evaluate recent advances in signal acquisition and processing techniques, and assess AI-enhanced neural decoding strategies. The review identifies critical research gaps and examines emerging solutions across multiple domains of BCI-AI integration. METHODS: A narrative review was conducted using major biomedical and scientific databases including PubMed, Web of Science, IEEE Xplore, and Scopus (2014-2024). Literature was analyzed to identify key developments in BCI-AI integration, with particular emphasis on recent advances (2019-2024). The review process involved thematic analysis of selected publications focusing on practical applications, technical innovations, and emerging challenges. RESULTS: Recent advances demonstrate significant improvements in BCI-AI systems: 1) high-density electrode arrays achieve spatial resolution up to 5 mm, with stable recordings over 15 months; 2) deep learning decoders show a 40% improvement in information transfer rates compared to traditional methods; 3) adaptive algorithms maintain >90% success rates in motor control tasks over 200-day periods without recalibration; 4) novel closed-loop optimization frameworks reduce user training time by 55% while improving accuracy. Latest developments in flexible neural interfaces and self-supervised learning approaches show promise in addressing long-term stability and cross-user generalization challenges. CONCLUSIONS: BCI-AI integration shows remarkable progress in improving signal quality, decoding accuracy, and user adaptation. While challenges remain in long-term stability and user training, advances in adaptive algorithms and feedback mechanisms demonstrate the technology's growing viability for clinical applications. Recent innovations in electrode technology, AI architectures, and closed-loop systems, combined with emerging standardization frameworks, suggest accelerating progress toward widespread therapeutic use and human augmentation applications.
Affiliation(s)
- Thorsten Rudroff
- Turku PET Centre, University of Turku and Turku University Hospital, Turku, Finland.
25
Israely S, Ninou H, Rajchert O, Elmaleh L, Harel R, Mawase F, Kadmon J, Prut Y. Cerebellar output shapes cortical preparatory activity during motor adaptation. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2025:2024.07.12.603354. [PMID: 40060411 PMCID: PMC11888169 DOI: 10.1101/2024.07.12.603354] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 03/15/2025]
Abstract
The cerebellum plays a key role in motor adaptation by driving trial-to-trial recalibration of movements based on previous errors. In primates, cortical correlates of adaptation are already encoded in the pre-movement motor plan, but these early cortical signals could be driven by cerebellar-to-cortical information flow or could evolve independently through intracortical mechanisms. To address this question, we trained female macaque monkeys to reach against a viscous force field (FF) while blocking cerebellar outflow. The cerebellar block led to impaired FF adaptation and a compensatory, re-aiming-like shift in motor cortical preparatory activity. In null-field conditions, the cerebellar block altered neural preparatory activity by increasing task-representation dimensionality and impeding generalization. A computational model indicated that low-dimensional (cerebellar-like) feedback is sufficient to replicate these findings. We conclude that cerebellar signals carry task-structure information that constrains the dimensionality of the cortical preparatory manifold and promotes generalization. In the absence of these signals, cortical mechanisms are harnessed to partially restore adaptation.
Affiliation(s)
- Sharon Israely
- The Edmond and Lily Safra Center For Brain Sciences, The Hebrew University, Jerusalem, 91904-01, Israel
- Hugo Ninou
- The Edmond and Lily Safra Center For Brain Sciences, The Hebrew University, Jerusalem, 91904-01, Israel
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Département D'Etudes Cognitives, Ecole Normale Supérieure, PSL University, Paris, France
- Laboratoire de Physique de l'Ecole Normale Supérieure, Ecole Normale Supérieure, PSL University, Paris, France
- Ori Rajchert
- Faculty of Biomedical Engineering, Technion - Israel Institute of Technology, Haifa, Israel
- Lee Elmaleh
- The Edmond and Lily Safra Center For Brain Sciences, The Hebrew University, Jerusalem, 91904-01, Israel
- Ran Harel
- Department of Neurosurgery, Sheba Medical Center, 5262000 Tel Aviv, Israel
- Firas Mawase
- Faculty of Biomedical Engineering, Technion - Israel Institute of Technology, Haifa, Israel
- Jonathan Kadmon
- The Edmond and Lily Safra Center For Brain Sciences, The Hebrew University, Jerusalem, 91904-01, Israel
- Yifat Prut
- The Edmond and Lily Safra Center For Brain Sciences, The Hebrew University, Jerusalem, 91904-01, Israel
26
Hasnain MA, Birnbaum JE, Ugarte Nunez JL, Hartman EK, Chandrasekaran C, Economo MN. Separating cognitive and motor processes in the behaving mouse. Nat Neurosci 2025; 28:640-653. [PMID: 39905210 DOI: 10.1038/s41593-024-01859-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2023] [Accepted: 11/21/2024] [Indexed: 02/06/2025]
Abstract
The cognitive processes supporting complex animal behavior are closely associated with movements that serve critical functions, such as facial expressions or the active sampling of the environment. These movements are strongly related to neural activity across much of the brain and are often highly correlated with ongoing cognitive processes. A fundamental issue for understanding the neural signatures of cognition and movement is whether cognitive processes are separable from related movements or whether they are driven by common neural mechanisms. Here we demonstrate how the separability of cognitive and motor processes can be assessed and, when separable, how the neural dynamics associated with each component can be isolated. We designed a behavioral task in mice that involves multiple cognitive processes, and we show that dynamics commonly taken to support cognitive processes are strongly contaminated by movements. When cognitive and motor components are isolated using a novel approach for subspace decomposition, we find that they exhibit distinct dynamical trajectories and are encoded by largely separate populations of cells. Accurately isolating dynamics associated with particular cognitive and motor processes will be essential for developing conceptual and computational models of neural circuit function.
Affiliation(s)
- Munib A Hasnain
- Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Center for Neurophotonics, Boston University, Boston, MA, USA
- Jaclyn E Birnbaum
- Center for Neurophotonics, Boston University, Boston, MA, USA
- Graduate Program for Neuroscience, Boston University, Boston, MA, USA
- Emma K Hartman
- Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Chandramouli Chandrasekaran
- Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA
- Department of Neurobiology & Anatomy, Boston University, Boston, MA, USA
- Center for Systems Neuroscience, Boston University, Boston, MA, USA
- Michael N Economo
- Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Center for Neurophotonics, Boston University, Boston, MA, USA
- Center for Systems Neuroscience, Boston University, Boston, MA, USA
27
Versteeg C, McCart JD, Ostrow M, Zoltowski DM, Washington CB, Driscoll L, Codol O, Michaels JA, Linderman SW, Sussillo D, Pandarinath C. Computation-through-Dynamics Benchmark: Simulated datasets and quality metrics for dynamical models of neural activity. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2025:2025.02.07.637062. [PMID: 39975240 PMCID: PMC11839132 DOI: 10.1101/2025.02.07.637062] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/21/2025]
Abstract
A primary goal of systems neuroscience is to discover how ensembles of neurons transform inputs into goal-directed behavior, a process known as neural computation. A powerful framework for understanding neural computation uses neural dynamics - the rules that describe the temporal evolution of neural activity - to explain how goal-directed input-output transformations occur. As dynamical rules are not directly observable, we need computational models that can infer neural dynamics from recorded neural activity. We typically validate such models using synthetic datasets with known ground-truth dynamics, but unfortunately, existing synthetic datasets do not reflect fundamental features of neural computation and are thus poor proxies for neural systems. Further, the field lacks validated metrics for quantifying the accuracy of the dynamics inferred by models. The Computation-through-Dynamics Benchmark (CtDB) fills these critical gaps by providing: 1) synthetic datasets that reflect computational properties of biological neural circuits, 2) interpretable metrics for quantifying model performance, and 3) a standardized pipeline for training and evaluating models with or without known external inputs. In this manuscript, we demonstrate how CtDB can help guide the development, tuning, and troubleshooting of neural dynamics models. In summary, CtDB provides a critical platform for model developers to better understand and characterize neural computation through the lens of dynamics.
Affiliation(s)
- Christopher Versteeg
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Jonathan D McCart
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Center for Machine Learning, Georgia Institute of Technology, Atlanta, GA, USA
- Mitchell Ostrow
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- David M Zoltowski
- Wu Tsai Neurosciences Institute, Stanford, CA, USA
- Department of Statistics, Stanford University, Stanford, CA, USA
- Clayton B Washington
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Laura Driscoll
- Allen Institute for Neural Dynamics, Seattle, WA, USA
- Department of Neurobiology & Biophysics, University of Washington, Seattle, WA, USA
- Olivier Codol
- Département de Neurosciences, Faculté de Médecine, Université de Montréal, Montréal, Canada
- MILA, Quebec Artificial Intelligence Institute, Montréal, Canada
- Jonathan A Michaels
- School of Kinesiology and Health Science, Faculty of Health, York University, Toronto, ON, Canada
- Scott W Linderman
- Wu Tsai Neurosciences Institute, Stanford, CA, USA
- Department of Statistics, Stanford University, Stanford, CA, USA
- David Sussillo
- Wu Tsai Neurosciences Institute, Stanford, CA, USA
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Chethan Pandarinath
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Department of Neurosurgery, Emory University, Atlanta, GA, USA
28
Park J, Polidoro P, Fortunato C, Arnold J, Mensh B, Gallego JA, Dudman JT. Conjoint specification of action by neocortex and striatum. Neuron 2025; 113:620-636.e6. [PMID: 39837325 DOI: 10.1016/j.neuron.2024.12.024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2023] [Revised: 09/09/2024] [Accepted: 12/19/2024] [Indexed: 01/23/2025]
Abstract
The interplay between two major forebrain structures-cortex and subcortical striatum-is critical for flexible, goal-directed action. Traditionally, it has been proposed that striatum is critical for selecting what type of action is initiated, while the primary motor cortex is involved in specifying the continuous parameters of an upcoming/ongoing movement. Recent data indicate that striatum may also be involved in specification. These alternatives have been difficult to reconcile because comparing very distinct actions, as is often done, makes essentially indistinguishable predictions. Here, we develop quantitative models to reveal a somewhat paradoxical insight: only comparing neural activity across similar actions makes strongly distinguishing predictions. We thus developed a novel reach-to-pull task in which mice reliably selected between two similar but distinct reach targets and pull forces. Simultaneous cortical and subcortical recordings were uniquely consistent with a model in which cortex and striatum jointly specify continuous parameters governing movement execution.
Affiliation(s)
- Junchol Park
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA
- Peter Polidoro
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA
- Catia Fortunato
- Department of Bioengineering, Imperial College London, London W12 0BZ, UK
- Jon Arnold
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA
- Brett Mensh
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA
- Juan A Gallego
- Department of Bioengineering, Imperial College London, London W12 0BZ, UK
- Joshua T Dudman
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA
29
Biswas S, Emond MR, Philip GS, Jontes JD. Canalization of circuit assembly by δ-protocadherins in the zebrafish optic tectum. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2025:2025.01.29.635523. [PMID: 39975130 PMCID: PMC11838265 DOI: 10.1101/2025.01.29.635523] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 02/21/2025]
Abstract
Neurons are precisely and reproducibly assembled into complex networks during development. How genes collaborate to guide this assembly remains an enduring mystery. In humans, large numbers of genes have been implicated in neurodevelopmental disorders that are characterized by variable and overlapping phenotypes. The complexity of the brain, the large number of genes involved, and the heterogeneity of the disorders make understanding the relationships between genes, development, and neural function challenging. Waddington suggested the concept of canalization to describe the role of genes in shaping developmental trajectories that lead to precise outcomes [1]. Here, we show that members of the δ-protocadherin family of homophilic adhesion molecules, Protocadherin-19 and Protocadherin-17, contribute to developmental canalization of visual circuit assembly in the zebrafish. We provided oriented visual stimuli to zebrafish larvae and performed in vivo 2-photon calcium imaging in the optic tectum. The latent dynamics resulting from the population activity were confined to a conserved manifold. Among different wild type larvae, these dynamics were remarkably similar, allowing quantitative comparisons within and among genotypes. In both Protocadherin-19 and Protocadherin-17 mutants, the latent dynamics diverged from wild type. Importantly, these deviations could be averaged away, suggesting that the loss of these adhesion molecules leads to stochastic phenotypic variability, introducing disruptions of circuit organization that vary among individual mutants. These results provide a specific, quantitative example of canalization in the development of a vertebrate neural circuit, and suggest a framework for understanding the observed variability in complex brain disorders.
Affiliation(s)
- Sayantanee Biswas
- Department of Biological Chemistry and Pharmacology, Ohio State University Wexner College of Medicine, Columbus, OH 43210
- Michelle R. Emond
- Department of Biological Chemistry and Pharmacology, Ohio State University Wexner College of Medicine, Columbus, OH 43210
- Grace S. Philip
- Department of Biological Chemistry and Pharmacology, Ohio State University Wexner College of Medicine, Columbus, OH 43210
- James D. Jontes
- Department of Biological Chemistry and Pharmacology, Ohio State University Wexner College of Medicine, Columbus, OH 43210
30
Barzon G, Busiello DM, Nicoletti G. Excitation-Inhibition Balance Controls Information Encoding in Neural Populations. PHYSICAL REVIEW LETTERS 2025; 134:068403. [PMID: 40021162 DOI: 10.1103/physrevlett.134.068403] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/05/2024] [Revised: 10/17/2024] [Accepted: 01/27/2025] [Indexed: 03/03/2025]
Abstract
Understanding how the complex connectivity structure of the brain shapes its information-processing capabilities is a long-standing question. By focusing on a paradigmatic architecture, we study how the neural activity of excitatory and inhibitory populations encodes information on external signals. We show that at long times information is maximized at the edge of stability, where inhibition balances excitation, in both linear and nonlinear regimes. In the presence of multiple external signals, this maximum corresponds to the entropy of the input dynamics. By analyzing the case of a prolonged stimulus, we find that stronger inhibition is instead needed to maximize the instantaneous sensitivity, revealing an intrinsic tradeoff between short-time responses and long-time accuracy. In agreement with recent experimental findings, our results pave the way for a deeper information-theoretic understanding of how the balance between excitation and inhibition controls optimal information processing in neural populations.
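The "edge of stability" in this abstract can be illustrated with a minimal two-population linear rate model. The parameterization below (excitatory weight w, inhibitory weight g, unit time constant) is an assumed toy example, not the authors' model: the leading eigenvalue of the linearized dynamics crosses zero exactly when inhibition balances excitation.

```python
import numpy as np

# Toy 2-population rate model (assumed parameterization for illustration):
#   dr/dt = -r + W r + s(t),  with  W = [[w, -g],
#                                        [w, -g]]
# The fixed point is stable iff every eigenvalue of A = -I + W has a
# negative real part; the "edge of stability" is where the leading
# eigenvalue touches zero, here at g = w - 1.
def leading_eigenvalue(w, g):
    A = -np.eye(2) + np.array([[w, -g], [w, -g]])
    return max(np.linalg.eigvals(A).real)

w = 2.0  # recurrent excitation
for g in [0.5, 1.0, 1.5]:
    lam = leading_eigenvalue(w, g)
    print(f"g={g:.1f}  leading eigenvalue={lam:+.3f}  stable={lam < 0}")
```

Near the edge (g slightly above w - 1 here), the steady-state response to a constant input scales like 1/|leading eigenvalue|, which is one intuition for why long-time information about inputs peaks there.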
Affiliation(s)
- Giacomo Barzon
- University of Padova, Padova Neuroscience Center, Padova, Italy
- Daniel Maria Busiello
- Max Planck Institute for the Physics of Complex Systems, Dresden, Germany
- University of Padova, Department of Physics and Astronomy "G. Galilei", Padova, Italy
- Giorgio Nicoletti
- École Polytechnique Fédérale de Lausanne, ECHO Laboratory, Lausanne, Switzerland
- The Abdus Salam International Center for Theoretical Physics (ICTP), Quantitative Life Sciences section, Trieste, Italy
31
Mitchell KJ, Cheney N. The Genomic Code: the genome instantiates a generative model of the organism. Trends Genet 2025:S0168-9525(25)00008-3. [PMID: 39934051 DOI: 10.1016/j.tig.2025.01.008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2024] [Revised: 01/15/2025] [Accepted: 01/17/2025] [Indexed: 02/13/2025]
Abstract
How does the genome encode the form of the organism? What is the nature of this genomic code? Inspired by recent work in machine learning and neuroscience, we propose that the genome encodes a generative model of the organism. In this scheme, by analogy with variational autoencoders (VAEs), the genome comprises a connectionist network, embodying a compressed space of 'latent variables', with weights that get encoded by the learning algorithm of evolution and decoded through the processes of development. The generative model analogy accounts for the complex, distributed genetic architecture of most traits and the emergent robustness and evolvability of developmental processes, while also offering a conception that lends itself to formalization.
Affiliation(s)
- Kevin J Mitchell
- Institutes of Genetics and Neuroscience, Trinity College Dublin, Dublin, Ireland
- Nick Cheney
- Department of Computer Science, University of Vermont, Burlington, VT, USA
32
Yoshida K, Toyoizumi T. A biological model of nonlinear dimensionality reduction. SCIENCE ADVANCES 2025; 11:eadp9048. [PMID: 39908371 PMCID: PMC11801247 DOI: 10.1126/sciadv.adp9048] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/18/2024] [Accepted: 01/06/2025] [Indexed: 02/07/2025]
Abstract
Obtaining appropriate low-dimensional representations from high-dimensional sensory inputs in an unsupervised manner is essential for straightforward downstream processing. Although nonlinear dimensionality reduction methods such as t-distributed stochastic neighbor embedding (t-SNE) have been developed, their implementation in simple biological circuits remains unclear. Here, we develop a biologically plausible dimensionality reduction algorithm compatible with t-SNE, which uses a simple three-layer feedforward network mimicking the Drosophila olfactory circuit. The proposed learning rule, described as three-factor Hebbian plasticity, is effective for datasets such as entangled rings and MNIST, with performance comparable to t-SNE. By analyzing multiple experimental datasets from previous studies, we further show that the algorithm could be operating in the olfactory circuits of Drosophila. Lastly, we suggest that the algorithm is also beneficial for association learning between inputs and rewards, allowing the generalization of these associations to other inputs not yet associated with rewards.
Affiliation(s)
- Kensuke Yoshida
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
- Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
- Taro Toyoizumi
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
- Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
33
Perkins SM, Amematsro EA, Cunningham J, Wang Q, Churchland MM. An emerging view of neural geometry in motor cortex supports high-performance decoding. eLife 2025; 12:RP89421. [PMID: 39898793 PMCID: PMC11790250 DOI: 10.7554/elife.89421] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2025] Open
Abstract
Decoders for brain-computer interfaces (BCIs) assume constraints on neural activity, chosen to reflect scientific beliefs while yielding tractable computations. Recent scientific advances suggest that the true constraints on neural activity, especially its geometry, may be quite different from those assumed by most decoders. We designed a decoder, MINT, to embrace statistical constraints that are potentially more appropriate. If those constraints are accurate, MINT should outperform standard methods that explicitly make different assumptions. Additionally, MINT should be competitive with expressive machine learning methods that can implicitly learn constraints from data. MINT performed well across tasks, suggesting its assumptions are well-matched to the data. MINT outperformed other interpretable methods in every comparison we made. MINT outperformed expressive machine learning methods in 37 of 42 comparisons. MINT's computations are simple, scale favorably with increasing neuron counts, and yield interpretable quantities such as data likelihoods. MINT's performance and simplicity suggest it may be a strong candidate for many BCI applications.
Affiliation(s)
- Sean M Perkins
- Department of Biomedical Engineering, Columbia University, New York, United States
- Zuckerman Institute, Columbia University, New York, United States
- Elom A Amematsro
- Zuckerman Institute, Columbia University, New York, United States
- Department of Neuroscience, Columbia University Medical Center, New York, United States
- John Cunningham
- Zuckerman Institute, Columbia University, New York, United States
- Department of Statistics, Columbia University, New York, United States
- Center for Theoretical Neuroscience, Columbia University Medical Center, New York, United States
- Grossman Center for the Statistics of Mind, Columbia University, New York, United States
- Qi Wang
- Department of Biomedical Engineering, Columbia University, New York, United States
- Mark M Churchland
- Zuckerman Institute, Columbia University, New York, United States
- Department of Neuroscience, Columbia University Medical Center, New York, United States
- Grossman Center for the Statistics of Mind, Columbia University, New York, United States
- Kavli Institute for Brain Science, Columbia University Medical Center, New York, United States
34
Guo H, Kuang S, Gail A. Sensorimotor environment but not task rule reconfigures population dynamics in rhesus monkey posterior parietal cortex. Nat Commun 2025; 16:1116. [PMID: 39900579 PMCID: PMC11791165 DOI: 10.1038/s41467-025-56360-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2024] [Accepted: 01/15/2025] [Indexed: 02/05/2025] Open
Abstract
Primates excel at mapping sensory inputs flexibly onto motor outcomes. We asked whether the neural dynamics that support context-sensitive sensorimotor mapping generalize or differ across behavioral contexts that demand such flexibility. We compared reaching under mirror-reversed vision, a case of adaptation to a modified sensorimotor environment (SE), with anti-reaching, a case of applying an abstract task rule (TR). While neural dynamics in monkey posterior parietal cortex show shifted initial states and non-aligned low-dimensional neural subspaces in the SE task, remapping is achieved within overlapping subspaces in the TR task. A recurrent neural network model demonstrates that output constraints mimicking the SE and TR tasks are sufficient to generate these two fundamentally different neural computational dynamics. We conclude that sensorimotor remapping to implement an abstract task rule happens within the existing repertoire of neural dynamics, while compensation for perturbed sensory feedback requires exploration of independent neural dynamics in parietal cortex.
Affiliation(s)
- Hao Guo
- German Primate Center, Göttingen, Germany
- Shenbing Kuang
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Alexander Gail
- German Primate Center, Göttingen, Germany
- Faculty of Biology and Psychology, University of Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
35
Ma X, Rizzoglio F, Bodkin KL, Miller LE. Unsupervised, piecewise linear decoding enables an accurate prediction of muscle activity in a multi-task brain computer interface. J Neural Eng 2025; 22:016019. [PMID: 39823647 PMCID: PMC11775726 DOI: 10.1088/1741-2552/adab93] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2024] [Revised: 12/31/2024] [Accepted: 01/17/2025] [Indexed: 01/19/2025]
Abstract
Objective. Creating an intracortical brain computer interface (iBCI) capable of seamless transitions between tasks and contexts would greatly enhance user experience. However, the nonlinearity in neural activity presents challenges to computing a global iBCI decoder. We aimed to develop a method that differs from a globally optimized decoder to address this issue.
Approach. We devised an unsupervised approach that relies on the structure of a low-dimensional neural manifold to implement a piecewise linear decoder. We created a distinctive dataset in which monkeys performed a diverse set of tasks, some trained, others innate, while we recorded neural signals from the motor cortex (M1) and electromyographs (EMGs) from upper limb muscles. We used both linear and nonlinear dimensionality reduction techniques to discover neural manifolds and applied unsupervised algorithms to identify clusters within those spaces. Finally, we fit a linear decoder of EMG for each cluster. A specific decoder was activated corresponding to the cluster to which each new neural data point belonged.
Main results. We found clusters in the neural manifolds corresponding to the different tasks or task sub-phases. The performance of piecewise decoding improved as the number of clusters increased and plateaued gradually. With only two clusters it already outperformed a global linear decoder, and unexpectedly, with 10-12 clusters it outperformed even a global recurrent neural network decoder.
Significance. This study introduced a computationally lightweight solution for creating iBCI decoders that can function effectively across a broad range of tasks. EMG decoding is particularly challenging, as muscle activity is used, under varying contexts, to control interaction forces and limb stiffness, as well as motion. The results suggest that a piecewise linear decoder can provide a good approximation to the nonlinearity between neural activity and motor outputs, a result of our increased understanding of the structure of neural manifolds in motor cortex.
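The approach described above (cluster the low-dimensional neural manifold without labels, fit one linear EMG decoder per cluster, route each new state to its cluster's decoder) can be sketched in a few lines. Everything below is a synthetic stand-in, not the authors' code or data: two hypothetical task "contexts" occupy different manifold regions, each with its own linear neural-to-EMG mapping.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: two task contexts in different regions of a 2D manifold,
# each with a different linear neural-state -> EMG mapping.
n = 200
z1 = rng.normal(-3.0, 1.0, size=(n, 2))
z2 = rng.normal(+3.0, 1.0, size=(n, 2))
Z = np.vstack([z1, z2])                        # low-dimensional neural states
W1, W2 = np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])
emg = np.vstack([z1 @ W1.T, z2 @ W2.T])        # simulated muscle activity

# 1) Unsupervised clustering of the manifold (plain 2-means).
centers = Z[[np.argmin(Z[:, 0]), np.argmax(Z[:, 0])]]  # spread-out init
for _ in range(20):
    labels = np.argmin(((Z[:, None, :] - centers) ** 2).sum(-1), axis=1)
    centers = np.stack([Z[labels == j].mean(axis=0) for j in range(2)])

# 2) Fit one least-squares linear decoder per cluster.
decoders = [np.linalg.lstsq(Z[labels == j], emg[labels == j], rcond=None)[0]
            for j in range(2)]

# 3) Piecewise decoding: each new state uses the nearest cluster's decoder.
def decode(z):
    j = int(np.argmin(((z - centers) ** 2).sum(-1)))
    return z @ decoders[j]

mse = np.mean([(decode(Z[i]) - emg[i]) ** 2 for i in range(len(Z))])
print(f"piecewise decoder MSE: {mse:.2e}")
```

A single global linear fit cannot capture both mappings at once, while the two-piece decoder can, which mirrors the paper's finding that even two clusters beat a global linear decoder.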
Affiliation(s)
- Xuan Ma
- Department of Neuroscience, Northwestern University, Chicago, IL, United States of America
- Fabio Rizzoglio
- Department of Neuroscience, Northwestern University, Chicago, IL, United States of America
- Kevin L Bodkin
- Department of Neurobiology, Northwestern University, Evanston, IL, United States of America
- Lee E Miller
- Department of Neuroscience, Northwestern University, Chicago, IL, United States of America
- Department of Biomedical Engineering, Northwestern University, Evanston, IL, United States of America
- Shirley Ryan AbilityLab, Chicago, IL, United States of America
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL, United States of America
36
Menéndez JA, Hennig JA, Golub MD, Oby ER, Sadtler PT, Batista AP, Chase SM, Yu BM, Latham PE. A theory of brain-computer interface learning via low-dimensional control. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2025:2024.04.18.589952. [PMID: 38712193 PMCID: PMC11071278 DOI: 10.1101/2024.04.18.589952] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/08/2024]
Abstract
A remarkable demonstration of the flexibility of mammalian motor systems is primates' ability to learn to control brain-computer interfaces (BCIs). This constitutes a completely novel motor behavior, yet primates are capable of learning to control BCIs under a wide range of conditions. BCIs with carefully calibrated decoders, for example, can be learned with only minutes to hours of practice. With a few weeks of practice, even BCIs with randomly constructed decoders can be learned. What are the biological substrates of this learning process? Here, we develop a theory based on a re-aiming strategy, whereby learning operates within a low-dimensional subspace of task-relevant inputs driving the local population of recorded neurons. Through comprehensive numerical and formal analysis, we demonstrate that this theory can provide a unifying explanation for disparate phenomena previously reported in three different BCI learning tasks, and we derive a novel experimental prediction that we verify with previously published data. By explicitly modeling the underlying neural circuitry, the theory reveals an interpretation of these phenomena in terms of biological constraints on neural activity.
Affiliation(s)
- J. A. Menéndez
- Gatsby Computational Neuroscience Unit, University College London
- P. E. Latham
- Gatsby Computational Neuroscience Unit, University College London
37
Price MS, Rastegari E, Gupta R, Vo K, Moore TI, Venkatachalam K. Intracellular Lactate Dynamics in Drosophila Glutamatergic Neurons. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2025:2024.02.26.582095. [PMID: 38464270 PMCID: PMC10925175 DOI: 10.1101/2024.02.26.582095] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/12/2024]
Abstract
Rates of lactate production and consumption reflect the metabolic state of many cell types, including neurons. Here, we investigate the effects of nutrient deprivation on lactate dynamics in Drosophila glutamatergic neurons by leveraging the limiting effects of the diffusion barrier surrounding cells in culture. We found that neurons constitutively consume lactate when availability of trehalose, the glucose disaccharide preferred by insects, is limited by the diffusion barrier. Acute mechanical disruption of the barrier reduced this reliance on lactate. Through kinetic modeling and experimental validation, we demonstrate that neuronal lactate consumption rates correlate inversely with their mitochondrial density. Further, we found that lactate levels in neurons exhibited temporal correlations that allowed prediction of cytosolic lactate dynamics after the disruption of the diffusion barrier from pre-perturbation lactate fluctuations. Collectively, our findings reveal the influence of diffusion barriers on neuronal metabolic preferences, and demonstrate the existence of temporal correlations between lactate dynamics under conditions of nutrient deprivation and those evoked by the subsequent restoration of nutrient availability.
Affiliation(s)
- Matthew S. Price
- Department of Integrative Biology and Pharmacology, McGovern Medical School at the University of Texas Health Sciences Center (UTHealth), Houston, TX, USA
- Neuroscience Graduate Program, The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences
- Elham Rastegari
- Department of Integrative Biology and Pharmacology, McGovern Medical School at the University of Texas Health Sciences Center (UTHealth), Houston, TX, USA
- Richa Gupta
- Department of Integrative Biology and Pharmacology, McGovern Medical School at the University of Texas Health Sciences Center (UTHealth), Houston, TX, USA
- Katie Vo
- Department of Integrative Biology and Pharmacology, McGovern Medical School at the University of Texas Health Sciences Center (UTHealth), Houston, TX, USA
- Travis I. Moore
- Department of Integrative Biology and Pharmacology, McGovern Medical School at the University of Texas Health Sciences Center (UTHealth), Houston, TX, USA
- Molecular and Translational Biology Graduate Program, The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences
- Kartik Venkatachalam
- Department of Integrative Biology and Pharmacology, McGovern Medical School at the University of Texas Health Sciences Center (UTHealth), Houston, TX, USA
- Neuroscience Graduate Program, The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences
- Molecular and Translational Biology Graduate Program, The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences
38
Zheng J, Meister M. The unbearable slowness of being: Why do we live at 10 bits/s? Neuron 2025; 113:192-204. [PMID: 39694032] [PMCID: PMC11758279] [DOI: 10.1016/j.neuron.2024.11.008]
Abstract
This article is about the neural conundrum behind the slowness of human behavior. The information throughput of a human being is about 10 bits/s. In comparison, our sensory systems gather data at ∼10⁹ bits/s. The stark contrast between these numbers remains unexplained and touches on fundamental aspects of brain function: what neural substrate sets this speed limit on the pace of our existence? Why does the brain need billions of neurons to process 10 bits/s? Why can we only think about one thing at a time? The brain seems to operate in two distinct modes: the "outer" brain handles fast high-dimensional sensory and motor signals, whereas the "inner" brain processes the reduced few bits needed to control behavior. Plausible explanations exist for the large neuron numbers in the outer brain, but not for the inner brain, and we propose new research directions to remedy this.
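The headline figure can be sanity-checked with back-of-the-envelope arithmetic (the typing numbers below are illustrative choices, not taken from the paper): a fast typist at 120 words per minute, about 5 characters per word, producing English text at roughly 1 bit of entropy per character (Shannon's classic estimate), lands at about 10 bits/s.

```python
words_per_min = 120     # fast typist (illustrative)
chars_per_word = 5      # conventional word length used in WPM measurements
bits_per_char = 1.0     # Shannon's entropy estimate for English text

chars_per_sec = words_per_min * chars_per_word / 60
throughput = chars_per_sec * bits_per_char   # information rate in bits/s
```

Similar arithmetic for speech, video gaming, or memory-sport tasks keeps landing in the same ~10 bits/s range, which is the puzzle the article addresses.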
Affiliation(s)
- Jieyu Zheng
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA.
- Markus Meister
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA.
39
Ruffini G, Castaldo F, Vohryzek J. Structured Dynamics in the Algorithmic Agent. Entropy (Basel) 2025; 27:90. [PMID: 39851710] [PMCID: PMC11765005] [DOI: 10.3390/e27010090]
Abstract
In the Kolmogorov Theory of Consciousness, algorithmic agents utilize inferred compressive models to track coarse-grained data produced by simplified world models, capturing regularities that structure subjective experience and guide action planning. Here, we study the dynamical aspects of this framework by examining how the requirement of tracking natural data drives the structural and dynamical properties of the agent. We first formalize the notion of a generative model using the language of symmetry from group theory, specifically employing Lie pseudogroups to describe the continuous transformations that characterize invariance in natural data. Then, adopting a generic neural network as a proxy for the agent dynamical system and drawing parallels to Noether's theorem in physics, we demonstrate that data tracking forces the agent to mirror the symmetry properties of the generative world model. This dual constraint on the agent's constitutive parameters and dynamical repertoire enforces a hierarchical organization consistent with the manifold hypothesis in the neural network. Our findings bridge perspectives from algorithmic information theory (Kolmogorov complexity, compressive modeling), symmetry (group theory), and dynamics (conservation laws, reduced manifolds), offering insights into the neural correlates of agenthood and structured experience in natural systems, as well as the design of artificial intelligence and computational models of the brain.
Affiliation(s)
- Giulio Ruffini
- Brain Modeling Department, Neuroelectrics, 08035 Barcelona, Spain;
- Jakub Vohryzek
- Computational Neuroscience Group, Universitat Pompeu Fabra, 08005 Barcelona, Spain;
- Centre for Eudaimonia and Human Flourishing, Linacre College, Oxford OX3 9BX, UK
40
Egas Santander D, Pokorny C, Ecker A, Lazovskis J, Santoro M, Smith JP, Hess K, Levi R, Reimann MW. Heterogeneous and higher-order cortical connectivity undergirds efficient, robust, and reliable neural codes. iScience 2025; 28:111585. [PMID: 39845419] [PMCID: PMC11751574] [DOI: 10.1016/j.isci.2024.111585]
Abstract
We hypothesized that the heterogeneous architecture of biological neural networks provides a substrate to regulate the well-known tradeoff between robustness and efficiency, thereby allowing different subpopulations of the same network to optimize for different objectives. To distinguish between subpopulations, we developed a metric based on the mathematical theory of simplicial complexes that captures the complexity of their connectivity by contrasting its higher-order structure to a random control and confirmed its relevance in several openly available connectomes. Using a biologically detailed cortical model and an electron microscopic dataset, we showed that subpopulations with low simplicial complexity exhibit efficient activity. Conversely, subpopulations of high simplicial complexity play a supporting role in boosting the reliability of the network as a whole, softening the robustness-efficiency tradeoff. Crucially, we found that both types of subpopulations can and do coexist within a single connectome in biological neural networks, due to the heterogeneity of their connectivity.
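The higher-order structure being quantified can be illustrated with its simplest building block, the directed 2-simplex: a triple of nodes with edges a→b, a→c, and b→c. This toy count (on an invented four-node graph) is far simpler than the authors' simplicial-complexity metric, which contrasts such counts against random controls, but it shows the basic object involved.

```python
from itertools import permutations

def count_2_simplices(edges, nodes):
    """Count directed 2-simplices: ordered triples (a, b, c) with
    edges a->b, a->c, and b->c all present."""
    E = set(edges)
    return sum(1 for a, b, c in permutations(nodes, 3)
               if (a, b) in E and (a, c) in E and (b, c) in E)

# Tiny hypothetical graph: one feedforward triangle plus a dangling edge.
nodes = [0, 1, 2, 3]
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n_simplices = count_2_simplices(edges, nodes)
```

In a connectome, subpopulations whose simplex counts far exceed those of a density-matched random graph are the "high simplicial complexity" populations the abstract associates with boosting network reliability.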
Affiliation(s)
- Daniela Egas Santander
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, 1202 Geneva, Switzerland
- Christoph Pokorny
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, 1202 Geneva, Switzerland
- András Ecker
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, 1202 Geneva, Switzerland
- Jānis Lazovskis
- Riga Business School, Riga Technical University, 1010 Riga, Latvia
- Matteo Santoro
- Scuola Internazionale Superiore di Studi Avanzati (SISSA), 34136 Trieste, Italy
- Jason P. Smith
- Department of Mathematics, Nottingham Trent University, Nottingham NG1 4FQ, UK
- Kathryn Hess
- UPHESS, BMI, École Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland
- Ran Levi
- Department of Mathematics, University of Aberdeen, Aberdeen AB24 3UE, UK
- Michael W. Reimann
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, 1202 Geneva, Switzerland
41
Meissner-Bernard C, Zenke F, Friedrich RW. Geometry and dynamics of representations in a precisely balanced memory network related to olfactory cortex. eLife 2025; 13:RP96303. [PMID: 39804831] [PMCID: PMC11733691] [DOI: 10.7554/elife.96303]
Abstract
Biological memory networks are thought to store information by experience-dependent changes in the synaptic connectivity between assemblies of neurons. Recent models suggest that these assemblies contain both excitatory and inhibitory neurons (E/I assemblies), resulting in co-tuning and precise balance of excitation and inhibition. To understand computational consequences of E/I assemblies under biologically realistic constraints we built a spiking network model based on experimental data from telencephalic area Dp of adult zebrafish, a precisely balanced recurrent network homologous to piriform cortex. We found that E/I assemblies stabilized firing rate distributions compared to networks with excitatory assemblies and global inhibition. Unlike classical memory models, networks with E/I assemblies did not show discrete attractor dynamics. Rather, responses to learned inputs were locally constrained onto manifolds that 'focused' activity into neuronal subspaces. The covariance structure of these manifolds supported pattern classification when information was retrieved from selected neuronal subsets. Networks with E/I assemblies therefore transformed the geometry of neuronal coding space, resulting in continuous representations that reflected both relatedness of inputs and an individual's experience. Such continuous representations enable fast pattern classification, can support continual learning, and may provide a basis for higher-order learning and cognitive computations.
Affiliation(s)
| | - Friedemann Zenke
- Friedrich Miescher Institute for Biomedical ResearchBaselSwitzerland
- University of BaselBaselSwitzerland
| | - Rainer W Friedrich
- Friedrich Miescher Institute for Biomedical ResearchBaselSwitzerland
- University of BaselBaselSwitzerland
| |
42
Kim JH, Daie K, Li N. A combinatorial neural code for long-term motor memory. Nature 2025; 637:663-672. [PMID: 39537930] [PMCID: PMC11735397] [DOI: 10.1038/s41586-024-08193-3]
Abstract
Motor skill repertoire can be stably retained over long periods, but the neural mechanism that underlies stable memory storage remains poorly understood [1-8]. Moreover, it is unknown how existing motor memories are maintained as new motor skills are continuously acquired. Here we tracked neural representation of learned actions throughout a significant portion of the lifespan of a mouse and show that learned actions are stably retained in combination with context, which protects existing memories from erasure during new motor learning. We established a continual learning paradigm in which mice learned to perform directional licking in different task contexts while we tracked motor cortex activity for up to six months using two-photon imaging. Within the same task context, activity driving directional licking was stable over time with little representational drift. When learning new task contexts, new preparatory activity emerged to drive the same licking actions. Learning created parallel new motor memories instead of modifying existing representations. Re-learning to make the same actions in the previous task context re-activated the previous preparatory activity, even months later. Continual learning of new task contexts kept creating new preparatory activity patterns. Context-specific memories, as we observed in the motor system, may provide a solution for stable memory storage throughout continual learning.
Affiliation(s)
- Jae-Hyun Kim
- Department of Neurobiology, Duke University, Durham, NC, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Kayvon Daie
- Allen Institute for Neural Dynamics, Seattle, WA, USA
- Nuo Li
- Department of Neurobiology, Duke University, Durham, NC, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
43
Bardella G, Franchini S, Pani P, Ferraina S. Lattice physics approaches for neural networks. iScience 2024; 27:111390. [PMID: 39679297] [PMCID: PMC11638618] [DOI: 10.1016/j.isci.2024.111390]
Abstract
Modern neuroscience has evolved into a frontier field that draws on numerous disciplines, resulting in the flourishing of novel conceptual frames primarily inspired by physics and complex systems science. Contributing in this direction, we recently introduced a mathematical framework to describe the spatiotemporal interactions of systems of neurons using lattice field theory, the reference paradigm for theoretical particle physics. In this note, we provide a concise summary of the basics of the theory, aiming to be intuitive to the interdisciplinary neuroscience community. We contextualize our methods, illustrating how to readily connect the parameters of our formulation to experimental variables using well-known renormalization procedures. This synopsis yields the key concepts needed to describe neural networks using lattice physics. Such classes of methods are attention-worthy in an era of blistering improvements in numerical computations, as they can facilitate relating the observation of neural activity to generative models underpinned by physical principles.
Affiliation(s)
- Giampiero Bardella
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
- Simone Franchini
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
- Pierpaolo Pani
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
- Stefano Ferraina
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
44
Agudelo-Toro A, Michaels JA, Sheng WA, Scherberger H. Accurate neural control of a hand prosthesis by posture-related activity in the primate grasping circuit. Neuron 2024; 112:4115-4129.e8. [PMID: 39419024] [DOI: 10.1016/j.neuron.2024.09.018]
Abstract
Brain-computer interfaces (BCIs) have the potential to restore hand movement for people with paralysis, but current devices still lack the fine control required to interact with objects of daily living. Following our understanding of cortical activity during arm reaches, hand BCI studies have focused primarily on velocity control. However, mounting evidence suggests that posture, and not velocity, dominates in hand-related areas. To explore whether this signal can causally control a prosthesis, we developed a BCI training paradigm centered on the reproduction of posture transitions. Monkeys trained with this protocol were able to control a multidimensional hand prosthesis with high accuracy, including execution of the very intricate precision grip. Analysis revealed that the posture signal in the target grasping areas was the main contributor to control. We present, for the first time, neural posture control of a multidimensional hand prosthesis, opening the door for future interfaces to leverage this additional information channel.
Affiliation(s)
- Andres Agudelo-Toro
- Neurobiology Laboratory, Deutsches Primatenzentrum GmbH, Göttingen 37077, Germany.
- Jonathan A Michaels
- Neurobiology Laboratory, Deutsches Primatenzentrum GmbH, Göttingen 37077, Germany; School of Kinesiology and Health Science, Faculty of Health, York University, Toronto, ON M3J 1P3, Canada
- Wei-An Sheng
- Neurobiology Laboratory, Deutsches Primatenzentrum GmbH, Göttingen 37077, Germany; Institute of Biomedical Sciences, Academia Sinica, Taipei 115, Taiwan
- Hansjörg Scherberger
- Neurobiology Laboratory, Deutsches Primatenzentrum GmbH, Göttingen 37077, Germany; Faculty of Biology and Psychology, University of Göttingen, Göttingen 37073, Germany
45
Grosse-Wentrup M, Kumar A, Meunier A, Zimmer M. Neuro-cognitive multilevel causal modeling: A framework that bridges the explanatory gap between neuronal activity and cognition. PLoS Comput Biol 2024; 20:e1012674. [PMID: 39680605] [PMCID: PMC11717354] [DOI: 10.1371/journal.pcbi.1012674]
Abstract
Explaining how neuronal activity gives rise to cognition arguably remains the most significant challenge in cognitive neuroscience. We introduce neuro-cognitive multilevel causal modeling (NC-MCM), a framework that bridges the explanatory gap between neuronal activity and cognition by construing cognitive states as (behaviorally and dynamically) causally consistent abstractions of neuronal states. Multilevel causal modeling allows us to interchangeably reason about the neuronal and cognitive causes of behavior while maintaining a physicalist (in contrast to a strong dualist) position. We introduce an algorithm for learning cognitive-level causal models from neuronal activation patterns and demonstrate its ability to learn cognitive states of the nematode C. elegans from calcium imaging data. We show that the cognitive-level model of the NC-MCM framework provides a concise representation of the neuronal manifold of C. elegans and its relation to behavior as a graph, which, in contrast to other neuronal manifold learning algorithms, supports causal reasoning. We conclude the article by arguing that the ability of the NC-MCM framework to learn causally interpretable abstractions of neuronal dynamics and their relation to behavior in a purely data-driven fashion is essential for understanding biological systems whose complexity prohibits the development of hand-crafted computational models.
Affiliation(s)
- Moritz Grosse-Wentrup
- Research Group Neuroinformatics, Faculty of Computer Science, University of Vienna, Vienna, Austria
- Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria
- Data Science @ UniVie, University of Vienna, Vienna, Austria
- Akshey Kumar
- Research Group Neuroinformatics, Faculty of Computer Science, University of Vienna, Vienna, Austria
- UniVie Doctoral School Computer Science (DoCS), University of Vienna, Vienna, Austria
- Anja Meunier
- Research Group Neuroinformatics, Faculty of Computer Science, University of Vienna, Vienna, Austria
- UniVie Doctoral School Computer Science (DoCS), University of Vienna, Vienna, Austria
- Manuel Zimmer
- Department of Neuroscience and Developmental Biology, Vienna Biocenter (VBC), University of Vienna, Vienna, Austria
46
Roads BD, Love BC. The Dimensions of dimensionality. Trends Cogn Sci 2024; 28:1118-1131. [PMID: 39153897] [DOI: 10.1016/j.tics.2024.07.005]
Abstract
Cognitive scientists often infer multidimensional representations from data. Whether the data involve text, neuroimaging, neural networks, or human judgments, researchers frequently infer and analyze latent representational spaces (i.e., embeddings). However, the properties of a latent representation (e.g., prediction performance, interpretability, compactness) depend on the inference procedure, which can vary widely across endeavors. For example, dimensions are not always globally interpretable and the dimensionality of different embeddings may not be readily comparable. Moreover, the dichotomy between multidimensional spaces and purportedly richer representational formats, such as graph representations, is misleading. We review what the different notions of dimension in cognitive science imply for how these latent representations should be used and interpreted.
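One concrete formalization of "dimension" in such analyses is the participation ratio of a variance spectrum, PR = (Σλᵢ)² / Σλᵢ², which is one of several notions a review like this contrasts (the choice of this particular measure and the numbers below are illustrative, not the article's):

```python
def participation_ratio(variances):
    """Effective dimensionality of an embedding from its variance spectrum:
    (sum of variances)^2 / (sum of squared variances). Equals the ambient
    dimension when variance is spread evenly, and 1 when it is concentrated
    in a single direction."""
    s1 = sum(variances)
    s2 = sum(v * v for v in variances)
    return s1 * s1 / s2

flat   = participation_ratio([1.0, 1.0, 1.0, 1.0])   # evenly spread variance
peaked = participation_ratio([4.0, 0.0, 0.0, 0.0])   # one dominant direction
```

The same data can thus yield very different "dimensionalities" depending on which definition is applied, which is exactly the comparability problem the abstract raises.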
Affiliation(s)
- Brett D Roads
- Department of Experimental Psychology, University College London, London, WC1E, UK.
- Bradley C Love
- Department of Experimental Psychology, University College London, London, WC1E, UK
47
Schuessler F, Mastrogiuseppe F, Ostojic S, Barak O. Aligned and oblique dynamics in recurrent neural networks. eLife 2024; 13:RP93060. [PMID: 39601404] [DOI: 10.7554/elife.93060]
Abstract
The relation between neural activity and behaviorally relevant variables is at the heart of neuroscience research. When strong, this relation is termed a neural representation. There is increasing evidence, however, for partial dissociations between activity in an area and relevant external variables. While many explanations have been proposed, a theoretical framework for the relationship between external and internal variables is lacking. Here, we utilize recurrent neural networks (RNNs) to explore the question of when and how neural dynamics and the network's output are related from a geometrical point of view. We find that training RNNs can lead to two dynamical regimes: dynamics can either be aligned with the directions that generate output variables, or oblique to them. We show that the choice of readout weight magnitude before training can serve as a control knob between the regimes, similar to recent findings in feedforward networks. These regimes are functionally distinct. Oblique networks are more heterogeneous and suppress noise in their output directions. They are furthermore more robust to perturbations along the output directions. Crucially, the oblique regime is specific to recurrent (but not feedforward) networks, arising from dynamical stability considerations. Finally, we show that tendencies toward the aligned or the oblique regime can be dissociated in neural recordings. Altogether, our results open a new perspective for interpreting neural activity by relating network dynamics and their output.
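The aligned/oblique distinction is, at bottom, an angle between two directions: the readout direction w and the dominant (highest-variance) direction of neural activity u. A toy computation makes this concrete (the vectors below are invented for illustration; the paper's analysis works with full activity covariances):

```python
import math

def cosine(u, w):
    """Cosine of the angle between vectors u and w."""
    dot = sum(a * b for a, b in zip(u, w))
    nu = math.sqrt(sum(a * a for a in u))
    nw = math.sqrt(sum(b * b for b in w))
    return dot / (nu * nw)

u = [1.0, 0.0, 0.0]            # dominant activity direction (illustrative)
w_aligned = [2.0, 0.0, 0.0]    # readout along the dominant direction
w_oblique = [0.0, 0.5, 0.5]    # readout orthogonal to it

a = cosine(u, w_aligned)       # |cos| near 1: "aligned" regime
o = cosine(u, w_oblique)       # |cos| near 0: "oblique" regime
```

In the oblique regime the output is carried by low-variance activity directions, which is why large-variance dynamics can look dissociated from the behaviorally relevant variable.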
Affiliation(s)
- Friedrich Schuessler
- Faculty of Electrical Engineering and Computer Science, Technical University of Berlin, Berlin, Germany
- Science of Intelligence, Research Cluster of Excellence, Berlin, Germany
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure-PSL Research University, Paris, France
- Omri Barak
- Rappaport Faculty of Medicine and Network Biology Research Laboratories, Technion - Israel Institute of Technology, Haifa, Israel
48
Liang KF, Kao JC. A reinforcement learning based software simulator for motor brain-computer interfaces. bioRxiv 2024:2024.11.25.625180. [PMID: 39651250] [PMCID: PMC11623538] [DOI: 10.1101/2024.11.25.625180]
Abstract
Intracortical motor brain-computer interfaces (BCIs) are expensive and time-consuming to design because accurate evaluation traditionally requires real-time experiments. In a BCI system, a user interacts with an imperfect decoder and continuously changes motor commands in response to unexpected decoded movements. This "closed-loop" nature of BCI leads to emergent interactions between the user and decoder that are challenging to model. The gold standard for BCI evaluation is therefore real-time experiments, which significantly limits the speed and community of BCI research. We present a new BCI simulator that enables researchers to accurately and quickly design BCIs for cursor control entirely in software. Our simulator replaces the BCI user with a deep reinforcement learning (RL) agent that interacts with a simulated BCI system and learns to optimally control it. We demonstrate that our simulator is accurate and versatile, reproducing the published results of three distinct types of BCI decoders: (1) a state-of-the-art linear decoder (FIT-KF), (2) a "two-stage" BCI decoder requiring closed-loop decoder adaptation (ReFIT-KF), and (3) a nonlinear recurrent neural network decoder (FORCE).
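The closed-loop structure being simulated can be sketched schematically (a hand-written greedy policy stands in here for the paper's deep RL agent, and the "decoder" is a made-up biased gain; none of this reflects the actual FIT-KF/ReFIT-KF/FORCE decoders the simulator reproduces):

```python
def decoder(command):
    """Imperfect stand-in decoder: a biased gain on the user's command,
    mimicking systematic decoding error."""
    return 0.8 * command + 0.05

def policy(cursor, target):
    """Stand-in user policy: issue a unit command toward the target."""
    return 1.0 if target > cursor else -1.0

def run_closed_loop(target=5.0, dt=0.1, max_steps=200, tol=0.1):
    """Closed loop: the user observes the decoded cursor and reacts; the
    decoder turns each reaction into movement. Emergent behavior (here, how
    long acquisition takes) depends on the user-decoder interaction."""
    cursor, steps = 0.0, 0
    while abs(target - cursor) > tol and steps < max_steps:
        cursor += dt * decoder(policy(cursor, target))
        steps += 1
    return cursor, steps

final_cursor, n_steps = run_closed_loop()
```

Replacing `policy` with a learning agent that adapts to the decoder's imperfections is the substitution the simulator makes, which is what lets it predict closed-loop performance without a real-time experiment.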
49
Hu B, Temiz NZ, Chou CN, Rupprecht P, Meissner-Bernard C, Titze B, Chung S, Friedrich RW. Representational learning by optimization of neural manifolds in an olfactory memory network. bioRxiv 2024:2024.11.17.623906. [PMID: 39605658] [PMCID: PMC11601331] [DOI: 10.1101/2024.11.17.623906]
Abstract
Higher brain functions depend on experience-dependent representations of relevant information that may be organized by attractor dynamics or by geometrical modifications of continuous "neural manifolds". To explore these scenarios we analyzed odor-evoked activity in telencephalic area pDp of juvenile and adult zebrafish, the homolog of piriform cortex. No obvious signatures of attractor dynamics were detected. Rather, olfactory discrimination training selectively enhanced the separation of neural manifolds representing task-relevant odors from other representations, consistent with predictions of autoassociative network models endowed with precise synaptic balance. Analytical approaches using the framework of manifold capacity revealed multiple geometrical modifications of representational manifolds that supported the classification of task-relevant sensory information. Manifold capacity predicted odor discrimination across individuals, indicating a close link between manifold geometry and behavior. Hence, pDp and possibly related recurrent networks store information in the geometry of representational manifolds, resulting in joint sensory and semantic maps that may support distributed learning processes.
Affiliation(s)
- Bo Hu
- Friedrich Miescher Institute for Biomedical Research, Fabrikstrasse 24, 4056 Basel, Switzerland
- University of Basel, 4003 Basel, Switzerland
- Nesibe Z. Temiz
- Friedrich Miescher Institute for Biomedical Research, Fabrikstrasse 24, 4056 Basel, Switzerland
- University of Basel, 4003 Basel, Switzerland
- Chi-Ning Chou
- Center for Computational Neuroscience, Flatiron Institute, New York, NY, USA
- Peter Rupprecht
- Friedrich Miescher Institute for Biomedical Research, Fabrikstrasse 24, 4056 Basel, Switzerland
- Neuroscience Center Zurich, 8057 Zurich, Switzerland
- Brain Research Institute, University of Zurich, 8057 Zurich, Switzerland
- Claire Meissner-Bernard
- Friedrich Miescher Institute for Biomedical Research, Fabrikstrasse 24, 4056 Basel, Switzerland
- Benjamin Titze
- Friedrich Miescher Institute for Biomedical Research, Fabrikstrasse 24, 4056 Basel, Switzerland
- SueYeon Chung
- Center for Computational Neuroscience, Flatiron Institute, New York, NY, USA
- Center for Neural Science, New York University, New York, NY, USA
- Rainer W. Friedrich
- Friedrich Miescher Institute for Biomedical Research, Fabrikstrasse 24, 4056 Basel, Switzerland
- University of Basel, 4003 Basel, Switzerland
50
Blini E, Arrighi R, Anobile G. Pupillary manifolds: uncovering the latent geometrical structures behind phasic changes in pupil size. Sci Rep 2024; 14:27306. [PMID: 39516679] [PMCID: PMC11549318] [DOI: 10.1038/s41598-024-78772-x]
Abstract
The size of the pupils reflects directly the balance of different branches of the autonomic nervous system. This measure is inexpensive, non-invasive, and has provided invaluable insights on a wide range of mental processes, from attention to emotion and executive functions. Two outstanding limitations of current pupillometry research are the lack of consensus in the analytical approaches, which vary wildly across research groups and disciplines, and the fact that, unlike other neuroimaging techniques, pupillometry lacks the dimensionality to shed light on the different sources of the observed effects. In other words, pupillometry provides an integrated readout of several distinct networks, but it is unclear whether each has a specific fingerprint, stemming from its function or physiological substrate. Here we show that phasic changes in pupil size are inherently low-dimensional, with modes that are highly consistent across behavioral tasks of very different nature, suggesting that these changes occur along pupillary manifolds that are highly constrained by the underlying physiological structures rather than functions. These results provide not only a unified approach to analyze pupillary data, but also the opportunity for physiology and psychology to refer to the same processes by tracing the sources of the reported changes in pupil size in the underlying biology.
Affiliation(s)
- Elvio Blini
- Department of Neuroscience, Psychology, Pharmacology and Child Health, University of Florence, Via di San Salvi 12, Building 26, Florence, Italy.
- Roberto Arrighi
- Department of Neuroscience, Psychology, Pharmacology and Child Health, University of Florence, Via di San Salvi 12, Building 26, Florence, Italy
- Giovanni Anobile
- Department of Neuroscience, Psychology, Pharmacology and Child Health, University of Florence, Via di San Salvi 12, Building 26, Florence, Italy