1
Candelori B, Bardella G, Spinelli I, Ramawat S, Pani P, Ferraina S, Scardapane S. Spatio-temporal transformers for decoding neural movement control. J Neural Eng 2025; 22:016023. PMID: 39870043. DOI: 10.1088/1741-2552/adaef0.
Abstract
Objective. Deep learning tools applied to high-resolution neurophysiological data have progressed significantly, offering enhanced decoding, real-time processing, and readability for practical applications. However, the design of artificial neural networks to analyze neural activity in vivo remains a challenge, requiring a delicate balance between efficiency in low-data regimes and the interpretability of the results. Approach. To address this challenge, we introduce a novel specialized transformer architecture to analyze single-neuron spiking activity. The model is tested on multi-electrode recordings from the dorsal premotor cortex of non-human primates performing a motor inhibition task. Main results. The proposed architecture provides an early prediction of the correct movement direction, achieving accurate results no later than 230 ms after the Go signal presentation across animals. Additionally, the model can forecast, before an unexpected Stop signal is actually presented, whether the movement will be generated or withheld. To further understand the internal dynamics of the model, we compute the predicted correlations between time steps and between neurons at successive layers of the architecture, with the evolution of these correlations mirroring findings from previous theoretical analyses. Significance. Overall, our framework provides a comprehensive use case for the practical implementation of deep learning tools in motor control research, highlighting both the predictive capabilities and the interpretability of the proposed architecture.
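The factorized handling of the two axes in such architectures can be made concrete with a minimal numpy sketch: attention along time within each neuron, then across neurons within each time bin. This is an illustrative toy, not the authors' architecture; the identity query/key/value projections, the random embedding, and all dimensions are assumptions chosen for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    # X: (tokens, d). Single-head scaled dot-product attention with
    # identity projections for brevity; a real model learns W_q, W_k, W_v.
    d = X.shape[-1]
    return softmax(X @ X.T / np.sqrt(d)) @ X

def spatio_temporal_block(S):
    # S: (neurons, time_bins, d) embedded spike counts.
    # Temporal step: each neuron attends over its own time bins.
    T = np.stack([self_attention(S[n]) for n in range(S.shape[0])])
    # Spatial step: each time bin attends across neurons.
    return np.stack([self_attention(T[:, t]) for t in range(S.shape[1])], axis=1)

rng = np.random.default_rng(0)
spikes = rng.poisson(2.0, size=(8, 20)).astype(float)  # 8 neurons, 20 time bins
emb = spikes[..., None] * rng.normal(size=(1, 1, 16))  # toy 16-d embedding
features = spatio_temporal_block(emb)
print(features.shape)  # (8, 20, 16)
```

A decoding head (for instance, a linear readout to movement direction) would sit on top of pooled features; only the attention factorization is shown here.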
Affiliation(s)
- Benedetta Candelori
- Department of Information Engineering, Electronics and Telecommunications, Sapienza University of Rome, Rome, Italy
- Giampiero Bardella
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
- Indro Spinelli
- Department of Computer Science, Sapienza University of Rome, Rome, Italy
- Surabhi Ramawat
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
- Pierpaolo Pani
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
- Stefano Ferraina
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
- Simone Scardapane
- Department of Information Engineering, Electronics and Telecommunications, Sapienza University of Rome, Rome, Italy
2
Hu A, Zoltowski D, Nair A, Anderson D, Duncker L, Linderman S. Modeling Latent Neural Dynamics with Gaussian Process Switching Linear Dynamical Systems. arXiv 2025; arXiv:2408.03330v3 [Preprint]. PMID: 39876935. PMCID: PMC11774443.
Abstract
Understanding how the collective activity of neural populations relates to computation, and ultimately behavior, is a key goal in neuroscience. To this end, statistical methods that describe high-dimensional neural time series in terms of low-dimensional latent dynamics have played a fundamental role in characterizing neural systems. Yet what constitutes a successful method involves two opposing criteria: (1) methods should be expressive enough to capture complex nonlinear dynamics, and (2) they should maintain a notion of interpretability often only warranted by simpler linear models. In this paper, we develop an approach that balances these two objectives: the Gaussian Process Switching Linear Dynamical System (gpSLDS). Our method builds on previous work modeling the latent state evolution via a stochastic differential equation whose nonlinear dynamics are described by a Gaussian process (GP-SDEs). We propose a novel kernel function that enforces smoothly interpolated, locally linear dynamics, and therefore expresses flexible yet interpretable dynamics akin to those of recurrent switching linear dynamical systems (rSLDS). Our approach resolves key limitations of the rSLDS, such as artifactual oscillations in dynamics near discrete state boundaries, while also providing posterior uncertainty estimates of the dynamics. To fit our models, we leverage a modified learning objective that improves the estimation accuracy of kernel hyperparameters compared to previous GP-SDE fitting approaches. We apply our method to synthetic data and to data recorded in two neuroscience experiments, and we demonstrate favorable performance in comparison to the rSLDS.
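As rough intuition for smoothly interpolated locally linear dynamics, here is a numpy toy, not the gpSLDS itself (which places a GP prior over the drift and fits by variational inference): each regime has linear dynamics, and soft weights blend them smoothly across state space. All matrices and parameters below are hand-picked for illustration.

```python
import numpy as np

def smooth_switching_drift(x, As, bs, Rs, rs, beta=5.0):
    # Drift = sum_k w_k(x) * (A_k x + b_k), with smooth softmax weights
    # standing in for the gpSLDS kernel's smooth partition of state space.
    logits = beta * (Rs @ x + rs)
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return sum(w[k] * (As[k] @ x + bs[k]) for k in range(len(As)))

# Two regimes: linear decay toward (-1, 0) and toward (+1, 0) -> bistable flow.
As = [-np.eye(2), -np.eye(2)]
bs = [np.array([-1.0, 0.0]), np.array([1.0, 0.0])]
Rs = np.array([[-1.0, 0.0], [1.0, 0.0]])  # regime favored by the sign of x[0]
rs = np.zeros(2)

x, dt = np.array([0.4, 0.3]), 0.01
for _ in range(2000):                     # Euler integration of the drift
    x = x + dt * smooth_switching_drift(x, As, bs, Rs, rs)
print(np.round(x, 2))                     # settles near the (+1, 0) fixed point
```

In the gpSLDS the blending weights come from a kernel with a GP prior rather than this fixed softmax, which is what yields posterior uncertainty over the dynamics.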
3
Bardella G, Franchini S, Pani P, Ferraina S. Lattice physics approaches for neural networks. iScience 2024; 27:111390. PMID: 39679297. PMCID: PMC11638618. DOI: 10.1016/j.isci.2024.111390.
Abstract
Modern neuroscience has evolved into a frontier field that draws on numerous disciplines, resulting in the flourishing of novel conceptual frames primarily inspired by physics and complex systems science. Contributing to this direction, we recently introduced a mathematical framework that describes the spatiotemporal interactions of systems of neurons using lattice field theory, the reference paradigm of theoretical particle physics. In this note, we provide a concise summary of the basics of the theory, aiming to be intuitive to the interdisciplinary neuroscience community. We contextualize our methods, illustrating how to readily connect the parameters of our formulation to experimental variables using well-known renormalization procedures. This synopsis yields the key concepts needed to describe neural networks using lattice physics. Such methods deserve attention in an era of rapid improvements in numerical computation, as they can help relate observations of neural activity to generative models underpinned by physical principles.
Affiliation(s)
- Giampiero Bardella
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
- Simone Franchini
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
- Pierpaolo Pani
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
- Stefano Ferraina
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
4
Stringer C, Pachitariu M. Analysis methods for large-scale neuronal recordings. Science 2024; 386:eadp7429. PMID: 39509504. DOI: 10.1126/science.adp7429.
Abstract
Simultaneous recordings from hundreds or thousands of neurons are becoming routine because of innovations in instrumentation, molecular tools, and data processing software. Such recordings can be analyzed with data science methods, but it is not immediately clear what methods to use or how to adapt them for neuroscience applications. We review, categorize, and illustrate diverse analysis methods for neural population recordings and describe how these methods have been used to make progress on longstanding questions in neuroscience. We review a variety of approaches, ranging from the mathematically simple to the complex, from exploratory to hypothesis-driven, and from recently developed to more established methods. We also illustrate some of the common statistical pitfalls in analyzing large-scale neural data.
Affiliation(s)
- Carsen Stringer
- Howard Hughes Medical Institute (HHMI) Janelia Research Campus, Ashburn, VA, USA
- Marius Pachitariu
- Howard Hughes Medical Institute (HHMI) Janelia Research Campus, Ashburn, VA, USA
5
DePasquale B, Brody CD, Pillow JW. Neural population dynamics underlying evidence accumulation in multiple rat brain regions. eLife 2024; 13:e84955. PMID: 39162374. PMCID: PMC12005723. DOI: 10.7554/eLife.84955.
Abstract
Accumulating evidence to make decisions is a core cognitive function. Previous studies have tended to estimate accumulation using either neural or behavioral data alone. Here, we develop a unified framework for modeling stimulus-driven behavior and multi-neuron activity simultaneously. We applied our method to choices and neural recordings from three rat brain regions (the posterior parietal cortex (PPC), the frontal orienting fields (FOF), and the anterior-dorsal striatum (ADS)) while subjects performed a pulse-based accumulation task. Each region was best described by a distinct accumulation model, and all of these differed from the model that best described the animal's choices. FOF activity was consistent with an accumulator in which early evidence was favored, while the ADS reflected near-perfect accumulation. Viewing neural responses within an accumulation framework revealed a distinct association between each brain region and choice: choices were better predicted from all regions using a comprehensive, accumulation-based framework, and different brain regions differentially reflected choice-related accumulation signals. FOF and ADS both reflected choice, but ADS showed more instances of decision vacillation. Previous studies relating neural data to behaviorally inferred accumulation dynamics have implicitly assumed that individual brain regions reflect the whole-animal accumulator. Our results suggest that different brain regions represent accumulated evidence in dramatically different ways, and that accumulation at the whole-animal level may be constructed from a variety of neural-level accumulators.
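The accumulator variants compared in such models can be sketched in a few lines of numpy. This is a generic leaky/unstable drift-diffusion toy, not the authors' fitted model; the pulse trains, noise level, and leak parameter are illustrative assumptions.

```python
import numpy as np

def accumulate(pulses, lam=0.0, sigma=0.1, dt=0.01, rng_seed=0):
    # a(t) integrates pulsatile evidence: da = lam*a*dt + pulse + noise.
    # lam < 0: leaky (early evidence forgotten); lam > 0: unstable
    # (early evidence amplified); lam = 0: perfect accumulation.
    rng = np.random.default_rng(rng_seed)
    a, trace = 0.0, []
    for p in pulses:
        a += lam * a * dt + p + sigma * np.sqrt(dt) * rng.normal()
        trace.append(a)
    return np.array(trace)

rng = np.random.default_rng(1)
pulses = np.zeros(500)
pulses[rng.choice(500, 30, replace=False)] = 1.0    # rightward clicks
pulses[rng.choice(500, 20, replace=False)] = -1.0   # leftward clicks
perfect = accumulate(pulses, lam=0.0)
leaky = accumulate(pulses, lam=-5.0)
print(round(perfect[-1], 2), round(leaky[-1], 2))
```

With lam = 0 the final value tracks the net click count; the leaky variant instead weights recent clicks, which is the kind of distinction the model comparison in the paper formalizes.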
Affiliation(s)
- Brian DePasquale
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Carlos D Brody
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Howard Hughes Medical Institute, Princeton University, Princeton, United States
- Jonathan W Pillow
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Department of Psychology, Princeton University, Princeton, United States
6
Bardella G, Franchini S, Pan L, Balzan R, Ramawat S, Brunamonti E, Pani P, Ferraina S. Neural Activity in Quarks Language: Lattice Field Theory for a Network of Real Neurons. Entropy (Basel) 2024; 26:495. PMID: 38920504. PMCID: PMC11203154. DOI: 10.3390/e26060495.
Abstract
Brain-computer interfaces have seen extraordinary surges in development in recent years, and a significant discrepancy now exists between the abundance of available data and the limited headway made in achieving a unified theoretical framework. This discrepancy becomes particularly pronounced when examining collective neural activity at the micro- and mesoscales, where a coherent formalization that adequately describes neural interactions is still lacking. Here, we introduce a mathematical framework to analyze systems of natural neurons and interpret the related empirical observations in terms of lattice field theory, an established paradigm from theoretical particle physics and statistical mechanics. Our methods are tailored to interpret data from chronic neural interfaces, especially spike rasters from measurements of single-neuron activity, and generalize the maximum entropy model for neural networks so that the time evolution of the system is also taken into account. This is obtained by bridging particle physics and neuroscience, paving the way for particle physics-inspired models of the neocortex.
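For orientation, the static pairwise maximum entropy model referred to here is the Ising model; below is a minimal numpy sketch with asynchronous Glauber updates, one simple way to give the static model a time dimension. This is an illustrative toy, not the paper's lattice-field-theory construction; the couplings, inverse temperature, and update rule are assumptions.

```python
import numpy as np

def ising_energy(s, J, h):
    # Pairwise maximum entropy (Ising) energy for spins s in {-1, +1}:
    # E(s) = -(1/2) sum_ij J_ij s_i s_j - sum_i h_i s_i
    return -0.5 * s @ J @ s - h @ s

def glauber_step(s, J, h, beta=1.0, rng=None):
    # One asynchronous spin update: a Markovian dynamics whose stationary
    # distribution is the maximum entropy (Boltzmann) distribution.
    rng = rng or np.random.default_rng()
    i = rng.integers(len(s))
    field = J[i] @ s + h[i]
    p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * field))
    s = s.copy()
    s[i] = 1.0 if rng.random() < p_up else -1.0
    return s

rng = np.random.default_rng(0)
n = 10
J = 0.1 * (np.ones((n, n)) - np.eye(n))   # weak uniform coupling, no self-term
h = np.zeros(n)
s = rng.choice([-1.0, 1.0], size=n)
for _ in range(5000):                      # relax toward equilibrium
    s = glauber_step(s, J, h, beta=2.0, rng=rng)
print(ising_energy(s, J, h))
```

The generalization described in the abstract replaces this single-time equilibrium picture with a field defined over both neurons and time, so that temporal correlations are modeled rather than integrated out.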
Affiliation(s)
- Giampiero Bardella
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Roma, Italy
- Simone Franchini
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Roma, Italy
- Liming Pan
- School of Cyber Science and Technology, University of Science and Technology of China, Hefei 230026, China
- Riccardo Balzan
- Laboratoire de Chimie et Biochimie Pharmacologiques et Toxicologiques, UMR 8601, UFR Biomédicale et des Sciences de Base, Université Paris Descartes-CNRS, PRES Paris Sorbonne Cité, 75006 Paris, France
- Surabhi Ramawat
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Roma, Italy
- Emiliano Brunamonti
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Roma, Italy
- Pierpaolo Pani
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Roma, Italy
- Stefano Ferraina
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Roma, Italy
7
Liu Q, Wei C, Qu Y, Liang Z. Modelling and Controlling System Dynamics of the Brain: An Intersection of Machine Learning and Control Theory. Adv Neurobiol 2024; 41:63-87. PMID: 39589710. DOI: 10.1007/978-3-031-69188-1_3.
Abstract
The human brain, as a complex system, has long captivated multidisciplinary researchers aiming to decode its intricate structure and function. This intricate network has driven scientific pursuits to advance our understanding of cognition, behavior, and neurological disorders by delving into the complex mechanisms underlying brain function and dysfunction. Modelling brain dynamics with machine learning techniques deepens our comprehension of the brain from a computational perspective. These computational models allow researchers to simulate and analyze neural interactions, facilitating the identification of dysfunctions in connectivity or activity patterns. Additionally, the trained dynamical system, serving as a surrogate model, can be used to optimize neurostimulation strategies under the guidance of control theory. In this chapter, we discuss recent studies on modelling and controlling brain dynamics at the intersection of machine learning and control theory, providing a framework to understand and improve cognitive function and to treat neurological and psychiatric disorders.
Affiliation(s)
- Quanying Liu
- Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, GD, P.R. China
- Chen Wei
- Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, GD, P.R. China
- Youzhi Qu
- Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, GD, P.R. China
- Zhichao Liang
- Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, GD, P.R. China
8
Genkin M, Shenoy KV, Chandrasekaran C, Engel TA. The dynamics and geometry of choice in premotor cortex. bioRxiv 2023:2023.07.22.550183 [Preprint]. PMID: 37546748. PMCID: PMC10401920. DOI: 10.1101/2023.07.22.550183.
Abstract
The brain represents sensory variables in the coordinated activity of neural populations, in which tuning curves of single neurons define the geometry of the population code. Whether the same coding principle holds for dynamic cognitive variables remains unknown because internal cognitive processes unfold with a unique time course on single trials observed only in the irregular spiking of heterogeneous neural populations. Here we show the existence of such a population code for the dynamics of choice formation in the primate premotor cortex. We developed an approach to simultaneously infer population dynamics and tuning functions of single neurons to the population state. Applied to spike data recorded during decision-making, our model revealed that populations of neurons encoded the same dynamic variable predicting choices, and heterogeneous firing rates resulted from the diverse tuning of single neurons to this decision variable. The inferred dynamics indicated an attractor mechanism for decision computation. Our results reveal a common geometric principle for neural encoding of sensory and dynamic cognitive variables.
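The core idea (heterogeneous single-neuron responses arising from diverse tuning to one shared latent decision variable) can be sketched generatively in numpy. This is a toy in the spirit of the model, not the authors' inference method, which works in the much harder inverse direction; the double-well drift, softplus tuning form, and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared 1-D latent decision variable with attractor (double-well) dynamics.
def drift(x):
    return x - x**3            # attractors at x = -1 and x = +1

dt, T = 0.01, 3000
x, latent = 0.05, []
for _ in range(T):             # Euler-Maruyama integration
    x += drift(x) * dt + 0.05 * np.sqrt(dt) * rng.normal()
    latent.append(x)
latent = np.array(latent)

# Heterogeneous neurons: each has its own tuning to the SAME latent state.
n_neurons = 12
gains = rng.normal(size=n_neurons)           # diverse slopes, some anti-tuned
offsets = rng.uniform(0.5, 2.0, n_neurons)
rates = np.log1p(np.exp(gains[:, None] * latent[None, :] + offsets[:, None]))
spikes = rng.poisson(rates * dt)             # (neurons, time) spike raster
print(spikes.shape)  # (12, 3000)
```

Despite every neuron firing differently, all variability in the mean rates traces back to the single latent trajectory, which is the structure the paper's method is designed to recover from spikes.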
Affiliation(s)
- Krishna V Shenoy
- Howard Hughes Medical Institute, Stanford University, Stanford, CA
- Department of Electrical Engineering, Stanford University, Stanford, CA
- Chandramouli Chandrasekaran
- Department of Anatomy & Neurobiology, Boston University, Boston, MA
- Department of Psychological and Brain Sciences, Boston University, Boston, MA
- Center for Systems Neuroscience, Boston University, Boston, MA
- Tatiana A Engel
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
9
Langdon C, Genkin M, Engel TA. A unifying perspective on neural manifolds and circuits for cognition. Nat Rev Neurosci 2023; 24:363-377. PMID: 37055616. PMCID: PMC11058347. DOI: 10.1038/s41583-023-00693-x.
Abstract
Two different perspectives have informed efforts to explain the link between the brain and behaviour. One approach seeks to identify neural circuit elements that carry out specific functions, emphasizing connectivity between neurons as a substrate for neural computations. Another approach centres on neural manifolds - low-dimensional representations of behavioural signals in neural population activity - and suggests that neural computations are realized by emergent dynamics. Although manifolds reveal an interpretable structure in heterogeneous neuronal activity, finding the corresponding structure in connectivity remains a challenge. We highlight examples in which establishing the correspondence between low-dimensional activity and connectivity has been possible, unifying the neural manifold and circuit perspectives. This relationship is conspicuous in systems in which the geometry of neural responses mirrors their spatial layout in the brain, such as the fly navigational system. Furthermore, we describe evidence that, in systems in which neural responses are heterogeneous, the circuit comprises interactions between activity patterns on the manifold via low-rank connectivity. We suggest that unifying the manifold and circuit approaches is important if we are to be able to causally test theories about the neural computations that underlie behaviour.
Affiliation(s)
- Christopher Langdon
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
- Mikhail Genkin
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
- Tatiana A Engel
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
10
Gao TT, Yan G. Autonomous inference of complex network dynamics from incomplete and noisy data. Nat Comput Sci 2022; 2:160-168. PMID: 38177441. DOI: 10.1038/s43588-022-00217-0.
Abstract
The availability of empirical data capturing the structure and behaviour of complex networked systems has greatly increased in recent years; however, a versatile computational toolbox for unveiling a complex system's nodal and interaction dynamics from data remains elusive. Here we develop a two-phase approach for the autonomous inference of complex network dynamics, and its effectiveness is demonstrated by inferring neuronal, genetic, social and coupled-oscillator dynamics on various synthetic and real networks. Importantly, the approach is robust to incompleteness and noise, including low resolution, observational and dynamical noise, missing and spurious links, and dynamical heterogeneity. We apply the two-phase approach to infer the early spreading dynamics of influenza A on the worldwide airline network, and the inferred dynamical equation can also capture the spread of severe acute respiratory syndrome and coronavirus disease 2019. Together, these findings offer an avenue to discovering the hidden microscopic mechanisms of a broad array of real networked systems.
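The flavor of data-driven recovery of networked dynamics can be illustrated with a deliberately simple numpy stand-in: regress observed derivatives on a candidate library and threshold small coefficients to expose the interaction graph. The published two-phase method is far more general (nonlinear function libraries, robustness to noise and missing links); here the system is linear and noiseless, and the network, coupling strength, and threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth networked dynamics: dx/dt = W x, with W combining nodal
# decay (-I) and pairwise coupling on a random directed graph A.
n, dt = 5, 0.01
A = (rng.random((n, n)) < 0.4).astype(float)
np.fill_diagonal(A, 0)
W = -np.eye(n) + 0.4 * A

# Collect noiseless Euler trajectories from several random starts.
states, derivs = [], []
for _ in range(10):
    x = rng.normal(size=n)
    for _ in range(50):
        dx = W @ x
        states.append(x)
        derivs.append(dx)
        x = x + dt * dx

# Inference stand-in: least squares on a (linear) candidate library,
# then threshold small coefficients to expose the interaction graph.
coef, *_ = np.linalg.lstsq(np.array(states), np.array(derivs), rcond=None)
W_hat = coef.T
A_hat = (np.abs(W_hat - np.diag(np.diag(W_hat))) > 0.1).astype(float)
print(int((A_hat == A).sum()), "of", n * n, "adjacency entries recovered")
```

Exact recovery is guaranteed here only because the data are noiseless and truly linear; handling noisy, incomplete observations of nonlinear dynamics is precisely what the paper's two-phase approach addresses.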
Affiliation(s)
- Ting-Ting Gao
- MOE Key Laboratory of Advanced Micro-Structured Materials and School of Physics Science and Engineering, Tongji University, Shanghai, People's Republic of China
- Frontiers Science Center for Intelligent Autonomous Systems, Tongji University, Shanghai, People's Republic of China
- Gang Yan
- MOE Key Laboratory of Advanced Micro-Structured Materials and School of Physics Science and Engineering, Tongji University, Shanghai, People's Republic of China
- Frontiers Science Center for Intelligent Autonomous Systems, Tongji University, Shanghai, People's Republic of China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, People's Republic of China