1. Kudryashova N, Hurwitz C, Perich MG, Hennig MH. BAND: Behavior-Aligned Neural Dynamics is all you need to capture motor corrections. bioRxiv [preprint] 2025:2025.03.21.644350. PMID: 40196470; PMCID: PMC11974739; DOI: 10.1101/2025.03.21.644350.
Abstract
Neural activity in motor cortical areas is well-explained by latent neural population dynamics: the motor preparation phase sets the initial condition for the movement, while the dynamics that unfold during the motor execution phase orchestrate the sequence of muscle activations. While preparatory activity explains a large fraction of both neural and behavioral variability during the execution of a planned movement, it cannot account for corrections and adjustments during movements, as this requires sensory feedback not available during planning. Therefore, accounting for unplanned, sensory-guided movements requires knowledge of the relevant inputs to the motor cortex from other brain areas. Here, we provide evidence that these inputs cause transient deviations from an autonomous neural population trajectory, and show that these dynamics cannot be found by unsupervised inference methods. We introduce the new Behavior-Aligned Neural Dynamics (BAND) model, which exploits semi-supervised learning to predict both planned and unplanned movements from neural activity in the motor cortex, including activity that can be missed by unsupervised inference methods. Our analysis using BAND suggests that 1) transient motor corrections are encoded in small neural variability; 2) motor corrections are encoded in a sparse sub-population of primary motor cortex (M1) neurons; and 3) combining latent dynamical modeling with behavior supervision allows for capturing both the movement plan and corrections.
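The core idea of behavior supervision described in this abstract (latents that must explain both the neural data and the behavior) can be illustrated with a linear toy model. This is only a sketch of the general principle, not the BAND architecture; the data, dimensions, and the SVD-based weighting scheme are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: T time bins, N neurons, B behavioral variables, K latents.
T, N, B, K = 200, 30, 2, 3
z_true = rng.normal(size=(T, K))                  # ground-truth latents
X = z_true @ rng.normal(size=(K, N)) + 0.1 * rng.normal(size=(T, N))  # neural
Y = z_true @ rng.normal(size=(K, B)) + 0.1 * rng.normal(size=(T, B))  # behavior

lam = 5.0  # supervision weight: how strongly latents must explain behavior

# Joint linear latent model: find K-dim latents that reconstruct both the
# neural data and (weighted) behavior. With linear maps this reduces to a
# truncated SVD of the stacked matrix [X, sqrt(lam) * Y].
M = np.hstack([X, np.sqrt(lam) * Y])
U, S, Vt = np.linalg.svd(M, full_matrices=False)
Z = U[:, :K] * S[:K]                              # behavior-aligned latents

# Decode behavior from the latents with ordinary least squares.
W, *_ = np.linalg.lstsq(Z, Y, rcond=None)
Y_hat = Z @ W
r2 = 1 - np.sum((Y - Y_hat) ** 2) / np.sum((Y - np.mean(Y, 0)) ** 2)
print(f"behavior R^2 from supervised latents: {r2:.3f}")
```

Setting `lam = 0` recovers plain unsupervised PCA of the neural data, which is the contrast the paper draws with unsupervised inference.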
Affiliation(s)
- Nina Kudryashova
  - School of Informatics, University of Edinburgh; Informatics Forum, 10 Crichton St, Newington, Edinburgh EH8 9AB, United Kingdom
- Cole Hurwitz
  - Zuckerman Institute, Columbia University; 3227 Broadway, New York, NY 10027, United States
- Matthew G Perich
  - Département de neurosciences, Faculté de médecine, Université de Montréal; Pavillon Roger-Gaudry, 2900 Edouard Montpetit Blvd, Montreal, Quebec H3T 1J4, Canada
  - Mila, Quebec Artificial Intelligence Institute; 6666 Rue Saint-Urbain, Montréal, QC H2S 3H1, Canada
- Matthias H Hennig
  - School of Informatics, University of Edinburgh; Informatics Forum, 10 Crichton St, Newington, Edinburgh EH8 9AB, United Kingdom
2. Mathis MW, Perez Rotondo A, Chang EF, Tolias AS, Mathis A. Decoding the brain: From neural representations to mechanistic models. Cell 2024; 187:5814-5832. PMID: 39423801; PMCID: PMC11637322; DOI: 10.1016/j.cell.2024.08.051.
Abstract
A central principle in neuroscience is that neurons within the brain act in concert to produce perception, cognition, and adaptive behavior. Neurons are organized into specialized brain areas, dedicated to different functions to varying extents, and their function relies on distributed circuits to continuously encode relevant environmental and body-state features, enabling other areas to decode (interpret) these representations for computing meaningful decisions and executing precise movements. Thus, the distributed brain can be thought of as a series of computations that act to encode and decode information. In this perspective, we detail important concepts of neural encoding and decoding and highlight the mathematical tools used to measure them, including deep learning methods. We provide case studies where decoding concepts enable foundational and translational science in motor, visual, and language processing.
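A minimal instance of the decoding concept discussed in this perspective is a linear readout of behavior from population activity. The sketch below uses ridge regression on synthetic data; the sizes, the toy "responses", and the regularization value are all assumptions, not anything from the paper's case studies:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy decoding problem: predict a 2D behavioral signal from N neurons.
T, N = 300, 50
W_true = rng.normal(size=(N, 2))
X = rng.normal(size=(T, N))                      # neural "responses"
Y = X @ W_true + 0.5 * rng.normal(size=(T, 2))   # behavior they encode

# Split into train/test, then fit a ridge decoder in closed form:
# W = (X'X + alpha*I)^-1 X'Y.
Xtr, Xte, Ytr, Yte = X[:200], X[200:], Y[:200], Y[200:]
alpha = 1.0
W = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(N), Xtr.T @ Ytr)

# Score the decoder on held-out trials.
Y_pred = Xte @ W
r2 = 1 - np.sum((Yte - Y_pred) ** 2) / np.sum((Yte - Yte.mean(0)) ** 2)
print(f"held-out decoding R^2: {r2:.3f}")
```

Held-out evaluation is the key discipline here: a decoder scored on its training data can report high accuracy without having learned anything generalizable.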
Affiliation(s)
- Mackenzie Weygandt Mathis
  - Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
  - Neuro-X Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
- Adriana Perez Rotondo
  - Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
  - Neuro-X Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
- Edward F Chang
  - Department of Neurological Surgery, UCSF, San Francisco, CA, USA
- Andreas S Tolias
  - Department of Ophthalmology, Byers Eye Institute, Stanford University, Stanford, CA, USA
  - Department of Electrical Engineering, Stanford University, Stanford, CA, USA
  - Stanford BioX, Stanford University, Stanford, CA, USA
  - Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Alexander Mathis
  - Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
  - Neuro-X Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
3. Cai W, Taghia J, Menon V. A multi-demand operating system underlying diverse cognitive tasks. Nat Commun 2024; 15:2185. PMID: 38467606; PMCID: PMC10928152; DOI: 10.1038/s41467-024-46511-5.
Abstract
The existence of a multiple-demand cortical system with an adaptive, domain-general role in cognition has been proposed, but the underlying dynamic mechanisms and their links to cognitive control abilities are poorly understood. Here we use a probabilistic generative Bayesian model of brain circuit dynamics to determine dynamic brain states across multiple cognitive domains, independent datasets, and participant groups, including task fMRI data from the Human Connectome Project, the Dual Mechanisms of Cognitive Control study, and a neurodevelopmental study. We discovered a shared brain state across seven distinct cognitive tasks and found that the dynamics of this shared brain state predicted cognitive control abilities in each task. Our findings reveal the flexible engagement of dynamic brain processes across multiple cognitive domains and participant groups, and uncover the generative mechanisms underlying the functioning of a domain-general cognitive operating system. Our computational framework opens promising avenues for probing neurocognitive function and dysfunction.
Affiliation(s)
- Weidong Cai
  - Department of Psychiatry & Behavioral Sciences, Stanford University School of Medicine, Stanford, CA, USA
  - Wu Tsai Neuroscience Institute, Stanford University, Stanford, CA, USA
- Jalil Taghia
  - Department of Information Technology, Uppsala University, Uppsala, Sweden
- Vinod Menon
  - Department of Psychiatry & Behavioral Sciences, Stanford University School of Medicine, Stanford, CA, USA
  - Wu Tsai Neuroscience Institute, Stanford University, Stanford, CA, USA
  - Department of Neurology & Neurological Sciences, Stanford University School of Medicine, Stanford, CA, USA
4. Kuzmina E, Kriukov D, Lebedev M. Neuronal travelling waves explain rotational dynamics in experimental datasets and modelling. Sci Rep 2024; 14:3566. PMID: 38347042; PMCID: PMC10861525; DOI: 10.1038/s41598-024-53907-2.
Abstract
Spatiotemporal properties of neuronal population activity in cortical motor areas have been subjects of experimental and theoretical investigations, generating numerous interpretations regarding mechanisms for preparing and executing limb movements. Two competing models, representational and dynamical, strive to explain the relationship between movement parameters and neuronal activity. A dynamical model uses the jPCA method that holistically characterizes oscillatory activity in neuron populations by maximizing the data rotational dynamics. Different rotational dynamics interpretations revealed by the jPCA approach have been proposed. Yet, the nature of such dynamics remains poorly understood. We comprehensively analyzed several neuronal-population datasets and found rotational dynamics consistently accounted for by a traveling wave pattern. For quantifying rotation strength, we developed a complex-valued measure, the gyration number. Additionally, we identified parameters influencing rotation extent in the data. Our findings suggest that rotational dynamics and traveling waves are typically the same phenomena, so reevaluation of the previous interpretations where they were considered separate entities is needed.
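The rotational-dynamics idea behind the jPCA method mentioned in this abstract can be sketched with a small simulation: fit linear dynamics dx/dt = Mx to a trajectory and measure how much of M is skew-symmetric (pure skew-symmetric dynamics are pure rotations). This is a simplified illustration, not the authors' jPCA implementation or their gyration number; all simulation parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a 2D rotational latent trajectory embedded in N neurons,
# then ask how rotational the fitted linear dynamics dx/dt = M x are.
T, N, dt = 500, 20, 0.01
omega = 2 * np.pi  # 1 Hz rotation
A = np.array([[0.0, -omega], [omega, 0.0]])      # skew-symmetric generator
z = np.zeros((T, 2)); z[0] = [1.0, 0.0]
for t in range(T - 1):
    z[t + 1] = z[t] + dt * (A @ z[t])            # forward-Euler integration
X = z @ rng.normal(size=(2, N)) + 0.01 * rng.normal(size=(T, N))

# Reduce to 2 PCs and fit dX/dt = X M by least squares (a jPCA-style fit).
Xc = X - X.mean(0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Xc @ Vt[:2].T                                # 2D principal components
dP = np.gradient(P, dt, axis=0)
M, *_ = np.linalg.lstsq(P, dP, rcond=None)

# Rotation strength: fraction of the dynamics matrix that is skew-symmetric.
M_skew = 0.5 * (M - M.T)
rot_frac = np.linalg.norm(M_skew) / np.linalg.norm(M)
print(f"skew-symmetric fraction of fitted dynamics: {rot_frac:.2f}")
```

The paper's point is that a traveling wave across the population (a systematic phase gradient) produces exactly this kind of strongly skew-symmetric fit, so high rotation scores need not imply a separate rotational mechanism.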
Affiliation(s)
- Ekaterina Kuzmina
  - Skolkovo Institute of Science and Technology, Vladimir Zelman Center for Neurobiology and Brain Rehabilitation, Moscow, Russia, 121205
  - Artificial Intelligence Research Institute (AIRI), Moscow, Russia
- Dmitrii Kriukov
  - Artificial Intelligence Research Institute (AIRI), Moscow, Russia
  - Skolkovo Institute of Science and Technology, Center for Molecular and Cellular Biology, Moscow, Russia, 121205
- Mikhail Lebedev
  - Faculty of Mechanics and Mathematics, Lomonosov Moscow State University, Moscow, Russia, 119992
  - Sechenov Institute of Evolutionary Physiology and Biochemistry of the Russian Academy of Sciences, Saint Petersburg, Russia, 194223
5. Nozari E, Bertolero MA, Stiso J, Caciagli L, Cornblath EJ, He X, Mahadevan AS, Pappas GJ, Bassett DS. Macroscopic resting-state brain dynamics are best described by linear models. Nat Biomed Eng 2024; 8:68-84. PMID: 38082179; PMCID: PMC11357987; DOI: 10.1038/s41551-023-01117-y.
Abstract
It is typically assumed that large networks of neurons exhibit a large repertoire of nonlinear behaviours. Here we challenge this assumption by leveraging mathematical models derived from measurements of local field potentials via intracranial electroencephalography and of whole-brain blood-oxygen-level-dependent brain activity via functional magnetic resonance imaging. We used state-of-the-art linear and nonlinear families of models to describe spontaneous resting-state activity of 700 participants in the Human Connectome Project and 122 participants in the Restoring Active Memory project. We found that linear autoregressive models provide the best fit across both data types and three performance metrics: predictive power, computational complexity and the extent of the residual dynamics unexplained by the model. To explain this observation, we show that microscopic nonlinear dynamics can be counteracted or masked by four factors associated with macroscopic dynamics: averaging over space and over time, which are inherent to aggregated macroscopic brain activity, and observation noise and limited data samples, which stem from technological limitations. We therefore argue that easier-to-interpret linear models can faithfully describe macroscopic brain dynamics during resting-state conditions.
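The linear autoregressive family that this abstract reports as the best fit can be sketched in a few lines: fit x(t+1) = A x(t) by least squares and score one-step-ahead prediction on held-out data. The dynamics matrix, noise level, and data sizes below are invented toy values, not the paper's models or metrics:

```python
import numpy as np

rng = np.random.default_rng(3)

# Fit a first-order linear autoregressive model x_{t+1} = A x_t + noise
# to multivariate "resting-state" activity and score one-step prediction.
T, N = 1000, 5
A_true = 0.9 * np.linalg.qr(rng.normal(size=(N, N)))[0]  # stable dynamics
X = np.zeros((T, N))
for t in range(T - 1):
    X[t + 1] = X[t] @ A_true.T + 0.1 * rng.normal(size=N)

Xtr, Xte = X[:800], X[800:]
# Least-squares AR(1) fit on the training half: X[1:] ~ X[:-1] @ A_hat.
A_hat, *_ = np.linalg.lstsq(Xtr[:-1], Xtr[1:], rcond=None)

# One-step-ahead prediction on the held-out half.
pred = Xte[:-1] @ A_hat
resid = Xte[1:] - pred
r2 = 1 - np.sum(resid ** 2) / np.sum((Xte[1:] - Xte[1:].mean(0)) ** 2)
print(f"one-step predictive R^2: {r2:.3f}")
```

The paper's comparison is exactly of this kind: models are judged by held-out predictive power (alongside complexity and residual structure), not by how expressive they are in principle.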
Affiliation(s)
- Erfan Nozari
  - Department of Mechanical Engineering, University of California, Riverside, CA, USA
  - Department of Electrical and Computer Engineering, University of California, Riverside, CA, USA
  - Department of Bioengineering, University of California, Riverside, CA, USA
- Maxwell A Bertolero
  - Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Jennifer Stiso
  - Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
  - Department of Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
- Lorenzo Caciagli
  - Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Eli J Cornblath
  - Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
  - Department of Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
- Xiaosong He
  - Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Arun S Mahadevan
  - Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- George J Pappas
  - Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA, USA
- Dani S Bassett
  - Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
  - Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA, USA
  - Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA, USA
  - Department of Neurology, University of Pennsylvania, Philadelphia, PA, USA
  - Department of Psychiatry, University of Pennsylvania, Philadelphia, PA, USA
  - Santa Fe Institute, Santa Fe, NM, USA
6. Dyer EL, Kording K. Why the simplest explanation isn't always the best. Proc Natl Acad Sci U S A 2023; 120:e2319169120. PMID: 38117857; PMCID: PMC10756184; DOI: 10.1073/pnas.2319169120.
Affiliation(s)
- Eva L. Dyer
  - Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA 30332
- Konrad Kording
  - Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104
7. Weiss O, Bounds HA, Adesnik H, Coen-Cagli R. Modeling the diverse effects of divisive normalization on noise correlations. PLoS Comput Biol 2023; 19:e1011667. PMID: 38033166; PMCID: PMC10715670; DOI: 10.1371/journal.pcbi.1011667.
Abstract
Divisive normalization, a prominent descriptive model of neural activity, is employed by theories of neural coding across many different brain areas. Yet, the relationship between normalization and the statistics of neural responses beyond single neurons remains largely unexplored. Here we focus on noise correlations, a widely studied pairwise statistic, because its stimulus and state dependence plays a central role in neural coding. Existing models of covariability typically ignore normalization despite empirical evidence suggesting it affects correlation structure in neural populations. We therefore propose a pairwise stochastic divisive normalization model that accounts for the effects of normalization and other factors on covariability. We first show that normalization modulates noise correlations in qualitatively different ways depending on whether normalization is shared between neurons, and we discuss how to infer when normalization signals are shared. We then apply our model to calcium imaging data from mouse primary visual cortex (V1), and find that it accurately fits the data, often outperforming a popular alternative model of correlations. Our analysis indicates that normalization signals are often shared between V1 neurons in this dataset. Our model will enable quantifying the relation between normalization and covariability in a broad range of neural systems, which could provide new constraints on circuit mechanisms of normalization and their role in information transmission and representation.
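The qualitative effect described in this abstract, that shared versus independent normalization signals change noise correlations differently, can be reproduced with a toy simulation of the standard divisive form response = drive / (sigma + normalization). This is not the authors' pairwise stochastic model; the functional form is the generic textbook one and all constants and distributions are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate(shared, trials=20000):
    """Trial-to-trial responses of two neurons under divisive normalization.

    Each response is drive / (sigma + normalization). When `shared` is True,
    both neurons are divided by the same fluctuating normalization signal;
    otherwise each neuron draws its own.
    """
    sigma = 1.0
    drive = 5.0 + 0.5 * rng.normal(size=(trials, 2))      # independent drives
    if shared:
        norm = np.abs(2.0 + 0.5 * rng.normal(size=(trials, 1)))
        norm = np.repeat(norm, 2, axis=1)                 # common denominator
    else:
        norm = np.abs(2.0 + 0.5 * rng.normal(size=(trials, 2)))
    r = drive / (sigma + norm)
    return np.corrcoef(r[:, 0], r[:, 1])[0, 1]

rho_shared = simulate(shared=True)
rho_indep = simulate(shared=False)
print(f"noise correlation, shared normalization:      {rho_shared:+.3f}")
print(f"noise correlation, independent normalization: {rho_indep:+.3f}")
```

A shared fluctuating denominator acts as a common multiplicative gain and induces positive correlations even though the drives are independent, which is the kind of signature the paper uses to infer when normalization is shared.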
Affiliation(s)
- Oren Weiss
  - Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, New York, United States of America
- Hayley A. Bounds
  - Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, California, United States of America
- Hillel Adesnik
  - Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, California, United States of America
  - Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, California, United States of America
- Ruben Coen-Cagli
  - Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, New York, United States of America
  - Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York, United States of America
  - Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, New York, United States of America
8. Hennig MH. The sloppy relationship between neural circuit structure and function. J Physiol 2023; 601:3025-3035. PMID: 35876720; DOI: 10.1113/jp282757.
Abstract
Investigating and describing the relationships between the structure of a circuit and its function has a long tradition in neuroscience. Since neural circuits acquire their structure through sophisticated developmental programmes, and memories and experiences are maintained through synaptic modification, it is to be expected that structure is closely linked to function. Recent findings challenge this hypothesis from three different angles: function does not strongly constrain circuit parameters, many parameters in neural circuits are irrelevant and contribute little to function, and circuit parameters are unstable and subject to constant random drift. At the same time, however, recent work also showed that dynamics in neural circuit activity that is related to function are robust over time and across individuals. Here this apparent contradiction is addressed by considering the properties of neural manifolds that restrict circuit activity to functionally relevant subspaces, and it will be suggested that degenerate, anisotropic and unstable parameter spaces are closely related to the structure and implementation of functionally relevant neural manifolds.
Affiliation(s)
- Matthias H Hennig
  - Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh, Edinburgh, Scotland
9. Mitskopoulos L, Onken A. Discovering Low-Dimensional Descriptions of Multineuronal Dependencies. Entropy (Basel) 2023; 25:1026. PMID: 37509973; PMCID: PMC10378554; DOI: 10.3390/e25071026.
Abstract
Coordinated activity in neural populations is crucial for information processing. Shedding light on the multivariate dependencies that shape multineuronal responses is important to understand neural codes. However, existing approaches based on pairwise linear correlations are inadequate at capturing complicated interaction patterns and miss features that shape aspects of the population function. Copula-based approaches address these shortcomings by extracting the dependence structures in the joint probability distribution of population responses. In this study, we aimed to dissect neural dependencies with a C-Vine copula approach coupled with normalizing flows for estimating copula densities. While this approach allows for more flexibility compared to fitting parametric copulas, drawing insights on the significance of these dependencies from large sets of copula densities is challenging. To alleviate this challenge, we used a weighted non-negative matrix factorization procedure to leverage shared latent features in neural population dependencies. We validated the method on simulated data and applied it on copulas we extracted from recordings of neurons in the mouse visual cortex as well as in the macaque motor cortex. Our findings reveal that neural dependencies occupy low-dimensional subspaces, but distinct modules are synergistically combined to give rise to diverse interaction patterns that may serve the population function.
Affiliation(s)
- Arno Onken
  - School of Informatics, University of Edinburgh, Edinburgh EH8 9AB, UK
10. Mitskopoulos L, Amvrosiadis T, Onken A. Mixed vine copula flows for flexible modeling of neural dependencies. Front Neurosci 2022; 16:910122. PMID: 36213754; PMCID: PMC9546167; DOI: 10.3389/fnins.2022.910122.
Abstract
Recordings of complex neural population responses provide a unique opportunity for advancing our understanding of neural information processing at multiple scales and improving performance of brain computer interfaces. However, most existing analytical techniques fall short of capturing the complexity of interactions within the concerted population activity. Vine copula-based approaches have been shown to be successful at addressing complex high-order dependencies within the population, disentangled from the single-neuron statistics. However, most applications have focused on parametric copulas, which bear the risk of misspecifying dependence structures. In order to avoid this risk, we adopted a fully non-parametric approach for the single-neuron margins and copulas by using Neural Spline Flows (NSF). We validated the NSF framework on simulated data of continuous and discrete types with various forms of dependency structures and with different dimensionality. Overall, NSFs performed similarly to existing non-parametric estimators, while allowing for considerably faster and more flexible sampling, which also enables faster Monte Carlo estimation of copula entropy. Moreover, our framework was able to capture low- and higher-order heavy-tail dependencies in neuronal responses recorded in the mouse primary visual cortex during a visual learning task while the animal was navigating a virtual reality environment. These findings highlight an often ignored aspect of complexity in coordinated neuronal activity which can be important for understanding and deciphering collective neural dynamics for neurotechnological applications.
Affiliation(s)
- Lazaros Mitskopoulos (corresponding author)
  - School of Informatics, Institute for Adaptive and Neural Computation, University of Edinburgh, Edinburgh, United Kingdom
- Theoklitos Amvrosiadis
  - Centre for Discovery Brain Sciences, Edinburgh Medical School: Biomedical Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Arno Onken
  - School of Informatics, Institute for Adaptive and Neural Computation, University of Edinburgh, Edinburgh, United Kingdom
11. Grienberger C, Giovannucci A, Zeiger W, Portera-Cailliau C. Two-photon calcium imaging of neuronal activity. Nat Rev Methods Primers 2022; 2:67. PMID: 38124998; PMCID: PMC10732251; DOI: 10.1038/s43586-022-00147-1.
Abstract
In vivo two-photon calcium imaging (2PCI) is a technique used for recording neuronal activity in the intact brain. It is based on the principle that, when neurons fire action potentials, intracellular calcium levels rise, which can be detected using fluorescent molecules that bind to calcium. This Primer is designed for scientists who are considering embarking on experiments with 2PCI. We provide the reader with a background on the basic concepts behind calcium imaging and on the reasons why 2PCI is an increasingly powerful and versatile technique in neuroscience. The Primer explains the different steps involved in experiments with 2PCI, provides examples of what ideal preparations should look like and explains how data are analysed. We also discuss some of the current limitations of the technique, and the types of solutions to circumvent them. Finally, we conclude by anticipating what the future of 2PCI might look like, emphasizing some of the analysis pipelines that are being developed and international efforts for data sharing.
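A standard first step in the data analysis this Primer covers is converting a raw fluorescence trace to dF/F relative to a baseline estimate F0. The sketch below uses a synthetic trace with exponential-decay transients and a low-percentile baseline; the trace parameters and the 10th-percentile choice are illustrative assumptions, not a recommendation from the Primer:

```python
import numpy as np

rng = np.random.default_rng(5)

# Convert a raw fluorescence trace to dF/F: subtract and divide by a
# baseline estimate F0, i.e. dF/F = (F - F0) / F0.
T = 2000
baseline = 100.0
trace = baseline + rng.normal(scale=2.0, size=T)
# Add a few calcium-transient-like events (fast rise, slow decay).
for onset in (300, 900, 1500):
    t = np.arange(T - onset)
    trace[onset:] += 40.0 * np.exp(-t / 80.0)

F0 = np.percentile(trace, 10)        # robust baseline: a low percentile
dff = (trace - F0) / F0
print(f"baseline estimate F0 = {F0:.1f}, peak dF/F = {dff.max():.2f}")
```

Using a low percentile rather than the mean keeps the baseline from being dragged upward by the transients themselves; real pipelines often refine this with a rolling-window baseline.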
Affiliation(s)
- Christine Grienberger
  - Department of Biology and Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
- Andrea Giovannucci
  - Joint Department of Biomedical Engineering, University of North Carolina at Chapel Hill and North Carolina State University, Chapel Hill, NC, USA
  - UNC Neuroscience Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- William Zeiger
  - Department of Neurology, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
- Carlos Portera-Cailliau
  - Department of Neurology, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
  - Department of Neurobiology, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
12. Huang Y, Yu Z. Representation Learning for Dynamic Functional Connectivities via Variational Dynamic Graph Latent Variable Models. Entropy (Basel) 2022; 24:152. PMID: 35205448; PMCID: PMC8871213; DOI: 10.3390/e24020152.
Abstract
Latent variable models (LVMs) for neural population spikes have revealed informative low-dimensional dynamics about the neural data and have become powerful tools for analyzing and interpreting neural activity. However, these approaches are unable to determine the neurophysiological meaning of the inferred latent dynamics. On the other hand, emerging evidence suggests that dynamic functional connectivities (DFC) may be responsible for neural activity patterns underlying cognition or behavior. We are interested in studying how DFC are associated with the low-dimensional structure of neural activities. Most existing LVMs are based on a point process and fail to model evolving relationships. In this work, we introduce a dynamic graph as the latent variable and develop a Variational Dynamic Graph Latent Variable Model (VDGLVM), a representation learning model based on the variational information bottleneck framework. VDGLVM utilizes a graph generative model and a graph neural network to capture dynamic communication between nodes that one has no access to from the observed data. The proposed computational model provides guaranteed behavior-decoding performance and improves LVMs by associating the inferred latent dynamics with probable DFC.
Affiliation(s)
- Zhuliang Yu
  - College of Automation Science and Technology, South China University of Technology, Guangzhou 510641, China
13. Parametric Copula-GP model for analyzing multidimensional neuronal and behavioral relationships. PLoS Comput Biol 2022; 18:e1009799. PMID: 35089913; PMCID: PMC8827448; DOI: 10.1371/journal.pcbi.1009799.
Abstract
One of the main goals of current systems neuroscience is to understand how neuronal populations integrate sensory information to inform behavior. However, estimating stimulus or behavioral information that is encoded in high-dimensional neuronal populations is challenging. We propose a method based on parametric copulas which allows modeling joint distributions of neuronal and behavioral variables characterized by different statistics and timescales. To account for temporal or spatial changes in dependencies between variables, we model varying copula parameters by means of Gaussian Processes (GP). We validate the resulting Copula-GP framework on synthetic data and on neuronal and behavioral recordings obtained in awake mice. We show that the use of a parametric description of the high-dimensional dependence structure in our method provides better accuracy in mutual information estimation in higher dimensions compared to other non-parametric methods. Moreover, by quantifying the redundancy between neuronal and behavioral variables, our model exposed the location of the reward zone in an unsupervised manner (i.e., without using any explicit cues about the task structure). These results demonstrate that the Copula-GP framework is particularly useful for the analysis of complex multidimensional relationships between neuronal, sensory and behavioral variables. Understanding the relationship between a set of variables is a common problem in many fields, such as weather forecast or stock market data. In neuroscience, one of the main challenges is to characterize the dependencies between neuronal activity, sensory stimuli and behavioral outputs. A method of choice for modeling such statistical dependencies is based on copulas, which disentangle dependencies from single variable statistics. To account for changes in dependencies, we model changes in copula parameters by means of Gaussian Processes, conditioned on a task-related variable. 
The novelty of our approach includes 1) explicit modeling of the dependencies; and 2) combining different copulas to describe experimentally observed variability. We validate the goodness-of-fit as well as information estimates on synthetic data and on recordings from the visual cortex of mice performing a behavioral task. Our parametric model demonstrates significantly better performance in describing high dimensional dependencies compared to other commonly used techniques. We demonstrate that our model can estimate information and predict behaviorally-relevant parameters of the task without providing any explicit cues to the model. Our results indicate that our model is interpretable in the context of neuroscience applications, scalable to large datasets and suitable for accurate statistical modeling and information estimation.
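The central copula idea in this abstract, disentangling dependence from single-variable statistics, can be shown with a small rank-based toy. This is not the authors' Copula-GP model (no Gaussian Process, no parametric copula fitting); it only demonstrates that moving to the rank (copula) space removes the distortion that heavy-tailed or nonlinear margins impose on a linear correlation. All data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(6)

# Two variables with very different margins but the same underlying
# Gaussian dependence (correlation 0.7 in latent space).
n = 5000
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.7], [0.7, 1.0]], size=n)
x = np.exp(z[:, 0])     # heavy-tailed margin, e.g. a firing-rate-like variable
y = z[:, 1] ** 3        # strongly nonlinear margin, e.g. a behavioral readout

def ranks01(v):
    """Empirical probability integral transform: map values to (0, 1) ranks."""
    return (np.argsort(np.argsort(v)) + 0.5) / len(v)

pearson = np.corrcoef(x, y)[0, 1]                     # on raw values
spearman = np.corrcoef(ranks01(x), ranks01(y))[0, 1]  # on copula-space ranks
print(f"Pearson (raw margins): {pearson:.3f}")
print(f"Spearman (rank space): {spearman:.3f}")
```

The rank correlation is invariant to the monotone margin transforms and so recovers the latent dependence strength that the raw linear correlation understates; copula models build full probabilistic models on top of exactly this separation.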