1
Cathignol A, Kusch L, Angiolelli M, Lopez E, Polverino A, Romano A, Sorrentino G, Jirsa V, Rabuffo G, Sorrentino P. Magnetoencephalography Dimensionality Reduction Informed by Dynamic Brain States. Eur J Neurosci 2025; 61:e70128. [PMID: 40353396 PMCID: PMC12067517 DOI: 10.1111/ejn.70128]
Abstract
Complex spontaneous brain dynamics mirror the large number of interactions taking place among regions, which support higher functions. Such complexity is manifested in the interregional dependencies among signals derived from different brain areas, as observed using neuroimaging techniques such as magnetoencephalography. As these signals evolve, their dynamics produce numerous subsets of active regions at any given moment. Notably, converging evidence shows that these states can be understood in terms of transient coordinated events that spread across the brain over multiple spatial and temporal scales. These events can serve as a proxy for the 'effectiveness' of the dynamics, as they become stereotyped or disorganised in neurological diseases. However, given the high-dimensional nature of the data, representing them has been challenging thus far. Dimensionality reduction techniques are typically deployed to describe complex interdependencies and improve their interpretability, yet many of them lose information about the sequence of configurations that took place. Here, we leverage a recently described algorithm, potential of heat-diffusion for affinity-based transition embedding (PHATE), specifically designed to preserve the dynamics of the system in the low-dimensional embedding space. We analysed source-reconstructed resting-state magnetoencephalography from 18 healthy subjects to represent the dynamics of the configurations in low-dimensional space. After reduction with PHATE, we applied unsupervised K-means clustering to identify distinct clusters. The topography of the states is described, and the dynamics are represented as a transition matrix. All the results have been checked against null models, providing a parsimonious account of the large-scale, fast, aperiodic dynamics during resting state.
The study applies the PHATE algorithm to source-reconstructed magnetoencephalography (MEG) data, reducing dimensionality while preserving large-scale neural dynamics. Results reveal distinct configurations, or 'states', of brain activity, identified via unsupervised clustering. Their transitions are characterised by a transition matrix. This method offers a simplified yet rich view of complex brain interactions, opening new perspectives on large-scale brain dynamics in health and disease.
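The pipeline this abstract describes (embed, cluster, count state transitions) can be sketched in a few lines. This is a toy reconstruction on synthetic data, not the authors' code: PCA stands in for PHATE (the `phate` Python package exposes the same `fit_transform` call pattern), and the region count, number of states, and K-means settings are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 78))   # time points x brain regions (hypothetical sizes)

# Low-dimensional embedding; swap in phate.PHATE(n_components=3) for the real method.
embedding = PCA(n_components=3).fit_transform(X)

# Unsupervised clustering of embedded time points into discrete "states".
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(embedding)

# Transition matrix: row-normalised counts of state-to-state jumps over time.
n_states = 4
T = np.zeros((n_states, n_states))
for a, b in zip(labels[:-1], labels[1:]):
    T[a, b] += 1
T = T / T.sum(axis=1, keepdims=True)   # each row is a probability distribution
```

Checking the resulting transition matrix against surrogate (null) data, as the paper does, would then distinguish genuine state dynamics from chance structure.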
Affiliation(s)
- Annie E. Cathignol
- Faculty of Biology and Medicine, University of Lausanne, Lausanne, Switzerland
- School of Engineering and Management Vaud, HES‐SO University of Applied Sciences and Arts Western Switzerland, Yverdon‐les‐Bains, Switzerland
- Lionel Kusch
- Institut de Neurosciences des Systèmes, Aix‐Marseille Université, Marseille, France
- Emahnuel Troisi Lopez
- Institute of Applied Sciences and Intelligent Systems, National Research Council, Pozzuoli, Italy
- Antonella Romano
- Department of Medical Motor and Wellness Sciences, University of Naples “Parthenope”, Naples, Italy
- Giuseppe Sorrentino
- DiSEGIM, Department of Economics, Law, Cybersecurity, and Sports Sciences, University of Naples Parthenope, Nola, Italy
- Viktor Jirsa
- Institut de Neurosciences des Systèmes, Aix‐Marseille Université, Marseille, France
- Giovanni Rabuffo
- Institut de Neurosciences des Systèmes, Aix‐Marseille Université, Marseille, France
- Pierpaolo Sorrentino
- Institut de Neurosciences des Systèmes, Aix‐Marseille Université, Marseille, France
- Institute of Applied Sciences and Intelligent Systems, National Research Council, Pozzuoli, Italy
- Department of Biomedical Sciences, University of Sassari, Sassari, Italy
2
Legenkaia M, Bourdieu L, Monasson R. Uncertainties in signal recovery from heterogeneous and convoluted time series with principal component analysis. Phys Rev E 2025; 111:044314. [PMID: 40411023 DOI: 10.1103/physreve.111.044314]
Abstract
Principal component analysis (PCA) is one of the most widely used tools for extracting low-dimensional representations of data, in particular for time series. Performance is known to depend strongly on the quality (amount of noise) and the quantity of data. Here we investigate the impact of heterogeneities, often present in real data, on the reconstruction of low-dimensional trajectories and of their associated modes. We focus in particular on the effects of sample-to-sample fluctuations and of component-dependent temporal convolution and noise in the measurements. We derive analytical predictions for the error on the reconstructed trajectory and the confusion between the modes using the replica method in a high-dimensional setting, in which the number and the dimension of the data are comparable. We find in particular that sample-to-sample variability is deleterious for the reconstruction of the signal trajectory but beneficial for the inference of the modes, and that fluctuations in the temporal convolution kernels prevent perfect recovery of the latent modes even for very weak measurement noise. Our predictions are corroborated by simulations with synthetic data for a variety of control parameters.
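The basic setting of the paper, a low-dimensional trajectory buried in high-dimensional noisy measurements and recovered with PCA, is easy to reproduce numerically. The sizes, noise level, and random-walk latent below are illustrative assumptions, not the paper's model; the paper's contribution is the analytical replica-theory prediction of the recovery error, which this sketch does not reproduce.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
T, N, d = 500, 200, 2   # time points, measured channels, latent dimensions

# A smooth latent trajectory (random walk) mixed into N noisy channels.
latent = np.cumsum(rng.standard_normal((T, d)), axis=0)
mixing = rng.standard_normal((d, N))
X = latent @ mixing + 0.5 * rng.standard_normal((T, N))

pca = PCA(n_components=d)
recovered = pca.fit_transform(X)   # estimated low-dimensional trajectory

# With strong signal, the top-d subspace captures most of the variance.
explained = pca.explained_variance_ratio_.sum()
```

Introducing channel-dependent noise or convolution kernels into `X`, as the paper analyses, degrades `explained` and mixes the recovered modes.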
Affiliation(s)
- Mariia Legenkaia
- Université PSL, Institut de Biologie de l'Ecole Normale Supérieure (IBENS), Ecole Normale Supérieure, CNRS, INSERM, Paris F-75005, France
- Sorbonne Université, Laboratoire de Physique de l'ENS, PSL and CNRS-UMR8023, 24 Rue Lhomond, 75005 Paris, France
- Laurent Bourdieu
- Université PSL, Institut de Biologie de l'Ecole Normale Supérieure (IBENS), Ecole Normale Supérieure, CNRS, INSERM, Paris F-75005, France
- Rémi Monasson
- Sorbonne Université, Laboratoire de Physique de l'ENS, PSL and CNRS-UMR8023, 24 Rue Lhomond, 75005 Paris, France
3
Yang Z, Teaney NA, Buttermore ED, Sahin M, Afshar-Saber W. Harnessing the potential of human induced pluripotent stem cells, functional assays and machine learning for neurodevelopmental disorders. Front Neurosci 2025; 18:1524577. [PMID: 39844857 PMCID: PMC11750789 DOI: 10.3389/fnins.2024.1524577]
Abstract
Neurodevelopmental disorders (NDDs) affect 4.7% of the global population and are associated with delays in brain development and a spectrum of impairments that can lead to lifelong disability and even mortality. Biomarkers for accurate diagnosis and medications for effective treatment are lacking, in part due to the historical use of preclinical model systems, such as rodents and heterologous cell lines, that do not translate well to the clinic for neurological disorders. Human induced pluripotent stem cells (hiPSCs) are a promising in vitro system for modeling NDDs, providing opportunities to understand the mechanisms driving NDDs in human neurons. Functional assays, including patch clamping, multielectrode arrays, and imaging-based assays, are popular tools employed with hiPSC disease models for disease investigation. Recent progress in machine learning (ML) algorithms also presents unprecedented opportunities to advance NDD research. In this review, we compare two-dimensional and three-dimensional hiPSC formats for disease modeling, discuss the applications of functional assays, and offer insights on incorporating ML into hiPSC-based NDD research and drug screening.
Affiliation(s)
- Ziqin Yang
- Rosamund Stone Zander Translational Neuroscience Center, Boston Children’s Hospital, Harvard Medical School, Boston, MA, United States
- FM Kirby Neurobiology Center, Department of Neurology, Boston Children’s Hospital, Harvard Medical School, Boston, MA, United States
- Nicole A. Teaney
- Rosamund Stone Zander Translational Neuroscience Center, Boston Children’s Hospital, Harvard Medical School, Boston, MA, United States
- FM Kirby Neurobiology Center, Department of Neurology, Boston Children’s Hospital, Harvard Medical School, Boston, MA, United States
- Elizabeth D. Buttermore
- Rosamund Stone Zander Translational Neuroscience Center, Boston Children’s Hospital, Harvard Medical School, Boston, MA, United States
- FM Kirby Neurobiology Center, Department of Neurology, Boston Children’s Hospital, Harvard Medical School, Boston, MA, United States
- Human Neuron Core, Boston Children’s Hospital, Boston, MA, United States
- Mustafa Sahin
- Rosamund Stone Zander Translational Neuroscience Center, Boston Children’s Hospital, Harvard Medical School, Boston, MA, United States
- FM Kirby Neurobiology Center, Department of Neurology, Boston Children’s Hospital, Harvard Medical School, Boston, MA, United States
- Human Neuron Core, Boston Children’s Hospital, Boston, MA, United States
- Wardiya Afshar-Saber
- Rosamund Stone Zander Translational Neuroscience Center, Boston Children’s Hospital, Harvard Medical School, Boston, MA, United States
- FM Kirby Neurobiology Center, Department of Neurology, Boston Children’s Hospital, Harvard Medical School, Boston, MA, United States
4
Gozel O, Doiron B. Between-area communication through the lens of within-area neuronal dynamics. Sci Adv 2024; 10:eadl6120. [PMID: 39413191 PMCID: PMC11482330 DOI: 10.1126/sciadv.adl6120]
Abstract
A core problem in systems and circuits neuroscience is deciphering the origin of shared dynamics in neuronal activity: Do they emerge through local network interactions, or are they inherited from external sources? We explore this question with large-scale networks of spatially ordered spiking neuron models where a downstream network receives input from an upstream sender network. We show that linear measures of the communication between the sender and receiver networks can discriminate between emergent or inherited population dynamics. A match in the dimensionality of the sender and receiver population activities promotes faithful communication. In contrast, a nonlinear mapping between the sender to receiver activity, for example, through downstream emergent population-wide fluctuations, can impair linear communication. Our work exposes the benefits and limitations of linear measures when analyzing between-area communication in circuits with rich population-wide neuronal dynamics.
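The contrast drawn here, that a linear sender-to-receiver mapping supports faithful linear communication while a nonlinear mapping impairs it, can be caricatured with a linear regression score. This toy uses static Gaussian activity and an arbitrary linear map plus a tanh nonlinearity as assumptions; the paper studies spatially ordered spiking networks, which this does not model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
sender = rng.standard_normal((2000, 50))             # time x sender-area neurons
W = rng.standard_normal((50, 40)) / np.sqrt(50)      # hypothetical inter-area mapping
inherited = sender @ W + 0.1 * rng.standard_normal((2000, 40))

# Linear communication measure: how well sender activity linearly predicts receiver.
r2_linear = LinearRegression().fit(sender, inherited).score(sender, inherited)

# A strong pointwise nonlinearity in the receiver degrades the linear fit.
nonlinear = np.tanh(3.0 * inherited)
r2_nonlinear = LinearRegression().fit(sender, nonlinear).score(sender, nonlinear)
```

In this sketch `r2_linear` exceeds `r2_nonlinear`, mirroring the paper's point that linear measures can understate communication when the receiver's population dynamics transform the input nonlinearly.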
Affiliation(s)
- Olivia Gozel
- Departments of Neurobiology and Statistics, University of Chicago, Chicago, IL 60637, USA
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL 60637, USA
- Brent Doiron
- Departments of Neurobiology and Statistics, University of Chicago, Chicago, IL 60637, USA
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL 60637, USA
5
Wu S, Huang C, Snyder AC, Smith MA, Doiron B, Yu BM. Automated customization of large-scale spiking network models to neuronal population activity. Nat Comput Sci 2024; 4:690-705. [PMID: 39285002 PMCID: PMC12047676 DOI: 10.1038/s43588-024-00688-3]
Abstract
Understanding brain function is facilitated by constructing computational models that accurately reproduce aspects of brain activity. Networks of spiking neurons capture the underlying biophysics of neuronal circuits, yet their activity's dependence on model parameters is notoriously complex. As a result, heuristic methods have been used to configure spiking network models, which can lead to an inability to discover activity regimes complex enough to match large-scale neuronal recordings. Here we propose an automatic procedure, Spiking Network Optimization using Population Statistics (SNOPS), to customize spiking network models that reproduce the population-wide covariability of large-scale neuronal recordings. We first confirmed that SNOPS accurately recovers simulated neural activity statistics. Then, we applied SNOPS to recordings in macaque visual and prefrontal cortices and discovered previously unknown limitations of spiking network models. Taken together, SNOPS can guide the development of network models, thereby enabling deeper insight into how networks of neurons give rise to brain function.
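The fitting loop described here (simulate, summarise with population statistics, update parameters to shrink the mismatch) can be caricatured in miniature. This toy grid-searches a single rate parameter of a Poisson population against "recorded" statistics; SNOPS itself uses Bayesian optimisation over the parameters of spiking network models, and all numbers below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def population_stats(rate, n_neurons=100, n_bins=200):
    """Simulate Poisson spike counts; return (mean count, mean across-time variance)."""
    counts = rng.poisson(rate, size=(n_bins, n_neurons))
    return counts.mean(), counts.var(axis=0).mean()

target = population_stats(4.0)   # statistics of the "recorded" population

# Candidate model parameters; pick the one whose statistics match the target.
candidates = np.linspace(1.0, 8.0, 29)
costs = [float(np.sum((np.array(population_stats(r)) - np.array(target)) ** 2))
         for r in candidates]
best_rate = candidates[int(np.argmin(costs))]
```

Replacing the grid search with a sample-efficient optimiser, and the Poisson toy with a full spiking simulation, gives the shape of the actual SNOPS procedure.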
Affiliation(s)
- Shenghao Wu
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA
- Neural Basis of Cognition, Pittsburgh, PA, USA
- Chengcheng Huang
- Neural Basis of Cognition, Pittsburgh, PA, USA
- Department of Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA
- Adam C Snyder
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Matthew A Smith
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Neural Basis of Cognition, Pittsburgh, PA, USA
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Brent Doiron
- Department of Statistics, University of Chicago, Chicago, IL, USA
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL, USA
- Byron M Yu
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Neural Basis of Cognition, Pittsburgh, PA, USA
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
6
Bashford L, Rosenthal IA, Kellis S, Bjånes D, Pejsa K, Brunton BW, Andersen RA. Neural subspaces of imagined movements in parietal cortex remain stable over several years in humans. J Neural Eng 2024; 21:046059. [PMID: 39134021 PMCID: PMC11350602 DOI: 10.1088/1741-2552/ad6e19]
Abstract
Objective. A crucial goal in brain-machine interfacing is the long-term stability of neural decoding performance, ideally without regular retraining. Long-term stability has previously been demonstrated only in non-human primate experiments and only in primary sensorimotor cortices. Here we extend previous methods to determine long-term stability in humans by identifying and aligning low-dimensional structures in neural data. Approach. Over periods of 1106 and 871 days respectively, two participants completed an imagined center-out reaching task. The longitudinal accuracy between all day pairs was assessed by latent subspace alignment, using principal components analysis and canonical correlation analysis of multi-unit intracortical recordings from different brain regions (Brodmann Area 5, the Anterior Intraparietal Area and the junction of the postcentral and intraparietal sulcus). Main results. We show the long-term stable representation of neural activity in subspaces of intracortical recordings from higher-order association areas in humans. Significance. These results can be practically applied to significantly expand the longevity and generalizability of brain-computer interfaces. Clinical trials: NCT01849822, NCT01958086, NCT01964261.
Affiliation(s)
- L Bashford
- Division of Biology and Biological Engineering, and T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, United States of America
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, United Kingdom
- I A Rosenthal
- Division of Biology and Biological Engineering, and T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, United States of America
- S Kellis
- Division of Biology and Biological Engineering, and T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, United States of America
- D Bjånes
- Division of Biology and Biological Engineering, and T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, United States of America
- K Pejsa
- Division of Biology and Biological Engineering, and T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, United States of America
- B W Brunton
- Department of Biology, University of Washington, Seattle, WA, United States of America
- R A Andersen
- Division of Biology and Biological Engineering, and T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, United States of America
7
Ostojic S, Fusi S. Computational role of structure in neural activity and connectivity. Trends Cogn Sci 2024; 28:677-690. [PMID: 38553340 DOI: 10.1016/j.tics.2024.03.003]
Abstract
One major challenge of neuroscience is identifying structure in seemingly disorganized neural activity. Different types of structure have different computational implications that can help neuroscientists understand the functional role of a particular brain area. Here, we outline a unified approach to characterize structure by inspecting the representational geometry and the modularity properties of the recorded activity and show that a similar approach can also reveal structure in connectivity. We start by setting up a general framework for determining geometry and modularity in activity and connectivity and relating these properties with computations performed by the network. We then use this framework to review the types of structure found in recent studies of model networks performing three classes of computations.
Affiliation(s)
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL Research University, 75005 Paris, France
- Stefano Fusi
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Department of Neuroscience, Columbia University, New York, NY, USA
- Kavli Institute for Brain Science, Columbia University, New York, NY, USA
8
Churchland MM, Shenoy KV. Preparatory activity and the expansive null-space. Nat Rev Neurosci 2024; 25:213-236. [PMID: 38443626 DOI: 10.1038/s41583-024-00796-z]
Abstract
The study of the cortical control of movement experienced a conceptual shift over recent decades, as the basic currency of understanding shifted from single-neuron tuning towards population-level factors and their dynamics. This transition was informed by a maturing understanding of recurrent networks, where mechanism is often characterized in terms of population-level factors. By estimating factors from data, experimenters could test network-inspired hypotheses. Central to such hypotheses are 'output-null' factors that do not directly drive motor outputs yet are essential to the overall computation. In this Review, we highlight how the hypothesis of output-null factors was motivated by the venerable observation that motor-cortex neurons are active during movement preparation, well before movement begins. We discuss how output-null factors then became similarly central to understanding neural activity during movement. We discuss how this conceptual framework provided key analysis tools, making it possible for experimenters to address long-standing questions regarding motor control. We highlight an intriguing trend: as experimental and theoretical discoveries accumulate, the range of computational roles hypothesized to be subserved by output-null factors continues to expand.
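The central concept here, output-null factors, has a compact linear-algebra illustration: if a readout matrix W maps population activity to motor output, activity confined to the null space of W is invisible at the output, so preparation can proceed without causing movement. W and the dimensions below are hypothetical; the Review's argument is conceptual, not tied to this code.

```python
import numpy as np

rng = np.random.default_rng(5)
N, M = 10, 2                          # neurons, output (muscle) dimensions
W = rng.standard_normal((M, N))       # readout: output = W @ activity

# Orthonormal basis of the output-null space of W, via SVD.
_, _, vt = np.linalg.svd(W)
null_basis = vt[M:].T                 # shape (N, N - M): output-null directions

# "Preparatory" activity built entirely from output-null directions.
prep = null_basis @ rng.standard_normal(N - M)
output = W @ prep                     # numerically zero: preparation is silent at the output
```

The expansive theme of the Review is that such output-null dimensions, here a generic (N - M)-dimensional complement, are hypothesised to host an increasing range of computations beyond movement preparation.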
Affiliation(s)
- Mark M Churchland
- Department of Neuroscience, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Kavli Institute for Brain Science, Columbia University, New York, NY, USA
- Krishna V Shenoy
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Department of Neurobiology, Stanford University, Stanford, CA, USA
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Bio-X Institute, Stanford University, Stanford, CA, USA
- Howard Hughes Medical Institute at Stanford University, Stanford, CA, USA
9
Zimnik AJ, Cora Ames K, An X, Driscoll L, Lara AH, Russo AA, Susoy V, Cunningham JP, Paninski L, Churchland MM, Glaser JI. Identifying Interpretable Latent Factors with Sparse Component Analysis. bioRxiv 2024:2024.02.05.578988. [PMID: 38370650 PMCID: PMC10871230 DOI: 10.1101/2024.02.05.578988]
Abstract
In many neural populations, the computationally relevant signals are posited to be a set of 'latent factors' - signals shared across many individual neurons. Understanding the relationship between neural activity and behavior requires the identification of factors that reflect distinct computational roles. Methods for identifying such factors typically require supervision, which can be suboptimal if one is unsure how (or whether) factors can be grouped into distinct, meaningful sets. Here, we introduce Sparse Component Analysis (SCA), an unsupervised method that identifies interpretable latent factors. SCA seeks factors that are sparse in time and occupy orthogonal dimensions. With these simple constraints, SCA facilitates surprisingly clear parcellations of neural activity across a range of behaviors. We applied SCA to motor cortex activity from reaching and cycling monkeys, single-trial imaging data from C. elegans, and activity from a multitask artificial network. SCA consistently identified sets of factors that were useful in describing network computations.
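SCA itself is introduced in this preprint; as a loosely related, readily available point of comparison, scikit-learn's SparsePCA also seeks sparse components, though it imposes sparsity on the loadings rather than SCA's temporal sparsity and orthogonal-dimension constraints. The synthetic data below, two factors each active in a distinct time window, is an assumption built to echo the "sparse in time" motif.

```python
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(6)
t = np.linspace(0, 1, 300)
# Two latent factors, each transiently active in its own window (sparse in time).
f1 = np.exp(-((t - 0.25) ** 2) / 0.004)
f2 = np.exp(-((t - 0.75) ** 2) / 0.004)
loadings = rng.standard_normal((2, 40))            # factor-to-neuron loadings
X = np.outer(f1, loadings[0]) + np.outer(f2, loadings[1])
X += 0.05 * rng.standard_normal(X.shape)           # observation noise

model = SparsePCA(n_components=2, alpha=1.0, random_state=0)
factors = model.fit_transform(X)                   # time x components
```

Unlike PCA, which would mix the two transient events into variance-ordered components, sparsity-seeking decompositions tend to keep temporally localised events in separate factors, the interpretability property SCA is designed around.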
Affiliation(s)
- Andrew J Zimnik
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- K Cora Ames
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Xinyue An
- Department of Neurology, Northwestern University, Chicago, IL, USA
- Interdepartmental Neuroscience Program, Northwestern University, Chicago, IL, USA
- Laura Driscoll
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Allen Institute for Neural Dynamics, Allen Institute, Seattle, WA, USA
- Antonio H Lara
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Abigail A Russo
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Vladislav Susoy
- Department of Physics, Harvard University, Cambridge, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- John P Cunningham
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Department of Statistics, Columbia University, New York, NY, USA
- Liam Paninski
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Department of Statistics, Columbia University, New York, NY, USA
- Mark M Churchland
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Kavli Institute for Brain Science, Columbia University Medical Center, New York, NY, USA
- Joshua I Glaser
- Department of Neurology, Northwestern University, Chicago, IL, USA
- Department of Computer Science, Northwestern University, Evanston, IL, USA
10
Meyers EM. NeuroDecodeR: a package for neural decoding in R. Front Neuroinform 2024; 17:1275903. [PMID: 38235167 PMCID: PMC10791947 DOI: 10.3389/fninf.2023.1275903]
Abstract
Neural decoding is a powerful method for analyzing neural activity. However, the code needed to run a decoding analysis can be complex, which can present a barrier to using the method. In this paper we introduce a package that makes it easy to perform decoding analyses in the R programming language. We describe how the package is designed in a modular fashion that allows researchers to easily implement a range of different analyses. We also discuss how to format data for the package, and we give two examples of how to use it to analyze real data. We believe that this package, combined with the rich data-analysis ecosystem in R, will make it significantly easier for researchers to create reproducible decoding analyses, which should help increase the pace of neuroscience discoveries.
Affiliation(s)
- Ethan M. Meyers
- Department of Statistics and Data Science, Yale University, New Haven, CT, United States
- School of Cognitive Science, Hampshire College, Amherst, MA, United States
- The Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Cambridge, MA, United States
11
Uzun YS, Santos R, Marchetto MC, Padmanabhan K. Network size affects the complexity of activity in human iPSC-derived neuronal populations. bioRxiv 2023:2023.10.31.564939. [PMID: 37961249 PMCID: PMC10635014 DOI: 10.1101/2023.10.31.564939]
Abstract
Multi-electrode recordings of neural activity in cultures offer opportunities for understanding how the structure of a network gives rise to function. Although network size is hypothesized to be critical in determining the dynamics of activity, this relationship remains largely unexplored in human neural cultures. By applying new methods for analyzing neural activity to human iPSC-derived cultures at either low or high density, we uncovered the significant impact that neuron number has on the neurophysiological properties of individual cells (such as firing rates), on the collective behavior of the networks these cultures formed (as measured by entropy), and on the relationship between the two. As a result, simply changing neuronal density generated dynamics and network behavior that differed not just in degree but in kind. Beyond revealing the relationship between network structure and function, our findings provide a novel analytical framework for studying diseases in which network-level activity is affected.
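One common way to operationalise the entropy measure mentioned above is the Shannon entropy of binarised population activity patterns, which grows with the repertoire of states a network visits. The simulated "cultures" below, independent Bernoulli spiking at two population sizes, are stand-ins for the paper's recordings and make only the generic point that more neurons support a richer pattern distribution.

```python
import numpy as np

rng = np.random.default_rng(7)

def pattern_entropy(spikes):
    """Shannon entropy (bits) of the distribution over binary population patterns."""
    patterns, counts = np.unique(spikes, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

# Binarised activity: time bins x neurons, each neuron active with probability 0.2.
small = rng.random((5000, 3)) < 0.2    # "low-density" culture (3 neurons)
large = rng.random((5000, 8)) < 0.2    # "high-density" culture (8 neurons)

h_small = pattern_entropy(small)
h_large = pattern_entropy(large)       # larger network: higher pattern entropy
```

In real data, correlations between neurons pull the entropy below this independent-neuron ceiling, which is exactly the kind of structure-function signature the paper's framework is built to detect.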
Affiliation(s)
- Yavuz Selim Uzun
- Department of Physics and Astronomy, University of Rochester
- Del Monte Institute for Neuroscience, University of Rochester School of Medicine
- Renata Santos
- Université Paris Cité, Institute of Psychiatry and Neuroscience of Paris (IPNP), INSERM U1266, Signaling mechanisms in neurological disorders, 102 rue de la Santé, 75014 Paris, France
- Institut Imagine, INSERM U1163, Mechanisms and therapy of genetic brain diseases, Université Paris Cité, 24 Boulevard du Montparnasse, 75015 Paris, France
- Institut des Sciences Biologiques, CNRS, 16 rue Pierre et Marie Curie, 75005 Paris, France
- Krishnan Padmanabhan
- Del Monte Institute for Neuroscience, University of Rochester School of Medicine
- Department of Neuroscience, University of Rochester School of Medicine and Dentistry
- Center for Visual Science, University of Rochester School of Medicine and Dentistry
- Intellectual Development and Disability Research Center, University of Rochester School of Medicine and Dentistry
12
Rizzoglio F, Altan E, Ma X, Bodkin KL, Dekleva BM, Solla SA, Kennedy A, Miller LE. From monkeys to humans: observation-based EMG brain-computer interface decoders for humans with paralysis. J Neural Eng 2023; 20:056040. [PMID: 37844567 PMCID: PMC10618714 DOI: 10.1088/1741-2552/ad038e]
Abstract
Objective. Intracortical brain-computer interfaces (iBCIs) aim to enable individuals with paralysis to control the movement of virtual limbs and robotic arms. Because patients' paralysis prevents training a direct neural activity to limb movement decoder, most iBCIs rely on 'observation-based' decoding in which the patient watches a moving cursor while mentally envisioning making the movement. However, this reliance on observed target motion for decoder development precludes its application to the prediction of unobservable motor output like muscle activity. Here, we ask whether recordings of muscle activity from a surrogate individual performing the same movement as the iBCI patient can be used as target for an iBCI decoder.Approach. We test two possible approaches, each using data from a human iBCI user and a monkey, both performing similar motor actions. In one approach, we trained a decoder to predict the electromyographic (EMG) activity of a monkey from neural signals recorded from a human. We then contrast this to a second approach, based on the hypothesis that the low-dimensional 'latent' neural representations of motor behavior, known to be preserved across time for a given behavior, might also be preserved across individuals. We 'transferred' an EMG decoder trained solely on monkey data to the human iBCI user after using Canonical Correlation Analysis to align the human latent signals to those of the monkey.Main results. We found that both direct and transfer decoding approaches allowed accurate EMG predictions between two monkeys and from a monkey to a human.Significance. Our findings suggest that these latent representations of behavior are consistent across animals and even primate species. 
These methods are an important initial step in the development of iBCI decoders that generate EMG predictions that could serve as signals for a biomimetic decoder controlling motion and impedance of a prosthetic arm, or even muscle force directly through functional electrical stimulation.
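The cross-individual transfer step can be illustrated with a toy numpy sketch (not the authors' code): two synthetic "individuals" observe the same latent behavior through different linear maps, and Canonical Correlation Analysis recovers a shared space in which their signals line up. All dimensions and variable names below are illustrative assumptions.

```python
import numpy as np

def cca_align(X, Y):
    # Classic CCA: whiten each dataset, then SVD of the whitened cross-covariance
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Ux, Sx, Vxt = np.linalg.svd(Xc, full_matrices=False)
    Uy, Sy, Vyt = np.linalg.svd(Yc, full_matrices=False)
    U, _, Vt = np.linalg.svd(Ux.T @ Uy)
    Wx = Vxt.T @ np.diag(1.0 / Sx) @ U      # maps centered X into the canonical space
    Wy = Vyt.T @ np.diag(1.0 / Sy) @ Vt.T   # maps centered Y into the canonical space
    return Wx, Wy

rng = np.random.default_rng(0)
shared = rng.standard_normal((500, 4))            # latent behavior common to both individuals
monkey = shared @ rng.standard_normal((4, 4))     # monkey latent signals (one mixed view)
human = shared @ rng.standard_normal((4, 4))      # human latent signals (another mixed view)

Wm, Wh = cca_align(monkey, human)
am = (monkey - monkey.mean(0)) @ Wm
ah = (human - human.mean(0)) @ Wh
# In this noise-free toy, the aligned canonical components correlate near 1,
# which is what lets a decoder trained on one individual read out the other
corr = np.mean([np.corrcoef(am[:, i], ah[:, i])[0, 1] for i in range(4)])
```

In the paper's setting the aligned human latents would then be fed to the EMG decoder fit on monkey data; here the point is only that CCA finds the shared coordinate system.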
Affiliation(s)
- Fabio Rizzoglio
- Department of Neuroscience, Northwestern University, Chicago, IL, United States of America
- Ege Altan
- Department of Neuroscience, Northwestern University, Chicago, IL, United States of America
- Department of Biomedical Engineering, Northwestern University, Evanston, IL, United States of America
- Xuan Ma
- Department of Neuroscience, Northwestern University, Chicago, IL, United States of America
- Kevin L Bodkin
- Department of Neuroscience, Northwestern University, Chicago, IL, United States of America
- Brian M Dekleva
- Rehab Neural Engineering Labs, Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, PA, United States of America
- Sara A Solla
- Department of Neuroscience, Northwestern University, Chicago, IL, United States of America
- Department of Physics and Astronomy, Northwestern University, Evanston, IL, United States of America
- Ann Kennedy
- Department of Neuroscience, Northwestern University, Chicago, IL, United States of America
- Lee E Miller
- Department of Neuroscience, Northwestern University, Chicago, IL, United States of America
- Department of Biomedical Engineering, Northwestern University, Evanston, IL, United States of America
- Shirley Ryan AbilityLab, Chicago, IL, United States of America
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL, United States of America
13
Wu S, Huang C, Snyder A, Smith M, Doiron B, Yu B. Automated customization of large-scale spiking network models to neuronal population activity. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.09.21.558920. [PMID: 37790533 PMCID: PMC10542160 DOI: 10.1101/2023.09.21.558920] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/05/2023]
Abstract
Understanding brain function is facilitated by constructing computational models that accurately reproduce aspects of brain activity. Networks of spiking neurons capture the underlying biophysics of neuronal circuits, yet the dependence of their activity on model parameters is notoriously complex. As a result, heuristic methods have been used to configure spiking network models, which can lead to an inability to discover activity regimes complex enough to match large-scale neuronal recordings. Here we propose an automatic procedure, Spiking Network Optimization using Population Statistics (SNOPS), to customize spiking network models that reproduce the population-wide covariability of large-scale neuronal recordings. We first confirmed that SNOPS accurately recovers simulated neural activity statistics. Then, we applied SNOPS to recordings in macaque visual and prefrontal cortices and discovered previously unknown limitations of spiking network models. Taken together, SNOPS can guide the development of network models and thereby enable deeper insight into how networks of neurons give rise to brain function.
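The SNOPS idea, tuning model parameters until simulated population statistics match the recorded ones, can be sketched as follows. The "network" here is a deliberately simplified shared-drive Poisson population and the grid search stands in for SNOPS's actual optimization loop, so everything below is an illustrative assumption rather than the published method.

```python
import numpy as np

def population_stats(counts):
    # Fitting targets in the spirit of SNOPS: mean rate and mean pairwise covariance
    n = counts.shape[1]
    pair_cov = np.cov(counts.T)[np.triu_indices(n, k=1)].mean()
    return counts.mean(), pair_cov

def simulate(gain, seed, n_neurons=40, n_bins=500):
    # Stand-in for a spiking network: a shared latent drive scaled by `gain`,
    # rectified, then turned into Poisson spike counts
    rng = np.random.default_rng(seed)
    shared = rng.standard_normal((n_bins, 1))
    rate = np.clip(5.0 + gain * shared, 0.0, None) @ np.ones((1, n_neurons))
    return rng.poisson(rate)

# Pretend this is the experimental recording the model should reproduce:
target_mean, target_cov = population_stats(simulate(2.0, seed=0))

# Naive grid search standing in for the automated optimization
best_gain, best_cost = None, np.inf
for g in np.linspace(0.0, 4.0, 21):
    m, c = population_stats(simulate(g, seed=1))
    cost = (m - target_mean) ** 2 + (c - target_cov) ** 2
    if cost < best_cost:
        best_gain, best_cost = g, cost
```

The search recovers a gain near the value used to generate the "recording", which is the sanity check the paper also performs on simulated data before fitting real recordings.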
Affiliation(s)
- Shenghao Wu
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Chengcheng Huang
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Department of Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA
- Adam Snyder
- Department of Neuroscience, University of Rochester, Rochester, NY, USA
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Center for Visual Science, University of Rochester, Rochester, NY, USA
- Matthew Smith
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Brent Doiron
- Department of Neurobiology, University of Chicago, Chicago, IL, USA
- Department of Statistics, University of Chicago, Chicago, IL, USA
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL, USA
- Byron Yu
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
14
Bashford L, Rosenthal I, Kellis S, Bjånes D, Pejsa K, Brunton BW, Andersen RA. Neural subspaces of imagined movements in parietal cortex remain stable over several years in humans. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.07.05.547767. [PMID: 37461446 PMCID: PMC10350015 DOI: 10.1101/2023.07.05.547767] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 07/28/2023]
Abstract
A crucial goal in brain-machine interfacing is long-term stability of neural decoding performance, ideally without regular retraining. Here we demonstrate stable neural decoding over several years in two human participants, achieved by latent subspace alignment of multi-unit intracortical recordings in posterior parietal cortex. These results can be practically applied to significantly expand the longevity and generalizability of future movement decoding devices.
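A minimal sketch of latent subspace alignment, assuming (as is common in this literature, though not necessarily the authors' exact pipeline) an orthogonal Procrustes fit between latent factors from two sessions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Latent factors for the same imagined movements, "recorded" years apart;
# the manifold is assumed stable up to an unknown rotation plus noise
day0 = rng.standard_normal((300, 3))
R_true = np.linalg.qr(rng.standard_normal((3, 3)))[0]
dayN = day0 @ R_true + 0.05 * rng.standard_normal((300, 3))

# Orthogonal Procrustes: rotation R minimizing ||dayN @ R - day0||_F, so a
# decoder fit on day0 latents can be reused on aligned later-session data
U, _, Vt = np.linalg.svd(dayN.T @ day0)
R = U @ Vt
residual = np.linalg.norm(dayN @ R - day0) / np.linalg.norm(day0)
```

After alignment the residual is at the injected noise level, which is the sense in which a fixed decoder can keep working without retraining.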
Affiliation(s)
- L Bashford
- Division of Biology and Biological Engineering, and T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, USA
- I Rosenthal
- Division of Biology and Biological Engineering, and T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, USA
- S Kellis
- Division of Biology and Biological Engineering, and T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, USA
- D Bjånes
- Division of Biology and Biological Engineering, and T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, USA
- K Pejsa
- Division of Biology and Biological Engineering, and T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, USA
- BW Brunton
- Department of Biology, University of Washington, Seattle, WA, USA
- RA Andersen
- Division of Biology and Biological Engineering, and T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, USA
15
Sabesan S, Fragner A, Bench C, Drakopoulos F, Lesica NA. Large-scale electrophysiology and deep learning reveal distorted neural signal dynamics after hearing loss. eLife 2023; 12:e85108. [PMID: 37162188 PMCID: PMC10202456 DOI: 10.7554/elife.85108] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2022] [Accepted: 04/27/2023] [Indexed: 05/11/2023] Open
Abstract
Listeners with hearing loss often struggle to understand speech in noise, even with a hearing aid. To better understand the auditory processing deficits that underlie this problem, we made large-scale brain recordings from gerbils, a common animal model for human hearing, while presenting a large database of speech and noise sounds. We first used manifold learning to identify the neural subspace in which speech is encoded and found that it is low-dimensional and that the dynamics within it are profoundly distorted by hearing loss. We then trained a deep neural network (DNN) to replicate the neural coding of speech with and without hearing loss and analyzed the underlying network dynamics. We found that hearing loss primarily impacts spectral processing, creating nonlinear distortions in cross-frequency interactions that result in a hypersensitivity to background noise that persists even after amplification with a hearing aid. Our results identify a new focus for efforts to design improved hearing aids and demonstrate the power of DNNs as a tool for the study of central brain structures.
Affiliation(s)
- Ciaran Bench
- Ear Institute, University College London, London, United Kingdom
16
Safavi S, Panagiotaropoulos TI, Kapoor V, Ramirez-Villegas JF, Logothetis NK, Besserve M. Uncovering the organization of neural circuits with Generalized Phase Locking Analysis. PLoS Comput Biol 2023; 19:e1010983. [PMID: 37011110 PMCID: PMC10109521 DOI: 10.1371/journal.pcbi.1010983] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2022] [Revised: 04/17/2023] [Accepted: 02/27/2023] [Indexed: 04/05/2023] Open
Abstract
Despite the considerable progress of in vivo neural recording techniques, inferring the biophysical mechanisms underlying large scale coordination of brain activity from neural data remains challenging. One obstacle is the difficulty to link high dimensional functional connectivity measures to mechanistic models of network activity. We address this issue by investigating spike-field coupling (SFC) measurements, which quantify the synchronization between, on the one hand, the action potentials produced by neurons, and on the other hand mesoscopic "field" signals, reflecting subthreshold activities at possibly multiple recording sites. As the number of recording sites gets large, the amount of pairwise SFC measurements becomes overwhelmingly challenging to interpret. We develop Generalized Phase Locking Analysis (GPLA) as an interpretable dimensionality reduction of this multivariate SFC. GPLA describes the dominant coupling between field activity and neural ensembles across space and frequencies. We show that GPLA features are biophysically interpretable when used in conjunction with appropriate network models, such that we can identify the influence of underlying circuit properties on these features. We demonstrate the statistical benefits and interpretability of this approach in various computational models and Utah array recordings. The results suggest that GPLA, used jointly with biophysical modeling, can help uncover the contribution of recurrent microcircuits to the spatio-temporal dynamics observed in multi-channel experimental recordings.
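The core reduction in GPLA is a singular value decomposition of the matrix of pairwise spike-field couplings. A hedged numpy sketch on synthetic data follows; the rank-1 coupling structure, dimensions, and noise level are illustrative assumptions, not the paper's estimator in full.

```python
import numpy as np

rng = np.random.default_rng(3)
n_units, n_sites = 30, 8

# Synthetic pairwise spike-field coupling matrix: one dominant mode links a
# neural ensemble (unit_vec) to a spatial field pattern (site_vec), plus noise
unit_vec = rng.standard_normal(n_units) + 1j * rng.standard_normal(n_units)
site_vec = rng.standard_normal(n_sites) + 1j * rng.standard_normal(n_sites)
noise = 0.1 * (rng.standard_normal((n_units, n_sites))
               + 1j * rng.standard_normal((n_units, n_sites)))
C = np.outer(unit_vec, site_vec.conj()) + noise

# GPLA-style reduction: the dominant singular triplet summarizes the
# multivariate spike-field coupling as one ensemble / field-pattern pair
U, S, Vh = np.linalg.svd(C)
ensemble_pattern, field_pattern = U[:, 0], Vh[0]
coupling_strength = S[0] / S.sum()   # share of coupling captured by the first mode
```

When the data really contain one dominant coupling mode, the first singular value carries most of the spectrum, and the left/right singular vectors give the interpretable unit and site weightings.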
Affiliation(s)
- Shervin Safavi
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- IMPRS for Cognitive and Systems Neuroscience, University of Tübingen, Tübingen, Germany
- Theofanis I. Panagiotaropoulos
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, Université Paris-Saclay, NeuroSpin center, 91191 Gif/Yvette, France
- Vishal Kapoor
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- International Center for Primate Brain Research (ICPBR), Center for Excellence in Brain Science and Intelligence Technology (CEBSIT), Chinese Academy of Sciences (CAS), Shanghai 201602, China
- Juan F. Ramirez-Villegas
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Institute of Science and Technology Austria (IST Austria), Klosterneuburg, Austria
- Nikos K. Logothetis
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- International Center for Primate Brain Research (ICPBR), Center for Excellence in Brain Science and Intelligence Technology (CEBSIT), Chinese Academy of Sciences (CAS), Shanghai 201602, China
- Centre for Imaging Sciences, Biomedical Imaging Institute, The University of Manchester, Manchester, United Kingdom
- Michel Besserve
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Department of Empirical Inference, Max Planck Institute for Intelligent Systems and MPI-ETH Center for Learning Systems, Tübingen, Germany
17
Affiliation(s)
- Max Dabagia
- School of Computer Science, Georgia Institute of Technology, Atlanta, GA, USA
- Konrad P Kording
- Department of Biomedical Engineering, University of Pennsylvania, Philadelphia, PA, USA
- Eva L Dyer
- Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA, USA
18
DePasquale B, Sussillo D, Abbott LF, Churchland MM. The centrality of population-level factors to network computation is demonstrated by a versatile approach for training spiking networks. Neuron 2023; 111:631-649.e10. [PMID: 36630961 PMCID: PMC10118067 DOI: 10.1016/j.neuron.2022.12.007] [Citation(s) in RCA: 25] [Impact Index Per Article: 12.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2020] [Revised: 06/17/2022] [Accepted: 12/05/2022] [Indexed: 01/12/2023]
Abstract
Neural activity is often described in terms of population-level factors extracted from the responses of many neurons. Factors provide a lower-dimensional description with the aim of shedding light on network computations. Yet, mechanistically, computations are performed not by continuously valued factors but by interactions among neurons that spike discretely and variably. Models provide a means of bridging these levels of description. We developed a general method for training model networks of spiking neurons by leveraging factors extracted from either data or firing-rate-based networks. In addition to providing a useful model-building framework, this formalism illustrates how reliable and continuously valued factors can arise from seemingly stochastic spiking. Our framework establishes procedures for embedding this property in network models with different levels of realism. The relationship between spikes and factors in such networks provides a foundation for interpreting (and subtly redefining) commonly used quantities such as firing rates.
Affiliation(s)
- Brian DePasquale
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA; Department of Neuroscience, Columbia University, New York, NY, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- David Sussillo
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- L F Abbott
- Department of Neuroscience, Columbia University, New York, NY, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Department of Physiology and Cellular Biophysics, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA
- Mark M Churchland
- Department of Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
19
Abstract
Despite great strides in both machine learning and neuroscience, we do not know how the human brain solves problems in the general sense. We approach this question by drawing on the framework of engineering control theory. We demonstrate a computational neural model with only localist learning laws that is able to find solutions to arbitrary problems. The model and humans perform a multi-step task with arbitrary and changing starting and desired ending states. Using a combination of computational neural modeling, human fMRI, and representational similarity analysis, we show here that the roles of a number of brain regions can be reinterpreted as interacting mechanisms of a control theoretic system. The results suggest a new set of functional perspectives on the orbitofrontal cortex, hippocampus, basal ganglia, anterior temporal lobe, lateral prefrontal cortex, and visual cortex, as well as a new path toward artificial general intelligence.
20
Guilbert J, Légaré A, De Koninck P, Desrosiers P, Desjardins M. Toward an integrative neurovascular framework for studying brain networks. NEUROPHOTONICS 2022; 9:032211. [PMID: 35434179 PMCID: PMC8989057 DOI: 10.1117/1.nph.9.3.032211] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/03/2021] [Accepted: 03/11/2022] [Indexed: 05/28/2023]
Abstract
Brain functional connectivity based on the measure of blood oxygen level-dependent (BOLD) functional magnetic resonance imaging (fMRI) signals has become one of the most widely used measurements in human neuroimaging. However, the nature of the functional networks revealed by BOLD fMRI can be ambiguous, as highlighted by a recent series of experiments that have suggested that typical resting-state networks can be replicated from purely vascular or physiologically driven BOLD signals. After going through a brief review of the key concepts of brain network analysis, we explore how the vascular and neuronal systems interact to give rise to the brain functional networks measured with BOLD fMRI. This leads us to emphasize a view of the vascular network not only as a confounding element in fMRI but also as a functionally relevant system that is entangled with the neuronal network. To study the vascular and neuronal underpinnings of BOLD functional connectivity, we consider a combination of methodological avenues based on multiscale and multimodal optical imaging in mice, used in combination with computational models that allow the integration of vascular information to explain functional connectivity.
Affiliation(s)
- Jérémie Guilbert
- Université Laval, Department of Physics, Physical Engineering, and Optics, Québec, Canada
- Université Laval, Centre de recherche du CHU de Québec, Québec, Canada
- Antoine Légaré
- Université Laval, Department of Physics, Physical Engineering, and Optics, Québec, Canada
- Centre de recherche CERVO, Québec, Canada
- Université Laval, Department of Biochemistry, Microbiology, and Bioinformatics, Québec, Canada
- Paul De Koninck
- Centre de recherche CERVO, Québec, Canada
- Université Laval, Department of Biochemistry, Microbiology, and Bioinformatics, Québec, Canada
- Patrick Desrosiers
- Université Laval, Department of Physics, Physical Engineering, and Optics, Québec, Canada
- Centre de recherche CERVO, Québec, Canada
- Michèle Desjardins
- Université Laval, Department of Physics, Physical Engineering, and Optics, Québec, Canada
- Université Laval, Centre de recherche du CHU de Québec, Québec, Canada
21
Yoder JA, Anderson CB, Wang C, Izquierdo EJ. Reinforcement Learning for Central Pattern Generation in Dynamical Recurrent Neural Networks. Front Comput Neurosci 2022; 16:818985. [PMID: 35465269 PMCID: PMC9028035 DOI: 10.3389/fncom.2022.818985] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2021] [Accepted: 03/10/2022] [Indexed: 11/21/2022] Open
Abstract
Lifetime learning, or the change (or acquisition) of behaviors during a lifetime, based on experience, is a hallmark of living organisms. Multiple mechanisms may be involved, but biological neural circuits have repeatedly demonstrated a vital role in the learning process. These neural circuits are recurrent, dynamic, and non-linear, and models of neural circuits employed in neuroscience and neuroethology accordingly tend to involve continuous-time, non-linear, and recurrently interconnected components. Currently, the main approach for finding configurations of dynamical recurrent neural networks that demonstrate behaviors of interest is using stochastic search techniques, such as evolutionary algorithms. In an evolutionary algorithm, these dynamic recurrent neural networks are evolved to perform the behavior over multiple generations, through selection, inheritance, and mutation, across a population of solutions. Although these systems can be evolved to exhibit lifetime learning behavior, there are no explicit rules built into these dynamic recurrent neural networks that facilitate learning during their lifetime (e.g., reward signals). In this work, we examine a biologically plausible lifetime learning mechanism for dynamical recurrent neural networks. We focus on a recently proposed reinforcement learning mechanism inspired by neuromodulatory reward signals and ongoing fluctuations in synaptic strengths. Specifically, we extend one of the best-studied and most-commonly used dynamic recurrent neural networks to incorporate the reinforcement learning mechanism. First, we demonstrate that this extended dynamical system (model and learning mechanism) can autonomously learn to perform a central pattern generation task. Second, we compare the robustness and efficiency of the reinforcement learning rules in relation to two baseline models, a random walk and a hill-climbing walk through parameter space.
Third, we systematically study the effect of the different meta-parameters of the learning mechanism on the behavioral learning performance. Finally, we report on preliminary results exploring the generality and scalability of this learning mechanism for dynamical neural networks as well as directions for future work.
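The reward-modulated fluctuation mechanism can be caricatured in a few lines: random synaptic perturbations are consolidated whenever they increase a reward signal. The 2-neuron linear "circuit" and its oscillation score below are stand-ins chosen for brevity, not the paper's CTRNN model or its learning rule.

```python
import numpy as np

rng = np.random.default_rng(4)

def oscillation_score(w):
    # Toy fitness: strength of oscillation in a 2-neuron linear circuit,
    # read out from the imaginary parts of its eigenvalues
    J = np.array([[0.0, w[0]], [w[1], 0.0]])
    return float(np.abs(np.linalg.eigvals(J).imag).sum())

w = np.array([0.1, -0.05])       # opposite-sign coupling supports oscillation
best = oscillation_score(w)
for _ in range(300):
    trial = w + 0.05 * rng.standard_normal(2)   # ongoing synaptic fluctuation
    score = oscillation_score(trial)
    if score > best:                            # "reward" consolidates the change
        w, best = trial, score
```

Over the trials the consolidated weights strengthen the oscillatory mode, the same shape of behavior (improvement via rewarded fluctuations) that the paper evaluates against random-walk and hill-climbing baselines.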
Affiliation(s)
- Jason A. Yoder
- Computer Science and Software Engineering Department, Rose-Hulman Institute of Technology, Terre Haute, IN, United States
- *Correspondence: Jason A. Yoder
- Cooper B. Anderson
- Computer Science and Software Engineering Department, Rose-Hulman Institute of Technology, Terre Haute, IN, United States
- Cehong Wang
- Computer Science and Software Engineering Department, Rose-Hulman Institute of Technology, Terre Haute, IN, United States
- Eduardo J. Izquierdo
- Computational Neuroethology Lab, Cognitive Science Program, Indiana University, Bloomington, IN, United States
22
Bae H, Lee S, Lee CY, Kim CE. A Novel Framework for Understanding the Pattern Identification of Traditional Asian Medicine From the Machine Learning Perspective. Front Med (Lausanne) 2022; 8:763533. [PMID: 35186965 PMCID: PMC8853725 DOI: 10.3389/fmed.2021.763533] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2021] [Accepted: 12/23/2021] [Indexed: 11/13/2022] Open
Abstract
Pattern identification (PI), a unique diagnostic system of traditional Asian medicine, is the process of inferring the pathological nature or location of lesions based on observed symptoms. Despite its critical role in theory and practice, the information processing principles underlying PI systems are generally unclear. We present a novel framework for comprehending the PI system from a machine learning perspective. After a brief introduction to the dimensionality of the data, we propose that the PI system can be modeled as a dimensionality reduction process and discuss analytical issues that can be addressed using our framework. Our framework promotes a new approach in understanding the underlying mechanisms of the PI process with strong mathematical tools, thereby enriching the explanatory theories of traditional Asian medicine.
Affiliation(s)
- Hyojin Bae
- Department of Physiology, Gachon University College of Korean Medicine, Seongnam, South Korea
- Sanghun Lee
- Korean Medicine Data Division, Korea Institute of Oriental Medicine, Daejeon, South Korea
- Department of Korean Convergence Medical Science, University of Science and Technology, Daejeon, South Korea
- Choong-Yeol Lee
- Department of Physiology, Gachon University College of Korean Medicine, Seongnam, South Korea
- Chang-Eop Kim
- Department of Physiology, Gachon University College of Korean Medicine, Seongnam, South Korea
23
Urai AE, Doiron B, Leifer AM, Churchland AK. Large-scale neural recordings call for new insights to link brain and behavior. Nat Neurosci 2022; 25:11-19. [PMID: 34980926 DOI: 10.1038/s41593-021-00980-9] [Citation(s) in RCA: 117] [Impact Index Per Article: 39.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2020] [Accepted: 11/08/2021] [Indexed: 12/17/2022]
Abstract
Neuroscientists today can measure activity from more neurons than ever before, and are facing the challenge of connecting these brain-wide neural recordings to computation and behavior. In the present review, we first describe emerging tools and technologies being used to probe large-scale brain activity and new approaches to characterize behavior in the context of such measurements. We next highlight insights obtained from large-scale neural recordings in diverse model systems, and argue that some of these pose a challenge to traditional theoretical frameworks. Finally, we elaborate on existing modeling frameworks to interpret these data, and argue that the interpretation of brain-wide neural recordings calls for new theoretical approaches that may depend on the desired level of understanding. These advances in both neural recordings and theory development will pave the way for critical advances in our understanding of the brain.
Affiliation(s)
- Anne E Urai
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
- Cognitive Psychology Unit, Leiden University, Leiden, The Netherlands
- Anne K Churchland
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
- University of California Los Angeles, Los Angeles, CA, USA
24
Koay SA, Charles AS, Thiberge SY, Brody CD, Tank DW. Sequential and efficient neural-population coding of complex task information. Neuron 2021; 110:328-349.e11. [PMID: 34776042 DOI: 10.1016/j.neuron.2021.10.020] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2021] [Revised: 08/20/2021] [Accepted: 10/13/2021] [Indexed: 11/28/2022]
Abstract
Recent work has highlighted that many types of variables are represented in each neocortical area. How can these many neural representations be organized together without interference and coherently maintained/updated through time? We recorded from excitatory neural populations in posterior cortices as mice performed a complex, dynamic task involving multiple interrelated variables. The neural encoding implied that highly correlated task variables were represented by less-correlated neural population modes, while pairs of neurons exhibited a spectrum of signal correlations. This finding relates to principles of efficient coding, but notably utilizes neural population modes as the encoding unit and suggests partial whitening of task-specific information where different variables are represented with different signal-to-noise levels. Remarkably, this encoding function was multiplexed with sequential neural dynamics yet reliably followed changes in task-variable correlations throughout the trial. We suggest that neural circuits can implement time-dependent encodings in a simple way using random sequential dynamics as a temporal scaffold.
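The "partial whitening" observation, that highly correlated task variables end up represented by less-correlated population modes, can be illustrated with a ZCA whitening sketch on synthetic variables (a toy construction for intuition, not the study's analysis):

```python
import numpy as np

rng = np.random.default_rng(7)

# Two highly correlated "task variables" (e.g., position and heading)
z = rng.standard_normal((2000, 1))
task_vars = np.hstack([z, 0.9 * z + 0.1 * rng.standard_normal((2000, 1))])
corr_before = np.corrcoef(task_vars.T)[0, 1]

# ZCA whitening: re-express the variables as decorrelated population modes
cov = np.cov(task_vars.T)
evals, evecs = np.linalg.eigh(cov)
W = evecs @ np.diag(evals ** -0.5) @ evecs.T
modes = task_vars @ W
corr_after = np.corrcoef(modes.T)[0, 1]
```

The input variables are nearly redundant, while the whitened modes are decorrelated; full whitening also equalizes variances, whereas the paper reports partial whitening in which different variables keep different signal-to-noise levels.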
Affiliation(s)
- Sue Ann Koay
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
- Adam S Charles
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
- Stephan Y Thiberge
- Bezos Center for Neural Circuit Dynamics, Princeton University, Princeton, NJ 08544, USA
- Carlos D Brody
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA; Department of Molecular Biology, Princeton University, Princeton, NJ 08544, USA; Howard Hughes Medical Institute, Princeton University, Princeton, NJ 08544, USA
- David W Tank
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA; Bezos Center for Neural Circuit Dynamics, Princeton University, Princeton, NJ 08544, USA; Department of Molecular Biology, Princeton University, Princeton, NJ 08544, USA
25
Altan E, Solla SA, Miller LE, Perreault EJ. Estimating the dimensionality of the manifold underlying multi-electrode neural recordings. PLoS Comput Biol 2021; 17:e1008591. [PMID: 34843461 PMCID: PMC8659648 DOI: 10.1371/journal.pcbi.1008591] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2020] [Revised: 12/09/2021] [Accepted: 11/11/2021] [Indexed: 01/07/2023] Open
Abstract
It is generally accepted that the number of neurons in a given brain area far exceeds the number of neurons needed to carry any specific function controlled by that area. For example, motor areas of the human brain contain tens of millions of neurons that control the activation of tens or at most hundreds of muscles. This massive redundancy implies the covariation of many neurons, which constrains the population activity to a low-dimensional manifold within the space of all possible patterns of neural activity. To gain a conceptual understanding of the complexity of the neural activity within a manifold, it is useful to estimate its dimensionality, which quantifies the number of degrees of freedom required to describe the observed population activity without significant information loss. While there are many algorithms for dimensionality estimation, we do not know which are well suited for analyzing neural activity. The objective of this study was to evaluate the efficacy of several representative algorithms for estimating the dimensionality of linearly and nonlinearly embedded data. We generated synthetic neural recordings with known intrinsic dimensionality and used them to test the algorithms' accuracy and robustness. We emulated some of the important challenges associated with experimental data by adding noise, altering the nature of the embedding of the low-dimensional manifold within the high-dimensional recordings, varying the dimensionality of the manifold, and limiting the amount of available data. We demonstrated that linear algorithms overestimate the dimensionality of nonlinear, noise-free data. In cases of high noise, most algorithms overestimated the dimensionality. We thus developed a denoising algorithm based on deep learning, the "Joint Autoencoder", which significantly improved subsequent dimensionality estimation. 
Critically, we found that all algorithms failed when the intrinsic dimensionality was high (above 20) or when the amount of data used for estimation was low. Based on the challenges we observed, we formulated a pipeline for estimating the dimensionality of experimental neural data.
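One common linear estimator of the kind the paper evaluates (used here purely as an illustration; the study compares several algorithms and adds a denoising step) is the participation ratio of the covariance eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "recording": 5 latent factors linearly embedded in 100 neurons, plus noise
latents = rng.standard_normal((1000, 5))
mixing = rng.standard_normal((5, 100))
X = latents @ mixing + 0.1 * rng.standard_normal((1000, 100))

# Participation ratio of the covariance eigenvalues: a soft count of the
# number of dimensions carrying substantial variance
evals = np.linalg.eigvalsh(np.cov(X.T))
pr = evals.sum() ** 2 / (evals ** 2).sum()
```

With 5 planted latent dimensions the estimate lands near 5 (slightly below, because the embedded factors do not carry exactly equal variance), which illustrates both the usefulness and the bias of linear estimators that the paper characterizes.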
Affiliation(s)
- Ege Altan
- Department of Neuroscience, Northwestern University, Chicago, Illinois, United States of America
- Department of Biomedical Engineering, Northwestern University, Evanston, Illinois, United States of America
- Sara A. Solla
- Department of Neuroscience, Northwestern University, Chicago, Illinois, United States of America
- Department of Physics and Astronomy, Northwestern University, Evanston, Illinois, United States of America
- Lee E. Miller
- Department of Neuroscience, Northwestern University, Chicago, Illinois, United States of America
- Department of Biomedical Engineering, Northwestern University, Evanston, Illinois, United States of America
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, Illinois, United States of America
- Shirley Ryan AbilityLab, Chicago, Illinois, United States of America
- Eric J. Perreault
- Department of Biomedical Engineering, Northwestern University, Evanston, Illinois, United States of America
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, Illinois, United States of America
- Shirley Ryan AbilityLab, Chicago, Illinois, United States of America
26
Abstract
Significant experimental, computational, and theoretical work has identified rich structure within the coordinated activity of interconnected neural populations. An emerging challenge now is to uncover the nature of the associated computations, how they are implemented, and what role they play in driving behavior. We term this computation through neural population dynamics. If successful, this framework will reveal general motifs of neural population activity and quantitatively describe how neural population dynamics implement computations necessary for driving goal-directed behavior. Here, we start with a mathematical primer on dynamical systems theory and analytical tools necessary to apply this perspective to experimental data. Next, we highlight some recent discoveries resulting from successful application of dynamical systems. We focus on studies spanning motor control, timing, decision-making, and working memory. Finally, we briefly discuss promising recent lines of investigation and future directions for the computation through neural population dynamics framework.
Affiliation(s)
- Saurabh Vyas
- Department of Bioengineering, Stanford University, Stanford, California 94305, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA
- Matthew D Golub
- Department of Electrical Engineering, Stanford University, Stanford, California 94305, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA
- David Sussillo
- Department of Electrical Engineering, Stanford University, Stanford, California 94305, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA
- Google AI, Google Inc., Mountain View, California 94305, USA
- Krishna V Shenoy
- Department of Bioengineering, Stanford University, Stanford, California 94305, USA
- Department of Electrical Engineering, Stanford University, Stanford, California 94305, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA
- Department of Neurobiology, Bio-X Institute, Neurosciences Program, and Howard Hughes Medical Institute, Stanford University, Stanford, California 94305, USA
27
Park J, Brady DJ, Zheng G, Tian L, Gao L. Review of bio-optical imaging systems with a high space-bandwidth product. Adv Photonics 2021; 3:044001. PMID: 35178513. PMCID: PMC8849623. DOI: 10.1117/1.ap.3.4.044001.
Abstract
Optical imaging has served as a primary method to collect information about biosystems across scales - from functionalities of tissues to morphological structures of cells and even at biomolecular levels. However, to adequately characterize a complex biosystem, an imaging system with a number of resolvable points, referred to as the space-bandwidth product (SBP), in excess of one billion is typically needed. Since a gigapixel scale far exceeds the capacity of current optical imagers, compromises must be made, accepting either a low spatial resolution or a narrow field-of-view (FOV). The problem originates from the constituent refractive optics - the larger the aperture, the more challenging the correction of lens aberrations. Therefore, it is impractical for a conventional optical imaging system to achieve an SBP over hundreds of millions. To address this unmet need, a variety of high-SBP imagers have emerged over the past decade, enabling an unprecedented resolution and FOV beyond the limit of conventional optics. We provide a comprehensive survey of high-SBP imaging techniques, exploring their underlying principles and applications in bioimaging.
Affiliation(s)
- Jongchan Park
- University of California, Department of Bioengineering, Los Angeles, California, United States
- David J. Brady
- University of Arizona, James C. Wyant College of Optical Sciences, Tucson, Arizona, United States
- Guoan Zheng
- University of Connecticut, Department of Biomedical Engineering, Storrs, Connecticut, United States
- University of Connecticut, Department of Electrical and Computer Engineering, Storrs, Connecticut, United States
- Lei Tian
- Boston University, Department of Electrical and Computer Engineering, Boston, Massachusetts, United States
- Liang Gao
- University of California, Department of Bioengineering, Los Angeles, California, United States
28
Safavi S, Logothetis NK, Besserve M. From Univariate to Multivariate Coupling Between Continuous Signals and Point Processes: A Mathematical Framework. Neural Comput 2021; 33:1751-1817. PMID: 34411270. DOI: 10.1162/neco_a_01389.
Abstract
Time series data sets often contain heterogeneous signals, composed of both continuously changing quantities and discretely occurring events. The coupling between these measurements may provide insights into key underlying mechanisms of the systems under study. To better extract this information, we investigate the asymptotic statistical properties of coupling measures between continuous signals and point processes. We first introduce martingale stochastic integration theory as a mathematical model for a family of statistical quantities that include the phase locking value, a classical coupling measure to characterize complex dynamics. Based on the martingale central limit theorem, we can then derive the asymptotic gaussian distribution of estimates of such coupling measure that can be exploited for statistical testing. Second, based on multivariate extensions of this result and random matrix theory, we establish a principled way to analyze the low-rank coupling between a large number of point processes and continuous signals. For a null hypothesis of no coupling, we establish sufficient conditions for the empirical distribution of squared singular values of the matrix to converge, as the number of measured signals increases, to the well-known Marchenko-Pastur (MP) law, and the largest squared singular value converges to the upper end of the MP support. This justifies a simple thresholding approach to assess the significance of multivariate coupling. Finally, we illustrate with simulations the relevance of our univariate and multivariate results in the context of neural time series, addressing how to reliably quantify the interplay between multichannel local field potential signals and the spiking activity of a large population of neurons.
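The multivariate null test described here — comparing the squared singular values of a coupling matrix against the upper edge of the Marchenko-Pastur (MP) support — can be sketched with a toy null simulation (the normalization and sizes are illustrative, not the authors' exact construction):

```python
import numpy as np

def mp_upper_edge(n_samples, n_channels, sigma2=1.0):
    """Upper edge sigma2 * (1 + sqrt(c))^2 of the Marchenko-Pastur support,
    with aspect ratio c = n_channels / n_samples."""
    c = n_channels / n_samples
    return sigma2 * (1.0 + np.sqrt(c)) ** 2

# Null hypothesis of no coupling: an i.i.d. Gaussian "coupling matrix",
# scaled so its squared singular values follow the MP law.
rng = np.random.default_rng(1)
n, p = 4000, 200
M = rng.standard_normal((n, p)) / np.sqrt(n)
sq_sv = np.linalg.svd(M, compute_uv=False) ** 2
edge = mp_upper_edge(n, p)
significant = sq_sv > edge    # squared singular values exceeding the null edge
```

In this pure-noise example essentially no squared singular value exceeds the edge, so nothing is flagged; genuine low-rank coupling between the point processes and the continuous signals would push a few singular values clearly above it.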
Affiliation(s)
- Shervin Safavi
- MPI for Biological Cybernetics, and IMPRS for Cognitive and Systems Neuroscience, University of Tübingen, 72076 Tübingen, Germany
- Nikos K Logothetis
- MPI for Biological Cybernetics, 72076 Tübingen, Germany
- International Center for Primate Brain Research, Songjiang, Shanghai 200031, China
- University of Manchester, Manchester M13 9PL, U.K.
- Michel Besserve
- MPI for Biological Cybernetics and MPI for Intelligent Systems, 72076 Tübingen, Germany
29
Ren C, Komiyama T. Characterizing Cortex-Wide Dynamics with Wide-Field Calcium Imaging. J Neurosci 2021; 41:4160-4168. PMID: 33893217. PMCID: PMC8143209. DOI: 10.1523/jneurosci.3003-20.2021.
Abstract
The brain functions through coordinated activity among distributed regions. Wide-field calcium imaging, combined with improved genetically encoded calcium indicators, provides sufficient signal-to-noise ratio and spatiotemporal resolution to afford a unique opportunity to capture cortex-wide dynamics on a moment-by-moment basis in behaving animals. Recent applications of this approach have uncovered cortical dynamics at unprecedented scales during various cognitive processes, ranging from relatively simple sensorimotor integration to more complex decision-making tasks. In this review, we highlight recent scientific advances enabled by wide-field calcium imaging in behaving mice. We then summarize several technical considerations and future opportunities for wide-field imaging to uncover large-scale circuit dynamics.
Affiliation(s)
- Chi Ren
- Neurobiology Section, Center for Neural Circuits and Behavior, Department of Neurosciences, and Halıcıoğlu Data Science Institute, University of California San Diego, La Jolla, California 92093
- Takaki Komiyama
- Neurobiology Section, Center for Neural Circuits and Behavior, Department of Neurosciences, and Halıcıoğlu Data Science Institute, University of California San Diego, La Jolla, California 92093
30
Large-Scale and Multiscale Networks in the Rodent Brain during Novelty Exploration. eNeuro 2021; 8:ENEURO.0494-20.2021. PMID: 33757983. PMCID: PMC8121262. DOI: 10.1523/eneuro.0494-20.2021.
Abstract
Neural activity is coordinated across multiple spatial and temporal scales, and these patterns of coordination are implicated in both healthy and impaired cognitive operations. However, empirical cross-scale investigations are relatively infrequent, owing both to limited data availability and to the difficulty of analyzing rich multivariate datasets. Here, we applied frequency-resolved multivariate source-separation analyses to characterize a large-scale dataset comprising spiking and local field potential (LFP) activity recorded simultaneously in three brain regions (prefrontal cortex, parietal cortex, hippocampus) in freely-moving mice. We identified a constellation of multidimensional, inter-regional networks across a range of frequencies (2-200 Hz). These networks were reproducible within animals across different recording sessions, but varied across different animals, suggesting individual variability in network architecture. The theta band (∼4-10 Hz) networks had several prominent features, including roughly equal contribution from all regions and strong inter-network synchronization. Overall, these findings demonstrate a multidimensional landscape of large-scale functional activations of cortical networks operating across multiple spatial, spectral, and temporal scales during open-field exploration.
31
Zenke F, Vogels TP. The Remarkable Robustness of Surrogate Gradient Learning for Instilling Complex Function in Spiking Neural Networks. Neural Comput 2021; 33:899-925. PMID: 33513328. DOI: 10.1162/neco_a_01367.
Abstract
Brains process information in spiking neural networks. Their intricate connections shape the diverse functions these networks perform. Yet how network connectivity relates to function is poorly understood, and the functional capabilities of models of spiking networks are still rudimentary. The lack of both theoretical insight and practical algorithms to find the necessary connectivity poses a major impediment to both studying information processing in the brain and building efficient neuromorphic hardware systems. The training algorithms that solve this problem for artificial neural networks typically rely on gradient descent. But doing so in spiking networks has remained challenging due to the nondifferentiable nonlinearity of spikes. To avoid this issue, one can employ surrogate gradients to discover the required connectivity. However, the choice of a surrogate is not unique, raising the question of how its implementation influences the effectiveness of the method. Here, we use numerical simulations to systematically study how essential design parameters of surrogate gradients affect learning performance on a range of classification problems. We show that surrogate gradient learning is robust to different shapes of underlying surrogate derivatives, but the choice of the derivative's scale can substantially affect learning performance. When we combine surrogate gradients with suitable activity regularization techniques, spiking networks perform robust information processing at the sparse activity limit. Our study provides a systematic account of the remarkable robustness of surrogate gradient learning and serves as a practical guide to model functional spiking neural networks.
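The core trick described above — a hard spike in the forward pass, a smooth surrogate derivative in the backward pass — can be sketched in a few lines. The fast-sigmoid surrogate and the scale parameter `beta` below are illustrative choices (the study reports that the surrogate's scale matters far more than its shape):

```python
import numpy as np

def spike(v, threshold=1.0):
    """Forward pass: non-differentiable Heaviside spike nonlinearity."""
    return (v > threshold).astype(float)

def surrogate_derivative(v, threshold=1.0, beta=10.0):
    """Backward-pass stand-in for the spike's derivative: the derivative
    of a fast sigmoid, 1 / (beta * |v - threshold| + 1)^2.
    beta sets the surrogate's scale."""
    return 1.0 / (beta * np.abs(v - threshold) + 1.0) ** 2

v = np.linspace(0.0, 2.0, 5)   # membrane potentials around threshold
s = spike(v)                    # hard spikes: [0, 0, 0, 1, 1]
g = surrogate_derivative(v)     # smooth pseudo-gradient, peaked at threshold
```

During training, gradient descent backpropagates `g` wherever the chain rule would call for the (zero-almost-everywhere) derivative of `s`, which is what makes the required connectivity discoverable.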
Affiliation(s)
- Friedemann Zenke
- Centre for Neural Circuits and Behaviour, University of Oxford, Oxford OX1 3SR, U.K.
- Friedrich Miescher Institute for Biomedical Research, 4058 Basel, Switzerland
- Tim P Vogels
- Centre for Neural Circuits and Behaviour, University of Oxford, Oxford OX1 3SR, U.K.
- Institute for Science and Technology, 3400 Klosterneuburg, Austria
32
Genkin M, Engel TA. Moving beyond generalization to accurate interpretation of flexible models. Nat Mach Intell 2020; 2:674-683. DOI: 10.1038/s42256-020-00242-6.
33
Tauste Campo A. Inferring neural information flow from spiking data. Comput Struct Biotechnol J 2020; 18:2699-2708. PMID: 33101608. PMCID: PMC7548302. DOI: 10.1016/j.csbj.2020.09.007.
Abstract
The brain can be regarded as an information processing system in which neurons store and propagate information about external stimuli and internal processes. Therefore, estimating interactions between neural activity at the cellular scale has significant implications in understanding how neuronal circuits encode and communicate information across brain areas to generate behavior. While the number of simultaneously recorded neurons is growing exponentially, current methods relying only on pairwise statistical dependencies still suffer from a number of conceptual and technical challenges that preclude experimental breakthroughs describing neural information flows. In this review, we examine the evolution of the field over the years, starting from descriptive statistics to model-based and model-free approaches. Then, we discuss in detail the Granger Causality framework, which includes many popular state-of-the-art methods and we highlight some of its limitations from a conceptual and practical estimation perspective. Finally, we discuss directions for future research, including the development of theoretical information flow models and the use of dimensionality reduction techniques to extract relevant interactions from large-scale recording datasets.
Affiliation(s)
- Adrià Tauste Campo
- Centre for Brain and Cognition, Universitat Pompeu Fabra, Ramon Trias Fargas 25, 08018 Barcelona, Spain
34
Thivierge JP. Frequency-separated principal component analysis of cortical population activity. J Neurophysiol 2020; 124:668-681. PMID: 32727265. DOI: 10.1152/jn.00167.2020.
Abstract
A hallmark of neocortical activity is the presence of low-dimensional fluctuations in firing rate that are coordinated across neurons. However, the impact of these fluctuations on sensory processing remains unclear. Here, we examined fluctuations in populations of orientation-selective neurons from anesthetized macaque primary visual cortex (V1) during stimulus viewing as well as spontaneous activity. We introduce a novel approach termed frequency-separated principal component analysis (FS-PCA) to characterize these fluctuations. This method unveiled a distribution of components with a broad range of frequencies whose eigenvalues and variance followed an approximate power law. During stimulus viewing, subpopulations of V1 neurons correlated either positively or negatively with low-dimensional fluctuations. These two subpopulations displayed distinct activation properties and noise correlations in response to sensory input. Together, results suggest that slow, low-dimensional fluctuations in V1 population activity shape the response of individual neurons to oriented stimuli and may impact the transmission of sensory information to downstream regions of the primary visual system.

NEW & NOTEWORTHY: A method termed frequency-separated principal component analysis (FS-PCA) is introduced for analyzing populations of simultaneously recorded neurons. This framework extends standard principal component analysis by extracting components of activity delimited to specific frequency bands. FS-PCA revealed that circuits of the primary visual cortex generate a broad range of components dominated by low-frequency activity. Furthermore, low-dimensional fluctuations in population activity modulated the response of individual neurons to sensory input.
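A minimal sketch of the FS-PCA idea — band-limit the population signal, then run PCA within each band — might look like the following (FFT masking as the band-pass step is a simplification of whatever filtering the paper uses; all names, bands, and sizes are illustrative):

```python
import numpy as np

def fs_pca(X, fs, bands, n_components=3):
    """Frequency-separated PCA sketch: band-pass each channel via an FFT
    mask, then eigendecompose the within-band covariance.
    X: (n_samples, n_channels); bands: list of (low_hz, high_hz).
    Returns {(lo, hi): (top eigenvalues, top eigenvectors)}."""
    freqs = np.fft.rfftfreq(X.shape[0], d=1.0 / fs)
    F = np.fft.rfft(X, axis=0)
    out = {}
    for lo, hi in bands:
        mask = ((freqs >= lo) & (freqs < hi))[:, None]
        Xb = np.fft.irfft(F * mask, n=X.shape[0], axis=0)  # band-limited signal
        lam, vec = np.linalg.eigh(np.cov(Xb, rowvar=False))
        order = np.argsort(lam)[::-1][:n_components]
        out[(lo, hi)] = (lam[order], vec[:, order])
    return out

# Toy population: a shared 6 Hz rhythm plus white noise across 20 channels
rng = np.random.default_rng(2)
t = np.arange(2000) / 100.0                       # 100 Hz sampling
X = np.outer(np.sin(2 * np.pi * 6 * t), rng.standard_normal(20))
X += 0.5 * rng.standard_normal((2000, 20))
res = fs_pca(X, fs=100.0, bands=[(4, 10), (30, 50)])
```

Here the leading component in the 4-10 Hz band captures the shared rhythm and carries far more variance than anything in the 30-50 Hz band, illustrating how FS-PCA separates components by frequency.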
Affiliation(s)
- Jean-Philippe Thivierge
- School of Psychology, University of Ottawa, Ottawa, Ontario, Canada
- Brain and Mind Research Institute, University of Ottawa, Ottawa, Ontario, Canada
35
Ito T, Hearne L, Mill R, Cocuzza C, Cole MW. Discovering the Computational Relevance of Brain Network Organization. Trends Cogn Sci 2020; 24:25-38. PMID: 31727507. PMCID: PMC6943194. DOI: 10.1016/j.tics.2019.10.005.
Abstract
Understanding neurocognitive computations will require not just localizing cognitive information distributed throughout the brain but also determining how that information got there. We review recent advances in linking empirical and simulated brain network organization with cognitive information processing. Building on these advances, we offer a new framework for understanding the role of connectivity in cognition: network coding (encoding/decoding) models. These models utilize connectivity to specify the transfer of information via neural activity flow processes, successfully predicting the formation of cognitive representations in empirical neural data. The success of these models supports the possibility that localized neural functions mechanistically emerge (are computed) from distributed activity flow processes that are specified primarily by connectivity patterns.
Affiliation(s)
- Takuya Ito
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ 07102, USA
- Behavioral and Neural Sciences PhD Program, Rutgers University, Newark, NJ 07102, USA
- Luke Hearne
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ 07102, USA
- Ravi Mill
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ 07102, USA
- Carrisa Cocuzza
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ 07102, USA
- Behavioral and Neural Sciences PhD Program, Rutgers University, Newark, NJ 07102, USA
- Michael W Cole
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ 07102, USA
36
Datta SR, Anderson DJ, Branson K, Perona P, Leifer A. Computational Neuroethology: A Call to Action. Neuron 2019; 104:11-24. PMID: 31600508. PMCID: PMC6981239. DOI: 10.1016/j.neuron.2019.09.038.
Abstract
The brain is worthy of study because it is in charge of behavior. A flurry of recent technical advances in measuring and quantifying naturalistic behaviors provide an important opportunity for advancing brain science. However, the problem of understanding unrestrained behavior in the context of neural recordings and manipulations remains unsolved, and developing approaches to addressing this challenge is critical. Here we discuss considerations in computational neuroethology-the science of quantifying naturalistic behaviors for understanding the brain-and propose strategies to evaluate progress. We point to open questions that require resolution and call upon the broader systems neuroscience community to further develop and leverage measures of naturalistic, unrestrained behavior, which will enable us to more effectively probe the richness and complexity of the brain.
Affiliation(s)
- David J Anderson
- Division of Biology and Biological Engineering 156-29, California Institute of Technology, Pasadena, CA 91125, USA
- Howard Hughes Medical Institute, Pasadena, CA 91125, USA
- Tianqiao and Chrissy Chen Institute for Neuroscience, California Institute of Technology, Pasadena, CA 91125, USA
- Kristin Branson
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA
- Pietro Perona
- Division of Engineering & Applied Sciences 136-93, California Institute of Technology, Pasadena, CA 91125, USA
- Andrew Leifer
- Department of Physics, Princeton University, Princeton, NJ 08544, USA
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
37
Doiron B, Lengyel M. Editorial overview: Computational neuroscience. Curr Opin Neurobiol 2019; 58:iii-vii. DOI: 10.1016/j.conb.2019.09.015.
38
Keemink SW, Machens CK. Decoding and encoding (de)mixed population responses. Curr Opin Neurobiol 2019; 58:112-121. PMID: 31563083. DOI: 10.1016/j.conb.2019.09.004.
Abstract
A central tenet of neuroscience is that the brain works through large populations of interacting neurons. With recent advances in recording techniques, the inner working of these populations has come into full view. Analyzing the resulting large-scale data sets is challenging because of the often complex and 'mixed' dependency of neural activities on experimental parameters, such as stimuli, decisions, or motor responses. Here we review recent insights gained from analyzing these data with dimensionality reduction methods that 'demix' these dependencies. We demonstrate that the mappings from (carefully chosen) experimental parameters to population activities appear to be typical and stable across tasks, brain areas, and animals, and are often identifiable by linear methods. By considering when and why dimensionality reduction and demixing work well, we argue for a view of population coding in which populations represent (demixed) latent signals, corresponding to stimuli, decisions, motor responses, and so on. These latent signals are encoded into neural population activity via non-linear mappings and decoded via linear readouts. We explain how such a scheme can facilitate the propagation of information across cortical areas, and we review neural network architectures that can reproduce the encoding and decoding of latent signals in population activities. These architectures promise a link from the biophysics of single neurons to the activities of neural populations.
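The encoding/decoding scheme described here — latent signals mapped nonlinearly into population activity yet recoverable with a linear readout — can be illustrated with a toy simulation (the tanh encoding, the dimensions, and the least-squares readout are illustrative assumptions, not the review's specific model):

```python
import numpy as np

rng = np.random.default_rng(3)
z = rng.standard_normal((1000, 2))          # latent signals (e.g. stimulus, decision)
W = rng.standard_normal((2, 100))           # encoding weights into 100 neurons
r = np.tanh(z @ W)                          # nonlinear encoding into population activity
D, *_ = np.linalg.lstsq(r, z, rcond=None)   # linear readout fit by least squares
z_hat = r @ D                               # decoded latent signals
corr = np.corrcoef(z[:, 0], z_hat[:, 0])[0, 1]
```

Despite the nonlinear encoding, the linear readout recovers the latent signal with high fidelity, matching the review's observation that demixed latent signals are often identifiable by linear methods.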
39
An argument for hyperbolic geometry in neural circuits. Curr Opin Neurobiol 2019; 58:101-104. PMID: 31476550. DOI: 10.1016/j.conb.2019.07.008.
Abstract
This review connects several lines of research to argue that hyperbolic geometry should be broadly applicable to neural circuits as well as other biological circuits. The reason for this is that networks that conform to hyperbolic geometry are maximally responsive to external and internal perturbations. These networks also allow for efficient communication under conditions where nodes are added or removed. We will argue that one of the signatures of hyperbolic geometry is the celebrated Zipf's law (also sometimes known as the Pareto distribution), which states that the probability of observing a given pattern is inversely related to its rank. Zipf's law is observed in a variety of biological systems - from protein sequences and neural networks to economics. These observations provide further evidence for the ubiquity of networks with an underlying hyperbolic metric structure. Recent studies in neuroscience specifically point to the relevance of a three-dimensional hyperbolic space for neural signaling. The three-dimensional hyperbolic space may confer additional robustness compared to other dimensions. We illustrate how the use of hyperbolic coordinates revealed a novel topographic organization within the olfactory system. The use of such coordinates may facilitate representation of relevant signals elsewhere in the brain.
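The Zipf signature invoked above is easy to test on observed pattern counts: on a log-log plot of frequency against rank, Zipf's law predicts a straight line of slope about -1. A minimal sketch (the fitting function and the synthetic counts are illustrative):

```python
import math

def zipf_fit_slope(counts):
    """Least-squares slope of log(frequency) vs log(rank);
    Zipf's law predicts a slope close to -1.
    counts: observed pattern counts, in any order."""
    ranked = sorted(counts, reverse=True)
    xs = [math.log(r + 1) for r in range(len(ranked))]  # log rank, 1-based
    ys = [math.log(c) for c in ranked]                  # log frequency
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Ideal Zipfian counts: frequency inversely proportional to rank
counts = [round(10000 / r) for r in range(1, 200)]
slope = zipf_fit_slope(counts)   # close to -1 for Zipfian data
```

Counts drawn from a system that does not follow Zipf's law (e.g. exponentially decaying frequencies) would yield a strongly curved log-log plot and a fitted slope far from -1.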
40
Whiteway MR, Butts DA. The quest for interpretable models of neural population activity. Curr Opin Neurobiol 2019; 58:86-93. PMID: 31426024. DOI: 10.1016/j.conb.2019.07.004.
Abstract
Many aspects of brain function arise from the coordinated activity of large populations of neurons. Recent developments in neural recording technologies are providing unprecedented access to the activity of such populations during increasingly complex experimental contexts; however, extracting scientific insights from such recordings requires the concurrent development of analytical tools that relate this population activity to system-level function. This is a primary motivation for latent variable models, which seek to provide a low-dimensional description of population activity that can be related to experimentally controlled variables, as well as uncontrolled variables such as internal states (e.g. attention and arousal) and elements of behavior. While deriving an understanding of function from traditional latent variable methods relies on low-dimensional visualizations, new approaches are targeting more interpretable descriptions of the components underlying system-level function.
Affiliation(s)
- Matthew R Whiteway
- Zuckerman Mind Brain Behavior Institute, Jerome L Greene Science Center, Columbia University, 3227 Broadway, 5th Floor, Quad D, New York, NY 10027, USA
- Daniel A Butts
- Department of Biology and Program in Neuroscience and Cognitive Science, University of Maryland, 1210 Biology-Psychology Bldg. #144, College Park, MD 20742, USA.