1
Wei 魏赣超 G, Tajik Mansouri زینب تاجیک منصوری Z, Wang 王晓婧 X, Stevenson IH. Calibrating Bayesian Decoders of Neural Spiking Activity. J Neurosci 2024; 44:e2158232024. PMID: 38538143; PMCID: PMC11063820; DOI: 10.1523/jneurosci.2158-23.2024.
Abstract
Accurately decoding external variables from observations of neural activity is a major challenge in systems neuroscience. Bayesian decoders, which provide probabilistic estimates, are some of the most widely used. Here we show how, in many common settings, the probabilistic predictions made by traditional Bayesian decoders are overconfident. That is, the estimates for the decoded stimulus or movement variables are more certain than they should be. We then show how Bayesian decoding with latent variables, taking account of low-dimensional shared variability in the observations, can improve calibration, although additional correction for overconfidence is still needed. Using data from males, we examine (1) decoding the direction of grating stimuli from spike recordings in the primary visual cortex in monkeys, (2) decoding movement direction from recordings in the primary motor cortex in monkeys, (3) decoding natural images from multiregion recordings in mice, and (4) decoding position from hippocampal recordings in rats. For each setting, we characterize the overconfidence, and we describe a possible method to correct miscalibration post hoc. Properly calibrated Bayesian decoders may alter theoretical results on probabilistic population coding and lead to brain-machine interfaces that more accurately reflect confidence levels when identifying external variables.
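The overconfidence described here can be quantified by checking empirical coverage: a calibrated decoder's nominal 90% credible intervals should contain the true stimulus on roughly 90% of trials. A minimal sketch with toy Gaussian posteriors (illustrative only; not the authors' code or data):

```python
# Toy calibration check for a probabilistic decoder. Each trial's posterior is
# Gaussian; if the reported sd understates the true error sd, nominal 90%
# intervals cover the truth on fewer than 90% of trials -- overconfidence.
import random

random.seed(0)

def empirical_coverage(reported_sd, true_sd, n_trials=20000, z=1.645):
    """Fraction of trials whose nominal 90% interval contains the truth."""
    hits = 0
    for _ in range(n_trials):
        truth = 0.0
        estimate = random.gauss(truth, true_sd)  # decoder's point estimate
        half_width = z * reported_sd             # nominal 90% half-interval
        if abs(estimate - truth) <= half_width:
            hits += 1
    return hits / n_trials

calibrated = empirical_coverage(reported_sd=1.0, true_sd=1.0)    # ~0.90
overconfident = empirical_coverage(reported_sd=0.5, true_sd=1.0) # well below
```

Post hoc recalibration, in this simple picture, amounts to inflating the reported sd until empirical coverage matches the nominal level.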
Affiliation(s)
- Ganchao Wei 魏赣超
- Department of Statistical Science, Duke University, Durham, North Carolina 27708
- Ian H Stevenson
- Departments of Biomedical Engineering, University of Connecticut, Storrs, Connecticut 06269
- Psychological Sciences, University of Connecticut, Storrs, Connecticut 06269
- Connecticut Institute for Brain and Cognitive Science, University of Connecticut, Storrs, Connecticut 06269
2
Voigtlaender S, Pawelczyk J, Geiger M, Vaios EJ, Karschnia P, Cudkowicz M, Dietrich J, Haraldsen IRJH, Feigin V, Owolabi M, White TL, Świeboda P, Farahany N, Natarajan V, Winter SF. Artificial intelligence in neurology: opportunities, challenges, and policy implications. J Neurol 2024; 271:2258-2273. PMID: 38367046; DOI: 10.1007/s00415-024-12220-8.
Abstract
Neurological conditions are the leading cause of disability and mortality combined, demanding innovative, scalable, and sustainable solutions. Brain health has become a global priority with the adoption of the World Health Organization's Intersectoral Global Action Plan in 2022. Simultaneously, rapid advancements in artificial intelligence (AI) are revolutionizing neurological research and practice. This scoping review of 66 original articles explores the value of AI in neurology and brain health, systematizing the landscape for emergent clinical opportunities and future trends across the care trajectory: prevention, risk stratification, early detection, diagnosis, management, and rehabilitation. AI's potential to advance personalized precision neurology and global brain health directives hinges on resolving core challenges across four pillars (models, data, feasibility/equity, and regulation/innovation) through concerted pursuit of targeted recommendations. Paramount actions include swift, ethical, equity-focused integration of novel technologies into clinical workflows, mitigating data-related issues, counteracting digital inequity gaps, and establishing robust governance frameworks balancing safety and innovation.
Affiliation(s)
- Sebastian Voigtlaender
- Systems Neuroscience Division, Max-Planck-Institute for Biological Cybernetics, Tübingen, Germany
- Virtual Diagnostics Team, QuantCo Inc., Cambridge, MA, USA
- Johannes Pawelczyk
- Faculty of Medicine, Ruprecht-Karls-University, Heidelberg, Germany
- Graduate Center of Medicine and Health, Technical University Munich, Munich, Germany
- Mario Geiger
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- NVIDIA, Zurich, Switzerland
- Eugene J Vaios
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Philipp Karschnia
- Department of Neurosurgery, Ludwig-Maximilians-University and University Hospital Munich, Munich, Germany
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Merit Cudkowicz
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Jorg Dietrich
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Ira R J Hebold Haraldsen
- Department of Neurology, Division of Clinical Neuroscience, Oslo University Hospital, Oslo, Norway
- Valery Feigin
- National Institute for Stroke and Applied Neurosciences, Auckland University of Technology, Auckland, New Zealand
- Mayowa Owolabi
- Center for Genomics and Precision Medicine, College of Medicine, University of Ibadan, Ibadan, Nigeria
- Neurology Unit, Department of Medicine, University of Ibadan, Ibadan, Nigeria
- Blossom Specialist Medical Center, Ibadan, Nigeria
- Lebanese American University of Beirut, Beirut, Lebanon
- Tara L White
- Department of Behavioral and Social Sciences, Brown University, Providence, RI, USA
- Sebastian F Winter
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA.
3
Ali YH, Bodkin K, Rigotti-Thompson M, Patel K, Card NS, Bhaduri B, Nason-Tomaszewski SR, Mifsud DM, Hou X, Nicolas C, Allcroft S, Hochberg LR, Au Yong N, Stavisky SD, Miller LE, Brandman DM, Pandarinath C. BRAND: a platform for closed-loop experiments with deep network models. J Neural Eng 2024; 21:026046. PMID: 38579696; PMCID: PMC11021878; DOI: 10.1088/1741-2552/ad3b3a.
Abstract
Objective. Artificial neural networks (ANNs) are state-of-the-art tools for modeling and decoding neural activity, but deploying them in closed-loop experiments with tight timing constraints is challenging due to their limited support in existing real-time frameworks. Researchers need a platform that fully supports high-level languages for running ANNs (e.g., Python and Julia) while maintaining support for languages that are critical for low-latency data acquisition and processing (e.g., C and C++). Approach. To address these needs, we introduce the Backend for Realtime Asynchronous Neural Decoding (BRAND). BRAND comprises Linux processes, termed nodes, which communicate with each other in a graph via streams of data. Its asynchronous design allows for acquisition, control, and analysis to be executed in parallel on streams of data that may operate at different timescales. BRAND uses Redis, an in-memory database, to send data between nodes, which enables fast inter-process communication and supports 54 different programming languages. Thus, developers can easily deploy existing ANN models in BRAND with minimal implementation changes. Main results. In our tests, BRAND achieved <600 microsecond latency between processes when sending large quantities of data (1024 channels of 30 kHz neural data in 1 ms chunks). BRAND runs a brain-computer interface with a recurrent neural network (RNN) decoder with less than 8 ms of latency from neural data input to decoder prediction. In a real-world demonstration of the system, participant T11 in the BrainGate2 clinical trial (ClinicalTrials.gov identifier: NCT00912041) performed a standard cursor control task in which 30 kHz signal processing, RNN decoding, task control, and graphics were all executed in BRAND. The system also supports real-time inference with complex latent variable models such as Latent Factor Analysis via Dynamical Systems. Significance. By providing a framework that is fast, modular, and language-agnostic, BRAND lowers the barriers to integrating the latest tools in neuroscience and machine learning into closed-loop experiments.
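BRAND's node/graph pattern can be sketched without a Redis server by letting an in-process dictionary stand in for the database. The stream names and helper functions below are illustrative, not BRAND's actual API:

```python
# Dependency-free sketch of the node/stream pattern: nodes append to and read
# from named streams, mimicking Redis XADD/XREAD. An "acquisition" node
# publishes binned spike counts; a "decoder" node consumes them and publishes
# predictions to its own output stream.
from collections import defaultdict

streams = defaultdict(list)  # stream name -> list of (entry_id, payload)

def xadd(stream, payload):
    """Append an entry to a stream and return its id (toy XADD)."""
    entry_id = len(streams[stream])
    streams[stream].append((entry_id, payload))
    return entry_id

def read_since(stream, last_id):
    """Return entries newer than last_id (toy, non-blocking XREAD)."""
    return [e for e in streams[stream] if e[0] > last_id]

for t in range(5):
    xadd("spikes", {"t": t, "counts": [t, t + 1]})

last = -1
for entry_id, msg in read_since("spikes", last):
    rate = sum(msg["counts"]) / len(msg["counts"])
    xadd("decoder_output", {"t": msg["t"], "pred": rate})
    last = entry_id

preds = [m["pred"] for _, m in streams["decoder_output"]]
```

Because each node only tracks its own `last` cursor, producers and consumers can run at different rates, which is the property BRAND exploits for parallel acquisition, control, and analysis.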
Affiliation(s)
- Yahia H Ali
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, United States of America
- Kevin Bodkin
- Department of Neuroscience, Northwestern University, Chicago, IL, United States of America
- Mattia Rigotti-Thompson
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, United States of America
- Kushant Patel
- Department of Neurological Surgery, University of California, Davis, CA, United States of America
- Nicholas S Card
- Department of Neurological Surgery, University of California, Davis, CA, United States of America
- Bareesh Bhaduri
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, United States of America
- Samuel R Nason-Tomaszewski
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, United States of America
- Domenick M Mifsud
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, United States of America
- Xianda Hou
- Department of Neurological Surgery, University of California, Davis, CA, United States of America
- Claire Nicolas
- Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital, Boston, MA, United States of America
- Shane Allcroft
- School of Engineering and Carney Institute for Brain Science, Brown University, Providence, RI, United States of America
- Leigh R Hochberg
- Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital, Boston, MA, United States of America
- School of Engineering and Carney Institute for Brain Science, Brown University, Providence, RI, United States of America
- Harvard Medical School, Boston, MA, United States of America
- Veterans Affairs Rehabilitation Research & Development Center for Neurorestoration and Neurotechnology, Providence VA Medical Center, Providence, RI, United States of America
- Nicholas Au Yong
- Department of Neurosurgery, Emory University, Atlanta, GA, United States of America
- Sergey D Stavisky
- Department of Neurological Surgery, University of California, Davis, CA, United States of America
- Lee E Miller
- Department of Neuroscience, Northwestern University, Chicago, IL, United States of America
- Department of Biomedical Engineering, Northwestern University, Evanston, IL, United States of America
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL, United States of America
- Shirley Ryan AbilityLab, Chicago, IL, United States of America
- David M Brandman
- Department of Neurological Surgery, University of California, Davis, CA, United States of America
- Chethan Pandarinath
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, United States of America
- Department of Neurosurgery, Emory University, Atlanta, GA, United States of America
4
Misra J, Pessoa L. Brain dynamics and spatiotemporal trajectories during threat processing. bioRxiv 2024:2024.04.06.588389. PMID: 38617278; PMCID: PMC11014591; DOI: 10.1101/2024.04.06.588389.
Abstract
In past decades, functional MRI research has investigated mental states and their brain bases in a largely static fashion, based on evoked responses during blocked and event-related designs. Despite some progress with naturalistic designs, our understanding of threat processing remains largely limited to findings obtained with standard paradigms. In the present paper, we applied Switching Linear Dynamical Systems (SLDS) to uncover the dynamics of threat processing during a continuous threat-of-shock paradigm. Importantly, unlike studies in systems neuroscience that frequently assume that systems are decoupled from external inputs, we characterized both endogenous and exogenous contributions to dynamics. First, we demonstrated that the SLDS model learned the regularities of the experimental paradigm, such that states and state transitions estimated from fMRI time series data from 85 ROIs reflected both the proximity of the circle stimuli and their direction (approach vs. retreat). After establishing that the model captured key properties of threat-related processing, we characterized the dynamics of the states and their transitions. The results revealed that threat processing can profitably be viewed in terms of dynamic multivariate patterns whose trajectories are a combination of intrinsic and extrinsic factors that jointly determine how the brain temporally evolves during dynamic threat. We propose that viewing threat processing through the lens of dynamical systems offers important avenues for uncovering properties of threat dynamics that are not unveiled with standard experimental designs and analyses.
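The switching-linear-dynamics idea, including the endogenous/exogenous split, can be sketched in a few lines. The parameters and regimes below are illustrative, not fit to the paper's fMRI data:

```python
# Minimal switching linear dynamical system (SLDS): the latent state follows
# one of two linear regimes, selected by a discrete switch z_t, plus an
# exogenous input term u_t -- the endogenous/exogenous decomposition.

def simulate_slds(switches, inputs, a=(0.9, 0.5), b=0.4, x0=0.0):
    """x[t+1] = a[z_t] * x[t] + b * u[t]; returns the latent trajectory."""
    xs = [x0]
    for z, u in zip(switches, inputs):
        xs.append(a[z] * xs[-1] + b * u)
    return xs

# Regime 0 (slow decay) during "approach", regime 1 (fast decay) during
# "retreat"; with constant input each regime settles near b*u / (1 - a[z]).
switches = [0] * 50 + [1] * 50
inputs = [1.0] * 100
traj = simulate_slds(switches, inputs)
```

Fitting such a model to data additionally requires inferring the switch sequence and regime parameters, which is what the paper's estimation procedure provides.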
Affiliation(s)
- Joyneel Misra
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland, United States of America
- Luiz Pessoa
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland, United States of America
- Department of Psychology and Maryland Neuroimaging Center, University of Maryland, College Park, Maryland, United States of America
5
Tlaie A, Shapcott K, van der Plas TL, Rowland J, Lees R, Keeling J, Packer A, Tiesinga P, Schölvinck ML, Havenith MN. What does the mean mean? A simple test for neuroscience. PLoS Comput Biol 2024; 20:e1012000. PMID: 38640119; PMCID: PMC11062559; DOI: 10.1371/journal.pcbi.1012000.
Abstract
Trial-averaged metrics, e.g. tuning curves or population response vectors, are a ubiquitous way of characterizing neuronal activity. But how relevant are such trial-averaged responses to neuronal computation itself? Here we present a simple test to estimate whether average responses reflect aspects of neuronal activity that contribute to neuronal processing. The test probes two assumptions implicitly made whenever average metrics are treated as meaningful representations of neuronal activity: (1) Reliability: neuronal responses repeat consistently enough across trials that they convey a recognizable reflection of the average response to downstream regions. (2) Behavioural relevance: if a single-trial response is more similar to the average template, it is more likely to evoke correct behavioural responses. We apply this test to two data sets: (1) two-photon recordings in primary somatosensory cortices (S1 and S2) of mice trained to detect optogenetic stimulation in S1; and (2) electrophysiological recordings from 71 brain areas in mice performing a contrast discrimination task. Under the highly controlled settings of Data set 1, both assumptions were largely fulfilled. In contrast, the less restrictive paradigm of Data set 2 met neither assumption. Simulations predict that the larger diversity of neuronal response preferences, rather than higher cross-trial reliability, drives the better performance of Data set 1. We conclude that when behaviour is less tightly restricted, average responses do not seem particularly relevant to neuronal computation, potentially because information is encoded more dynamically. Most importantly, we encourage researchers to apply this simple test of computational relevance whenever using trial-averaged neuronal metrics, in order to gauge how representative cross-trial averages are in a given context.
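The "reliability" half of the proposed test can be sketched as the average similarity between single-trial population responses and the cross-trial template. The toy data and plain Pearson correlation below are illustrative; the paper's exact similarity measure may differ:

```python
# Reliability sketch: how similar is each single-trial response vector to the
# cross-trial average template? High-noise trials barely resemble the mean.
import random

random.seed(1)

def pearson(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def mean_template_similarity(trials):
    """Average correlation between each trial and the mean across trials."""
    n_units = len(trials[0])
    template = [sum(t[i] for t in trials) / len(trials) for i in range(n_units)]
    return sum(pearson(t, template) for t in trials) / len(trials)

template = [float(i % 7) for i in range(50)]  # fixed population tuning profile
reliable = [[v + random.gauss(0, 0.3) for v in template] for _ in range(40)]
unreliable = [[v + random.gauss(0, 5.0) for v in template] for _ in range(40)]

r_reliable = mean_template_similarity(reliable)
r_unreliable = mean_template_similarity(unreliable)
```

The behavioural-relevance half would additionally correlate each trial's template similarity with the animal's success on that trial.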
Affiliation(s)
- Alejandro Tlaie
- Ernst Strüngmann Institute for Neuroscience, Frankfurt am Main, Germany
- Laboratory for Clinical Neuroscience, Centre for Biomedical Technology, Technical University of Madrid, Madrid, Spain
- Thijs L. van der Plas
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, United Kingdom
- James Rowland
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, United Kingdom
- Robert Lees
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, United Kingdom
- Joshua Keeling
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, United Kingdom
- Adam Packer
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, United Kingdom
- Paul Tiesinga
- Department of Neuroinformatics, Donders Institute, Radboud University, Nijmegen, The Netherlands
- Martha N. Havenith
- Ernst Strüngmann Institute for Neuroscience, Frankfurt am Main, Germany
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, United Kingdom
6
Meng R, Bouchard KE. Bayesian inference of structured latent spaces from neural population activity with the orthogonal stochastic linear mixing model. PLoS Comput Biol 2024; 20:e1011975. PMID: 38669271; PMCID: PMC11078355; DOI: 10.1371/journal.pcbi.1011975.
Abstract
The brain produces diverse functions, from perceiving sounds to producing arm reaches, through the collective activity of populations of many neurons. Determining if and how the features of these exogenous variables (e.g., sound frequency, reach angle) are reflected in population neural activity is important for understanding how the brain operates. Often, high-dimensional neural population activity is confined to low-dimensional latent spaces. However, many current methods fail to extract latent spaces that are clearly structured by exogenous variables. This has contributed to a debate about whether brains should be thought of as dynamical systems or as representational systems. Here, we developed a new latent process Bayesian regression framework, the orthogonal stochastic linear mixing model (OSLMM), which introduces an orthogonality constraint amongst time-varying mixture coefficients, and provide Markov chain Monte Carlo inference procedures. We demonstrate superior performance of OSLMM on latent trajectory recovery in synthetic experiments and show superior computational efficiency and prediction performance on several real-world benchmark data sets. We primarily focus on demonstrating the utility of OSLMM in two neural data sets: μECoG recordings from rat auditory cortex during presentation of pure tones and multi-single-unit recordings from monkey motor cortex during complex arm reaching. We show that OSLMM achieves superior or comparable predictive accuracy of neural data and decoding of external variables (e.g., reach velocity). Most importantly, in both experimental contexts, we demonstrate that OSLMM latent trajectories directly reflect features of the sounds and reaches, demonstrating that neural dynamics are structured by neural representations. Together, these results demonstrate that OSLMM will be useful for the analysis of diverse, large-scale biological time-series datasets.
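The orthogonality constraint at the heart of OSLMM can be illustrated by projecting an arbitrary set of mixing vectors onto orthonormal ones via Gram-Schmidt; this is a stand-in for the paper's constrained MCMC inference, with made-up numbers:

```python
# Orthonormalize mixing vectors with classical Gram-Schmidt. OSLMM enforces
# this kind of orthogonality on its time-varying mixing coefficients; here we
# just show what the constraint does to an arbitrary mixing matrix.

def gram_schmidt(columns):
    """Return orthonormal vectors spanning the same space as the inputs."""
    ortho = []
    for col in columns:
        v = list(col)
        for q in ortho:
            dot = sum(a * b for a, b in zip(q, col))
            v = [vi - dot * qi for vi, qi in zip(v, q)]
        norm = sum(x * x for x in v) ** 0.5
        ortho.append([x / norm for x in v])
    return ortho

W = [[1.0, 0.0, 2.0], [1.0, 1.0, 0.0]]  # two arbitrary mixing vectors in R^3
Q = gram_schmidt(W)

dot01 = sum(a * b for a, b in zip(Q[0], Q[1]))        # should be ~0
norms = [sum(x * x for x in q) ** 0.5 for q in Q]     # should be ~1
```

Orthogonal mixing directions cannot trade off against each other, which is one reason the recovered latents remain interpretable.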
Affiliation(s)
- Rui Meng
- Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, California, United States of America
- Kristofer E. Bouchard
- Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, California, United States of America
- Scientific Data Division, Lawrence Berkeley National Laboratory, Berkeley, California, United States of America
- Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, California, United States of America
- Redwood Center for Theoretical Neuroscience, University of California Berkeley, Berkeley, California, United States of America
7
Churchland MM, Shenoy KV. Preparatory activity and the expansive null-space. Nat Rev Neurosci 2024; 25:213-236. PMID: 38443626; DOI: 10.1038/s41583-024-00796-z.
Abstract
The study of the cortical control of movement experienced a conceptual shift over recent decades, as the basic currency of understanding shifted from single-neuron tuning towards population-level factors and their dynamics. This transition was informed by a maturing understanding of recurrent networks, where mechanism is often characterized in terms of population-level factors. By estimating factors from data, experimenters could test network-inspired hypotheses. Central to such hypotheses are 'output-null' factors that do not directly drive motor outputs yet are essential to the overall computation. In this Review, we highlight how the hypothesis of output-null factors was motivated by the venerable observation that motor-cortex neurons are active during movement preparation, well before movement begins. We discuss how output-null factors then became similarly central to understanding neural activity during movement. We discuss how this conceptual framework provided key analysis tools, making it possible for experimenters to address long-standing questions regarding motor control. We highlight an intriguing trend: as experimental and theoretical discoveries accumulate, the range of computational roles hypothesized to be subserved by output-null factors continues to expand.
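The output-null idea can be made concrete with a toy readout: when a population has more factors than output dimensions, some directions of factor space change nothing downstream, so preparatory activity can be large yet produce no movement. Numbers below are illustrative:

```python
# A 1-D motor output read out from 3 population factors. Directions in the
# null space of the readout ("output-null") leave the output untouched;
# directions outside it ("output-potent") drive movement.

C = [[1.0, 0.0, 1.0]]          # readout matrix: 1 output, 3 factors

def readout(C, x):
    """Linear readout y = C @ x."""
    return [sum(c * xi for c, xi in zip(row, x)) for row in C]

x_base = [2.0, 5.0, -1.0]      # some population state
null_dir = [1.0, 0.0, -1.0]    # C @ null_dir == 0: output-null direction
potent_dir = [1.0, 0.0, 1.0]   # changes the readout: output-potent direction

out_base = readout(C, x_base)
out_null = readout(C, [a + b for a, b in zip(x_base, null_dir)])
out_potent = readout(C, [a + b for a, b in zip(x_base, potent_dir)])
```

Moving along `null_dir` is the toy analogue of preparatory activity that evolves without causing movement.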
Affiliation(s)
- Mark M Churchland
- Department of Neuroscience, Columbia University, New York, NY, USA.
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA.
- Kavli Institute for Brain Science, Columbia University, New York, NY, USA.
- Krishna V Shenoy
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Department of Neurobiology, Stanford University, Stanford, CA, USA
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Bio-X Institute, Stanford University, Stanford, CA, USA
- Howard Hughes Medical Institute at Stanford University, Stanford, CA, USA
8
Hasnain MA, Birnbaum JE, Nunez JLU, Hartman EK, Chandrasekaran C, Economo MN. Separating cognitive and motor processes in the behaving mouse. bioRxiv 2024:2023.08.23.554474. PMID: 37662199; PMCID: PMC10473744; DOI: 10.1101/2023.08.23.554474.
Abstract
The cognitive processes supporting complex animal behavior are closely associated with ubiquitous movements responsible for our posture, facial expressions, ability to actively sample our sensory environments, and other critical processes. These movements are strongly related to neural activity across much of the brain and are often highly correlated with ongoing cognitive processes, making it challenging to dissociate the neural dynamics that support cognitive processes from those supporting related movements. In such cases, a critical issue is whether cognitive processes are separable from related movements, or if they are driven by common neural mechanisms. Here, we demonstrate how the separability of cognitive and motor processes can be assessed, and, when separable, how the neural dynamics associated with each component can be isolated. We establish a novel two-context behavioral task in mice that involves multiple cognitive processes and show that commonly observed dynamics taken to support cognitive processes are strongly contaminated by movements. When cognitive and motor components are isolated using a novel approach for subspace decomposition, we find that they exhibit distinct dynamical trajectories. Further, properly accounting for movement revealed that largely separate populations of cells encode cognitive and motor variables, in contrast to the 'mixed selectivity' often reported. Accurately isolating the dynamics associated with particular cognitive and motor processes will be essential for developing conceptual and computational models of neural circuit function and evaluating the function of the cell types of which neural circuits are composed.
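The subspace-decomposition logic can be sketched with hand-picked axes: if cognitive and motor signals occupy different directions of population space, projecting activity onto each axis recovers the corresponding time course. The axes and signals below are invented for illustration, not estimated from data as in the paper:

```python
# Separating two signals mixed into population activity by projecting onto
# orthogonal "cognitive" and "motor" axes.

def project(x, axis):
    """Scalar projection of population state x onto a unit axis."""
    return sum(a * b for a, b in zip(x, axis))

cog_axis = [1.0, 0.0, 0.0]
mot_axis = [0.0, 1.0, 0.0]               # orthogonal to the cognitive axis

cog_signal = [0.0, 1.0, 1.0, 1.0, 0.0]   # e.g. a sustained decision variable
mot_signal = [0.0, 0.0, 0.0, 2.0, 2.0]   # e.g. a late movement command

# Population activity mixes both signals (plus a third, shared direction).
activity = [[c, m, 0.5 * (c + m)] for c, m in zip(cog_signal, mot_signal)]

cog_recovered = [project(x, cog_axis) for x in activity]
mot_recovered = [project(x, mot_axis) for x in activity]
```

The hard part, which the paper addresses, is estimating such axes from data when the two processes are temporally correlated.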
Affiliation(s)
- Munib A. Hasnain
- Department of Biomedical Engineering, Boston University, Boston, MA
- Center for Neurophotonics, Boston University, Boston, MA
- Jaclyn E. Birnbaum
- Graduate Program for Neuroscience, Boston University, Boston, MA
- Center for Neurophotonics, Boston University, Boston, MA
- Emma K. Hartman
- Department of Biomedical Engineering, Boston University, Boston, MA
- Chandramouli Chandrasekaran
- Department of Psychological and Brain Sciences, Boston University, Boston, MA
- Department of Neurobiology & Anatomy, Boston University, Boston, MA
- Center for Systems Neuroscience, Boston University, Boston, MA
- Michael N. Economo
- Department of Biomedical Engineering, Boston University, Boston, MA
- Center for Neurophotonics, Boston University, Boston, MA
- Center for Systems Neuroscience, Boston University, Boston, MA
9
Morrell MC, Nemenman I, Sederberg A. Neural criticality from effective latent variables. eLife 2024; 12:RP89337. PMID: 38470471; PMCID: PMC10957169; DOI: 10.7554/elife.89337.
Abstract
Observations of power laws in neural activity data have raised the intriguing notion that brains may operate in a critical state. One example of this critical state is 'avalanche criticality', which has been observed in various systems, including cultured neurons, zebrafish, rodent cortex, and human EEG. More recently, power laws were also observed in neural populations in the mouse under an activity coarse-graining procedure, and they were explained as a consequence of the neural activity being coupled to multiple latent dynamical variables. An intriguing possibility is that avalanche criticality emerges due to a similar mechanism. Here, we determine the conditions under which latent dynamical variables give rise to avalanche criticality. We find that populations coupled to multiple latent variables produce critical behavior across a broader parameter range than those coupled to a single, quasi-static latent variable, but in both cases, avalanche criticality is observed without fine-tuning of model parameters. We identify two regimes of avalanches, both critical but differing in the amount of information carried about the latent variable. Our results suggest that avalanche criticality arises in neural systems in which activity is effectively modeled as a population driven by a few dynamical variables and these variables can be inferred from the population activity.
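Avalanche extraction, the starting point of such criticality analyses, can be sketched directly: an avalanche is a maximal run of time bins with nonzero population activity, and its size is the total activity in the run. The slow sinusoidal latent below is a toy stand-in for the paper's fitted latent variables:

```python
# Extract avalanche sizes from a binned population spike count series driven
# by a single slow latent variable.
import math
import random

random.seed(2)

def avalanche_sizes(activity):
    """Sizes of maximal nonzero runs in a binned activity count series."""
    sizes, current = [], 0
    for a in activity:
        if a > 0:
            current += a
        elif current:
            sizes.append(current)
            current = 0
    if current:
        sizes.append(current)
    return sizes

# Population of 30 cells whose firing probability tracks one slow latent.
T, n_cells = 2000, 30
latent = [0.5 + 0.5 * math.sin(2 * math.pi * t / 400) for t in range(T)]
activity = [sum(random.random() < 0.05 * latent[t] for _ in range(n_cells))
            for t in range(T)]
sizes = avalanche_sizes(activity)
```

Criticality analyses then ask whether the resulting size distribution follows a power law, and over what range of latent parameters it does so.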
Affiliation(s)
- Mia C Morrell
- Department of Physics, New York University, New York, United States
- Ilya Nemenman
- Department of Physics, Department of Biology, Initiative in Theory and Modeling of Living Systems, Emory University, Atlanta, United States
- Audrey Sederberg
- Department of Neuroscience, University of Minnesota Medical School, Minneapolis, United States
10
Temmar H, Willsey MS, Costello JT, Mender MJ, Cubillos LH, Lam JL, Wallace DM, Kelberman MM, Patil PG, Chestek CA. Artificial neural network for brain-machine interface consistently produces more naturalistic finger movements than linear methods. bioRxiv 2024:2024.03.01.583000. PMID: 38496403; PMCID: PMC10942378; DOI: 10.1101/2024.03.01.583000.
Abstract
Brain-machine interfaces (BMIs) aim to restore function to persons living with spinal cord injuries by 'decoding' neural signals into behavior. Recently, nonlinear BMI decoders have outperformed previous state-of-the-art linear decoders, but few studies have investigated what specific improvements these nonlinear approaches provide. In this study, we compare how temporally convolved feedforward neural networks (tcFNNs) and linear approaches predict individuated finger movements in open- and closed-loop settings. We show that nonlinear decoders generate more naturalistic movements, producing distributions of velocities 85.3% closer to true hand control than linear decoders. Addressing concerns that neural networks may come to inconsistent solutions, we find that regularization techniques improve the consistency of tcFNN convergence by 194.6%, along with improving average performance and training speed. Finally, we show that tcFNNs can leverage training data from multiple task variations to improve generalization. The results of this study show that nonlinear methods produce more naturalistic movements and show potential for generalizing over less constrained tasks.
Teaser: A neural network decoder produces consistent, naturalistic movements and shows potential for real-world generalization through task variations.
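The headline comparison scores how close a decoder's output distribution is to true hand control. A simple stand-in metric (not necessarily the one used in the paper) is histogram intersection between two velocity samples, where 1.0 means identical distributions:

```python
# Compare decoded velocity distributions to "true" hand velocities via
# histogram intersection. An over-smoothed decoder (narrow velocity
# distribution) scores lower than one matching the true distribution's shape.
import random

random.seed(3)

def histogram(data, edges):
    """Normalized counts per bin; samples outside the edges are dropped."""
    counts = [0] * (len(edges) - 1)
    for x in data:
        for i in range(len(edges) - 1):
            if edges[i] <= x < edges[i + 1]:
                counts[i] += 1
                break
    total = len(data)
    return [c / total for c in counts]

def overlap(p, q):
    """Histogram intersection: sum of bin-wise minima, 1.0 = identical."""
    return sum(min(a, b) for a, b in zip(p, q))

edges = [i * 0.25 - 3.0 for i in range(25)]              # 24 bins on [-3, 3)
true_vel = [random.gauss(0, 1) for _ in range(5000)]
good_decoder = [random.gauss(0, 1) for _ in range(5000)]      # matches shape
smooth_decoder = [random.gauss(0, 0.4) for _ in range(5000)]  # over-smoothed

h_true = histogram(true_vel, edges)
good = overlap(histogram(good_decoder, edges), h_true)
smooth = overlap(histogram(smooth_decoder, edges), h_true)
```

Linear decoders in closed loop often behave like `smooth_decoder` here, producing slow, narrow velocity distributions even when point-wise accuracy is decent.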
11
Chang YJ, Chen YI, Yeh HC, Santacruz SR. Neurobiologically realistic neural network enables cross-scale modeling of neural dynamics. Sci Rep 2024; 14:5145. PMID: 38429297; PMCID: PMC10907713; DOI: 10.1038/s41598-024-54593-w.
Abstract
Fundamental principles underlying computation in multi-scale brain networks illustrate how multiple brain areas and their coordinated activity give rise to complex cognitive functions. Whereas brain activity has been studied at the micro- to meso-scale to reveal the connections between the dynamical patterns and the behaviors, investigations of neural population dynamics are mainly limited to single-scale analysis. Our goal is to develop a cross-scale dynamical model for the collective activity of neuronal populations. Here we introduce a bio-inspired deep learning approach, termed NeuroBondGraph Network (NBGNet), to capture cross-scale dynamics that can infer and map the neural data from multiple scales. Our model not only exhibits more than an 11-fold improvement in reconstruction accuracy, but also predicts synchronous neural activity and preserves correlated low-dimensional latent dynamics. We also show that the NBGNet robustly predicts held-out data across a long time scale (2 weeks) without retraining. We further validate the effective connectivity defined from our model by demonstrating that neural connectivity during motor behaviour agrees with the established neuroanatomical hierarchy of motor control in the literature. The NBGNet approach opens the door to revealing a comprehensive understanding of brain computation, where network mechanisms of multi-scale activity are critical.
Affiliation(s)
- Yin-Jui Chang: Biomedical Engineering, The University of Texas at Austin, Austin, TX, USA
- Yuan-I Chen: Biomedical Engineering, The University of Texas at Austin, Austin, TX, USA
- Hsin-Chih Yeh: Biomedical Engineering and Texas Materials Institute, The University of Texas at Austin, Austin, TX, USA
- Samantha R Santacruz: Biomedical Engineering, Institute for Neuroscience, and Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX, USA
12
Ahmadipour P, Sani OG, Pesaran B, Shanechi MM. Multimodal subspace identification for modeling discrete-continuous spiking and field potential population activity. J Neural Eng 2024; 21:026001. [PMID: 38016450 PMCID: PMC10913727 DOI: 10.1088/1741-2552/ad1053] [Received: 06/02/2023] [Revised: 10/23/2023] [Accepted: 11/28/2023] [Indexed: 11/30/2023]
Abstract
Objective. Learning dynamical latent state models for multimodal spiking and field potential activity can reveal their collective low-dimensional dynamics and enable better decoding of behavior through multimodal fusion. Toward this goal, developing unsupervised learning methods that are computationally efficient is important, especially for real-time learning applications such as brain-machine interfaces (BMIs). However, efficient learning remains elusive for multimodal spike-field data due to their heterogeneous discrete-continuous distributions and different timescales. Approach. Here, we develop a multiscale subspace identification (multiscale SID) algorithm that enables computationally efficient learning for modeling and dimensionality reduction for multimodal discrete-continuous spike-field data. We describe the spike-field activity as combined Poisson and Gaussian observations, for which we derive a new analytical SID method. Importantly, we also introduce a novel constrained optimization approach to learn valid noise statistics, which is critical for multimodal statistical inference of the latent state, neural activity, and behavior. We validate the method using numerical simulations and with spiking and local field potential population activity recorded during a naturalistic reach and grasp behavior. Main results. We find that multiscale SID accurately learned dynamical models of spike-field signals and extracted low-dimensional dynamics from these multimodal signals. Further, it fused multimodal information, thus better identifying the dynamical modes and predicting behavior compared to using a single modality. Finally, compared to existing multiscale expectation-maximization learning for Poisson-Gaussian observations, multiscale SID had a much lower training time while being better at identifying the dynamical modes and having better or similar accuracy in predicting neural activity and behavior. Significance. Overall, multiscale SID is an accurate learning method that is particularly beneficial when efficient learning is of interest, such as for online adaptive BMIs to track non-stationary dynamics or for reducing offline training time in neuroscience investigations.
Affiliation(s)
- Parima Ahmadipour: Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Omid G Sani: Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Bijan Pesaran: Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States of America
- Maryam M Shanechi: Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering; Thomas Lord Department of Computer Science, Alfred E. Mann Department of Biomedical Engineering, and the Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America
13
Lindley S, Lu Y, Shukla D. The Experimentalist's Guide to Machine Learning for Small Molecule Design. ACS Appl Bio Mater 2024; 7:657-684. [PMID: 37535819 PMCID: PMC10880109 DOI: 10.1021/acsabm.3c00054] [Received: 01/19/2023] [Accepted: 07/17/2023] [Indexed: 08/05/2023]
Abstract
Initially part of the field of artificial intelligence, machine learning (ML) has become a booming research area since branching out into its own field in the 1990s. After three decades of refinement, ML algorithms have accelerated scientific developments across a variety of research topics. The field of small molecule design is no exception, and an increasing number of researchers are applying ML techniques in their pursuit of discovering, generating, and optimizing small molecule compounds. The goal of this review is to provide simple, yet descriptive, explanations of some of the most commonly utilized ML algorithms in the field of small molecule design along with those that are highly applicable to an experimentally focused audience. The algorithms discussed here span across three ML paradigms: supervised learning, unsupervised learning, and ensemble methods. Examples from the published literature will be provided for each algorithm. Some common pitfalls of applying ML to biological and chemical data sets will also be explained, alongside a brief summary of a few more advanced paradigms, including reinforcement learning and semi-supervised learning.
Affiliation(s)
- Sarah E. Lindley: Department of Bioengineering, University of Illinois, Urbana-Champaign, Illinois 61801, United States
- Yiyang Lu: Department of Chemical and Biomolecular Engineering, University of Illinois, Urbana-Champaign, Illinois 61801, United States
- Diwakar Shukla: Department of Bioengineering; Department of Chemical and Biomolecular Engineering; Center for Biophysics & Computational Biology; and Department of Plant Biology, University of Illinois, Urbana-Champaign, Illinois 61801, United States
14
Kawahara D, Fujisawa S. Advantages of Persistent Cohomology in Estimating Animal Location From Grid Cell Population Activity. Neural Comput 2024; 36:385-411. [PMID: 38363660 DOI: 10.1162/neco_a_01645] [Received: 03/17/2023] [Accepted: 10/09/2023] [Indexed: 02/18/2024]
Abstract
Many cognitive functions are represented as cell assemblies. In the case of spatial navigation, the population activity of place cells in the hippocampus and grid cells in the entorhinal cortex represents self-location in the environment. The brain cannot directly observe self-location information in the environment. Instead, it relies on sensory information and memory to estimate self-location. Therefore, estimating low-dimensional dynamics, such as the movement trajectory of an animal exploring its environment, from only the high-dimensional neural activity is important in deciphering the information represented in the brain. Most previous studies have estimated the low-dimensional dynamics (i.e., latent variables) behind neural activity by unsupervised learning with Bayesian population decoding using artificial neural networks or gaussian processes. Recently, persistent cohomology has been used to estimate latent variables from the phase information (i.e., circular coordinates) of manifolds created by neural activity. However, the advantages of persistent cohomology over Bayesian population decoding are not well understood. We compared persistent cohomology and Bayesian population decoding in estimating the animal location from simulated and actual grid cell population activity. We found that persistent cohomology can estimate the animal location with fewer neurons than Bayesian population decoding and robustly estimate the animal location from actual noisy data.
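For readers unfamiliar with the Bayesian baseline being compared against, the following is a minimal sketch of Bayesian population decoding of position from Poisson spike counts. It is not the paper's code: the Gaussian tuning curves, cell count, bin width, and all numeric values are invented for illustration (real grid cells have periodic, multi-bump fields).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D environment discretized into 50 candidate positions.
positions = np.linspace(0.0, 1.0, 50)

# Gaussian tuning curves for 40 cells (illustrative only).
centers = rng.uniform(0.0, 1.0, size=40)
rates = 2.0 + 18.0 * np.exp(-((positions[:, None] - centers[None, :]) ** 2) / (2 * 0.05**2))

# Simulate one decoding bin of spike counts at a true position.
true_idx = 30
dt = 0.2  # seconds per decoding bin
counts = rng.poisson(rates[true_idx] * dt)

# Poisson log-likelihood at every candidate position + flat prior
# -> posterior over positions (Bayes' rule; the factorial term cancels).
log_like = (counts[None, :] * np.log(rates * dt) - rates * dt).sum(axis=1)
posterior = np.exp(log_like - log_like.max())
posterior /= posterior.sum()

decoded_idx = int(np.argmax(posterior))
```

With a flat prior the posterior is just the normalized Poisson likelihood, so the decoded position is the maximum-likelihood bin; the full posterior additionally quantifies uncertainty.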
Affiliation(s)
- Daisuke Kawahara: Department of Complexity Science and Engineering, University of Tokyo, Kashiwa, Chiba 277-8563, Japan; Laboratory for Systems Neurophysiology, RIKEN Center for Brain Science, Wako, Saitama 351-0198, Japan
- Shigeyoshi Fujisawa: Department of Complexity Science and Engineering, University of Tokyo, Kashiwa, Chiba 277-8563, Japan; Laboratory for Systems Neurophysiology, RIKEN Center for Brain Science, Wako, Saitama 351-0198, Japan
15
Vahidi P, Sani OG, Shanechi MM. Modeling and dissociation of intrinsic and input-driven neural population dynamics underlying behavior. Proc Natl Acad Sci U S A 2024; 121:e2212887121. [PMID: 38335258 PMCID: PMC10873612 DOI: 10.1073/pnas.2212887121] [Received: 07/28/2022] [Accepted: 12/03/2023] [Indexed: 02/12/2024] Open
Abstract
Neural dynamics can reflect intrinsic dynamics or dynamic inputs, such as sensory inputs or inputs from other brain regions. To avoid misinterpreting temporally structured inputs as intrinsic dynamics, dynamical models of neural activity should account for measured inputs. However, incorporating measured inputs remains elusive in joint dynamical modeling of neural-behavioral data, which is important for studying neural computations of behavior. We first show how training dynamical models of neural activity while considering behavior but not input or input but not behavior may lead to misinterpretations. We then develop an analytical learning method for linear dynamical models that simultaneously accounts for neural activity, behavior, and measured inputs. The method provides the capability to prioritize the learning of intrinsic behaviorally relevant neural dynamics and dissociate them from both other intrinsic dynamics and measured input dynamics. In data from a simulated brain with fixed intrinsic dynamics that performs different tasks, the method correctly finds the same intrinsic dynamics regardless of the task while other methods can be influenced by the task. In neural datasets from three subjects performing two different motor tasks with task instruction sensory inputs, the method reveals low-dimensional intrinsic neural dynamics that are missed by other methods and are more predictive of behavior and/or neural activity. The method also uniquely finds that the intrinsic behaviorally relevant neural dynamics are largely similar across the different subjects and tasks, whereas the overall neural dynamics are not. These input-driven dynamical models of neural-behavioral data can uncover intrinsic dynamics that may otherwise be missed.
Affiliation(s)
- Parsa Vahidi: Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Omid G. Sani: Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Maryam M. Shanechi: Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering; Neuroscience Graduate Program; and Thomas Lord Department of Computer Science and Alfred E. Mann Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
16
Kuzmina E, Kriukov D, Lebedev M. Neuronal travelling waves explain rotational dynamics in experimental datasets and modelling. Sci Rep 2024; 14:3566. [PMID: 38347042 PMCID: PMC10861525 DOI: 10.1038/s41598-024-53907-2] [Received: 11/25/2023] [Accepted: 02/06/2024] [Indexed: 02/15/2024] Open
Abstract
Spatiotemporal properties of neuronal population activity in cortical motor areas have been subjects of experimental and theoretical investigations, generating numerous interpretations regarding mechanisms for preparing and executing limb movements. Two competing models, representational and dynamical, strive to explain the relationship between movement parameters and neuronal activity. A dynamical model uses the jPCA method that holistically characterizes oscillatory activity in neuron populations by maximizing the data rotational dynamics. Different rotational dynamics interpretations revealed by the jPCA approach have been proposed. Yet, the nature of such dynamics remains poorly understood. We comprehensively analyzed several neuronal-population datasets and found rotational dynamics consistently accounted for by a traveling wave pattern. For quantifying rotation strength, we developed a complex-valued measure, the gyration number. Additionally, we identified parameters influencing rotation extent in the data. Our findings suggest that rotational dynamics and traveling waves are typically the same phenomena, so reevaluation of the previous interpretations where they were considered separate entities is needed.
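The core computation behind rotational-dynamics measures in the jPCA family can be sketched as fitting a skew-symmetric generator M to the population state X and its time derivative, since skew-symmetric linear dynamics produce pure rotation. The toy example below uses a noise-free 2-D rotation with an invented frequency; it is not the authors' gyration-number code.

```python
import numpy as np

# Noise-free 2-D rotation at angular frequency w (hypothetical data).
w = 1.7
t = np.linspace(0.0, 10.0, 500)
X = np.column_stack([np.cos(w * t), np.sin(w * t)])            # one state per row
dX = np.column_stack([-w * np.sin(w * t), w * np.cos(w * t)])  # analytic dX/dt

# Unconstrained least-squares fit of dX ~ X @ A, then keep the
# skew-symmetric part; its off-diagonal entry is the rotation frequency.
A, *_ = np.linalg.lstsq(X, dX, rcond=None)
M = 0.5 * (A - A.T)   # skew-symmetric generator
w_hat = M[0, 1]       # recovered rotation frequency
```

On real data the state would first be reduced (e.g., by PCA), and the strength of rotation could be judged by how much of the fit the skew-symmetric part captures.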
Affiliation(s)
- Ekaterina Kuzmina: Skolkovo Institute of Science and Technology, Vladimir Zelman Center for Neurobiology and Brain Rehabilitation, Moscow, Russia, 121205; Artificial Intelligence Research Institute (AIRI), Moscow, Russia
- Dmitrii Kriukov: Artificial Intelligence Research Institute (AIRI), Moscow, Russia; Skolkovo Institute of Science and Technology, Center for Molecular and Cellular Biology, Moscow, Russia, 121205
- Mikhail Lebedev: Faculty of Mechanics and Mathematics, Lomonosov Moscow State University, Moscow, Russia, 119992; Sechenov Institute of Evolutionary Physiology and Biochemistry of the Russian Academy of Sciences, Saint Petersburg, Russia, 194223
17
Hassan J, Saeed SM, Deka L, Uddin MJ, Das DB. Applications of Machine Learning (ML) and Mathematical Modeling (MM) in Healthcare with Special Focus on Cancer Prognosis and Anticancer Therapy: Current Status and Challenges. Pharmaceutics 2024; 16:260. [PMID: 38399314 PMCID: PMC10892549 DOI: 10.3390/pharmaceutics16020260] [Received: 12/08/2023] [Revised: 01/29/2024] [Accepted: 02/07/2024] [Indexed: 02/25/2024] Open
Abstract
The use of data-driven high-throughput analytical techniques, which has given rise to computational oncology, is undisputed. The widespread use of machine learning (ML) and mathematical modeling (MM)-based techniques is widely acknowledged. These two approaches have fueled the advancement in cancer research and eventually led to the uptake of telemedicine in cancer care. For diagnostic, prognostic, and treatment purposes concerning different types of cancer research, vast databases of varied information with manifold dimensions are required, and indeed, all this information can only be managed by an automated system developed utilizing ML and MM. In addition, MM is being used to probe the relationship between the pharmacokinetics and pharmacodynamics (PK/PD interactions) of anti-cancer substances to improve cancer treatment, and also to refine the quality of existing treatment models by being incorporated at all steps of research and development related to cancer and in routine patient care. This review will serve as a consolidation of the advancement and benefits of ML and MM techniques with a special focus on the area of cancer prognosis and anticancer therapy, leading to the identification of challenges (data quantity, ethical consideration, and data privacy) which are yet to be fully addressed in current studies.
Affiliation(s)
- Jasmin Hassan: Drug Delivery & Therapeutics Lab, Dhaka 1212, Bangladesh (J.H.; S.M.S.)
- Lipika Deka: Faculty of Computing, Engineering and Media, De Montfort University, Leicester LE1 9BH, UK
- Md Jasim Uddin: Department of Pharmaceutical Technology, Faculty of Pharmacy, Universiti Malaya, Kuala Lumpur 50603, Malaysia
- Diganta B. Das: Department of Chemical Engineering, Loughborough University, Loughborough LE11 3TU, UK
18
Lee WH, Karpowicz BM, Pandarinath C, Rouse AG. Identifying distinct neural features between the initial and corrective phases of precise reaching using AutoLFADS. bioRxiv 2024:2023.06.30.547252. [PMID: 38352314 PMCID: PMC10862710 DOI: 10.1101/2023.06.30.547252] [Indexed: 04/09/2024]
Abstract
Many initial movements require subsequent corrective movements, but how motor cortex transitions to make corrections and how similar the encoding is to initial movements is unclear. In our study, we explored how the brain's motor cortex signals both initial and corrective movements during a precision reaching task. We recorded a large population of neurons from two male rhesus macaques across multiple sessions to examine the neural firing rates during not only initial movements but also subsequent corrective movements. AutoLFADS, an autoencoder-based deep-learning model, was applied to provide a clearer picture of neurons' activity on individual corrective movements across sessions. Decoding of reach velocity generalized poorly from initial to corrective submovements. Unlike initial movements, it was challenging to predict the velocity of corrective movements using traditional linear methods in a single, global neural space. We identified several locations in the neural space where corrective submovements originated after the initial reaches, signifying firing rates different than the baseline before initial movements. To improve corrective movement decoding, we demonstrate that a state-dependent decoder incorporating the population firing rates at the initiation of correction improved performance, highlighting the diverse neural features of corrective movements. In summary, we show neural differences between initial and corrective submovements and how the neural activity encodes specific combinations of velocity and position. These findings are inconsistent with assumptions that neural correlations with kinematic features are global and independent, emphasizing that traditional methods often fall short in describing these diverse neural processes for online corrective movements. Significance Statement: We analyzed submovement neural population dynamics during precision reaching. Using an autoencoder-based deep-learning model, AutoLFADS, we examined neural activity on a single-trial basis. Our study shows distinct neural dynamics between initial and corrective submovements. We demonstrate the existence of unique neural features within each submovement class that encode complex combinations of position and reach direction. Our study also highlights the benefit of state-specific decoding strategies, which consider the neural firing rates at the onset of any given submovement, when decoding complex motor tasks such as corrective submovements.
19
Naud R, Longtin A. Connecting levels of analysis in the computational era. J Physiol 2024; 602:417-420. [PMID: 38071740 DOI: 10.1113/jp286013] [Received: 11/21/2023] [Accepted: 11/30/2023] [Indexed: 02/02/2024] Open
Affiliation(s)
- Richard Naud: Department of Cellular and Molecular Medicine; Department of Physics; Center for Neural Dynamics; and Brain and Mind Research Institute, University of Ottawa, Ottawa, ON, Canada
- André Longtin: Department of Cellular and Molecular Medicine; Department of Physics; Center for Neural Dynamics; and Brain and Mind Research Institute, University of Ottawa, Ottawa, ON, Canada
20
Pals M, Macke JH, Barak O. Trained recurrent neural networks develop phase-locked limit cycles in a working memory task. PLoS Comput Biol 2024; 20:e1011852. [PMID: 38315736 PMCID: PMC10868787 DOI: 10.1371/journal.pcbi.1011852] [Received: 04/13/2023] [Revised: 02/15/2024] [Accepted: 01/22/2024] [Indexed: 02/07/2024] Open
Abstract
Neural oscillations are ubiquitously observed in many brain areas. One proposed functional role of these oscillations is that they serve as an internal clock, or 'frame of reference'. Information can be encoded by the timing of neural activity relative to the phase of such oscillations. In line with this hypothesis, there have been multiple empirical observations of such phase codes in the brain. Here we ask: What kind of neural dynamics support phase coding of information with neural oscillations? We tackled this question by analyzing recurrent neural networks (RNNs) that were trained on a working memory task. The networks were given access to an external reference oscillation and tasked to produce an oscillation, such that the phase difference between the reference and output oscillation maintains the identity of transient stimuli. We found that networks converged to stable oscillatory dynamics. Reverse engineering these networks revealed that each phase-coded memory corresponds to a separate limit cycle attractor. We characterized how the stability of the attractor dynamics depends on both reference oscillation amplitude and frequency, properties that can be experimentally observed. To understand the connectivity structures that underlie these dynamics, we showed that trained networks can be described as two phase-coupled oscillators. Using this insight, we condensed our trained networks to a reduced model consisting of two functional modules: One that generates an oscillation and one that implements a coupling function between the internal oscillation and external reference. In summary, by reverse engineering the dynamics and connectivity of trained RNNs, we propose a mechanism by which neural networks can harness reference oscillations for working memory. Specifically, we propose that a phase-coding network generates autonomous oscillations which it couples to an external reference oscillation in a multi-stable fashion.
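The paper's reduced description (two phase-coupled oscillators) can be caricatured by tracking only the phase difference delta between the reference and output oscillations. A coupling function with several stable zeros makes distinct phase offsets act as distinct memories. The specific coupling sin(3*delta) and all constants below are invented for illustration, not taken from the paper.

```python
import numpy as np

def settle(delta0, K=1.0, dt=0.01, steps=5000):
    """Relax the phase difference delta = phi_reference - theta_output under
    d(delta)/dt = -K * sin(3 * delta), by simple Euler integration.
    This coupling has three stable offsets: 0, 2*pi/3, and 4*pi/3."""
    d = delta0
    for _ in range(steps):
        d -= K * np.sin(3.0 * d) * dt
    return d % (2.0 * np.pi)

# Different initial phase differences fall into different attractors;
# each basin plays the role of one phase-coded "memory".
offsets = [settle(0.8), settle(1.5), settle(4.0)]
```

The multistability here mirrors the paper's finding that each phase-coded memory corresponds to a separate attractor, with the coupling function determining how many offsets are stable.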
Affiliation(s)
- Matthijs Pals: Machine Learning in Science, Excellence Cluster Machine Learning, University of Tübingen, Tübingen, Germany; Tübingen AI Center, University of Tübingen, Tübingen, Germany
- Jakob H. Macke: Machine Learning in Science, Excellence Cluster Machine Learning, University of Tübingen; Tübingen AI Center, University of Tübingen; Department Empirical Inference, Max Planck Institute for Intelligent Systems, Tübingen, Germany
- Omri Barak: Rappaport Faculty of Medicine and Network Biology Research Laboratory, Technion - Israel Institute of Technology, Haifa, Israel
21
Weng G, Clark K, Akbarian A, Noudoost B, Nategh N. Time-varying generalized linear models: characterizing and decoding neuronal dynamics in higher visual areas. Front Comput Neurosci 2024; 18:1273053. [PMID: 38348287 PMCID: PMC10859875 DOI: 10.3389/fncom.2024.1273053] [Received: 08/05/2023] [Accepted: 01/09/2024] [Indexed: 02/15/2024] Open
Abstract
To create a behaviorally relevant representation of the visual world, neurons in higher visual areas exhibit dynamic response changes to account for the time-varying interactions between external (e.g., visual input) and internal (e.g., reward value) factors. The resulting high-dimensional representational space poses challenges for precisely quantifying individual factors' contributions to the representation and readout of sensory information during a behavior. The widely used point process generalized linear model (GLM) approach provides a powerful framework for a quantitative description of neuronal processing as a function of various sensory and non-sensory inputs (encoding) as well as linking particular response components to particular behaviors (decoding), at the level of single trials and individual neurons. However, most existing variations of GLMs assume the neural systems to be time-invariant, making them inadequate for modeling nonstationary characteristics of neuronal sensitivity in higher visual areas. In this review, we summarize some of the existing GLM variations, with a focus on time-varying extensions. We highlight their applications to understanding neural representations in higher visual areas and decoding transient neuronal sensitivity as well as linking physiology to behavior through manipulation of model components. This time-varying class of statistical models provide valuable insights into the neural basis of various visual behaviors in higher visual areas and hold significant potential for uncovering the fundamental computational principles that govern neuronal processing underlying various behaviors in different regions of the brain.
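As a reminder of the time-invariant Poisson GLM that these time-varying extensions build on, here is a minimal sketch: a neuron whose rate follows a log-link in a single stimulus covariate, fit by gradient ascent on the Poisson log-likelihood. The stimulus, parameter values, and learning rate are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated neuron: Poisson spike counts with a log-link rate,
# rate_t = exp(b + w * s_t). All numbers are invented for illustration.
T = 5000
s = rng.normal(size=T)
b_true, w_true = 0.2, 0.8
y = rng.poisson(np.exp(b_true + w_true * s))

# Maximum likelihood by gradient ascent on the (concave) log-likelihood
# LL(b, w) = sum_t [ y_t * (b + w * s_t) - exp(b + w * s_t) ].
b, w = 0.0, 0.0
lr = 5e-5
for _ in range(3000):
    resid = y - np.exp(b + w * s)   # dLL/d(eta_t), eta_t = b + w * s_t
    b += lr * resid.sum()
    w += lr * (resid * s).sum()
```

Time-varying variants of the kind reviewed here would let b and w change across time (e.g., via basis functions or state-space priors) rather than stay fixed.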
Affiliation(s)
- Geyu Weng: Department of Biomedical Engineering and Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Kelsey Clark: Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Amir Akbarian: Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Behrad Noudoost: Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Neda Nategh: Department of Ophthalmology and Visual Sciences and Department of Electrical and Computer Engineering, University of Utah, Salt Lake City, UT, United States
22
Tolooshams B, Matias S, Wu H, Temereanca S, Uchida N, Murthy VN, Masset P, Ba D. Interpretable deep learning for deconvolutional analysis of neural signals. bioRxiv 2024:2024.01.05.574379. [PMID: 38260512 PMCID: PMC10802267 DOI: 10.1101/2024.01.05.574379] [Indexed: 01/24/2024]
Abstract
The widespread adoption of deep learning to build models that capture the dynamics of neural populations is typically based on "black-box" approaches that lack an interpretable link between neural activity and function. Here, we propose to apply algorithm unrolling, a method for interpretable deep learning, to design the architecture of sparse deconvolutional neural networks and obtain a direct interpretation of network weights in relation to stimulus-driven single-neuron activity through a generative model. We characterize our method, referred to as deconvolutional unrolled neural learning (DUNL), and show its versatility by applying it to deconvolve single-trial local signals across multiple brain areas and recording modalities. To exemplify use cases of our decomposition method, we uncover multiplexed salience and reward prediction error signals from midbrain dopamine neurons in an unbiased manner, perform simultaneous event detection and characterization in somatosensory thalamus recordings, and characterize the responses of neurons in the piriform cortex. Our work leverages the advances in interpretable deep learning to gain a mechanistic understanding of neural dynamics.
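Algorithm unrolling starts from an iterative sparse solver and reinterprets each iteration as a network layer. Below is the generic starting point, plain ISTA on a toy sparse-deconvolution problem; it is not the DUNL architecture itself, and the kernel, penalty, and spike amplitudes are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy problem: observed trace y = h * x + noise, with x a sparse "spike" code.
T, K = 200, 9
h = np.exp(-np.arange(K) / 3.0)   # hypothetical causal kernel
x_true = np.zeros(T)
x_true[[30, 90, 150]] = [1.5, 2.0, 1.0]

H = np.zeros((T, T))              # convolution written as a dense matrix
for j in range(T):
    end = min(T, j + K)
    H[j:end, j] = h[:end - j]
y = H @ x_true + 0.01 * rng.normal(size=T)

# ISTA: gradient step on 0.5*||y - Hx||^2, then soft-thresholding.
lam = 0.05
L = np.linalg.norm(H, 2) ** 2     # Lipschitz constant of the gradient
x = np.zeros(T)
for _ in range(200):
    x = x + H.T @ (y - H @ x) / L
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)
```

In an unrolled network, a fixed number of these iterations become layers, with the kernel and threshold made learnable end to end; that tie back to the generative model y = h * x is what gives the learned weights a direct interpretation.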
Affiliation(s)
- Bahareh Tolooshams
- Center for Brain Science, Harvard University, Cambridge MA, 02138
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge MA, 02138
- Computing + Mathematical Sciences, California Institute of Technology, Pasadena, CA, 91125
| | - Sara Matias
- Center for Brain Science, Harvard University, Cambridge MA, 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge MA, 02138
| | - Hao Wu
- Center for Brain Science, Harvard University, Cambridge MA, 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge MA, 02138
- Simona Temereanca
- Carney Institute for Brain Science, Brown University, Providence, RI, 02906
- Naoshige Uchida
- Center for Brain Science, Harvard University, Cambridge MA, 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge MA, 02138
- Venkatesh N. Murthy
- Center for Brain Science, Harvard University, Cambridge MA, 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge MA, 02138
- Paul Masset
- Center for Brain Science, Harvard University, Cambridge MA, 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge MA, 02138
- Department of Psychology, McGill University, Montréal QC, H3A 1G1
- Demba Ba
- Center for Brain Science, Harvard University, Cambridge MA, 02138
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge MA, 02138
- Kempner Institute for the Study of Natural & Artificial Intelligence, Harvard University, Cambridge MA, 02138
23
Chen Y, Chien J, Dai B, Lin D, Chen ZS. Identifying behavioral links to neural dynamics of multifiber photometry recordings in a mouse social behavior network. bioRxiv 2024:2023.12.25.573308. [PMID: 38234793] [PMCID: PMC10793434] [DOI: 10.1101/2023.12.25.573308]
Abstract
Distributed hypothalamic-midbrain neural circuits orchestrate complex behavioral responses during social interactions. How population-averaged neural activity measured by multi-fiber photometry (MFP) for calcium fluorescence signals correlates with social behaviors is a fundamental question. We propose a state-space analysis framework to characterize mouse MFP data based on dynamic latent variable models, which include a continuous-state linear dynamical system (LDS) and a discrete-state hidden semi-Markov model (HSMM). We validate these models on extensive MFP recordings during aggressive and mating behaviors in male-male and male-female interactions, respectively. Our results show that these models are capable of capturing both temporal behavioral structure and associated neural states. Overall, these analysis approaches provide an unbiased strategy to examine neural dynamics underlying social behaviors and reveal mechanistic insights into the relevant networks.
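The continuous-state LDS in this framework can be illustrated with a minimal sketch (illustrative only, not the authors' code; all parameter values and names below are assumptions): a one-dimensional latent state driving noisy, photometry-like observations, recovered with a causal Kalman filter.

```python
# Toy sketch of a continuous-state linear dynamical system (LDS):
#   x_t = a * x_{t-1} + w_t   (latent state),   y_t = c * x_t + v_t   (observation).
# Parameters and dimensions are illustrative, not taken from the paper.
import numpy as np

def simulate_lds(a, c, q, r, T, seed=0):
    """Simulate T steps of a 1-D LDS with process noise q and observation noise r."""
    rng = np.random.default_rng(seed)
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = a * x[t - 1] + rng.normal(0.0, np.sqrt(q))
    y = c * x + rng.normal(0.0, np.sqrt(r), size=T)
    return x, y

def kalman_filter(y, a, c, q, r, x0=0.0, p0=1.0):
    """Causal Kalman filter: returns filtered state means and variances."""
    T = len(y)
    xf, pf = np.zeros(T), np.zeros(T)
    x_pred, p_pred = x0, p0
    for t in range(T):
        if t > 0:
            x_pred = a * xf[t - 1]               # predict from previous estimate
            p_pred = a * a * pf[t - 1] + q
        k = p_pred * c / (c * c * p_pred + r)    # Kalman gain
        xf[t] = x_pred + k * (y[t] - c * x_pred) # correct with the observation
        pf[t] = (1.0 - k * c) * p_pred
    return xf, pf
```

With a slow latent state (a close to 1) and noisy observations, the filtered estimate tracks the latent state far better than the raw signal does, which is the basic payoff of a state-space model over per-timepoint analysis.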
Affiliation(s)
- Yibo Chen
- Department of Psychiatry, Department of Neuroscience and Physiology, New York University School of Medicine, New York, NY, USA
- Program in Artificial Intelligence, University of Science and Technology of China, Hefei, Anhui, China
- Jonathan Chien
- Department of Psychiatry, Department of Neuroscience and Physiology, New York University School of Medicine, New York, NY, USA
- Bing Dai
- Neuroscience Institute, New York University Grossman School of Medicine, New York, NY, USA
- Dayu Lin
- Department of Psychiatry, Department of Neuroscience and Physiology, New York University School of Medicine, New York, NY, USA
- Neuroscience Institute, New York University Grossman School of Medicine, New York, NY, USA
- Center for Neural Science, New York University, New York, NY, USA
- Zhe Sage Chen
- Department of Psychiatry, Department of Neuroscience and Physiology, New York University School of Medicine, New York, NY, USA
- Neuroscience Institute, New York University Grossman School of Medicine, New York, NY, USA
- Department of Biomedical Engineering, NYU Tandon School of Engineering, Brooklyn, NY, USA
24
Love K, Cao D, Chang JC, Dal'Bello LR, Ma X, O'Shea DJ, Schone HR, Shahbazi M, Smoulder A. Highlights from the 32nd Annual Meeting of the Society for the Neural Control of Movement. J Neurophysiol 2024; 131:75-87. [PMID: 38057264] [DOI: 10.1152/jn.00428.2023]
Affiliation(s)
- Kassia Love
- Massachusetts Eye and Ear, Boston, Massachusetts, United States
- Di Cao
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, Maryland, United States
- Center for Movement Studies, Kennedy Krieger Institute, Baltimore, Maryland, United States
- Joanna C Chang
- Department of Bioengineering, Imperial College London, London, United Kingdom
- Lucas R Dal'Bello
- Laboratory of Neuromotor Physiology, IRCCS Fondazione Santa Lucia, Rome, Italy
- Xuan Ma
- Department of Neuroscience, Northwestern University, Chicago, Illinois, United States
- Daniel J O'Shea
- Department of Bioengineering, Stanford University, Stanford, California, United States
- Hunter R Schone
- Rehabilitation and Neural Engineering Laboratory, University of Pittsburgh, Pittsburgh, Pennsylvania, United States
- Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, Pennsylvania, United States
- Mahdiyar Shahbazi
- Western Institute for Neuroscience, Western University, London, Ontario, Canada
- Adam Smoulder
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
- Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania, United States
25
A neural network that enables flexible nonlinear inference from neural population activity. Nat Biomed Eng 2024; 8:9-10. [PMID: 38086959] [DOI: 10.1038/s41551-023-01111-4]
26
Abbaspourazad H, Erturk E, Pesaran B, Shanechi MM. Dynamical flexible inference of nonlinear latent factors and structures in neural population activity. Nat Biomed Eng 2024; 8:85-108. [PMID: 38082181] [DOI: 10.1038/s41551-023-01106-1]
Abstract
Modelling the spatiotemporal dynamics in the activity of neural populations while also enabling their flexible inference is hindered by the complexity and noisiness of neural observations. Here we show that the lower-dimensional nonlinear latent factors and latent structures can be computationally modelled in a manner that allows for flexible inference causally, non-causally and in the presence of missing neural observations. To enable flexible inference, we developed a neural network that separates the model into jointly trained manifold and dynamic latent factors such that nonlinearity is captured through the manifold factors and the dynamics can be modelled in tractable linear form on this nonlinear manifold. We show that the model, which we named 'DFINE' (for 'dynamical flexible inference for nonlinear embeddings'), achieves flexible inference in simulations of nonlinear dynamics and across neural datasets representing a diversity of brain regions and behaviours. Compared with earlier neural-network models, DFINE enables flexible inference, better predicts neural activity and behaviour, and better captures the latent neural manifold structure. DFINE may advance the development of neurotechnology and investigations in neuroscience.
Affiliation(s)
- Hamidreza Abbaspourazad
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Eray Erturk
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Bijan Pesaran
- Departments of Neurosurgery, Neuroscience, and Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA.
- Thomas Lord Department of Computer Science, Alfred E. Mann Department of Biomedical Engineering, Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, USA.
27
Dyer EL, Kording K. Why the simplest explanation isn't always the best. Proc Natl Acad Sci U S A 2023; 120:e2319169120. [PMID: 38117857] [PMCID: PMC10756184] [DOI: 10.1073/pnas.2319169120]
Affiliation(s)
- Eva L. Dyer
- Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA 30332
- Konrad Kording
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104
28
Wang JH, Tsin D, Engel TA. Predictive variational autoencoder for learning robust representations of time-series data. ArXiv 2023:arXiv:2312.06932v1. [PMID: 38168462] [PMCID: PMC10760197]
Abstract
Variational autoencoders (VAEs) have been used extensively to discover low-dimensional latent factors governing neural activity and animal behavior. However, without careful model selection, the uncovered latent factors may reflect noise in the data rather than true underlying features, rendering such representations unsuitable for scientific interpretation. Existing solutions to this problem involve introducing additional measured variables or data augmentations specific to a particular data type. We propose a VAE architecture that predicts the next point in time and show that it mitigates the learning of spurious features. In addition, we introduce a model selection metric based on smoothness over time in the latent space. We show that, together, these two constraints encouraging VAEs to be smooth over time produce robust latent representations and faithfully recover latent factors on synthetic datasets.
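The latent-smoothness model-selection idea can be sketched as a simple score (an illustrative stand-in; the paper's exact metric may be defined differently): the mean squared step between consecutive latent states, normalized by the trajectory's variance so the score is scale-invariant.

```python
# Hedged sketch of a smoothness-over-time score for latent trajectories.
# Lower values mean smoother trajectories; dividing by total variance makes
# the score invariant to rescaling the latent space.
import numpy as np

def smoothness_score(z):
    """z: (T, d) array of latent states over time. Returns a scalar score."""
    z = np.asarray(z, dtype=float)
    step = np.mean(np.sum(np.diff(z, axis=0) ** 2, axis=-1))  # mean squared step
    total_var = np.sum(np.var(z, axis=0)) + 1e-12             # guard against 0/0
    return step / total_var

# A slowly rotating 2-D latent should score much smoother than white noise.
t = np.linspace(0.0, 4.0 * np.pi, 500)
smooth_z = np.stack([np.sin(t), np.cos(t)], axis=1)
noisy_z = np.random.default_rng(0).normal(size=(500, 2))
```

In a model-selection loop, one would compute such a score for each trained VAE and prefer candidates whose latent trajectories are smoother over time.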
Affiliation(s)
- Julia H Wang
- Cold Spring Harbor Laboratory School of Biological Sciences, Cold Spring Harbor Laboratory, Cold Spring Harbor, New York, USA
- Dexter Tsin
- Princeton Neuroscience Institute, Princeton University, Princeton, New Jersey, USA
- Tatiana A Engel
- Princeton Neuroscience Institute, Princeton University, Princeton, New Jersey, USA
29
Petkoski S. On the structure function dichotomy: A perspective from human brain network modeling. Comment on "Structure and function in artificial, zebrafish and human neural networks" by Peng Ji et al. Phys Life Rev 2023; 47:165-167. [PMID: 37918193] [DOI: 10.1016/j.plrev.2023.10.012]
Affiliation(s)
- Spase Petkoski
- Aix Marseille Univ, INSERM, INS, Inst Neurosci Syst, Marseille, France.
30
Zhang Y, Ge F, Lin X, Xue J, Song Y, Xie H, He Y. Extract latent features of single-particle trajectories with historical experience learning. Biophys J 2023; 122:4451-4466. [PMID: 37885178] [PMCID: PMC10698327] [DOI: 10.1016/j.bpj.2023.10.023]
Abstract
Single-particle tracking has enabled real-time, in situ quantitative studies of complex systems. However, inferring dynamic state changes from noisy, undersampled trajectories remains challenging. Here, we introduce a data-driven method for extracting features of subtrajectories with historical experience learning (Deep-SEES), where a single-particle tracking analysis pipeline based on a self-supervised architecture automatically searches for the latent space, allowing effective segmentation of the underlying states from noisy trajectories without prior knowledge of the particle dynamics. We validated our method on a variety of noisy simulated and experimental data. Our results showed that the method can faithfully capture both stable states and their dynamic switching. In highly random systems, our method outperformed commonly used unsupervised methods in inferring motion states, which is important for understanding nanoparticles interacting with living cell membranes, active enzymes, and liquid-liquid phase separation. Self-generating latent features of trajectories could potentially improve the understanding, estimation, and prediction of many complex systems.
Affiliation(s)
- Yongyu Zhang
- Department of Chemistry, Tsinghua University, Beijing, P.R. China
- Feng Ge
- Department of Chemistry, Tsinghua University, Beijing, P.R. China
- Xijian Lin
- Department of Chemistry, Tsinghua University, Beijing, P.R. China
- Jianfeng Xue
- Department of Chemistry, Tsinghua University, Beijing, P.R. China
- Yuxin Song
- Department of Chemistry, Tsinghua University, Beijing, P.R. China
- Hao Xie
- Department of Automation, Tsinghua University, Beijing, P.R. China.
- Yan He
- Department of Chemistry, Tsinghua University, Beijing, P.R. China.
31
Luo TZ, Kim TD, Gupta D, Bondy AG, Kopec CD, Elliot VA, DePasquale B, Brody CD. Transitions in dynamical regime and neural mode underlie perceptual decision-making. bioRxiv 2023:2023.10.15.562427. [PMID: 37904994] [PMCID: PMC10614809] [DOI: 10.1101/2023.10.15.562427]
Abstract
Perceptual decision-making is the process by which an animal uses sensory stimuli to choose an action or mental proposition. This process is thought to be mediated by neurons organized as attractor networks [1,2]. However, whether attractor dynamics underlie decision behavior and the complex neuronal responses remains unclear. Here we use an unsupervised, deep learning-based method to discover decision-related dynamics from the simultaneous activity of neurons in frontal cortex and striatum of rats while they accumulate pulsatile auditory evidence. We show that contrary to prevailing hypotheses, attractors play a role only after a transition from a regime in the dynamics that is strongly driven by inputs to one dominated by the intrinsic dynamics. The initial regime mediates evidence accumulation, and the subsequent intrinsic-dominant regime subserves decision commitment. This regime transition is coupled to a rapid reorganization in the representation of the decision process in the neural population (a change in the "neural mode" along which the process develops). A simplified model approximating the coupled transition in the dynamics and neural mode allows inferring, from each trial's neural activity, the internal decision commitment time in that trial, and captures diverse and complex single-neuron temporal profiles, such as ramping and stepping [3-5]. It also captures trial-averaged curved trajectories [6-8], and reveals distinctions between brain regions. Our results show that the formation of a perceptual choice involves a rapid, coordinated transition in both the dynamical regime and the neural mode of the decision process, and suggest pairing deep learning and parsimonious models as a promising approach for understanding complex data.
32
Kim TD, Luo TZ, Can T, Krishnamurthy K, Pillow JW, Brody CD. Flow-field inference from neural data using deep recurrent networks. bioRxiv 2023:2023.11.14.567136. [PMID: 38014290] [PMCID: PMC10680687] [DOI: 10.1101/2023.11.14.567136]
Abstract
Computations involved in processes such as decision-making, working memory, and motor control are thought to emerge from the dynamics governing the collective activity of neurons in large populations. But the estimation of these dynamics remains a significant challenge. Here we introduce Flow-field Inference from Neural Data using deep Recurrent networks (FINDR), an unsupervised deep learning method that can infer low-dimensional nonlinear stochastic dynamics underlying neural population activity. Using population spike train data from frontal brain regions of rats performing an auditory decision-making task, we demonstrate that FINDR outperforms existing methods in capturing the heterogeneous responses of individual neurons. We further show that FINDR can discover interpretable low-dimensional dynamics when it is trained to disentangle task-relevant and irrelevant components of the neural population activity. Importantly, the low-dimensional nature of the learned dynamics allows for explicit visualization of flow fields and attractor structures. We suggest FINDR as a powerful method for revealing the low-dimensional task-relevant dynamics of neural populations and their associated computations.
Affiliation(s)
- Thomas Zhihao Luo
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Tankut Can
- School of Natural Sciences, Institute for Advanced Study, Princeton, NJ
- Kamesh Krishnamurthy
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Joseph Henry Laboratories of Physics, Princeton University, Princeton, NJ
- Jonathan W Pillow
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Carlos D Brody
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Howard Hughes Medical Institute, Princeton University, Princeton, NJ
33
Durstewitz D, Koppe G, Thurm MI. Reconstructing computational system dynamics from neural data with recurrent neural networks. Nat Rev Neurosci 2023; 24:693-710. [PMID: 37794121] [DOI: 10.1038/s41583-023-00740-7]
Abstract
Computational models in neuroscience usually take the form of systems of differential equations. The behaviour of such systems is the subject of dynamical systems theory. Dynamical systems theory provides a powerful mathematical toolbox for analysing neurobiological processes and has been a mainstay of computational neuroscience for decades. Recently, recurrent neural networks (RNNs) have become a popular machine learning tool for studying the non-linear dynamics of neural and behavioural processes by emulating an underlying system of differential equations. RNNs have been routinely trained on similar behavioural tasks to those used for animal subjects to generate hypotheses about the underlying computational mechanisms. By contrast, RNNs can also be trained on the measured physiological and behavioural data, thereby directly inheriting their temporal and geometrical properties. In this way they become a formal surrogate for the experimentally probed system that can be further analysed, perturbed and simulated. This powerful approach is called dynamical system reconstruction. In this Perspective, we focus on recent trends in artificial intelligence and machine learning in this exciting and rapidly expanding field, which may be less well known in neuroscience. We discuss formal prerequisites, different model architectures and training approaches for RNN-based dynamical system reconstructions, ways to evaluate and validate model performance, how to interpret trained models in a neuroscience context, and current challenges.
Affiliation(s)
- Daniel Durstewitz
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany.
- Interdisciplinary Center for Scientific Computing, Heidelberg University, Heidelberg, Germany.
- Faculty of Physics and Astronomy, Heidelberg University, Heidelberg, Germany.
- Georgia Koppe
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Dept. of Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Hector Institute for Artificial Intelligence in Psychiatry, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Max Ingo Thurm
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
34
Boucher PO, Wang T, Carceroni L, Kane G, Shenoy KV, Chandrasekaran C. Initial conditions combine with sensory evidence to induce decision-related dynamics in premotor cortex. Nat Commun 2023; 14:6510. [PMID: 37845221] [PMCID: PMC10579235] [DOI: 10.1038/s41467-023-41752-2]
Abstract
We used a dynamical systems perspective to understand decision-related neural activity, a fundamentally unresolved problem. This perspective posits that time-varying neural activity is described by a state equation with an initial condition and evolves in time by combining, at each time step, recurrent activity and inputs. We hypothesized various dynamical mechanisms of decisions, simulated them in models to derive predictions, and evaluated these predictions by examining firing rates of neurons in the dorsal premotor cortex (PMd) of monkeys performing a perceptual decision-making task. Prestimulus neural activity (i.e., the initial condition) predicted poststimulus neural trajectories and covaried with reaction time (RT) and the outcome of the previous trial, but not with choice. Poststimulus dynamics depended on both the sensory evidence and the initial condition, with easier stimuli and fast initial conditions leading to the fastest choice-related dynamics. Together, these results suggest that initial conditions combine with sensory evidence to induce decision-related dynamics in PMd.
Affiliation(s)
- Pierre O Boucher
- Department of Biomedical Engineering, Boston University, Boston, 02115, MA, USA
- Tian Wang
- Department of Biomedical Engineering, Boston University, Boston, 02115, MA, USA
- Laura Carceroni
- Undergraduate Program in Neuroscience, Boston University, Boston, 02115, MA, USA
- Gary Kane
- Department of Psychological and Brain Sciences, Boston University, Boston, 02115, MA, USA
- Krishna V Shenoy
- Department of Electrical Engineering, Stanford University, Stanford, 94305, CA, USA
- Department of Neurobiology, Stanford University, Stanford, 94305, CA, USA
- Howard Hughes Medical Institute, HHMI, Chevy Chase, 20815-6789, MD, USA
- Department of Bioengineering, Stanford University, Stanford, 94305, CA, USA
- Stanford Neurosciences Institute, Stanford University, Stanford, 94305, CA, USA
- Bio-X Program, Stanford University, Stanford, 94305, CA, USA
- Department of Neurosurgery, Stanford University, Stanford, 94305, CA, USA
- Chandramouli Chandrasekaran
- Department of Biomedical Engineering, Boston University, Boston, 02115, MA, USA.
- Department of Psychological and Brain Sciences, Boston University, Boston, 02115, MA, USA.
- Center for Systems Neuroscience, Boston University, Boston, 02115, MA, USA.
- Department of Anatomy & Neurobiology, Boston University, Boston, 02118, MA, USA.
35
Morrell M, Nemenman I, Sederberg AJ. Neural criticality from effective latent variables. ArXiv 2023:arXiv:2301.00759v3. [PMID: 36713239] [PMCID: PMC9882570]
Abstract
Observations of power laws in neural activity data have raised the intriguing notion that brains may operate in a critical state. One example of this critical state is "avalanche criticality," which has been observed in various systems, including cultured neurons, zebrafish, rodent cortex, and human EEG. More recently, power laws were also observed in neural populations in the mouse under an activity coarse-graining procedure, and they were explained as a consequence of the neural activity being coupled to multiple latent dynamical variables. An intriguing possibility is that avalanche criticality emerges due to a similar mechanism. Here, we determine the conditions under which latent dynamical variables give rise to avalanche criticality. We find that populations coupled to multiple latent variables produce critical behavior across a broader parameter range than those coupled to a single, quasi-static latent variable, but in both cases, avalanche criticality is observed without fine-tuning of model parameters. We identify two regimes of avalanches, both critical but differing in the amount of information carried about the latent variable. Our results suggest that avalanche criticality arises in neural systems in which activity is effectively modeled as a population driven by a few dynamical variables and these variables can be inferred from the population activity.
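As a hedged illustration of the avalanche statistics this abstract builds on (a standard construction in the criticality literature, not code from the paper): an avalanche is a contiguous run of suprathreshold population activity, and the distribution of avalanche sizes is what is examined for power-law scaling.

```python
# Illustrative avalanche extraction from binned population activity.
# An avalanche spans consecutive bins with activity above the threshold;
# its size is the total activity summed over those bins.
def avalanche_sizes(activity, theta=0):
    sizes, current = [], 0
    for a in activity:
        if a > theta:
            current += a               # extend the ongoing avalanche
        else:
            if current:
                sizes.append(current)  # avalanche just ended
            current = 0
    if current:
        sizes.append(current)          # trailing avalanche at end of series
    return sizes
```

A criticality analysis would then fit a power law to the resulting size distribution; this function only performs the segmentation step.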
Affiliation(s)
- Ilya Nemenman
- Department of Physics, Department of Biology, Initiative in Theory and Modeling of Living Systems, Emory University
- Audrey J. Sederberg
- Department of Neuroscience, University of Minnesota Medical School
- School of Psychology and School of Physics, Georgia Institute of Technology (current)
36
Winner TS, Rosenberg MC, Jain K, Kesar TM, Ting LH, Berman GJ. Discovering individual-specific gait signatures from data-driven models of neuromechanical dynamics. PLoS Comput Biol 2023; 19:e1011556. [PMID: 37889927] [PMCID: PMC10610102] [DOI: 10.1371/journal.pcbi.1011556]
Abstract
Locomotion results from the interactions of highly nonlinear neural and biomechanical dynamics. Accordingly, understanding gait dynamics across behavioral conditions and individuals based on detailed modeling of the underlying neuromechanical system has proven difficult. Here, we develop a data-driven and generative modeling approach that recapitulates the dynamical features of gait behaviors to enable more holistic and interpretable characterizations and comparisons of gait dynamics. Specifically, gait dynamics of multiple individuals are predicted by a dynamical model that defines a common, low-dimensional latent space to compare group and individual differences. We find that highly individualized dynamics (i.e., gait signatures) for healthy older adults and stroke survivors during treadmill walking are conserved across gait speed. Gait signatures further reveal individual differences in gait dynamics, even in individuals with similar functional deficits. Moreover, components of gait signatures can be biomechanically interpreted and manipulated to reveal their relationships to observed spatiotemporal joint coordination patterns. Lastly, the gait dynamics model can predict the time evolution of joint coordination based on an initial static posture. Our gait signatures framework thus provides a generalizable, holistic method for characterizing and predicting cyclic, dynamical motor behavior that may generalize across species, pathologies, and gait perturbations.
Affiliation(s)
- Taniel S. Winner
- W.H. Coulter Dept. Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, Georgia, United States of America
- Michael C. Rosenberg
- W.H. Coulter Dept. Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, Georgia, United States of America
- Kanishk Jain
- Department of Physics, Emory University, Atlanta, Georgia, United States of America
- Trisha M. Kesar
- Department of Rehabilitation Medicine, Division of Physical Therapy, Emory University, Atlanta, Georgia, United States of America
- Lena H. Ting
- W.H. Coulter Dept. Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, Georgia, United States of America
- Department of Rehabilitation Medicine, Division of Physical Therapy, Emory University, Atlanta, Georgia, United States of America
- Gordon J. Berman
- Department of Biology, Emory University, Atlanta, Georgia, United States of America
37
Warsi NM, Wong SM, Germann J, Boutet A, Arski ON, Anderson R, Erdman L, Yan H, Suresh H, Gouveia FV, Loh A, Elias GJB, Kerr E, Smith ML, Ochi A, Otsubo H, Sharma R, Jain P, Donner E, Lozano AM, Snead OC, Ibrahim GM. Dissociable default-mode subnetworks subserve childhood attention and cognitive flexibility: Evidence from deep learning and stereotactic electroencephalography. Neural Netw 2023; 167:827-837. [PMID: 37741065] [DOI: 10.1016/j.neunet.2023.07.019]
Abstract
Cognitive flexibility encompasses the ability to efficiently shift focus and forms a critical component of goal-directed attention. The neural substrates of this process are incompletely understood in part due to difficulties in sampling the involved circuitry. We leverage stereotactic intracranial recordings to directly resolve local-field potentials from otherwise inaccessible structures to study moment-to-moment attentional activity in children with epilepsy performing a flexible attentional task. On an individual subject level, we employed deep learning to decode neural features predictive of task performance indexed by single-trial reaction time. These models were subsequently aggregated across participants to identify predictive brain regions based on AAL atlas and FIND functional network parcellations. Through this approach, we show that fluctuations in beta (12-30 Hz) and gamma (30-80 Hz) power reflective of increased top-down attentional control and local neuronal processing within relevant large-scale networks can accurately predict single-trial task performance. We next performed connectomic profiling of these highly predictive nodes to examine task-related engagement of distributed functional networks, revealing exclusive recruitment of the dorsal default mode network during shifts in attention. The identification of distinct substreams within the default mode system supports a key role for this network in cognitive flexibility and attention in children. Furthermore, convergence of our results onto consistent functional networks despite significant inter-subject variability in electrode implantations supports a broader role for deep learning applied to intracranial electrodes in the study of human attention.
Affiliation(s)
- Nebras M Warsi
- Division of Neurosurgery, The Hospital for Sick Children, 555 University Ave., Toronto, Ontario, Canada; Department of Biomedical Engineering, University of Toronto, Toronto, Ontario, Canada
- Simeon M Wong
- Department of Biomedical Engineering, University of Toronto, Toronto, Ontario, Canada; Program in Neuroscience and Mental Health, Hospital for Sick Children, Toronto, Ontario, Canada
- Jürgen Germann
- Division of Neurosurgery, Toronto Western Hospital, University Health Network, Toronto, Ontario, Canada
- Alexandre Boutet
- Division of Neurosurgery, Toronto Western Hospital, University Health Network, Toronto, Ontario, Canada; Joint Department of Medical Imaging, University of Toronto, Toronto, Ontario, Canada
- Olivia N Arski
- Program in Neuroscience and Mental Health, Hospital for Sick Children, Toronto, Ontario, Canada
- Lauren Erdman
- Vector Institute for Artificial Intelligence, University Health Network, Toronto, Ontario, Canada
- Han Yan
- Division of Neurosurgery, The Hospital for Sick Children, 555 University Ave., Toronto, Ontario, Canada
- Hrishikesh Suresh
- Division of Neurosurgery, The Hospital for Sick Children, 555 University Ave., Toronto, Ontario, Canada; Department of Biomedical Engineering, University of Toronto, Toronto, Ontario, Canada
- Aaron Loh
- Division of Neurosurgery, Toronto Western Hospital, University Health Network, Toronto, Ontario, Canada; Joint Department of Medical Imaging, University of Toronto, Toronto, Ontario, Canada
- Gavin J B Elias
- Division of Neurosurgery, Toronto Western Hospital, University Health Network, Toronto, Ontario, Canada; Joint Department of Medical Imaging, University of Toronto, Toronto, Ontario, Canada
- Elizabeth Kerr
- Department of Psychology, The Hospital for Sick Children, University of Toronto, 555 University Ave., Toronto, Ontario, Canada, M5G 1X8
- Mary Lou Smith
- Department of Psychology, The Hospital for Sick Children, University of Toronto, 555 University Ave., Toronto, Ontario, Canada, M5G 1X8
- Ayako Ochi
- Division of Neurosurgery, The Hospital for Sick Children, 555 University Ave., Toronto, Ontario, Canada
- Hiroshi Otsubo
- Division of Neurosurgery, The Hospital for Sick Children, 555 University Ave., Toronto, Ontario, Canada
- Roy Sharma
- Division of Neurosurgery, The Hospital for Sick Children, 555 University Ave., Toronto, Ontario, Canada
- Puneet Jain
- Division of Neurosurgery, The Hospital for Sick Children, 555 University Ave., Toronto, Ontario, Canada
- Elizabeth Donner
- Division of Neurosurgery, The Hospital for Sick Children, 555 University Ave., Toronto, Ontario, Canada
- Andres M Lozano
- Division of Neurosurgery, Toronto Western Hospital, University Health Network, Toronto, Ontario, Canada
- O Carter Snead
- Division of Neurosurgery, The Hospital for Sick Children, 555 University Ave., Toronto, Ontario, Canada
- George M Ibrahim
- Division of Neurosurgery, The Hospital for Sick Children, 555 University Ave., Toronto, Ontario, Canada; Department of Biomedical Engineering, University of Toronto, Toronto, Ontario, Canada; Program in Neuroscience and Mental Health, Hospital for Sick Children, Toronto, Ontario, Canada.
38
De A, Chaudhuri R. Common population codes produce extremely nonlinear neural manifolds. Proc Natl Acad Sci U S A 2023; 120:e2305853120. [PMID: 37733742 PMCID: PMC10523500 DOI: 10.1073/pnas.2305853120]
Abstract
Populations of neurons represent sensory, motor, and cognitive variables via patterns of activity distributed across the population. The size of the population used to encode a variable is typically much greater than the dimension of the variable itself, and thus, the corresponding neural population activity occupies lower-dimensional subsets of the full set of possible activity states. Given population activity data with such lower-dimensional structure, a fundamental question asks how close the low-dimensional data lie to a linear subspace. The linearity or nonlinearity of the low-dimensional structure reflects important computational features of the encoding, such as robustness and generalizability. Moreover, identifying such linear structure underlies common data analysis methods such as Principal Component Analysis (PCA). Here, we show that for data drawn from many common population codes the resulting point clouds and manifolds are exceedingly nonlinear, with the dimension of the best-fitting linear subspace growing at least exponentially with the true dimension of the data. Consequently, linear methods like PCA fail dramatically at identifying the true underlying structure, even in the limit of arbitrarily many data points and no noise.
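The abstract's central claim, that data from a low-dimensional population code can require many linear dimensions, can be reproduced in miniature. The von Mises tuning curves, neuron count, and 95% variance threshold below are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_samples = 50, 400
theta = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)  # 1-D circular variable
prefs = rng.uniform(0, 2 * np.pi, n_neurons)                  # preferred angles
kappa = 8.0                                                   # tuning sharpness

# Noiseless population activity from von Mises tuning curves: the data
# manifold is intrinsically 1-D (a closed curve in 50-D firing-rate space).
X = np.exp(kappa * (np.cos(theta[:, None] - prefs[None, :]) - 1))

# Linear dimensionality via PCA: how many components capture 95% of variance?
Xc = X - X.mean(axis=0)
evals = np.linalg.svd(Xc, compute_uv=False) ** 2
var_ratio = np.cumsum(evals) / evals.sum()
linear_dim_95 = int(np.searchsorted(var_ratio, 0.95)) + 1
```

Even with no noise, sharp tuning spreads variance across many principal components, so the best-fitting linear subspace is far larger than the one-dimensional variable being encoded.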
Affiliation(s)
- Anandita De
- Center for Neuroscience, University of California, Davis, CA 95618
- Department of Physics, University of California, Davis, CA 95616
- Rishidev Chaudhuri
- Center for Neuroscience, University of California, Davis, CA 95618
- Department of Neurobiology, Physiology and Behavior, University of California, Davis, CA 95616
- Department of Mathematics, University of California, Davis, CA 95616
39
Jerjian SJ, Harsch DR, Fetsch CR. Self-motion perception and sequential decision-making: where are we heading? Philos Trans R Soc Lond B Biol Sci 2023; 378:20220333. [PMID: 37545301 PMCID: PMC10404932 DOI: 10.1098/rstb.2022.0333]
Abstract
To navigate and guide adaptive behaviour in a dynamic environment, animals must accurately estimate their own motion relative to the external world. This is a fundamentally multisensory process involving integration of visual, vestibular and kinesthetic inputs. Ideal observer models, paired with careful neurophysiological investigation, helped to reveal how visual and vestibular signals are combined to support perception of linear self-motion direction, or heading. Recent work has extended these findings by emphasizing the dimension of time, both with regard to stimulus dynamics and the trade-off between speed and accuracy. Both time and certainty (i.e. the degree of confidence in a multisensory decision) are essential to the ecological goals of the system: terminating a decision process is necessary for timely action, and predicting one's accuracy is critical for making multiple decisions in a sequence, as in navigation. Here, we summarize a leading model for multisensory decision-making, then show how the model can be extended to study confidence in heading discrimination. Lastly, we preview ongoing efforts to bridge self-motion perception and navigation per se, including closed-loop virtual reality and active self-motion. The design of unconstrained, ethologically inspired tasks, accompanied by large-scale neural recordings, holds promise for a deeper understanding of spatial perception and decision-making in the behaving animal. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
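The bounded-accumulation framework summarized here can be sketched with a generic drift-diffusion simulation. The single-accumulator form and all parameters below are illustrative, not the specific multisensory model reviewed in the article:

```python
import numpy as np

rng = np.random.default_rng(5)

def accumulate(drift, n_trials=1000, bound=1.0, dt=0.01, sigma=1.0):
    """Fraction of trials that terminate at the upper (correct) bound."""
    correct = 0
    for _ in range(n_trials):
        x = 0.0
        while abs(x) < bound:  # accumulate noisy evidence until a bound is hit
            x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        correct += x >= bound
    return correct / n_trials

acc_weak = accumulate(drift=0.2)    # weak sensory evidence
acc_strong = accumulate(drift=1.0)  # strong sensory evidence
```

Terminating at a bound is what makes the decision timely, and the state of the accumulator at termination is one common basis for modeling confidence.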
Affiliation(s)
- Steven J. Jerjian
- Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
- Devin R. Harsch
- Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
- Center for Neuroscience and Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Christopher R. Fetsch
- Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
40
Ye J, Collinger JL, Wehbe L, Gaunt R. Neural Data Transformer 2: Multi-context Pretraining for Neural Spiking Activity. bioRxiv 2023:2023.09.18.558113. [PMID: 37781630 PMCID: PMC10541112 DOI: 10.1101/2023.09.18.558113]
Abstract
The neural population spiking activity recorded by intracortical brain-computer interfaces (iBCIs) contains rich structure. Current models of such spiking activity are largely prepared for individual experimental contexts, restricting data volume to that collectable within a single session and limiting the effectiveness of deep neural networks (DNNs). The purported challenge in aggregating neural spiking data is the pervasiveness of context-dependent shifts in the neural data distributions. However, large-scale unsupervised pretraining by nature spans heterogeneous data, and has proven to be a fundamental recipe for successful representation learning across deep learning. We thus develop Neural Data Transformer 2 (NDT2), a spatiotemporal Transformer for neural spiking activity, and demonstrate that pretraining can leverage motor BCI datasets that span sessions, subjects, and experimental tasks. NDT2 enables rapid adaptation to novel contexts in downstream decoding tasks and opens the path to deployment of pretrained DNNs for iBCI control. Code: https://github.com/joel99/context_general_bci.
Affiliation(s)
- Joel Ye
- Rehab Neural Engineering Labs, University of Pittsburgh
- Neuroscience Institute, Carnegie Mellon University
- Center for the Neural Basis of Cognition, Pittsburgh
- Jennifer L. Collinger
- Rehab Neural Engineering Labs, University of Pittsburgh
- Center for the Neural Basis of Cognition, Pittsburgh
- Department of Physical Medicine and Rehabilitation, University of Pittsburgh
- Department of Bioengineering, University of Pittsburgh
- Department of Biomedical Engineering, Carnegie Mellon University
- Leila Wehbe
- Neuroscience Institute, Carnegie Mellon University
- Center for the Neural Basis of Cognition, Pittsburgh
- Machine Learning Department, Carnegie Mellon University
- Robert Gaunt
- Rehab Neural Engineering Labs, University of Pittsburgh
- Center for the Neural Basis of Cognition, Pittsburgh
- Department of Physical Medicine and Rehabilitation, University of Pittsburgh
- Department of Bioengineering, University of Pittsburgh
- Department of Biomedical Engineering, Carnegie Mellon University
41
Versteeg C, Sedler AR, McCart JD, Pandarinath C. Expressive dynamics models with nonlinear injective readouts enable reliable recovery of latent features from neural activity. ArXiv 2023:arXiv:2309.06402v1. [PMID: 37744459 PMCID: PMC10516113]
Abstract
The advent of large-scale neural recordings has enabled new approaches that aim to discover the computational mechanisms of neural circuits by understanding the rules that govern how their state evolves over time. While these neural dynamics cannot be directly measured, they can typically be approximated by low-dimensional models in a latent space. How these models represent the mapping from latent space to neural space can affect the interpretability of the latent representation. We show that typical choices for this mapping (e.g., linear or MLP) often lack the property of injectivity, meaning that changes in latent state are not obligated to affect activity in the neural space. During training, non-injective readouts incentivize the invention of dynamics that misrepresent the underlying system and the computation it performs. Combining our injective Flow readout with prior work on interpretable latent dynamics models, we created the Ordinary Differential equations autoencoder with Injective Nonlinear readout (ODIN), which learns to capture latent dynamical systems that are nonlinearly embedded into observed neural activity via an approximately injective nonlinear mapping. We show that ODIN can recover nonlinearly embedded systems from simulated neural activity, even when the nature of the system and embedding are unknown. Additionally, we show that ODIN enables the unsupervised recovery of underlying dynamical features (e.g., fixed points) and embedding geometry. When applied to biological neural recordings, ODIN can reconstruct neural activity with comparable accuracy to previous state-of-the-art methods while using substantially fewer latent dimensions. Overall, ODIN's accuracy in recovering ground-truth latent features and ability to accurately reconstruct neural activity with low dimensionality make it a promising method for distilling interpretable dynamics that can help explain neural computation.
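The injectivity issue is easy to demonstrate: a rank-deficient linear readout maps distinct latent states to identical neural activity, so changes in latent state need not affect the observations. A toy sketch (the numbers are arbitrary):

```python
import numpy as np

# A non-injective linear readout from a 2-D latent space to 3 neurons:
# the second latent dimension lies entirely in the readout's null space.
W = np.array([[1.0, 0.0],
              [2.0, 0.0],
              [3.0, 0.0]])
z1 = np.array([0.5, -1.0])
z2 = np.array([0.5, 4.0])    # differs from z1 only along the null direction
obs1, obs2 = W @ z1, W @ z2  # identical observed activity
```

An (approximately) injective readout rules this degeneracy out, which is the constraint ODIN places on the latent-to-neural mapping.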
Affiliation(s)
- Christopher Versteeg
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Andrew R Sedler
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Center for Machine Learning, Georgia Institute of Technology, Atlanta, GA, USA
- Jonathan D McCart
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Center for Machine Learning, Georgia Institute of Technology, Atlanta, GA, USA
- Chethan Pandarinath
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Center for Machine Learning, Georgia Institute of Technology, Atlanta, GA, USA
42
Kass RE, Bong H, Olarinre M, Xin Q, Urban KN. Identification of interacting neural populations: methods and statistical considerations. J Neurophysiol 2023; 130:475-496. [PMID: 37465897 PMCID: PMC10642974 DOI: 10.1152/jn.00131.2023]
Abstract
As improved recording technologies have created new opportunities for neurophysiological investigation, emphasis has shifted from individual neurons to multiple populations that form circuits, and it has become important to provide evidence of cross-population coordinated activity. We review various methods for doing so, placing them in six major categories while avoiding technical descriptions and instead focusing on high-level motivations and concerns. Our aim is to indicate what the methods can achieve and the circumstances under which they are likely to succeed. Toward this end, we include a discussion of four cross-cutting issues: the definition of neural populations, trial-to-trial variability and Poisson-like noise, time-varying dynamics, and causality.
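Among methods for evidencing cross-population coordination, one of the simplest is canonical correlation analysis. The sketch below (one shared latent signal, Gaussian noise; purely illustrative, not one of the review's specific recommendations) recovers a large first canonical correlation between two simulated populations:

```python
import numpy as np

rng = np.random.default_rng(4)
T = 500
shared = rng.standard_normal(T)  # latent signal common to both populations
pop_x = np.outer(shared, rng.standard_normal(4)) + 0.3 * rng.standard_normal((T, 4))
pop_y = np.outer(shared, rng.standard_normal(6)) + 0.3 * rng.standard_normal((T, 6))

def first_canonical_corr(X, Y):
    """First canonical correlation via QR orthogonalization of each block."""
    qx, _ = np.linalg.qr(X - X.mean(axis=0))
    qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(qx.T @ qy, compute_uv=False)[0]

rho = first_canonical_corr(pop_x, pop_y)
```

The review's cross-cutting concerns apply even to this toy case: the result depends on how the two "populations" are defined, and Poisson-like spiking noise would require emission models beyond the Gaussian assumption used here.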
Affiliation(s)
- Robert E Kass
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
- Department of Statistics & Data Science, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
- Heejong Bong
- Department of Statistics & Data Science, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
- Motolani Olarinre
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
- Department of Statistics & Data Science, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
- Qi Xin
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
- Department of Statistics & Data Science, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
- Konrad N Urban
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
- Department of Statistics & Data Science, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
43
Summerfield C, Miller K. Computational and systems neuroscience: The next 20 years. PLoS Biol 2023; 21:e3002306. [PMID: 37751414 PMCID: PMC10522016 DOI: 10.1371/journal.pbio.3002306]
Abstract
Over the past 20 years, neuroscience has been propelled forward by theory-driven experimentation. We consider the future outlook for the field in the age of big neural data and powerful artificial intelligence models.
Affiliation(s)
- Christopher Summerfield
- Google DeepMind, London, United Kingdom
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Kevin Miller
- Google DeepMind, London, United Kingdom
- Department of Ophthalmology, University College London, London, United Kingdom
44
Ma X, Rizzoglio F, Bodkin KL, Perreault E, Miller LE, Kennedy A. Using adversarial networks to extend brain computer interface decoding accuracy over time. eLife 2023; 12:e84296. [PMID: 37610305 PMCID: PMC10446822 DOI: 10.7554/elife.84296]
Abstract
Existing intracortical brain computer interfaces (iBCIs) transform neural activity into control signals capable of restoring movement to persons with paralysis. However, the accuracy of the 'decoder' at the heart of the iBCI typically degrades over time due to turnover of recorded neurons. To compensate, decoders can be recalibrated, but this requires the user to spend extra time and effort to provide the necessary data, then learn the new dynamics. As the recorded neurons change, one can think of the underlying movement intent signal being expressed in changing coordinates. If a mapping can be computed between the different coordinate systems, it may be possible to stabilize the original decoder's mapping from brain to behavior without recalibration. We previously proposed a method based on Generative Adversarial Networks (GANs), called 'Adversarial Domain Adaptation Network' (ADAN), which aligns the distributions of latent signals within underlying low-dimensional neural manifolds. However, we tested ADAN on only a very limited dataset. Here we propose a method based on Cycle-Consistent Adversarial Networks (Cycle-GAN), which aligns the distributions of the full-dimensional neural recordings. We tested both Cycle-GAN and ADAN on data from multiple monkeys and behaviors and compared them to a third, quite different method based on Procrustes alignment of axes provided by Factor Analysis. All three methods are unsupervised and require little data, making them practical in real life. Overall, Cycle-GAN had the best performance and was easier to train and more robust than ADAN, making it ideal for stabilizing iBCI systems over time.
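The Procrustes-based baseline mentioned here has a compact closed form. A minimal sketch with synthetic latents and a pure rotation between "days" (real recordings would first need dimensionality reduction such as the Factor Analysis step the abstract describes):

```python
import numpy as np

rng = np.random.default_rng(1)
latents_day0 = rng.standard_normal((200, 5))          # day-0 latent trajectories
R_true, _ = np.linalg.qr(rng.standard_normal((5, 5))) # unknown rotation of the axes
latents_day1 = latents_day0 @ R_true                  # same latents, new coordinates

# Orthogonal Procrustes: the R minimizing ||day1 @ R - day0||_F is U @ Vt,
# where U, Vt come from the SVD of day1.T @ day0.
U, _, Vt = np.linalg.svd(latents_day1.T @ latents_day0)
R = U @ Vt
aligned = latents_day1 @ R
err = np.linalg.norm(aligned - latents_day0)
```

Once aligned, the day-0 decoder can be applied to the day-1 latents without recalibration, which is the logic all three methods in the paper share.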
Affiliation(s)
- Xuan Ma
- Department of Neuroscience, Northwestern University, Chicago, United States
- Fabio Rizzoglio
- Department of Neuroscience, Northwestern University, Chicago, United States
- Kevin L Bodkin
- Department of Neuroscience, Northwestern University, Chicago, United States
- Eric Perreault
- Department of Biomedical Engineering, Northwestern University, Evanston, United States
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, United States
- Shirley Ryan AbilityLab, Chicago, United States
- Lee E Miller
- Department of Neuroscience, Northwestern University, Chicago, United States
- Department of Biomedical Engineering, Northwestern University, Evanston, United States
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, United States
- Shirley Ryan AbilityLab, Chicago, United States
- Ann Kennedy
- Department of Neuroscience, Northwestern University, Chicago, United States
45
Natraj N, Seko S, Abiri R, Yan H, Graham Y, Tu-Chan A, Chang EF, Ganguly K. Flexible regulation of representations on a drifting manifold enables long-term stable complex neuroprosthetic control. bioRxiv 2023:2023.08.11.551770. [PMID: 37645922 PMCID: PMC10462094 DOI: 10.1101/2023.08.11.551770]
Abstract
The nervous system needs to balance the stability of neural representations with plasticity. How stable the representations of simple actions are, particularly well-rehearsed actions in humans, and how they change in new contexts, remains unclear. Using an electrocorticography brain-computer interface (BCI), we found that the mesoscale manifold and relative representational distances for a repertoire of simple imagined movements were remarkably stable. Interestingly, however, the manifold's absolute location demonstrated day-to-day drift. Strikingly, representational statistics, especially variance, could be flexibly regulated to increase discernability during BCI control without somatotopic changes. Discernability strengthened with practice and was specific to the BCI, demonstrating remarkable contextual specificity. Accounting for drift, and leveraging the flexibility of representations, allowed neuroprosthetic control of a robotic arm and hand for over 7 months without recalibration. Our study offers insight into how electrocorticography can both track representational statistics across long periods and allow long-term complex neuroprosthetic control.
Affiliation(s)
- Nikhilesh Natraj
- Dept. of Neurology, Weill Institute for Neurosciences, University of California San Francisco, San Francisco, California, USA
- UCSF - Veteran Affairs Medical Center, San Francisco, California, USA
- Sarah Seko
- Dept. of Neurology, Weill Institute for Neurosciences, University of California San Francisco, San Francisco, California, USA
- UCSF - Veteran Affairs Medical Center, San Francisco, California, USA
- Reza Abiri
- Electrical, Computer and Biomedical Engineering, University of Rhode Island, Rhode Island, USA
- Hongyi Yan
- Dept. of Neurology, Weill Institute for Neurosciences, University of California San Francisco, San Francisco, California, USA
- UCSF - Veteran Affairs Medical Center, San Francisco, California, USA
- Yasmin Graham
- Dept. of Neurology, Weill Institute for Neurosciences, University of California San Francisco, San Francisco, California, USA
- UCSF - Veteran Affairs Medical Center, San Francisco, California, USA
- Adelyn Tu-Chan
- Dept. of Neurology, Weill Institute for Neurosciences, University of California San Francisco, San Francisco, California, USA
- UCSF - Veteran Affairs Medical Center, San Francisco, California, USA
- Edward F Chang
- Department of Neurological Surgery, Weill Institute for Neuroscience, University of California-San Francisco, San Francisco, California, USA
- Karunesh Ganguly
- Dept. of Neurology, Weill Institute for Neurosciences, University of California San Francisco, San Francisco, California, USA
- UCSF - Veteran Affairs Medical Center, San Francisco, California, USA
46
Ferré J, Rokem A, Buffalo EA, Kutz JN, Fairhall A. Non-Stationary Dynamic Mode Decomposition. bioRxiv 2023:2023.08.08.552333. [PMID: 37609201 PMCID: PMC10441341 DOI: 10.1101/2023.08.08.552333]
Abstract
Many physical processes display complex high-dimensional time-varying behavior, from global weather patterns to brain activity. An outstanding challenge is to express high dimensional data in terms of a dynamical model that reveals their spatiotemporal structure. Dynamic Mode Decomposition (DMD) is a means to achieve this goal, allowing the identification of key spatiotemporal modes through the diagonalization of a finite dimensional approximation of the Koopman operator. However, DMD methods apply best to time-translationally invariant or stationary data, while in many typical cases, dynamics vary across time and conditions. To capture this temporal evolution, we developed a method, Non-Stationary Dynamic Mode Decomposition (NS-DMD), that generalizes DMD by fitting global modulations of drifting spatiotemporal modes. This method accurately predicts the temporal evolution of modes in simulations and recovers previously known results from simpler methods. To demonstrate its properties, the method is applied to multi-channel recordings from an awake behaving non-human primate performing a cognitive task.
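For reference, exact DMD on a stationary system fits a linear operator mapping each snapshot to the next; NS-DMD generalizes this to drifting modes. A minimal stationary sketch on a toy 2-D damped rotation (not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(2)
theta, decay = 0.3, 0.95
A = decay * np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])  # damped rotation

traj = [rng.standard_normal(2)]
for _ in range(99):
    traj.append(A @ traj[-1])
X = np.array(traj).T                 # snapshot matrix, shape (2, 100)

# Exact DMD: least-squares operator mapping each snapshot to the next
X1, X2 = X[:, :-1], X[:, 1:]
A_hat = X2 @ np.linalg.pinv(X1)
eigs_hat = np.sort_complex(np.linalg.eigvals(A_hat))   # DMD eigenvalues
eigs_true = np.sort_complex(np.linalg.eigvals(A))
```

The recovered complex eigenvalues encode the oscillation frequency and decay rate of each mode; when those quantities drift over time, the stationary fit breaks down, which is the gap NS-DMD addresses.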
Affiliation(s)
- John Ferré
- Physics Department, University of Washington, Seattle, Washington 98195, USA
- Ariel Rokem
- Psychology Department and eScience Institute, University of Washington, Seattle, Washington 98195, USA
- Elizabeth A. Buffalo
- Department of Physiology and Biophysics, University of Washington School of Medicine, Washington National Primate Research Center, Seattle, Washington 98195, USA
- J. Nathan Kutz
- Applied Mathematics and Electrical and Computer Engineering Department, University of Washington, Seattle, Washington 98195, USA
- Adrienne Fairhall
- Physiology and Biophysics Department, University of Washington, Seattle, Washington 98195, USA
47
Ali YH, Bodkin K, Rigotti-Thompson M, Patel K, Card NS, Bhaduri B, Nason-Tomaszewski SR, Mifsud DM, Hou X, Nicolas C, Allcroft S, Hochberg LR, Yong NA, Stavisky SD, Miller LE, Brandman DM, Pandarinath C. BRAND: A platform for closed-loop experiments with deep network models. bioRxiv 2023:2023.08.08.552473. [PMID: 37609167 PMCID: PMC10441362 DOI: 10.1101/2023.08.08.552473]
Abstract
Artificial neural networks (ANNs) are state-of-the-art tools for modeling and decoding neural activity, but deploying them in closed-loop experiments with tight timing constraints is challenging due to their limited support in existing real-time frameworks. Researchers need a platform that fully supports high-level languages for running ANNs (e.g., Python and Julia) while maintaining support for languages that are critical for low-latency data acquisition and processing (e.g., C and C++). To address these needs, we introduce the Backend for Realtime Asynchronous Neural Decoding (BRAND). BRAND comprises Linux processes, termed nodes, which communicate with each other in a graph via streams of data. Its asynchronous design allows for acquisition, control, and analysis to be executed in parallel on streams of data that may operate at different timescales. BRAND uses Redis to send data between nodes, which enables fast inter-process communication and supports 54 different programming languages. Thus, developers can easily deploy existing ANN models in BRAND with minimal implementation changes. In our tests, BRAND achieved <600 microsecond latency between processes when sending large quantities of data (1024 channels of 30 kHz neural data in 1-millisecond chunks). BRAND runs a brain-computer interface with a recurrent neural network (RNN) decoder with less than 8 milliseconds of latency from neural data input to decoder prediction. In a real-world demonstration of the system, participant T11 in the BrainGate2 clinical trial performed a standard cursor control task, in which 30 kHz signal processing, RNN decoding, task control, and graphics were all executed in BRAND. This system also supports real-time inference with complex latent variable models like Latent Factor Analysis via Dynamical Systems.
By providing a framework that is fast, modular, and language-agnostic, BRAND lowers the barriers to integrating the latest tools in neuroscience and machine learning into closed-loop experiments.
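The graph-of-nodes idea can be caricatured in pure Python. The stand-in below uses threads and in-process queues purely for illustration; BRAND itself connects separate Linux processes through Redis streams, which none of this code uses:

```python
import queue
import threading

def node(transform, inp, out, n_items):
    """A minimal 'node': read from an input stream, transform, publish."""
    for _ in range(n_items):
        out.put(transform(inp.get()))

# Streams connecting acquisition -> filtering -> decoding
acq_to_filt, filt_to_dec, dec_out = queue.Queue(), queue.Queue(), queue.Queue()
graph = [
    threading.Thread(target=node, args=(lambda x: x * 2, acq_to_filt, filt_to_dec, 5)),
    threading.Thread(target=node, args=(lambda x: x + 1, filt_to_dec, dec_out, 5)),
]
for th in graph:
    th.start()
for sample in range(5):          # the "acquisition" node publishes raw samples
    acq_to_filt.put(sample)
for th in graph:
    th.join()
decoded = [dec_out.get() for _ in range(5)]
```

Because each node blocks only on its own input stream, stages run concurrently and can operate at different timescales, which is the asynchronous property the abstract emphasizes.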
48
Daie K, Fontolan L, Druckmann S, Svoboda K. Feedforward amplification in recurrent networks underlies paradoxical neural coding. bioRxiv 2023:2023.08.04.552026. [PMID: 37577599 PMCID: PMC10418196 DOI: 10.1101/2023.08.04.552026]
Abstract
The activity of single neurons encodes behavioral variables, such as sensory stimuli (Hubel & Wiesel 1959) and behavioral choice (Britten et al. 1992; Guo et al. 2014), but their influence on behavior is often mysterious. We estimated the influence of a unit of neural activity on behavioral choice from recordings in anterior lateral motor cortex (ALM) in mice performing a memory-guided movement task (H. K. Inagaki et al. 2018). Choice selectivity grew as it flowed through a sequence of directions in activity space. Early directions carried little selectivity but were predicted to have a large behavioral influence, while late directions carried large selectivity and little behavioral influence. Consequently, estimated behavioral influence was only weakly correlated with choice selectivity; a large proportion of neurons selective for one choice were predicted to influence choice in the opposite direction. These results were consistent with models in which recurrent circuits produce feedforward amplification (Goldman 2009; Ganguli et al. 2008; Murphy & Miller 2009) so that small amplitude signals along early directions are amplified to produce low-dimensional choice selectivity along the late directions, and behavior. Targeted photostimulation experiments (Daie et al. 2021b) revealed that activity along the early directions triggered sequential activity along the later directions and caused predictable behavioral biases. These results demonstrate the existence of an amplifying feedforward dynamical motif in the motor cortex, explain paradoxical responses to perturbation experiments (Chettih & Harvey 2019; Daie et al. 2021b; Russell et al. 2019), and reveal behavioral relevance of small amplitude neural dynamics.
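The feedforward-amplification motif has a standard two-dimensional caricature: a non-normal matrix with stable eigenvalues that transiently amplifies activity from an "early" direction onto a "late" direction. The parameters below are arbitrary, chosen only to make the effect visible:

```python
import numpy as np

w = 8.0
A = np.array([[0.5, 0.0],
              [w,   0.5]])   # feedforward coupling from early axis to late axis
# Both eigenvalues are 0.5, so the dynamics are stable, yet a unit pulse
# along the early axis is transiently amplified onto the late axis.
x = np.array([1.0, 0.0])     # small signal on the "early" direction
norms = []
for _ in range(6):
    x = A @ x
    norms.append(float(np.linalg.norm(x)))
```

The early direction carries little selectivity itself but drives a large late response, mirroring the paper's finding that behavioral influence and choice selectivity can dissociate.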
49
Kirchherr S, Mildiner Moraga S, Coudé G, Bimbi M, Ferrari PF, Aarts E, Bonaiuto JJ. Bayesian multilevel hidden Markov models identify stable state dynamics in longitudinal recordings from macaque primary motor cortex. Eur J Neurosci 2023; 58:2787-2806. [PMID: 37382060 DOI: 10.1111/ejn.16065]
Abstract
Neural populations, rather than single neurons, may be the fundamental unit of cortical computation. Analysing chronically recorded neural population activity is challenging not only because of the high dimensionality of activity but also because of changes in the signal that may or may not be due to neural plasticity. Hidden Markov models (HMMs) are a promising technique for analysing such data in terms of discrete latent states, but previous approaches have not considered the statistical properties of neural spiking data, have not been adaptable to longitudinal data, or have not modelled condition-specific differences. We present a multilevel Bayesian HMM that addresses these shortcomings by incorporating multivariate Poisson log-normal emission probability distributions, multilevel parameter estimation and trial-specific condition covariates. We applied this framework to multi-unit neural spiking data recorded using chronically implanted multi-electrode arrays from macaque primary motor cortex during a cued reaching, grasping and placing task. We show that, in line with previous work, the model identifies latent neural population states which are tightly linked to behavioural events, despite the model being trained without any information about event timing. The association between these states and corresponding behaviour is consistent across multiple days of recording. Notably, this consistency is not observed in the case of a single-level HMM, which fails to generalise across distinct recording sessions. The utility and stability of this approach are demonstrated using a previously learned task, but this multilevel Bayesian HMM framework would be especially suited for future studies of long-term plasticity in neural populations.
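The core computation behind an HMM for spiking data is the forward recursion over hidden states with a spike-count emission model. The sketch below uses independent Poisson emissions for simplicity; the paper's full model uses multivariate Poisson log-normal emissions and multilevel estimation, so this is a simplified illustration, not the authors' implementation.

```python
import numpy as np

def hmm_forward_poisson(spikes, log_rates, trans, init):
    """Log-likelihood of spike counts under an HMM with Poisson emissions.

    spikes:    (T, N) spike counts per time bin and unit
    log_rates: (K, N) log firing rate of each unit in each hidden state
    trans:     (K, K) state transition matrix, rows sum to 1
    init:      (K,)  initial state distribution
    """
    # Poisson log-emission probability of each time bin under each state
    # (the log(y!) term is omitted; it is constant across states and
    # cancels when comparing models on the same data)
    log_emit = spikes @ log_rates.T - np.exp(log_rates).sum(axis=1)  # (T, K)
    log_alpha = np.log(init) + log_emit[0]
    for t in range(1, spikes.shape[0]):
        # log-sum-exp keeps the forward recursion numerically stable
        m = log_alpha.max()
        log_alpha = m + np.log(np.exp(log_alpha - m) @ trans) + log_emit[t]
    m = log_alpha.max()
    return m + np.log(np.exp(log_alpha - m).sum())
```

State sequences are never enumerated: the forward pass marginalises over all K^T paths in O(T K^2) time, which is what makes training such models on long recordings tractable.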
Affiliation(s)
- Sebastien Kirchherr
- Institut des Sciences Cognitives Marc Jeannerod, CNRS UMR 5229, Bron, France
- Université Claude Bernard Lyon 1, Université de Lyon, France
- Gino Coudé
- Institut des Sciences Cognitives Marc Jeannerod, CNRS UMR 5229, Bron, France
- Université Claude Bernard Lyon 1, Université de Lyon, France
- Inovarion, Paris, France
- Marco Bimbi
- Institut des Sciences Cognitives Marc Jeannerod, CNRS UMR 5229, Bron, France
- Université Claude Bernard Lyon 1, Université de Lyon, France
- Pier F Ferrari
- Institut des Sciences Cognitives Marc Jeannerod, CNRS UMR 5229, Bron, France
- Université Claude Bernard Lyon 1, Université de Lyon, France
- Emmeke Aarts
- Department of Methodology and Statistics, Universiteit Utrecht, Utrecht, Netherlands
- James J Bonaiuto
- Institut des Sciences Cognitives Marc Jeannerod, CNRS UMR 5229, Bron, France
- Université Claude Bernard Lyon 1, Université de Lyon, France
50
Genkin M, Shenoy KV, Chandrasekaran C, Engel TA. The dynamics and geometry of choice in premotor cortex. bioRxiv 2023:2023.07.22.550183. [PMID: 37546748 PMCID: PMC10401920 DOI: 10.1101/2023.07.22.550183]
Abstract
The brain represents sensory variables in the coordinated activity of neural populations, in which tuning curves of single neurons define the geometry of the population code. Whether the same coding principle holds for dynamic cognitive variables remains unknown because internal cognitive processes unfold with a unique time course on single trials, observed only in the irregular spiking of heterogeneous neural populations. Here we show the existence of such a population code for the dynamics of choice formation in the primate premotor cortex. We developed an approach to simultaneously infer population dynamics and tuning functions of single neurons to the population state. Applied to spike data recorded during decision-making, our model revealed that populations of neurons encoded the same dynamic variable predicting choices, and that heterogeneous firing rates resulted from the diverse tuning of single neurons to this decision variable. The inferred dynamics indicated an attractor mechanism for decision computation. Our results reveal a common geometric principle for neural encoding of sensory and dynamic cognitive variables.
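The model class this abstract describes (a shared low-dimensional decision variable with per-neuron tuning functions, and attractor dynamics) can be illustrated in the generative direction with a toy simulation: one latent variable drifts toward one of two attractors while each neuron fires as a Poisson process with a rate given by its own tuning to that latent. All names and parameters below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def simulate_decision_population(n_neurons=20, n_steps=200, dt=0.01, seed=0):
    """Toy generative model: a 1-D latent decision variable x(t) with
    double-well (two-attractor) drift, plus heterogeneous Poisson
    spiking via random linear-rectified tuning of each neuron to x."""
    rng = np.random.default_rng(seed)
    drift = lambda x: x - x**3          # attractors at x = -1 and x = +1
    x = np.zeros(n_steps)
    for t in range(1, n_steps):
        x[t] = (x[t - 1] + drift(x[t - 1]) * dt
                + np.sqrt(dt) * 0.5 * rng.standard_normal())
    # diverse tuning: each neuron has its own slope and baseline, so
    # heterogeneous firing rates all reflect the same shared variable
    slopes = rng.normal(0.0, 5.0, n_neurons)
    baselines = rng.uniform(2.0, 10.0, n_neurons)
    rates = np.maximum(baselines[None, :] + slopes[None, :] * x[:, None], 0.0)
    spikes = rng.poisson(rates * dt)     # counts per time bin per neuron
    return x, rates, spikes
```

The inference problem the paper solves runs in the opposite direction: recover both the latent trajectory x(t) and the per-neuron tuning functions from the observed spike counts alone.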
Affiliation(s)
- Krishna V Shenoy
- Howard Hughes Medical Institute, Stanford University, Stanford, CA
- Department of Electrical Engineering, Stanford University, Stanford, CA
- Chandramouli Chandrasekaran
- Department of Anatomy & Neurobiology, Boston University, Boston, MA
- Department of Psychological and Brain Sciences, Boston University, Boston, MA
- Center for Systems Neuroscience, Boston University, Boston, MA
- Tatiana A Engel
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ