1. Donner C, Bartram J, Hornauer P, Kim T, Roqueiro D, Hierlemann A, Obozinski G, Schröter M. Ensemble learning and ground-truth validation of synaptic connectivity inferred from spike trains. PLoS Comput Biol 2024; 20:e1011964. PMID: 38683881; PMCID: PMC11081509; DOI: 10.1371/journal.pcbi.1011964.
Abstract
Probing the architecture of neuronal circuits and the principles that underlie their functional organization remains an important challenge of modern neuroscience. This holds true, in particular, for the inference of neuronal connectivity from large-scale extracellular recordings. Despite the popularity of this approach and a number of elaborate methods to reconstruct networks, the degree to which synaptic connections can be reconstructed from spike-train recordings alone remains controversial. Here, we provide a framework to probe and compare connectivity inference algorithms, using a combination of synthetic ground-truth and in vitro data sets, where the connectivity labels were obtained from simultaneous high-density microelectrode array (HD-MEA) and patch-clamp recordings. We find that reconstruction performance critically depends on the regularity of the recorded spontaneous activity, i.e., the dynamical regime, the type of connectivity, and the amount of available spike-train data. We therefore introduce an ensemble artificial neural network (eANN) to improve connectivity inference. We train the eANN on the validated outputs of six established inference algorithms and show how it improves network reconstruction accuracy and robustness. Overall, the eANN demonstrated strong performance across different dynamical regimes, worked well on smaller datasets, and improved the detection of synaptic connectivity, especially inhibitory connections. Results indicated that the eANN also improved the topological characterization of neuronal networks. The presented methodology contributes to advancing the performance of inference algorithms and facilitates our understanding of how neuronal activity relates to synaptic connectivity.
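The ensemble idea above, training a classifier on the stacked outputs of several base inference methods, can be sketched in a few lines. This is a hypothetical toy with simulated method scores and a plain logistic-regression combiner, not the authors' eANN or their six algorithms:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n candidate connections, each scored by 3 base
# inference methods of differing reliability (noise s.d. 0.5, 0.8, 1.2).
n = 400
truth = rng.random(n) < 0.3                       # ground-truth labels
scores = np.column_stack([truth + rng.normal(0, s, n) for s in (0.5, 0.8, 1.2)])

# Ensemble: logistic regression on the stacked scores, fit by gradient descent.
X = np.column_stack([np.ones(n), scores])         # add a bias column
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))              # predicted connection probability
    w -= 0.1 * X.T @ (p - truth) / n              # gradient step on the log-loss

pred = (1.0 / (1.0 + np.exp(-X @ w))) > 0.5
ensemble_acc = (pred == truth).mean()
single_acc = ((scores[:, 0] > 0.5) == truth).mean()
print(f"best single method: {single_acc:.2f}, ensemble: {ensemble_acc:.2f}")
```

Because the three scores carry partly independent evidence, the combiner typically beats the best single method, which is the qualitative point of the ensemble approach.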
Affiliation(s)
- Christian Donner: Swiss Data Science Center, ETH Zürich & EPFL, Zürich & Lausanne, Switzerland
- Julian Bartram: Department of Biosystems Science and Engineering, ETH Zürich, Basel, Switzerland
- Philipp Hornauer: Department of Biosystems Science and Engineering, ETH Zürich, Basel, Switzerland
- Taehoon Kim: Department of Biosystems Science and Engineering, ETH Zürich, Basel, Switzerland
- Damian Roqueiro: Department of Biosystems Science and Engineering, ETH Zürich, Basel, Switzerland
- Andreas Hierlemann: Department of Biosystems Science and Engineering, ETH Zürich, Basel, Switzerland
- Guillaume Obozinski: Swiss Data Science Center, ETH Zürich & EPFL, Zürich & Lausanne, Switzerland
- Manuel Schröter: Department of Biosystems Science and Engineering, ETH Zürich, Basel, Switzerland
2. Liang T, Brinkman BAW. Statistically inferred neuronal connections in subsampled neural networks strongly correlate with spike train covariances. Phys Rev E 2024; 109:044404. PMID: 38755896; DOI: 10.1103/PhysRevE.109.044404.
Abstract
Statistically inferred neuronal connections from observed spike train data are often skewed from ground truth by factors such as model mismatch, unobserved neurons, and limited data. Spike train covariances, sometimes referred to as "functional connections," are often used as a proxy for the connections between pairs of neurons, but reflect statistical relationships between neurons, not anatomical connections. Moreover, covariances are not causal: spiking activity is correlated in both the past and the future, whereas neurons respond only to synaptic inputs in the past. Connections inferred by maximum likelihood, by contrast, can be constrained to be causal. In this work, however, we show that the inferred connections in spontaneously active networks modeled by stochastic leaky integrate-and-fire networks strongly correlate with the covariances between neurons, and may reflect noncausal relationships, when many neurons are unobserved or when neurons are weakly coupled. This phenomenon occurs across different network structures, including random networks and balanced excitatory-inhibitory networks. We use a combination of simulations and a mean-field analysis with fluctuation corrections to elucidate the relationships between spike train covariances, inferred synaptic filters, and ground-truth connections in partially observed networks.
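The covariance-as-proxy point can be illustrated with a toy simulation (hypothetical numbers, not from the paper): a directed synapse produces a large spike-train covariance, but the covariance matrix is symmetric and so cannot reveal the direction of the connection.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy network: 5 neurons spiking independently, except that neuron 0
# drives extra spikes in neuron 1 within the same time bin.
T, n = 20000, 5
spikes = (rng.random((T, n)) < 0.05).astype(float)   # background spiking
drive = spikes[:, 0] * (rng.random(T) < 0.5)         # synapse succeeds half the time
spikes[:, 1] = np.clip(spikes[:, 1] + drive, 0, 1)

# Spike-train covariance matrix ("functional connectivity").
cov = np.cov(spikes.T)

# The connected pair has the largest off-diagonal covariance, but the matrix
# is symmetric: it cannot distinguish 0 -> 1 from 1 -> 0.
mask = ~np.eye(n, dtype=bool)
print("cov(0,1):", cov[0, 1], "largest off-diagonal:", cov[mask].max())
```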
Affiliation(s)
- Tong Liang: Department of Physics and Astronomy, Stony Brook University, Stony Brook, New York 11794, USA; Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York 11794, USA
- Braden A W Brinkman: Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York 11794, USA
3. Vareberg AD, Bok I, Eizadi J, Ren X, Hai A. Inference of network connectivity from temporally binned spike trains. J Neurosci Methods 2024; 404:110073. PMID: 38309313; PMCID: PMC10949361; DOI: 10.1016/j.jneumeth.2024.110073.
Abstract
BACKGROUND: Processing neural activity to reconstruct network connectivity is a central focus of neuroscience, yet the spatiotemporal requisites of biological nervous systems are challenging for current neuronal sensing modalities. Consequently, methods that leverage limited data to successfully infer synaptic connections, predict activity at single-unit resolution, and decipher their effect on whole systems can uncover critical information about neural processing. Despite the emergence of powerful methods for inferring connectivity, network reconstruction based on temporally subsampled data remains insufficiently explored.
NEW METHOD: We infer synaptic weights by processing firing rates within variable time bins for a heterogeneous feed-forward network of excitatory, inhibitory, and unconnected units. We assess classification and optimize model parameters for postsynaptic spike train reconstruction. We test our method on a physiological network of leaky integrate-and-fire neurons displaying bursting patterns and assess prediction of postsynaptic activity from microelectrode array data.
RESULTS: Results reveal parameters for improved prediction and performance and suggest that lower-resolution data and limited access to neurons can be preferable.
COMPARISON WITH EXISTING METHOD(S): Recent computational methods demonstrate highly improved reconstruction of connectivity from networks of parallel spike trains by considering spike lag, time-varying firing rates, and other underlying dynamics. However, these methods insufficiently explore temporal subsampling representative of novel data types.
CONCLUSIONS: We provide a framework for reverse engineering neural networks from data with limited temporal quality, describing optimal parameters for each bin size, which can be further improved using non-linear methods and applied to more complicated readouts and connectivity distributions in multiple brain circuits.
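The core step, inferring weights from temporally binned activity, can be sketched as a least-squares regression of a postsynaptic unit's binned response on the binned presynaptic counts. This is a minimal linear toy with made-up weights, not the paper's model or parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical feed-forward toy: 4 presynaptic units (two excitatory, one
# inhibitory, one unconnected) drive a single postsynaptic unit.
T = 5000
w_true = np.array([1.0, 0.6, -0.8, 0.0])            # E, E, I, unconnected
pre = rng.poisson(2.0, size=(T, 4)).astype(float)   # spike counts per fine bin
post = pre @ w_true + rng.normal(0, 1.0, T)         # noisy postsynaptic response

# Re-bin at a coarser temporal resolution before inference.
bin_size = 5
pre_b = pre.reshape(-1, bin_size, 4).sum(axis=1)    # (1000, 4) binned counts
post_b = post.reshape(-1, bin_size).sum(axis=1)     # (1000,) binned response

# Least-squares estimate of the synaptic weights from the binned data.
w_hat, *_ = np.linalg.lstsq(pre_b, post_b, rcond=None)
print("true:", w_true, "estimated:", np.round(w_hat, 2))
```

The signs of the recovered weights classify units as excitatory, inhibitory, or unconnected, which is the classification task the abstract describes.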
Affiliation(s)
- Adam D Vareberg: Department of Biomedical Engineering, University of Wisconsin-Madison, United States; Wisconsin Institute for Translational Neuroengineering (WITNe), University of Wisconsin-Madison, United States
- Ilhan Bok: Department of Electrical and Computer Engineering, University of Wisconsin-Madison, United States; Wisconsin Institute for Translational Neuroengineering (WITNe), University of Wisconsin-Madison, United States
- Jenna Eizadi: Department of Biomedical Engineering, University of Wisconsin-Madison, United States; Wisconsin Institute for Translational Neuroengineering (WITNe), University of Wisconsin-Madison, United States
- Xiaoxuan Ren: Department of Electrical and Computer Engineering, University of Wisconsin-Madison, United States
- Aviad Hai: Department of Biomedical Engineering, University of Wisconsin-Madison, United States; Department of Electrical and Computer Engineering, University of Wisconsin-Madison, United States; Wisconsin Institute for Translational Neuroengineering (WITNe), University of Wisconsin-Madison, United States
4. Ren X, Bok I, Vareberg A, Hai A. Stimulation-mediated reverse engineering of silent neural networks. J Neurophysiol 2023; 129:1505-1514. PMID: 37222450; PMCID: PMC10311990; DOI: 10.1152/jn.00100.2023.
Abstract
Reconstructing connectivity of neuronal networks from single-cell activity is essential to understanding brain function, but the challenge of deciphering connections from populations of silent neurons has been largely unmet. We demonstrate a protocol for deriving connectivity of simulated silent neuronal networks using stimulation combined with a supervised learning algorithm, which enables inferring connection weights with high fidelity and predicting spike trains at the single-spike and single-cell levels with high accuracy. We apply our method to rat cortical recordings fed through a circuit of heterogeneously connected leaky integrate-and-fire neurons firing at typical lognormal distributions and demonstrate improved performance during stimulation for multiple subpopulations. These testable predictions about the number and protocol of the required stimulations are expected to enhance future efforts for deriving neuronal connectivity and drive new experiments to better understand brain function.
NEW & NOTEWORTHY: We introduce a new concept for reverse engineering silent neuronal networks using a supervised learning algorithm combined with stimulation. We quantify the performance of the algorithm and the precision of deriving synaptic weights in inhibitory and excitatory subpopulations. We then show that stimulation enables deciphering connectivity of heterogeneous circuits fed with real electrode array recordings, which could extend in the future to deciphering connectivity in broad biological and artificial neural networks.
Affiliation(s)
- Xiaoxuan Ren: Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, Wisconsin, United States; Department of Electrical and Computer Engineering, University of Wisconsin-Madison, Madison, Wisconsin, United States
- Ilhan Bok: Department of Electrical and Computer Engineering, University of Wisconsin-Madison, Madison, Wisconsin, United States
- Adam Vareberg: Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, Wisconsin, United States
- Aviad Hai: Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, Wisconsin, United States; Department of Electrical and Computer Engineering, University of Wisconsin-Madison, Madison, Wisconsin, United States; Wisconsin Institute for Translational Neuroengineering (WITNe), Madison, Wisconsin, United States
5.
Affiliation(s)
- Max Dabagia: School of Computer Science, Georgia Institute of Technology, Atlanta, GA, USA
- Konrad P Kording: Department of Biomedical Engineering, University of Pennsylvania, Philadelphia, PA, USA
- Eva L Dyer: Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA, USA
6. Tu L, Talbot A, Gallagher NM, Carlson DE. Supervising the Decoder of Variational Autoencoders to Improve Scientific Utility. IEEE Trans Signal Process 2022; 70:5954-5966. PMID: 36777018; PMCID: PMC9910304; DOI: 10.1109/TSP.2022.3230329.
Abstract
Probabilistic generative models are attractive for scientific modeling because their inferred parameters can be used to generate hypotheses and design experiments. This requires that the learned model provides an accurate representation of the input data and yields a latent space that effectively predicts outcomes relevant to the scientific question. Supervised Variational Autoencoders (SVAEs) have previously been used for this purpose, as a carefully designed decoder can be used as an interpretable generative model of the data, while the supervised objective ensures a predictive latent representation. Unfortunately, the supervised objective forces the encoder to learn a biased approximation to the generative posterior distribution, which renders the generative parameters unreliable when used in scientific models. This issue has remained undetected as reconstruction losses commonly used to evaluate model performance do not detect bias in the encoder. We address this previously unreported issue by developing a second-order supervision framework (SOS-VAE) that updates the decoder parameters, rather than the encoder, to induce a predictive latent representation. This ensures that the encoder maintains a reliable posterior approximation and the decoder parameters can be effectively interpreted. We extend this technique to allow the user to trade off the bias in the generative parameters for improved predictive performance, acting as an intermediate option between SVAEs and our new SOS-VAE. We also use this methodology to address missing data issues that often arise when combining recordings from multiple scientific experiments. We demonstrate the effectiveness of these developments using synthetic data and electrophysiological recordings with an emphasis on how our learned representations can be used to design scientific experiments.
Affiliation(s)
- Liyun Tu: School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, 100876, China
- Austin Talbot: Department of Psychiatry and Behavioral Sciences, Stanford University, Palo Alto, CA 94305, USA
- Neil M. Gallagher: Department of Psychiatry, Weill Cornell Medical College, New York, NY 10065, USA
- David E. Carlson: Department of Biostatistics and Bioinformatics and the Department of Civil and Environmental Engineering, Duke University, Durham, NC 27708, USA
7. Ouchi T, Orsborn AL. Quantifying the influence of stimulation protocols on neural network connectivity inference to optimize rapid network measurements. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:2369-2372. PMID: 36085860; DOI: 10.1109/EMBC48229.2022.9871658.
Abstract
Connectivity is key to understanding neural circuit computations. However, estimating in vivo connectivity using recording of activity alone is challenging. Issues include common input and bias errors in inference, and limited temporal resolution due to large data requirements. Perturbations (e.g. stimulation) can improve inference accuracy and accelerate estimation. However, optimal stimulation protocols for rapid network estimation are not yet established. Here, we use neural network simulations to identify stimulation protocols that minimize connectivity inference errors when using generalized linear model inference. We find that stimulation parameters that balance excitatory and inhibitory activity minimize inference error. We also show that pairing optimized stimulation with adaptive protocols that choose neurons to stimulate via Bayesian inference may ultimately enable rapid network inference.
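Why perturbations accelerate connectivity estimation can be seen in a much-simplified toy (a single connection estimated by regression, not the paper's GLM network simulations): stimulation that adds variance to the presynaptic rate shrinks the estimator's error for the same amount of data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Estimate one connection weight from T samples, with and without
# stimulation-driven variance in the presynaptic activity.
w_true, T, reps = 0.5, 500, 200

def weight_error(stim_var):
    """Mean absolute error of the regression estimate over many repeats."""
    errs = []
    for _ in range(reps):
        pre = rng.normal(0, 1, T) + rng.normal(0, np.sqrt(stim_var), T)
        post = w_true * pre + rng.normal(0, 1, T)   # noisy postsynaptic response
        w_hat = (pre @ post) / (pre @ pre)          # least-squares weight
        errs.append(abs(w_hat - w_true))
    return float(np.mean(errs))

err_spont = weight_error(0.0)   # spontaneous activity only
err_stim = weight_error(3.0)    # stimulation adds presynaptic variance
print("mean |error| spontaneous:", round(err_spont, 4),
      "with stimulation:", round(err_stim, 4))
```

The error scales roughly as one over the square root of presynaptic variance times samples, which is why well-chosen stimulation can reduce the data needed for a given accuracy.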
8. Lee S, Periwal V, Jo J. Inference of stochastic time series with missing data. Phys Rev E 2021; 104:024119. PMID: 34525568; PMCID: PMC9531145; DOI: 10.1103/PhysRevE.104.024119.
Abstract
Inferring dynamics from time series is an important objective in data analysis. In particular, it is challenging to infer stochastic dynamics given incomplete data. We propose an expectation maximization (EM) algorithm that alternates between two steps: the E-step restores missing data points, while the M-step infers an underlying network model from the restored data. Using synthetic data from a kinetic Ising model, we confirm that the algorithm works for restoring missing data points as well as for inferring the underlying model. At the initial iteration of the EM algorithm, the model inference shows better model-data consistency with observed data points than with missing data points. As we keep iterating, however, missing data points show better model-data consistency. We find that demanding equal consistency of observed and missing data points provides an effective stopping criterion for the iteration to prevent going beyond the most accurate model inference. Using the EM algorithm and the stopping criterion together, we infer missing data points from time-series data of real neuronal activity. Our method reproduces collective properties of neuronal activity, such as correlations and firing statistics, even when 70% of the data points are masked as missing.
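The E-step/M-step alternation has this shape in a toy setting. As a deliberately simplified stand-in for the kinetic Ising model, the sketch below uses a Gaussian AR(1) process, where both steps have closed forms; parameters and masking rate are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate a Gaussian AR(1) series, then mask 30% of the points.
T, a_true = 2000, 0.8
x = np.zeros(T)
for t in range(1, T):
    x[t] = a_true * x[t - 1] + rng.normal(0, 0.5)

missing = rng.random(T) < 0.3
missing[0] = missing[-1] = False       # keep the endpoints observed
filled = np.where(missing, 0.0, x)     # initialize missing points at zero

a_hat = 0.0
for _ in range(50):
    # M-step: refit the model on the partly restored series.
    a_hat = (filled[1:] @ filled[:-1]) / (filled[:-1] @ filled[:-1])
    # E-step: restore each missing point by its posterior mean given its
    # neighbors under the current model (exact for Gaussian AR(1)).
    for t in np.flatnonzero(missing):
        filled[t] = a_hat * (filled[t - 1] + filled[t + 1]) / (1 + a_hat ** 2)

print("true a:", a_true, "estimated a:", round(a_hat, 3))
```

Conditional-mean imputation understates the variance of the restored points, which is one reason the paper's stopping criterion (equal model-data consistency for observed and missing points) matters in practice.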
Affiliation(s)
- Sangwon Lee: Department of Physics and Astronomy, Seoul National University, Seoul 08826, Korea
- Vipul Periwal: Laboratory of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, Maryland 20892, USA
- Junghyo Jo: Department of Physics Education and Center for Theoretical Physics and Artificial Intelligence Institute, Seoul National University, Seoul 08826, Korea; School of Computational Sciences, Korea Institute for Advanced Study, Seoul 02455, Korea
9. Rupasinghe A, Francis N, Liu J, Bowen Z, Kanold PO, Babadi B. Direct extraction of signal and noise correlations from two-photon calcium imaging of ensemble neuronal activity. eLife 2021; 10:e68046. PMID: 34180397; PMCID: PMC8354639; DOI: 10.7554/eLife.68046.
Abstract
Neuronal activity correlations are key to understanding how populations of neurons collectively encode information. While two-photon calcium imaging has created a unique opportunity to record the activity of large populations of neurons, existing methods for inferring correlations from these data face several challenges. First, the observations of spiking activity produced by two-photon imaging are temporally blurred and noisy. Secondly, even if the spiking data were perfectly recovered via deconvolution, inferring network-level features from binary spiking data is a challenging task due to the non-linear relation of neuronal spiking to endogenous and exogenous inputs. In this work, we propose a methodology to explicitly model and directly estimate signal and noise correlations from two-photon fluorescence observations, without requiring intermediate spike deconvolution. We provide theoretical guarantees on the performance of the proposed estimator and demonstrate its utility through applications to simulated and experimentally recorded data from the mouse auditory cortex.
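For reference, the two quantities the paper estimates directly from fluorescence are conventionally defined on spiking (or rate) data as follows; this toy makes up a two-neuron data set with known tuning and shared trial noise, and is an illustration of the definitions, not of the paper's deconvolution-free estimator:

```python
import numpy as np

rng = np.random.default_rng(6)

# Two neurons, 20 stimuli, 200 trials each.
trials, stims = 200, 20
tuning = rng.normal(0, 1, (stims, 2))                           # stimulus tuning
tuning[:, 1] = tuning[:, 0] * 0.9 + rng.normal(0, 0.4, stims)   # similar tuning
shared = rng.normal(0, 1, (trials, stims))                      # shared trial noise
resp = (tuning[None, :, :]
        + 0.7 * shared[:, :, None]                              # common to both cells
        + 0.5 * rng.normal(0, 1, (trials, stims, 2)))           # private noise

# Signal correlation: correlation of trial-averaged, stimulus-driven responses.
mean_resp = resp.mean(axis=0)                                   # (stims, 2)
signal_corr = np.corrcoef(mean_resp[:, 0], mean_resp[:, 1])[0, 1]

# Noise correlation: correlation of trial-to-trial fluctuations about the mean.
fluct = (resp - mean_resp).reshape(-1, 2)
noise_corr = np.corrcoef(fluct[:, 0], fluct[:, 1])[0, 1]

print("signal correlation:", round(signal_corr, 2),
      "noise correlation:", round(noise_corr, 2))
```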
Affiliation(s)
- Anuththara Rupasinghe: Department of Electrical and Computer Engineering, University of Maryland, College Park, United States
- Nikolas Francis: The Institute for Systems Research, University of Maryland, College Park, United States; Department of Biology, University of Maryland, College Park, United States
- Ji Liu: The Institute for Systems Research, University of Maryland, College Park, United States; Department of Biology, University of Maryland, College Park, United States
- Zac Bowen: The Institute for Systems Research, University of Maryland, College Park, United States; Department of Biology, University of Maryland, College Park, United States
- Patrick O Kanold: The Institute for Systems Research, University of Maryland, College Park, United States; Department of Biology, University of Maryland, College Park, United States; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, United States
- Behtash Babadi: Department of Electrical and Computer Engineering, University of Maryland, College Park, United States
10. Goffinet J, Brudner S, Mooney R, Pearson J. Low-dimensional learned feature spaces quantify individual and group differences in vocal repertoires. eLife 2021; 10:e67855. PMID: 33988503; PMCID: PMC8213406; DOI: 10.7554/eLife.67855.
Abstract
Increases in the scale and complexity of behavioral data pose an increasing challenge for data analysis. A common strategy involves replacing entire behaviors with small numbers of handpicked, domain-specific features, but this approach suffers from several crucial limitations. For example, handpicked features may miss important dimensions of variability, and correlations among them complicate statistical testing. Here, by contrast, we apply the variational autoencoder (VAE), an unsupervised learning method, to learn features directly from data and quantify the vocal behavior of two model species: the laboratory mouse and the zebra finch. The VAE converges on a parsimonious representation that outperforms handpicked features on a variety of common analysis tasks, enables the measurement of moment-by-moment vocal variability on the timescale of tens of milliseconds in the zebra finch, provides strong evidence that mouse ultrasonic vocalizations do not cluster as is commonly believed, and captures the similarity of tutor and pupil birdsong with qualitatively higher fidelity than previous approaches. In all, we demonstrate the utility of modern unsupervised learning approaches to the quantification of complex and high-dimensional vocal behavior.
Affiliation(s)
- Jack Goffinet: Department of Computer Science, Duke University, Durham, United States; Center for Cognitive Neurobiology, Duke University, Durham, United States; Department of Neurobiology, Duke University, Durham, United States
- Samuel Brudner: Department of Neurobiology, Duke University, Durham, United States
- Richard Mooney: Department of Neurobiology, Duke University, Durham, United States
- John Pearson: Center for Cognitive Neurobiology, Duke University, Durham, United States; Department of Neurobiology, Duke University, Durham, United States; Department of Biostatistics & Bioinformatics, Duke University, Durham, United States; Department of Electrical and Computer Engineering, Duke University, Durham, United States
11. Randi F, Leifer AM. Nonequilibrium Green's Functions for Functional Connectivity in the Brain. Phys Rev Lett 2021; 126:118102. PMID: 33798383; PMCID: PMC8454901; DOI: 10.1103/PhysRevLett.126.118102.
Abstract
A theoretical framework describing the set of interactions between neurons in the brain, or functional connectivity, should include dynamical functions representing the propagation of signal from one neuron to another. Green's functions and response functions are natural candidates for this but, while they are conceptually very useful, they are usually defined only for linear time-translationally invariant systems. The brain, instead, behaves nonlinearly and in a time-dependent way. Here, we use nonequilibrium Green's functions to describe the time-dependent functional connectivity of a continuous-variable network of neurons. We show how the connectivity is related to the measurable response functions, and provide two illustrative examples via numerical calculations, inspired by Caenorhabditis elegans.
Affiliation(s)
- Francesco Randi: Department of Physics, Princeton University, Jadwin Hall, Princeton, New Jersey 08544, USA
- Andrew M. Leifer: Department of Physics, Princeton University, Jadwin Hall, Princeton, New Jersey 08544, USA; Princeton Neuroscience Institute, Princeton University, New Jersey 08544, USA
12. Das A, Fiete IR. Systematic errors in connectivity inferred from activity in strongly recurrent networks. Nat Neurosci 2020; 23:1286-1296. DOI: 10.1038/s41593-020-0699-2.
13. Renteria C, Liu YZ, Chaney EJ, Barkalifa R, Sengupta P, Boppart SA. Dynamic Tracking Algorithm for Time-Varying Neuronal Network Connectivity using Wide-Field Optical Image Video Sequences. Sci Rep 2020; 10:2540. PMID: 32054882; PMCID: PMC7018813; DOI: 10.1038/s41598-020-59227-5.
Abstract
Propagation of signals between neurons and brain regions provides information about the functional properties of neural networks, and thus information transfer. Advances in optical imaging and statistical analyses of acquired optical signals have yielded various metrics for inferring neural connectivity, and hence for mapping signal intercorrelation. However, a single coefficient is traditionally derived to classify the connection strength between two cells, ignoring the fact that neural systems are inherently time-variant systems. To overcome these limitations, we utilized a time-varying Pearson's correlation coefficient, spike-sorting, wavelet transform, and wavelet coherence of calcium transients from DIV 12-15 hippocampal neurons from GCaMP6s mice after applying various concentrations of glutamate. Results provide a comprehensive overview of resulting firing patterns, network connectivity, signal directionality, and network properties. Together, these metrics provide a more comprehensive and robust method of analyzing transient neural signals, and enable future investigations for tracking the effects of different stimuli on network properties.
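The central metric, a time-varying Pearson correlation, is simply the ordinary correlation computed over a sliding window, so that connection strength becomes a function of time rather than a single coefficient. A minimal sketch on synthetic traces (the switch point, window length, and coupling are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

# Two synthetic calcium-like traces whose coupling switches on halfway through.
T = 1000
common = rng.normal(0, 1, T)
x = rng.normal(0, 1, T)
y = rng.normal(0, 1, T)
x[T // 2:] += common[T // 2:]
y[T // 2:] += common[T // 2:]

def sliding_pearson(a, b, win=100):
    """Pearson correlation over a trailing window, one value per time point."""
    r = np.full(len(a), np.nan)
    for t in range(win, len(a)):
        r[t] = np.corrcoef(a[t - win:t], b[t - win:t])[0, 1]
    return r

r = sliding_pearson(x, y)
print("mean r before switch:", round(np.nanmean(r[100:500]), 2),
      "after:", round(np.nanmean(r[750:]), 2))
```

Unlike a single coefficient, the trace r(t) reveals when the functional connection appears, which is the time-variance the abstract argues for.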
Affiliation(s)
- Carlos Renteria: Beckman Institute for Advanced Science and Technology, Urbana, USA; Department of Bioengineering, Urbana, USA
- Yuan-Zhi Liu: Beckman Institute for Advanced Science and Technology, Urbana, USA
- Eric J Chaney: Beckman Institute for Advanced Science and Technology, Urbana, USA
- Ronit Barkalifa: Beckman Institute for Advanced Science and Technology, Urbana, USA
- Parijat Sengupta: Beckman Institute for Advanced Science and Technology, Urbana, USA
- Stephen A Boppart: Beckman Institute for Advanced Science and Technology, Urbana, USA; Department of Bioengineering, Urbana, USA; Department of Electrical and Computer Engineering, Urbana, USA; Neuroscience Program, Urbana, USA; Carle Illinois College of Medicine, University of Illinois at Urbana-Champaign, Champaign, USA
14. Kastanenka KV, Moreno-Bote R, De Pittà M, Perea G, Eraso-Pichot A, Masgrau R, Poskanzer KE, Galea E. A roadmap to integrate astrocytes into Systems Neuroscience. Glia 2020; 68:5-26. PMID: 31058383; PMCID: PMC6832773; DOI: 10.1002/glia.23632.
Abstract
Systems neuroscience is still mainly a neuronal field, despite the plethora of evidence supporting the fact that astrocytes modulate local neural circuits, networks, and complex behaviors. In this article, we sought to identify which types of studies are necessary to establish whether astrocytes, beyond their well-documented homeostatic and metabolic functions, perform computations implementing mathematical algorithms that subserve coding and higher-brain functions. First, we reviewed Systems-like studies that include astrocytes in order to identify computational operations that these cells may perform, using Ca2+ transients as their encoding language. The analysis suggests that astrocytes may carry out canonical computations in a time scale of subseconds to seconds in sensory processing, neuromodulation, brain state, memory formation, fear, and complex homeostatic reflexes. Next, we propose a list of actions to gain insight into the outstanding question of which variables are encoded by such computations. The application of statistical analyses based on machine learning, such as dimensionality reduction and decoding in the context of complex behaviors, combined with connectomics of astrocyte-neuronal circuits, is, in our view, fundamental undertakings. We also discuss technical and analytical approaches to study neuronal and astrocytic populations simultaneously, and the inclusion of astrocytes in advanced modeling of neural circuits, as well as in theories currently under exploration such as predictive coding and energy-efficient coding. Clarifying the relationship between astrocytic Ca2+ and brain coding may represent a leap forward toward novel approaches in the study of astrocytes in health and disease.
Collapse
Affiliation(s)
- Ksenia V. Kastanenka
- Department of Neurology, MassGeneral Institute for Neurodegenerative Diseases, Massachusetts General Hospital and Harvard Medical School, Massachusetts 02129, USA
- Rubén Moreno-Bote
- Department of Information and Communications Technologies, Center for Brain and Cognition and Universitat Pompeu Fabra, 08018 Barcelona, Spain
- ICREA, 08010 Barcelona, Spain
- Abel Eraso-Pichot
- Departament de Bioquímica, Institut de Neurociències i Universitat Autònoma de Barcelona, Bellaterra, 08193 Barcelona, Spain
- Roser Masgrau
- Departament de Bioquímica, Institut de Neurociències i Universitat Autònoma de Barcelona, Bellaterra, 08193 Barcelona, Spain
- Kira E. Poskanzer
- Department of Biochemistry & Biophysics, Neuroscience Graduate Program, and Kavli Institute for Fundamental Neuroscience, University of California, San Francisco, San Francisco, California 94143, USA
- Equally contributing authors
- Elena Galea
- ICREA, 08010 Barcelona, Spain
- Departament de Bioquímica, Institut de Neurociències i Universitat Autònoma de Barcelona, Bellaterra, 08193 Barcelona, Spain
- Equally contributing authors
15
Haehne H, Casadiego J, Peinke J, Timme M. Detecting Hidden Units and Network Size from Perceptible Dynamics. Phys Rev Lett 2019; 122:158301. [PMID: 31050518] [DOI: 10.1103/physrevlett.122.158301]
Abstract
The number of units of a network dynamical system, its size, arguably constitutes its most fundamental property. Many units of a network, however, are typically experimentally inaccessible such that the network size is often unknown. Here we introduce a detection matrix that suitably arranges multiple transient time series from the subset of accessible units to detect network size via matching rank constraints. The proposed method is model-free, applicable across system types and interaction topologies, and applies to nonstationary dynamics near fixed points, as well as periodic and chaotic collective motion. Even if only a small minority of units is perceptible and for systems simultaneously exhibiting nonlinearities, heterogeneities, and noise, exact size detection is feasible. We illustrate applicability for a paradigmatic class of biochemical reaction networks.
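The rank idea can be illustrated with a much simpler stand-in than the paper's detection matrix (which arranges multiple transients and applies across nonlinear systems): for a linear network observed through a single accessible unit, a Hankel matrix built from one transient generically has rank equal to the network size. A minimal numpy sketch under that linearity assumption, with illustrative sizes and thresholds:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth network: N coupled units with linear discrete-time dynamics.
N = 5
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
A = 0.95 * Q                      # stable: all mode magnitudes equal 0.95
x = rng.standard_normal(N)

# Record a transient from a single accessible unit.
T = 60
y = np.empty(T)
for t in range(T):
    y[t] = x[0]
    x = A @ x

# Arrange the scalar time series into a Hankel matrix; for an observable
# linear system its numerical rank (generically) equals the number of
# units N, even though only one unit was perceptible.
rows, cols = 10, 40
H = np.array([y[i:i + cols] for i in range(rows)])
s = np.linalg.svd(H, compute_uv=False)
n_est = int(np.sum(s > 1e-8 * s[0]))
print(n_est)
```

The singular-value threshold separates the N dynamical modes from floating-point noise; with measurement noise or nonlinearity this simple variant degrades, which is what the paper's construction is designed to handle.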
Affiliation(s)
- Hauke Haehne
- Institute of Physics and ForWind, University of Oldenburg, 26111 Oldenburg, Germany
- Jose Casadiego
- Chair for Network Dynamics, Institute for Theoretical Physics and Center for Advancing Electronics Dresden (cfaed), Technical University of Dresden, 01062 Dresden, Germany
- Joachim Peinke
- Institute of Physics and ForWind, University of Oldenburg, 26111 Oldenburg, Germany
- Marc Timme
- Chair for Network Dynamics, Institute for Theoretical Physics and Center for Advancing Electronics Dresden (cfaed), Technical University of Dresden, 01062 Dresden, Germany
16
Paninski L, Cunningham JP. Neural data science: accelerating the experiment-analysis-theory cycle in large-scale neuroscience. Curr Opin Neurobiol 2019; 50:232-241. [PMID: 29738986] [DOI: 10.1016/j.conb.2018.04.007]
Abstract
Modern large-scale multineuronal recording methodologies, including multielectrode arrays, calcium imaging, and optogenetic techniques, produce single-neuron resolution data of a magnitude and precision that were the realm of science fiction twenty years ago. The major bottlenecks in systems and circuit neuroscience no longer lie in simply collecting data from large neural populations, but also in understanding this data: developing novel scientific questions, with corresponding analysis techniques and experimental designs to fully harness these new capabilities and meaningfully interrogate these questions. Advances in methods for signal processing, network analysis, dimensionality reduction, and optimal control, developed in lockstep with advances in experimental neurotechnology, promise major breakthroughs in multiple fundamental neuroscience problems. These trends are clear in a broad array of subfields of modern neuroscience; this review focuses on recent advances in methods for analyzing neural time-series data with single-neuronal precision.
Affiliation(s)
- L Paninski
- Department of Statistics, Grossman Center for the Statistics of Mind, Zuckerman Mind Brain Behavior Institute, Center for Theoretical Neuroscience, Columbia University, United States; Department of Neuroscience, Grossman Center for the Statistics of Mind, Zuckerman Mind Brain Behavior Institute, Center for Theoretical Neuroscience, Columbia University, United States.
- J P Cunningham
- Department of Statistics, Grossman Center for the Statistics of Mind, Zuckerman Mind Brain Behavior Institute, Center for Theoretical Neuroscience, Columbia University, United States
17
Duarte A, Galves A, Löcherbach E, Ost G. Estimating the interaction graph of stochastic neural dynamics. Bernoulli 2019. [DOI: 10.3150/17-bej1006]
18
Brinkman BAW, Rieke F, Shea-Brown E, Buice MA. Predicting how and when hidden neurons skew measured synaptic interactions. PLoS Comput Biol 2018; 14:e1006490. [PMID: 30346943] [PMCID: PMC6219819] [DOI: 10.1371/journal.pcbi.1006490]
Abstract
A major obstacle to understanding neural coding and computation is the fact that experimental recordings typically sample only a small fraction of the neurons in a circuit. Measured neural properties are skewed by interactions between recorded neurons and the “hidden” portion of the network. To properly interpret neural data and determine how biological structure gives rise to neural circuit function, we thus need a better understanding of the relationships between measured effective neural properties and the true underlying physiological properties. Here, we focus on how the effective spatiotemporal dynamics of the synaptic interactions between neurons are reshaped by coupling to unobserved neurons. We find that the effective interactions from a pre-synaptic neuron r′ to a post-synaptic neuron r can be decomposed into a sum of the true interaction from r′ to r plus corrections from every directed path from r′ to r through unobserved neurons. Importantly, the resulting formula reveals when the hidden units have—or do not have—major effects on reshaping the interactions among observed neurons. As a particular example of interest, we derive a formula for the impact of hidden units in random networks with “strong” coupling—connection weights that scale with 1/N, where N is the network size, precisely the scaling observed in recent experiments. With this quantitative relationship between measured and true interactions, we can study how network properties shape effective interactions, which properties are relevant for neural computations, and how to manipulate effective interactions. No experiment in neuroscience can record from more than a tiny fraction of the total number of neurons present in a circuit. This severely complicates measurement of a network’s true properties, as unobserved neurons skew measurements away from what would be measured if all neurons were observed. For example, the measured post-synaptic response of a neuron to a spike from a particular pre-synaptic neuron incorporates direct connections between the two neurons as well as the effect of any number of indirect connections, including through unobserved neurons. To understand how measured quantities are distorted by unobserved neurons, we calculate a general relationship between measured “effective” synaptic interactions and the ground-truth interactions in the network. This allows us to identify conditions under which hidden neurons substantially alter measured interactions. Moreover, it provides a foundation for future work on manipulating effective interactions between neurons to better understand and potentially alter circuit function—or dysfunction.
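In a purely linear-rate caricature (an assumption for illustration; the paper treats nonlinear spiking networks, and the partition sizes and weight scale here are arbitrary), the path decomposition is easy to verify numerically: marginalizing the hidden block via a Schur complement reproduces the sum over all directed paths through hidden units.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear-rate network: 4 observed and 6 hidden units, weak random weights.
n_obs, n_hid = 4, 6
n = n_obs + n_hid
J = rng.standard_normal((n, n)) / (3 * np.sqrt(n))   # spectral radius well below 1
obs, hid = slice(0, n_obs), slice(n_obs, n)
Joo, Joh = J[obs, obs], J[obs, hid]
Jho, Jhh = J[hid, obs], J[hid, hid]

# Marginalizing the hidden block (steady-state linear response):
#   J_eff = J_oo + J_oh (I - J_hh)^{-1} J_ho
J_eff = Joo + Joh @ np.linalg.solve(np.eye(n_hid) - Jhh, Jho)

# Equivalently: the true weights plus a correction for every directed path
# through hidden units; the term J_oh J_hh^k J_ho collects paths with
# k+1 hidden hops.
J_paths = Joo.copy()
M = np.eye(n_hid)
for _ in range(60):
    J_paths += Joh @ M @ Jho
    M = M @ Jhh

print(np.allclose(J_eff, J_paths))
```

The geometric series converges only because the hidden subnetwork's spectral radius is below one; near instability, long paths through hidden units dominate and measured interactions can differ markedly from the true weights.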
Affiliation(s)
- Braden A W Brinkman
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America; Department of Physiology and Biophysics, University of Washington, Seattle, Washington, United States of America
- Fred Rieke
- Department of Physiology and Biophysics, University of Washington, Seattle, Washington, United States of America; Graduate Program in Neuroscience, University of Washington, Seattle, Washington, United States of America
- Eric Shea-Brown
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America; Department of Physiology and Biophysics, University of Washington, Seattle, Washington, United States of America; Graduate Program in Neuroscience, University of Washington, Seattle, Washington, United States of America; Allen Institute for Brain Science, Seattle, Washington, United States of America
- Michael A Buice
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America; Allen Institute for Brain Science, Seattle, Washington, United States of America
19
Widloski J, Marder MP, Fiete IR. Inferring circuit mechanisms from sparse neural recording and global perturbation in grid cells. eLife 2018; 7:e33503. [PMID: 29985132] [PMCID: PMC6078497] [DOI: 10.7554/elife.33503]
Abstract
A goal of systems neuroscience is to discover the circuit mechanisms underlying brain function. Despite experimental advances that enable circuit-wide neural recording, the problem remains open in part because solving the 'inverse problem' of inferring circuitry and mechanism by merely observing activity is hard. In the grid cell system, we show through modeling that a technique based on global circuit perturbation and examination of a novel theoretical object called the distribution of relative phase shifts (DRPS) could reveal the mechanisms of a cortical circuit at unprecedented detail using extremely sparse neural recordings. We establish feasibility, showing that the method can discriminate between recurrent versus feedforward mechanisms and amongst various recurrent mechanisms using recordings from a handful of cells. The proposed strategy demonstrates that sparse recording coupled with simple perturbation can reveal more about circuit mechanism than can full knowledge of network activity or the synaptic connectivity matrix.
Affiliation(s)
- John Widloski
- Department of Psychology, The University of California, Berkeley, United States
- Ila R Fiete
- Department of Physics, The University of Texas, Austin, United States
- Center for Learning and Memory, The University of Texas, Austin, United States
20
Karbasi A, Salavati AH, Vetterli M. Learning neural connectivity from firing activity: efficient algorithms with provable guarantees on topology. J Comput Neurosci 2018; 44:253-272. [PMID: 29464489] [PMCID: PMC5851696] [DOI: 10.1007/s10827-018-0678-8]
Abstract
The connectivity of a neuronal network has a major effect on its functionality and role. It is generally believed that the complex network structure of the brain provides a physiological basis for information processing. Therefore, identifying the network's topology has received a lot of attention in neuroscience and has been the center of many research initiatives such as the Human Connectome Project. Nevertheless, direct and invasive approaches that slice and observe the neural tissue have proven to be time-consuming, complex and costly. As a result, inverse methods that utilize the firing activity of neurons in order to identify the (functional) connections have gained momentum recently, especially in light of rapid advances in recording technologies; it will soon be possible to simultaneously monitor the activities of tens of thousands of neurons in real time. While there are a number of excellent approaches that aim to identify the functional connections from firing activities, the scalability of the proposed techniques poses a major challenge when applying them to large-scale datasets of recorded firing activities. In exceptional cases where scalability has not been an issue, the theoretical performance guarantees are usually limited to a specific family of neurons or type of firing activities. In this paper, we formulate neural network reconstruction as an instance of a graph learning problem, where we observe the behavior of nodes/neurons (i.e., firing activities) and aim to find the links/connections. We develop a scalable learning mechanism and derive the conditions under which the estimated graph for a network of leaky integrate-and-fire (LIF) neurons matches the true underlying synaptic connections. We then validate the performance of the algorithm using artificially generated data (for benchmarking) and real data recorded from multiple hippocampal areas in rats.
Affiliation(s)
- Amin Karbasi
- Inference, Information and Decision Systems Group, Yale Institute for Network Science, Yale University, New Haven, CT 06520 USA
- Amir Hesam Salavati
- Laboratory of Audiovisual Communications (LCAV), School of Computer and Communication Sciences, Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne, Switzerland
- Martin Vetterli
- Laboratory of Audiovisual Communications (LCAV), School of Computer and Communication Sciences, Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne, Switzerland
21
Nonnenmacher M, Behrens C, Berens P, Bethge M, Macke JH. Signatures of criticality arise from random subsampling in simple population models. PLoS Comput Biol 2017; 13:e1005718. [PMID: 28972970] [PMCID: PMC5640238] [DOI: 10.1371/journal.pcbi.1005718]
Abstract
The rise of large-scale recordings of neuronal activity has fueled the hope to gain new insights into the collective activity of neural ensembles. How can one link the statistics of neural population activity to underlying principles and theories? One attempt to interpret such data builds upon analogies to the behaviour of collective systems in statistical physics. Divergence of the specific heat, a measure of population statistics derived from thermodynamics, has been used to suggest that neural populations are optimized to operate at a "critical point". However, these findings have been challenged by theoretical studies which have shown that common inputs can lead to diverging specific heat. Here, we connect "signatures of criticality", and in particular the divergence of specific heat, back to statistics of neural population activity commonly studied in neural coding: firing rates and pairwise correlations. We show that the specific heat diverges whenever the average correlation strength does not depend on population size. This is necessarily true when data with correlations is randomly subsampled during the analysis process, irrespective of the detailed structure or origin of correlations. We also show how the characteristic shape of specific heat capacity curves depends on firing rates and correlations, using both analytically tractable models and numerical simulations of a canonical feed-forward population model. To analyze these simulations, we develop efficient methods for characterizing large-scale neural population activity with maximum entropy models. We find that, consistent with experimental findings, increases in firing rates and correlations directly lead to more pronounced signatures. Thus, previous reports of thermodynamical criticality in neural populations based on the analysis of specific heat can be explained by average firing rates and correlations, and are not indicative of an optimized coding strategy. We conclude that a reliable interpretation of statistical tests for theories of neural coding is possible only in reference to relevant ground-truth models.
Affiliation(s)
- Marcel Nonnenmacher
- Research Center caesar, an associate of the Max Planck Society, Bonn, Germany
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Bernstein Center for Computational Neuroscience, Tübingen, Germany
- Christian Behrens
- Bernstein Center for Computational Neuroscience, Tübingen, Germany
- Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Philipp Berens
- Bernstein Center for Computational Neuroscience, Tübingen, Germany
- Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Matthias Bethge
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Bernstein Center for Computational Neuroscience, Tübingen, Germany
- Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Institute of Theoretical Physics, University of Tübingen, Tübingen, Germany
- Jakob H. Macke
- Research Center caesar, an associate of the Max Planck Society, Bonn, Germany
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Bernstein Center for Computational Neuroscience, Tübingen, Germany
22
Linderman SW, Gershman SJ. Using computational theory to constrain statistical models of neural data. Curr Opin Neurobiol 2017; 46:14-24. [PMID: 28732273] [PMCID: PMC5660645] [DOI: 10.1016/j.conb.2017.06.004]
Abstract
Computational neuroscience is, to first order, dominated by two approaches: the 'bottom-up' approach, which searches for statistical patterns in large-scale neural recordings, and the 'top-down' approach, which begins with a theory of computation and considers plausible neural implementations. While this division is not clear-cut, we argue that these approaches should be much more intimately linked. From a Bayesian perspective, computational theories provide constrained prior distributions on neural data, albeit highly sophisticated ones. By connecting theory to observation via a probabilistic model, we provide the link necessary to test, evaluate, and revise our theories in a data-driven and statistically rigorous fashion. This review highlights examples of this theory-driven pipeline for neural data analysis in recent literature and illustrates it with a worked example based on the temporal difference learning model of dopamine.
Affiliation(s)
- Samuel J Gershman
- Department of Psychology and Center for Brain Science, Harvard University, United States.
23
Friedrich J, Yang W, Soudry D, Mu Y, Ahrens MB, Yuste R, Peterka DS, Paninski L. Multi-scale approaches for high-speed imaging and analysis of large neural populations. PLoS Comput Biol 2017; 13:e1005685. [PMID: 28771570] [PMCID: PMC5557609] [DOI: 10.1371/journal.pcbi.1005685]
Abstract
Progress in modern neuroscience critically depends on our ability to observe the activity of large neuronal populations with cellular spatial and high temporal resolution. However, two bottlenecks constrain efforts towards fast imaging of large populations. First, the resulting large video data is challenging to analyze. Second, there is an explicit tradeoff between imaging speed, signal-to-noise, and field of view: with current recording technology we cannot image very large neuronal populations with simultaneously high spatial and temporal resolution. Here we describe multi-scale approaches for alleviating both of these bottlenecks. First, we show that spatial and temporal decimation techniques based on simple local averaging provide order-of-magnitude speedups in spatiotemporally demixing calcium video data into estimates of single-cell neural activity. Second, once the shapes of individual neurons have been identified at fine scale (e.g., after an initial phase of conventional imaging with standard temporal and spatial resolution), we find that the spatial/temporal resolution tradeoff shifts dramatically: after demixing we can accurately recover denoised fluorescence traces and deconvolved neural activity of each individual neuron from coarse scale data that has been spatially decimated by an order of magnitude. This offers a cheap method for compressing this large video data, and also implies that it is possible to either speed up imaging significantly, or to "zoom out" by a corresponding factor to image order-of-magnitude larger neuronal populations with minimal loss in accuracy or temporal resolution.
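The first ingredient, decimation by simple local averaging, is concrete enough to sketch. A minimal numpy version (the function name, movie shape, and block sizes are illustrative, not taken from the paper's code; the real pipeline applies this before and after demixing):

```python
import numpy as np

def decimate(movie, ts, ss):
    """Average a (time, y, x) movie over ts-frame temporal blocks and
    ss-by-ss spatial blocks (simple local averaging); trailing frames
    and pixels that do not fill a block are discarded."""
    t, h, w = movie.shape
    m = movie[: t - t % ts, : h - h % ss, : w - w % ss]
    m = m.reshape(t // ts, ts, h // ss, ss, w // ss, ss)
    return m.mean(axis=(1, 3, 5))

# Tiny synthetic "movie": 2 frames of 4x4 pixels.
movie = np.arange(32, dtype=float).reshape(2, 4, 4)
small = decimate(movie, ts=2, ss=2)
print(small.shape)   # (1, 2, 2)
```

Decimating by a factor of 2 in time and 2 in each spatial axis shrinks the data eightfold, which is the source of the order-of-magnitude speedups the abstract describes.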
Affiliation(s)
- Johannes Friedrich
- Department of Statistics, Grossman Center for the Statistics of Mind, and Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, United States of America
- Weijian Yang
- NeuroTechnology Center, Department of Biological Sciences, Columbia University, New York, New York, United States of America
- Daniel Soudry
- Department of Statistics, Grossman Center for the Statistics of Mind, and Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- Yu Mu
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, United States of America
- Misha B. Ahrens
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, United States of America
- Rafael Yuste
- NeuroTechnology Center, Department of Biological Sciences, Columbia University, New York, New York, United States of America
- Kavli Institute of Brain Science, Columbia University, New York, New York, United States of America
- Darcy S. Peterka
- NeuroTechnology Center, Department of Biological Sciences, Columbia University, New York, New York, United States of America
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Liam Paninski
- Department of Statistics, Grossman Center for the Statistics of Mind, and Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- NeuroTechnology Center, Department of Biological Sciences, Columbia University, New York, New York, United States of America
- Kavli Institute of Brain Science, Columbia University, New York, New York, United States of America
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
24
Williamson RC, Cowley BR, Litwin-Kumar A, Doiron B, Kohn A, Smith MA, Yu BM. Scaling Properties of Dimensionality Reduction for Neural Populations and Network Models. PLoS Comput Biol 2016; 12:e1005141. [PMID: 27926936] [PMCID: PMC5142778] [DOI: 10.1371/journal.pcbi.1005141]
Abstract
Recent studies have applied dimensionality reduction methods to understand how the multi-dimensional structure of neural population activity gives rise to brain function. It is unclear, however, how the results obtained from dimensionality reduction generalize to recordings with larger numbers of neurons and trials or how these results relate to the underlying network structure. We address these questions by applying factor analysis to recordings in the visual cortex of non-human primates and to spiking network models that self-generate irregular activity through a balance of excitation and inhibition. We compared the scaling trends of two key outputs of dimensionality reduction—shared dimensionality and percent shared variance—with neuron and trial count. We found that the scaling properties of networks with non-clustered and clustered connectivity differed, and that the in vivo recordings were more consistent with the clustered network. Furthermore, recordings from tens of neurons were sufficient to identify the dominant modes of shared variability that generalize to larger portions of the network. These findings can help guide the interpretation of dimensionality reduction outputs in regimes of limited neuron and trial sampling and help relate these outputs to the underlying network structure. We seek to understand how billions of neurons in the brain work together to give rise to everyday brain function. In most current experimental settings, we can only record from tens of neurons for a few hours at a time. A major question in systems neuroscience is whether our interpretation of how neurons interact would change if we monitor orders of magnitude more neurons and for substantially more time. In this study, we use realistic networks of model neurons, which allow us to analyze the activity from as many model neurons as we want for as long as we want. For these models, we found that we can identify the salient interactions among neurons and interpret their activity meaningfully within the range of neurons and recording time available in current experiments. Furthermore, we studied how the neural activity from the models reflects how the neurons are connected. These results help to guide the interpretation of analyses using populations of neurons in the context of the larger network to understand brain function.
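The notion of "shared dimensionality" can be illustrated with a crude stand-in for factor analysis: an eigenvalue threshold on the population covariance, using a private-noise level that is known by construction here (an assumption; factor analysis proper estimates shared and private variance jointly and does not need the noise level).

```python
import numpy as np

rng = np.random.default_rng(3)

# 30 model neurons driven by 3 shared latent modes plus private noise.
n_neurons, n_shared, n_trials = 30, 3, 2000
loadings = rng.standard_normal((n_neurons, n_shared))
latents = rng.standard_normal((n_trials, n_shared))
X = latents @ loadings.T + 0.5 * rng.standard_normal((n_trials, n_neurons))

# Eigenvalues of the sample covariance: shared modes stand far above the
# private-noise floor (variance 0.25 per neuron, known by design here).
evals = np.linalg.eigvalsh(np.cov(X, rowvar=False))[::-1]
dim_shared = int(np.sum(evals > 5 * 0.25))
print(dim_shared)
```

With a few thousand trials the three shared eigenvalues dominate, mirroring the abstract's point that modest recordings can already expose the dominant modes of shared variability.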
Affiliation(s)
- Ryan C. Williamson
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
- School of Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Department of Machine Learning, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
- Benjamin R. Cowley
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
- Department of Machine Learning, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
- Ashok Litwin-Kumar
- Center for Theoretical Neuroscience, Columbia University, New York City, New York, United States of America
- Brent Doiron
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
- Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Adam Kohn
- Dominick Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York, United States of America
- Department of Ophthalmology and Vision Sciences, Albert Einstein College of Medicine, Bronx, New York, United States of America
- Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, New York, United States of America
- Matthew A. Smith
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
- Department of Ophthalmology, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Fox Center for Vision Restoration, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Byron M. Yu
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
25
Consistent estimation of complete neuronal connectivity in large neuronal populations using sparse "shotgun" neuronal activity sampling. J Comput Neurosci 2016; 41:157-84. [PMID: 27515518] [DOI: 10.1007/s10827-016-0611-y]
Abstract
We investigate the properties of the recently proposed "shotgun" sampling approach for the common inputs problem in the functional estimation of neuronal connectivity. We study the asymptotic correctness, the speed of convergence, and the data size requirements of such an approach. We show that the shotgun approach can be expected to allow the inference of the complete connectivity matrix in large neuronal populations under some rather general conditions. However, we find that the posterior error of the shotgun connectivity estimator grows quickly with the size of unobserved neuronal populations, the square of average connectivity strength, and the square of observation sparseness. This implies that shotgun connectivity estimation will require significantly larger amounts of neuronal activity data whenever the number of neurons in observed neuronal populations remains small. We present a numerical approach for solving the shotgun estimation problem in general settings and use it to demonstrate shotgun connectivity inference in the examples of simulated synfire and weakly coupled cortical neuronal networks.
26
Schneidman E. Towards the design principles of neural population codes. Curr Opin Neurobiol 2016; 37:133-140. [PMID: 27016639] [DOI: 10.1016/j.conb.2016.03.001]
Abstract
The ability to record the joint activity of large groups of neurons would allow for direct study of information representation and computation at the level of whole circuits in the brain. The combinatorial space of potential population activity patterns and neural noise imply that it would be impossible to directly map the relations between stimuli and population responses. Understanding of large neural population codes therefore depends on identifying simplifying design principles. We review recent results showing that strongly correlated population codes can be explained using minimal models that rely on low order relations among cells. We discuss the implications for large populations, and how such models allow for mapping the semantic organization of the neural codebook and stimulus space, and decoding.
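The minimal models in question are pairwise maximum-entropy (Ising-like) models. For a handful of cells the fit can be done exactly; a hedged sketch on 3 synthetic neurons (the spike words, learning rate, and iteration count are illustrative assumptions, and real population analyses use approximate learning rather than this exhaustive enumeration):

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

# Synthetic binary spike words for 3 neurons (stand-ins for recorded data),
# with neurons 0 and 2 correlated by construction.
words = (rng.random((5000, 3)) < np.array([0.2, 0.4, 0.3])).astype(float)
copy = rng.random(5000) < 0.5
words[copy, 2] = words[copy, 0]

mean_t = words.mean(axis=0)
corr_t = words.T @ words / len(words)          # target second moments

# Fit the pairwise maximum-entropy model
#   P(s) ∝ exp(h·s + Σ_{i<j} J_ij s_i s_j)
# by exact gradient ascent on the log-likelihood (feasible: 2^3 states).
states = np.array(list(itertools.product([0, 1], repeat=3)), dtype=float)
h = np.zeros(3)
J = np.zeros((3, 3))                           # strictly upper-triangular
for _ in range(5000):
    E = states @ h + np.einsum('si,ij,sj->s', states, J, states)
    p = np.exp(E - E.max())
    p /= p.sum()
    mean_m = p @ states                        # model means
    corr_m = states.T @ (states * p[:, None])  # model second moments
    h += 0.1 * (mean_t - mean_m)
    J += 0.1 * np.triu(corr_t - corr_m, k=1)

print(np.abs(mean_t - mean_m).max() < 1e-3)
```

At convergence the model reproduces all means and pairwise correlations while remaining maximally unstructured otherwise, which is exactly the "low order relations" simplification the abstract describes; scaling this to large populations is where the reviewed approximate methods come in.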
Affiliation(s)
- Elad Schneidman
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel.
27
Correction: Efficient "Shotgun" Inference of Neural Connectivity from Highly Sub-sampled Activity Data. PLoS Comput Biol 2015; 11:e1004657. [PMID: 26633800] [PMCID: PMC4669160] [DOI: 10.1371/journal.pcbi.1004657]