1
Donner C, Bartram J, Hornauer P, Kim T, Roqueiro D, Hierlemann A, Obozinski G, Schröter M. Ensemble learning and ground-truth validation of synaptic connectivity inferred from spike trains. PLoS Comput Biol 2024; 20:e1011964. PMID: 38683881; PMCID: PMC11081509; DOI: 10.1371/journal.pcbi.1011964.
Abstract
Probing the architecture of neuronal circuits and the principles that underlie their functional organization remains an important challenge of modern neuroscience. This holds true, in particular, for the inference of neuronal connectivity from large-scale extracellular recordings. Despite the popularity of this approach and a number of elaborate methods to reconstruct networks, the degree to which synaptic connections can be reconstructed from spike-train recordings alone remains controversial. Here, we provide a framework to probe and compare connectivity inference algorithms, using a combination of synthetic ground-truth and in vitro data sets, where the connectivity labels were obtained from simultaneous high-density microelectrode array (HD-MEA) and patch-clamp recordings. We find that reconstruction performance critically depends on the regularity of the recorded spontaneous activity, i.e., its dynamical regime, the type of connectivity, and the amount of available spike-train data. We therefore introduce an ensemble artificial neural network (eANN) to improve connectivity inference. We train the eANN on the validated outputs of six established inference algorithms and show how it improves network reconstruction accuracy and robustness. Overall, the eANN demonstrated strong performance across different dynamical regimes, worked well on smaller datasets, and improved the detection of synaptic connectivity, especially inhibitory connections. Results indicated that the eANN also improved the topological characterization of neuronal networks. The presented methodology contributes to advancing the performance of inference algorithms and facilitates our understanding of how neuronal activity relates to synaptic connectivity.
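The stacking idea can be illustrated in a few lines. Below is a minimal sketch, not the authors' implementation: per-pair scores from several base inference algorithms are combined into a feature vector and a small neural network is trained on ground-truth labels; the scores and labels are synthetic stand-ins for the outputs of the six published algorithms and the patch-clamp ground truth.

```python
# Minimal sketch of an ensemble ("stacking") classifier over base-method scores.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_pairs, n_methods = 5000, 6                 # neuron pairs x base algorithms (assumed sizes)
labels = rng.random(n_pairs) < 0.1           # ground truth: ~10% of pairs connected
scores = rng.normal(0.0, 1.0, (n_pairs, n_methods)) + 1.5 * labels[:, None]  # fake method scores

X_tr, X_te, y_tr, y_te = train_test_split(scores, labels, test_size=0.3, random_state=0)
eann = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
eann.fit(X_tr, y_tr)

print("best single-method AUC:", max(roc_auc_score(y_te, X_te[:, j]) for j in range(n_methods)))
print("ensemble AUC:          ", roc_auc_score(y_te, eann.predict_proba(X_te)[:, 1]))
```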
Affiliation(s)
- Christian Donner
- Swiss Data Science Center, ETH Zürich & EPFL, Zürich & Lausanne, Switzerland
- Julian Bartram
- Department of Biosystems Science and Engineering, ETH Zürich, Basel, Switzerland
- Philipp Hornauer
- Department of Biosystems Science and Engineering, ETH Zürich, Basel, Switzerland
- Taehoon Kim
- Department of Biosystems Science and Engineering, ETH Zürich, Basel, Switzerland
- Damian Roqueiro
- Department of Biosystems Science and Engineering, ETH Zürich, Basel, Switzerland
- Andreas Hierlemann
- Department of Biosystems Science and Engineering, ETH Zürich, Basel, Switzerland
- Guillaume Obozinski
- Swiss Data Science Center, ETH Zürich & EPFL, Zürich & Lausanne, Switzerland
- Manuel Schröter
- Department of Biosystems Science and Engineering, ETH Zürich, Basel, Switzerland
2
Liang T, Brinkman BAW. Statistically inferred neuronal connections in subsampled neural networks strongly correlate with spike train covariances. Phys Rev E 2024; 109:044404. PMID: 38755896; DOI: 10.1103/physreve.109.044404.
Abstract
Statistically inferred neuronal connections from observed spike train data are often skewed from ground truth by factors such as model mismatch, unobserved neurons, and limited data. Spike train covariances, sometimes referred to as "functional connections," are often used as a proxy for the connections between pairs of neurons, but reflect statistical relationships between neurons, not anatomical connections. Moreover, covariances are not causal: spiking activity is correlated in both the past and the future, whereas neurons respond only to synaptic inputs in the past. Connections inferred by maximum likelihood, by contrast, can be constrained to be causal. We show in this work, however, that the connections inferred in spontaneously active networks of stochastic leaky integrate-and-fire neurons strongly correlate with the covariances between neurons, and may reflect noncausal relationships, when many neurons are unobserved or when neurons are weakly coupled. This phenomenon occurs across different network structures, including random networks and balanced excitatory-inhibitory networks. We use a combination of simulations and a mean-field analysis with fluctuation corrections to elucidate the relationships between spike train covariances, inferred synaptic filters, and ground-truth connections in partially observed networks.
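The central comparison can be caricatured with linear-Gaussian dynamics in place of the stochastic leaky integrate-and-fire networks used in the study: a sparse coupling matrix drives the activity, only a subset of units is "observed", and the lag-1 covariances among observed units are correlated with the ground-truth weights. All sizes and weights below are assumptions.

```python
# Linear-Gaussian caricature: lagged covariances of an observed subset vs. true weights.
import numpy as np

rng = np.random.default_rng(1)
N, T, frac_obs = 200, 20000, 0.25
W = (rng.random((N, N)) < 0.05) * rng.normal(0.1, 0.02, (N, N))   # sparse ground truth
np.fill_diagonal(W, 0.0)
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()                     # keep dynamics stable

x, X = np.zeros(N), np.empty((T, N))
for t in range(T):                                                 # x_{t+1} = W x_t + noise
    x = W @ x + rng.normal(0.0, 1.0, N)
    X[t] = x

obs = rng.choice(N, int(frac_obs * N), replace=False)
Xo = X[:, obs] - X[:, obs].mean(0)
C1 = Xo[1:].T @ Xo[:-1] / (T - 1)                                  # lag-1 covariance, observed units

i, j = np.meshgrid(obs, obs, indexing="ij")
off_diag = ~np.eye(len(obs), dtype=bool)
r = np.corrcoef(C1[off_diag], W[i, j][off_diag])[0, 1]
print(f"corr(lag-1 covariance, true weights) over observed pairs: {r:.2f}")
```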
Affiliation(s)
- Tong Liang
- Department of Physics and Astronomy, Stony Brook University, Stony Brook, New York 11794, USA
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York 11794, USA
- Braden A W Brinkman
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York 11794, USA
3
Shomali SR, Rasuli SN, Ahmadabadi MN, Shimazaki H. Uncovering hidden network architecture from spiking activities using an exact statistical input-output relation of neurons. Commun Biol 2023; 6:169. PMID: 36792689; PMCID: PMC9932086; DOI: 10.1038/s42003-023-04511-z.
Abstract
Identifying network architecture from observed neural activities is crucial in neuroscience studies. A key requirement is knowledge of the statistical input-output relation of single neurons in vivo. By utilizing an exact analytical solution of the spike timing for leaky integrate-and-fire neurons under noisy inputs balanced near the threshold, we construct a framework that links synaptic type, strength, and spiking nonlinearity with the statistics of neuronal population activity. The framework explains structured pairwise and higher-order interactions of neurons receiving common inputs under different architectures. We compared the theoretical predictions with the activity of monkey and mouse V1 neurons and found that excitatory inputs given to pairs explained the observed sparse activity characterized by strong negative triple-wise interactions, thereby ruling out the alternative explanation by shared inhibition. Moreover, we showed that the strong interactions are a signature of excitatory rather than inhibitory inputs whenever the spontaneous rate is low. We present a guide map of neural interactions that helps researchers specify the hidden neuronal motifs underlying interactions found in empirical data.
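The pairwise and triple-wise interactions referred to above are the interaction parameters of a log-linear model on binary activity patterns. The sketch below uses the standard log-linear definitions, estimated from pattern frequencies, rather than the paper's analytical LIF-based framework; the toy data are an assumption.

```python
# Pairwise and triple-wise log-linear interactions for three binarized neurons.
import numpy as np
from itertools import product

def interactions(spikes):
    """spikes: (T, 3) binary array of simultaneous activity in small time bins."""
    p = {pat: np.mean(np.all(spikes == pat, axis=1)) + 1e-9      # pattern probabilities
         for pat in product((0, 1), repeat=3)}
    theta_12 = np.log(p[1, 1, 0] * p[0, 0, 0] / (p[1, 0, 0] * p[0, 1, 0]))
    theta_123 = np.log(p[1, 1, 1] * p[1, 0, 0] * p[0, 1, 0] * p[0, 0, 1]
                       / (p[1, 1, 0] * p[1, 0, 1] * p[0, 1, 1] * p[0, 0, 0]))
    return theta_12, theta_123

# Toy data: three sparsely firing neurons sharing an excitatory common input
rng = np.random.default_rng(2)
common = rng.random(100_000) < 0.05
spk = (rng.random((100_000, 3)) < (0.01 + 0.4 * common[:, None])).astype(int)
print("pairwise theta_12, triple-wise theta_123:", interactions(spk))
```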
Affiliation(s)
- Safura Rashid Shomali
- School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran, 19395-5746, Iran
- Seyyed Nader Rasuli
- School of Physics, Institute for Research in Fundamental Sciences (IPM), Tehran, 19395-5531, Iran
- Department of Physics, University of Guilan, Rasht, 41335-1914, Iran
- Majid Nili Ahmadabadi
- Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, 14395-515, Iran
- Hideaki Shimazaki
- Graduate School of Informatics, Kyoto University, Kyoto, 606-8501, Japan
- Center for Human Nature, Artificial Intelligence, and Neuroscience (CHAIN), Hokkaido University, Hokkaido, 060-0812, Japan
4
Endo D, Kobayashi R, Bartolo R, Averbeck BB, Sugase-Miyamoto Y, Hayashi K, Kawano K, Richmond BJ, Shinomoto S. A convolutional neural network for estimating synaptic connectivity from spike trains. Sci Rep 2021; 11:12087. PMID: 34103546; PMCID: PMC8187444; DOI: 10.1038/s41598-021-91244-w.
Abstract
The recent increase in reliable, simultaneous high-channel-count extracellular recordings is exciting for physiologists and theoreticians because it offers the possibility of reconstructing the underlying neuronal circuits. We recently presented a method of inferring this circuit connectivity from neuronal spike trains by applying the generalized linear model to cross-correlograms. Although the algorithm reconstructs circuits well, its parameters need to be carefully tuned for each individual dataset. Here we present another method using a Convolutional Neural Network for Estimating synaptic Connectivity from spike trains. After training on large amounts of simulated data, this method robustly captures the specific feature of monosynaptic impact in a noisy cross-correlogram. There are no user-adjustable parameters. With this new method, we have constructed diagrams of neuronal circuits recorded in several cortical areas of monkeys.
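The input to a method of this kind is the cross-correlogram between candidate pre- and postsynaptic spike trains. Below is a minimal sketch of computing one; the bin size, window, and the toy 2 ms connection are assumptions, not settings from the paper.

```python
# Cross-correlogram between two spike trains (times in seconds).
import numpy as np

def cross_correlogram(pre, post, window=0.05, bin_size=0.001):
    """Histogram of post-spike times relative to each pre spike, within +/- window (s)."""
    edges = np.arange(-window, window + bin_size, bin_size)
    counts = np.zeros(len(edges) - 1)
    post = np.sort(post)
    for t in pre:
        lo, hi = np.searchsorted(post, [t - window, t + window])
        counts += np.histogram(post[lo:hi] - t, bins=edges)[0]
    return edges[:-1] + bin_size / 2, counts

# Toy example: the postsynaptic train echoes ~20% of presynaptic spikes ~2 ms later
rng = np.random.default_rng(3)
pre = np.sort(rng.uniform(0, 600, 6000))                      # ~10 Hz for 10 minutes
transmitted = rng.random(len(pre)) < 0.2
echo = pre[transmitted] + 0.002 + rng.normal(0, 0.0002, transmitted.sum())
post = np.sort(np.concatenate([rng.uniform(0, 600, 6000), echo]))

lags, cc = cross_correlogram(pre, post)
print("peak lag (ms):", 1000 * lags[np.argmax(cc)])
```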
Affiliation(s)
- Daisuke Endo
- Graduate School of Informatics, Kyoto University, Kyoto, 606-8501, Japan
- Ryota Kobayashi
- Mathematics and Informatics Center, The University of Tokyo, Tokyo, 113-8656, Japan
- Department of Complexity Science and Engineering, The University of Tokyo, Chiba, 277-8561, Japan
- JST, PRESTO, Saitama, 332-0012, Japan
- Ramon Bartolo
- Laboratory of Neuropsychology, NIMH/NIH/DHHS, Bethesda, MD, 20814, USA
- Bruno B Averbeck
- Laboratory of Neuropsychology, NIMH/NIH/DHHS, Bethesda, MD, 20814, USA
- Yasuko Sugase-Miyamoto
- Human Informatics and Interaction Research Institute, National Institute of Advanced Industrial Science and Technology, Tsukuba, 305-8568, Japan
- Kazuko Hayashi
- Human Informatics and Interaction Research Institute, National Institute of Advanced Industrial Science and Technology, Tsukuba, 305-8568, Japan
- Japan Society for the Promotion of Science, Tokyo, 102-0083, Japan
- Kenji Kawano
- Human Informatics and Interaction Research Institute, National Institute of Advanced Industrial Science and Technology, Tsukuba, 305-8568, Japan
- Barry J Richmond
- Laboratory of Neuropsychology, NIMH/NIH/DHHS, Bethesda, MD, 20814, USA
- Shigeru Shinomoto
- Graduate School of Informatics, Kyoto University, Kyoto, 606-8501, Japan
- Brain Information Communication Research Laboratory Group, ATR Institute International, Kyoto, 619-0288, Japan
5
Ren N, Ito S, Hafizi H, Beggs JM, Stevenson IH. Model-based detection of putative synaptic connections from spike recordings with latency and type constraints. J Neurophysiol 2020; 124:1588-1604. DOI: 10.1152/jn.00066.2020.
Abstract
Detecting synaptic connections using large-scale extracellular spike recordings is a difficult statistical problem. Here, we develop an extension of a generalized linear model that explicitly separates fast synaptic effects and slow background fluctuations in cross-correlograms between pairs of neurons while incorporating circuit properties learned from the whole network. This model outperforms two previously developed synapse detection methods in the simulated networks and recovers plausible connections from hundreds of neurons in in vitro multielectrode array data.
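As a rough illustration of the separation described above (not the authors' actual model), cross-correlogram counts can be described by a Poisson GLM whose rate combines a smooth slow baseline with a narrow transient at a short positive latency; the fitted weight on the fast kernel then serves as the putative synaptic effect. The basis functions, latency, and width below are placeholder choices.

```python
# Poisson GLM on cross-correlogram counts: slow baseline plus a fast "synaptic" bump.
import numpy as np
from scipy.optimize import minimize

def fit_cc_glm(lags, counts, latency=0.002, width=0.001):
    slow = np.vander(lags / np.abs(lags).max(), 4, increasing=True)   # slow baseline basis
    fast = np.exp(-0.5 * ((lags - latency) / width) ** 2)[:, None]    # putative synaptic bump
    X = np.hstack([slow, fast])

    def nll(beta):                                   # Poisson negative log-likelihood (up to a constant)
        eta = X @ beta
        return np.sum(np.exp(eta) - counts * eta)

    res = minimize(nll, np.zeros(X.shape[1]), method="BFGS")
    return res.x[-1]                                 # log-scale weight on the fast kernel

# 'lags' (s) and 'counts' can come from any measured cross-correlogram; a clearly
# positive return value suggests an excitatory connection at the assumed latency.
```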
Affiliation(s)
- Naixin Ren
- Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut
- Shinya Ito
- Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, California
- Hadi Hafizi
- Department of Physics, Indiana University, Bloomington, Indiana
- John M. Beggs
- Department of Physics, Indiana University, Bloomington, Indiana
- Ian H. Stevenson
- Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut
- Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut
6
Latimer KW, Rieke F, Pillow JW. Inferring synaptic inputs from spikes with a conductance-based neural encoding model. eLife 2019; 8:e47012. PMID: 31850846; PMCID: PMC6989090; DOI: 10.7554/elife.47012.
Abstract
Descriptive statistical models of neural responses generally aim to characterize the mapping from stimuli to spike responses while ignoring biophysical details of the encoding process. Here, we introduce an alternative approach, the conductance-based encoding model (CBEM), which describes a mapping from stimuli to excitatory and inhibitory synaptic conductances governing the dynamics of sub-threshold membrane potential. Remarkably, we show that the CBEM can be fit to extracellular spike train data and then used to predict excitatory and inhibitory synaptic currents. We validate these predictions with intracellular recordings from macaque retinal ganglion cells. Moreover, we offer a novel quasi-biophysical interpretation of the Poisson generalized linear model (GLM) as a special case of the CBEM in which excitation and inhibition are perfectly balanced. This work forges a new link between statistical and biophysical models of neural encoding and sheds new light on the biophysical variables that underlie spiking in the early visual pathway.
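A schematic forward pass of a conductance-based encoding model of this kind is sketched below; the stimulus filters, reversal potentials, and output nonlinearity are placeholders, not the quantities fitted in the paper.

```python
# Stimulus -> excitatory/inhibitory conductances -> subthreshold voltage -> spike rate.
import numpy as np

rng = np.random.default_rng(4)
dt, n = 0.001, 5000
stim = rng.normal(0.0, 1.0, n)

t = np.arange(0, 0.1, dt)
k_e = np.exp(-t / 0.010)                                   # excitatory stimulus filter (assumed)
k_i = np.exp(-t / 0.030)                                   # slower inhibitory filter (assumed)
g_e = np.log1p(np.exp(np.convolve(stim, k_e)[:n]))         # softplus keeps conductances positive
g_i = np.log1p(np.exp(np.convolve(-stim, k_i)[:n]))

E_e, E_i, E_l, g_l, tau = 0.0, -80.0, -60.0, 1.0, 0.02
V = np.full(n, E_l)
for i in range(1, n):
    g_tot = g_l + g_e[i] + g_i[i]
    V_inf = (g_l * E_l + g_e[i] * E_e + g_i[i] * E_i) / g_tot
    V[i] = V[i - 1] + dt * g_tot * (V_inf - V[i - 1]) / tau

rate = np.exp((V + 45.0) / 6.0)                            # soft-threshold spiking nonlinearity (Hz)
spikes = rng.poisson(rate * dt)
print("mean firing rate (Hz):", spikes.sum() / (n * dt))
```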
Affiliation(s)
- Kenneth W Latimer
- Department of Physiology and Biophysics, University of Washington, Seattle, United States
- Fred Rieke
- Department of Physiology and Biophysics, University of Washington, Seattle, United States
- Jonathan W Pillow
- Princeton Neuroscience Institute, Department of Psychology, Princeton University, Princeton, United States
7
Inferring and validating mechanistic models of neural microcircuits based on spike-train data. Nat Commun 2019; 10:4933. PMID: 31666513; PMCID: PMC6821748; DOI: 10.1038/s41467-019-12572-0.
Abstract
The interpretation of neuronal spike train recordings often relies on abstract statistical models that allow for principled parameter estimation and model selection but provide only limited insights into underlying microcircuits. In contrast, mechanistic models are useful to interpret microcircuit dynamics, but are rarely quantitatively matched to experimental data due to methodological challenges. Here we present analytical methods to efficiently fit spiking circuit models to single-trial spike trains. Using derived likelihood functions, we statistically infer the mean and variance of hidden inputs, neuronal adaptation properties and connectivity for coupled integrate-and-fire neurons. Comprehensive evaluations on synthetic data, validations using ground truth in-vitro and in-vivo recordings, and comparisons with existing techniques demonstrate that parameter estimation is very accurate and efficient, even for highly subsampled networks. Our methods bridge statistical, data-driven and theoretical, model-based neurosciences at the level of spiking circuits, for the purpose of a quantitative, mechanistic interpretation of recorded neuronal population activity. It is difficult to fit mechanistic, biophysically constrained circuit models to spike train data from in vivo extracellular recordings. Here the authors present analytical methods that enable efficient parameter estimation for integrate-and-fire circuit models and inference of the underlying connectivity structure in subsampled networks.
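The generative model being fitted is, roughly, an integrate-and-fire neuron driven by fluctuating input whose mean and variance are hidden. A minimal forward simulation of that model is sketched below; all parameter values are assumptions, and the paper's contribution is the analytical likelihood used to solve the inverse problem, not this simulation.

```python
# Leaky integrate-and-fire neuron driven by input with mean mu and noise strength sigma.
import numpy as np

def simulate_lif(mu, sigma, T=60.0, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0, seed=0):
    rng = np.random.default_rng(seed)
    v, spikes = 0.0, []
    for i in range(int(T / dt)):
        v += dt * (mu - v) / tau + sigma * np.sqrt(dt / tau) * rng.normal()
        if v >= v_th:                      # threshold crossing: record a spike and reset
            spikes.append(i * dt)
            v = v_reset
    return np.asarray(spikes)

spk = simulate_lif(mu=0.9, sigma=0.5)      # mean input just below threshold: noise-driven firing
isi = np.diff(spk)
print(f"rate {len(spk) / 60:.1f} Hz, ISI CV {isi.std() / isi.mean():.2f}")
```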
8
Kobayashi R, Kurita S, Kurth A, Kitano K, Mizuseki K, Diesmann M, Richmond BJ, Shinomoto S. Reconstructing neuronal circuitry from parallel spike trains. Nat Commun 2019; 10:4468. PMID: 31578320; PMCID: PMC6775109; DOI: 10.1038/s41467-019-12225-2.
Abstract
State-of-the-art techniques allow researchers to record large numbers of spike trains in parallel for many hours. With enough such data, we should be able to infer the connectivity among neurons. Here we develop a method for reconstructing neuronal circuitry by applying a generalized linear model (GLM) to spike cross-correlations. Our method estimates connections between neurons in units of postsynaptic potentials and the amount of spike recordings needed to verify connections. The performance of the inference is optimized on synthetic data by counting estimation errors. This method is superior to other established methods in correctly estimating connectivity. By applying our method to rat hippocampal data, we show that the types of estimated connections match the results inferred from other physiological cues. Thus our method provides the means to build a circuit diagram from recorded spike trains, thereby providing a basis for elucidating the differences in information processing in different brain regions.

Current techniques have enabled the simultaneous collection of spike train data from large numbers of neurons. Here, the authors report a method to infer the underlying neural circuit connectivity diagram based on a generalized linear model applied to spike cross-correlations between neurons.
Affiliation(s)
- Ryota Kobayashi
- National Institute of Informatics, Tokyo, 101-8430, Japan
- Department of Informatics, SOKENDAI (The Graduate University for Advanced Studies), Tokyo, 101-8430, Japan
- Shuhei Kurita
- Center for Advanced Intelligence Project, RIKEN, Tokyo, 103-0027, Japan
- Anno Kurth
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, 52425, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Katsunori Kitano
- Department of Information Science and Engineering, Ritsumeikan University, Kusatsu, 525-8577, Japan
- Kenji Mizuseki
- Department of Physiology, Osaka City University Graduate School of Medicine, Osaka, 545-8585, Japan
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, 52425, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, School of Medicine, RWTH Aachen University, Aachen, Germany
- Barry J Richmond
- Laboratory of Neuropsychology, NIMH/NIH/DHHS, Bethesda, MD, 20814, USA
- Shigeru Shinomoto
- Department of Physics, Kyoto University, Kyoto, 606-8502, Japan
- Brain Information Communication Research Laboratory Group, ATR Institute International, Kyoto, 619-0288, Japan
9
Abstract
In recent years new technologies in neuroscience have made it possible to measure the activities of large numbers of neurons simultaneously in behaving animals. For each neuron a fluorescence trace is measured; this can be seen as a first-order approximation of the neuron's activity over time. Determining the exact time at which a neuron spikes on the basis of its fluorescence trace is an important open problem in the field of computational neuroscience. Recently, a convex optimization problem involving an ℓ1 penalty was proposed for this task. In this paper we slightly modify that recent proposal by replacing the ℓ1 penalty with an ℓ0 penalty. In stark contrast to the conventional wisdom that ℓ0 optimization problems are computationally intractable, we show that the resulting optimization problem can be efficiently solved for the global optimum using an extremely simple and efficient dynamic programming algorithm. Our R-language implementation of the proposed algorithm runs in a few minutes on fluorescence traces of 100,000 timesteps. Furthermore, our proposal leads to substantial improvements over the previous ℓ1 proposal, in simulations as well as on two calcium imaging datasets. R-language software for our proposal is available on CRAN in the package LZeroSpikeInference. Instructions for running this software in python can be found at https://github.com/jewellsean/LZeroSpikeInference.
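Below is a naive O(n^2) sketch of the ℓ0 changepoint dynamic program described above, under an AR(1) calcium decay model; the released LZeroSpikeInference package implements a much faster, pruned version of this recursion, so this is only an illustration of the idea, with assumed decay and penalty values.

```python
# l0 spike inference by dynamic programming over changepoints of a decaying trace.
import numpy as np

def l0_spike_inference(y, gam, lam):
    n = len(y)
    F = np.empty(n + 1); F[0] = -lam            # optimal cost of the first t observations
    prev = np.zeros(n + 1, dtype=int)           # start index of the last segment ending at t
    for t in range(1, n + 1):
        best, arg = np.inf, 0
        for s in range(t):                      # candidate segment y[s:t], decaying from its start
            seg, g = y[s:t], gam ** np.arange(t - s)
            alpha = (seg * g).sum() / (g * g).sum()            # best-fitting initial calcium level
            cost = 0.5 * ((seg - alpha * g) ** 2).sum()
            if F[s] + lam + cost < best:
                best, arg = F[s] + lam + cost, s
        F[t], prev[t] = best, arg
    spikes, t = [], n                            # trace back; each nonzero segment start is a spike
    while t > 0:
        if prev[t] > 0:
            spikes.append(prev[t])
        t = prev[t]
    return sorted(spikes)

# Toy trace: decaying calcium with spikes at known bins plus observation noise
rng = np.random.default_rng(5)
gam, c, y, true_spikes = 0.95, 0.0, [], [40, 120, 121, 300]
for t in range(400):
    c = gam * c + (1.0 if t in true_spikes else 0.0)
    y.append(c + 0.1 * rng.normal())
print("estimated spike bins:", l0_spike_inference(np.asarray(y), gam, lam=0.5))
```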
Affiliation(s)
- Sean Jewell
- Department of Statistics, University of Washington, Seattle, Washington 98195, USA
- Daniela Witten
- Departments of Statistics and Biostatistics, University of Washington, Seattle, Washington 98195, USA
10
Abstract
Generalized linear models (GLMs) have a wide range of applications in systems neuroscience, describing the encoding of stimulus and behavioral variables as well as the dynamics of single neurons. However, in any given experiment, many variables that have an impact on neural activity are not observed or not modeled. Here we demonstrate, in both theory and practice, how these omitted variables can result in biased parameter estimates for the effects that are included. In three case studies, we estimate tuning functions for common experiments in motor cortex, hippocampus, and visual cortex. We find that including traditionally omitted variables changes estimates of the original parameters and that modulation originally attributed to one variable is reduced after new variables are included. In GLMs describing single-neuron dynamics, we then demonstrate how postspike history effects can also be biased by omitted variables. Here we find that omitted variable bias can lead to mistaken conclusions about the stability of single-neuron firing. Omitted variable bias can appear in any model with confounders, that is, where omitted variables modulate neural activity and their effects covary with the included effects. Understanding how and to what extent omitted variable bias affects parameter estimates is likely to be important for interpreting the parameters and predictions of many neural encoding models.
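The bias can be reproduced in a few lines. The sketch below simulates a Poisson GLM whose rate depends on two correlated covariates and fits it once with both covariates and once with one omitted; the covariate names and coefficients are made up for illustration.

```python
# Omitted variable bias in a Poisson GLM with two correlated covariates.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(6)
n = 50000
speed = rng.normal(0, 1, n)                          # included covariate
position = 0.7 * speed + rng.normal(0, 1, n)         # omitted covariate, correlated with speed
spikes = rng.poisson(np.exp(-1.0 + 0.3 * speed + 0.5 * position))

full = PoissonRegressor(alpha=0.0).fit(np.column_stack([speed, position]), spikes)
reduced = PoissonRegressor(alpha=0.0).fit(speed[:, None], spikes)
print("true speed effect: 0.30")
print("estimate with position included:", round(full.coef_[0], 2))
print("estimate with position omitted: ", round(reduced.coef_[0], 2))   # inflated by the confound
```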
Affiliation(s)
- Ian H Stevenson
- Department of Psychological Sciences, Department of Biomedical Engineering, and CT Institute for Brain and Cognitive Sciences, University of Connecticut, Storrs, CT 06269, U.S.A.
11
Ghanbari A, Malyshev A, Volgushev M, Stevenson IH. Estimating short-term synaptic plasticity from pre- and postsynaptic spiking. PLoS Comput Biol 2017; 13:e1005738. PMID: 28873406; PMCID: PMC5600391; DOI: 10.1371/journal.pcbi.1005738.
Abstract
Short-term synaptic plasticity (STP) critically affects the processing of information in neuronal circuits by reversibly changing the effective strength of connections between neurons on time scales from milliseconds to a few seconds. STP is traditionally studied using intracellular recordings of postsynaptic potentials or currents evoked by presynaptic spikes. However, STP also affects the statistics of postsynaptic spikes. Here we present two model-based approaches for estimating synaptic weights and short-term plasticity from pre- and postsynaptic spike observations alone. We extend a generalized linear model (GLM) that predicts postsynaptic spiking as a function of the observed pre- and postsynaptic spikes and allow the connection strength (coupling term in the GLM) to vary as a function of time based on the history of presynaptic spikes. Our first model assumes that STP follows a Tsodyks-Markram description of vesicle depletion and recovery. In a second model, we introduce a functional description of STP where we estimate the coupling term as a biophysically unrestrained function of the presynaptic inter-spike intervals. To validate the models, we test the accuracy of STP estimation using the spiking of pre- and postsynaptic neurons with known synaptic dynamics. We first test our models using the responses of layer 2/3 pyramidal neurons to simulated presynaptic input with different types of STP, and then use simulated spike trains to examine the effects of spike-frequency adaptation, stochastic vesicle release, spike sorting errors, and common input. We find that, using only spike observations, both model-based methods can accurately reconstruct the time-varying synaptic weights of presynaptic inputs for different types of STP. Our models also capture the differences in postsynaptic spike responses to presynaptic spikes following short vs long inter-spike intervals, similar to results reported for thalamocortical connections. These models may thus be useful tools for characterizing short-term plasticity from multi-electrode spike recordings in vivo.
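The first model's synaptic dynamics follow the Tsodyks-Markram description of depletion and recovery. A small sketch of one common form of those dynamics is shown below, with illustrative (not fitted) parameters, tracking how the effective amplitude of successive presynaptic spikes evolves.

```python
# Tsodyks-Markram-style short-term plasticity: resources R deplete and recover (tau_d),
# utilization u facilitates at spikes and relaxes to its baseline U (tau_f).
import numpy as np

def tm_amplitudes(spike_times, U=0.4, tau_d=0.2, tau_f=0.1, A=1.0):
    amps, u, R, last = [], U, 1.0, None
    for t in spike_times:
        if last is not None:
            dt = t - last
            R = 1.0 - (1.0 - R) * np.exp(-dt / tau_d)   # resources recover toward 1
            u = U + (u - U) * np.exp(-dt / tau_f)       # utilization relaxes toward baseline U
        u = u + U * (1.0 - u)                           # facilitation at the spike
        amps.append(A * u * R)                          # effective PSC amplitude
        R = R * (1.0 - u)                               # depletion by the release
        last = t
    return np.asarray(amps)

# A 50 Hz burst of five spikes shows depression: later amplitudes are weaker
print(tm_amplitudes(np.arange(5) * 0.02).round(3))
```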
Affiliation(s)
- Abed Ghanbari
- Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut, United States of America
- Aleksey Malyshev
- Institute of Higher Nervous Activity and Neurophysiology, Russian Academy of Science, Moscow, Russia
- Maxim Volgushev
- Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut, United States of America
- Ian H. Stevenson
- Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut, United States of America
- Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut, United States of America
12
Puelma Touzel M, Wolf F. Complete Firing-Rate Response of Neurons with Complex Intrinsic Dynamics. PLoS Comput Biol 2015; 11:e1004636. PMID: 26720924; PMCID: PMC4697854; DOI: 10.1371/journal.pcbi.1004636.
Abstract
The response of a neuronal population over a space of inputs depends on the intrinsic properties of its constituent neurons. Two main modes of single neuron dynamics–integration and resonance–have been distinguished. While resonator cell types exist in a variety of brain areas, few models incorporate this feature and fewer have investigated its effects. To understand better how a resonator’s frequency preference emerges from its intrinsic dynamics and contributes to its local area’s population firing rate dynamics, we analyze the dynamic gain of an analytically solvable two-degree of freedom neuron model. In the Fokker-Planck approach, the dynamic gain is intractable. The alternative Gauss-Rice approach lifts the resetting of the voltage after a spike. This allows us to derive a complete expression for the dynamic gain of a resonator neuron model in terms of a cascade of filters on the input. We find six distinct response types and use them to fully characterize the routes to resonance across all values of the relevant timescales. We find that resonance arises primarily due to slow adaptation with an intrinsic frequency acting to sharpen and adjust the location of the resonant peak. We determine the parameter regions for the existence of an intrinsic frequency and for subthreshold and spiking resonance, finding all possible intersections of the three. The expressions and analysis presented here provide an account of how intrinsic neuron dynamics shape dynamic population response properties and can facilitate the construction of an exact theory of correlations and stability of population activity in networks containing populations of resonator neurons. Dynamic gain, the amount by which features at specific frequencies in the input to a neuron are amplified or attenuated in its output spiking, is fundamental for the encoding of information by neural populations. Most studies of dynamic gain have focused on neurons without intrinsic degrees of freedom exhibiting integrator-type subthreshold dynamics. Many neuron types in the brain, however, exhibit complex subthreshold dynamics such as resonance, found for instance in cortical interneurons, stellate cells, and mitral cells. A resonator neuron has at least two degrees of freedom for which the classical Fokker-Planck approach to calculating the dynamic gain is largely intractable. Here, we lift the voltage-reset rule after a spike, allowing us to derive a complete expression of the dynamic gain of a resonator neuron model. We find the gain can exhibit only six shapes. The resonant ones have peaks that become large due to intrinsic adaptation and become sharp due to an intrinsic frequency. A resonance can nevertheless result from either property. The analysis presented here helps explain how intrinsic neuron dynamics shape population-level response properties and provides a powerful tool for developing theories of inter-neuron correlations and dynamic responses of neural populations.
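For reference, dynamic gain in the standard linear-response sense is the frequency-resolved ratio between the modulation of the firing rate and a small sinusoidal modulation of the input; the paper derives closed-form, filter-cascade expressions for this quantity within the Gauss-Rice approach.

```latex
I(t) = I_0 + \varepsilon \cos(2\pi f t) + \xi(t)
\;\;\Longrightarrow\;\;
r(t) \approx r_0 + \varepsilon\,\lvert G(f)\rvert \cos\bigl(2\pi f t + \phi(f)\bigr),
\qquad
G(f) = \frac{\hat{r}_1(f)}{\varepsilon},
```

where \hat{r}_1(f) is the complex amplitude of the firing-rate modulation at the drive frequency.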
Affiliation(s)
- Maximilian Puelma Touzel
- Department for Nonlinear Dynamics, Max Planck Institute for Dynamics and Self-Organization, Goettingen, Germany
- Bernstein Center for Computational Neuroscience, Goettingen, Germany
- Institute for Nonlinear Dynamics, Georg-August University School of Science, Goettingen, Germany
- Fred Wolf
- Department for Nonlinear Dynamics, Max Planck Institute for Dynamics and Self-Organization, Goettingen, Germany
- Bernstein Center for Computational Neuroscience, Goettingen, Germany
- Institute for Nonlinear Dynamics, Georg-August University School of Science, Goettingen, Germany
- Kavli Institute for Theoretical Physics, University of California Santa Barbara, Santa Barbara, California, United States of America
13
Soudry D, Keshri S, Stinson P, Oh MH, Iyengar G, Paninski L. Efficient "Shotgun" Inference of Neural Connectivity from Highly Sub-sampled Activity Data. PLoS Comput Biol 2015; 11:e1004464. PMID: 26465147; PMCID: PMC4605541; DOI: 10.1371/journal.pcbi.1004464.
Abstract
Inferring connectivity in neuronal networks remains a key challenge in statistical neuroscience. The “common input” problem presents a major roadblock: it is difficult to reliably distinguish causal connections between pairs of observed neurons versus correlations induced by common input from unobserved neurons. Available techniques allow us to simultaneously record, with sufficient temporal resolution, only a small fraction of the network. Consequently, naive connectivity estimators that neglect these common input effects are highly biased. This work proposes a “shotgun” experimental design, in which we observe multiple sub-networks briefly, in a serial manner. Thus, while the full network cannot be observed simultaneously at any given time, we may be able to observe much larger subsets of the network over the course of the entire experiment, thus ameliorating the common input problem. Using a generalized linear model for a spiking recurrent neural network, we develop a scalable approximate expected loglikelihood-based Bayesian method to perform network inference given this type of data, in which only a small fraction of the network is observed in each time bin. We demonstrate in simulation that the shotgun experimental design can eliminate the biases induced by common input effects. Networks with thousands of neurons, in which only a small fraction of the neurons is observed in each time bin, can be quickly and accurately estimated, achieving orders of magnitude speed up over previous approaches. Optical imaging of the activity in a neuronal network is limited by the scanning speed of the imaging device. Therefore, typically, only a small fixed part of the network is observed during the entire experiment. However, in such an experiment, it can be hard to infer from the observed activity patterns whether (1) a neuron A directly affects neuron B, or (2) another, unobserved neuron C affects both A and B. To deal with this issue, we propose a “shotgun” observation scheme, in which, at each time point, we observe a small changing subset of the neurons from the network. Consequently, many fewer neurons remain completely unobserved during the entire experiment, enabling us to eventually distinguish between cases (1) and (2) given sufficiently long experiments. Since previous inference algorithms cannot efficiently handle so many missing observations, we develop a scalable algorithm for data acquired using the shotgun observation scheme, in which only a small fraction of the neurons are observed in each time bin. Using this kind of simulated data, we show the algorithm is able to quickly infer connectivity in spiking recurrent networks with thousands of neurons.
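The observation scheme can be caricatured with linear-Gaussian dynamics in place of the paper's spiking GLM and Bayesian estimator: a different random subset of units is observed in each time bin, lagged and instantaneous covariances are accumulated from co-observed samples only, and the coupling matrix is recovered from those statistics. All sizes and weights below are assumptions.

```python
# "Shotgun"-style estimation from partial observations, linear-Gaussian caricature.
import numpy as np

rng = np.random.default_rng(7)
N, T, n_obs = 100, 50000, 20
W = (rng.random((N, N)) < 0.1) * rng.normal(0.08, 0.02, (N, N))
np.fill_diagonal(W, 0.0)
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()                 # keep dynamics stable

x = np.zeros(N)
S1, n1 = np.zeros((N, N)), np.zeros((N, N))                   # lag-1 sums / sample counts
S0, n0 = np.zeros((N, N)), np.zeros((N, N))                   # lag-0 sums / sample counts
for _ in range(T):
    x_new = W @ x + rng.normal(0.0, 1.0, N)
    a = rng.choice(N, n_obs, replace=False)                   # units observed at time t
    b = rng.choice(N, n_obs, replace=False)                   # units observed at time t+1
    S1[np.ix_(b, a)] += np.outer(x_new[b], x[a]); n1[np.ix_(b, a)] += 1
    S0[np.ix_(a, a)] += np.outer(x[a], x[a]);     n0[np.ix_(a, a)] += 1
    x = x_new

C1 = S1 / np.maximum(n1, 1)                                   # E[x_{t+1} x_t^T] estimate
C0 = S0 / np.maximum(n0, 1)                                   # E[x_t x_t^T] estimate
W_hat = C1 @ np.linalg.inv(C0)                                # since E[x_{t+1} x_t^T] = W E[x_t x_t^T]

off = ~np.eye(N, dtype=bool)
print("corr(W_hat, W) over all pairs:", round(np.corrcoef(W_hat[off], W[off])[0, 1], 2))
```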
Affiliation(s)
- Daniel Soudry
- Department of Statistics, Department of Neuroscience, the Center for Theoretical Neuroscience, the Grossman Center for the Statistics of Mind, the Kavli Institute for Brain Science, and the NeuroTechnology Center, Columbia University, New York, New York, United States of America
- Suraj Keshri
- Department of Industrial Engineering and Operations Research, Columbia University, New York, New York, United States of America
- Patrick Stinson
- Department of Statistics, Department of Neuroscience, the Center for Theoretical Neuroscience, the Grossman Center for the Statistics of Mind, the Kavli Institute for Brain Science, and the NeuroTechnology Center, Columbia University, New York, New York, United States of America
- Min-Hwan Oh
- Department of Industrial Engineering and Operations Research, Columbia University, New York, New York, United States of America
- Garud Iyengar
- Department of Industrial Engineering and Operations Research, Columbia University, New York, New York, United States of America
- Liam Paninski
- Department of Statistics, Department of Neuroscience, the Center for Theoretical Neuroscience, the Grossman Center for the Statistics of Mind, the Kavli Institute for Brain Science, and the NeuroTechnology Center, Columbia University, New York, New York, United States of America
14
Ilin V, Stevenson IH, Volgushev M. Injection of fully-defined signal mixtures: a novel high-throughput tool to study neuronal encoding and computations. PLoS One 2014; 9:e109928. PMID: 25335081; PMCID: PMC4204817; DOI: 10.1371/journal.pone.0109928.
Abstract
Understanding how neurons transform fluctuations of membrane potential, reflecting input activity, into spike responses, which communicate the ultimate results of single-neuron computation, is one of the central challenges for cellular and computational neuroscience. To study this transformation under controlled conditions, previous work has used a signal-immersed-in-noise paradigm, in which neurons are injected with a current consisting of fluctuating noise that mimics ongoing synaptic activity and a systematic signal whose transmission is studied. One limitation of this established paradigm is that it is designed to examine the encoding of only one signal under a specific, repeated condition. As a result, characterizing how encoding depends on neuronal properties, signal parameters, and the interaction of multiple inputs is cumbersome. Here we introduce a novel fully-defined signal mixture paradigm, which allows us to overcome these problems. In this paradigm, the current for injection is synthesized as a sum of artificial postsynaptic currents (PSCs) resulting from the activity of a large population of model presynaptic neurons. PSCs from any presynaptic neuron(s) can now be considered as "signal", while the sum of all other inputs is considered as "noise". This allows us to study the encoding of a large number of different signals in a single experiment, thus dramatically increasing the throughput of data acquisition. Using this novel paradigm, we characterize the detection of excitatory and inhibitory PSCs from neuronal spike responses over a wide range of amplitudes and firing rates. We show that, for moderately sized neuronal populations, the detectability of individual inputs is higher for excitatory than for inhibitory inputs during the 2-5 ms following PSC onset, but becomes comparable after 7-8 ms. This transient imbalance of sensitivity in favor of excitation may enhance propagation of balanced signals through neuronal networks. Finally, we discuss several open questions that this novel high-throughput paradigm may address.
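A sketch of the synthesis step described above: Poisson spike trains from many model presynaptic neurons are convolved with a PSC kernel and summed into a single current for injection, with the contribution of any chosen presynaptic neuron serving as the "signal" and the remainder as "noise". Kernel shape, rates, and weights are placeholders, not the values used in the paper.

```python
# Fully-defined signal mixture: summed PSC traces from many model presynaptic neurons.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(8)
dt, T, n_pre = 1e-4, 10.0, 400                       # 10 s of current sampled at 10 kHz
n = int(T / dt)
rates = rng.uniform(1.0, 10.0, n_pre)                # presynaptic firing rates (Hz)
weights = np.where(rng.random(n_pre) < 0.8, 1.0, -2.0)   # 80% excitatory, 20% inhibitory

t_k = np.arange(0, 0.05, dt)
psc = np.exp(-t_k / 0.005) - np.exp(-t_k / 0.001)    # double-exponential PSC kernel
psc /= psc.max()

current, signal, signal_idx = np.zeros(n), np.zeros(n), 0
for j in range(n_pre):
    spikes = (rng.random(n) < rates[j] * dt).astype(float)   # Bernoulli approximation of Poisson
    trace = weights[j] * fftconvolve(spikes, psc)[:n]
    current += trace
    if j == signal_idx:                              # this neuron's PSCs are the "signal"
        signal = trace

print(f"injected current: mean {current.mean():.2f}, sd {current.std():.2f} (a.u.); "
      f"signal sd {signal.std():.2f}")
```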
Affiliation(s)
- Vladimir Ilin
- Department of Psychology, University of Connecticut, Storrs, Connecticut, United States of America
- Ian H. Stevenson
- Department of Psychology, University of Connecticut, Storrs, Connecticut, United States of America
- Maxim Volgushev
- Department of Psychology, University of Connecticut, Storrs, Connecticut, United States of America