1. Fitz H, Hagoort P, Petersson KM. Neurobiological Causal Models of Language Processing. Neurobiology of Language (Cambridge, Mass.) 2024; 5:225-247. [PMID: 38645618] [PMCID: PMC11025648] [DOI: 10.1162/nol_a_00133]
Abstract
The language faculty is physically realized in the neurobiological infrastructure of the human brain. Despite significant efforts, an integrated understanding of this system remains a formidable challenge. What is missing from most theoretical accounts is a specification of the neural mechanisms that implement language function. Computational models that have been put forward generally lack an explicit neurobiological foundation. We propose a neurobiologically informed causal modeling approach which offers a framework for how to bridge this gap. A neurobiological causal model is a mechanistic description of language processing that is grounded in, and constrained by, the characteristics of the neurobiological substrate. It intends to model the generators of language behavior at the level of implementational causality. We describe key features and neurobiological component parts from which causal models can be built and provide guidelines on how to implement them in model simulations. Then we outline how this approach can shed new light on the core computational machinery for language, the long-term storage of words in the mental lexicon and combinatorial processing in sentence comprehension. In contrast to cognitive theories of behavior, causal models are formulated in the "machine language" of neurobiology which is universal to human cognition. We argue that neurobiological causal modeling should be pursued in addition to existing approaches. Eventually, this approach will allow us to develop an explicit computational neurobiology of language.
Affiliation(s)
- Hartmut Fitz
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Peter Hagoort
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Karl Magnus Petersson
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Faculty of Medicine and Biomedical Sciences, University of Algarve, Faro, Portugal
2. AbdelAty AM, Fouda ME, Eltawil A. Parameter Estimation of Two Spiking Neuron Models With Meta-Heuristic Optimization Algorithms. Front Neuroinform 2022; 16:771730. [PMID: 35250525] [PMCID: PMC8888432] [DOI: 10.3389/fninf.2022.771730]
Abstract
The automatic fitting of spiking neuron models to experimental data is a challenging problem. The integrate-and-fire model and the Hodgkin–Huxley (HH) model represent the two complexity extremes of spiking neural models; between these extremes lie models based on two or three differential equations. In this work, we investigate parameter estimation for two simple neuron models with a sharp reset, fitting the spike timing of electrophysiological recordings under two problem formulations. Five optimization algorithms are investigated, three of which have not been applied to this problem before. The new algorithms improve fitting over the previously used ones in both problem formulations, with gains in the fitness function of 5 to 8%, and are also more consistent across independent trials. Furthermore, a new problem formulation is investigated that uses fewer search-space variables than those reported in the related literature.
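The fitting setup described above can be sketched in a few lines: simulate a reset-based neuron, then score candidate parameters by how well model spike times match recorded ones. This is an illustrative sketch, not the authors' code; the LIF constants, the coincidence window, and the fitness definition are all assumptions.

```python
import numpy as np

def lif_spikes(I, dt=0.1, tau=20.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
    """Forward-Euler leaky integrate-and-fire neuron with a sharp reset.

    I is the input current trace (one value per time step, arbitrary units);
    returns spike times in ms. All constants are illustrative defaults."""
    v = v_rest
    spikes = []
    for k, i_k in enumerate(I):
        v += dt / tau * (-(v - v_rest) + i_k)
        if v >= v_thresh:
            spikes.append(k * dt)
            v = v_reset            # sharp reset after each spike
    return np.array(spikes)

def coincidence_fitness(model_spikes, data_spikes, window=2.0):
    """Fraction of recorded spikes matched by a model spike within +/-window ms;
    a crude stand-in for the spike-timing objectives such studies optimize."""
    if len(data_spikes) == 0:
        return 0.0
    hits = sum(np.any(np.abs(model_spikes - t) <= window) for t in data_spikes)
    return hits / len(data_spikes)

# Regular spiking to a constant suprathreshold current (500 ms at dt = 0.1 ms).
spikes = lif_spikes(np.full(5000, 20.0))
```

A metaheuristic optimizer would then search the parameter space to maximize `coincidence_fitness` against recorded spike times.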
Affiliation(s)
- Amr M. AbdelAty
- Engineering Mathematics and Physics Department, Faculty of Engineering, Fayoum University, Faiyum, Egypt
- Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia
- Mohammed E. Fouda
- Center for Embedded & Cyber-Physical Systems, University of California, Irvine, Irvine, CA, United States
- Nanoelectronics Integrated Systems Center (NISC), Nile University, Giza, Egypt
- Correspondence: Mohammed E. Fouda
- Ahmed Eltawil
- Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia
3. Huang C, Zeldenrust F, Celikel T. Cortical Representation of Touch in Silico. Neuroinformatics 2022; 20:1013-1039. [PMID: 35486347] [PMCID: PMC9588483] [DOI: 10.1007/s12021-022-09576-5]
Abstract
With its six layers and ~12,000 neurons, a cortical column is a complex network whose function is plausibly greater than the sum of its constituents'. Functional characterization of its network components will require going beyond the brute-force modulation of the neural activity of a small group of neurons. Here we introduce an open-source, biologically inspired, computationally efficient network model of the granular and supragranular layers of the somatosensory cortex, based on a reconstruction of the barrel cortex at soma resolution. Comparisons of the network activity to empirical observations showed that the in silico network replicates the known properties of touch representations and the whisker deprivation-induced changes in synaptic strength observed in vivo. Simulations show that the history of the membrane potential acts as a spatial filter that determines the presynaptic population of neurons contributing to a postsynaptic action potential; this spatial filtering might be critical for the synaptic integration of top-down and bottom-up information.
Affiliation(s)
- Chao Huang
- Department of Biology, University of Leipzig, Leipzig, Germany
- Fleur Zeldenrust
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, the Netherlands
- Tansu Celikel
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, the Netherlands
- School of Psychology, Georgia Institute of Technology, Atlanta, GA, USA
4. Oesterle J, Behrens C, Schröder C, Hermann T, Euler T, Franke K, Smith RG, Zeck G, Berens P. Bayesian inference for biophysical neuron models enables stimulus optimization for retinal neuroprosthetics. eLife 2020; 9:e54997. [PMID: 33107821] [PMCID: PMC7673784] [DOI: 10.7554/elife.54997]
Abstract
While multicompartment models have long been used to study the biophysics of neurons, it remains challenging to infer the parameters of such models from data, including uncertainty estimates. Here, we performed Bayesian inference for the parameters of detailed neuron models of a photoreceptor and of an OFF- and an ON-cone bipolar cell from the mouse retina, based on two-photon imaging data. We obtained multivariate posterior distributions specifying plausible parameter ranges consistent with the data and allowing us to identify parameters that are poorly constrained by the data. To demonstrate the potential of such mechanistic data-driven neuron models, we created a simulation environment for external electrical stimulation of the retina and optimized stimulus waveforms to target OFF- and ON-cone bipolar cells, a major open problem in retinal neuroprosthetics.
Affiliation(s)
- Jonathan Oesterle
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Christian Behrens
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Cornelius Schröder
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Thoralf Hermann
- Naturwissenschaftliches und Medizinisches Institut an der Universität Tübingen, Reutlingen, Germany
- Thomas Euler
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Center for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Bernstein Center for Computational Neuroscience, University of Tübingen, Tübingen, Germany
- Katrin Franke
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Bernstein Center for Computational Neuroscience, University of Tübingen, Tübingen, Germany
- Robert G Smith
- Department of Neuroscience, University of Pennsylvania, Philadelphia, United States
- Günther Zeck
- Naturwissenschaftliches und Medizinisches Institut an der Universität Tübingen, Reutlingen, Germany
- Philipp Berens
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Center for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Bernstein Center for Computational Neuroscience, University of Tübingen, Tübingen, Germany
- Institute for Bioinformatics and Medical Informatics, University of Tübingen, Tübingen, Germany
5. Gonçalves PJ, Lueckmann JM, Deistler M, Nonnenmacher M, Öcal K, Bassetto G, Chintaluri C, Podlaski WF, Haddad SA, Vogels TP, Greenberg DS, Macke JH. Training deep neural density estimators to identify mechanistic models of neural dynamics. eLife 2020; 9:e56261. [PMID: 32940606] [PMCID: PMC7581433] [DOI: 10.7554/elife.56261]
Abstract
Mechanistic modeling in neuroscience aims to explain observed phenomena in terms of underlying causes. However, determining which model parameters agree with complex and stochastic neural data presents a significant challenge. We address this challenge with a machine learning tool that uses deep neural density estimators, trained on model simulations, to carry out Bayesian inference and retrieve the full space of parameters compatible with raw data or selected data features. Our method is scalable in the number of parameters and data features and can rapidly analyze new data after initial training. We demonstrate the power and flexibility of our approach on receptive fields, ion channels, and Hodgkin-Huxley models. We also characterize the space of circuit configurations giving rise to rhythmic activity in the crustacean stomatogastric ganglion, and use these results to derive hypotheses about underlying compensation mechanisms. Our approach will help close the gap between data-driven and theory-driven models of neural dynamics.
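Neural density estimators replace the accept/reject step of classical simulation-based inference. The sketch below conveys the underlying idea with plain rejection ABC on a toy one-parameter simulator; the simulator, the prior, and the tolerance are illustrative assumptions, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta):
    """Toy mechanistic model: a noisy summary statistic (think 'firing rate')
    as a function of a single parameter. Stand-in for a costly simulation."""
    return 10.0 * theta + rng.normal(0.0, 1.0)

# 'Observed' summary statistic, generated from a known ground-truth parameter.
theta_true = 2.0
x_obs = 10.0 * theta_true

# Rejection ABC: draw from the prior, simulate, keep the parameters whose
# output lands near the observation. Trained density estimators (as in the
# paper) replace this wasteful accept/reject step and amortize over new data.
prior_draws = rng.uniform(0.0, 5.0, size=20000)
sims = np.array([simulator(t) for t in prior_draws])
accepted = prior_draws[np.abs(sims - x_obs) < 1.0]
posterior_mean = float(accepted.mean())
```

The accepted samples approximate the posterior over the parameter; here their mean recovers the ground-truth value.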
Affiliation(s)
- Pedro J Gonçalves
- Computational Neuroengineering, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany
- Max Planck Research Group Neural Systems Analysis, Center of Advanced European Studies and Research (caesar), Bonn, Germany
- Jan-Matthis Lueckmann
- Computational Neuroengineering, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany
- Max Planck Research Group Neural Systems Analysis, Center of Advanced European Studies and Research (caesar), Bonn, Germany
- Michael Deistler
- Computational Neuroengineering, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany
- Machine Learning in Science, Excellence Cluster Machine Learning, Tübingen University, Tübingen, Germany
- Marcel Nonnenmacher
- Computational Neuroengineering, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany
- Max Planck Research Group Neural Systems Analysis, Center of Advanced European Studies and Research (caesar), Bonn, Germany
- Model-Driven Machine Learning, Institute of Coastal Research, Helmholtz Centre Geesthacht, Geesthacht, Germany
- Kaan Öcal
- Max Planck Research Group Neural Systems Analysis, Center of Advanced European Studies and Research (caesar), Bonn, Germany
- Mathematical Institute, University of Bonn, Bonn, Germany
- Giacomo Bassetto
- Computational Neuroengineering, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany
- Max Planck Research Group Neural Systems Analysis, Center of Advanced European Studies and Research (caesar), Bonn, Germany
- Chaitanya Chintaluri
- Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, United Kingdom
- Institute of Science and Technology Austria, Klosterneuburg, Austria
- William F Podlaski
- Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, United Kingdom
- Sara A Haddad
- Max Planck Institute for Brain Research, Frankfurt, Germany
- Tim P Vogels
- Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, United Kingdom
- Institute of Science and Technology Austria, Klosterneuburg, Austria
- David S Greenberg
- Computational Neuroengineering, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany
- Model-Driven Machine Learning, Institute of Coastal Research, Helmholtz Centre Geesthacht, Geesthacht, Germany
- Jakob H Macke
- Computational Neuroengineering, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany
- Max Planck Research Group Neural Systems Analysis, Center of Advanced European Studies and Research (caesar), Bonn, Germany
- Machine Learning in Science, Excellence Cluster Machine Learning, Tübingen University, Tübingen, Germany
- Max Planck Institute for Intelligent Systems, Tübingen, Germany
6. Inferring and validating mechanistic models of neural microcircuits based on spike-train data. Nat Commun 2019; 10:4933. [PMID: 31666513] [PMCID: PMC6821748] [DOI: 10.1038/s41467-019-12572-0]
Abstract
The interpretation of neuronal spike train recordings often relies on abstract statistical models that allow for principled parameter estimation and model selection but provide only limited insights into underlying microcircuits. In contrast, mechanistic models are useful for interpreting microcircuit dynamics, but are rarely quantitatively matched to experimental data due to methodological challenges. Here we present analytical methods to efficiently fit spiking circuit models to single-trial spike trains. Using derived likelihood functions, we statistically infer the mean and variance of hidden inputs, neuronal adaptation properties and connectivity for coupled integrate-and-fire neurons. Comprehensive evaluations on synthetic data, validations using ground truth in-vitro and in-vivo recordings, and comparisons with existing techniques demonstrate that parameter estimation is very accurate and efficient, even for highly subsampled networks. Our methods bridge statistical, data-driven and theoretical, model-based neurosciences at the level of spiking circuits, for the purpose of a quantitative, mechanistic interpretation of recorded neuronal population activity.

It is difficult to fit mechanistic, biophysically constrained circuit models to spike train data from in vivo extracellular recordings. Here the authors present analytical methods that enable efficient parameter estimation for integrate-and-fire circuit models and inference of the underlying connectivity structure in subsampled networks.
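A minimal illustration of likelihood-based fitting of spiking models: under a soft-threshold ("escape noise") integrate-and-fire assumption, a spike train has a tractable point-process likelihood that can be maximized directly. The model, parameter values, and grid search below are illustrative stand-ins for the derived likelihoods in the paper.

```python
import numpy as np

def escape_rate_loglik(spike_idx, v, theta, dt=0.1):
    """Point-process log-likelihood of a spike train under a soft-threshold
    ('escape noise') neuron: instantaneous rate lam(t) = exp(v(t) - theta).
    log L = sum over spikes of log(lam) minus the integral of lam over time."""
    lam = np.exp(v - theta)
    return np.sum(np.log(lam[spike_idx])) - np.sum(lam) * dt

# Synthetic data: a Gaussian 'membrane potential' trace and spikes drawn
# with a known threshold theta_true.
rng = np.random.default_rng(1)
v = rng.normal(-60.0, 2.0, size=10000)
theta_true = -55.0
dt = 0.1
p_spike = 1.0 - np.exp(-np.exp(v - theta_true) * dt)
spike_idx = np.where(rng.random(v.size) < p_spike)[0]

# Maximum-likelihood recovery of the threshold by grid search.
grid = np.linspace(-60.0, -50.0, 101)
ll = [escape_rate_loglik(spike_idx, v, th) for th in grid]
theta_hat = float(grid[int(np.argmax(ll))])
```

Because the likelihood is analytical, the hidden threshold is recovered from the spike train alone, without simulating candidate models.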
7. Valadez-Godínez S, Sossa H, Santiago-Montero R. On the accuracy and computational cost of spiking neuron implementation. Neural Netw 2019; 122:196-217. [PMID: 31689679] [DOI: 10.1016/j.neunet.2019.09.026]
Abstract
For more than a decade, three statements about spiking neuron (SN) implementations have been widely accepted: 1) the Hodgkin and Huxley (HH) model is computationally prohibitive, 2) the Izhikevich (IZH) artificial neuron is as efficient as the Leaky Integrate-and-Fire (LIF) model, and 3) the IZH model is more efficient than the HH model (Izhikevich, 2004). As suggested by Hodgkin and Huxley (1952), their model operates in two modes: by using the α and β rate functions directly (HH model) or by storing them in tables (HHT model) for computational cost reduction. Recently, it has been stated that: 1) the HHT model (HH using tables) is not prohibitive, 2) the IZH model is not efficient, and 3) the HHT and IZH models are comparable in computational cost (Skocik & Long, 2014). This controversy shows that there is no consensus concerning SN simulation capacities. Hence, in this work, we introduce a refined approach, based on multiobjective optimization theory, for describing SN simulation capacities and ultimately choosing optimal simulation parameters. We used normalized metrics to define the capacity levels of accuracy, computational cost, and efficiency; normalized metrics allow comparisons between SNs at the same level or scale. We conducted tests for balanced, lower, and upper boundary conditions under a regular spiking mode with constant and random current stimuli. We found optimal simulation parameters leading to a balance between computational cost and accuracy. Importantly, and in general, we found that 1) the HH model (without tables) is the most accurate, computationally inexpensive, and efficient, 2) the IZH model is the most expensive and inefficient, 3) the LIF and HHT models are the most inaccurate, 4) the HHT model is more expensive and inaccurate than the HH model due to the discretization of the α and β tables, and 5) the HHT model is not comparable in computational cost to the IZH model.

These results refute the theory formulated over a decade ago (Izhikevich, 2004) and go deeper into the statements formulated by Skocik and Long (2014). Our findings imply that the number of dimensions or FLOPS of an SN is a theoretical but not a practical indicator of the true computational cost. The metric we propose for computational cost is more precise than FLOPS and was found to be invariant to computer architecture. Moreover, we found that the firing frequency used in previous works is a necessary but insufficient metric for evaluating simulation accuracy. We also show that our results are consistent with the theory of numerical methods and the theory of SN discontinuity. Discontinuous SNs, such as the LIF and IZH models, introduce a considerable error every time a spike is generated. In addition, compared to a constant input current, a random input current increases the computational cost and the inaccuracy. We also found that the search for optimal simulation parameters is problem-specific. This is important because most previous works have tried to find a general, unique optimal simulation; we show that such a solution may not exist because this is a multiobjective optimization problem that depends on several factors. This work sets up a renewed thesis concerning SN simulation that is useful to several related research areas, including the emerging field of Deep Spiking Neural Networks.
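The step-size dependence of discontinuous models discussed above is easy to reproduce: a forward-Euler Izhikevich simulation gives different spike counts at different integration steps because each reset truncates the trajectory. The parameters below are the standard regular-spiking preset; the input current and duration are illustrative choices.

```python
import numpy as np

def izhikevich_spike_count(I=10.0, T=1000.0, dt=1.0,
                           a=0.02, b=0.2, c=-65.0, d=8.0):
    """Forward-Euler Izhikevich neuron (regular-spiking preset by default).

    Returns the number of spikes in T ms of constant input I. Because the
    spike reset is discontinuous, the count depends on the step size dt."""
    v, u = -65.0, b * -65.0
    n_spikes = 0
    for _ in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:              # spike: apply the discontinuous reset
            v, u = c, u + d
            n_spikes += 1
    return n_spikes

coarse = izhikevich_spike_count(dt=1.0)   # cheap but less accurate
fine = izhikevich_spike_count(dt=0.1)     # 10x the cost per simulated second
```

Comparing `coarse` and `fine` against wall-clock time per step is the kind of cost-versus-accuracy trade-off the paper formalizes with normalized metrics.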
Affiliation(s)
- Sergio Valadez-Godínez
- Laboratorio de Robótica y Mecatrónica, Centro de Investigación en Computación, Instituto Politécnico Nacional, Av. Juan de Dios Bátiz, S/N, Col. Nva. Industrial Vallejo, Ciudad de México, México, 07738, Mexico; División de Ingeniería Informática, Instituto Tecnológico Superior de Purísima del Rincón, Gto., México, 36413, Mexico; División de Ingenierías de Educación Superior, Universidad Virtual del Estado de Guanajuato, Gto., México, 36400, Mexico.
- Humberto Sossa
- Laboratorio de Robótica y Mecatrónica, Centro de Investigación en Computación, Instituto Politécnico Nacional, Av. Juan de Dios Bátiz, S/N, Col. Nva. Industrial Vallejo, Ciudad de México, México, 07738, Mexico; Tecnológico de Monterrey, Campus Guadalajara, Av. Gral. Ramón Corona 2514, Zapopan, Jal., México, 45138, Mexico.
- Raúl Santiago-Montero
- División de Estudios de Posgrado e Investigación, Instituto Tecnológico de León, Av. Tecnológico S/N, León, Gto., México, 37290, Mexico.
8. Lubejko ST, Fontaine B, Soueidan SE, MacLeod KM. Spike threshold adaptation diversifies neuronal operating modes in the auditory brain stem. J Neurophysiol 2019; 122:2576-2590. [PMID: 31577531] [DOI: 10.1152/jn.00234.2019]
Abstract
Single neurons function along a spectrum of neuronal operating modes whose properties determine how the output firing activity is generated from synaptic input. The auditory brain stem contains a diversity of neurons, from pure coincidence detectors to pure integrators and those with intermediate properties. We investigated how intrinsic spike initiation mechanisms regulate neuronal operating mode in the avian cochlear nucleus. Although the neurons in one division of the avian cochlear nucleus, nucleus magnocellularis, have been studied in depth, the spike threshold dynamics of the tonically firing neurons of a second division of cochlear nucleus, nucleus angularis (NA), remained unexplained. The input-output functions of tonically firing NA neurons were interrogated with directly injected in vivo-like current stimuli during whole cell patch-clamp recordings in vitro. Increasing the amplitude of the noise fluctuations in the current stimulus enhanced the firing rates in one subset of tonically firing neurons ("differentiators") but not another ("integrators"). We found that spike thresholds showed significantly greater adaptation and variability in the differentiator neurons. A leaky integrate-and-fire neuronal model with an adaptive spike initiation process derived from sodium channel dynamics was fit to the firing responses and could recapitulate >80% of the precise temporal firing across a range of fluctuation and mean current levels. Greater threshold adaptation explained the frequency-current curve changes due to a hyperpolarized shift in the effective adaptation voltage range and longer-lasting threshold adaptation in differentiators. The fine-tuning of the intrinsic properties of different NA neurons suggests they may have specialized roles in spectrotemporal processing.

NEW & NOTEWORTHY Avian cochlear nucleus angularis (NA) neurons are responsible for encoding sound intensity for sound localization and spectrotemporal processing. An adaptive spike threshold mechanism fine-tunes a subset of repetitive-spiking neurons in NA to confer coincidence detector-like properties. A model based on sodium channel inactivation properties reproduced the activity via a hyperpolarized shift in adaptation conferring fluctuation sensitivity.
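The effect of spike-threshold adaptation can be sketched with a leaky integrate-and-fire neuron whose threshold jumps after each spike and relaxes back; with a large jump the neuron fires far fewer spikes to the same sustained input. All units and parameter values below are illustrative, not the fitted values from the recordings.

```python
import numpy as np

def adaptive_lif_spike_count(theta_jump, I=2.0, dt=0.1, T=1000.0,
                             tau_v=20.0, tau_theta=200.0, theta0=1.0):
    """LIF neuron with an adaptive spike threshold: after each spike the
    threshold jumps by theta_jump and decays back toward theta0.
    theta_jump = 0 gives a plain (integrator-like) LIF. Arbitrary units."""
    v, theta = 0.0, theta0
    n_spikes = 0
    for _ in range(int(T / dt)):
        v += dt / tau_v * (-v + I)
        theta += dt / tau_theta * (theta0 - theta)
        if v >= theta:
            v = 0.0                 # voltage reset
            theta += theta_jump     # threshold adaptation
            n_spikes += 1
    return n_spikes

n_plain = adaptive_lif_spike_count(theta_jump=0.0)
n_adaptive = adaptive_lif_spike_count(theta_jump=2.0)   # far fewer spikes
```

Large, slowly decaying threshold jumps make the neuron sensitive to input transients rather than to the mean drive, the "differentiator"-like behavior described above.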
Affiliation(s)
- Susan T Lubejko
- Department of Biology, University of Maryland, College Park, Maryland
- Bertrand Fontaine
- Laboratory of Auditory Neurophysiology, University of Leuven, Leuven, Belgium
- Sara E Soueidan
- Department of Biology, University of Maryland, College Park, Maryland
- Katrina M MacLeod
- Department of Biology, University of Maryland, College Park, Maryland
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland
- Center for the Comparative and Evolutionary Biology of Hearing, University of Maryland, College Park, Maryland
9. Jȩdrzejewski-Szmek Z, Abrahao KP, Jȩdrzejewska-Szmek J, Lovinger DM, Blackwell KT. Parameter Optimization Using Covariance Matrix Adaptation-Evolutionary Strategy (CMA-ES), an Approach to Investigate Differences in Channel Properties Between Neuron Subtypes. Front Neuroinform 2018; 12:47. [PMID: 30108495] [PMCID: PMC6079282] [DOI: 10.3389/fninf.2018.00047]
Abstract
Computational models in neuroscience can be used to predict causal relationships between biological mechanisms in neurons and networks, such as the effect of blocking an ion channel or synaptic connection on neuron activity. Since developing a biophysically realistic, single neuron model is exceedingly difficult, software has been developed for automatically adjusting parameters of computational neuronal models. The ideal optimization software should work with commonly used neural simulation software; thus, we present software which works with models specified in declarative format for the MOOSE simulator. Experimental data can be specified using one of two different file formats. The fitness function is customizable as a weighted combination of feature differences. The optimization itself uses the covariance matrix adaptation-evolutionary strategy, because it is robust in the face of local fluctuations of the fitness function, and deals well with a high-dimensional and discontinuous fitness landscape. We demonstrate the versatility of the software by creating several model examples of each of four types of neurons (two subtypes of spiny projection neurons and two subtypes of globus pallidus neurons) by tuning to current clamp data. Optimizations reached convergence within 1,600-4,000 model evaluations (200-500 generations × population size of 8). Analysis of the parameters of the best fitting models revealed differences between neuron subtypes, which are consistent with prior experimental results. Overall our results suggest that this easy-to-use, automatic approach for finding neuron channel parameters may be applied to current clamp recordings from neurons exhibiting different biochemical markers to help characterize ionic differences between other neuron subtypes.
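CMA-ES adapts a full covariance matrix over candidate parameter vectors; the simpler (1+1) evolution strategy with a 1/5-success step-size rule conveys the same idea in a few lines. The toy quadratic fitness below stands in for the paper's weighted feature differences; every value here is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def fitness(params):
    """Toy objective: negative squared distance to a hidden 'true' parameter
    vector (stand-in for a weighted sum of feature differences between a
    simulated and a recorded current-clamp response)."""
    target = np.array([1.2, -0.7, 3.0])
    return -np.sum((params - target) ** 2)

rng = np.random.default_rng(0)
x = np.zeros(3)      # initial channel-parameter guess
sigma = 1.0          # mutation step size
best = fitness(x)
for _ in range(500):
    child = x + sigma * rng.normal(size=3)
    f = fitness(child)
    if f > best:             # (1+1)-selection: keep the child only if better
        x, best = child, f
        sigma *= 1.5         # 1/5-success rule: grow the step on success...
    else:
        sigma *= 0.9         # ...shrink it on failure
final_error = float(np.sqrt(-best))
```

CMA-ES generalizes this by mutating a whole population with an adapted covariance, which is what makes it robust on the high-dimensional, discontinuous fitness landscapes the abstract describes.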
Affiliation(s)
| | - Karina P. Abrahao
- Laboratory for Integrative Neuroscience, Section on Synaptic Pharmacology, National Institute on Alcohol Abuse and Alcoholism, National Institutes of Health, Rockville, MD, United States
| | | | - David M. Lovinger
- Laboratory for Integrative Neuroscience, Section on Synaptic Pharmacology, National Institute on Alcohol Abuse and Alcoholism, National Institutes of Health, Rockville, MD, United States
| | - Kim T. Blackwell
- Krasnow Institute of Advanced Study, George Mason University, Fairfax, VA, United States
- Department of Bioengineering, Volgenau School of Engineering, George Mason University, Fairfax, VA, United States
10. Venkadesh S, Komendantov AO, Listopad S, Scott EO, De Jong K, Krichmar JL, Ascoli GA. Evolving Simple Models of Diverse Intrinsic Dynamics in Hippocampal Neuron Types. Front Neuroinform 2018; 12:8. [PMID: 29593519] [PMCID: PMC5859109] [DOI: 10.3389/fninf.2018.00008]
Abstract
The diversity of intrinsic dynamics observed in neurons may enhance the computations implemented in the circuit by enriching network-level emergent properties such as synchronization and phase locking. Large-scale spiking network models of entire brain regions offer a platform to test theories of neural computation and cognitive function, providing useful insights into information processing in the nervous system. However, a systematic in-depth investigation requires network simulations to capture the biological intrinsic diversity of individual neurons at a sufficient level of accuracy. The computationally efficient Izhikevich model can reproduce a wide range of neuronal behaviors qualitatively. Previous studies using optimization techniques, however, were less successful in quantitatively matching experimentally recorded voltage traces. In this article, we present an automated pipeline based on evolutionary algorithms to quantitatively reproduce features of various classes of neuronal spike patterns using the Izhikevich model. Employing experimental data from Hippocampome.org, a comprehensive knowledgebase of neuron types in the rodent hippocampus, we demonstrate that our approach reliably fits Izhikevich models to nine distinct classes of experimentally recorded spike patterns, including delayed spiking, spiking with adaptation, stuttering, and bursting. Importantly, by leveraging the parameter-exploration capabilities of evolutionary algorithms, and by representing qualitative spike pattern class definitions in the error landscape, our approach creates several suitable models for each neuron type, exhibiting appropriate feature variabilities among neurons. Moreover, we demonstrate the flexibility of our methodology by creating multi-compartment Izhikevich models for each neuron type in addition to single-point versions. Although the results presented here focus on hippocampal neuron types, the same strategy is broadly applicable to any neural systems.
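The intrinsic diversity the pipeline targets comes from the Izhikevich model's four parameters. The sketch below uses the well-known presets from Izhikevich (2003) to produce qualitatively different firing patterns; the input current, duration, and integration step are illustrative choices, not the pipeline's fitted values.

```python
import numpy as np

def izh_spike_times(a, b, c, d, I=10.0, T=500.0, dt=0.25):
    """Forward-Euler Izhikevich neuron under constant drive; the four
    parameters select the intrinsic firing pattern. Returns spike times (ms)."""
    v, u = -65.0, b * -65.0
    times = []
    for k in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:
            times.append(k * dt)
            v, u = c, u + d
    return np.array(times)

# Well-known parameter presets (Izhikevich, 2003).
presets = {
    "regular_spiking": (0.02, 0.2, -65.0, 8.0),
    "chattering":      (0.02, 0.2, -50.0, 2.0),
    "fast_spiking":    (0.10, 0.2, -65.0, 2.0),
}
counts = {name: len(izh_spike_times(*p)) for name, p in presets.items()}
```

An evolutionary search as in the paper would tune (a, b, c, d) per neuron type so that features of the simulated spike times match the recorded ones.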
Affiliation(s)
- Siva Venkadesh
- Center for Neural Informatics, Structures, and Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, United States
- Alexander O Komendantov
- Center for Neural Informatics, Structures, and Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, United States
- Stanislav Listopad
- Cognitive Anteater Robotics Laboratory, Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, United States
- Eric O Scott
- Adaptive Systems Laboratory, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, United States
- Kenneth De Jong
- Adaptive Systems Laboratory, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, United States
- Jeffrey L Krichmar
- Cognitive Anteater Robotics Laboratory, Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, United States
- Giorgio A Ascoli
- Center for Neural Informatics, Structures, and Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, United States
11. Fröhlich F, Theis FJ, Rädler JO, Hasenauer J. Parameter estimation for dynamical systems with discrete events and logical operations. Bioinformatics 2017; 33:1049-1056. [PMID: 28040696] [DOI: 10.1093/bioinformatics/btw764]
Abstract
Motivation: Ordinary differential equation (ODE) models are frequently used to describe the dynamic behaviour of biochemical processes. Such ODE models are often extended by events to describe the effect of fast latent processes on the process dynamics. To exploit the predictive power of ODE models, their parameters have to be inferred from experimental data. For models without events, gradient-based optimization schemes perform well for parameter estimation when sensitivity equations are used for gradient computation. Yet, sensitivity equations for models with parameter- and state-dependent events and event-triggered observations are not supported by existing toolboxes.
Results: In this manuscript, we describe the sensitivity equations for differential equation models with events and demonstrate how to estimate parameters from event-resolved data using event-triggered observations. We consider a model for GFP expression after transfection and a model for spiking neurons and demonstrate that using sensitivity equations for systems with events improves the computational efficiency and robustness of parameter estimation. Moreover, we demonstrate that, by using event outputs, it is possible to consider event-resolved data, such as time-to-event data, for parameter estimation with ODE models. Through a user-friendly, modular implementation in the toolbox AMICI, the developed methods are made publicly available and can be integrated into other systems biology toolboxes.
Availability and Implementation: The methods are implemented in the open-source toolbox Advanced MATLAB Interface for CVODES and IDAS (AMICI, https://github.com/ICB-DCM/AMICI ).
Contact: jan.hasenauer@helmholtz-muenchen.de
Supplementary information: Supplementary data are available at Bioinformatics online.
Affiliation(s)
- Fabian Fröhlich, Institute of Computational Biology, Helmholtz Zentrum München, Neuherberg 85764, Germany; Center for Mathematics, Technische Universität München, Garching 85748, Germany
- Fabian J Theis, Institute of Computational Biology, Helmholtz Zentrum München, Neuherberg 85764, Germany; Center for Mathematics, Technische Universität München, Garching 85748, Germany
- Joachim O Rädler, Faculty of Physics, Ludwig-Maximilians-Universität, München 80539, Germany
- Jan Hasenauer, Institute of Computational Biology, Helmholtz Zentrum München, Neuherberg 85764, Germany; Center for Mathematics, Technische Universität München, Garching 85748, Germany
12
GeNN: a code generation framework for accelerated brain simulations. Sci Rep 2016; 6:18854. [PMID: 26740369] [PMCID: PMC4703976] [DOI: 10.1038/srep18854] [Received: 06/30/2015] [Accepted: 11/19/2015]
Abstract
Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models, however, is computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open-source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that a 200-fold speedup compared to a single CPU core can be achieved for a network of one million conductance-based Hodgkin-Huxley neurons, but that the speedup can differ for other models. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/.
13
Colliaux D, Yger P, Kaneko K. Impact of sub and supra-threshold adaptation currents in networks of spiking neurons. J Comput Neurosci 2015; 39:255-70. [PMID: 26400658] [PMCID: PMC4649064] [DOI: 10.1007/s10827-015-0575-3] [Received: 11/05/2014] [Revised: 07/30/2015] [Accepted: 08/04/2015]
Abstract
Neuronal adaptation is the intrinsic capacity of the brain to change, by various mechanisms, its dynamical responses as a function of the context. Such phenomena, widely observed in vivo and in vitro, are known to be crucial for homeostatic regulation of activity and for gain control. The effects of adaptation have already been studied at the single-cell level, resulting from either voltage- or calcium-gated channels, both activated by spiking activity and modulating the dynamical responses of neurons. In this study, by disentangling these effects into a linear (sub-threshold) and a non-linear (supra-threshold) part, we focus on the functional role of these two distinct components of adaptation in neuronal activity at various scales, from single-cell responses up to recurrent network dynamics, and under stationary or non-stationary stimulation. The effects of slow currents on collective dynamics, such as the modulation of population oscillations and the reliability of spike patterns, are quantified for various types of adaptation in sparse recurrent networks.
Affiliation(s)
- David Colliaux, Institut des Systèmes Intelligents et de Robotique (ISIR), CNRS UMR 7222, UPMC University Paris, 4 Place Jussieu, 75005, Paris, France
- Pierre Yger, Institut d'Etudes de la Cognition, ENS, Paris, France; Sorbonne Université, UPMC University Paris 06, UMRS 968, Institut de la Vision, Paris, France; INSERM, U968, Paris, France; CNRS, UMR 7210, Paris, France
- Kunihiko Kaneko, Department of Basic Science, The University of Tokyo, 3-8-1, Komaba, Meguro-ku, Tokyo, 153-8902, Japan
14
Lynch EP, Houghton CJ. Parameter estimation of neuron models using in-vitro and in-vivo electrophysiological data. Front Neuroinform 2015; 9:10. [PMID: 25941485] [PMCID: PMC4403314] [DOI: 10.3389/fninf.2015.00010] [Received: 12/14/2014] [Accepted: 03/27/2015]
Abstract
Spiking neuron models can accurately predict the response of neurons to somatically injected currents if the model parameters are carefully tuned. Predicting the response of in-vivo neurons responding to natural stimuli presents a far more challenging modeling problem. In this study, an algorithm is presented for parameter estimation of spiking neuron models. The algorithm is a hybrid evolutionary algorithm which uses a spike train metric as a fitness function. We apply this to parameter discovery in modeling two experimental data sets with spiking neurons: in-vitro current-injection responses from a regular-spiking pyramidal neuron are modeled using spiking neurons, and in-vivo extracellular auditory data are modeled using a two-stage model consisting of a stimulus filter and a spiking neuron model.
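The fitness function here is a spike train metric. One standard choice, shown purely as an illustration and not necessarily the exact metric the authors use, is the van Rossum distance, which compares exponentially filtered versions of two spike trains:

```python
import numpy as np

def van_rossum_distance(t1, t2, tau=10.0, dt=0.1, T=100.0):
    """Distance between two spike trains (times in ms): convolve each train
    with a causal exponential kernel of time constant tau and take the L2
    norm of the difference (van Rossum, 2001)."""
    ts = np.arange(0.0, T, dt)

    def filtered(train):
        f = np.zeros_like(ts)
        for s in train:
            # causal kernel: zero before the spike, exp decay afterwards
            f += (ts >= s) * np.exp(-np.clip(ts - s, 0.0, None) / tau)
        return f

    diff = filtered(t1) - filtered(t2)
    return np.sqrt(np.sum(diff ** 2) * dt / tau)

# Distance is zero for identical trains and grows with jitter
d_same = van_rossum_distance([10, 50], [10, 50])
d_jitter = van_rossum_distance([10, 50], [12, 55])
```

Because the distance varies smoothly with spike times, it gives an evolutionary search a usable gradient-like signal, unlike a raw spike-count mismatch.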
Affiliation(s)
- Eoin P Lynch, School of Mathematics, Trinity College Dublin, Dublin, Ireland; Department of Computer Science, University of Bristol, Bristol, UK
- Conor J Houghton, Department of Computer Science, University of Bristol, Bristol, UK
15
Abstract
A large variety of neuron models are used in theoretical and computational neuroscience, and among these, single-compartment models are a popular kind. These models do not explicitly include the dendrites or the axon, and range from the Hodgkin-Huxley (HH) model to various flavors of integrate-and-fire (IF) models. The main classes of models differ in the way spikes are initiated. Which one is the most realistic? Starting with some general epistemological considerations, I show that the notion of realism comes in two dimensions: empirical content (the sort of predictions that a model can produce) and empirical accuracy (whether these predictions are correct). I then examine the realism of the main classes of single-compartment models along these two dimensions, in light of recent experimental evidence.
Affiliation(s)
- Romain Brette, Institut d'Etudes de la Cognition, Ecole Normale Supérieure, Paris, France; Sorbonne Universités, UPMC Univ. Paris 06, UMR_S 968, Institut de la Vision, Paris, France; INSERM, U968, Paris, France; CNRS, UMR_7210, Paris, France
16
Friedrich P, Vella M, Gulyás AI, Freund TF, Káli S. A flexible, interactive software tool for fitting the parameters of neuronal models. Front Neuroinform 2014; 8:63. [PMID: 25071540] [PMCID: PMC4091312] [DOI: 10.3389/fninf.2014.00063] [Received: 11/01/2013] [Accepted: 06/11/2014]
Abstract
The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problems of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential integrate-and-fire) neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting tool.
Affiliation(s)
- Péter Friedrich, Laboratory of Cerebral Cortex Research, Institute of Experimental Medicine, Hungarian Academy of Sciences, Budapest, Hungary; Faculty of Information Technology, Péter Pázmány Catholic University, Budapest, Hungary
- Michael Vella, Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, UK
- Attila I Gulyás, Laboratory of Cerebral Cortex Research, Institute of Experimental Medicine, Hungarian Academy of Sciences, Budapest, Hungary
- Tamás F Freund, Laboratory of Cerebral Cortex Research, Institute of Experimental Medicine, Hungarian Academy of Sciences, Budapest, Hungary; Faculty of Information Technology, Péter Pázmány Catholic University, Budapest, Hungary
- Szabolcs Káli, Laboratory of Cerebral Cortex Research, Institute of Experimental Medicine, Hungarian Academy of Sciences, Budapest, Hungary; Faculty of Information Technology, Péter Pázmány Catholic University, Budapest, Hungary
17
Cáceres MJ, Perthame B. Beyond blow-up in excitatory integrate and fire neuronal networks: Refractory period and spontaneous activity. J Theor Biol 2014; 350:81-9. [DOI: 10.1016/j.jtbi.2014.02.005] [Received: 06/21/2013] [Revised: 01/28/2014] [Accepted: 02/06/2014]
18
Fontaine B, MacLeod KM, Lubejko ST, Steinberg LJ, Köppl C, Peña JL. Emergence of band-pass filtering through adaptive spiking in the owl's cochlear nucleus. J Neurophysiol 2014; 112:430-45. [PMID: 24790170] [DOI: 10.1152/jn.00132.2014]
Abstract
In the visual, auditory, and electrosensory modalities, stimuli are defined by first- and second-order attributes. The fast time-pressure signal of a sound, a first-order attribute, is important, for instance, in sound localization and pitch perception, while its slow amplitude-modulated envelope, a second-order attribute, can be used for sound recognition. Ascending the auditory pathway from ear to midbrain, neurons increasingly show a preference for the envelope and are most sensitive to particular envelope modulation frequencies, a tuning considered important for encoding sound identity. The level at which this tuning property emerges along the pathway varies across species, and the mechanism of how this occurs is a matter of debate. In this paper, we target the transition between auditory nerve fibers and the cochlear nucleus angularis (NA). While the owl's auditory nerve fibers simultaneously encode the fast and slow attributes of a sound, one synapse further, NA neurons encode the envelope more efficiently than the auditory nerve. Using in vivo and in vitro electrophysiology and computational analysis, we show that a single-cell mechanism inducing spike threshold adaptation can explain the difference in neural filtering between the two areas. We show that spike threshold adaptation can explain the increased selectivity to modulation frequency, as input level increases in NA. These results demonstrate that a spike generation nonlinearity can modulate the tuning to second-order stimulus features, without invoking network or synaptic mechanisms.
Affiliation(s)
- Bertrand Fontaine, Dominick Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York
- Katrina M MacLeod, Department of Biology, Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland
- Susan T Lubejko, Department of Biology, Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland
- Louisa J Steinberg, Dominick Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York
- Christine Köppl, Cluster of Excellence "Hearing4all" and Research Center Neurosensory Science and Department of Neuroscience, School of Medicine and Health Science, Carl von Ossietzky University, Oldenburg, Germany
- Jose L Peña, Dominick Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York
19
Fontaine B, Peña JL, Brette R. Spike-threshold adaptation predicted by membrane potential dynamics in vivo. PLoS Comput Biol 2014; 10:e1003560. [PMID: 24722397] [PMCID: PMC3983065] [DOI: 10.1371/journal.pcbi.1003560] [Received: 08/26/2013] [Accepted: 02/21/2014]
Abstract
Neurons encode information in sequences of spikes, which are triggered when their membrane potential crosses a threshold. In vivo, the spiking threshold displays large variability, suggesting that threshold dynamics have a profound influence on how the combined input of a neuron is encoded in its spiking. Threshold variability could be explained by adaptation to the membrane potential. However, it could also be the case that most threshold variability reflects noise and processes other than threshold adaptation. Here, we investigated threshold variation in the responses of auditory neurons recorded in vivo in barn owls. We found that spike threshold is quantitatively predicted by a model in which the threshold adapts, tracking the membrane potential at a short timescale. As a result, in these neurons, slow voltage fluctuations do not contribute to spiking because they are filtered out by threshold adaptation. More importantly, these neurons can only respond to input spikes arriving together on a millisecond timescale. These results demonstrate that fast adaptation to the membrane potential captures spike threshold variability in vivo.

Neurons spike when their membrane potential exceeds a threshold value, but this value has been shown to be variable in the same neuron recorded in vivo. This variability could reflect noise, or deterministic processes that make the threshold vary with the membrane potential. The second alternative would have important functional consequences. Here, we show that threshold variability is a genuine feature of neurons, which reflects adaptation to the membrane potential at a short timescale, with little contribution from noise. This demonstrates that a deterministic model can predict spikes based only on the membrane potential.
Affiliation(s)
- Bertrand Fontaine, Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York, United States of America
- José Luis Peña, Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York, United States of America
- Romain Brette, Laboratoire Psychologie de la Perception, CNRS and Université Paris Descartes, Paris, France; Département d'Etudes Cognitives, Ecole Normale Supérieure, Paris, France; Sorbonne Universités, UPMC Univ. Paris 06, UMR_S 968, Institut de la Vision, Paris, France; INSERM, U968, Paris, France; CNRS, UMR_7210, Paris, France
20
Carlson KD, Nageswaran JM, Dutt N, Krichmar JL. An efficient automated parameter tuning framework for spiking neural networks. Front Neurosci 2014; 8:10. [PMID: 24550771] [PMCID: PMC3912986] [DOI: 10.3389/fnins.2014.00010] [Received: 08/26/2013] [Accepted: 01/17/2014]
Abstract
As the desire for biologically realistic spiking neural networks (SNNs) increases, tuning the enormous number of open parameters in these models becomes a difficult challenge. SNNs have been used to successfully model complex neural circuits that explore various neural phenomena such as neural plasticity, vision systems, auditory systems, neural oscillations, and many other important topics of neural function. Additionally, SNNs are particularly well-adapted to run on neuromorphic hardware that will support biological brain-scale architectures. Although the inclusion of realistic plasticity equations, neural dynamics, and recurrent topologies has increased the descriptive power of SNNs, it has also made the task of tuning these biologically realistic SNNs difficult. To meet this challenge, we present an automated parameter tuning framework capable of tuning SNNs quickly and efficiently using evolutionary algorithms (EAs) and inexpensive, readily accessible graphics processing units (GPUs). A sample SNN with 4104 neurons was tuned to give V1 simple cell-like tuning curve responses and produce self-organizing receptive fields (SORFs) when presented with a random sequence of counterphase sinusoidal grating stimuli. A performance analysis comparing the GPU-accelerated implementation to a single-threaded central processing unit (CPU) implementation showed a 65× speedup of the GPU implementation over the CPU implementation, or 0.35 h per generation for the GPU vs. 23.5 h per generation for the CPU. Additionally, the parameter value solutions found in the tuned SNN were studied and found to be stable and repeatable. The automated parameter tuning framework presented here will be of use to both the computational neuroscience and neuromorphic engineering communities, making the process of constructing and tuning large-scale SNNs much quicker and easier.
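The core loop of evolutionary parameter tuning can be sketched in a few lines. This toy serial version tunes two Izhikevich parameters to a target firing rate; it only illustrates the idea, not the GPU-accelerated framework described above, and all parameter ranges and mutation scales are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

def firing_rate(a, d, I=10.0, T=500.0, dt=0.5):
    """Firing rate (Hz) of an Izhikevich neuron (b=0.2, c=-65 fixed) under
    constant drive I, simulated for T ms with forward Euler."""
    v, u, n_spk = -65.0, -13.0, 0
    for _ in range(int(T / dt)):
        if v >= 30.0:
            v, u, n_spk = -65.0, u + d, n_spk + 1
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (0.2 * v - u)
    return 1000.0 * n_spk / T

def evolve(target_rate=20.0, pop_size=16, generations=30):
    """(mu+lambda)-style search over (a, d): truncation selection plus
    Gaussian mutation, minimizing the absolute firing-rate error."""
    pop = rng.uniform([0.01, 0.1], [0.1, 10.0], size=(pop_size, 2))
    for _ in range(generations):
        errs = np.array([abs(firing_rate(a, d) - target_rate) for a, d in pop])
        parents = pop[np.argsort(errs)[: pop_size // 2]]
        children = parents + rng.normal(0.0, [0.005, 0.5], parents.shape)
        pop = np.vstack([parents, np.clip(children, [0.001, 0.05], [0.2, 12.0])])
    errs = np.array([abs(firing_rate(a, d) - target_rate) for a, d in pop])
    return pop[np.argmin(errs)], errs.min()

best, err = evolve()
```

The fitness evaluations inside each generation are independent, which is exactly the structure the paper exploits by evaluating individuals in parallel on a GPU.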
Affiliation(s)
- Kristofor D Carlson, Department of Cognitive Sciences, University of California Irvine, Irvine, CA, USA
- Nikil Dutt, Department of Computer Science, University of California Irvine, Irvine, CA, USA
- Jeffrey L Krichmar, Department of Cognitive Sciences, University of California Irvine, Irvine, CA, USA; Department of Computer Science, University of California Irvine, Irvine, CA, USA
21
Fontaine B, Benichoux V, Joris PX, Brette R. Predicting spike timing in highly synchronous auditory neurons at different sound levels. J Neurophysiol 2013; 110:1672-88. [PMID: 23864375] [PMCID: PMC4042421] [DOI: 10.1152/jn.00051.2013] [Received: 01/23/2013] [Accepted: 07/15/2013]
Abstract
A challenge for sensory systems is to encode natural signals that vary in amplitude by orders of magnitude. The spike trains of neurons in the auditory system must represent the fine temporal structure of sounds despite a tremendous variation in sound level in natural environments. It has been shown in vitro that the transformation from dynamic signals into precise spike trains can be accurately captured by simple integrate-and-fire models. In this work, we show that the in vivo responses of cochlear nucleus bushy cells to sounds across a wide range of levels can be precisely predicted by deterministic integrate-and-fire models with adaptive spike threshold. Our model can predict both the spike timings and the firing rate in response to novel sounds, across a large input level range. A noisy version of the model accounts for the statistical structure of spike trains, including the reliability and temporal precision of responses. Spike threshold adaptation was critical to ensure that predictions remain accurate at different levels. These results confirm that simple integrate-and-fire models provide an accurate phenomenological account of spike train statistics and emphasize the functional relevance of spike threshold adaptation.
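A deterministic integrate-and-fire model with an adaptive spike threshold, of the general kind used here, can be sketched as follows (generic parameter values, not the ones fitted to bushy cells):

```python
import numpy as np

def adaptive_threshold_lif(I, dt=0.1, tau_m=10.0, tau_th=5.0,
                           v_rest=-65.0, th0=-50.0, dth=8.0):
    """Leaky integrate-and-fire neuron whose spike threshold jumps by dth at
    each spike and relaxes back to th0 with time constant tau_th (ms, mV)."""
    v, th = v_rest, th0
    spikes = []
    for i, inp in enumerate(I):
        v += dt * (-(v - v_rest) + inp) / tau_m
        th += dt * (th0 - th) / tau_th        # threshold relaxes toward baseline
        if v >= th:
            spikes.append(i * dt)
            v = v_rest                        # reset the membrane
            th += dth                         # spiking is harder right afterwards
    return spikes

# Under a constant suprathreshold step, threshold adaptation stretches the
# later inter-spike intervals relative to the first one
I = np.full(5000, 20.0)                       # 500 ms of constant drive (mV)
spikes = adaptive_threshold_lif(I)
isis = np.diff(spikes)
</ ```

With dth = 0 the model reduces to a plain LIF neuron with equal inter-spike intervals; the threshold dynamics alone produce the level-dependent adaptation the abstract describes.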
Affiliation(s)
- Bertrand Fontaine, Laboratoire Psychologie de la Perception, CNRS, Université Paris Descartes, Paris, France
22
Fontaine B, Steinberg LJ, Peña JL. Sound envelope extraction in cochlear nucleus neurons: modulation filterbank and cellular mechanism. BMC Neurosci 2013. [PMCID: PMC3704833] [DOI: 10.1186/1471-2202-14-s1-p312]
23
Chen D, Li X, Cui D, Wang L, Lu D. Global Synchronization Measurement of Multivariate Neural Signals with Massively Parallel Nonlinear Interdependence Analysis. IEEE Trans Neural Syst Rehabil Eng 2013; 22:33-43. [PMID: 23674459] [DOI: 10.1109/tnsre.2013.2258939]
Abstract
The estimation of synchronization among multiple brain regions is a critical issue in understanding brain function. There is a lack of an appropriate approach capable of 1) measuring the direction and strength of synchronization of the activities of multiple brain regions, and 2) adapting to the quickly increasing sizes and scales of neural signals. Nonlinear Interdependence (NLI) analysis is an effective method for measuring the synchronization direction and strength of bivariate neural signals. However, the method does not directly apply to multivariate signals, and its application in practice has long been largely hampered by the ultra-high complexity of NLI algorithms. Aiming at these problems, this study 1) extends conventional NLI to quantify the global synchronization of multivariate neural signals, and 2) develops a parallelized NLI method with general-purpose computing on the graphics processing unit (GPGPU), namely G-NLI, which performs synchronization measurement in a massively parallel manner. G-NLI improves runtime performance by more than 1000 times compared to the original sequential NLI. We employed G-NLI to analyze 10-channel local field potential (LFP) recordings from a patient suffering from temporal lobe epilepsy. The results demonstrate that the proposed G-NLI method can support real-time global synchronization measurement and can be successful in localizing the epileptic focus.
24
Abstract
Modern graphics cards contain hundreds of cores that can be programmed for intensive calculations. They are beginning to be used for spiking neural network simulations. The goal is to make parallel simulation of spiking neural networks available to a large audience, without the requirements of a cluster. We review the ongoing efforts towards this goal, and we outline the main difficulties.
Affiliation(s)
- Romain Brette, Laboratoire Psychologie de la Perception, CNRS and Université Paris Descartes, Paris, France
25
Hertäg L, Hass J, Golovko T, Durstewitz D. An Approximation to the Adaptive Exponential Integrate-and-Fire Neuron Model Allows Fast and Predictive Fitting to Physiological Data. Front Comput Neurosci 2012; 6:62. [PMID: 22973220] [PMCID: PMC3434419] [DOI: 10.3389/fncom.2012.00062] [Received: 01/26/2012] [Accepted: 08/03/2012]
Abstract
For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f-I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron's response under a wide range of mean-input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f-I current-clamp data, the model cells are tested on completely different sets of recordings obtained by fluctuating ("in vivo-like") input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model's generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a "high-throughput" model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available.
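The benefit of fitting a closed-form firing-rate expression instead of integrating the model can be illustrated with the simplest case, the ordinary LIF neuron, whose f-I curve is known in closed form; fitting it to synthetic f-I data is then a cheap least-squares problem with no ODE integration. This is a generic sketch with made-up data, not the authors' AdEx approximation:

```python
import numpy as np
from scipy.optimize import curve_fit

def lif_rate(I, tau_m, v_th):
    """Closed-form f-I curve of a leaky integrate-and-fire neuron driven above
    threshold: rate in Hz, tau_m in ms, current expressed as the steady-state
    depolarization in mV (rest and reset at 0)."""
    return 1000.0 / (tau_m * np.log(I / (I - v_th)))

# Hypothetical "recorded" f-I points generated from known parameters plus noise
rng = np.random.default_rng(1)
I = np.linspace(16.0, 40.0, 20)               # suprathreshold currents only
data = lif_rate(I, 12.0, 15.0) + rng.normal(0.0, 1.0, I.size)

# Least-squares fit of the closed form (bounds keep I - v_th positive)
params, _ = curve_fit(lif_rate, I, data, p0=[10.0, 10.0],
                      bounds=([1.0, 1.0], [50.0, 15.9]))
```

Each evaluation of `lif_rate` is a single vectorized expression, which is why this kind of fit can be orders of magnitude faster than repeatedly integrating the differential equations.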
Affiliation(s)
- Loreen Hertäg, Bernstein-Center for Computational Neuroscience, Central Institute of Mental Health, Psychiatry, Medical Faculty Mannheim of Heidelberg University, Mannheim, Germany
- Joachim Hass, Bernstein-Center for Computational Neuroscience, Central Institute of Mental Health, Psychiatry, Medical Faculty Mannheim of Heidelberg University, Mannheim, Germany
- Tatiana Golovko, Bernstein-Center for Computational Neuroscience, Central Institute of Mental Health, Psychiatry, Medical Faculty Mannheim of Heidelberg University, Mannheim, Germany
- Daniel Durstewitz, Bernstein-Center for Computational Neuroscience, Central Institute of Mental Health, Psychiatry, Medical Faculty Mannheim of Heidelberg University, Mannheim, Germany
26
Rossant C, Fontaine B, Magnusson AK, Brette R. A calibration-free electrode compensation method. J Neurophysiol 2012; 108:2629-39. [PMID: 22896724] [DOI: 10.1152/jn.01122.2011]
Abstract
In a single-electrode current-clamp recording, the measured potential includes both the response of the membrane and that of the measuring electrode. The electrode response is traditionally removed using bridge balance, where the response of an ideal resistor representing the electrode is subtracted from the measurement. Because the electrode is not an ideal resistor, this procedure produces capacitive transients in response to fast or discontinuous currents. More sophisticated methods exist, but they all require a preliminary calibration phase, to estimate the properties of the electrode. If these properties change after calibration, the measurements are corrupted. We propose a compensation method that does not require preliminary calibration. Measurements are compensated offline by fitting a model of the neuron and electrode to the trace and subtracting the predicted electrode response. The error criterion is designed to avoid the distortion of compensated traces by spikes. The technique allows electrode properties to be tracked over time and can be extended to arbitrary models of electrode and neuron. We demonstrate the method using biophysical models and whole cell recordings in cortical and brain-stem neurons.
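The traditional bridge-balance procedure that this abstract starts from, treating the electrode as an ideal resistor and subtracting its response, can be sketched on synthetic data. Note this is the naive baseline the paper improves upon, not the authors' calibration-free method:

```python
import numpy as np

def simulate_recording(I, dt=0.1, tau_m=20.0, R_m=50.0, R_e=30.0, v0=-70.0):
    """Synthetic single-electrode recording: the membrane is an RC circuit and
    the electrode an idealized series resistance R_e (MOhm, nA, mV, ms)."""
    v = np.full(I.size, v0)
    for i in range(1, I.size):
        v[i] = v[i - 1] + dt * (-(v[i - 1] - v0) + R_m * I[i - 1]) / tau_m
    return v + R_e * I           # measured potential = membrane + electrode drop

# For a current step, the instantaneous jump at onset is purely the electrode
# response, because the membrane potential is continuous in time
dt = 0.1
I = np.r_[np.zeros(1000), np.full(2000, 0.5)]
v_meas = simulate_recording(I, dt)
R_e_hat = (v_meas[1000] - v_meas[999]) / 0.5   # bridge estimate from the jump
v_comp = v_meas - R_e_hat * I                  # compensated trace
print(round(R_e_hat, 1))                       # → 30.0
```

Because a real electrode also has capacitance, this simple subtraction leaves transients behind; that is precisely the failure mode the model-fitting compensation in the paper addresses.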
Affiliation(s)
- Cyrille Rossant, Laboratoire Psychologie de la Perception, Centre National de la Recherche Scientifique, Université Paris Descartes, Paris, France
27
Optimizing ion channel models using a parallel genetic algorithm on graphical processors. J Neurosci Methods 2012; 206:183-94. [PMID: 22407006] [DOI: 10.1016/j.jneumeth.2012.02.024] [Received: 06/06/2011] [Revised: 02/25/2012] [Accepted: 02/28/2012]
Abstract
We have recently shown that we can semi-automatically constrain models of voltage-gated ion channels by combining a stochastic search algorithm with ionic currents measured using multiple voltage-clamp protocols. Although numerically successful, this approach is highly demanding computationally, with optimization on a high-performance Linux cluster typically lasting several days. To remove this computational bottleneck, we ported our optimization algorithm to a graphics processing unit (GPU) using NVIDIA's CUDA. Parallelizing the process on a Fermi-generation NVIDIA GPU increased the speed ∼180 times over the same application running on an 80-node Linux cluster, considerably reducing simulation times. This application allows users to optimize models of ion channel kinetics on a single, inexpensive desktop "supercomputer," greatly reducing the time and cost of building models relevant to neuronal physiology. We also demonstrate that the point of algorithm parallelization is crucial to its performance. We substantially reduced computing time by solving the ODEs (ordinary differential equations) in a way that massively reduces memory transfers to and from the GPU. This approach may be applied to speed up other data-intensive applications requiring iterative solutions of ODEs.
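The key design point, evaluating the entire population's ODE solutions in one batch so that data stay on the device, can be illustrated with a toy genetic algorithm. The sketch below fits a two-parameter first-order gating model, with vectorized NumPy standing in for the GPU; the model, parameter ranges, and GA operators are illustrative assumptions, not the paper's channel models or search algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 50.0, 200)              # time axis (ms)

def currents(pop):
    """Evaluate ALL candidate models in one vectorized batch --
    the analogue of keeping the ODE solve on the GPU."""
    g, tau = pop[:, 0:1], pop[:, 1:2]
    m = 1.0 - np.exp(-t / tau)               # first-order activation gate
    return g * m                             # (population, time) traces

target = currents(np.array([[12.0, 8.0]]))[0]  # synthetic "measured" current

pop = rng.uniform([1.0, 1.0], [20.0, 20.0], size=(64, 2))
for generation in range(300):
    err = np.mean((currents(pop) - target) ** 2, axis=1)  # batch fitness
    elite = pop[np.argsort(err)[:16]]                     # selection
    parents = elite[rng.integers(0, 16, size=(63, 2))]
    children = parents.mean(axis=1)                       # crossover
    children += rng.normal(0.0, 0.2, children.shape)      # mutation
    pop = np.vstack([elite[:1], np.clip(children, 1.0, 20.0)])  # elitism

err = np.mean((currents(pop) - target) ** 2, axis=1)
g_best, tau_best = pop[np.argmin(err)]
```

On a real GPU the same structure applies, but `currents` would be a kernel integrating the channel ODEs per candidate, and only the scalar fitness values would cross the device boundary each generation.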
28
Abstract
How do neurons compute? Two main theories compete: neurons could temporally integrate noisy inputs (rate-based theories) or they could detect coincident input spikes (spike timing-based theories). Correlations at fine timescales have been observed in many areas of the nervous system, but they might have a minor impact. To address this issue, we used a probabilistic approach to quantify the impact of coincidences on neuronal response in the presence of fluctuating synaptic activity. We found that when excitation and inhibition are balanced, as in the sensory cortex in vivo, synchrony in a very small proportion of inputs results in dramatic increases in output firing rate. Our theory was experimentally validated with in vitro recordings of cortical neurons of mice. We conclude that not only are noisy neurons well equipped to detect coincidences, but they are so sensitive to fine correlations that a rate-based description of neural computation is unlikely to be accurate in general.
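The core effect can be demonstrated with a minimal leaky integrate-and-fire neuron driven by balanced (zero-mean, fluctuating) background input: the same number of extra input spikes raises the output rate far more when grouped into coincident volleys than when spread out in time. All constants below (weights, noise level, threshold) are illustrative assumptions, not the paper's probabilistic framework or experimental parameters.

```python
import numpy as np

def lif_rate(extra_times, seed, T=2000.0, dt=0.1, w=0.15):
    """LIF neuron driven by a zero-mean (balanced E/I) background current,
    plus extra excitatory input spikes at the given times (ms)."""
    rng = np.random.default_rng(seed)
    n = int(round(T / dt))
    extra = np.zeros(n)
    for t_sp in extra_times:
        extra[min(int(t_sp / dt), n - 1)] += w
    noise = rng.normal(0.0, 0.2, n)           # balanced background drive
    v, tau, spikes = 0.0, 10.0, 0
    for k in range(n):
        v += dt / tau * (-v) + noise[k] * np.sqrt(dt) + extra[k]
        if v >= 1.0:                          # threshold (normalized units)
            spikes += 1
            v = 0.0                           # reset
    return spikes / (T / 1000.0)              # output rate (Hz)

rng = np.random.default_rng(42)
n_extra = 200
async_times = rng.uniform(0.0, 2000.0, n_extra)           # spread out
sync_times = np.repeat(rng.uniform(0.0, 2000.0, 20), 10)  # 20 volleys of 10
r_async = np.mean([lif_rate(async_times, s) for s in range(5)])
r_sync = np.mean([lif_rate(sync_times, s) for s in range(5)])
```

Because the background keeps the membrane fluctuating near threshold, a volley of ten coincident inputs almost always triggers a spike, whereas the same ten inputs delivered asynchronously barely perturb the rate; this is the fluctuation-driven sensitivity the abstract describes.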
29
Yamauchi S, Kim H, Shinomoto S. Elemental spiking neuron model for reproducing diverse firing patterns and predicting precise firing times. Front Comput Neurosci 2011; 5:42. [PMID: 22203798 PMCID: PMC3215233 DOI: 10.3389/fncom.2011.00042] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2011] [Accepted: 09/14/2011] [Indexed: 12/02/2022] Open
Abstract
In simulating realistic neuronal circuitry composed of diverse types of neurons, we need an elemental spiking neuron model that is capable of not only quantitatively reproducing spike times of biological neurons given in vivo-like fluctuating inputs, but also qualitatively representing a variety of firing responses to transient current inputs. Simplistic models based on leaky integrate-and-fire mechanisms have demonstrated the ability to adapt to biological neurons. In particular, the multi-timescale adaptive threshold (MAT) model reproduces and predicts precise spike times of regular-spiking, intrinsic-bursting, and fast-spiking neurons under any fluctuating current; however, this model is incapable of reproducing such specific firing responses as inhibitory rebound spiking and resonate spiking. In this paper, we augment the MAT model by adding a voltage dependency term to the adaptive threshold so that the model can exhibit the full variety of firing responses to various transient current pulses while maintaining the high adaptability inherent in the original MAT model. Furthermore, with this addition, our model is actually able to better predict spike times. Despite the augmentation, the model has only four free parameters and is implementable in an efficient algorithm for large-scale simulation due to its linearity, serving as an elemental neuron model in the simulation of realistic neuronal circuitry.
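The mechanism underlying the MAT family of models can be sketched briefly: the membrane potential integrates its input without ever being reset, and spiking is governed entirely by an adaptive threshold that jumps after each spike and decays on multiple timescales. The sketch below implements the original two-timescale MAT threshold and omits the paper's voltage-dependent augmentation; all constants are illustrative, not the fitted values from the paper.

```python
import numpy as np

def mat_spike_times(I, dt, tau_m=10.0, R=50.0, omega=10.0,
                    alphas=(15.0, 3.0), taus=(10.0, 200.0), t_ref=2.0):
    """Two-timescale MAT neuron: the membrane integrates WITHOUT reset;
    a spike is emitted when V crosses a threshold that jumps by alpha_j
    after each spike and decays back with time constants tau_j."""
    alphas, taus = np.asarray(alphas), np.asarray(taus)
    decay = np.exp(-dt / taus)
    h = np.zeros_like(alphas)        # spike-triggered threshold components
    v, last, out = 0.0, -np.inf, []
    for k in range(len(I)):
        t = k * dt
        v += dt / tau_m * (-v + R * I[k])   # non-resetting leaky integrator
        h *= decay
        if v >= omega + h.sum() and t - last >= t_ref:
            out.append(t)
            h += alphas                      # threshold jumps; V does not reset
            last = t
    return out

dt = 0.1
I = np.full(5000, 0.5)               # 500 ms constant current step (nA)
times = mat_spike_times(I, dt)       # adapting spike train
```

Because both the membrane and the threshold dynamics are linear between spikes, the model can be advanced with precomputed decay factors, which is what makes it cheap enough to serve as the element of large-scale network simulations.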
30
Richert M, Nageswaran JM, Dutt N, Krichmar JL. An efficient simulation environment for modeling large-scale cortical processing. Front Neuroinform 2011; 5:19. [PMID: 22007166 PMCID: PMC3172707 DOI: 10.3389/fninf.2011.00019] [Citation(s) in RCA: 48] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2011] [Accepted: 08/27/2011] [Indexed: 11/13/2022] Open
Abstract
We have developed a spiking neural network simulator, which is both easy to use and computationally efficient, for the generation of large-scale computational neuroscience models. The simulator implements current- or conductance-based Izhikevich neuron networks with spike-timing-dependent plasticity and short-term plasticity. It uses a standard network construction interface. The simulator allows for execution on either GPUs or CPUs. The simulator, which is written in C/C++, allows for both fine-grain and coarse-grain specificity of a host of parameters. We demonstrate the ease of use and computational efficiency of this model by implementing a large-scale model of cortical areas V1, V4, and area MT. The complete model, which has 138,240 neurons and approximately 30 million synapses, runs in real-time on an off-the-shelf GPU. The simulator source code, as well as the source code for the cortical model examples, is publicly available.
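The building block of such a simulator is the Izhikevich two-variable neuron, whose published update equations are cheap enough to run for hundreds of thousands of cells. The sketch below shows a single neuron with the canonical regular-spiking parameter set; the network construction, plasticity rules, and GPU execution of the actual simulator are omitted, and the step input is an illustrative assumption.

```python
import numpy as np

def izhikevich_spikes(I, dt=0.1, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Izhikevich two-variable neuron with the canonical regular-spiking
    parameters: quadratic voltage dynamics, a linear recovery variable u,
    and a reset (v <- c, u <- u + d) when v reaches +30 mV."""
    v, u, spike_times = c, b * c, []
    for k in range(len(I)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I[k])
        u += dt * a * (b * v - u)
        if v >= 30.0:
            spike_times.append(k * dt)
            v, u = c, u + d                  # reset after a spike
    return spike_times

I = np.full(10000, 10.0)                     # 1 s step of suprathreshold input
spike_times = izhikevich_spikes(I)
```

Swapping the (a, b, c, d) parameter set changes the firing phenotype (bursting, fast-spiking, and so on), which is why a simulator built on this model can cover diverse cortical cell types with one update rule.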
Affiliation(s)
- Micah Richert
- Department of Cognitive Sciences, University of California Irvine, CA, USA