1. Zhang A, Niu Y, Gao Y, Wu J, Gao Z. Second-order information bottleneck based spiking neural networks for sEMG recognition. Inf Sci (N Y) 2022. DOI: 10.1016/j.ins.2021.11.065
2. Conductance-Based Refractory Density Approach for a Population of Bursting Neurons. Bull Math Biol 2019; 81:4124-4143. PMID: 31313084; DOI: 10.1007/s11538-019-00643-8
Abstract
The conductance-based refractory density (CBRD) approach is a parsimonious mathematical-computational framework for modelling interacting populations of regular spiking neurons; it has not, however, yet been extended to populations of bursting neurons. The canonical CBRD method describes the firing activity of a statistical ensemble of uncoupled Hodgkin-Huxley-like neurons (differentiated by noise) and has been validated against experimental data. The present manuscript generalises the CBRD to a population of bursting neurons; in this pilot computational study, we consider the simplest setting, in which each individual neuron is governed by piecewise linear bursting dynamics. The resulting population model makes use of slow-fast analysis, leading to a novel methodology that combines CBRD with the theory of multiple-timescale dynamics. The main prospect is that it opens new avenues for mathematical exploration, as well as for deriving more sophisticated population activity from Hodgkin-Huxley-like bursting neurons, which will make it possible to capture synchronised bursting in hyper-excitable brain states (e.g. at the onset of epilepsy).
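The slow-fast, piecewise-linear setting described above can be illustrated with a minimal sketch: a piecewise-linear relaxation oscillator with one fast voltage-like variable and one slow recovery variable. This is not the paper's model; the nullcline shape and all parameter values are invented for illustration.

```python
def F(v):
    # Piecewise-linear N-shaped nullcline (a PWL caricature of a cubic)
    if v < -1.0:
        return -v - 2.0
    elif v <= 1.0:
        return v
    return -v + 2.0

def simulate(T=200.0, dt=0.01, eps=0.05, a=0.0):
    v, u = -1.5, 0.5   # start near the left (silent) branch
    spikes = 0
    above = False
    for _ in range(int(T / dt)):
        dv = F(v) - u          # fast voltage-like variable
        du = eps * (v - a)     # slow recovery variable
        v += dt * dv
        u += dt * du
        if v > 1.0 and not above:   # count jumps to the active branch
            spikes += 1
            above = True
        elif v < 0.0:
            above = False
    return spikes, v, u

spikes, v, u = simulate()
```

The separation of timescales (eps small) produces the relaxation cycling between branches that slow-fast analysis exploits.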
3. de Kamps M, Lepperød M, Lai YM. Computational geometry for modeling neural populations: From visualization to simulation. PLoS Comput Biol 2019; 15:e1006729. PMID: 30830903; PMCID: PMC6417745; DOI: 10.1371/journal.pcbi.1006729
Abstract
The importance of a mesoscopic description level of the brain is now well established. Rate-based models are widely used but have limitations. Recently, several extremely efficient population-level methods have been proposed that go beyond characterizing a population in terms of a single variable. Here, we present a method for simulating neural populations based on two-dimensional (2D) point spiking neuron models that defines the state of the population in terms of a density function over the neural state space. Our method differs in that we make neither the diffusion approximation nor a reduction of the state space to a single dimension (1D). We do not hard-code the neural model but read in a grid describing its state space in the relevant simulation region, so novel models can be studied without even recompiling the code. The method is highly modular: variations of the deterministic neural dynamics and of the stochastic process can be investigated independently. There is currently a trend to reduce complex high-dimensional neuron models to 2D ones, as they offer a rich dynamical repertoire that is not available in 1D, such as limit cycles. We demonstrate that our method is ideally suited to investigating noise in such systems, replicating results obtained in the diffusion limit and generalizing them to a regime of large jumps. The joint probability density function is much more informative than 1D marginals, and we argue that the study of 2D systems subject to noise is an important complement to that of 1D systems.
Affiliation(s)
- Marc de Kamps
- Institute for Artificial and Biological Intelligence, University of Leeds, Leeds, West Yorkshire, United Kingdom
- Mikkel Lepperød
- Institute of Basic Medical Sciences, and Center for Integrative Neuroplasticity, University of Oslo, Oslo, Norway
- Yi Ming Lai
- Institute for Artificial and Biological Intelligence, University of Leeds, Leeds, West Yorkshire, United Kingdom; currently at the School of Mathematical Sciences, University of Nottingham, Nottingham, United Kingdom
4. An Efficient Population Density Method for Modeling Neural Networks with Synaptic Dynamics Manifesting Finite Relaxation Time and Short-Term Plasticity. eNeuro 2019; 5:ENEURO.0002-18.2018. PMID: 30662939; PMCID: PMC6336402; DOI: 10.1523/ENEURO.0002-18.2018
Abstract
When more realistic synaptic dynamics are incorporated, the computational efficiency of population density methods (PDMs) declines sharply because the dimension of the master equations increases. To avoid this decline, we develop an efficient PDM, termed the colored-synapse PDM (csPDM), in which the dimension of the master equations does not depend on the number of synapse-associated state variables in the underlying network model. Our goal is to allow the PDM to incorporate realistic synaptic dynamics that possess not only a finite relaxation time but also short-term plasticity (STP). The model equations of the csPDM are derived from the diffusion approximation on synaptic dynamics and from probability density function methods for Langevin equations with colored noise. Numerical examples, given by simulations of the population dynamics of uncoupled exponential integrate-and-fire (EIF) neurons, show good agreement between the results of the csPDM and Monte Carlo simulations (MCSs). Compared with the original full-dimensional PDM (fdPDM), the csPDM offers far better computational efficiency because of the lower dimension of its master equations. In addition, it permits the network dynamics to exhibit the short-term plastic characteristics inherited from plastic synapses. The csPDM is potentially applicable to any spiking neuron model because it makes no assumptions about the neuronal dynamics, and, more importantly, this is the first PDM reported to successfully encompass short-term facilitation/depression.
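A Monte Carlo sketch of the kind of dynamics the csPDM is built to capture: an exponential integrate-and-fire neuron driven by an Ornstein-Uhlenbeck ("colored") input current with correlation time tau_s. All parameter values are illustrative, and the sketch replaces the paper's synaptic variables with a simple current.

```python
import math, random

random.seed(1)

def eif_colored_noise(T=2000.0, dt=0.05):
    # EIF membrane parameters (illustrative values; mV and ms)
    tau_m, EL, VT, DT, Vre, Vcut = 20.0, -65.0, -50.0, 2.0, -68.0, -30.0
    # OU ("colored") input current: mean mu, std sigma, correlation time tau_s
    tau_s, mu, sigma = 5.0, 17.0, 4.0
    V, I = EL, mu
    spikes = 0
    for _ in range(int(T / dt)):
        # Euler-Maruyama step for the OU (colored-noise) Langevin equation
        I += dt * (mu - I) / tau_s + sigma * math.sqrt(2.0 * dt / tau_s) * random.gauss(0, 1)
        # EIF voltage dynamics with exponential spike-initiation term
        V += dt * (-(V - EL) + DT * math.exp((V - VT) / DT) + I) / tau_m
        if V >= Vcut:
            V = Vre
            spikes += 1
    return spikes / (T / 1000.0)   # firing rate in Hz

rate = eif_colored_noise()
```

A csPDM-style calculation would evolve the voltage density under this colored input instead of simulating many such neurons.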
5. Komarov M, Krishnan G, Chauvette S, Rulkov N, Timofeev I, Bazhenov M. New class of reduced computationally efficient neuronal models for large-scale simulations of brain dynamics. J Comput Neurosci 2017; 44:1-24. PMID: 29230640; DOI: 10.1007/s10827-017-0663-7
Abstract
During slow-wave sleep, brain electrical activity is dominated by slow (< 1 Hz) electroencephalogram (EEG) oscillations characterized by periodic transitions between active (Up) and silent (Down) states in the membrane voltage of cortical and thalamic neurons. The sleep slow oscillation is believed to play a critical role in the consolidation of recent memories. Past computational studies, based on Hodgkin-Huxley-type neuronal models, revealed possible intracellular and network mechanisms of neuronal activity during sleep; however, they did not explore large-scale cortical network dynamics that depend on collective behavior in large populations of neurons. In this study, we developed a novel class of reduced discrete-time spiking neuron models for large-scale network simulations of wake and sleep dynamics. In addition to the spiking mechanism, the new model implements nonlinearities capturing the effects of the leak current, the Ca2+-dependent K+ current, and the persistent Na+ current, which were found to be critical for transitions between the Up and Down states of the slow oscillation. We applied the new model to study large-scale two-dimensional cortical network activity during slow-wave sleep. Our study explains the traveling-wave dynamics and the characteristic synchronization properties of transitions between Up and Down states of the slow oscillation as observed in vivo in recordings from cats. We further predict a critical role for synaptic noise and slow adaptive currents in spike-sequence replay, as found during sleep-related memory consolidation.
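A minimal example of the map-based (difference-equation) model class this paper extends: the classic two-dimensional Rulkov map, with a fast chaotic variable x and a slow variable y. This is the well-known base map, not the paper's extended model with the additional ionic-current nonlinearities; parameters are standard demonstration values.

```python
def rulkov(n_steps=20000, alpha=4.5, sigma=0.1, mu=0.001):
    # Two-dimensional Rulkov map: x is the fast, spike-generating
    # variable; y evolves slowly and gates bursts of fast activity.
    x, y = -1.0, -2.9
    xs = []
    for _ in range(n_steps):
        # both updates use the current (x, y) pair
        x, y = alpha / (1.0 + x * x) + y, y - mu * (x + 1.0) + mu * sigma
        xs.append(x)
    return xs

xs = rulkov()
```

Because each step is a single algebraic update rather than an ODE solve, networks of hundreds of thousands of such units remain tractable, which is the efficiency argument the abstract makes.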
Collapse
Affiliation(s)
- Maxim Komarov
- Department of Medicine, University of California San Diego, 9500 Gilman Dr, La Jolla, CA 92093, USA
- Giri Krishnan
- Department of Medicine, University of California San Diego, 9500 Gilman Dr, La Jolla, CA 92093, USA
- Sylvain Chauvette
- Centre de recherche de l'Institut universitaire en santé mentale de Québec (CRIUSMQ), Local F-6500, 2601 de la Canardière, Québec, QC G1J 2G3, Canada
- Nikolai Rulkov
- BioCircuits Institute, University of California San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0328, USA
- Igor Timofeev
- Centre de recherche de l'Institut universitaire en santé mentale de Québec (CRIUSMQ), Local F-6500, 2601 de la Canardière, Québec, QC G1J 2G3, Canada; Department of Psychiatry and Neuroscience, Université Laval, Québec, Canada
- Maxim Bazhenov
- Department of Medicine, University of California San Diego, 9500 Gilman Dr, La Jolla, CA 92093, USA
6. Siettos C, Starke J. Multiscale modeling of brain dynamics: from single neurons and networks to mathematical tools. Wiley Interdiscip Rev Syst Biol Med 2016; 8:438-458. PMID: 27340949; DOI: 10.1002/wsbm.1348
Abstract
The extreme complexity of the brain naturally requires mathematical modeling approaches on a large variety of scales, ranging from single-neuron dynamics over the behavior of groups of neurons to neuronal network activity. Thus, the connection from the microscopic scale (single-neuron activity) to macroscopic behavior (emergent dynamics of the collective) and vice versa is key to understanding the brain in its complexity. In this work, we attempt a review of a wide range of approaches, from the modeling of single-neuron dynamics to machine learning, including biophysical as well as data-driven phenomenological models. The models discussed include Hodgkin-Huxley, FitzHugh-Nagumo, coupled oscillators (Kuramoto oscillators, Rössler oscillators, and the Hindmarsh-Rose neuron), integrate-and-fire, networks of neurons, and neural field equations. In addition to the mathematical models, important mathematical methods for multiscale modeling and for reconstructing causal connectivity are sketched. These methods span linear and nonlinear tools from statistics, data analysis, and time series analysis up to differential equations, dynamical systems, and bifurcation theory, including Granger causal connectivity analysis, phase synchronization connectivity analysis, principal component analysis (PCA), independent component analysis (ICA), manifold learning algorithms such as ISOMAP and diffusion maps, and equation-free techniques.
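Of the models listed, the Kuramoto system is compact enough to sketch in a few lines: N phase oscillators with random natural frequencies, coupled through the mean field. Above the critical coupling, the order parameter r approaches 1; all parameter values here are illustrative.

```python
import math, random

random.seed(0)

def kuramoto_order(N=200, K=4.0, T=20.0, dt=0.01):
    # Kuramoto model: d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i),
    # written in mean-field form via r * exp(i*psi) = (1/N) * sum_j exp(i*theta_j).
    omega = [random.gauss(0.0, 1.0) for _ in range(N)]
    theta = [random.uniform(0.0, 2.0 * math.pi) for _ in range(N)]
    for _ in range(int(T / dt)):
        cx = sum(math.cos(t) for t in theta) / N
        sx = sum(math.sin(t) for t in theta) / N
        r = math.hypot(cx, sx)
        psi = math.atan2(sx, cx)
        theta = [t + dt * (w + K * r * math.sin(psi - t)) for t, w in zip(theta, omega)]
    cx = sum(math.cos(t) for t in theta) / N
    sx = sum(math.sin(t) for t in theta) / N
    return math.hypot(cx, sx)   # order parameter r in [0, 1]

r_sync = kuramoto_order(K=4.0)    # strong coupling: synchronized
r_desync = kuramoto_order(K=0.0)  # no coupling: incoherent
```

With Gaussian frequencies of unit width the critical coupling is about 1.6, so K = 4 sits well inside the synchronized regime while K = 0 leaves r at the finite-size noise floor.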
Affiliation(s)
- Constantinos Siettos
- School of Applied Mathematics and Physical Sciences, National Technical University of Athens, Athens, Greece
| | - Jens Starke
- School of Mathematical Sciences, Queen Mary University of London, London, UK
7. Huang CH, Lin CCK, Ju MS. Discontinuous Galerkin finite element method for solving population density functions of cortical pyramidal and thalamic neuronal populations. Comput Biol Med 2015; 57:150-158. PMID: 25557200; DOI: 10.1016/j.compbiomed.2014.12.011
Abstract
Compared with the Monte Carlo method, the population density method is efficient for modeling the collective dynamics of neuronal populations in the human brain. In this method, a population density function describes the probabilistic distribution of the states of all neurons in the population and is governed by a hyperbolic partial differential equation. In the past, the problem was mainly solved using the finite difference method. In a previous study, a continuous Galerkin finite element method was found to be better than the finite difference method for solving this hyperbolic partial differential equation; however, the population density function often has discontinuities, and both methods suffer from numerical stability problems. The goal of this study is to improve the numerical stability of the solution using the discontinuous Galerkin finite element method. To test the performance of the new approach, the interaction of a population of cortical pyramidal neurons with a population of thalamic neurons was simulated. The numerical results showed good agreement between the discontinuous Galerkin finite element and Monte Carlo methods, with excellent convergence and accuracy. The numerical stability problem is resolved by the discontinuous Galerkin finite element method, which has the total-variation-diminishing property. This efficient approach will be employed to simulate the electroencephalogram and the dynamics of the thalamocortical network, which involves three populations: thalamic reticular neurons, thalamocortical neurons, and cortical pyramidal neurons.
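The transport problem at issue can be sketched with a much simpler relative of the discontinuous Galerkin scheme: first-order upwind finite volumes for the 1D advection equation with a discontinuous initial density. Like the total-variation-diminishing property mentioned above, upwinding prevents spurious oscillations at the discontinuity, at the price of numerical diffusion. Grid size and initial profile are arbitrary choices for illustration.

```python
def advect_upwind(n=200, c=1.0, T=0.5, cfl=0.9):
    # First-order upwind scheme for rho_t + c * rho_x = 0 on [0, 1), periodic.
    # With CFL <= 1 each update is a convex combination of neighboring cells,
    # so the scheme is monotone: no new extrema, and total mass is conserved.
    dx = 1.0 / n
    dt = cfl * dx / c
    # discontinuous initial density: a square pulse on [0.25, 0.5)
    rho = [1.0 if 0.25 <= (i + 0.5) * dx < 0.5 else 0.0 for i in range(n)]
    for _ in range(int(T / dt)):
        rho = [rho[i] - c * dt / dx * (rho[i] - rho[i - 1]) for i in range(n)]
    return rho

rho = advect_upwind()
```

A discontinuous Galerkin scheme improves on this sketch by using higher-order polynomials within each cell while keeping the stability guarantee through limiters.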
Affiliation(s)
- Chih-Hsu Huang
- Department of Mechanical Engineering, National Cheng Kung University, Tainan, Taiwan
- Chou-Ching K Lin
- Department of Neurology, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, Taiwan; Medical Device Innovation Center, National Cheng Kung University, Tainan, Taiwan
- Ming-Shaung Ju
- Department of Mechanical Engineering, National Cheng Kung University, Tainan, Taiwan; Medical Device Innovation Center, National Cheng Kung University, Tainan, Taiwan
9. Ly C. A principled dimension-reduction method for the population density approach to modeling networks of neurons with synaptic dynamics. Neural Comput 2013; 25:2682-2708. PMID: 23777517; DOI: 10.1162/neco_a_00489
Abstract
The population density approach to neural network modeling has been utilized in a variety of contexts. The idea is to group many similar noisy neurons into populations and to track, for each population, the probability density function giving the proportion of neurons in a particular state, rather than simulating individual neurons (i.e., Monte Carlo). It is commonly used both for analytic insight and as a time-saving computational tool. The main shortcoming of this method is that when realistic attributes are incorporated in the underlying neuron model, the dimension of the probability density function increases, leading to intractable equations or, at best, computationally intensive simulations. Developing principled dimension-reduction methods is therefore essential for the robustness of these powerful methods, and as a pragmatic tool it would be of great value to the larger theoretical neuroscience community. For exposition of this method, we consider a single uncoupled population of leaky integrate-and-fire neurons receiving only external excitatory synaptic input. We present a dimension-reduction method that reduces a two-dimensional partial integro-differential equation to a computationally efficient one-dimensional system and gives qualitatively accurate results in both the steady-state and nonequilibrium regimes. The method, termed the modified mean-field method, is based entirely on the governing equations, relies on no auxiliary variables or parameters, and requires no fine-tuning. Its principles are potentially applicable to more realistic (i.e., higher-dimensional) neural networks.
Affiliation(s)
- Cheng Ly
- Department of Statistical Sciences and Operations Research, Virginia Commonwealth University Richmond, VA 23284-3083, USA.
10. Bifurcations of large networks of two-dimensional integrate and fire neurons. J Comput Neurosci 2013; 35:87-108. PMID: 23430291; DOI: 10.1007/s10827-013-0442-z
Abstract
Recently, a class of two-dimensional integrate-and-fire models has been used to faithfully model spiking neurons. This class includes the Izhikevich model, the adaptive exponential integrate-and-fire model, and the quartic integrate-and-fire model. The bifurcation types for the individual neurons have been thoroughly analyzed by Touboul (SIAM J Appl Math 68(4):1045-1079, 2008). However, when the models are coupled together to form networks, the networks can display bifurcations that an uncoupled oscillator cannot; for example, a network can transition from firing at a constant rate to burst firing. This paper introduces a technique to reduce a full network of this class of neurons to a mean-field model in the form of a system of switching ordinary differential equations. The reduction uses population density methods and a quasi-steady-state approximation to arrive at the mean-field system. Reduced models are derived for networks with different topologies and different model neurons with biologically derived parameters. The mean-field equations qualitatively and quantitatively describe the bifurcations that the full networks display. Extensions and higher-order approximations are discussed.
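One member of this model class, the Izhikevich model, is compact enough to sketch; changing only the reset parameters (c, d) moves the same two-dimensional system between regular spiking and burst firing. These are the standard published demonstration parameters, with a constant input current.

```python
def izhikevich(a, b, c, d, I=10.0, T=500.0, dt=0.25):
    # Izhikevich two-variable integrate-and-fire model (Izhikevich 2003):
    #   v' = 0.04 v^2 + 5 v + 140 - u + I,   u' = a (b v - u),
    # with reset v -> c, u -> u + d when v reaches 30 mV.
    v, u = -65.0, b * -65.0
    spike_times = []
    t = 0.0
    while t < T:
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:
            v, u = c, u + d
            spike_times.append(t)
        t += dt
    return spike_times

rs = izhikevich(0.02, 0.2, -65.0, 8.0)   # regular-spiking parameters
ch = izhikevich(0.02, 0.2, -50.0, 2.0)   # chattering / burst-firing parameters
```

The mean-field reduction in the paper tracks population-level analogues of v and u (plus synaptic variables) rather than individual trajectories like these.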
11. Shkarayev MS, Kovačič G, Cai D. Topological effects on dynamics in complex pulse-coupled networks of integrate-and-fire type. Phys Rev E Stat Nonlin Soft Matter Phys 2012; 85:036104. PMID: 22587146; DOI: 10.1103/PhysRevE.85.036104
Abstract
For a class of integrate-and-fire, pulse-coupled networks with complex topology, we study the dependence of the pulse rate on the underlying architectural connectivity statistics. We derive the distribution of the pulse rate from this dependence and determine when the underlying scale-free architectural connectivity gives rise to a scale-free pulse-rate distribution. We identify the scaling of the pairwise coupling between the dynamical units in this network class that keeps their pulse rates bounded in the infinite-network limit. In the process, we determine the connectivity statistics for a specific scale-free network grown by preferential attachment.
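The preferential-attachment growth underlying the scale-free architecture can be sketched directly: each new node attaches to existing nodes with probability proportional to their current degree, producing the heavy-tailed degree distribution the paper starts from. Network size and the attachment count m are arbitrary illustrative choices.

```python
import random

random.seed(7)

def preferential_attachment(n=2000, m=2):
    # Grow a scale-free graph by preferential attachment. `urn` holds one
    # entry per edge endpoint, so a uniform draw from it is a
    # degree-proportional draw over nodes.
    urn = [0, 1]               # start from a single edge 0-1
    degree = {0: 1, 1: 1}
    for new in range(2, n):
        chosen = set()
        while len(chosen) < min(m, new):
            chosen.add(random.choice(urn))   # degree-proportional pick
        for t in chosen:
            urn.extend([new, t])
            degree[t] += 1
            degree[new] = degree.get(new, 0) + 1
    return degree

deg = preferential_attachment()
avg_degree = sum(deg.values()) / len(deg)
```

The resulting degree sequence has mean close to 2m but a maximum far above it, the hallmark of the power-law tail whose effect on pulse rates the paper analyzes.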
Affiliation(s)
- Maxim S Shkarayev
- Mathematical Sciences Department, Rensselaer Polytechnic Institute, 110 8th Street, Troy, New York 12180, USA
12. Linaro D, Storace M, Mattia M. Inferring network dynamics and neuron properties from population recordings. Front Comput Neurosci 2011; 5:43. PMID: 22016731; PMCID: PMC3191764; DOI: 10.3389/fncom.2011.00043
Abstract
Understanding the computational capabilities of the nervous system means to “identify” its emergent multiscale dynamics. For this purpose, we propose a novel model-driven identification procedure and apply it to sparsely connected populations of excitatory integrate-and-fire neurons with spike frequency adaptation (SFA). Our method does not characterize the system from its microscopic elements in a bottom-up fashion, and does not resort to any linearization. We investigate networks as a whole, inferring their properties from the response dynamics of the instantaneous discharge rate to brief, nonspecific supra-threshold stimulations. While several available methods assume generic expressions for the system as a black box, we adopt a mean-field theory for the evolution of the network transparently parameterized by identified elements (such as dynamic timescales), which are in turn non-trivially related to single-neuron properties. In particular, from the elicited transient responses, the input-output gain function of the neurons in the network is extracted, and direct links to the microscopic level are made available: indeed, we show how to extract the decay time constant of the SFA, the absolute refractory period, and the average synaptic efficacy. In addition, and contrary to previous attempts, our method captures the system dynamics across bifurcations separating qualitatively different dynamical regimes. The robustness and generality of the methodology are tested on controlled simulations, which show good agreement between theoretically expected and identified values. The assumptions behind the underlying theoretical framework make the method readily applicable to biological preparations such as cultured neuron networks and in vitro brain slices.
Affiliation(s)
- Daniele Linaro
- Department of Biophysical and Electronic Engineering, University of Genoa Genoa, Italy
13. Rangan AV. Diagrammatic expansion of pulse-coupled network dynamics in terms of subnetworks. Phys Rev E Stat Nonlin Soft Matter Phys 2009; 80:036101. PMID: 19905174; DOI: 10.1103/PhysRevE.80.036101
Abstract
We introduce a framework wherein various measurements of a pulse-coupled network's stationary dynamics can be expanded in terms of the network's connectivity. Such measurements include the occurrence rate of pulses (e.g., firing rates within a neuronal network) as well as higher-order correlations in activity between various nodes in the network. The various terms in this expansion can be interpreted as diagrams corresponding to subnetworks of the original network, which span both space (in terms of the network's graph) as well as time (in the sense of causality).
Affiliation(s)
- Aaditya V Rangan
- Courant Institute of Mathematical Sciences, 251 Mercer Street, New York, New York 10012, USA
14. Kovacic G, Tao L, Rangan AV, Cai D. Fokker-Planck description of conductance-based integrate-and-fire neuronal networks. Phys Rev E Stat Nonlin Soft Matter Phys 2009; 80:021904. PMID: 19792148; DOI: 10.1103/PhysRevE.80.021904
Abstract
The steady dynamics of coupled conductance-based integrate-and-fire neuronal networks in the limit of small fluctuations are studied via the equilibrium states of a Fokker-Planck equation. An asymptotic approximation for the membrane-potential probability density function is derived, and the corresponding gain curves are found. Validity conditions for the Fokker-Planck description are discussed and verified via direct numerical simulations.
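In the simplest current-based, one-dimensional caricature (not the conductance-based setting of the paper), the Fokker-Planck equation for the membrane-potential density and its equilibrium condition read:

```latex
% density \rho(v,t) of membrane potentials; drift \mu(v), diffusion \sigma^2/2
\partial_t \rho = -\partial_v \left[ \mu(v)\,\rho \right]
                + \frac{\sigma^{2}}{2}\,\partial_v^{2}\rho ,
\qquad
J(v) = \mu(v)\,\rho - \frac{\sigma^{2}}{2}\,\partial_v \rho .

% Equilibrium: the probability flux J equals the steady firing rate \nu
% between reset and threshold, with absorbing boundary \rho(v_{\mathrm{th}}) = 0;
% normalization \int \rho \, dv = 1 then fixes the gain curve \nu(\mu,\sigma).
```

The asymptotic approximation in the paper plays the role of solving this constant-flux problem in the small-fluctuation limit, with conductance rather than current noise.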
Affiliation(s)
- Gregor Kovacic
- Department of Mathematical Sciences, Rensselaer Polytechnic Institute, Troy, New York 12180, USA
15. Ly C, Tranchina D. Spike train statistics and dynamics with synaptic input from any renewal process: a population density approach. Neural Comput 2009; 21:360-396. PMID: 19431264; DOI: 10.1162/neco.2008.03-08-743
Abstract
In the probability density function (PDF) approach to neural network modeling, a common simplifying assumption is that the arrival times of elementary postsynaptic events are governed by a Poisson process. This assumption ignores temporal correlations in the input that sometimes have important physiological consequences. We extend PDF methods to models with synaptic event times governed by any modulated renewal process. We focus on the integrate-and-fire neuron with instantaneous synaptic kinetics and a random elementary excitatory postsynaptic potential (EPSP), A. Between presynaptic events, the membrane voltage, v, decays exponentially toward rest, while s, the time since the last synaptic input event, evolves with unit velocity. When a synaptic event arrives, v jumps by A, and s is reset to zero. If v crosses the threshold voltage, an action potential occurs, and v is reset to v_reset. The probability per unit time of a synaptic event at time t, given the elapsed time s since the last event, h(s, t), depends on the specifics of the renewal process. We study how the regularity of the train of synaptic input events affects the output spike rate, the PDF and coefficient of variation (CV) of the interspike interval, and the autocorrelation function of the output spike train. In the limit of a deterministic, clocklike train of input events, the PDF of the interspike interval converges to a sum of delta functions, with coefficients determined by the PDF for A. The limiting autocorrelation function of the output spike train is a sum of delta functions whose coefficients fall under a damped oscillatory envelope. When the EPSP CV, σ_A/μ_A, is equal to 0.45, a CV for the inter-synaptic-event interval of σ_T/μ_T = 0.35 is functionally equivalent to a deterministic periodic train of synaptic input events (CV = 0) with respect to spike statistics. We discuss the relevance to neural network simulations.
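A Monte Carlo sketch of the central question above: an integrate-and-fire neuron driven by jump input whose inter-event intervals are gamma-distributed, so that shape k = 1 recovers Poisson input while larger k means a more regular, clocklike train. Parameters are illustrative, chosen so the mean drive sits near threshold; the jump size is fixed rather than random.

```python
import math, random

random.seed(3)

def lif_renewal_input(shape_k, T=200.0):
    # LIF with instantaneous synaptic kinetics: v decays exponentially
    # between events and jumps by A on each event of a gamma renewal process.
    tau, v_th, v_reset = 0.02, 1.0, 0.0
    rate_in, A = 500.0, 0.1          # mean input rate (1/s) and jump size
    mean_iei = 1.0 / rate_in
    v, t, t_last_spike = 0.0, 0.0, None
    isis = []
    while t < T:
        iei = random.gammavariate(shape_k, mean_iei / shape_k)
        v *= math.exp(-iei / tau)    # exact decay between events
        t += iei
        v += A                        # jump on event arrival
        if v >= v_th:
            v = v_reset
            if t_last_spike is not None:
                isis.append(t - t_last_spike)
            t_last_spike = t
    mean = sum(isis) / len(isis)
    var = sum((x - mean) ** 2 for x in isis) / len(isis)
    return math.sqrt(var) / mean     # output interspike-interval CV

cv_poisson = lif_renewal_input(1.0)   # Poisson input (input CV = 1)
cv_regular = lif_renewal_input(20.0)  # near-clocklike input (input CV ~ 0.22)
```

More regular input yields more regular output, which is the trend the paper quantifies exactly through the density over (v, s).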
Affiliation(s)
- Cheng Ly
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA 15260, USA.
16. Liu CY, Nykamp DQ. A kinetic theory approach to capturing interneuronal correlation: the feed-forward case. J Comput Neurosci 2008; 26:339-368. PMID: 18987967; DOI: 10.1007/s10827-008-0116-4
Abstract
We present an approach for using kinetic theory to capture the first- and second-order statistics of neuronal activity. We coarse-grain neuronal networks into populations of neurons and calculate the population-averaged firing rate and output cross-correlation in response to time-varying correlated input. We derive coupling equations for the populations based on the first- and second-order statistics of the network connectivity. This coupling scheme rests on the hypothesis that second-order statistics of the network connectivity are sufficient to determine second-order statistics of neuronal activity. We implement a kinetic theory representation of a simple feed-forward network and demonstrate that the kinetic theory model captures key aspects of the emergence and propagation of correlations in the network, as long as the correlations do not become too strong. By analyzing the correlated activity of feed-forward networks with a variety of connectivity patterns, we provide evidence supporting our hypothesis of the sufficiency of second-order connectivity statistics.
Affiliation(s)
- Chin-Yueh Liu
- School of Mathematics, University of Minnesota, 206 Church St., Minneapolis, MN 55455, USA
17. Marreiros AC, Kiebel SJ, Daunizeau J, Harrison LM, Friston KJ. Population dynamics under the Laplace assumption. Neuroimage 2008; 44:701-714. PMID: 19013532; DOI: 10.1016/j.neuroimage.2008.10.008
Abstract
In this paper, we describe a generic approach to modelling dynamics in neuronal populations. This approach models a full density on the states of neuronal populations but finesses this high-dimensional problem by reformulating density dynamics in terms of ordinary differential equations on the sufficient statistics of the densities considered (cf. the method of moments). The particular form for the population density we adopt is a Gaussian density (cf. the Laplace assumption). This means population dynamics are described by equations governing the evolution of the population's mean and covariance. We derive these equations from the Fokker-Planck formalism and illustrate their application to a conductance-based model of neuronal exchanges. One interesting aspect of this formulation is that we can uncouple the mean and covariance to furnish a neural-mass model, which rests only on the population's mean. This enables us to compare equivalent mean-field and neural-mass models of the same populations and to evaluate, quantitatively, the contribution of population variance to the expected dynamics. The mean-field model presented here will form the basis of a dynamic causal model of observed electromagnetic signals in future work.
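For a linear (Ornstein-Uhlenbeck) caricature of a population state variable, the Laplace assumption is exact, and the density dynamics collapse to two ordinary differential equations for the mean m and variance S. Parameter values are illustrative.

```python
def moment_equations(theta=2.0, mu_inf=-60.0, sigma=3.0, T=5.0, dt=1e-3):
    # Gaussian (Laplace) closure of the Fokker-Planck equation for
    # dv = -theta*(v - mu_inf) dt + sigma dW:
    #   dm/dt = -theta * (m - mu_inf)
    #   dS/dt = -2 * theta * S + sigma**2
    m, S = -70.0, 0.0
    for _ in range(int(T / dt)):
        m += dt * (-theta * (m - mu_inf))
        S += dt * (-2.0 * theta * S + sigma ** 2)
    return m, S

m, S = moment_equations()
# Stationary values: m -> mu_inf, S -> sigma**2 / (2 * theta)
```

Setting S to zero and tracking only m is the "neural-mass" uncoupling the abstract describes; comparing the two trajectories quantifies what the variance contributes.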
Affiliation(s)
- André C Marreiros
- The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, London, UK.
18. Oscillations and synchrony in large-scale cortical network models. J Biol Phys 2008; 34:279-299. PMID: 19669478; DOI: 10.1007/s10867-008-9079-y
Abstract
Intrinsic neuronal and circuit properties control the responses of large ensembles of neurons by creating spatiotemporal patterns of activity that are used for sensory processing, memory formation, and other cognitive tasks. The modeling of such systems requires computationally efficient single-neuron models capable of displaying realistic response properties. We developed a set of reduced models based on difference equations (map-based models) to simulate the intrinsic dynamics of biological neurons. These phenomenological models were designed to capture the main response properties of specific types of neurons while ensuring realistic model behavior across a sufficient dynamic range of inputs. This approach allows for fast simulations and efficient parameter space analysis of networks containing hundreds of thousands of neurons of different types using a conventional workstation. Drawing on results obtained using large-scale networks of map-based neurons, we discuss spatiotemporal cortical network dynamics as a function of parameters that affect synaptic interactions and intrinsic states of the neurons.
19
Abstract
A mathematical model of general character for the dynamic description of coupled neural oscillators is presented. The population approach that is employed applies equally to coupled cells as to populations of such coupled cells. The formulation includes stochasticity and preserves details of precisely firing neurons. Based on the generally accepted view of cortical wiring, this formulation is applied to the retinal ganglion cell (RGC)/lateral geniculate nucleus (LGN) relay cell system of the early mammalian visual system. The smallness of quantal voltage jumps at the retinal level permits a Fokker-Planck approximation for the RGC contribution; however, the LGN description requires the use of finite jumps, which for fast synaptic dynamics appear as finite jumps in the membrane potential. Analyses of equilibrium spiking behavior for both the deterministic and stochastic cases are presented. Green's function methods form the basis for the asymptotic and exact results that are presented. This determines the spiking ratio (i.e., the number of RGC arrivals per LGN spike), which is the reciprocal of the transfer ratio, under wide circumstances. Criteria for spiking regimes, in terms of the relatively few parameters of the model, are presented. Under reasonable hypotheses, it is shown that the transfer ratio is ≤1/2, in the absence of input from other areas. Thus, the model suggests that the LGN/RGC system may be a relatively unsophisticated spike editor. In the absence of other input, the system is designed to fire an LGN spike only when two or more RGC spikes appear in a relatively short time. Transfer ratios that briefly exceed 1/2 (but are less than 1) have been recorded in the laboratory. Inclusion of brain stem input has been shown to provide a signal that elevates the transfer ratio (Ozaki & Kaplan, 2006). A model that includes this contribution is also presented.
Affiliation(s)
- Lawrence Sirovich
- Laboratory of Applied Mathematics, Mt. Sinai School of Medicine, New York, NY 10029, U.S.A
20
Modolo J, Garenne A, Henry J, Beuter A. Development and validation of a neural population model based on the dynamics of a discontinuous membrane potential neuron model. J Integr Neurosci 2008; 6:625-55. [PMID: 18181271 DOI: 10.1142/s0219635207001672] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2007] [Accepted: 11/01/2007] [Indexed: 11/18/2022] Open
Abstract
The major goal of this study was to develop a population density based model derived from statistical mechanics based on the dynamics of a discontinuous membrane potential neuron model. A secondary goal was to validate this model by comparing results from a direct simulation approach on the one hand and our population based approach on the other hand. Comparisons between the two approaches in the case of a synaptically uncoupled and a synaptically coupled neural population produced satisfactory qualitative agreement in terms of firing rate and mean membrane potential. Reasonable quantitative agreement was also obtained for these variables in the simulations performed. The results of this work based on the dynamics of a discontinuous membrane potential neuron model provide a basis to simulate phenomenologically large-scale neuronal networks with a reasonably short computing time.
Affiliation(s)
- Julien Modolo
- Institut de Cognitique, Université de Bordeaux, 33076 Bordeaux, France.
21
Rangan AV, Kovacic G, Cai D. Kinetic theory for neuronal networks with fast and slow excitatory conductances driven by the same spike train. PHYSICAL REVIEW. E, STATISTICAL, NONLINEAR, AND SOFT MATTER PHYSICS 2008; 77:041915. [PMID: 18517664 DOI: 10.1103/physreve.77.041915] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/23/2007] [Revised: 12/29/2007] [Indexed: 05/26/2023]
Abstract
We present a kinetic theory for all-to-all coupled networks of identical, linear, integrate-and-fire, excitatory point neurons in which a fast and a slow excitatory conductance are driven by the same spike train in the presence of synaptic failure. The maximal-entropy principle guides us in deriving a set of three (1+1)-dimensional kinetic moment equations from a Boltzmann-like equation describing the evolution of the one-neuron probability density function. We explain the emergence of correlation terms in the kinetic moment and Boltzmann-like equations as a consequence of simultaneous activation of both the fast and slow excitatory conductances and furnish numerical evidence for their importance in correctly describing the coarse-grained dynamics of the underlying neuronal network.
Affiliation(s)
- Aaditya V Rangan
- Courant Institute of Mathematical Sciences, New York University, 251 Mercer Street, New York, NY 10012-1185, USA
22
Ly C, Tranchina D. Critical analysis of dimension reduction by a moment closure method in a population density approach to neural network modeling. Neural Comput 2007; 19:2032-92. [PMID: 17571938 DOI: 10.1162/neco.2007.19.8.2032] [Citation(s) in RCA: 68] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Computational techniques within the population density function (PDF) framework have provided time-saving alternatives to classical Monte Carlo simulations of neural network activity. Efficiency of the PDF method is lost as the underlying neuron model is made more realistic and the number of state variables increases. In a detailed theoretical and computational study, we elucidate strengths and weaknesses of dimension reduction by a particular moment closure method (Cai, Tao, Shelley, & McLaughlin, 2004; Cai, Tao, Rangan, & McLaughlin, 2006) as applied to integrate-and-fire neurons that receive excitatory synaptic input only. When the unitary postsynaptic conductance event has a single-exponential time course, the evolution equation for the PDF is a partial differential-integral equation in two state variables, voltage and excitatory conductance. In the moment closure method, one approximates the conditional kth centered moment of excitatory conductance given voltage by the corresponding unconditioned moment. The result is a system of k coupled partial differential equations with one state variable, voltage, and k coupled ordinary differential equations. Moment closure at k = 2 works well, and at k = 3 works even better, in the regime of high dynamically varying synaptic input rates. Both closures break down at lower synaptic input rates. Phase-plane analysis of the k = 2 problem with typical parameters proves, and reveals why, no steady-state solutions exist below a synaptic input rate that gives a firing rate of 59 s(-1) in the full 2D problem. Closure at k = 3 fails for similar reasons. Low firing-rate solutions can be obtained only with parameters for the amplitude or kinetics (or both) of the unitary postsynaptic conductance event that are on the edge of the physiological range. We conclude that this dimension-reduction method gives ill-posed problems for a wide range of physiological parameters, and we suggest future directions.
Affiliation(s)
- Cheng Ly
- Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, USA.
23
Casti A, Hayot F, Xiao Y, Kaplan E. A simple model of retina-LGN transmission. J Comput Neurosci 2007; 24:235-52. [PMID: 17763931 DOI: 10.1007/s10827-007-0053-7] [Citation(s) in RCA: 51] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2007] [Revised: 07/04/2007] [Accepted: 07/20/2007] [Indexed: 11/29/2022]
Abstract
To gain a deeper understanding of the transmission of visual signals from retina through the lateral geniculate nucleus (LGN), we have used a simple leaky integrate-and-fire model to simulate a relay cell in the LGN. The simplicity of the model was motivated by two questions: (1) Can an LGN model that is driven by a retinal spike train recorded as synaptic ('S') potentials, but does not include a diverse array of ion channels, nor feedback inputs from the cortex, brainstem, and thalamic reticular nucleus, accurately simulate the LGN discharge on a spike-for-spike basis? (2) Are any special synaptic mechanisms, beyond simple summation of currents, necessary to model experimental recordings? We recorded cat relay cell responses to spatially homogeneous small or large spots, with luminance that was rapidly modulated in a pseudo-random fashion. Model parameters for each cell were optimized with a Simplex algorithm using a short segment of the recording. The model was then tested on a much longer, distinct data set consisting of responses to numerous repetitions of the noisy stimulus. For LGN cells that spiked in response to a sufficiently large fraction of retinal inputs, we found that this simplified model accurately predicted the firing times of LGN discharges. This suggests that modulations of the efficacy of the retino-geniculate synapse by pre-synaptic facilitation or depression are not necessary in order to account for the LGN responses generated by our stimuli, and that post-synaptic summation is sufficient.
Affiliation(s)
- Alexander Casti
- Fishburg Department of Neuroscience, Mount Sinai School of Medicine, 1 Gustave L. Levy Place, New York, NY 10029-6574, USA.
24
Williams GSB, Huertas MA, Sobie EA, Jafri MS, Smith GD. A probability density approach to modeling local control of calcium-induced calcium release in cardiac myocytes. Biophys J 2007; 92:2311-28. [PMID: 17237200 PMCID: PMC1864826 DOI: 10.1529/biophysj.106.099861] [Citation(s) in RCA: 55] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
We present a probability density approach to modeling localized Ca2+ influx via L-type Ca2+ channels and Ca2+-induced Ca2+ release mediated by clusters of ryanodine receptors during excitation-contraction coupling in cardiac myocytes. Coupled advection-reaction equations are derived relating the time-dependent probability density of subsarcolemmal subspace and junctional sarcoplasmic reticulum [Ca2+] conditioned on "Ca2+ release unit" state. When these equations are solved numerically using a high-resolution finite difference scheme and the resulting probability densities are coupled to ordinary differential equations for the bulk myoplasmic and sarcoplasmic reticulum [Ca2+], a realistic but minimal model of cardiac excitation-contraction coupling is produced. Modeling Ca2+ release unit activity using this probability density approach avoids the computationally demanding task of resolving spatial aspects of global Ca2+ signaling, while accurately representing heterogeneous local Ca2+ signals in a population of diadic subspaces and junctional sarcoplasmic reticulum depletion domains. The probability density approach is validated for a physiologically realistic number of Ca2+ release units and benchmarked for computational efficiency by comparison to traditional Monte Carlo simulations. In simulated voltage-clamp protocols, both the probability density and Monte Carlo approaches to modeling local control of excitation-contraction coupling produce high-gain Ca2+ release that is graded with changes in membrane potential, a phenomenon not exhibited by so-called "common pool" models. However, a probability density calculation can be significantly faster than the corresponding Monte Carlo simulation, especially when cellular parameters are such that diadic subspace [Ca2+] is in quasistatic equilibrium with junctional sarcoplasmic reticulum [Ca2+] and, consequently, univariate rather than multivariate probability densities may be employed.
Affiliation(s)
- George S B Williams
- Department of Applied Science, College of William and Mary, Williamsburg, Virginia 23187, USA
25
Apfaltrer F, Ly C, Tranchina D. Population density methods for stochastic neurons with realistic synaptic kinetics: firing rate dynamics and fast computational methods. NETWORK (BRISTOL, ENGLAND) 2006; 17:373-418. [PMID: 17162461 DOI: 10.1080/09548980601069787] [Citation(s) in RCA: 31] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
An outstanding problem in computational neuroscience is how to use population density function (PDF) methods to model neural networks with realistic synaptic kinetics in a computationally efficient manner. We explore an application of two-dimensional (2-D) PDF methods to simulating electrical activity in networks of excitatory integrate-and-fire neurons. We formulate a pair of coupled partial differential-integral equations describing the evolution of PDFs for neurons in non-refractory and refractory pools. The population firing rate is given by the total flux of probability across the threshold voltage. We use an operator-splitting method to reduce computation time. We report on speed and accuracy of PDF results and compare them to those from direct, Monte-Carlo simulations. We compute temporal frequency response functions for the transduction from the rate of postsynaptic input to population firing rate, and examine its dependence on background synaptic input rate. The behaviors in the 1-D and 2-D cases--corresponding to instantaneous and non-instantaneous synaptic kinetics, respectively--differ markedly from those for a somewhat different transduction: from injected current input to population firing rate output (Brunel et al. 2001; Fourcaud & Brunel 2002). We extend our method by adding inhibitory input, consider a 3-D to 2-D dimension reduction method, demonstrate its limitations, and suggest directions for future study.
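The operator-splitting step used to reduce computation time can be illustrated in a toy setting, assuming a scalar advection-decay equation rather than the 2-D PDF equations of the paper; because the two sub-operators here commute and each sub-step is solved exactly (a periodic grid shift and a scalar decay), the split solution can be checked against the analytic one:

```python
import math

# Toy sketch of operator splitting: evolve u_t = -a*u_x - b*u by alternating
# sub-steps, each handled by a specialized exact solver. Sub-operator A is
# advection at speed a on a periodic grid (an exact one-cell shift when
# dt = dx/a); sub-operator B is uniform decay (a scalar multiply).

N = 200
DX = 1.0 / N
A_SPEED = 1.0
B_DECAY = 0.5
DT = DX / A_SPEED                      # one grid cell per step -> exact shift

def step(u, dt):
    decay = math.exp(-B_DECAY * dt)
    u = [x * decay for x in u]         # sub-step B: decay
    u = [u[-1]] + u[:-1]               # sub-step A: shift right by one cell
    return u

x = [i * DX for i in range(N)]
u = [math.sin(2 * math.pi * xi) for xi in x]
for _ in range(N):                     # advect once around the periodic domain
    u = step(u, DT)

t = N * DT
exact = [math.sin(2 * math.pi * ((xi - A_SPEED * t) % 1.0)) * math.exp(-B_DECAY * t)
         for xi in x]
err = max(abs(a - b) for a, b in zip(u, exact))
```

In the actual 2-D PDF setting the sub-operators (voltage advection and conductance relaxation) do not commute, so splitting incurs an O(dt) error, or O(dt^2) with Strang symmetrization; that error is the trade-off accepted in exchange for using fast one-dimensional solvers on each sub-step.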
Affiliation(s)
- Felix Apfaltrer
- Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, USA
26
Huertas MA, Smith GD. A multivariate population density model of the dLGN/PGN relay. J Comput Neurosci 2006; 21:171-89. [PMID: 16788765 DOI: 10.1007/s10827-006-7753-2] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2005] [Revised: 02/07/2006] [Accepted: 02/14/2006] [Indexed: 10/24/2022]
Abstract
Using a population density approach we study the dynamics of two interacting collections of integrate-and-fire-or-burst (IFB) neurons representing thalamocortical (TC) cells from the dorsal lateral geniculate nucleus (dLGN) and thalamic reticular (RE) cells from the perigeniculate nucleus (PGN). Each population of neurons is described by a multivariate probability density function that satisfies a conservation equation with appropriately defined probability fluxes and boundary conditions. The state variables of each neuron are the membrane potential and the inactivation gating variable of the low-threshold Ca2+ current I(T). The synaptic coupling of the populations and external excitatory drive are modeled by instantaneous jumps in the membrane potential of postsynaptic neurons. The population density model is validated by comparing its response to time-varying retinal input to Monte Carlo simulations of the corresponding IFB network composed of 100 to 1,000 cells per population. In the absence of retinal input, the population density model exhibits rhythmic bursting similar to the 7 to 14 Hz oscillations associated with slow wave sleep that require feedback inhibition from RE to TC cells. When the TC and RE cell potassium leakage conductances are adjusted to represent cholinergic neuromodulation and arousal of the network, rhythmic bursting of the probability density model may either persist or be eliminated depending on the number of excitatory (TC to RE) or inhibitory (RE to TC) connections made by each presynaptic cell. When the probability density model is stimulated with constant retinal input (10-100 spikes/sec), a wide range of responses are observed depending on cellular parameters and network connectivity. These include asynchronous burst and tonic spikes, sleep spindle-like rhythmic bursting, and oscillations in population firing rate that are distinguishable from sleep spindles due to their amplitude, frequency, or the presence of tonic spikes.
In this context of dLGN/PGN network modeling, we find the population density approach using 2,500 mesh points and resolving membrane voltage to 0.7 mV is over 30 times more efficient than 1,000-cell Monte Carlo simulations.
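A minimal sketch of the integrate-and-fire-or-burst (IFB) neuron underlying both populations is given below; the parameters and the stimulation protocol are illustrative assumptions, not values from the paper. Hyperpolarization de-inactivates the low-threshold Ca2+ current I(T) (the gate h recovers), and release from inhibition then produces a post-inhibitory rebound burst:

```python
# Single IFB neuron: leaky integrate-and-fire voltage v plus the slow
# inactivation gate h of the low-threshold Ca2+ current I_T, whose activation
# is treated as instantaneous (a step function of v). Illustrative units:
# mV, ms, and conductances scaled so that capacitance is 1.

G_L, V_L = 0.05, -60.0          # leak conductance and reversal
G_T, V_T = 0.07, 120.0          # T-current conductance and Ca2+ reversal
V_H = -65.0                     # activation/inactivation threshold of I_T
TAU_HP, TAU_HM = 100.0, 20.0    # h recovery / inactivation time constants (ms)
V_SPIKE, V_RESET = -35.0, -50.0

def simulate(t_end=400.0, dt=0.01):
    v, h, t, spikes = -65.0, 0.0, 0.0, []
    while t < t_end:
        i_app = -1.0 if t < 200.0 else 0.0    # hyperpolarize, then release
        m_inf = 1.0 if v >= V_H else 0.0      # instantaneous T-current activation
        dv = -G_L * (v - V_L) - G_T * m_inf * h * (v - V_T) + i_app
        dh = (1.0 - h) / TAU_HP if v < V_H else -h / TAU_HM
        v += dt * dv
        h += dt * dh
        if v >= V_SPIKE:                      # fire-and-reset
            spikes.append(t)
            v = V_RESET
        t += dt
    return spikes

spikes = simulate()
# all spikes arrive as a rebound burst after inhibition is released (t > 200 ms)
```

In the network setting, feedback inhibition from RE to TC cells plays the role of the hyperpolarizing current step here, which is why the rhythmic bursting described above requires the inhibitory loop.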
Affiliation(s)
- Marco A Huertas
- Department of Applied Science, College of William and Mary, Williamsburg, VA 23187, USA.
27
Huertas MA, Smith GD. A two-dimensional population density approach to modeling the dLGN/PGN network. Neurocomputing 2006. [DOI: 10.1016/j.neucom.2005.12.093] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
28
Sirovich L, Omurtag A, Lubliner K. Dynamics of neural populations: stability and synchrony. NETWORK (BRISTOL, ENGLAND) 2006; 17:3-29. [PMID: 16613792 DOI: 10.1080/09548980500421154] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/08/2023]
Abstract
A population formulation of neuronal activity is employed to study an excitatory network of (spiking) neurons receiving external input as well as recurrent feedback. At relatively low levels of feedback, the network exhibits time stationary asynchronous behavior. A stability analysis of this time stationary state leads to an analytical criterion for the critical gain at which time asynchronous behavior becomes unstable. At instability the dynamics can undergo a supercritical Hopf bifurcation and the population passes to a synchronous state. Under different conditions it can pass to synchrony through a subcritical Hopf bifurcation. And at high gain a network can reach a runaway state, in finite time, after which the network no longer supports bounded solutions. The introduction of time delayed feedback leads to a rich range of phenomena. For example, for a given external input, increasing gain produces transition from asynchrony, to synchrony, to asynchrony and finally can lead to divergence. Time delay is also shown to strongly mollify the amplitude of synchronous oscillations. Perhaps of general importance is the result that synchronous behavior can exist only for a narrow range of time delays, which range is an order of magnitude smaller than periods of oscillation.
Affiliation(s)
- Lawrence Sirovich
- Laboratory of Applied Mathematics, Mount Sinai School of Medicine, 1 Gustave L. Levy Place, New York, NY 10029, USA.
29
Huertas MA, Groff JR, Smith GD. Feedback Inhibition and Throughput Properties of an Integrate-and-Fire-or-Burst Network Model of Retinogeniculate Transmission. J Comput Neurosci 2005; 19:147-80. [PMID: 16133817 DOI: 10.1007/s10827-005-1084-6] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2004] [Revised: 03/08/2005] [Accepted: 03/21/2005] [Indexed: 10/25/2022]
Abstract
Computational modeling has played an important role in the dissection of the biophysical basis of rhythmic oscillations in thalamus that are associated with sleep and certain forms of epilepsy. In contrast, the dynamic filter properties of thalamic relay nuclei during states of arousal are not well understood. Here we present a modeling and simulation study of the throughput properties of the visually driven dorsal lateral geniculate nucleus (dLGN) in the presence of feedback inhibition from the perigeniculate nucleus (PGN). We employ thalamocortical (TC) and thalamic reticular (RE) versions of a minimal integrate-and-fire-or-burst type model and a one-dimensional, two-layered network architecture. Potassium leakage conductances control the neuromodulatory state of the network and eliminate rhythmic bursting in the presence of spontaneous input (i.e., wake up the network). The aroused dLGN/PGN network model is subsequently stimulated by spatially homogeneous spontaneous retinal input or spatio-temporally patterned input consistent with the activity of X-type retinal ganglion cells during full-field or drifting grating visual stimulation. The throughput properties of this visually-driven dLGN/PGN network model are characterized and quantified as a function of stimulus parameters such as contrast, temporal frequency, and spatial frequency. During low-frequency oscillatory full-field stimulation, feedback inhibition from RE neurons often leads to TC neuron burst responses, while at high frequency tonic responses dominate. Depending on the average rate of stimulation, contrast level, and temporal frequency of modulation, the TC and RE cell bursts may or may not be phase-locked to the visual stimulus. During drifting-grating stimulation, phase-locked bursts often occur for sufficiently high contrast so long as the spatial period of the grating is not small compared to the synaptic footprint length, i.e., the spatial scale of the network connectivity.
Affiliation(s)
- Marco A Huertas
- Department of Applied Science, College of William and Mary, Williamsburg, VA 23187, USA
30
Mazzag B, Tignanelli CJ, Smith GD. The effect of residual Ca(2+) on the stochastic gating of Ca(2+)-regulated Ca(2+) channel models. J Theor Biol 2005; 235:121-50. [PMID: 15833318 DOI: 10.1016/j.jtbi.2004.12.024] [Citation(s) in RCA: 33] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2004] [Revised: 12/23/2004] [Accepted: 12/27/2004] [Indexed: 11/26/2022]
Abstract
Single-channel models of intracellular Ca(2+) channels such as the inositol 1,4,5-trisphosphate receptor and ryanodine receptor often assume that Ca(2+)-dependent transitions are mediated by a constant background [Ca(2+)] as opposed to a dynamic [Ca(2+)] representing the formation and collapse of a localized Ca(2+) domain. This assumption neglects the fact that Ca(2+) released by open intracellular Ca(2+) channels may influence subsequent gating through the processes of Ca(2+)-activation or -inactivation. We study the effect of such "residual Ca(2+)" from previous channel opening on the stochastic gating of minimal and realistic single-channel models coupled to a restricted cytoplasmic compartment. Using Monte Carlo simulation as well as analytical and numerical solution of a system of advection-reaction equations for the probability density of the domain [Ca(2+)] conditioned on the state of the channel, we determine how the steady-state open probability (p(open)) of single-channel models of Ca(2+)-regulated Ca(2+) channels depends on the time constant for Ca(2+) domain formation and collapse. As expected, p(open) for a minimal model including Ca(2+) activation increases as the domain time constant becomes large compared to the open and closed dwell times of the channel, that is, on average the channel is activated by residual Ca(2+) from previous openings. Interestingly, p(open) for a channel model that is inactivated by Ca(2+) also increases as a function of the domain time constant when the maximum domain [Ca(2+)] is fixed, because slow formation of the Ca(2+) domain attenuates Ca(2+)-mediated inactivation. Conversely, when the source amplitude of the channel is fixed, increasing the domain time constant leads to elevated domain [Ca(2+)] and decreased open probability. 
Consistent with these observations, a realistic De Young-Keizer-like IP(3)R model responds to residual Ca(2+) with a steady-state open probability that is a monotonic function of the domain time constant, though minimal models that include both Ca(2+)-activation and -inactivation show more complex behavior. We show how the probability density approach described here can be generalized for arbitrarily complex channel models and for any value of the domain time constant. In addition, we present a comparatively simple numerical procedure for estimating p(open) for models of Ca(2+)-regulated Ca(2+) channels in the limit of a very fast or very slow Ca(2+) domain. When the ordinary differential equation for the [Ca(2+)] in a restricted cytoplasmic compartment is replaced by a partial differential equation for the buffered diffusion of intracellular Ca(2+) in a homogeneous isotropic cytosol, we find the dependence of p(open) on the buffer time constant is qualitatively similar to the above-mentioned results.
Affiliation(s)
- Borbala Mazzag
- Department of Applied Science, College of William and Mary, Williamsburg, VA 23187, USA
31
Abstract
Cortical activity is the product of interactions among neuronal populations. Macroscopic electrophysiological phenomena are generated by these interactions. In principle, the mechanisms of these interactions afford constraints on biologically plausible models of electrophysiological responses. In other words, the macroscopic features of cortical activity can be modelled in terms of the microscopic behaviour of neurons. An evoked response potential (ERP) is the mean electrical potential measured from an electrode on the scalp, in response to some event. The purpose of this paper is to outline a population density approach to modelling ERPs. We propose a biologically plausible model of neuronal activity that enables the estimation of physiologically meaningful parameters from electrophysiological data. The model encompasses four basic characteristics of neuronal activity and organization: (i) neurons are dynamic units, (ii) driven by stochastic forces, (iii) organized into populations with similar biophysical properties and response characteristics and (iv) multiple populations interact to form functional networks. This leads to a formulation of population dynamics in terms of the Fokker-Planck equation. The solution of this equation is the temporal evolution of a probability density over state-space, representing the distribution of an ensemble of trajectories. Each trajectory corresponds to the changing state of a neuron. Measurements can be modelled by taking expectations over this density, e.g. mean membrane potential, firing rate or energy consumption per neuron. The key motivation behind our approach is that ERPs represent an average response over many neurons. This means it is sufficient to model the probability density over neurons, because this implicitly models their average state. Although the dynamics of each neuron can be highly stochastic, the dynamics of the density is not. 
This means we can use Bayesian inference and estimation tools that have already been established for deterministic systems. The potential importance of modelling density dynamics (as opposed to more conventional neural mass models) is that they include interactions among the moments of neuronal states (e.g. the mean depolarization may depend on the variance of synaptic currents through nonlinear mechanisms). Here, we formulate a population model, based on biologically informed model-neurons with spike-rate adaptation and synaptic dynamics. Neuronal sub-populations are coupled to form an observation model, with the aim of estimating and making inferences about coupling among sub-populations using real data. We approximate the time-dependent solution of the system using a bi-orthogonal set and first-order perturbation expansion. For didactic purposes, the model is developed first in the context of deterministic input, and then extended to include stochastic effects. The approach is demonstrated using synthetic data, where model parameters are identified using a Bayesian estimation scheme we have described previously.
Affiliation(s)
- L M Harrison
- The Wellcome Department of Imaging Neuroscience, Institute of Neurology, UCL, 12 Queen Square, London WC1N 3BG, UK.
32
Cai D, Tao L, Shelley M, McLaughlin DW. An effective kinetic representation of fluctuation-driven neuronal networks with application to simple and complex cells in visual cortex. Proc Natl Acad Sci U S A 2004; 101:7757-62. [PMID: 15131268 PMCID: PMC419679 DOI: 10.1073/pnas.0401906101] [Citation(s) in RCA: 98] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
A coarse-grained representation of neuronal network dynamics is developed in terms of kinetic equations, which are derived by a moment closure, directly from the original large-scale integrate-and-fire (I&F) network. This powerful kinetic theory captures the full dynamic range of neuronal networks, from the mean-driven limit (a limit such as the number of neurons N --> infinity, in which the fluctuations vanish) to the fluctuation-dominated limit (such as in small N networks). Comparison with full numerical simulations of the original I&F network establishes that the reduced dynamics is very accurate and numerically efficient over all dynamic ranges. Both analytical insights and scale-up of numerical representation can be achieved by this kinetic approach. Here, the theory is illustrated by a study of the dynamical properties of networks of various architectures, including excitatory and inhibitory neurons of both simple and complex type, which exhibit rich dynamic phenomena, such as transitions to bistability and hysteresis, even in the presence of large fluctuations. The implication for possible connections between the structure of the bifurcations and the behavior of complex cells is discussed. Finally, I&F networks and kinetic theory are used to discuss orientation selectivity of complex cells for "ring-model" architectures that characterize changes in the response of neurons located from near "orientation pinwheel centers" to far from them.
Affiliation(s)
- David Cai
- Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, USA.
33
Brown E, Moehlis J, Holmes P. On the phase reduction and response dynamics of neural oscillator populations. Neural Comput 2004; 16:673-715. [PMID: 15025826 DOI: 10.1162/089976604322860668] [Citation(s) in RCA: 210] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
We undertake a probabilistic analysis of the response of repetitively firing neural populations to simple pulselike stimuli. Recalling and extending results from the literature, we compute phase response curves (PRCs) valid near bifurcations to periodic firing for Hindmarsh-Rose, Hodgkin-Huxley, FitzHugh-Nagumo, and Morris-Lecar models, encompassing the four generic (codimension one) bifurcations. Phase density equations are then used to analyze the role of the bifurcation, and the resulting PRC, in responses to stimuli. In particular, we explore the interplay among stimulus duration, baseline firing frequency, and population-level response patterns. We interpret the results in terms of the signal processing measure of gain and discuss further applications and experimentally testable predictions.
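A direct-perturbation estimate of a PRC can be sketched for the theta neuron (the normal form near the SNIC bifurcation, one of the four generic cases considered); the model, kick size, and phase grid below are illustrative assumptions, and the adjoint computations in the paper are more general:

```python
import math

# Numerically estimate the phase response curve (PRC) of a theta neuron,
#   dtheta/dt = (1 - cos(theta)) + (1 + cos(theta)) * I,   spike at theta = pi,
# by delivering a small current impulse at a chosen phase and measuring the
# resulting advance of the next spike time.

I = 0.25            # baseline current (oscillatory regime for I > 0)
DT = 1e-4           # Euler time step (ms-free, dimensionless time)
EPS = 1e-3          # kick size (area of the current impulse)

def time_to_spike(theta0, kick_time=None):
    """Integrate from theta0 (just after a spike) until theta crosses pi,
    optionally applying an impulsive current kick at kick_time."""
    theta, t = theta0, 0.0
    kicked = kick_time is None
    while theta < math.pi:
        if not kicked and t >= kick_time:
            theta += EPS * (1.0 + math.cos(theta))   # impulse enters via (1+cos)
            kicked = True
        theta += DT * ((1.0 - math.cos(theta)) + (1.0 + math.cos(theta)) * I)
        t += DT
    return t

T0 = time_to_spike(-math.pi)          # unperturbed period (analytically pi/sqrt(I))
phases = [i / 20.0 for i in range(20)]
prc = [(T0 - time_to_spike(-math.pi, kick_time=p * T0)) / EPS for p in phases]
```

The estimated curve has the type I signature expected near a SNIC bifurcation: non-negative everywhere, vanishing at the spike, and peaked mid-cycle.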
Affiliation(s)
- Eric Brown
- Program in Applied and Computational Mathematics, Princeton University, Princeton, NJ 08544, USA
34
Abstract
A population density description of large populations of neurons has generated considerable interest recently. The evolution in time of the population density is determined by a partial differential equation (PDE). Most of the algorithms proposed to solve this PDE have used finite difference schemes. Here, I use the method of characteristics to reduce the PDE to a set of ordinary differential equations, which are easy to solve. The method is applied to leaky-integrate-and-fire neurons and produces an algorithm that is efficient and yields a stable and manifestly nonnegative density. Contrary to algorithms based directly on finite difference schemes, this algorithm is insensitive to large density gradients, which may occur during evolution of the density.
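The idea can be sketched for the leak-only part of the leaky-integrate-and-fire density equation, for which the characteristics are available in closed form; handling the synaptic-input and reset terms, as the full algorithm must, is omitted in this illustrative fragment:

```python
import math

# Method of characteristics for the leak-only population density equation
#   d(rho)/dt + d/dv( (-v/tau) * rho ) = 0.
# Along a characteristic dv/dt = -v/tau the density grows exactly as
# rho(t) = rho(0) * exp(t/tau), so instead of finite differences on a fixed
# grid we simply move sample points with the flow and rescale their density.

TAU = 0.02

def evolve(vs, rho, dt):
    """Advance sample points (vs, rho) of the density by dt along characteristics."""
    shrink = math.exp(-dt / TAU)
    return [v * shrink for v in vs], [r / shrink for r in rho]

def mass(vs, rho):
    """Trapezoidal estimate of the total probability mass."""
    return sum(0.5 * (rho[i] + rho[i + 1]) * (vs[i + 1] - vs[i])
               for i in range(len(vs) - 1))

# initial density: uniform on [-70, -60] mV, total mass 1
vs = [-70.0 + i * 0.1 for i in range(101)]
rho = [0.1] * 101
m0 = mass(vs, rho)
vs, rho = evolve(vs, rho, dt=0.01)
m1 = mass(vs, rho)
# the moving-grid update conserves mass and keeps rho nonnegative
```

Because sample points simply ride the characteristics, the update is insensitive to steep density gradients and can never produce negative densities, which is the advantage claimed over finite-difference schemes.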
Affiliation(s)
- M de Kamps
- Section Cognitive Psychology, Faculty of Social Sciences, Leiden University, 2333 AK Leiden, The Netherlands.
35
Dynamic aspects of delay activity. Neurocomputing 2003. [DOI: 10.1016/s0925-2312(02)00751-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]