51. Grytskyy D, Tetzlaff T, Diesmann M, Helias M. A unified view on weakly correlated recurrent networks. Front Comput Neurosci 2013; 7:131. [PMID: 24151463; PMCID: PMC3799216; DOI: 10.3389/fncom.2013.00131]
Abstract
The diversity of neuron models used in contemporary theoretical neuroscience to investigate specific properties of covariances in spiking activity raises the question of how these models relate to each other. In particular, it is hard to distinguish between generic properties of covariances and peculiarities due to the abstracted model. Here we present a unified view on pairwise covariances in recurrent networks in the irregular regime. We consider the binary neuron model, the leaky integrate-and-fire (LIF) model, and the Hawkes process. We show that linear approximation maps each of these models to one of two classes of linear rate models (LRM), including the Ornstein-Uhlenbeck process (OUP) as a special case. The two classes differ in where the additive noise enters the rate dynamics: on the output side for spiking models and on the input side for the binary model. Both classes allow closed-form solutions for the covariance. For output noise, the covariance separates into an echo term and a term due to correlated input. The unified framework enables us to transfer results between models. For example, we generalize the binary model and the Hawkes process to the situation with synaptic conduction delays, and we simplify derivations of established results. Our approach is applicable to general network structures and suitable for the calculation of population averages. The derived averages are exact for fixed out-degree network architectures and approximate for fixed in-degree. We demonstrate how taking fluctuations into account in the linearization procedure increases the accuracy of the effective theory, and we explain the class-dependent differences between covariances in the time and frequency domains. Finally, we show that the oscillatory instability emerging in networks of LIF models with delayed inhibitory feedback is a model-invariant feature: the same structure of poles in the complex frequency plane determines the population power spectra.
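Because both model classes reduce to linear rate equations driven by noise, stationary covariances follow from a Lyapunov equation. The sketch below is purely illustrative (connectivity values, time constant, and input-side noise placement are assumed, not taken from the paper):

```python
import numpy as np
from scipy.linalg import expm, solve_lyapunov

# Hypothetical two-population linear rate model:
# tau dx = (-x + W x) dt + noise. All parameter values are assumed.
tau = 10e-3                       # time constant in seconds
W = np.array([[0.8, -1.2],        # effective connectivity (illustrative)
              [0.6, -0.9]])
A = (W - np.eye(2)) / tau         # drift matrix of the linearized dynamics
D = np.eye(2) / tau               # input-noise intensity (binary-model class)

# Stationary covariance C solves the Lyapunov equation A C + C A^T + D = 0.
C0 = solve_lyapunov(A, -D)
print("zero-lag covariance:\n", C0)

# Lagged covariance decays through the propagator: C(t) = expm(A t) C0, t >= 0.
print("covariance at 5 ms lag:\n", expm(A * 5e-3) @ C0)
```

For the output-noise (spiking) class the same machinery applies, but the noise term enters differently and contributes the echo-term structure described above.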
Affiliations:
- Dmytro Grytskyy: Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6), Jülich Research Centre and JARA, Jülich, Germany

52. Nowicki D, Verga P, Siegelmann H. Modeling reconsolidation in kernel associative memory. PLoS One 2013; 8:e68189. [PMID: 23936300; PMCID: PMC3732245; DOI: 10.1371/journal.pone.0068189]
Abstract
Memory reconsolidation is a central process enabling adaptive memory and the perception of a constantly changing reality. It causes memories to be strengthened, weakened, or changed following their recall. A computational model of memory reconsolidation is presented. Unlike Hopfield-type memory models, our model introduces an unbounded number of attractors that are updatable and can process large, real-valued, realistic stimuli. Our model replicates three characteristic effects of the reconsolidation process on human memory: increased association, extinction of fear memories, and the ability to track and follow gradually changing objects. In addition to this behavioral validation, a continuous-time version of the reconsolidation model is introduced. This version extends average-rate dynamic models of brain circuits exhibiting persistent activity to include adaptivity and an unbounded number of attractors.
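To make the attractor-and-update idea concrete, here is a minimal sketch of a kernel associative memory with a reconsolidation-style update; the Gaussian kernel, step sizes, and update rule are illustrative assumptions, not the authors' specification:

```python
import numpy as np

rng = np.random.default_rng(0)
patterns = rng.normal(size=(5, 20))   # stored memories (attractors)
sigma = 2.0                           # assumed kernel width

def recall(x, steps=50):
    """Iterate toward the kernel-weighted mean of the stored patterns."""
    for _ in range(steps):
        w = np.exp(-np.sum((patterns - x) ** 2, axis=1) / (2 * sigma ** 2))
        x = w @ patterns / w.sum()
    return x

cue = patterns[0] + 0.3 * rng.normal(size=20)
retrieved = recall(cue)

# Reconsolidation-style update (assumed rule): the recalled memory is
# nudged toward the cue, so repeated recall can track a drifting object.
patterns[0] += 0.1 * (cue - patterns[0])
```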
Affiliations:
- Dimitri Nowicki: The Biologically Inspired Neural and Dynamic Systems (BINDS) Lab, Department of Computer Science, University of Massachusetts Amherst, Amherst, Massachusetts, United States of America; Institute of Mathematical Machines & Systems Problems of NASU, Center for Cybernetics, Kiev, Ukraine
- Patrick Verga: The Biologically Inspired Neural and Dynamic Systems (BINDS) Lab, Department of Computer Science, University of Massachusetts Amherst, Amherst, Massachusetts, United States of America
- Hava Siegelmann: The Biologically Inspired Neural and Dynamic Systems (BINDS) Lab, Department of Computer Science, University of Massachusetts Amherst, Amherst, Massachusetts, United States of America; Program of Neuroscience and Behavior, University of Massachusetts Amherst, Amherst, Massachusetts, United States of America

53. Buice MA, Chow CC. Beyond mean field theory: statistical field theory for neural networks. J Stat Mech 2013; 2013:P03003. [PMID: 25243014; PMCID: PMC4169078; DOI: 10.1088/1742-5468/2013/03/p03003]
Abstract
Mean field theories have been a stalwart for studying the dynamics of networks of coupled neurons. They are convenient because they are relatively simple and tractable to analyze. However, classical mean field theory neglects the effects of fluctuations and correlations due to single-neuron effects. Here, we consider various possible approaches for going beyond mean field theory and incorporating correlation effects. Statistical field theory methods, in particular the Doi-Peliti-Janssen formalism, are particularly useful in this regard.
Affiliations:
- Michael A Buice: Center for Learning and Memory, University of Texas at Austin, Austin, TX, USA
- Carson C Chow: Laboratory of Biological Modeling, NIDDK, NIH, Bethesda, MD, USA

54. Buice MA, Chow CC. Dynamic finite size effects in spiking neural networks. PLoS Comput Biol 2013; 9:e1002872. [PMID: 23359258; PMCID: PMC3554590; DOI: 10.1371/journal.pcbi.1002872]
Abstract
We investigate the dynamics of a deterministic finite-sized network of synaptically coupled spiking neurons and present a formalism for computing the network statistics in a perturbative expansion. The small parameter for the expansion is the inverse number of neurons in the network. The network dynamics are fully characterized by a neuron population density that obeys a conservation law analogous to the Klimontovich equation in the kinetic theory of plasmas. The Klimontovich equation does not possess well-behaved solutions but can be recast in terms of a coupled system of well-behaved moment equations, known as a moment hierarchy. The moment hierarchy cannot be solved in general, but in the mean-field limit of an infinite number of neurons it reduces to a single well-behaved conservation law for the mean neuron density. For a large but finite system, the moment hierarchy can be truncated perturbatively with the inverse system size as a small parameter, but the resulting set of reduced moment equations is still very difficult to solve. However, the entire moment hierarchy can also be re-expressed in terms of a functional probability distribution of the neuron density. The moments can then be computed perturbatively using methods from statistical field theory. Here we derive the complete mean field theory and the lowest-order second-moment corrections for physiologically relevant quantities. Although we focus on finite-size corrections, our method can be used to compute perturbative expansions in any parameter.

One avenue towards understanding how the brain functions is to create computational and mathematical models. However, a human brain has on the order of a hundred billion neurons with a quadrillion synaptic connections. Each neuron is a complex cell composed of multiple compartments hosting a myriad of ions, proteins, and other molecules. Even if computing power continues to increase exponentially, directly simulating all the processes in the brain on a computer is not feasible in the foreseeable future, and even if this could be achieved, the resulting simulation might be no simpler to understand than the brain itself. Hence the need for more tractable models. Historically, systems with many interacting bodies are easier to understand in the two opposite limits of a small number or an infinite number of elements, and most theoretical efforts in understanding neural networks have been devoted to these two limits. Relatively little effort has been directed to the very relevant but difficult regime of large but finite networks. In this paper, we introduce a new formalism that borrows from the methods of many-body statistical physics to analyze finite-size effects in spiking neural networks.
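The leading finite-size correction can be checked numerically in a toy stochastic population model: fluctuations of the population activity about the mean-field solution should scale as the inverse system size. The binomial update model below is an assumed illustration, not the Klimontovich construction of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def activity_variance(N, steps=20000, dt=0.01):
    """Simulate a stochastic active-fraction model; return its variance."""
    gain = lambda a: 1.0 / (1.0 + np.exp(-4.0 * (a - 0.3)))  # assumed gain
    a, trace = 0.5, []
    for _ in range(steps):
        k = int(round(a * N))
        activations = rng.binomial(N - k, gain(a) * dt)
        deactivations = rng.binomial(k, dt)      # unit decay rate
        a += (activations - deactivations) / N
        trace.append(a)
    return np.var(trace[steps // 2:])

for N in (100, 1000, 10000):
    # N * variance should be roughly constant if fluctuations scale as 1/N
    print(N, N * activity_variance(N))
```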
Affiliations:
- Michael A. Buice: Laboratory of Biological Modeling, NIDDK, NIH, Bethesda, Maryland, United States of America
- Carson C. Chow: Laboratory of Biological Modeling, NIDDK, NIH, Bethesda, Maryland, United States of America

55. Riedler MG, Buckwar E. Laws of large numbers and Langevin approximations for stochastic neural field equations. J Math Neurosci 2013; 3:1. [PMID: 23343328; PMCID: PMC3582461; DOI: 10.1186/2190-8567-3-1]
Abstract
In this study, we consider limit theorems for microscopic stochastic models of neural fields. We show that the Wilson-Cowan equation can be obtained as the limit, in uniform convergence on compacts in probability, of a sequence of microscopic models when the number of neuron populations distributed in space and the number of neurons per population tend to infinity. This result also allows one to obtain limits for qualitatively different stochastic convergence concepts, e.g., convergence in the mean. Further, we present a central limit theorem for the martingale part of the microscopic models which, suitably re-scaled, converges to a centred Gaussian process with independent increments. These two results provide the basis for presenting the neural field Langevin equation, a stochastic differential equation taking values in a Hilbert space, which is the infinite-dimensional analogue of the chemical Langevin equation in the present setting. On a technical level, we apply recently developed laws of large numbers and central limit theorems for piecewise deterministic processes taking values in Hilbert spaces to a master equation formulation of stochastic neuronal network models. These theorems are valid for processes taking values in Hilbert spaces and are thereby able to incorporate spatial structures of the underlying model.

Mathematics Subject Classification (2000): 60F05, 60J25, 60J75, 92C20.
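In the spirit of the chemical Langevin equation, the resulting field equation can be sketched schematically as below; this is a generic form with assumed notation (rate ν, weight kernel w, gain F, system size N), not the paper's precise Hilbert-space statement:

```latex
d\nu(x,t) = \Big[-\nu(x,t) + F\Big(\int w(x,y)\,\nu(y,t)\,dy\Big)\Big]\,dt
  + \frac{1}{\sqrt{N}}\sqrt{\nu(x,t) + F\Big(\int w(x,y)\,\nu(y,t)\,dy\Big)}\;dW(x,t)
```

The drift recovers the Wilson-Cowan equation in the limit N → ∞, while the noise amplitude mirrors the square-root-of-propensities structure of the chemical Langevin equation.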
Affiliations:
- Martin G Riedler: Institute for Stochastics, Johannes Kepler University, Linz, Austria
- Evelyn Buckwar: Institute for Stochastics, Johannes Kepler University, Linz, Austria

56. Bressloff PC, Wilkerson J. Traveling pulses in a stochastic neural field model of direction selectivity. Front Comput Neurosci 2012. [PMID: 23181018; PMCID: PMC3501266; DOI: 10.3389/fncom.2012.00090]
Abstract
We analyze the effects of extrinsic noise on traveling pulses in a neural field model of direction selectivity. The model consists of a one-dimensional scalar neural field with an asymmetric weight distribution consisting of an offset Mexican hat function. We first show how, in the absence of any noise, the system supports spontaneously propagating traveling pulses that can lock to externally moving stimuli. Using a separation of time-scales and perturbation methods previously developed for stochastic reaction-diffusion equations, we then show how extrinsic noise in the activity variables leads to a diffusive-like displacement (wandering) of the wave from its uniformly translating position at long time-scales, and fluctuations in the wave profile around its instantaneous position at short time-scales. In the case of freely propagating pulses, the wandering is characterized by pure Brownian motion, whereas in the case of stimulus-locked pulses, it is given by an Ornstein–Uhlenbeck process. This establishes that stimulus-locked pulses are more robust to noise.
Affiliations:
- Paul C Bressloff: Department of Mathematics, University of Utah, Salt Lake City, UT, USA

57. Inhibitory networks of fast-spiking interneurons generate slow population activities due to excitatory fluctuations and network multistability. J Neurosci 2012; 32:9931-46. [PMID: 22815508; DOI: 10.1523/jneurosci.5446-11.2012]
Abstract
Slow population activities (SPAs) exist in the brain and have frequencies below ~5 Hz. Despite SPAs being prominent in several cortical areas and serving many putative functions, their mechanisms are not well understood. We studied a specific type of in vitro GABAergic, inhibition-based SPA exhibited by the C57BL/6 murine hippocampus. We used a multipronged approach consisting of experiment, simulation, and mathematical analyses to uncover mechanisms responsible for hippocampal SPAs. Our results show that hippocampal SPAs are an emergent phenomenon in which the "slowness" of the network is due to interactions between synaptic and cellular characteristics of individual fast-spiking, inhibitory interneurons. Our simulations quantify the characteristics underlying hippocampal SPAs. In particular, for hippocampal SPAs to occur, we predict that individual fast-spiking interneurons should have frequency-current (f-I) curves that exhibit a suitably sized kink, i.e., a point in the gamma frequency range at which the slope of the curve decreases abruptly with increasing current. We also predict that these interneurons should be well connected with one another. Our mathematical analyses show that the combination of synaptic and intrinsic conditions, as predicted by our simulations, promotes network multistability. Population slow timescales occur when excitatory fluctuations drive the network between different stable network firing states. Since many of the parameters we use are extracted from experiments, and subsequent measurements of experimental f-I curves of fast-spiking interneurons exhibit the predicted characteristics, we propose that our network models capture a fundamental operating mechanism in biological hippocampal networks.

58. Tetzlaff T, Helias M, Einevoll GT, Diesmann M. Decorrelation of neural-network activity by inhibitory feedback. PLoS Comput Biol 2012; 8:e1002596. [PMID: 23133368; PMCID: PMC3487539; DOI: 10.1371/journal.pcbi.1002596]
Abstract
Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. Here, we explain this observation by means of a linear network model and simulations of networks of leaky integrate-and-fire neurons. We show that inhibitory feedback efficiently suppresses pairwise correlations and, hence, population-rate fluctuations, thereby assigning inhibitory neurons the new role of active decorrelation. We quantify this decorrelation by comparing the responses of the intact recurrent network (feedback system) and systems where the statistics of the feedback channel are perturbed (feedforward system). Manipulations of the feedback statistics can lead to a significant increase in the power and coherence of the population response. In particular, neglecting correlations within the ensemble of feedback channels or between the external stimulus and the feedback amplifies population-rate fluctuations by orders of magnitude. The fluctuation suppression in homogeneous inhibitory networks is explained by a negative feedback loop in the one-dimensional dynamics of the compound activity. Similarly, a change of coordinates exposes an effective negative feedback loop in the compound dynamics of stable excitatory-inhibitory networks. The suppression of input correlations in finite networks is explained by the population-averaged correlations in the linear network model: in purely inhibitory networks, shared-input correlations are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of a particular structure of correlations among the three possible pairings (EE, EI, II).
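The feedback-versus-feedforward comparison can be mimicked in a few lines with a linear rate network: integrate the same input noise once with the recurrent inhibitory feedback intact and once with the feedback channel removed, then compare population-rate variances. The connectivity statistics and parameters below are assumed for illustration and are a cruder surrogate than the paper's perturbed-feedback construction:

```python
import numpy as np

rng = np.random.default_rng(2)
N, p, g = 200, 0.2, 6.0                        # assumed network parameters
W = -(g / (p * N)) * (rng.random((N, N)) < p)  # sparse inhibitory coupling

steps, dt = 20000, 0.05
x_fb = np.zeros(N)   # intact feedback system
x_ff = np.zeros(N)   # feedback channel removed (feedforward surrogate)
pop_fb, pop_ff = [], []
for _ in range(steps):
    xi = rng.normal(size=N) * np.sqrt(dt)      # the same input noise for both
    x_fb += dt * (-x_fb + W @ x_fb) + xi
    x_ff += dt * (-x_ff) + xi
    pop_fb.append(x_fb.mean())
    pop_ff.append(x_ff.mean())

print("population variance with feedback   :", np.var(pop_fb[steps // 2:]))
print("population variance without feedback:", np.var(pop_ff[steps // 2:]))
```

The negative feedback loop in the population coordinate suppresses the shared fluctuations, so the first number comes out much smaller.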
Affiliations:
- Tom Tetzlaff: Institute of Neuroscience and Medicine (INM-6), Computational and Systems Neuroscience, Research Center Jülich, Jülich, Germany

59. Wilson MT, Robinson PA, O'Neill B, Steyn-Ross DA. Complementarity of spike- and rate-based dynamics of neural systems. PLoS Comput Biol 2012; 8:e1002560. [PMID: 22737064; PMCID: PMC3380910; DOI: 10.1371/journal.pcbi.1002560]
Abstract
Relationships between spiking-neuron and rate-based approaches to the dynamics of neural assemblies are explored by analyzing a model system that can be treated by both methods, with the rate-based method further averaged over multiple neurons to give a neural-field approach. The system consists of a chain of neurons, each with simple spiking dynamics that has a known rate-based equivalent. The neurons are linked by propagating activity that is described in terms of a spatial interaction strength with temporal delays that reflect distances between neurons; feedback via a separate delay loop is also included, because such loops also exist in real brains. These interactions are described using a spatiotemporal coupling function that can carry either spikes or rates to provide coupling between neurons. Numerical simulation of corresponding spike- and rate-based methods with these compatible couplings then allows direct comparison between the dynamics arising from the two approaches. The rate-based dynamics can reproduce two different forms of oscillation that are present in the spike-based model: spiking rates of individual neurons, and network-induced modulations of spiking rate that occur if network interactions are sufficiently strong. Depending on conditions, either mode of oscillation can dominate the spike-based dynamics, and in some situations, particularly when the ratio of the frequencies of these two modes is integer or half-integer, the two can both be present and interact with each other.
Affiliations:
- M T Wilson: School of Engineering, University of Waikato, Hamilton, New Zealand

60. Beggs JM, Timme N. Being critical of criticality in the brain. Front Physiol 2012; 3:163. [PMID: 22701101; PMCID: PMC3369250; DOI: 10.3389/fphys.2012.00163]
Abstract
Relatively recent work has reported that networks of neurons can produce avalanches of activity whose sizes follow a power law distribution. This suggests that these networks may be operating near a critical point, poised between a phase where activity rapidly dies out and a phase where activity is amplified over time. The hypothesis that the electrical activity of neural networks in the brain is critical is potentially important, as many simulations suggest that information processing functions would be optimized at the critical point. This hypothesis, however, is still controversial. Here we explain the concept of criticality and review the substantial objections to the criticality hypothesis raised by skeptics. Points and counterpoints are presented in dialog form.
Affiliations:
- John M Beggs: Department of Physics, Indiana University, Bloomington, IN, USA

61. Baladron J, Fasoli D, Faugeras O, Touboul J. Mean-field description and propagation of chaos in networks of Hodgkin-Huxley and FitzHugh-Nagumo neurons. J Math Neurosci 2012; 2:10. [PMID: 22657695; PMCID: PMC3497713; DOI: 10.1186/2190-8567-2-10]
Abstract
We derive the mean-field equations arising as the limit of a network of interacting spiking neurons, as the number of neurons goes to infinity. The neurons belong to a fixed number of populations and are represented either by the Hodgkin-Huxley model or by one of its simplified versions, the FitzHugh-Nagumo model. The synapses between neurons are either electrical or chemical. The network is assumed to be fully connected. The maximum conductances vary randomly. Under the condition that all neurons' initial conditions are drawn independently from the same law that depends only on the population they belong to, we prove that a propagation-of-chaos phenomenon takes place, namely that in the mean-field limit, any finite number of neurons become independent and, within each population, have the same probability distribution. This probability distribution is a solution of a set of implicit equations, either nonlinear stochastic differential equations resembling the McKean-Vlasov equations or non-local partial differential equations resembling the McKean-Vlasov-Fokker-Planck equations. We prove the well-posedness of the McKean-Vlasov equations, i.e., the existence and uniqueness of a solution. We also show the results of some numerical experiments that indicate that the mean-field equations are a good representation of the mean activity of a finite-size network, even for modest sizes. These experiments also indicate that the McKean-Vlasov-Fokker-Planck equations may be a good way to understand the mean-field dynamics through, e.g., a bifurcation analysis.

Mathematics Subject Classification (2000): 60F99, 60B10, 92B20, 82C32, 82C80, 35Q80.
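Schematically, the limit equations have the self-consistent McKean-Vlasov form below, in which the law of the process enters its own drift; the notation is generic and assumed here (drift f, interaction b, an independent copy X̄ of the process), not quoted from the paper:

```latex
dX_t = f(X_t)\,dt
  + \mathbb{E}_{\bar X}\big[\,b(X_t,\bar X_t)\,\big]\,dt
  + \sigma(X_t)\,dW_t
```

The associated density ρ(x,t) then satisfies a McKean-Vlasov-Fokker-Planck equation whose drift coefficient depends on ρ itself, which is what makes the equations implicit.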
Affiliations:
- Javier Baladron: NeuroMathComp Laboratory, INRIA, Sophia-Antipolis Méditerranée, 06902, France

62. Robinson PA. Neural field theory with variance dynamics. J Math Biol 2012; 66:1475-97. [PMID: 22576451; DOI: 10.1007/s00285-012-0541-x]
Abstract
Previous neural field models have mostly been concerned with prediction of mean neural activity and with second-order quantities such as its variance, but without feedback of second-order quantities on the dynamics. Here the effects of feedback of the variance on the steady states and adiabatic dynamics of neural systems are calculated: linear neural field theory is used to estimate the neural voltage variance, this quantity is then included in the total variance parameter of the nonlinear firing-rate-voltage response function, and it thus enters the determination of the fixed points and of the variance itself. The general results further clarify the limits of validity of approaches with and without inclusion of variance dynamics. Specific applications show that stability against a saddle-node bifurcation is reduced in a purely cortical system, but can be either increased or decreased in the corticothalamic case, depending on the initial state. Estimates of critical variance scalings near the saddle-node bifurcation are also found, including physiologically based normalizations and new scalings for the mean firing rate and the position of the bifurcation.
Affiliations:
- P A Robinson: School of Physics, University of Sydney, Sydney, NSW 2006, Australia

63.
Abstract
Neural activity that persists long after stimulus presentation is a biological correlate of short-term memory. Variability in spiking activity causes persistent states to drift over time, ultimately degrading memory. Models of short-term memory often assume that the input fluctuations to neural populations are independent across cells, a feature that attenuates population-level variability and stabilizes persistent activity. However, this assumption is at odds with experimental recordings from pairs of cortical neurons showing that both the input currents and output spike trains are correlated. It remains unclear how correlated variability affects the stability of persistent activity and the performance of cognitive tasks that it supports. We consider the stochastic long-timescale attractor dynamics of pairs of mutually inhibitory populations of spiking neurons. In these networks, persistent activity was less variable when correlated variability was globally distributed across both populations compared with the case when correlations were locally distributed only within each population. Using a reduced firing rate model with a continuum of persistent states, we show that, when input fluctuations are correlated across both populations, they drive firing rate fluctuations orthogonal to the persistent state attractor, thereby causing minimal stochastic drift. Using these insights, we establish that distributing correlated fluctuations globally as opposed to locally improves the network's performance on a two-interval, delayed-response discrimination task. Our work shows that the correlation structure of input fluctuations to a network is an important factor in determining long-timescale, persistent population spiking activity.

64. Bressloff PC. From invasion to extinction in heterogeneous neural fields. J Math Neurosci 2012; 2:6. [PMID: 22655682; PMCID: PMC3430586; DOI: 10.1186/2190-8567-2-6]
Abstract
In this paper, we analyze the invasion and extinction of activity in heterogeneous neural fields. We first consider the effects of spatial heterogeneities on the propagation of an invasive activity front. In contrast to previous studies of front propagation in neural media, we assume that the front propagates into an unstable rather than a metastable zero-activity state. For sufficiently localized initial conditions, the asymptotic velocity of the resulting pulled front is given by the linear spreading velocity, which is determined by linearizing about the unstable state within the leading edge of the front. One of the characteristic features of these so-called pulled fronts is their sensitivity to perturbations inside the leading edge. This means that standard perturbation methods for studying the effects of spatial heterogeneities or external noise fluctuations break down. We show how to extend a partial differential equation method for analyzing pulled fronts in slowly modulated environments to the case of neural fields with slowly modulated synaptic weights. The basic idea is to rescale space and time so that the front becomes a sharp interface whose location can be determined by solving a corresponding local Hamilton-Jacobi equation. We use steepest descents to derive the Hamilton-Jacobi equation from the original nonlocal neural field equation. In the case of weak synaptic heterogeneities, we then use perturbation theory to solve the corresponding Hamilton equations and thus determine the time-dependent wave speed. In the second part of the paper, we investigate how time-dependent heterogeneities in the form of extrinsic multiplicative noise can induce rare noise-driven transitions to the zero-activity state, which now acts as an absorbing state signaling the extinction of all activity. In this case, the most probable path to extinction can be obtained by solving the classical equations of motion that dominate a path-integral representation of the stochastic neural field in the weak-noise limit. These equations take the form of nonlocal Hamilton equations in an infinite-dimensional phase space.
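For readers unfamiliar with pulled fronts, the selected speed comes from the linearization about the unstable state alone. In generic notation (assumed here, e.g. for u_t = -u + w * f(u) with Fourier transform ŵ of the weight kernel), the dispersion relation and the standard saddle-point condition read:

```latex
\lambda(k) = -1 + f'(0)\,\hat{w}(k), \qquad
v^{*} = \frac{\lambda(k^{*})}{k^{*}}
\quad\text{with}\quad
\left.\frac{d}{dk}\,\frac{\lambda(k)}{k}\right|_{k=k^{*}} = 0 .
```

Because v* depends only on the leading edge, any perturbation acting there, however weak, can shift the speed at leading order, which is why the standard methods fail.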
Affiliations:
- Paul C Bressloff: Department of Mathematics, University of Utah, Salt Lake City, UT 84112, USA

65. Approximating distributions in stochastic learning. Neural Netw 2012; 32:219-28. [PMID: 22418034; DOI: 10.1016/j.neunet.2012.02.006]
Abstract
On-line machine learning algorithms, many biological spike-timing-dependent plasticity (STDP) learning rules, and stochastic neural dynamics evolve by Markov processes. A complete description of such systems gives the probability densities for the variables. The evolution and equilibrium state of these densities are given by a Chapman-Kolmogorov equation in discrete time, or a master equation in continuous time. These formulations are analytically intractable for most cases of interest, and to make progress a nonlinear Fokker-Planck equation (FPE) is often used in their place. The FPE is limited, however, and some argue that its application to jump processes (such as those arising in these problems) is fundamentally flawed. We develop a well-grounded perturbation expansion that provides approximations for both the density and its moments. The approach is based on the system size expansion in statistical physics (which does not give approximations for the density), but our simple development makes the method accessible and invites application to diverse problems. We apply the method to calculate the equilibrium distributions for two biologically observed STDP learning rules and for a simple nonlinear machine-learning problem. In all three examples, we show that our perturbation series provides good agreement with Monte Carlo simulations in regimes where the FPE breaks down.
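The jump character that defeats the FPE is easy to exhibit in a toy weight model: additive potentiation plus multiplicative depression gives a visibly skewed equilibrium that a Gaussian diffusion approximation cannot capture. The rates and rule below are assumed for illustration, not the biologically observed STDP rules analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
a, b = 0.05, 0.1     # additive potentiation step, multiplicative depression
rp, rm = 1.0, 1.0    # event rates (assumed equal)

w, samples = 0.5, []
for _ in range(200_000):
    if rng.random() < rp / (rp + rm):
        w += a       # potentiation jump
    else:
        w -= b * w   # depression jump (multiplicative)
    samples.append(w)

s = np.asarray(samples[100_000:])
skew = np.mean((s - s.mean()) ** 3) / s.std() ** 3
print(f"mean={s.mean():.3f}  var={s.var():.4f}  skewness={skew:.2f}")
```

A nonzero equilibrium skewness is exactly the kind of feature a density-level perturbation expansion can retain and a naive FPE can misplace.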

66. Leen TK, Friel R. Stochastic perturbation methods for spike-timing-dependent plasticity. Neural Comput 2012; 24:1109-46. [PMID: 22295984; DOI: 10.1162/neco_a_00267]
Abstract
Online machine learning rules and many biological spike-timing-dependent plasticity (STDP) learning rules generate jump process Markov chains for the synaptic weights. We give a perturbation expansion for the dynamics that, unlike the usual approximation by a Fokker-Planck equation (FPE), is well justified. Our approach extends the related system size expansion by giving an expansion for the probability density as well as its moments. We apply the approach to two observed STDP learning rules and show that in regimes where the FPE breaks down, the new perturbation expansion agrees well with Monte Carlo simulations. The methods are also applicable to the dynamics of stochastic neural activity. Like previous ensemble analyses of STDP, we focus on equilibrium solutions, although the methods can in principle be applied to transients as well.
Affiliations:
- Todd K Leen: Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239, USA

67. Robinson PA. Interrelating anatomical, effective, and functional brain connectivity using propagators and neural field theory. Phys Rev E Stat Nonlin Soft Matter Phys 2012; 85:011912. [PMID: 22400596; DOI: 10.1103/physreve.85.011912]
Abstract
It is shown how to compute effective and functional connection matrices (eCMs and fCMs) from anatomical CMs (aCMs) and corresponding strength-of-connection matrices (sCMs) using propagator methods in which neural interactions play the role of scatterings. This analysis demonstrates how network effects dress the bare propagators (the sCMs) to yield effective propagators (the eCMs) that can be used to compute the covariances customarily used to define fCMs. The results incorporate excitatory and inhibitory connections, multiple structures and populations, asymmetries, time delays, and measurement effects. They can also be postprocessed in the same manner as experimental measurements for direct comparison with data, and thereby give insights into the role of coarse-graining, thresholding, and other effects in determining the structure of CMs. The spatiotemporal results show how to generalize CMs to include time delays and how natural network modes give rise to long-range coherence at resonant frequencies. The results are demonstrated using tractable analytic cases via neural field theory of cortical and corticothalamic systems. These cases also demonstrate close connections between the structure of CMs and proximity to critical points of the system, highlight the importance of indirect links between brain regions, and raise the possibility of imaging specific levels of indirect connectivity. Aside from the results presented explicitly here, the expression of the connections among aCMs, sCMs, eCMs, and fCMs in terms of propagators opens the way for propagator theory to be further applied to the analysis of connectivity.
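The dressing operation has a compact matrix sketch: if S is the bare strength-of-connection matrix, summing over all multi-step paths gives an effective connectivity E = S + S² + ... = (I − S)⁻¹S, and covariances follow from the dressed propagator. The toy matrices below are assumed; the paper's full treatment additionally carries multiple populations, delays, and measurement effects:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
S = 0.3 * rng.random((n, n)) / n        # bare sCM (spectral radius << 1)
I = np.eye(n)

P = np.linalg.inv(I - S)                # dressed propagator: sum of all paths
E = P @ S                               # effective CM including indirect links
C = P @ P.T                             # covariance (fCM) for unit white drive

# Self-consistency of the scattering series: E = S + S E.
print(np.allclose(E, S + S @ E))        # True
```

The geometric series converges only while the spectral radius of S stays below 1, which is the matrix-level counterpart of the proximity-to-criticality effects noted above.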
Affiliations:
- P A Robinson: School of Physics, University of Sydney, New South Wales 2006, Australia

68. Cardanobile S, Rotter S. Emergent properties of interacting populations of spiking neurons. Front Comput Neurosci 2011; 5:59. [PMID: 22207844; PMCID: PMC3245521; DOI: 10.3389/fncom.2011.00059]
Abstract
Dynamic neuronal networks are a key paradigm of increasing importance in brain research, concerned with the functional analysis of biological neuronal networks and, at the same time, with the synthesis of artificial brain-like systems. In this context, neuronal network models serve as mathematical tools to understand the function of brains, but they may well develop into future tools for enhancing certain functions of our nervous system. Here, we present and discuss our recent achievements in developing multiplicative point processes into a viable mathematical framework for spiking network modeling. The perspective is that the dynamic behavior of these neuronal networks is faithfully reflected by a set of non-linear rate equations, describing all interactions on the population level. These equations are similar in structure to Lotka-Volterra equations, well known from their use in modeling predator-prey relations in population biology, though abundant applications to economic theory have also been described. We present a number of biologically relevant examples of spiking network function, which can be studied with the help of the aforementioned correspondence between spike trains and specific systems of non-linear coupled ordinary differential equations. We claim that, enabled by the use of multiplicative point processes, we can make essential contributions to a more thorough understanding of the dynamical properties of interacting neuronal populations.

69. Thomas PJ, Cowan JD. Generalized spin models for coupled cortical feature maps obtained by coarse graining correlation-based synaptic learning rules. J Math Biol 2011; 65:1149-86. [PMID: 22101498; DOI: 10.1007/s00285-011-0484-7]
Abstract
We derive generalized spin models for the development of feedforward cortical architecture from a Hebbian synaptic learning rule in a two-layer neural network with nonlinear weight constraints. Our model takes into account the effects of lateral interactions in visual cortex combining local excitation and long-range effective inhibition. Our approach allows the principled derivation of developmental rules for low-dimensional feature maps, starting from high-dimensional synaptic learning rules. We incorporate the effects of smooth nonlinear constraints on the net synaptic weight projected from units in the thalamic layer (the fan-out) and on the net synaptic weight received by units in the cortical layer (the fan-in). These constraints naturally couple together multiple feature maps such as orientation preference and retinotopic organization. We give a detailed illustration of the method applied to the development of the orientation preference map as a special case, in addition to deriving a model for joint pattern formation in cortical maps of orientation preference, retinotopic location, and receptive field width. We show that the combination of Hebbian learning and center-surround cortical interaction naturally leads to an orientation map development model that is closely related to the XY magnetic lattice model from statistical physics. The results presented here provide justification for the phenomenological models studied in Cowan and Friedman (Advances in neural information processing systems 3, 1991) and Thomas and Cowan (Phys Rev Lett 92(18):e188101, 2004), and provide a developmental model realizing the synaptic weight constraints previously assumed in Thomas and Cowan (Math Med Biol 23(2):119-138, 2006).
Affiliations:
- Peter J Thomas: Department of Mathematics, Case Western Reserve University, Cleveland, OH, USA

70. Buice MA, Chow CC. Effective stochastic behavior in dynamical systems with incomplete information. Phys Rev E Stat Nonlin Soft Matter Phys 2011; 84:051120. [PMID: 22181382; PMCID: PMC3457716; DOI: 10.1103/physreve.84.051120]
Abstract
Complex systems are generally analytically intractable and difficult to simulate. We introduce a method for deriving an effective stochastic equation for a high-dimensional deterministic dynamical system for which some portion of the configuration is not precisely specified. We use a response function path integral to construct an equivalent distribution for the stochastic dynamics from the distribution of the incomplete information. We apply this method to the Kuramoto model of coupled oscillators to derive an effective stochastic equation for a single oscillator interacting with a bath of oscillators and also outline the procedure for other systems.
Affiliations:
- Michael A Buice: Laboratory of Biological Modeling, NIDDK, NIH, Bethesda, Maryland 20892, USA

71. Ledoux E, Brunel N. Dynamics of networks of excitatory and inhibitory neurons in response to time-dependent inputs. Front Comput Neurosci 2011; 5:25. [PMID: 21647353; PMCID: PMC3103906; DOI: 10.3389/fncom.2011.00025]
Abstract
We investigate the dynamics of recurrent networks of excitatory (E) and inhibitory (I) neurons in the presence of time-dependent inputs. The dynamics is characterized by the network's dynamical transfer function, i.e., how the population firing rate is modulated by sinusoidal inputs at arbitrary frequencies. Two types of networks are studied and compared: (i) a Wilson-Cowan-type firing rate model, and (ii) a fully connected network of leaky integrate-and-fire (LIF) neurons in a strong-noise regime. We first characterize the region of stability of the "asynchronous state" (a state in which population activity is constant in time when external inputs are constant) in the space of parameters characterizing the connectivity of the network. We then systematically characterize the qualitative behaviors of the dynamical transfer function as a function of the connectivity. We find that the transfer function can be either low-pass, or exhibit a single or double resonance, depending on the connection strengths and synaptic time constants. Resonances appear when the system is close to Hopf bifurcations, which can be induced by two separate mechanisms: the I-I connectivity and the E-I connectivity. Double resonances can appear when excitatory delays are larger than inhibitory delays, owing to the fact that two distinct instabilities exist with a finite gap between the corresponding frequencies. In networks of LIF neurons, changes in external inputs and external noise are shown to be able to change the network transfer function qualitatively. Firing rate models are shown to exhibit the same diversity of transfer functions as the LIF network, provided delays are present. They can also exhibit input-dependent changes of the transfer function, provided a suitable static non-linearity is incorporated.
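The qualitative behaviors can be reproduced with a one-line linear response calculation: a single-pole rate unit with delayed recurrent feedback has the transfer function sketched below, whose modulus develops a resonance as the delayed loop approaches a Hopf bifurcation. The coupling, time constant, and delay are assumed values for illustration:

```python
import numpy as np

tau, delay = 5e-3, 5e-3           # synaptic time constant and delay (s)
J = -2.0                          # net (inhibition-dominated) loop coupling
f = np.linspace(1.0, 500.0, 2000) # frequency axis in Hz
s = 2j * np.pi * f

G = 1.0 / (1.0 + s * tau)                    # single-population gain
R = G / (1.0 - J * G * np.exp(-s * delay))   # network transfer function

peak = f[np.argmax(np.abs(R))]
print(f"resonance peak near {peak:.0f} Hz")
```

With these numbers the delayed inhibitory loop produces a pronounced peak near 65 Hz; weakening J or shrinking the delay flattens the response back toward low-pass, mirroring the connectivity dependence described above.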
Affiliations:
- Erwan Ledoux: Laboratory of Neurophysics and Physiology, UMR 8119, CNRS, Université Paris Descartes, Paris, France

72. Bressloff PC, Lai YM. Stochastic synchronization of neuronal populations with intrinsic and extrinsic noise. J Math Neurosci 2011; 1:2. [PMID: 22656265; PMCID: PMC3280892; DOI: 10.1186/2190-8567-1-2]
Abstract
We extend the theory of noise-induced phase synchronization to the case of a neural master equation describing the stochastic dynamics of an ensemble of uncoupled neuronal population oscillators with intrinsic and extrinsic noise. The master equation formulation of stochastic neurodynamics represents the state of each population by the number of currently active neurons, and the state transitions are chosen so that deterministic Wilson-Cowan rate equations are recovered in the mean-field limit. We apply phase reduction and averaging methods to a corresponding Langevin approximation of the master equation in order to determine how intrinsic noise disrupts synchronization of the population oscillators driven by a common extrinsic noise source. We illustrate our analysis by considering one of the simplest networks known to generate limit cycle oscillations at the population level, namely, a pair of mutually coupled excitatory (E) and inhibitory (I) subpopulations. We show how the combination of intrinsic independent noise and extrinsic common noise can lead to clustering of the population oscillators due to the multiplicative nature of both noise sources under the Langevin approximation. Finally, we show how a similar analysis can be carried out for another simple population model that exhibits limit cycle oscillations in the deterministic limit, namely, a recurrent excitatory network with synaptic depression; inclusion of synaptic depression into the neural master equation now generates a stochastic hybrid system.
Affiliations:
- Paul C Bressloff: Mathematical Institute, University of Oxford, 24-29 St. Giles', Oxford OX1 3LB, UK; Department of Mathematics, University of Utah, 155 South 1400 East, Salt Lake City, Utah 84112, USA
- Yi Ming Lai: Mathematical Institute, University of Oxford, 24-29 St. Giles', Oxford OX1 3LB, UK

73. Touboul JD, Ermentrout GB. Finite-size and correlation-induced effects in mean-field dynamics. J Comput Neurosci 2011; 31:453-84. [PMID: 21384156; DOI: 10.1007/s10827-011-0320-5]
Abstract
The brain's activity is characterized by the interaction of a very large number of neurons that are strongly affected by noise. However, signals often arise at macroscopic scales, integrating the effect of many neurons into a reliable pattern of activity. In order to study such large neuronal assemblies, one is often led to derive mean-field limits summarizing the effect of the interaction of a large number of neurons into an effective signal. Classical mean-field approaches consider the evolution of a deterministic variable, the mean activity, thus neglecting the stochastic nature of neural behavior. In this article, we build upon two recent approaches that include correlations and higher-order moments in mean-field equations, and study how these stochastic effects influence the solutions of the mean-field equations, both in the limit of an infinite number of neurons and for large yet finite networks. We introduce a new model, the infinite model, which arises from both sets of equations by a rescaling of the variables; this rescaling is invertible for finite-size networks, and the infinite model hence provides equations equivalent to those previously derived. The study of this model allows us to understand the qualitative behavior of such large-scale networks. We show that, though the solutions of the deterministic mean-field equation constitute uncorrelated solutions of the new mean-field equations, the stability properties of limit cycles are modified by the presence of correlations, and additional non-trivial behaviors, including periodic orbits, appear when there were none in the mean field. The origin of all these behaviors is then explored in finite-size networks, where interesting mesoscopic-scale effects appear. This study leads us to show that the infinite-size system appears as a singular limit of the network equations, and for any finite network the system will differ from the infinite system.
Affiliations:
- Jonathan D Touboul: NeuroMathComp Laboratory, INRIA/ENS Paris, 23 Avenue d'Italie, 75013 Paris, France

74. Bressloff PC. Metastable states and quasicycles in a stochastic Wilson-Cowan model of neuronal population dynamics. Phys Rev E Stat Nonlin Soft Matter Phys 2010; 82:051903. [PMID: 21230496; DOI: 10.1103/physreve.82.051903]
Abstract
We analyze a stochastic model of neuronal population dynamics with intrinsic noise. In the thermodynamic limit N→∞, where N determines the size of each population, the dynamics is described by deterministic Wilson-Cowan equations. On the other hand, for finite N the dynamics is described by a master equation that determines the probability of spiking activity within each population. We first consider a single excitatory population that exhibits bistability in the deterministic limit. The steady-state probability distribution of the stochastic network has maxima at points corresponding to the stable fixed points of the deterministic network; the relative weighting of the two maxima depends on the system size. For large but finite N, we calculate the exponentially small rate of noise-induced transitions between the resulting metastable states using a Wentzel-Kramers-Brillouin (WKB) approximation and matched asymptotic expansions. We then consider a two-population excitatory-inhibitory network that supports limit cycle oscillations. Using a diffusion approximation, we reduce the dynamics to a neural Langevin equation, and show how the intrinsic noise amplifies subthreshold oscillations (quasicycles).
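The phrase "exponentially small rate" refers to the generic large-deviations scaling obtained from a WKB ansatz P ∝ e^(−NΦ); schematically, with Φ denoting a quasipotential (generic notation assumed here, not the paper's):

```latex
\tau_{\text{switch}} \sim C \, e^{\,N\,\Delta\Phi},
\qquad
\Delta\Phi = \Phi(x_{\text{saddle}}) - \Phi(x_{\text{stable}}),
```

so the mean time between metastable states grows exponentially with the population size N, and a naive diffusion approximation generally gets the exponent wrong.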
Affiliations:
- Paul C Bressloff: Mathematical Institute, University of Oxford, 24-29 St. Giles', Oxford OX1 3LB, United Kingdom

75. Benayoun M, Cowan JD, van Drongelen W, Wallace E. Avalanches in a stochastic model of spiking neurons. PLoS Comput Biol 2010; 6:e1000846. [PMID: 20628615; PMCID: PMC2900286; DOI: 10.1371/journal.pcbi.1000846]
Abstract
Neuronal avalanches are a form of spontaneous activity widely observed in cortical slices and other types of nervous tissue, both in vivo and in vitro. They are characterized by irregular, isolated population bursts when many neurons fire together, where the number of spikes per burst obeys a power law distribution. We simulate, using the Gillespie algorithm, a model of neuronal avalanches based on stochastic single neurons. The network consists of excitatory and inhibitory neurons, first with all-to-all connectivity and later with random sparse connectivity. Analyzing our model using the system size expansion, we show that the model obeys the standard Wilson-Cowan equations for large network sizes. When excitation and inhibition are closely balanced, networks of thousands of neurons exhibit irregular synchronous activity, including the characteristic power law distribution of avalanche size. We show that these avalanches are due to the balanced network having weakly stable, functionally feedforward dynamics, which amplifies some small fluctuations into large population bursts. Balanced networks are thought to underlie a variety of observed network behaviours and have useful computational properties, such as responding quickly to changes in input. Thus, the appearance of avalanches in such functionally feedforward networks indicates that avalanches may be a simple consequence of a widely present network structure when neuron dynamics are noisy. An important implication is that a network need not be "critical" for the production of avalanches, so experimentally observed power laws in burst size may be a signature of noisy, functionally feedforward structure rather than of, for example, self-organized criticality.

Networks of neurons display a broad variety of behavior that can nonetheless often be described in very simple statistical terms. Here we explain the basis of one particularly striking statistical rule: in many systems, the likelihood that groups of neurons burst, or fire together, is linked to the number of neurons involved, or size of the burst, by a power law. The widespread presence of these so-called avalanches has been taken to mean that neuronal networks in general operate near criticality, the boundary between two different global behaviors. We model these neuronal avalanches within the context of a network of noisy excitatory and inhibitory neurons interconnected by several different connection rules. We find that neuronal avalanches arise in our model only when excitatory and inhibitory connections are balanced in such a way that small fluctuations in the difference of population activities feed forward into large fluctuations in the sum of activities, creating avalanches. In contrast with the notion that the ubiquity of neuronal avalanches implies that neuronal networks operate near criticality, our work shows that avalanches are ubiquitous because they arise naturally from a network structure, the noisy balanced network, which underlies a wide variety of models.
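A stripped-down version of such a simulation fits in a few lines: a single stochastic population with near-critical branching, simulated with the Gillespie algorithm, already produces broadly distributed avalanche sizes. The rates and gain below are assumed for illustration and omit the paper's explicit E-I structure:

```python
import numpy as np

rng = np.random.default_rng(5)
N, alpha, w, ext = 800, 1.0, 1.0, 1e-3   # assumed rates; w = alpha is critical
gain = lambda s: s / (1.0 + s)           # saturating activation function

k, t = 0, 0.0                            # active neurons, elapsed time
sizes, current = [], 0
while t < 2000.0:
    r_up = (N - k) * gain(w * k / N + ext)   # activation rate
    r_dn = alpha * k                         # deactivation rate
    total = r_up + r_dn
    t += rng.exponential(1.0 / total)        # Gillespie waiting time
    if rng.random() < r_up / total:
        k += 1
        current += 1                         # spike joins current avalanche
    else:
        k -= 1
        if k == 0:                           # network fell silent
            sizes.append(current)
            current = 0

sizes = np.asarray(sizes)
print(len(sizes), "avalanches; mean size", sizes.mean(), "max", sizes.max())
```

Because the activation and deactivation rates nearly cancel at low activity, small fluctuations are amplified into occasional large bursts, the same mechanism the paper attributes to weakly stable, functionally feedforward dynamics.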
Affiliations:
- Marc Benayoun: Department of Pediatrics, University of Chicago, Chicago, Illinois, United States of America
- Jack D. Cowan: Department of Mathematics, University of Chicago, Chicago, Illinois, United States of America
- Wim van Drongelen: Department of Pediatrics, University of Chicago, Chicago, Illinois, United States of America; Computation Institute, University of Chicago, Chicago, Illinois, United States of America
- Edward Wallace: Department of Mathematics, University of Chicago, Chicago, Illinois, United States of America

76. Coombes S. Large-scale neural dynamics: simple and complex. Neuroimage 2010; 52:731-9. [PMID: 20096791; DOI: 10.1016/j.neuroimage.2010.01.045]
Abstract
We review the use of neural field models for modelling the brain at the large scales necessary for interpreting EEG, fMRI, MEG, and optical imaging data. Albeit limited to coarse-grained or mean-field activity, neural field models provide a framework for unifying data from different imaging modalities. Starting with a description of neural mass models, we build up to spatially extended cortical models of layered two-dimensional sheets with long-range axonal connections mediating synaptic interactions. Reformulations of the fundamental non-local mathematical model in terms of more familiar local differential (brain wave) equations are described. Techniques for the analysis of such models, including how to determine the onset of spatio-temporal pattern-forming instabilities, are reviewed. Extensions of the basic formalism to treat refractoriness, adaptive feedback, and inhomogeneous connectivity are described, along with open challenges for the development of multi-scale models that can integrate macroscopic models at large spatial scales with models at the microscopic scale.
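A representative example of such a local reformulation is the long-wavelength damped-wave form used widely in this literature (notation assumed: axonal temporal damping rate γ, characteristic range r, field φ, drive f(V)):

```latex
\left[\left(\frac{1}{\gamma}\frac{\partial}{\partial t} + 1\right)^{2}
  - r^{2}\nabla^{2}\right]\phi(\mathbf{x},t) = f\big(V(\mathbf{x},t)\big)
```

This converts the non-local integral model into a PDE whose dispersion relation can be read off directly, which is what makes the pattern-forming instability analyses mentioned above tractable.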
Affiliations:
- S Coombes: School of Mathematical Sciences, University of Nottingham, Nottingham, NG7 2RD, UK