1
Kusch L, Depannemaecker D, Destexhe A, Jirsa V. Dynamics and Bifurcation Structure of a Mean-Field Model of Adaptive Exponential Integrate-and-Fire Networks. Neural Comput 2025; 37:1102-1123. PMID: 40262748. DOI: 10.1162/neco_a_01758.
Abstract
The study of brain activity spans diverse scales and levels of description and requires the development of computational models alongside experimental investigations to explore integration across scales. The high dimensionality of spiking networks presents challenges for understanding their dynamics. To tackle this, a mean-field formulation offers a potential approach for dimensionality reduction while retaining essential elements. Here, we focus on a previously developed mean-field model of adaptive exponential integrate-and-fire (AdEx) networks (AdExMF) that has been used in various research works. We observe qualitative similarities in the bifurcation structure but quantitative differences in mean firing rates between the mean-field model and AdEx spiking network simulations. Although the mean-field model does not accurately predict phase shifts during transients and oscillatory input, it generally captures the qualitative dynamics of the spiking network's response to both constant and varying inputs. Finally, we offer an overview of the dynamical properties of the AdExMF to assist future users in interpreting the results of their simulations.
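For readers unfamiliar with the single-neuron model underlying the network that the mean field reduces, a minimal forward-Euler sketch of one AdEx neuron may help; all parameter values below are illustrative textbook defaults, not those used in the paper:

```python
import numpy as np

def simulate_adex(I_ext=1.0e-9, T=0.5, dt=1e-4):
    """Forward-Euler integration of a single AdEx neuron (SI units).

    Parameter values are illustrative defaults, not the paper's."""
    C, gL, EL = 281e-12, 30e-9, -70.6e-3      # capacitance, leak conductance/reversal
    VT, DT = -50.4e-3, 2e-3                   # exponential threshold, slope factor
    tau_w, a, b = 144e-3, 4e-9, 80.5e-12      # adaptation time constant, coupling, jump
    V_peak, V_reset = 0.0, -70.6e-3           # spike detection and reset voltages
    V, w = EL, 0.0
    spikes = []
    for step in range(int(T / dt)):
        dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I_ext) / C
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_peak:                        # spike: reset V, increment adaptation
            V = V_reset
            w += b
            spikes.append(step * dt)
    return np.array(spikes)

spike_times = simulate_adex()
rate = len(spike_times) / 0.5                  # mean firing rate in Hz
```

The mean-field model discussed in the paper describes the population-averaged firing rate of many such neurons rather than individual trajectories like this one.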
Affiliation(s)
- Lionel Kusch
- Aix Marseille Univ, INSERM, INS, Inst Neurosci Syst, Marseille, France
- Damien Depannemaecker
- Aix Marseille Univ, INSERM, INS, Inst Neurosci Syst, Marseille, France
- Paris-Saclay University, Centre National de la Recherche Scientifique (CNRS), Institute of Neuroscience, 91198, Gif sur Yvette, France
- Alain Destexhe
- Paris-Saclay University, Centre National de la Recherche Scientifique (CNRS), Institute of Neuroscience, 91198, Gif sur Yvette, France
- Viktor Jirsa
- Aix Marseille Univ, INSERM, INS, Inst Neurosci Syst, Marseille, France
2
Deshpande SS, van Drongelen W. Third-order entropy for spatiotemporal neural network characterization. J Neurophysiol 2025; 133:1234-1244. PMID: 40098383. DOI: 10.1152/jn.00108.2024.
Abstract
The human brain comprises an intricate web of connections that generate complex neural networks capable of storing and processing information based on factors such as network structure, connectivity strength, and interactions. To further unravel and understand this information, we introduce third-order entropy, a new metric grounded in the Triple Correlation Uniqueness (TCU) theorem. Triple correlation, which provides a complete and unique characterization of the network, relates three nodes separated by up to two spatiotemporal lags. Based on these four lag dimensions, we evaluate third-order entropy from the spatiotemporal lag probability distribution function (PDF) of the network activity's triple correlation. Given a spike raster, we compute triple correlation by iterating over time and space. Summing the contributions to the triple correlation over each of the spatial and temporal lag combinations generates a 4-D spatiotemporal frequency histogram, from which we estimate a PDF and compute entropy. To validate our approach, we first estimate third-order entropy from feedforward motifs in a simulated spike raster and then simulate the effects of adding increasing motif-class structure to a Poisson-modeled spike raster. Finally, we apply this methodology to spiking activity recorded from rat cortical cultures and compare our results to previously published results of pairwise entropy over time. Although first- and second-order metrics of activity (spike rate and cross-correlation) agree with previously published results, our TCU-based third-order entropy computation is a more complete tool for neural network characterization and reveals a greater depth of underlying network organization than pairwise entropy.
NEW & NOTEWORTHY Here, we present third-order entropy built from triple correlation, which measures spatiotemporal interactions among up to three neurons. Per the Triple Correlation Uniqueness theorem, third-order entropy is based on a complete and unique characterization of the network. We first outline and validate the method and then apply it to an experimental dataset of rat cortical cultures. We show that the third-order entropy metric provides greater insight into network activity than pairwise entropy.
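The histogram-then-entropy pipeline described in the abstract can be sketched in a naive form: sum triple products of a binary raster over spatial and temporal lag combinations, normalize the resulting 4-D histogram into a PDF, and take its Shannon entropy. This is an illustration with circular boundary handling and small lag ranges, not the authors' implementation:

```python
import numpy as np

def third_order_entropy(raster, max_nlag=2, max_tlag=2):
    """Entropy of the spatiotemporal-lag distribution of a binary raster's
    triple correlation (naive sketch with circular boundaries)."""
    lags_n = range(-max_nlag, max_nlag + 1)
    lags_t = range(-max_tlag, max_tlag + 1)
    counts = []
    for n1 in lags_n:
        for t1 in lags_t:
            a = np.roll(np.roll(raster, -n1, axis=0), -t1, axis=1)
            for n2 in lags_n:
                for t2 in lags_t:
                    b = np.roll(np.roll(raster, -n2, axis=0), -t2, axis=1)
                    # triple correlation at lag combination (n1, t1, n2, t2)
                    counts.append(float(np.sum(raster * a * b)))
    p = np.array(counts)
    p = p / p.sum()                      # 4-D lag histogram -> PDF
    p = p[p > 0]
    return -np.sum(p * np.log2(p))       # third-order (Shannon) entropy

rng = np.random.default_rng(0)
raster = (rng.random((20, 200)) < 0.05).astype(float)  # Poisson-like raster
H = third_order_entropy(raster)
```

For a structureless raster the lag distribution is still dominated by the zero-lag and pairwise-coincidence terms, so H stays below the uniform maximum of log2(5**4) bits for these lag ranges.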
Affiliation(s)
- Sarita S Deshpande
- Medical Scientist Training Program, University of Chicago, Chicago, Illinois, United States
- Committee on Neurobiology, University of Chicago, Chicago, Illinois, United States
- Section of Pediatric Neurology, University of Chicago, Chicago, Illinois, United States
- Wim van Drongelen
- Committee on Neurobiology, University of Chicago, Chicago, Illinois, United States
- Section of Pediatric Neurology, University of Chicago, Chicago, Illinois, United States
- Committee on Computational Neuroscience, University of Chicago, Chicago, Illinois, United States
3
Pietras B. Pulse Shape and Voltage-Dependent Synchronization in Spiking Neuron Networks. Neural Comput 2024; 36:1476-1540. PMID: 39028958. DOI: 10.1162/neco_a_01680.
Abstract
Pulse-coupled spiking neural networks are a powerful tool to gain mechanistic insights into how neurons self-organize to produce coherent collective behavior. These networks use simple spiking neuron models, such as the θ-neuron or the quadratic integrate-and-fire (QIF) neuron, that replicate the essential features of real neural dynamics. Interactions between neurons are modeled with infinitely narrow pulses, or spikes, rather than the more complex dynamics of real synapses. To make these networks biologically more plausible, it has been proposed that they must also account for the finite width of the pulses, which can have a significant impact on the network dynamics. However, the derivation and interpretation of these pulses are contradictory, and the impact of the pulse shape on the network dynamics is largely unexplored. Here, I take a comprehensive approach to pulse coupling in networks of QIF and θ-neurons. I argue that narrow pulses activate voltage-dependent synaptic conductances and show how to implement them in QIF neurons such that their effect can last through the phase after the spike. Using an exact low-dimensional description for networks of globally coupled spiking neurons, I prove for instantaneous interactions that collective oscillations emerge due to an effective coupling through the mean voltage. I analyze the impact of the pulse shape by means of a family of smooth pulse functions with arbitrary finite width and symmetric or asymmetric shapes. For symmetric pulses, the resulting voltage coupling is not very effective in synchronizing neurons, but pulses that are slightly skewed to the phase after the spike readily generate collective oscillations. The results unveil a voltage-dependent spike synchronization mechanism at the heart of emergent collective behavior, which is facilitated by pulses of finite width and complementary to traditional synaptic transmission in spiking neuron networks.
Affiliation(s)
- Bastian Pietras
- Department of Information and Communication Technologies, Universitat Pompeu Fabra, 08018, Barcelona, Spain
4
Huang CH, Lin CCK. New biophysical rate-based modeling of long-term plasticity in mean-field neuronal population models. Comput Biol Med 2023; 163:107213. PMID: 37413849. DOI: 10.1016/j.compbiomed.2023.107213.
Abstract
The formation of customized neural networks as the basis of brain functions such as receptive field selectivity, learning or memory depends heavily on the long-term plasticity of synaptic connections. However, the current mean-field population models commonly used to simulate large-scale neural network dynamics lack explicit links to the underlying cellular mechanisms of long-term plasticity. In this study, we developed a new mean-field population model, the plastic density-based neural mass model (pdNMM), by incorporating a newly developed rate-based plasticity model based on the calcium control hypothesis into an existing density-based neural mass model. Derivation of the plasticity model was carried out using population density methods. Our results showed that the synaptic plasticity represented by the resulting rate-based plasticity model exhibited Bienenstock-Cooper-Munro-like learning rules. Furthermore, we demonstrated that the pdNMM accurately reproduced previous experimental observations of long-term plasticity, including characteristics of Hebbian plasticity such as longevity, associativity and input specificity, on hippocampal slices, and the formation of receptive field selectivity in the visual cortex. In conclusion, the pdNMM is a novel approach that can confer long-term plasticity to conventional mean-field neuronal population models.
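As context for the "Bienenstock-Cooper-Munro-like learning rules" mentioned above, here is a minimal sketch of the classic BCM rate rule with a sliding modification threshold (this is the textbook rule, not the paper's pdNMM derivation; all parameters are illustrative):

```python
import numpy as np

def bcm_weight_trace(x, w0=0.6, eta=0.01, tau_theta=5.0, dt=1.0):
    """Classic BCM rate rule with a sliding modification threshold:
    dw/dt = eta*x*y*(y - theta),  tau_theta*dtheta/dt = y**2 - theta,
    for a single linear unit y = w*x."""
    w, theta = w0, 1.0
    trace = []
    for xt in x:
        y = w * xt                                  # postsynaptic rate
        w += dt * eta * xt * y * (y - theta)        # BCM weight update
        theta += dt * (y**2 - theta) / tau_theta    # sliding threshold
        w = max(w, 0.0)                             # keep the weight non-negative
        trace.append(w)
    return np.array(trace)

# Constant presynaptic rate x = 2: w relaxes toward the fixed point y = theta = 1
ws = bcm_weight_trace(np.ones(500) * 2.0)
```

The sliding threshold (theta tracking the recent average of y squared) is what makes potentiation versus depression depend on postsynaptic activity history, the signature the abstract attributes to the derived rate-based rule.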
Affiliation(s)
- Chih-Hsu Huang
- Department of Neurology, College of Medicine, National Cheng Kung University, Tainan, Taiwan
- Chou-Ching K Lin
- Department of Neurology, College of Medicine, National Cheng Kung University, Tainan, Taiwan
- Department of Neurology, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, Taiwan
- Innovation Center of Medical Devices and Technology, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, Taiwan
- Medical Device Innovation Center, National Cheng Kung University, Tainan, Taiwan
5
Duchet B, Bick C, Byrne Á. Mean-Field Approximations With Adaptive Coupling for Networks With Spike-Timing-Dependent Plasticity. Neural Comput 2023; 35:1481-1528. PMID: 37437202. PMCID: PMC10422128. DOI: 10.1162/neco_a_01601.
Abstract
Understanding the effect of spike-timing-dependent plasticity (STDP) is key to elucidating how neural networks change over long timescales and to designing interventions aimed at modulating such networks in neurological disorders. However, progress is restricted by the significant computational cost associated with simulating neural network models with STDP and by the lack of a low-dimensional description that could provide analytical insights. Phase-difference-dependent plasticity (PDDP) rules approximate STDP in phase oscillator networks, prescribing synaptic changes based on the phase differences of neuron pairs rather than differences in spike timing. Here we construct mean-field approximations for phase oscillator networks with STDP to describe part of the phase space for this very high-dimensional system. We first show that single-harmonic PDDP rules can approximate a simple form of symmetric STDP, while multiharmonic rules are required to accurately approximate causal STDP. We then derive exact expressions for the evolution of the average PDDP coupling weight in terms of network synchrony. For adaptive networks of Kuramoto oscillators that form clusters, we formulate a family of low-dimensional descriptions based on the mean-field dynamics of each cluster and average coupling weights between and within clusters. Finally, we show that such a two-cluster mean-field model can be fitted to synthetic data to provide a low-dimensional approximation of a full adaptive network with symmetric STDP. Our framework represents a step toward a low-dimensional description of adaptive networks with STDP, and could for example inform the development of new therapies aimed at maximizing the long-lasting effects of brain stimulation.
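The idea of coupling weights that evolve with phase differences can be illustrated with a toy Kuramoto network under a single-harmonic, Hebbian-like PDDP rule. The specific update rule, the weak weight decay, and all parameter values below are illustrative assumptions, not the paper's model:

```python
import numpy as np

def kuramoto_pddp(N=100, K0=0.5, eps=0.01, T=200.0, dt=0.05, seed=0):
    """Kuramoto network whose coupling weights k_ij evolve under a
    single-harmonic, Hebbian-like PDDP rule: weights grow when the two
    phases align and decay weakly otherwise (illustrative toy model)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, N)
    omega = rng.normal(0.0, 0.1, N)               # heterogeneous frequencies
    k = np.full((N, N), K0)
    for _ in range(int(T / dt)):
        diff = theta[None, :] - theta[:, None]    # diff[i, j] = theta_j - theta_i
        theta += dt * (omega + (k * np.sin(diff)).mean(axis=1))
        k += dt * eps * (np.cos(diff) - 0.1 * k)  # single-harmonic PDDP update
    R = np.abs(np.mean(np.exp(1j * theta)))       # Kuramoto order parameter
    return R, k.mean()

R, k_mean = kuramoto_pddp()
```

Because the initial coupling is above the synchronization threshold for this frequency spread, the phases lock and the cos term then drives the average weight upward, the kind of synchrony-plasticity feedback the derived mean-field expressions describe.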
Affiliation(s)
- Benoit Duchet
- Nuffield Department of Clinical Neuroscience, University of Oxford, Oxford OX3 9DU, U.K.
- MRC Brain Network Dynamics Unit, University of Oxford, Oxford OX1 3TH, U.K.
- Christian Bick
- Department of Mathematics, Vrije Universiteit Amsterdam, Amsterdam 1081 HV, the Netherlands
- Amsterdam Neuroscience-Systems and Network Neuroscience, Amsterdam 1081 HV, the Netherlands
- Mathematical Institute, University of Oxford, Oxford OX2 6GG, U.K.
- Áine Byrne
- School of Mathematics and Statistics, University College Dublin, Dublin D04 V1W8, Ireland
6
Klinshov VV, Smelov PS, Kirillov SY. Constructive role of shot noise in the collective dynamics of neural networks. Chaos 2023; 33. PMID: 37276575. DOI: 10.1063/5.0147409.
Abstract
Finite-size effects may significantly influence the collective dynamics of large populations of neurons. Recently, we have shown that in globally coupled networks these effects can be interpreted as an additional common noise term, the so-called shot noise, added to the macroscopic dynamics unfolding in the thermodynamic limit. Here, we continue to explore the role of the shot noise in the collective dynamics of globally coupled neural networks. Namely, we study the noise-induced switching between different macroscopic regimes. We show that shot noise can turn attractors of the infinitely large network into metastable states whose lifetimes depend smoothly on the system parameters. A surprising effect is that the shot noise modifies the region where a certain macroscopic regime exists compared to the thermodynamic limit. This may be interpreted as a constructive role of the shot noise, since a certain macroscopic state appears in a parameter region where it does not exist in an infinite network.
Affiliation(s)
- V V Klinshov
- Institute of Applied Physics of the Russian Academy of Sciences, Ulyanova Street 46, 603950 Nizhny Novgorod, Russia
- National Research University Higher School of Economics, 25/12 Bol'shaya Pecherskaya Street, Nizhny Novgorod 603155, Russia
- P S Smelov
- Institute of Applied Physics of the Russian Academy of Sciences, Ulyanova Street 46, 603950 Nizhny Novgorod, Russia
- S Yu Kirillov
- Institute of Applied Physics of the Russian Academy of Sciences, Ulyanova Street 46, 603950 Nizhny Novgorod, Russia
7
Wang YC, Rudi J, Velasco J, Sinha N, Idumah G, Powers RK, Heckman CJ, Chardon MK. Multimodal parameter spaces of a complex multi-channel neuron model. Front Syst Neurosci 2022; 16:999531. PMID: 36341477. PMCID: PMC9632740. DOI: 10.3389/fnsys.2022.999531.
Abstract
One of the most common types of models that helps us to understand neuron behavior is based on the Hodgkin-Huxley ion channel formulation (HH model). A major challenge with inferring parameters in HH models is non-uniqueness: many different sets of ion channel parameter values produce similar outputs for the same input stimulus. Such phenomena result in an objective function that exhibits multiple modes (i.e., multiple local minima). This non-uniqueness of local optimality poses challenges for parameter estimation with many algorithmic optimization techniques. HH models additionally have severe non-linearities, resulting in further challenges for inferring parameters in an algorithmic fashion. To address these challenges with a tractable method in high-dimensional parameter spaces, we propose using a particular Markov chain Monte Carlo (MCMC) algorithm, which has the advantage of inferring parameters in a Bayesian framework. The Bayesian approach is designed to be suitable for multimodal solutions to inverse problems. We introduce and demonstrate the method using a three-channel HH model. We then focus on the inference of nine parameters in an eight-channel HH model, which we analyze in detail. We explore how the MCMC algorithm can uncover complex relationships between inferred parameters using five injected current levels. The MCMC method yields a nine-dimensional posterior distribution, which we analyze visually with solution maps, or landscapes, of the possible parameter sets. The visualized solution maps reveal complex structures of the multimodal posteriors, allow for selection of locally and globally optimal value sets, and visually expose parameter sensitivities and regions of higher model robustness. We envision these solution maps as enabling experimentalists to improve the design of future experiments, increase scientific productivity, and improve model structure and ideation when the MCMC algorithm is applied to experimental data.
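The core idea of sampling a multimodal posterior can be illustrated with a plain random-walk Metropolis sampler on a toy one-dimensional bimodal target; the study itself applies a more sophisticated MCMC algorithm to HH models, so everything below is a simplified stand-in:

```python
import numpy as np

def metropolis(logp, x0, n_steps=20000, step=2.0, seed=0):
    """Random-walk Metropolis sampling of an unnormalized log-density."""
    rng = np.random.default_rng(seed)
    x, lp = x0, logp(x0)
    samples = np.empty(n_steps)
    for i in range(n_steps):
        prop = x + step * rng.normal()           # symmetric Gaussian proposal
        lp_prop = logp(prop)
        if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
            x, lp = prop, lp_prop
        samples[i] = x
    return samples

# Toy bimodal target (modes near +/-2) standing in for a multimodal posterior
log_target = lambda x: -0.5 * (x**2 - 4.0)**2
s = metropolis(log_target, 0.0)
```

With a proposal step wide enough to hop the barrier between modes, the chain visits both modes, which is exactly the behavior needed to map out non-unique parameter sets rather than getting stuck in one local optimum.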
Affiliation(s)
- Y. Curtis Wang
- Department of Electrical and Computer Engineering, California State University, Los Angeles, Los Angeles, CA, United States
- Johann Rudi
- Department of Mathematics, Virginia Tech, Blacksburg, VA, United States
- James Velasco
- Department of Electrical and Computer Engineering, California State University, Los Angeles, Los Angeles, CA, United States
- Nirvik Sinha
- Interdepartmental Neuroscience, Northwestern University, Chicago, IL, United States
- Gideon Idumah
- Department of Mathematics, Case Western Reserve University, Cleveland, OH, United States
- Randall K. Powers
- Department of Physiology and Biophysics, University of Washington, Seattle, WA, United States
- Charles J. Heckman
- Department of Neuroscience, Northwestern University, Chicago, IL, United States
- Physical Medicine and Rehabilitation, Shirley Ryan Ability Lab, Chicago, IL, United States
- Physical Therapy and Human Movement Sciences, Northwestern University, Chicago, IL, United States
- Matthieu K. Chardon
- Department of Neuroscience, Northwestern University, Chicago, IL, United States
- Northwestern-Argonne Institute of Science and Engineering, Evanston, IL, United States
8
Osborne H, de Kamps M. A numerical population density technique for N-dimensional neuron models. Front Neuroinform 2022; 16:883796. PMID: 35935536. PMCID: PMC9354936. DOI: 10.3389/fninf.2022.883796.
Abstract
Population density techniques can be used to simulate the behavior of a population of neurons that adhere to a common underlying neuron model. They have previously been used for analyzing models of orientation tuning and decision-making tasks. They produce a fully deterministic solution to neural simulations that often involve a non-deterministic or noise component. Until now, numerical population density techniques have been limited to only one- and two-dimensional models. For the first time, we demonstrate a method to take an N-dimensional underlying neuron model and simulate the behavior of a population. The technique enables so-called graceful degradation of the dynamics, allowing a balance between accuracy and simulation speed while maintaining important behavioral features such as rate curves and bifurcations. It is an extension of the numerical population density technique implemented in the MIIND software framework that simulates networks of populations of neurons. Here, we describe the extension to N dimensions and simulate populations of leaky integrate-and-fire neurons with excitatory and inhibitory synaptic conductances, then demonstrate the effect of degrading the accuracy on the solution. We also simulate two separate populations in an E-I configuration to demonstrate the technique's ability to capture complex behaviors of interacting populations. Finally, we simulate a population of four-dimensional Hodgkin-Huxley neurons under the influence of noise. Though the MIIND software has been used only for neural modeling up to this point, the technique can be used to simulate the behavior of a population of agents adhering to any system of ordinary differential equations under the influence of shot noise. MIIND has been modified to render a visualization of any three dimensions of an N-dimensional state space of a population, which encourages fast model prototyping and debugging and could prove a useful educational tool for understanding dynamical systems.
Affiliation(s)
- Hugh Osborne
- School of Computing, University of Leeds, Leeds, United Kingdom
- Marc de Kamps
- School of Computing, University of Leeds, Leeds, United Kingdom
- Leeds Institute for Data Analytics, University of Leeds, Leeds, United Kingdom
- The Alan Turing Institute, London, United Kingdom
- Correspondence: Marc de Kamps
9
Short-Term Synaptic Plasticity: Microscopic Modelling and (Some) Computational Implications. Adv Exp Med Biol 2022; 1359:105-121. DOI: 10.1007/978-3-030-89439-9_5.
10
A Modular Workflow for Model Building, Analysis, and Parameter Estimation in Systems Biology and Neuroscience. Neuroinformatics 2022; 20:241-259. PMID: 34709562. PMCID: PMC9537196. DOI: 10.1007/s12021-021-09546-3.
Abstract
Neuroscience incorporates knowledge from a range of scales, from single molecules to brain-wide neural networks. Modeling is a valuable tool in understanding processes at a single scale or the interactions between two adjacent scales, and researchers use a variety of different software tools in the model building and analysis process. Here we focus on the scale of biochemical pathways, which is one of the main objects of study in systems biology. While systems biology is among the more standardized fields, conversion between different model formats and interoperability between various tools remain somewhat problematic. To address these shortcomings while keeping in mind the FAIR (findability, accessibility, interoperability, reusability) data principles, we have developed a workflow for building and analyzing biochemical pathway models, using pre-existing tools that can be utilized for the storage and refinement of models in all phases of development. We chose the SBtab format, which allows the storage of biochemical models and associated data in a single file and provides a human-readable set of syntax rules. Next, we implemented custom MATLAB® scripts to perform the parameter estimation and global sensitivity analysis used in model refinement. Additionally, we developed a web-based application for biochemical models that allows simulations with either a network-free solver or stochastic solvers and can incorporate geometry. Finally, we illustrate the convertibility and use of a biochemical model in a biophysically detailed single-neuron model by running multiscale simulations in NEURON. Using this workflow, we can simulate the same model in three different simulators, with smooth conversion between the different model formats, enhancing the characterization of different aspects of the model.
11
Gast R, Knösche TR, Schmidt H. Mean-field approximations of networks of spiking neurons with short-term synaptic plasticity. Phys Rev E 2021; 104:044310. PMID: 34781468. DOI: 10.1103/physreve.104.044310.
Abstract
Low-dimensional descriptions of spiking neural network dynamics are an effective tool for bridging different scales of organization of brain structure and function. Recent advances in deriving mean-field descriptions for networks of coupled oscillators have sparked the development of a new generation of neural mass models. Of notable interest are mean-field descriptions of all-to-all coupled quadratic integrate-and-fire (QIF) neurons, which have already seen numerous extensions and applications. These extensions include different forms of short-term adaptation considered to play an important role in generating and sustaining dynamic regimes of interest in the brain. It is an open question, however, whether the incorporation of presynaptic forms of synaptic plasticity driven by single neuron activity would still permit the derivation of mean-field equations using the same method. Here we discuss this problem using an established model of short-term synaptic plasticity at the single neuron level, for which we present two different approaches for the derivation of the mean-field equations. We compare these models with a recently proposed mean-field approximation that assumes stochastic spike timings. In general, the latter fails to accurately reproduce the macroscopic activity in networks of deterministic QIF neurons with distributed parameters. We show that the mean-field models we propose provide a more accurate description of the network dynamics, although they are mathematically more involved. Using bifurcation analysis, we find that QIF networks with presynaptic short-term plasticity can express regimes of periodic bursting activity as well as bistable regimes. Together, we provide novel insight into the macroscopic effects of short-term synaptic plasticity in spiking neural networks, as well as two different mean-field descriptions for future investigations of such networks.
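The exact mean-field description of all-to-all coupled QIF neurons that such extensions build on is a two-dimensional system for the population firing rate r and mean membrane potential v, which can be integrated directly. The equations below follow the widely used Lorentzian-ansatz form; the parameter values are illustrative, not from the paper:

```python
import numpy as np

def qif_mean_field(eta=-5.0, J=15.0, delta=1.0, T=40.0, dt=1e-3,
                   r0=0.1, v0=-1.0):
    """Forward-Euler integration of the two-dimensional exact mean field
    for all-to-all pulse-coupled QIF neurons with Lorentzian-distributed
    excitabilities (center eta, half-width delta, coupling J; time in
    units of the membrane time constant)."""
    r, v = r0, v0
    for _ in range(int(T / dt)):
        dr = delta / np.pi + 2.0 * r * v            # firing-rate equation
        dv = v**2 + eta + J * r - (np.pi * r)**2    # mean-voltage equation
        r += dt * dr
        v += dt * dv
    return r, v

r, v = qif_mean_field()
```

Short-term plasticity extensions of the kind discussed in the abstract add further variables (e.g., synaptic depression and facilitation) on top of this (r, v) core.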
Affiliation(s)
- Richard Gast
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Thomas R Knösche
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Helmut Schmidt
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
12
Zendrikov D, Paraskevov A. Emergent population activity in metric-free and metric networks of neurons with stochastic spontaneous spikes and dynamic synapses. Neurocomputing 2021. DOI: 10.1016/j.neucom.2020.11.073.
13
Schwalger T. Mapping input noise to escape noise in integrate-and-fire neurons: a level-crossing approach. Biol Cybern 2021; 115:539-562. PMID: 34668051. PMCID: PMC8551127. DOI: 10.1007/s00422-021-00899-1.
Abstract
Noise in spiking neurons is commonly modeled by a noisy input current or by generating output spikes stochastically with a voltage-dependent hazard rate ("escape noise"). While input noise lends itself to modeling biophysical noise processes, the phenomenological escape noise is mathematically more tractable. Using the level-crossing theory for differentiable Gaussian processes, we derive an approximate mapping between colored input noise and escape noise in leaky integrate-and-fire neurons. This mapping requires the first-passage-time (FPT) density of an overdamped Brownian particle driven by colored noise with respect to an arbitrarily moving boundary. Starting from the Wiener-Rice series for the FPT density, we apply the second-order decoupling approximation of Stratonovich to the case of moving boundaries and derive a simplified hazard-rate representation that is local in time and numerically efficient. This simplification requires the calculation of the non-stationary auto-correlation function of the level-crossing process: for exponentially correlated input noise (Ornstein-Uhlenbeck process), we obtain an exact formula for the zero-lag auto-correlation as a function of noise parameters, mean membrane potential and its speed, as well as an exponential approximation of the full auto-correlation function. The theory accurately predicts the FPT and interspike interval densities as well as the population activities obtained from simulations with colored input noise and time-dependent stimulus or boundary. The agreement with simulations is strongly enhanced across the sub- and suprathreshold firing regime compared to a first-order decoupling approximation that neglects correlations between level crossings. The second-order approximation also improves upon a previously proposed theory in the subthreshold regime. Depending on a simplicity-accuracy trade-off, all considered approximations represent useful mappings from colored input noise to escape noise, enabling progress in the theory of neuronal population dynamics.
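The escape-noise picture that the mapping targets can be sketched with a leaky integrate-and-fire neuron whose spikes are drawn stochastically from an exponential hazard of the membrane potential, a standard phenomenological form; all parameter values below are illustrative assumptions:

```python
import numpy as np

def lif_escape_noise(I=1.2, T=200.0, dt=0.01, tau=10.0,
                     rho0=0.5, v_th=1.0, delta_u=0.2, seed=1):
    """Leaky integrate-and-fire neuron with escape noise: instead of a hard
    threshold, spikes occur stochastically with the voltage-dependent hazard
    rho(v) = rho0 * exp((v - v_th) / delta_u).
    Dimensionless voltage; time in ms; parameters are illustrative."""
    rng = np.random.default_rng(seed)
    v, n_spikes = 0.0, 0
    for _ in range(int(T / dt)):
        v += dt * (-v + I) / tau                       # leaky integration
        hazard = rho0 * np.exp((v - v_th) / delta_u)   # escape rate (1/ms)
        if rng.random() < 1.0 - np.exp(-hazard * dt):  # stochastic spike
            v = 0.0                                    # reset after spike
            n_spikes += 1
    return n_spikes / T                                # mean rate in spikes/ms

rate = lif_escape_noise()
```

The paper's contribution is the reverse direction: deriving an effective hazard like rho(v) from a neuron driven by colored input noise, so that this cheap stochastic-threshold simulation reproduces the input-noise statistics.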
Affiliation(s)
- Tilo Schwalger
- Institute of Mathematics, Technical University Berlin, 10623, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, 10115, Berlin, Germany
14
Pietras B, Gallice N, Schwalger T. Low-dimensional firing-rate dynamics for populations of renewal-type spiking neurons. Phys Rev E 2021; 102:022407. PMID: 32942450. DOI: 10.1103/physreve.102.022407.
Abstract
The macroscopic dynamics of large populations of neurons can be mathematically analyzed using low-dimensional firing-rate or neural-mass models. However, these models fail to capture spike synchronization effects and nonstationary responses of the population activity to rapidly changing stimuli. Here we derive low-dimensional firing-rate models for homogeneous populations of neurons modeled as time-dependent renewal processes. The class of renewal neurons includes integrate-and-fire models driven by white noise and has been frequently used to model neuronal refractoriness and spike synchronization dynamics. The derivation is based on an eigenmode expansion of the associated refractory density equation, which generalizes previous spectral methods for Fokker-Planck equations to arbitrary renewal models. We find a simple relation between the eigenvalues characterizing the timescales of the firing rate dynamics and the Laplace transform of the interspike interval density, for which explicit expressions are available for many renewal models. Retaining only the first eigenmode already yields a reliable low-dimensional approximation of the firing-rate dynamics that captures spike synchronization effects and fast transient dynamics at stimulus onset. We explicitly demonstrate the validity of our model for a large homogeneous population of Poisson neurons with absolute refractoriness and other renewal models that admit an explicit analytical calculation of the eigenvalues. The eigenmode expansion presented here provides a systematic framework for alternative firing-rate models in computational neuroscience based on spiking neuron dynamics with refractoriness.
Affiliation(s)
- Bastian Pietras: Institute of Mathematics, Technical University Berlin, 10623 Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, 10115 Berlin, Germany
- Noé Gallice: Brain Mind Institute, École polytechnique fédérale de Lausanne (EPFL), Station 15, CH-1015 Lausanne, Switzerland
- Tilo Schwalger: Institute of Mathematics, Technical University Berlin, 10623 Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, 10115 Berlin, Germany
|
15
|
Rossbroich J, Trotter D, Beninger J, Tóth K, Naud R. Linear-nonlinear cascades capture synaptic dynamics. PLoS Comput Biol 2021; 17:e1008013. [PMID: 33720935 PMCID: PMC7993773 DOI: 10.1371/journal.pcbi.1008013] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2020] [Revised: 03/25/2021] [Accepted: 02/25/2021] [Indexed: 11/18/2022] Open
Abstract
Short-term synaptic dynamics differ markedly across connections and strongly regulate how action potentials communicate information. To model the range of synaptic dynamics observed in experiments, we have developed a flexible mathematical framework based on a linear-nonlinear operation. This model can capture various experimentally observed features of synaptic dynamics and different types of heteroskedasticity. Despite its conceptual simplicity, the model is more adaptable than previous approaches. Combined with a standard maximum likelihood approach, synaptic dynamics can be accurately and efficiently characterized using naturalistic stimulation patterns. These results make explicit that synaptic processing bears algorithmic similarities with information processing in convolutional neural networks.
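The linear-nonlinear operation described in this abstract can be sketched in a few lines (a hypothetical toy, not the fitted model from the paper; the exponential kernel, its amplitude, and the sigmoid readout are assumptions): the presynaptic spike train is filtered by a causal kernel, and a sigmoid maps the filtered trace to a per-spike efficacy. A negative kernel amplitude yields short-term depression, a positive one facilitation.

```python
import numpy as np

def ln_synapse_efficacy(spike_times, amp, tau, baseline):
    """Linear-nonlinear sketch of short-term synaptic dynamics."""
    eff = []
    for i, t in enumerate(spike_times):
        # Linear stage: sum of a causal exponential kernel
        # k(s) = amp * exp(-s / tau) over all past spikes.
        drive = baseline
        for s in spike_times[:i]:
            drive += amp * np.exp(-(t - s) / tau)
        # Nonlinear stage: sigmoid maps drive to an efficacy in (0, 1).
        eff.append(1.0 / (1.0 + np.exp(-drive)))
    return np.array(eff)

# Regular 20 Hz presynaptic train over 0.5 s (illustrative parameters).
train = np.arange(0.0, 0.5, 0.05)
depressing = ln_synapse_efficacy(train, amp=-2.0, tau=0.2, baseline=1.0)
facilitating = ln_synapse_efficacy(train, amp=+2.0, tau=0.2, baseline=-1.0)
```

With the negative-amplitude kernel the per-spike efficacy decays along the train; flipping the sign of the amplitude (and the baseline) produces the opposite, facilitating profile.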
Affiliation(s)
- Julian Rossbroich: Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland
- Daniel Trotter: Department of Physics, University of Ottawa, Ottawa, ON, Canada
- John Beninger: uOttawa Brain Mind Institute, Center for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada
- Katalin Tóth: uOttawa Brain Mind Institute, Center for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada
- Richard Naud: Department of Physics, University of Ottawa, Ottawa, ON, Canada; uOttawa Brain Mind Institute, Center for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada
|
16
|
Exact neural mass model for synaptic-based working memory. PLoS Comput Biol 2020; 16:e1008533. [PMID: 33320855 PMCID: PMC7771880 DOI: 10.1371/journal.pcbi.1008533] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2020] [Revised: 12/29/2020] [Accepted: 11/12/2020] [Indexed: 01/29/2023] Open
Abstract
A synaptic theory of working memory (WM) has been developed over the last decade as a possible alternative to the persistent-spiking paradigm. In this context, we have developed a neural mass model able to reproduce exactly the dynamics of heterogeneous spiking neural networks encompassing realistic cellular mechanisms for short-term synaptic plasticity. This population model reproduces the macroscopic dynamics of the network in terms of the firing rate and the mean membrane potential. The latter quantity allows us to gain insight into the local field potential and electroencephalographic signals measured during WM tasks to characterize brain activity. More specifically, synaptic facilitation and depression complement each other to efficiently mimic WM operations via either synaptic reactivation or persistent activity. Memory access and loading are related to stimulus-locked transient oscillations followed by steady-state activity in the β-γ band, thus resembling what is observed in the cortex during vibrotactile stimuli in humans and object recognition in monkeys. Memory juggling and competition emerge already when only two items are loaded. However, more items can be stored in WM by considering neural architectures composed of multiple excitatory populations and a common inhibitory pool. Memory capacity depends strongly on the presentation rate of the items and is maximized within an optimal frequency range; in particular, we provide an analytic expression for the maximal memory capacity. Furthermore, the mean membrane potential turns out to be a suitable proxy for the memory load, analogously to event-related potentials in experiments on humans. Finally, we show that the γ power increases with the number of loaded items, as reported in many experiments, while θ and β power reveal non-monotonic behaviours. In particular, β and γ rhythms are crucially sustained by inhibitory activity, while the θ rhythm is controlled by excitatory synapses.
Working memory (WM) is the ability to temporarily store and manipulate stimulus representations that are no longer available to the senses. We have developed an innovative coarse-grained population model able to mimic several operations associated with WM. The novelty of the model lies in reproducing exactly the dynamics of spiking neural networks with realistic synaptic plasticity, composed of hundreds of thousands of neurons, in terms of a few macroscopic variables. These variables give access to experimentally measurable quantities such as local field potentials and electroencephalographic signals. Memory operations are accompanied by sustained or transient oscillations emerging in different frequency bands, in accordance with experimental results for primates and humans performing WM tasks. We have designed an architecture composed of many excitatory populations and a common inhibitory pool that can store and retain several memory items. The capacity of our multi-item architecture is around 3–5 items, a value similar to the WM capacities measured in many experiments. Furthermore, the maximal capacity is achievable only for presentation rates within an optimal frequency range. Finally, we have defined a measure of the memory load analogous to the event-related potentials employed to test human WM capacity during visual memory tasks.
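The kind of exact firing-rate reduction this abstract builds on can be sketched without the short-term plasticity extension (a minimal illustration of the two-variable mean-field equations for heterogeneous quadratic integrate-and-fire neurons; the parameter values and initial condition below are assumptions, not the paper's fitted model):

```python
import numpy as np

# Euler integration of the two-variable exact mean-field equations
# (firing rate r, mean membrane potential v) for a heterogeneous
# quadratic integrate-and-fire population:
#   dr/dt = Delta/pi + 2*r*v
#   dv/dt = v**2 + eta + J*r - (pi*r)**2
Delta, eta, J = 1.0, -5.0, 15.0   # illustrative parameters (dimensionless time)
dt, steps = 1e-3, 40_000
r, v = 0.1, -2.0                  # illustrative initial condition
trace = np.empty(steps)
for i in range(steps):
    dr = Delta / np.pi + 2.0 * r * v
    dv = v * v + eta + J * r - (np.pi * r) ** 2
    r, v = r + dt * dr, v + dt * dv   # simultaneous Euler update
    trace[i] = r
```

The firing rate stays positive and bounded as the trajectory relaxes toward a fixed point; the working-memory model in the paper augments this type of system with facilitation and depression variables.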
|