1. Lalwani P, Polk T, Garrett DD. Modulation of brain signal variability in visual cortex reflects aging, GABA, and behavior. eLife 2025; 14:e83865. [PMID: 40243542] [PMCID: PMC12005714] [DOI: 10.7554/elife.83865] [Received: 09/30/2022] [Accepted: 11/30/2024]
Abstract
Moment-to-moment neural variability has been shown to scale positively with the complexity of stimulus input. However, the mechanisms underlying the ability to align variability to input complexity are unknown. Using a combination of behavioral methods, computational modeling, fMRI, MR spectroscopy, and pharmacological intervention, we investigated the role of aging and GABA in neural variability during visual processing. We replicated previous findings that participants expressed higher variability when viewing more complex visual stimuli. Additionally, we found that such variability modulation was associated with higher baseline visual GABA levels and was reduced in older adults. When pharmacologically increasing GABA activity, we found that participants with lower baseline GABA levels showed a drug-related increase in variability modulation while participants with higher baseline GABA showed no change or even a reduction, consistent with an inverted-U account. Finally, higher baseline GABA and variability modulation were jointly associated with better visual-discrimination performance. These results suggest that GABA plays an important role in how humans utilize neural variability to adapt to the complexity of the visual world.
Affiliation(s)
- Poortata Lalwani, Department of Psychology, University of Michigan, Ann Arbor, United States
- Thad Polk, Department of Psychology, University of Michigan, Ann Arbor, United States
- Douglas D Garrett, Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Berlin, Germany; Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany
2. Mattera A, Alfieri V, Granato G, Baldassarre G. Chaotic recurrent neural networks for brain modelling: A review. Neural Netw 2025; 184:107079. [PMID: 39756119] [DOI: 10.1016/j.neunet.2024.107079] [Received: 07/06/2024] [Revised: 11/25/2024] [Accepted: 12/19/2024]
Abstract
Even in the absence of external stimuli, the brain is spontaneously active. Indeed, most cortical activity is internally generated by recurrence. Both theoretical and experimental studies suggest that chaotic dynamics characterize this spontaneous activity. While the precise function of brain chaotic activity is still puzzling, we know that chaos confers many advantages. From a computational perspective, chaos enhances the complexity of network dynamics. From a behavioural point of view, chaotic activity could generate the variability required for exploration. Furthermore, information storage and transfer are maximized at the critical border between order and chaos. Despite these benefits, many computational brain models avoid incorporating spontaneous chaotic activity due to the challenges it poses for learning algorithms. In recent years, however, multiple approaches have been proposed to overcome this limitation. As a result, many different algorithms have been developed, initially within the reservoir computing paradigm. Over time, the field has evolved to increase the biological plausibility and performance of the algorithms, sometimes going beyond the reservoir computing framework. In this review article, we examine the computational benefits of chaos and the unique properties of chaotic recurrent neural networks, with a particular focus on those typically utilized in reservoir computing. We also provide a detailed analysis of the algorithms designed to train chaotic RNNs, tracing their historical evolution and highlighting key milestones in their development. Finally, we explore the applications and limitations of chaotic RNNs for brain modelling, consider their potential broader impacts beyond neuroscience, and outline promising directions for future research.
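The transition to chaos that this literature builds on can be reproduced in a few lines. Below is a minimal, stdlib-only sketch (the function name, network size, and gain values are our own illustrative choices, not from the review): a random discrete-time tanh network whose coupling standard deviation g/√N sets the spectral radius near g. For g < 1 activity decays to the silent fixed point; for g > 1 the network sustains irregular activity, the chaotic regime that reservoir computing exploits as a dynamical substrate.

```python
import math
import random

random.seed(0)
N = 80  # reservoir size

def sustained_activity(g, steps=150):
    """Iterate x <- tanh(W x) for a random coupling matrix whose entries
    have standard deviation g / sqrt(N), so the spectral radius is ~g.
    Returns the RMS activity after the transient."""
    W = [[random.gauss(0.0, g / math.sqrt(N)) for _ in range(N)] for _ in range(N)]
    x = [0.1 * random.gauss(0.0, 1.0) for _ in range(N)]
    for _ in range(steps):
        x = [math.tanh(sum(W[i][j] * x[j] for j in range(N))) for i in range(N)]
    return math.sqrt(sum(v * v for v in x) / N)

quiet = sustained_activity(0.5)    # subcritical: decays to the silent state
chaotic = sustained_activity(1.5)  # supercritical: self-sustained irregular activity
```

In a reservoir-computing setup, only a linear readout of the chaotic state vector would be trained, which is precisely why the training algorithms surveyed in this review can sidestep the difficulties that chaos poses for gradient-based learning.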
Affiliation(s)
- Andrea Mattera, Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy
- Valerio Alfieri, Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy; International School of Advanced Studies, Center for Neuroscience, University of Camerino, Via Gentile III Da Varano, 62032, Camerino, Italy
- Giovanni Granato, Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy
- Gianluca Baldassarre, Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy
3. Floriach P, Garcia-Ojalvo J, Clusella P. From chimeras to extensive chaos in networks of heterogeneous Kuramoto oscillator populations. Chaos 2025; 35:023115. [PMID: 39899579] [DOI: 10.1063/5.0243379] [Received: 10/10/2024] [Accepted: 01/16/2025]
Abstract
Populations of coupled oscillators can exhibit a wide range of complex dynamical behavior, from complete synchronization to chimera and chaotic states. We can thus expect complex dynamics to arise in networks of such populations. Here, we analyze the dynamics of networks of populations of heterogeneous mean-field coupled Kuramoto-Sakaguchi oscillators and show that the instability that leads to chimera states in a simple two-population model also leads to extensive chaos in large networks of coupled populations. Formally, the system consists of a complex network of oscillator populations whose mesoscopic behavior evolves according to the Ott-Antonsen equations. By considering identical parameters across populations, the system contains a manifold of homogeneous solutions where all populations behave identically. Stability analysis of these homogeneous states via the master stability function formalism shows that non-trivial dynamics can emerge in a wide region of the parameter space for arbitrary network topologies. As examples, we first revisit the two-population case and provide a complete bifurcation diagram. Then, we investigate the emergent dynamics in large ring and Erdős-Rényi networks. In both cases, transverse instabilities lead to extensive space-time chaos, i.e., irregular regimes whose complexity scales linearly with the system size. Our work provides a unified analytical framework to understand the emergent dynamics of networks of oscillator populations, from chimera states to robust high-dimensional chaos.
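Since the mesoscopic dynamics in this framework obey the Ott-Antonsen equations, the simplest member of the family is worth writing down in code. The sketch below (stdlib only; the function name, parameter values, and initial condition are our own choices) Euler-integrates the single-population Ott-Antonsen equation for Kuramoto oscillators with Lorentzian frequency spread Δ and zero Sakaguchi phase lag, recovering the classic result that the order parameter vanishes below K_c = 2Δ and settles at r = √(1 - 2Δ/K) above it.

```python
def oa_order_parameter(K, delta=0.5, dt=0.01, t_end=200.0):
    """Euler-integrate the single-population Ott-Antonsen equation
        dz/dt = -delta * z + (K / 2) * (z - conj(z) * z**2)
    (Kuramoto coupling K, Lorentzian half-width delta, zero phase lag)
    and return the final order parameter r = |z|."""
    z = 0.5 + 0.0j                      # partially coherent initial condition
    for _ in range(int(t_end / dt)):
        z += dt * (-delta * z + 0.5 * K * (z - z.conjugate() * z * z))
    return abs(z)

r_sub = oa_order_parameter(K=0.5)   # below K_c = 2*delta: incoherence, r -> 0
r_sup = oa_order_parameter(K=2.0)   # above K_c: partial synchrony, r -> sqrt(1 - 2*delta/K)
```

The paper's networks couple many such complex-valued mean-field variables z_k, one per population, with intra- and inter-population couplings and a phase lag; the transverse instabilities of the homogeneous solutions of that coupled system are what generate chimeras and extensive chaos.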
Affiliation(s)
- Pol Floriach, Department of Mathematical Sciences, University of Copenhagen, Copenhagen DK-2100, Denmark
- Jordi Garcia-Ojalvo, Department of Medicine and Life Sciences, Universitat Pompeu Fabra, Barcelona 08003, Spain
- Pau Clusella, EPSEM, Departament de Matemàtiques, Universitat Politècnica de Catalunya, Manresa 08242, Spain
4. Goldobin DS, di Volo M, Torcini A. Discrete Synaptic Events Induce Global Oscillations in Balanced Neural Networks. Phys Rev Lett 2024; 133:238401. [PMID: 39714685] [DOI: 10.1103/physrevlett.133.238401] [Received: 11/10/2023] [Revised: 06/05/2024] [Accepted: 10/31/2024]
Abstract
Although neural dynamics is triggered by discrete synaptic events, the neural response is usually obtained within the diffusion approximation, which represents the synaptic inputs as Gaussian noise. We derive a mean-field formalism encompassing synaptic shot noise for sparse balanced neural networks. For low (high) excitatory drive (inhibitory feedback), global oscillations emerge via continuous or hysteretic transitions that are correctly predicted by our approach but not by the diffusion approximation. At sufficiently low in-degrees, the nature of these global oscillations changes from drift-driven dynamics to cluster activation.
Affiliation(s)
- Alessandro Torcini, Laboratoire de Physique Théorique et Modélisation, CY Cergy Paris Université, CNRS, UMR 8089, 95302 Cergy-Pontoise cedex, France; CNR-Consiglio Nazionale delle Ricerche-Istituto dei Sistemi Complessi, via Madonna del Piano 10, 50019 Sesto Fiorentino, Italy; INFN Sezione di Firenze, Via Sansone 1, 50019 Sesto Fiorentino, Italy
5. Pronold J, van Meegen A, Shimoura RO, Vollenbröker H, Senden M, Hilgetag CC, Bakker R, van Albada SJ. Multi-scale spiking network model of human cerebral cortex. Cereb Cortex 2024; 34:bhae409. [PMID: 39428578] [PMCID: PMC11491286] [DOI: 10.1093/cercor/bhae409] [Received: 11/03/2023] [Revised: 09/15/2024] [Accepted: 09/24/2024]
Abstract
Although the structure of cortical networks provides the necessary substrate for their neuronal activity, the structure alone does not suffice to understand the activity. Leveraging the increasing availability of human data, we developed a multi-scale, spiking network model of human cortex to investigate the relationship between structure and dynamics. In this model, each area in one hemisphere of the Desikan-Killiany parcellation is represented by a 1 mm² column with a layered structure. The model aggregates data across multiple modalities, including electron microscopy, electrophysiology, morphological reconstructions, and diffusion tensor imaging, into a coherent framework. It predicts activity on all scales from the single-neuron spiking activity to the area-level functional connectivity. We compared the model activity with human electrophysiological data and human resting-state functional magnetic resonance imaging (fMRI) data. This comparison reveals that the model can reproduce aspects of both spiking statistics and fMRI correlations if the inter-areal connections are sufficiently strong. Furthermore, we study the propagation of a single-spike perturbation and macroscopic fluctuations through the network. The open-source model serves as an integrative platform for further refinements and future in silico studies of human cortical structure, dynamics, and function.
Affiliation(s)
- Jari Pronold, Institute for Advanced Simulation (IAS-6), Jülich Research Centre, D-52428 Jülich, Germany; RWTH Aachen University, D-52062 Aachen, Germany
- Alexander van Meegen, Institute for Advanced Simulation (IAS-6), Jülich Research Centre, D-52428 Jülich, Germany; Institute of Zoology, University of Cologne, D-50674 Cologne, Germany
- Renan O Shimoura, Institute for Advanced Simulation (IAS-6), Jülich Research Centre, D-52428 Jülich, Germany
- Hannah Vollenbröker, Institute for Advanced Simulation (IAS-6), Jülich Research Centre, D-52428 Jülich, Germany; Heinrich Heine University Düsseldorf, D-40225 Düsseldorf, Germany
- Mario Senden, Faculty of Psychology and Neuroscience, Department of Cognitive Neuroscience, Maastricht University, NL-6229 ER Maastricht, The Netherlands; Faculty of Psychology and Neuroscience, Maastricht Brain Imaging Centre, Maastricht University, NL-6229 ER Maastricht, The Netherlands
- Claus C Hilgetag, Institute of Computational Neuroscience, University Medical Center Eppendorf, Hamburg University, D-20246 Hamburg, Germany
- Rembrandt Bakker, Institute for Advanced Simulation (IAS-6), Jülich Research Centre, D-52428 Jülich, Germany; Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen, NL-6525 EN Nijmegen, The Netherlands
- Sacha J van Albada, Institute for Advanced Simulation (IAS-6), Jülich Research Centre, D-52428 Jülich, Germany; Institute of Zoology, University of Cologne, D-50674 Cologne, Germany
6. Politi A, Torcini A. A robust balancing mechanism for spiking neural networks. Chaos 2024; 34:041102. [PMID: 38639569] [DOI: 10.1063/5.0199298] [Received: 01/22/2024] [Accepted: 02/03/2024]
Abstract
Dynamical balance of excitation and inhibition is usually invoked to explain the irregular low firing activity observed in the cortex. We propose a robust nonlinear balancing mechanism for a random network of spiking neurons, which also works in the absence of strong external currents. Biologically, the mechanism exploits the plasticity of excitatory-excitatory synapses induced by short-term depression. Mathematically, the nonlinear response of the synaptic activity is the key ingredient responsible for the emergence of a stable balanced regime. Our claim is supported by a simple self-consistent analysis accompanied by extensive simulations performed for increasing network sizes. The observed regime is essentially fluctuation driven and characterized by highly irregular spiking dynamics of all neurons.
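The abstract does not spell out the synapse model, so as an illustration of the kind of nonlinear synaptic response short-term depression provides, the sketch below uses the standard Tsodyks-Markram depression rule for a periodic presynaptic train (function name and parameter values are our own choices). The transmitted drive U·x·rate grows sublinearly with the presynaptic rate and saturates near 1/τ_d, which is the sort of nonlinearity a balancing mechanism can exploit.

```python
import math

def depressed_drive(rate, U=0.5, tau_d=0.2, n_spikes=500):
    """Event-driven short-term depression (Tsodyks-Markram style) for a
    periodic presynaptic spike train: the resource x recovers toward 1 with
    time constant tau_d and is depleted by a fraction U at each spike.
    Returns the steady transmitted drive U * x * rate."""
    T = 1.0 / rate                       # inter-spike interval
    x = 1.0                              # resource just before the next spike
    for _ in range(n_spikes):
        x = 1.0 - (1.0 - (1.0 - U) * x) * math.exp(-T / tau_d)
    return U * x * rate

# A tenfold rate increase yields far less than a tenfold drive increase:
low = depressed_drive(5.0)
high = depressed_drive(50.0)
```

Here `high` stays below the saturation level 1/τ_d = 5, whereas a static synapse would transmit ten times `low`; it is this saturation, not strong external currents, that lets the excitatory feedback self-limit.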
Affiliation(s)
- Antonio Politi, Institute for Complex Systems and Mathematical Biology and Department of Physics, Aberdeen AB24 3UE, United Kingdom; CNR-Consiglio Nazionale delle Ricerche-Istituto dei Sistemi Complessi, via Madonna del Piano 10, 50019 Sesto Fiorentino, Italy
- Alessandro Torcini, CNR-Consiglio Nazionale delle Ricerche-Istituto dei Sistemi Complessi, via Madonna del Piano 10, 50019 Sesto Fiorentino, Italy; Laboratoire de Physique Théorique et Modélisation, CY Cergy Paris Université, CNRS UMR 8089, 95302 Cergy-Pontoise cedex, France; INFN Sezione di Firenze, Via Sansone 1, 50019 Sesto Fiorentino, Italy
7. Fagerholm ED, Dezhina Z, Moran RJ, Turkheimer FE, Leech R. A primer on entropy in neuroscience. Neurosci Biobehav Rev 2023; 146:105070. [PMID: 36736445] [DOI: 10.1016/j.neubiorev.2023.105070] [Received: 09/29/2022] [Revised: 01/16/2023] [Accepted: 01/29/2023]
Abstract
Entropy is not just a property of a system: it is a property of a system and an observer. Specifically, entropy is a measure of the amount of hidden information in a system that arises due to an observer's limitations. Here we provide an account of entropy from first principles in statistical mechanics with the aid of toy models of neural systems. Specifically, we describe the distinction between micro- and macrostates in the context of simplified binary-state neurons and the characteristics of entropy required to capture an associated measure of hidden information. We discuss the origin of the mathematical form of entropy via the indistinguishable re-arrangements of discrete-state neurons and show the way in which the arguments are extended into a phase space description for continuous large-scale neural systems. Finally, we show the ways in which limitations in neuroimaging resolution, as represented by coarse-graining operations in phase space, lead to an increase in entropy in time as per the second law of thermodynamics. It is our hope that this primer will support the increasing number of studies that use entropy as a way of characterising neuroimaging time series and of making inferences about brain states.
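The micro/macrostate distinction can be made concrete in a few lines. In this stdlib-only sketch (toy sizes and names of our own choosing), the macrostate is the number k of active neurons among N binary neurons, its multiplicity is the binomial coefficient C(N, k), and S = ln Ω measures the hidden information: zero for the all-silent or all-active macrostates, maximal when half the neurons are active.

```python
import math

def boltzmann_entropy(N, k):
    """Entropy S = ln(Omega) (in nats) of the macrostate 'k of N binary
    neurons are active'; Omega = C(N, k) counts the microstates, i.e. the
    indistinguishable re-arrangements compatible with that macrostate."""
    return math.log(math.comb(N, k))

N = 100
S = [boltzmann_entropy(N, k) for k in range(N + 1)]
# The all-silent (k = 0) and all-active (k = N) macrostates each have a
# single microstate, hence zero entropy; k = N/2 hides the most information.
```

An observer who can only measure k (a coarse macroscopic readout) is blind to which of the C(N, k) microstates the network occupies, which is exactly the observer-dependence of entropy the primer emphasizes.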
Affiliation(s)
- Erik D Fagerholm, Department of Neuroimaging, King's College London, United Kingdom
- Zalina Dezhina, Department of Neuroimaging, King's College London, United Kingdom
- Rosalyn J Moran, Department of Neuroimaging, King's College London, United Kingdom
- Robert Leech, Department of Neuroimaging, King's College London, United Kingdom
8. Laing CR, Krauskopf B. Theta neuron subject to delayed feedback: a prototypical model for self-sustained pulsing. Proc Math Phys Eng Sci 2022. [DOI: 10.1098/rspa.2022.0292]
Abstract
We consider a single theta neuron with delayed self-feedback in the form of a Dirac delta function in time. Because the dynamics of a theta neuron on its own can be solved explicitly—it is either excitable or shows self-pulsations—we are able to derive algebraic expressions for the existence and stability of the periodic solutions that arise in the presence of feedback. These periodic solutions are characterized by one or more equally spaced pulses per delay interval, and there is an increasing amount of multistability with increasing delay time. We present a complete description of where these self-sustained oscillations can be found in parameter space; in particular, we derive explicit expressions for the loci of their saddle-node bifurcations. We conclude that the theta neuron with delayed self-feedback emerges as a prototypical model: it provides an analytical basis for understanding pulsating dynamics observed in other excitable systems subject to delayed self-coupling.
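Because the theta neuron's intrinsic dynamics are explicit, its two regimes are easy to reproduce numerically. The sketch below (our own minimal Euler integration, deliberately without the paper's delta-function self-feedback) shows the excitable regime for I < 0 and self-pulsations with period π/√I for I > 0.

```python
import math

def theta_neuron_spike_times(I, t_end=50.0, dt=1e-4):
    """Euler-integrate the theta neuron
        dtheta/dt = 1 - cos(theta) + (1 + cos(theta)) * I
    and record the times at which theta crosses pi (the spike phase)."""
    theta, spikes = 0.0, []
    for n in range(int(t_end / dt)):
        c = math.cos(theta)
        theta += dt * (1.0 - c + (1.0 + c) * I)
        if theta > math.pi:              # spike: wrap the phase back by 2*pi
            spikes.append((n + 1) * dt)
            theta -= 2.0 * math.pi
    return spikes

pulsing = theta_neuron_spike_times(I=1.0)   # tonic pulsing with period pi / sqrt(I)
quiet = theta_neuron_spike_times(I=-0.5)    # excitable: rests at a stable fixed point
```

With I = 1 the inter-spike interval is π; reinjecting a delta-function kick of the phase at a fixed delay after each spike is what generates the multistable pulsing patterns analyzed in the paper.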
Affiliation(s)
- Carlo R. Laing, School of Natural and Computational Sciences, Massey University, Private Bag 102-904, North Shore Mail Centre, Auckland 0745, New Zealand
- Bernd Krauskopf, Department of Mathematics, The University of Auckland, Private Bag 92019, Auckland 1142, New Zealand
9. Exact mean-field models for spiking neural networks with adaptation. J Comput Neurosci 2022; 50:445-469. [PMID: 35834100] [DOI: 10.1007/s10827-022-00825-9] [Received: 03/22/2022] [Accepted: 06/15/2022]
Abstract
Networks of spiking neurons with adaptation have been shown to be able to reproduce a wide range of neural activities, including the emergent population bursting and spike synchrony that underpin brain disorders and normal function. Exact mean-field models derived from spiking neural networks are extremely valuable, as such models can be used to determine how individual neurons and the network they reside within interact to produce macroscopic network behaviours. In this paper, we derive and analyze a set of exact mean-field equations for a neural network with spike-frequency adaptation. Specifically, our model is a network of Izhikevich neurons, where each neuron is modeled by a two-dimensional system consisting of a quadratic integrate-and-fire equation plus an equation that implements spike-frequency adaptation. Previous work deriving a mean-field model for this type of network relied on the assumption of sufficiently slow dynamics of the adaptation variable. However, this approximation did not establish an exact correspondence between the macroscopic description and the realistic neural network, especially when the adaptation time constant was not large. The challenge lies in how to achieve a closed set of mean-field equations that includes the mean-field dynamics of the adaptation variable. We address this problem by using a Lorentzian ansatz combined with a moment-closure approach to arrive at a mean-field system in the thermodynamic limit. The resulting macroscopic description is capable of qualitatively and quantitatively describing the collective dynamics of the neural network, including transitions between states where the individual neurons exhibit asynchronous tonic firing and synchronous bursting. We extend the approach to a network of two populations of neurons and discuss the accuracy and efficacy of our mean-field approximations by examining all assumptions imposed during the derivation. Numerical bifurcation analysis of our mean-field models reveals bifurcations not previously observed in the models, including a novel mechanism for the emergence of bursting in the network. We anticipate our results will provide a tractable and reliable tool to investigate the underlying mechanisms of brain function and dysfunction from the perspective of computational neuroscience.
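For orientation, the adaptation-free limit of such exact reductions is the well-known Montbrió-Pazó-Roxin system: for quadratic integrate-and-fire neurons with Lorentzian-distributed excitability (center η̄, half-width Δ), synaptic strength J, and common input I(t), the Lorentzian ansatz closes the dynamics of the population firing rate r and mean voltage v exactly (membrane time constant set to 1):

```latex
\begin{aligned}
\dot r &= \frac{\Delta}{\pi} + 2 r v,\\
\dot v &= v^{2} + \bar{\eta} + J r + I(t) - \pi^{2} r^{2}.
\end{aligned}
```

The difficulty this paper addresses is that giving each neuron an adaptation variable breaks this closure; the Lorentzian ansatz combined with moment closure supplies the missing mean-field equation for adaptation without assuming it is slow.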
10. Albers J, Pronold J, Kurth AC, Vennemo SB, Haghighi Mood K, Patronis A, Terhorst D, Jordan J, Kunkel S, Tetzlaff T, Diesmann M, Senk J. A Modular Workflow for Performance Benchmarking of Neuronal Network Simulations. Front Neuroinform 2022; 16:837549. [PMID: 35645755] [PMCID: PMC9131021] [DOI: 10.3389/fninf.2022.837549] [Received: 12/16/2021] [Accepted: 03/11/2022]
Abstract
Modern computational neuroscience strives to develop complex network models to explain dynamics and function of brains in health and disease. This process goes hand in hand with advancements in the theory of neuronal networks and increasing availability of detailed anatomical data on brain connectivity. Large-scale models that study interactions between multiple brain areas with intricate connectivity and investigate phenomena on long time scales such as system-level learning require progress in simulation speed. The corresponding development of state-of-the-art simulation engines relies on information provided by benchmark simulations which assess the time-to-solution for scientifically relevant, complementary network models using various combinations of hardware and software revisions. However, maintaining comparability of benchmark results is difficult due to a lack of standardized specifications for measuring the scaling performance of simulators on high-performance computing (HPC) systems. Motivated by the challenging complexity of benchmarking, we define a generic workflow that decomposes the endeavor into unique segments consisting of separate modules. As a reference implementation for the conceptual workflow, we develop beNNch: an open-source software framework for the configuration, execution, and analysis of benchmarks for neuronal network simulations. The framework records benchmarking data and metadata in a unified way to foster reproducibility. For illustration, we measure the performance of various versions of the NEST simulator across network models with different levels of complexity on a contemporary HPC system, demonstrating how performance bottlenecks can be identified, ultimately guiding the development toward more efficient simulation technology.
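A toy version of the core idea, recording time-to-solution together with machine metadata so that runs on different hardware and software revisions stay comparable, can be sketched as follows (illustrative only; this is not the beNNch interface, and the function and field names are our own):

```python
import platform
import time

def benchmark(fn, *args, repeats=3, **kwargs):
    """Toy benchmark runner: measure time-to-solution over several repeats
    and bundle it with machine metadata for reproducible comparison.
    (Illustrative only; not the beNNch API.)"""
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args, **kwargs)
        times.append(time.perf_counter() - t0)
    return {
        "task": getattr(fn, "__name__", str(fn)),
        "time_to_solution_s": min(times),   # best of n repeats
        "repeats": repeats,
        "metadata": {
            "python": platform.python_version(),
            "machine": platform.machine(),
        },
    }

record = benchmark(sum, range(100_000))
```

A real framework like beNNch adds the pieces a dictionary cannot: standardized scaling experiments across HPC node counts, unified metadata capture for the full software stack, and archival of results so that performance regressions between simulator versions are attributable.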
Affiliation(s)
- Jasper Albers (correspondence), Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany; RWTH Aachen University, Aachen, Germany
- Jari Pronold, Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany; RWTH Aachen University, Aachen, Germany
- Anno Christopher Kurth, Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany; RWTH Aachen University, Aachen, Germany
- Stine Brekke Vennemo, Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Alexander Patronis, Jülich Supercomputing Centre (JSC), Jülich Research Centre, Jülich, Germany
- Dennis Terhorst, Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Jakob Jordan, Department of Physiology, University of Bern, Bern, Switzerland
- Susanne Kunkel, Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Tom Tetzlaff, Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Markus Diesmann, Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany; Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany; Department of Psychiatry, Psychotherapy and Psychosomatics, School of Medicine, RWTH Aachen University, Aachen, Germany
- Johanna Senk, Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
11. di Volo M, Segneri M, Goldobin DS, Politi A, Torcini A. Coherent oscillations in balanced neural networks driven by endogenous fluctuations. Chaos 2022; 32:023120. [PMID: 35232059] [DOI: 10.1063/5.0075751] [Received: 10/18/2021] [Accepted: 01/26/2022]
Abstract
We present a detailed analysis of the dynamical regimes observed in a balanced network of identical quadratic integrate-and-fire neurons with sparse connectivity for homogeneous and heterogeneous in-degree distributions. Depending on the parameter values, either an asynchronous regime or periodic oscillations spontaneously emerge. Numerical simulations are compared with a mean-field model based on a self-consistent Fokker-Planck equation (FPE). The FPE reproduces quite well the asynchronous dynamics in the homogeneous case by either assuming a Poissonian or renewal distribution for the incoming spike trains. An exact self-consistent solution for the mean firing rate obtained in the limit of infinite in-degree allows identifying balanced regimes that can be either mean- or fluctuation-driven. A low-dimensional reduction of the FPE in terms of circular cumulants is also considered. Two cumulants suffice to reproduce the transition scenario observed in the network. The emergence of periodic collective oscillations is well captured both in the homogeneous and heterogeneous setups by the mean-field models upon tuning either the connectivity or the input DC current. In the heterogeneous situation, we analyze also the role of structural heterogeneity.
Affiliation(s)
- Matteo di Volo, Laboratoire de Physique Théorique et Modélisation, UMR 8089, CY Cergy Paris Université, CNRS, 95302 Cergy-Pontoise, France
- Marco Segneri, Laboratoire de Physique Théorique et Modélisation, UMR 8089, CY Cergy Paris Université, CNRS, 95302 Cergy-Pontoise, France
- Denis S Goldobin, Institute of Continuous Media Mechanics, Ural Branch of RAS, Acad. Korolev street 1, 614013 Perm, Russia
- Antonio Politi, Institute for Pure and Applied Mathematics and Department of Physics (SUPA), Old Aberdeen, Aberdeen AB24 3UE, United Kingdom
- Alessandro Torcini, Laboratoire de Physique Théorique et Modélisation, UMR 8089, CY Cergy Paris Université, CNRS, 95302 Cergy-Pontoise, France
12. Dasbach S, Tetzlaff T, Diesmann M, Senk J. Dynamical Characteristics of Recurrent Neuronal Networks Are Robust Against Low Synaptic Weight Resolution. Front Neurosci 2021; 15:757790. [PMID: 35002599] [PMCID: PMC8740282] [DOI: 10.3389/fnins.2021.757790] [Received: 08/12/2021] [Accepted: 11/03/2021]
Abstract
The representation of the natural-density, heterogeneous connectivity of neuronal network models at relevant spatial scales remains a challenge for computational neuroscience and neuromorphic computing. In particular, the memory demands imposed by the vast number of synapses in brain-scale network simulations constitute a major obstacle. Limiting the numerical resolution of synaptic weights appears to be a natural strategy for reducing memory and compute load. In this study, we investigate the effects of a limited synaptic-weight resolution on the dynamics of recurrent spiking neuronal networks resembling local cortical circuits and develop strategies for minimizing deviations from the dynamics of networks with high-resolution synaptic weights. We mimic the effect of a limited synaptic-weight resolution by replacing normally distributed synaptic weights with weights drawn from a discrete distribution, and compare the resulting statistics characterizing firing rates, spike-train irregularity, and correlation coefficients with the reference solution. We show that a naive discretization of synaptic weights generally leads to a distortion of the spike-train statistics. If the weights are discretized such that the mean and the variance of the total synaptic input currents are preserved, the firing statistics remain unaffected for the types of networks considered in this study. For networks with sufficiently heterogeneous in-degrees, the firing statistics can be preserved even if all synaptic weights are replaced by the mean of the weight distribution. We conclude that even for simple networks with non-plastic neurons and synapses, a discretization of synaptic weights can lead to substantial deviations in the firing statistics unless the discretization is performed with care and guided by a rigorous validation process. For the network model used in this study, the synaptic weights can be replaced by low-resolution weights without affecting its macroscopic dynamical characteristics, thereby saving substantial amounts of memory.
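The key manipulation, replacing normally distributed weights by a discrete distribution while preserving the first two moments of the input, can be sketched as follows (stdlib only; the grid step and the affine re-matching are our illustrative choices, and the paper's actual procedure differs in detail):

```python
import random
import statistics

random.seed(1)
w = [random.gauss(1.0, 0.3) for _ in range(10_000)]   # reference weights

step = 0.5                                            # coarse weight resolution
w_naive = [round(x / step) * step for x in w]         # naive discretization

# Naive rounding distorts the weight variance, and hence the variance of the
# summed synaptic input; an affine re-scaling of the discrete levels restores
# the sample mean and standard deviation exactly.
m0, s0 = statistics.fmean(w), statistics.stdev(w)
m1, s1 = statistics.fmean(w_naive), statistics.stdev(w_naive)
w_matched = [m0 + (x - m1) * (s0 / s1) for x in w_naive]
```

The re-scaled weights still take only a handful of distinct values, i.e. they remain low-resolution, but the mean and variance of the total input current are preserved, which is the condition under which the paper finds the firing statistics to be unaffected.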
Affiliation(s)
- Stefan Dasbach, Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Tom Tetzlaff, Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Markus Diesmann, Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany; Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany; Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany
- Johanna Senk, Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
13
|
Bi H, di Volo M, Torcini A. Asynchronous and Coherent Dynamics in Balanced Excitatory-Inhibitory Spiking Networks. Front Syst Neurosci 2021; 15:752261. [PMID: 34955768 PMCID: PMC8702645 DOI: 10.3389/fnsys.2021.752261] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2021] [Accepted: 10/27/2021] [Indexed: 01/14/2023] Open
Abstract
Dynamic excitatory-inhibitory (E-I) balance is a paradigmatic mechanism invoked to explain the irregular, low-rate firing activity observed in the cortex. However, we show that E-I balance can also underlie other regimes observable in the brain. The analysis is performed by combining extensive simulations of sparse E-I networks composed of N spiking neurons with analytical investigations of low-dimensional neural mass models. The bifurcation diagrams derived for the neural mass model allow us to classify the possible asynchronous and coherent behaviors emerging in balanced E-I networks with structural heterogeneity for any finite in-degree K. Analytic mean-field (MF) results show that both supra- and sub-threshold balanced asynchronous regimes are observable in our system in the limit N >> K >> 1. Due to the heterogeneity, the asynchronous states are characterized at the microscopic level by the splitting of the neurons into three groups: silent, fluctuation-driven, and mean-driven. These features are consistent with experimental observations reported for heterogeneous neural circuits. The coherent rhythms observed in our system range from periodic and quasi-periodic collective oscillations (COs) to coherent chaos. These rhythms combine regular or irregular temporal fluctuations with spatial coherence, reminiscent of the coherent fluctuations observed in the cortex over multiple spatial scales. The COs can emerge via two different mechanisms. The first is analogous to the pyramidal-interneuron gamma (PING) mechanism, usually invoked to explain the emergence of γ-oscillations. The second is intimately related to the presence of current fluctuations, which sustain COs characterized by an essentially simultaneous bursting of the two populations. We observe period-doubling cascades involving the PING-like COs that finally lead to the appearance of coherent chaos.
Fluctuation-driven COs usually appear in our system as quasi-periodic collective motions characterized by two incommensurate frequencies. For sufficiently strong current fluctuations, however, these collective rhythms can lock. This represents a novel mechanism of frequency locking in neural populations, promoted by intrinsic fluctuations. COs are observable for any finite in-degree K; their existence in the limit N >> K >> 1, however, remains uncertain.
Collapse
Affiliation(s)
- Hongjie Bi
- CY Cergy Paris Université, Laboratoire de Physique Théorique et Modélisation, CNRS, UMR 8089, Cergy-Pontoise, France
- Neural Coding and Brain Computing Unit, Okinawa Institute of Science and Technology, Okinawa, Japan
| | - Matteo di Volo
- CY Cergy Paris Université, Laboratoire de Physique Théorique et Modélisation, CNRS, UMR 8089, Cergy-Pontoise, France
| | - Alessandro Torcini
- CY Cergy Paris Université, Laboratoire de Physique Théorique et Modélisation, CNRS, UMR 8089, Cergy-Pontoise, France
- CNR-Consiglio Nazionale delle Ricerche, Istituto dei Sistemi Complessi, Sesto Fiorentino, Italy
| |
Collapse
|
14
|
Dynamic Recovery: GABA Agonism Restores Neural Variability in Older, Poorer Performing Adults. J Neurosci 2021; 41:9350-9360. [PMID: 34732523 DOI: 10.1523/jneurosci.0335-21.2021] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2021] [Revised: 07/30/2021] [Accepted: 08/04/2021] [Indexed: 11/21/2022] Open
Abstract
Aging is associated with cognitive impairment, but there are large individual differences in these declines. One neural measure that is lower in older adults and predicts these individual differences is moment-to-moment brain signal variability. Testing the assumption that GABA should heighten neural variability, we examined whether reduced brain signal variability in older, poorer performing adults could be boosted by increasing GABA pharmacologically. Brain signal variability was estimated using fMRI in 20 young and 24 older healthy human adults during placebo and GABA agonist sessions. As expected, older adults exhibited lower signal variability at placebo, and, crucially, GABA agonism boosted older adults' variability to the levels of young adults. Furthermore, poorer performing older adults experienced a greater increase in variability on drug, suggesting that those with more to gain benefit the most from GABA system potentiation. GABA may thus serve as a core neurochemical target in future work on aging- and cognition-related human brain dynamics. SIGNIFICANCE STATEMENT: Prior research indicates that moment-to-moment brain signal variability is lower in older, poorer performing adults. We found that this reduced brain signal variability could be boosted through GABA agonism in older adults to the levels of young adults and that this boost was largest in the poorer performing older adults. These results provide the first evidence that brain signal variability can be restored by increasing GABAergic activity and suggest the promise of developing interventions targeting inhibitory systems to help slow cognitive declines in healthy aging.
Collapse
|
15
|
Di Volo M, Destexhe A. Optimal responsiveness and information flow in networks of heterogeneous neurons. Sci Rep 2021; 11:17611. [PMID: 34475456 PMCID: PMC8413388 DOI: 10.1038/s41598-021-96745-2] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2021] [Accepted: 08/11/2021] [Indexed: 02/07/2023] Open
Abstract
The cerebral cortex is characterized by strong neuron-to-neuron heterogeneity, yet most computational models consider networks of identical units, and it is unclear what consequences this heterogeneity may have for cortical computations. Here, we study network models of spiking neurons endowed with heterogeneity, which we treat independently for excitatory and inhibitory neurons. We find that heterogeneous networks are generally more responsive, with an optimal responsiveness occurring at levels of heterogeneity found experimentally in different published datasets, for both excitatory and inhibitory neurons. To investigate the underlying mechanisms, we introduce a mean-field model of heterogeneous networks. This mean-field model captures the optimal responsiveness and suggests that it is related to the stability of the spontaneous asynchronous state. The mean-field model also predicts that new dynamical states can emerge from heterogeneity, a prediction confirmed by network simulations. Finally, we show that heterogeneous networks maximise the information flow in large-scale networks, through recurrent connections. We conclude that neuronal heterogeneity confers different responsiveness to neural networks, which should be taken into account when investigating their information processing capabilities.
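One way heterogeneity can enhance responsiveness is illustrated by a toy model far simpler than the spiking networks of this study (the threshold-linear units and parameter values below are our own illustration, not the paper's model): a population with heterogeneous thresholds has a smooth population transfer function and therefore responds to weak inputs to which a homogeneous population is blind.

```python
import numpy as np

rng = np.random.default_rng(2)

def population_rate(I, thresholds):
    # Mean response of a population of threshold-linear units,
    # phi(I) = max(I - b, 0), evaluated over an array of input levels I.
    return np.maximum(I[None, :] - thresholds[:, None], 0.0).mean(axis=0)

I = np.linspace(-1.0, 1.0, 201)
homog = population_rate(I, np.zeros(1000))              # identical units
heter = population_rate(I, rng.normal(0.0, 0.3, 1000))  # heterogeneous thresholds

# Just below the common threshold, the homogeneous population is silent,
# while the heterogeneous one already responds: its units with low
# thresholds are recruited by weak inputs.
i = np.searchsorted(I, -0.1)
```

Averaging hard nonlinearities over a spread of thresholds smooths the population f-I curve, which is one intuition for why heterogeneity can increase network responsiveness.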
Collapse
Affiliation(s)
- Matteo Di Volo
- Laboratoire de Physique Théorique et Modélisation, Université de Cergy-Pontoise, CNRS, UMR 8089, 95302, Cergy-Pontoise cedex, France.
| | - Alain Destexhe
- Paris-Saclay University, Institute of Neuroscience, CNRS, Gif sur Yvette, France
| |
Collapse
|
16
|
Bernardi D, Doron G, Brecht M, Lindner B. A network model of the barrel cortex combined with a differentiator detector reproduces features of the behavioral response to single-neuron stimulation. PLoS Comput Biol 2021; 17:e1007831. [PMID: 33556070 PMCID: PMC7895413 DOI: 10.1371/journal.pcbi.1007831] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2020] [Revised: 02/19/2021] [Accepted: 01/17/2021] [Indexed: 11/23/2022] Open
Abstract
The stimulation of a single neuron in the rat somatosensory cortex can elicit a behavioral response. The probability of a behavioral response does not depend appreciably on the duration or intensity of a constant stimulation, whereas the response probability increases significantly upon injection of an irregular current. Biological mechanisms that can potentially suppress a constant input signal are present in the dynamics of both neurons and synapses and seem ideal candidates to explain these experimental findings. Here, we study a large network of integrate-and-fire neurons with several salient features of neuronal populations in the rat barrel cortex. The model includes cellular spike-frequency adaptation, experimentally constrained numbers and types of chemical synapses endowed with short-term plasticity, and gap junctions. Numerical simulations of this model indicate that cellular and synaptic adaptation mechanisms alone may not suffice to account for the experimental results if the local network activity is read out by an integrator. However, a circuit that approximates a differentiator can detect the single-cell stimulation with a reliability that barely depends on the length or intensity of the stimulus, but that increases when an irregular signal is used. This finding is in accordance with the experimental results obtained for the stimulation of a regularly-spiking excitatory cell. It is widely assumed that only a large group of neurons can encode a stimulus or control behavior. This tenet of neuroscience has been challenged by experiments in which stimulating a single cortical neuron has had a measurable effect on an animal’s behavior. Recently, theoretical studies have explored how a single-neuron stimulation could be detected in a large recurrent network. However, these studies missed essential biological mechanisms of cortical networks and are unable to explain more recent experiments in the barrel cortex. 
Here, to describe the stimulated brain area, we propose and study a network model endowed with many important biological features of the barrel cortex. Importantly, we also investigate different readout mechanisms, i.e. ways in which the stimulation effects can propagate to other brain areas. We show that a readout network which tracks rapid variations in the local network activity is in agreement with the experiments. Our model demonstrates a possible mechanism for how the stimulation of a single neuron translates into a signal at the population level, which is taken as a proxy of the animal’s response. Our results illustrate the power of spiking neural networks to properly describe the effects of a single neuron’s activity.
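The contrast between an integrator and a differentiator readout can be made concrete with a toy signal (the amplitudes, noise level, and time base below are hypothetical, not taken from the barrel-cortex model): a differentiator's response to a sustained constant stimulus vanishes after onset, while an irregular stimulus with the same mean keeps driving it throughout the stimulation.

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 1.0, dt)
rng = np.random.default_rng(1)

# Two stimuli with the same mean amplitude during the window t > 0.5 s:
const = np.where(t > 0.5, 1.0, 0.0)            # constant current
irreg = const * rng.normal(1.0, 1.0, t.size)   # irregular current, same mean

def diff_readout(x):
    # A discrete-time differentiator responds to changes in its input,
    # not to sustained levels.
    return np.diff(x, prepend=x[0]) / dt

late = t > 0.6  # window well after stimulus onset
resp_const = np.abs(diff_readout(const)[late]).max()   # zero after the step
resp_irreg = np.abs(diff_readout(irreg)[late]).mean()  # stays positive
```

This mirrors the experimental finding the model explains: response probability barely depends on the duration of a constant stimulus but increases for an irregular one.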
Collapse
Affiliation(s)
- Davide Bernardi
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Institut für Physik, Humboldt-Universität zu Berlin, Berlin, Germany
- Center for Translational Neurophysiology of Speech and Communication, Fondazione Istituto Italiano di Tecnologia, Ferrara, Italy
| | - Guy Doron
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
| | - Michael Brecht
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
| | - Benjamin Lindner
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Institut für Physik, Humboldt-Universität zu Berlin, Berlin, Germany
| |
Collapse
|
17
|
Schleimer JH, Hesse J, Contreras SA, Schreiber S. Firing statistics in the bistable regime of neurons with homoclinic spike generation. Phys Rev E 2021; 103:012407. [PMID: 33601551 DOI: 10.1103/physreve.103.012407] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2019] [Accepted: 11/20/2020] [Indexed: 11/07/2022]
Abstract
Neuronal voltage dynamics of regularly firing neurons typically has one stable attractor: either a fixed point (as in the subthreshold regime) or a limit cycle that defines the tonic firing of action potentials (in the suprathreshold regime). In two of the three spike-onset bifurcation sequences that are known to give rise to all-or-none type action potentials, however, the resting-state fixed point and limit-cycle spiking can coexist in an intermediate regime, resulting in bistable dynamics. Here, noise can induce switches between the attractors, i.e., between rest and spiking, and thus increase the variability of the spike train compared to neurons with only one stable attractor. Qualitative features of the resulting spike statistics depend on the spike-onset bifurcation. This paper focuses on the creation of the spiking limit cycle via the saddle-homoclinic orbit (HOM) bifurcation and derives interspike interval (ISI) densities for a conductance-based neuron model in the bistable regime. The ISI densities of bistable homoclinic neurons are found to be unimodal yet distinct from the inverse Gaussian distribution associated with the saddle-node-on-invariant-cycle bifurcation. It is demonstrated that for the HOM bifurcation the transition between rest and spiking is mainly determined along the downstroke of the action potential, a dynamical feature that is not captured by the commonly used reset neuron models. The deduced spike statistics can help to identify HOM dynamics in experimental data.
Collapse
Affiliation(s)
- Jan-Hendrik Schleimer
- Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Philippstrasse 13, Haus 4, 10115 Berlin, Germany
- Bernstein Center for Computational Neuroscience, 10115 Berlin, Germany
| | - Janina Hesse
- Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Philippstrasse 13, Haus 4, 10115 Berlin, Germany
- MSH Medical School Hamburg, Am Kaiserkai 1, 20457 Hamburg, Germany
| | - Susana Andrea Contreras
- Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Philippstrasse 13, Haus 4, 10115 Berlin, Germany
- Bernstein Center for Computational Neuroscience, 10115 Berlin, Germany
| | - Susanne Schreiber
- Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Philippstrasse 13, Haus 4, 10115 Berlin, Germany
- Bernstein Center for Computational Neuroscience, 10115 Berlin, Germany
| |
Collapse
|
18
|
Harris SS, Wolf F, De Strooper B, Busche MA. Tipping the Scales: Peptide-Dependent Dysregulation of Neural Circuit Dynamics in Alzheimer’s Disease. Neuron 2020; 107:417-435. [DOI: 10.1016/j.neuron.2020.06.005] [Citation(s) in RCA: 47] [Impact Index Per Article: 9.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2020] [Revised: 04/24/2020] [Accepted: 06/01/2020] [Indexed: 02/07/2023]
|
19
|
Bick C, Goodfellow M, Laing CR, Martens EA. Understanding the dynamics of biological and neural oscillator networks through exact mean-field reductions: a review. JOURNAL OF MATHEMATICAL NEUROSCIENCE 2020; 10:9. [PMID: 32462281 PMCID: PMC7253574 DOI: 10.1186/s13408-020-00086-9] [Citation(s) in RCA: 109] [Impact Index Per Article: 21.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/17/2019] [Accepted: 05/07/2020] [Indexed: 05/03/2023]
Abstract
Many biological and neural systems can be seen as networks of interacting periodic processes. Importantly, their functionality, i.e., whether these networks can perform their function or not, depends on the emerging collective dynamics of the network. Synchrony of oscillations is one of the most prominent examples of such collective behavior and has been associated with both function and dysfunction. Understanding how network structure and interactions, as well as the microscopic properties of individual units, shape the emerging collective dynamics is critical to finding factors that lead to malfunction. However, many biological systems such as the brain consist of a large number of dynamical units. Hence, their analysis has either relied on simplified heuristic models at a coarse scale or come at a huge computational cost. Here we review recently introduced approaches, known as the Ott-Antonsen and Watanabe-Strogatz reductions, that simplify the analysis by bridging small and large scales. The resulting reduced model equations exactly describe the collective dynamics of each subpopulation in the oscillator network via only a few collective variables. These equations are next-generation models: rather than being heuristic, they exactly link microscopic and macroscopic descriptions and therefore accurately capture microscopic properties of the underlying system. At the same time, they are sufficiently simple to analyze without great computational effort. In the last decade, these reduction methods have become instrumental in understanding how network structure and interactions shape the collective dynamics and the emergence of synchrony. We review this progress based on concrete examples and outline possible limitations. Finally, we discuss how linking the reduced models with experimental data can guide the way towards the development of new treatment approaches, for example, for neurological disease.
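As a concrete instance of such an exact reduction, the Ott-Antonsen ansatz collapses the classical Kuramoto model with Lorentzian-distributed natural frequencies to a single complex ODE for the order parameter. The sketch below (our own illustration, with arbitrary parameter values) integrates this reduced equation and recovers the well-known partially synchronized branch r = sqrt(1 - 2*Delta/K) above the critical coupling Kc = 2*Delta.

```python
import numpy as np

# Ott-Antonsen reduction of the Kuramoto model with a Lorentzian frequency
# distribution (center w0, half-width Delta): the order parameter
# z = r * exp(i*psi) of the infinite network obeys one complex ODE.
def oa_rhs(z, K, w0=0.0, Delta=0.5):
    return (1j * w0 - Delta) * z + 0.5 * K * (z - np.conj(z) * z ** 2)

def steady_r(K, Delta=0.5, z0=0.1 + 0j, dt=0.01, steps=20_000):
    z = z0
    for _ in range(steps):
        z = z + dt * oa_rhs(z, K, Delta=Delta)  # forward Euler suffices here
    return abs(z)

Delta = 0.5
K = 3.0                                    # above the critical coupling Kc = 2*Delta
r_sim = steady_r(K, Delta)
r_theory = np.sqrt(1.0 - 2.0 * Delta / K)  # classical partially synchronized branch
```

The reduced equation is exact in the infinite-network limit, which is what makes these "next-generation" models more than heuristics: the single variable z tracks the true macroscopic order parameter.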
Collapse
Affiliation(s)
- Christian Bick
- Centre for Systems, Dynamics, and Control, University of Exeter, Exeter, UK.
- Department of Mathematics, University of Exeter, Exeter, UK.
- EPSRC Centre for Predictive Modelling in Healthcare, University of Exeter, Exeter, UK.
- Mathematical Institute, University of Oxford, Oxford, UK.
- Institute for Advanced Study, Technische Universität München, Garching, Germany.
| | - Marc Goodfellow
- Department of Mathematics, University of Exeter, Exeter, UK
- EPSRC Centre for Predictive Modelling in Healthcare, University of Exeter, Exeter, UK
- Living Systems Institute, University of Exeter, Exeter, UK
- Wellcome Trust Centre for Biomedical Modelling and Analysis, University of Exeter, Exeter, UK
| | - Carlo R Laing
- School of Natural and Computational Sciences, Massey University, Auckland, New Zealand
| | - Erik A Martens
- Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kgs. Lyngby, Denmark.
- Department of Biomedical Science, University of Copenhagen, Copenhagen N, Denmark.
- Centre for Translational Neuroscience, University of Copenhagen, Copenhagen N, Denmark.
| |
Collapse
|
20
|
Manz P, Goedeke S, Memmesheimer RM. Dynamics and computation in mixed networks containing neurons that accelerate towards spiking. Phys Rev E 2019; 100:042404. [PMID: 31770941 DOI: 10.1103/physreve.100.042404] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2018] [Indexed: 11/07/2022]
Abstract
Networks in the brain consist of different types of neurons. Here we investigate the influence of neuron diversity on the dynamics, phase-space structure, and computational capabilities of spiking neural networks. We find that even a single neuron of a different type can qualitatively change the network dynamics, and that mixed networks may combine the computational capabilities of networks with a single neuron type. We study inhibitory networks of concave leaky (LIF) and convex "antileaky" (XIF) integrate-and-fire neurons, which generalize irregularly spiking, nonchaotic LIF neuron networks. Endowed with simple conductance-based synapses for the XIF neurons, our networks can likewise generate a balanced state of irregular asynchronous spiking. We determine the voltage probability distributions and self-consistent firing rates assuming Poisson input with finite-size spike impacts. Furthermore, we compute the full spectrum of Lyapunov exponents (LEs) and the covariant Lyapunov vectors (CLVs) specifying the corresponding perturbation directions. We find that there is approximately one positive LE for each XIF neuron, indicating in particular that a single XIF neuron renders the network dynamics chaotic. A simple mean-field approach, which can be justified by properties of the CLVs, explains this finding. As an application, we propose a spike-based computing scheme in which our networks serve as computational reservoirs and their different stability properties yield different computational capabilities.
Collapse
Affiliation(s)
- Paul Manz
- Neural Network Dynamics and Computation, Institute for Genetics, University of Bonn, 53115 Bonn, Germany
| | - Sven Goedeke
- Neural Network Dynamics and Computation, Institute for Genetics, University of Bonn, 53115 Bonn, Germany
| | - Raoul-Martin Memmesheimer
- Neural Network Dynamics and Computation, Institute for Genetics, University of Bonn, 53115 Bonn, Germany
| |
Collapse
|
21
|
Puelma Touzel M, Wolf F. Statistical mechanics of spike events underlying phase space partitioning and sequence codes in large-scale models of neural circuits. Phys Rev E 2019; 99:052402. [PMID: 31212548 DOI: 10.1103/physreve.99.052402] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2017] [Indexed: 11/07/2022]
Abstract
Cortical circuits operate in an inhibition-dominated regime of spiking activity. Recently, it was found that spiking circuit models in this regime can, despite disordered connectivity and asynchronous irregular activity, exhibit a locally stable dynamics that may be used for neural computation. The lack of existing mathematical tools has precluded analytical insight into this phase. Here we present analytical methods, tailored to the granularity of spike-based interactions, for analyzing attractor geometry in high-dimensional spiking dynamics. We apply them to reveal the properties of the complex geometry of trajectories of population spiking activity in a canonical model of locally stable spiking dynamics. We find that attractor basin boundaries are the preimages of spike-time collision events involving connected neurons. These spike-based instabilities control the divergence rate of neighboring basins and have no equivalent in rate-based models. They are located, according to the disordered connectivity, at a random subset of edges in a hypercube representation of the phase space. Iterating these edges backward under the stable dynamics induces a partition refinement on this space that converges to the attractor basins. We formulate a statistical theory of the locations of such events relative to attracting trajectories via a tractable representation of local trajectory ensembles. Averaging over the disorder, we derive the basin diameter distribution, whose characteristic scale emerges from the relative strengths of the stabilizing inhibitory coupling and the destabilizing spike interactions. Our study provides an approach to analytically dissect how connectivity, coupling strength, and single-neuron dynamics shape the phase-space geometry in the locally stable regime of spiking neural circuit dynamics.
Collapse
Affiliation(s)
- Maximilian Puelma Touzel
- Max Planck Institute for Dynamics and Self-Organization, 37077 Göttingen, Germany and Mila, Université de Montréal, Montréal, Quebec, Canada H2S 3H1
| | - Fred Wolf
- Max Planck Institute for Dynamics and Self-Organization, 37077 Göttingen, Germany; Faculty of Physics, Georg August University, 37077 Göttingen, Germany; Bernstein Center for Computational Neuroscience, 37077 Göttingen, Germany; and Kavli Institute for Theoretical Physics, University of California, Santa Barbara, Santa Barbara, California 93106-4111, USA
| |
Collapse
|
22
|
Dehghani N. Theoretical Principles of Multiscale Spatiotemporal Control of Neuronal Networks: A Complex Systems Perspective. Front Comput Neurosci 2018; 12:81. [PMID: 30349469 PMCID: PMC6187923 DOI: 10.3389/fncom.2018.00081] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2018] [Accepted: 09/11/2018] [Indexed: 01/14/2023] Open
Abstract
Success in the fine control of the nervous system depends on a deeper understanding of how neural circuits control behavior. There is, however, a wide gap between the components of neural circuits and behavior. We advance the idea that a suitable approach for narrowing this gap has to be based on a multiscale information-theoretic description of the system. We evaluate the possibility that brain-wide complex neural computations can be dissected into a hierarchy of computational motifs that rely on smaller circuit modules interacting at multiple scales. In doing so, we draw attention to the importance of formalizing the goals of stimulation in terms of neural computations so that the possible implementations are matched in scale to the underlying circuit modules.
Collapse
Affiliation(s)
- Nima Dehghani
- Department of Physics, Massachusetts Institute of Technology, Cambridge, MA, United States
- Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Cambridge, MA, United States
| |
Collapse
|
23
|
di Volo M, Torcini A. Transition from Asynchronous to Oscillatory Dynamics in Balanced Spiking Networks with Instantaneous Synapses. PHYSICAL REVIEW LETTERS 2018; 121:128301. [PMID: 30296134 DOI: 10.1103/physrevlett.121.128301] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/11/2018] [Revised: 08/10/2018] [Indexed: 05/20/2023]
Abstract
We report a transition from asynchronous to oscillatory behavior in balanced inhibitory networks of class I and II neurons with instantaneous synapses. Collective oscillations emerge for sufficiently connected networks. Their origin is understood in terms of a recently developed mean-field model, whose stable solution is a focus. Microscopic irregular firing, due to the balance, triggers sustained oscillations by exciting the relaxation dynamics towards the macroscopic focus. In balanced excitatory-inhibitory networks, the same mechanism induces quasiperiodic collective oscillations.
Collapse
Affiliation(s)
- Matteo di Volo
- Unité de Neuroscience, Information et Complexité (UNIC), CNRS FRE 3693, 1 avenue de la Terrasse, 91198 Gif sur Yvette, France
| | - Alessandro Torcini
- Laboratoire de Physique Théorique et Modélisation, Université de Cergy-Pontoise, CNRS, UMR 8089, 95302 Cergy-Pontoise cedex, France, Max Planck Institut für Physik komplexer Systeme, Nöthnitzer Str. 38, 01187 Dresden, Germany and CNR-Consiglio Nazionale delle Ricerche-Istituto dei Sistemi Complessi, via Madonna del Piano 10, 50019 Sesto Fiorentino, Italy
| |
Collapse
|
24
|
Ullner E, Politi A, Torcini A. Ubiquity of collective irregular dynamics in balanced networks of spiking neurons. CHAOS (WOODBURY, N.Y.) 2018; 28:081106. [PMID: 30180628 DOI: 10.1063/1.5049902] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/26/2018] [Accepted: 08/09/2018] [Indexed: 06/08/2023]
Abstract
We revisit the dynamics of a prototypical model of balanced activity in networks of spiking neurons. A detailed investigation of the thermodynamic limit for fixed density of connections (massive coupling) shows that, when inhibition prevails, the asymptotic regime is not asynchronous but rather characterized by a self-sustained irregular, macroscopic (collective) dynamics. So long as the connectivity is massive, this regime is found in many different setups: leaky as well as quadratic integrate-and-fire neurons; large and small coupling strength; and weak and strong external currents.
Collapse
Affiliation(s)
- Ekkehard Ullner
- Institute for Complex Systems and Mathematical Biology and Department of Physics (SUPA), Old Aberdeen, Aberdeen AB24 3UE, United Kingdom
| | - Antonio Politi
- Institute for Complex Systems and Mathematical Biology and Department of Physics (SUPA), Old Aberdeen, Aberdeen AB24 3UE, United Kingdom
| | - Alessandro Torcini
- Max Planck Institut für Physik komplexer Systeme, Nöthnitzer Str. 38, 01187 Dresden, Germany
| |
Collapse
|
25
|
Laing CR. Chaos in small networks of theta neurons. CHAOS (WOODBURY, N.Y.) 2018; 28:073101. [PMID: 30070504 DOI: 10.1063/1.5028515] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/11/2018] [Accepted: 06/25/2018] [Indexed: 06/08/2023]
Abstract
We consider small networks of instantaneously coupled theta neurons. For inhibitory coupling and fixed parameter values, some initial conditions give chaotic solutions while others give quasiperiodic ones. This behaviour seems to result from the reversibility of the equations governing the networks' dynamics. We investigate the robustness of the chaotic behaviour with respect to changes in initial conditions and parameters and find the behaviour to be quite robust as long as the reversibility of the system is preserved.
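A minimal simulation scaffold for a small network of theta neurons is sketched below. Two caveats: the coupling here is a smooth pulse peaked at the spike phase theta = pi rather than the strictly instantaneous coupling of the paper, and the parameter values are illustrative, so this sketch demonstrates the model setup and how one would probe sensitivity to initial conditions, not the chaotic regime itself.

```python
import numpy as np

def simulate(theta0, eta=0.5, k=-1.0, dt=1e-3, steps=50_000):
    # Theta neuron: d(theta)/dt = 1 - cos(theta) + (1 + cos(theta)) * input,
    # with inhibitory (k < 0) mean-field coupling through a smooth pulse
    # (1 - cos(theta))^2 peaked at the spike phase theta = pi.
    theta = np.asarray(theta0, dtype=float).copy()
    for _ in range(steps):
        syn = k * ((1.0 - np.cos(theta)) ** 2).mean()
        dtheta = (1.0 - np.cos(theta)) + (1.0 + np.cos(theta)) * (eta + syn)
        theta = np.mod(theta + dt * dtheta, 2.0 * np.pi)
    return theta

# Sensitivity to initial conditions is probed by comparing two runs that
# differ by a tiny perturbation of one neuron's initial phase; in a chaotic
# regime this separation would grow toward the attractor size.
a = simulate([0.1, 2.0, 4.0])
b = simulate([0.1 + 1e-6, 2.0, 4.0])
sep = np.max(np.abs(a - b))
```

Locating the chaotic and quasiperiodic regions reported in the paper would require its specific coupling and parameter choices.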
Collapse
Affiliation(s)
- Carlo R Laing
- Institute of Natural and Mathematical Sciences, Massey University, Private Bag 102-904 NSMC, Auckland, New Zealand
| |
Collapse
|
26
|
Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains. ENTROPY 2018; 20:e20010034. [PMID: 33265123 PMCID: PMC7512206 DOI: 10.3390/e20010034] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/07/2017] [Revised: 01/03/2018] [Accepted: 01/05/2018] [Indexed: 11/16/2022]
Abstract
The spiking activity of neuronal networks follows laws that are not time-reversal symmetric; the notions of pre-synaptic and post-synaptic neurons, stimulus correlations, and noise correlations all have a clear time order. Therefore, a biologically realistic statistical model of spiking activity should be able to capture some degree of time irreversibility. We use the thermodynamic formalism to build a framework, in the context of maximum entropy models, to quantify the degree of time irreversibility, providing an explicit formula for the information entropy production of the inferred maximum entropy Markov chain. We provide examples to illustrate our results and discuss the importance of time irreversibility for modeling spike-train statistics.
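The standard entropy-production formula for a stationary Markov chain can be made concrete with a self-contained sketch (the transition matrices below are illustrative, not spike-train data): e_p = sum over states x, y of pi_x P[x,y] log(pi_x P[x,y] / (pi_y P[y,x])), which is non-negative and vanishes exactly when detailed balance, i.e. time reversibility, holds.

```python
import numpy as np

def entropy_production(P):
    # Entropy production rate of a stationary Markov chain with transition
    # matrix P: zero iff the chain satisfies detailed balance.
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi = pi / pi.sum()  # stationary distribution (left eigenvector for eigenvalue 1)
    ep = 0.0
    for x in range(P.shape[0]):
        for y in range(P.shape[1]):
            if P[x, y] > 0 and P[y, x] > 0:
                ep += pi[x] * P[x, y] * np.log((pi[x] * P[x, y]) / (pi[y] * P[y, x]))
    return ep

# A symmetric chain is reversible: zero entropy production.
P_rev = np.array([[0.50, 0.25, 0.25],
                  [0.25, 0.50, 0.25],
                  [0.25, 0.25, 0.50]])
# A cyclically biased chain breaks time-reversal symmetry.
P_irr = np.array([[0.1, 0.8, 0.1],
                  [0.1, 0.1, 0.8],
                  [0.8, 0.1, 0.1]])
```

For a maximum entropy Markov chain inferred from spike trains, the same formula applies with P and pi taken from the inferred model.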
Collapse
|
27
|
Bernardi D, Lindner B. Optimal Detection of a Localized Perturbation in Random Networks of Integrate-and-Fire Neurons. PHYSICAL REVIEW LETTERS 2017; 118:268301. [PMID: 28707933 DOI: 10.1103/physrevlett.118.268301] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/19/2016] [Indexed: 06/07/2023]
Abstract
Experimental and theoretical studies suggest that cortical networks are chaotic and coding relies on averages over large populations. However, there is evidence that rats can respond to the short stimulation of a single cortical cell, a theoretically unexplained fact. We study effects of single-cell stimulation on a large recurrent network of integrate-and-fire neurons and propose a simple way to detect the perturbation. Detection rates obtained from simulations and analytical estimates are similar to experimental response rates if the readout is slightly biased towards specific neurons. Near-optimal detection is attained for a broad range of intermediate values of the mean coupling between neurons.
Collapse
Affiliation(s)
- Davide Bernardi
- Bernstein Center for Computational Neuroscience Berlin, Philippstraße 13, Haus 2, 10115 Berlin, Germany
- Physics Department of Humboldt University Berlin, Newtonstraße 15, 12489 Berlin, Germany
| | - Benjamin Lindner
- Bernstein Center for Computational Neuroscience Berlin, Philippstraße 13, Haus 2, 10115 Berlin, Germany
- Physics Department of Humboldt University Berlin, Newtonstraße 15, 12489 Berlin, Germany
| |
Collapse
|
28
|
Shaham N, Burak Y. Slow diffusive dynamics in a chaotic balanced neural network. PLoS Comput Biol 2017; 13:e1005505. [PMID: 28459813 PMCID: PMC5432195 DOI: 10.1371/journal.pcbi.1005505] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2016] [Revised: 05/15/2017] [Accepted: 04/06/2017] [Indexed: 11/18/2022] Open
Abstract
It has been proposed that neural noise in the cortex arises from chaotic dynamics in the balanced state: in this model of cortical dynamics, the excitatory and inhibitory inputs to each neuron approximately cancel, and activity is driven by fluctuations of the synaptic inputs around their mean. It remains unclear whether neural networks in the balanced state can perform tasks that are highly sensitive to noise, such as storage of continuous parameters in working memory, while also accounting for the irregular behavior of single neurons. Here we show that continuous parameter working memory can be maintained in the balanced state, in a neural circuit with a simple network architecture. We show analytically that in the limit of an infinite network, the dynamics generated by this architecture are characterized by a continuous set of steady balanced states, allowing for the indefinite storage of a continuous parameter. In finite networks, we show that the chaotic noise drives diffusive motion along the approximate attractor, which gradually degrades the stored memory. We analyze the dynamics and show that the slow diffusive motion induces slowly decaying temporal cross correlations in the activity, which differ substantially from those previously described in the balanced state. We calculate the diffusivity, and show that it is inversely proportional to the system size. For large enough (but realistic) neural population sizes, and with suitable tuning of the network connections, the proposed balanced network can sustain continuous parameter values in memory over time scales several orders of magnitude longer than the single-neuron time scale. This work studies the effects of chaotic dynamics, a prominent feature of the balanced state model, on storage of continuous parameters in working memory.
We propose a simple model of a balanced network with mutual inhibition, and show that it possesses a continuum of steady states, a commonly proposed mechanism for maintenance of continuous parameter working memory in the brain. We use analytical methods, combined with large scale numerical simulations, to analyze the diffusive dynamics and correlation patterns generated by the chaotic nature of the system. We obtain new conclusions and predictions on irregular activity resulting from the combination of continuous attractor dynamics with the balanced state. These include a prediction of measurable, slowly decaying spike correlations, and a quantification of how persistence depends on the neural population size.
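The key scaling claim, diffusivity inversely proportional to system size, can be illustrated with a toy random walk in which the stored value drifts each step by the average of N independent fluctuations (a caricature of chaotic noise averaged over a population, not the authors' network model):

```python
import numpy as np

def drift_variance(n_units, n_steps=200, n_trials=500, seed=0):
    """A stored value drifts each step by the average of n_units independent
    unit-variance fluctuations, so the per-step drift variance is 1/n_units
    and the final variance (proportional to the diffusivity) scales as
    1/n_units."""
    rng = np.random.default_rng(seed)
    pos = np.zeros(n_trials)
    for _ in range(n_steps):
        pos += rng.standard_normal((n_trials, n_units)).mean(axis=1)
    return float(pos.var())

v_small = drift_variance(5)      # ~ n_steps / 5
v_large = drift_variance(500)    # ~ n_steps / 500
```

A 100-fold increase in population size should shrink the memory's diffusive degradation by roughly a factor of 100.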
Collapse
Affiliation(s)
- Nimrod Shaham
- Racah Institute of Physics, The Hebrew University of Jerusalem, Jerusalem, Israel
| | - Yoram Burak
- Racah Institute of Physics, The Hebrew University of Jerusalem, Jerusalem, Israel; Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
| |
Collapse
|
29
|
Antonopoulos CG, Baptista MS. Maintaining extensivity in evolutionary multiplex networks. PLoS One 2017; 12:e0175389. [PMID: 28403162 PMCID: PMC5389798 DOI: 10.1371/journal.pone.0175389] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2016] [Accepted: 03/25/2017] [Indexed: 12/03/2022] Open
Abstract
In this paper, we explore the role of network topology in maintaining the extensive property of entropy. We study analytically and numerically how the topology contributes to maintaining extensivity of entropy in multiplex networks, i.e. networks of subnetworks (layers), by means of the sum of the positive Lyapunov exponents, H_KS, a quantity related to entropy. We show that extensivity relies not only on the interplay between the coupling strengths of the dynamics associated with the intra (short-range) and inter (long-range) interactions, but also on the sum of the intra-degrees of the nodes of the layers. For the analytically treated networks of size N, among several other results, we show that if the sum of the intra-degrees (and the sum of inter-degrees) scales as N^(θ+1), θ > 0, extensivity can be maintained if the intra-coupling (and the inter-coupling) strength scales as N^(-θ), when evolution is driven by the maximisation of H_KS. We then verify our analytical results by performing numerical simulations in multiplex networks formed by electrically and chemically coupled neurons.
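Extensivity of H_KS is easiest to see in the degenerate case of uncoupled chaotic units, where the positive Lyapunov exponents simply add (an illustration only; the paper treats coupled multiplex networks). The sketch estimates the exponent of each logistic map x -> 4x(1-x) as the time average of log|f'(x)| and sums them:

```python
import numpy as np

def ks_entropy_uncoupled_logistic(n_maps, n_iter=20000, seed=1):
    """H_KS for n_maps independent logistic maps x -> 4x(1-x): the Lyapunov
    exponent of each map is the time average of log|f'(x)| = log|4 - 8x|;
    for independent maps the positive exponents add, so H_KS grows linearly
    with system size (extensivity)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.1, 0.9, size=n_maps)
    for _ in range(100):                 # discard a short transient
        x = 4 * x * (1 - x)
    acc = np.zeros(n_maps)
    for _ in range(n_iter):
        acc += np.log(np.abs(4 - 8 * x))
        x = 4 * x * (1 - x)
    return float((acc / n_iter).sum())   # sum of positive Lyapunov exponents

h10 = ks_entropy_uncoupled_logistic(10)
h20 = ks_entropy_uncoupled_logistic(20, seed=2)
```

Doubling the number of layers should roughly double H_KS, and each per-map exponent should land near the known value ln 2 for the fully chaotic logistic map.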
Collapse
Affiliation(s)
- Chris G. Antonopoulos
- Department of Mathematical Sciences, University of Essex, Wivenhoe Park, United Kingdom
| | - Murilo S. Baptista
- Institute of Complex Sciences and Mathematical Biology, University of Aberdeen, SUPA, Aberdeen, United Kingdom
| |
Collapse
|
30
|
Lajoie G, Lin KK, Thivierge JP, Shea-Brown E. Encoding in Balanced Networks: Revisiting Spike Patterns and Chaos in Stimulus-Driven Systems. PLoS Comput Biol 2016; 12:e1005258. [PMID: 27973557 PMCID: PMC5156368 DOI: 10.1371/journal.pcbi.1005258] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2016] [Accepted: 11/21/2016] [Indexed: 11/22/2022] Open
Abstract
Highly connected recurrent neural networks often produce chaotic dynamics, meaning their precise activity is sensitive to small perturbations. What are the consequences of chaos for how such networks encode streams of temporal stimuli? On the one hand, chaos is a strong source of randomness, suggesting that small changes in stimuli will be obscured by intrinsically generated variability. On the other hand, recent work shows that the type of chaos that occurs in spiking networks can have a surprisingly low-dimensional structure, suggesting that there may be room for fine stimulus features to be precisely resolved. Here we show that strongly chaotic networks produce patterned spikes that reliably encode time-dependent stimuli: using a decoder sensitive to spike times on timescales of tens of milliseconds, one can easily distinguish responses to very similar inputs. Moreover, recurrence serves to distribute signals throughout chaotic networks so that small groups of cells can encode substantial information about signals arriving elsewhere. A conclusion is that the presence of strong chaos in recurrent networks need not exclude precise encoding of temporal stimuli via spike patterns. Recurrently connected populations of excitatory and inhibitory neurons found in cortex are known to produce rich and irregular spiking activity, with complex trial-to-trial variability in response to input stimuli. Many theoretical studies found this firing regime to be associated with chaos, where tiny perturbations explode to impact subsequent neural activity. As a result, the precise spiking patterns produced by such networks would be expected to be too fragile to carry any valuable information about stimuli, since inevitable sources of noise such as synaptic failure or ion channel fluctuations would be amplified by chaotic dynamics on repeated trials. In this article we revisit the implications of chaos in input-driven networks and directly measure its impact on evoked population spike patterns.
We find that chaotic network dynamics can, in fact, produce highly patterned spiking activity which can be used by a simple decoder to perform input-classification tasks. This can be explained by the presence of low-dimensional, input-specific chaotic attractors, leading to a form of trial-to-trial variability that is intermittent, rather than uniformly random. We propose that chaos is a manageable by-product of recurrent connectivity, which serves to efficiently distribute information about stimuli throughout a network.
Collapse
Affiliation(s)
- Guillaume Lajoie
- University of Washington Institute for Neuroengineering, University of Washington, Seattle, Washington, United States of America
- Department of Nonlinear Dynamics, Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
- * E-mail:
| | - Kevin K. Lin
- School of Mathematics, University of Arizona, Tucson, Arizona, United States of America
| | | | - Eric Shea-Brown
- University of Washington Institute for Neuroengineering, University of Washington, Seattle, Washington, United States of America
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
- Department of Physiology and Biophysics, University of Washington, Seattle, Washington, United States of America
| |
Collapse
|
31
|
Bi Z, Zhou C. Spike Pattern Structure Influences Synaptic Efficacy Variability under STDP and Synaptic Homeostasis. II: Spike Shuffling Methods on LIF Networks. Front Comput Neurosci 2016; 10:83. [PMID: 27555816 PMCID: PMC4977343 DOI: 10.3389/fncom.2016.00083] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2016] [Accepted: 07/25/2016] [Indexed: 12/12/2022] Open
Abstract
Synapses may undergo variable changes during plasticity because of the variability of spike patterns such as temporal stochasticity and spatial randomness. Here, we refer to the variability of synaptic weight changes during plasticity as efficacy variability. In this paper, we investigate how four aspects of spike pattern statistics (i.e., synchronous firing, burstiness/regularity, heterogeneity of rates, and heterogeneity of cross-correlations) influence the efficacy variability under pair-wise additive spike-timing dependent plasticity (STDP) and synaptic homeostasis (the mean strength of plastic synapses into a neuron is bounded), by applying spike shuffling methods to spike patterns self-organized by a network of excitatory and inhibitory leaky integrate-and-fire (LIF) neurons. With the increase of the decay time scale of the inhibitory synaptic currents, the LIF network undergoes a transition from an asynchronous state to a weakly synchronous state and then to a synchronous bursting state. We first shuffle these spike patterns using a variety of methods, each designed to markedly change a specific pattern statistic, and then investigate the change of efficacy variability of the synapses under STDP and synaptic homeostasis when the neurons in the network fire according to the spike patterns before and after being treated by a shuffling method. In this way, we can understand how the change of pattern statistics may cause the change of efficacy variability. Our results are consistent with those of our previous study, which implements spike-generating models on converging motifs. We also find that burstiness/regularity is important in determining the efficacy variability under asynchronous states, while heterogeneity of cross-correlations is the main factor causing efficacy variability when the network moves into synchronous bursting states (the states observed in epilepsy).
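A minimal sketch of the pair-wise additive STDP rule this entry builds on: every pre/post spike pair contributes potentiation when the presynaptic spike leads and depression when it lags, with exponential dependence on the time lag (amplitude and time-constant values below are illustrative, not the paper's):

```python
import numpy as np

def stdp_weight_change(pre_times, post_times, a_plus=0.01, a_minus=0.012,
                       tau=20.0):
    """Pair-based additive STDP: a pre spike leading a post spike by dt > 0
    adds a_plus * exp(-dt/tau); a pre spike lagging by |dt| subtracts
    a_minus * exp(-|dt|/tau). Times and tau are in milliseconds."""
    dw = 0.0
    for t_pre in pre_times:
        for t_post in post_times:
            dt = t_post - t_pre
            if dt > 0:
                dw += a_plus * np.exp(-dt / tau)
            elif dt < 0:
                dw -= a_minus * np.exp(dt / tau)
    return dw

# Causal pairing (pre 5 ms before post) potentiates; the reverse depresses.
ltp = stdp_weight_change([10.0, 50.0], [15.0, 55.0])
ltd = stdp_weight_change([15.0, 55.0], [10.0, 50.0])
```

Because each pair contributes independently, the total weight change, and hence its variability across synapses, depends directly on the pattern statistics (synchrony, burstiness, rate and correlation heterogeneity) that the shuffling methods manipulate.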
Collapse
Affiliation(s)
- Zedong Bi
- State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing, China; Department of Physics, Hong Kong Baptist University, Kowloon Tong, Hong Kong
| | - Changsong Zhou
- Department of Physics, Hong Kong Baptist University, Kowloon Tong, Hong Kong; Centre for Nonlinear Studies, Beijing-Hong Kong-Singapore Joint Centre for Nonlinear and Complex Systems, Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Kowloon Tong, Hong Kong; Beijing Computational Science Research Center, Beijing, China; Research Centre, Hong Kong Baptist University Institute of Research and Continuing Education, Shenzhen, China
| |
Collapse
|
32
|
Denève S, Machens CK. Efficient codes and balanced networks. Nat Neurosci 2016; 19:375-82. [PMID: 26906504 DOI: 10.1038/nn.4243] [Citation(s) in RCA: 247] [Impact Index Per Article: 27.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2015] [Accepted: 01/13/2016] [Indexed: 12/12/2022]
Abstract
Recent years have seen a growing interest in inhibitory interneurons and their circuits. A striking property of cortical inhibition is how tightly it balances excitation. Inhibitory currents not only match excitatory currents on average, but track them on a millisecond time scale, whether they are caused by external stimuli or spontaneous fluctuations. We review, together with experimental evidence, recent theoretical approaches that investigate the advantages of such tight balance for coding and computation. These studies suggest a possible revision of the dominant view that neurons represent information with firing rates corrupted by Poisson noise. Instead, tight excitatory/inhibitory balance may be a signature of a highly cooperative code, orders of magnitude more precise than a Poisson rate code. Moreover, tight balance may provide a template that allows cortical neurons to construct high-dimensional population codes and learn complex functions of their inputs.
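The balance argument reviewed here can be sketched numerically: with K excitatory and K inhibitory inputs of strength 1/sqrt(K), each current alone grows like sqrt(K), while their difference keeps order-one fluctuations independent of K (a toy version of the classic balanced-state scaling, not the authors' derivation):

```python
import numpy as np

def input_stats(k, n_samples=2000, seed=2):
    """k excitatory and k inhibitory Poisson inputs (rate 1), each with
    synaptic strength 1/sqrt(k). Returns the mean excitatory current and the
    standard deviation of the net (excitatory minus inhibitory) current."""
    rng = np.random.default_rng(seed)
    exc = rng.poisson(1.0, size=(n_samples, k)).sum(axis=1) / np.sqrt(k)
    inh = rng.poisson(1.0, size=(n_samples, k)).sum(axis=1) / np.sqrt(k)
    return float(exc.mean()), float((exc - inh).std())

e_small, net_small = input_stats(100)    # mean E current ~ sqrt(100) = 10
e_large, net_large = input_stats(2500)   # mean E current ~ sqrt(2500) = 50
# The net fluctuation stays ~ sqrt(2) in both cases: cancellation keeps the
# drive O(1) even as the individual currents diverge with K.
```

This static picture captures only the average cancellation; the millisecond-by-millisecond tracking of excitation by inhibition that the review emphasizes requires recurrent dynamics.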
Collapse
Affiliation(s)
- Sophie Denève
- Laboratoire de Neurosciences Cognitives, École Normale Supérieure, Paris, France
| | | |
Collapse
|
33
|
Fisher JAN, Huang S, Ye M, Nabili M, Wilent WB, Krauthamer V, Myers MR, Welle CG. Real-Time Detection and Monitoring of Acute Brain Injury Utilizing Evoked Electroencephalographic Potentials. IEEE Trans Neural Syst Rehabil Eng 2016; 24:1003-1012. [PMID: 26955039 DOI: 10.1109/tnsre.2016.2529663] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Rapid detection and diagnosis of a traumatic brain injury (TBI) can significantly improve the prognosis for recovery. Helmet-mounted sensors that detect impact severity based on measurements of acceleration or pressure show promise for aiding triage and transport decisions in active, field environments such as professional sports or military combat. The detected signals, however, report on the mechanics of an impact rather than directly indicating the presence and severity of an injury. We explored the use of cortical somatosensory evoked electroencephalographic potentials (SSEPs) to detect and track, in real-time, neural electrophysiological abnormalities within the first hour following head injury in an animal model. To study the immediate electrophysiological effects of injury in vivo, we developed an experimental paradigm involving focused ultrasound that permits continuous, real-time measurements and minimizes mechanical artifact. Injury was associated with a dramatic reduction of amplitude over the damaged hemisphere directly after the injury. The amplitude systematically improved over time but remained significantly decreased at one hour, compared with baseline. In contrast, at one hour there was a concomitant enhancement of the cortical SSEP amplitude evoked from the uninjured hemisphere. Analysis of the inter-trial electroencephalogram (EEG) also revealed significant changes in low-frequency components and an increase in EEG entropy up to 30 minutes after injury, likely reflecting altered EEG reactivity to somatosensory stimuli. Injury-induced alterations in SSEPs were also observed using noninvasive epidermal electrodes, demonstrating viability of practical implementation. 
These results suggest cortical SSEPs recorded at just a few locations by head-mounted sensors and associated multiparametric analyses could potentially be used to rapidly detect and monitor brain injury in settings that normally present significant levels of mechanical and electrical noise.
Collapse
|
34
|
Der R. In Search for the Neural Mechanisms of Individual Development: Behavior-Driven Differential Hebbian Learning. Front Robot AI 2016. [DOI: 10.3389/frobt.2015.00037] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
|
35
|
Palmer TN, O'Shea M. Solving difficult problems creatively: a role for energy optimised deterministic/stochastic hybrid computing. Front Comput Neurosci 2015; 9:124. [PMID: 26528173 PMCID: PMC4600914 DOI: 10.3389/fncom.2015.00124] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2015] [Accepted: 09/18/2015] [Indexed: 11/20/2022] Open
Abstract
How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete.
Collapse
Affiliation(s)
- Tim N Palmer
- Department of Physics, University of Oxford, Oxford, UK
| | - Michael O'Shea
- Centre for Computational Neuroscience and Robotics, University of Sussex, Brighton, UK
| |
Collapse
|
36
|
Novel plasticity rule can explain the development of sensorimotor intelligence. Proc Natl Acad Sci U S A 2015; 112:E6224-32. [PMID: 26504200 DOI: 10.1073/pnas.1508400112] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Grounding autonomous behavior in the nervous system is a fundamental challenge for neuroscience. In particular, self-organized behavioral development provides more questions than answers. Are there special functional units for curiosity, motivation, and creativity? This paper argues that these features can be grounded in synaptic plasticity itself, without requiring any higher-level constructs. We propose differential extrinsic plasticity (DEP) as a new synaptic rule for self-learning systems and apply it to a number of complex robotic systems as a test case. Without specifying any purpose or goal, seemingly purposeful and adaptive rhythmic behavior is developed, displaying a certain level of sensorimotor intelligence. These surprising results require no system-specific modifications of the DEP rule. They rather arise from the underlying mechanism of spontaneous symmetry breaking, which is due to the tight brain-body-environment coupling. The new synaptic rule is biologically plausible and would be an interesting target for neurobiological investigation. We also argue that this neuronal mechanism may have been a catalyst in natural evolution.
Collapse
|
37
|
Harish O, Hansel D. Asynchronous Rate Chaos in Spiking Neuronal Circuits. PLoS Comput Biol 2015; 11:e1004266. [PMID: 26230679 PMCID: PMC4521798 DOI: 10.1371/journal.pcbi.1004266] [Citation(s) in RCA: 48] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2014] [Accepted: 04/03/2015] [Indexed: 01/25/2023] Open
Abstract
The brain exhibits temporally complex patterns of activity with features similar to those of chaotic systems. Theoretical studies over the last twenty years have described various computational advantages for such regimes in neuronal systems. Nevertheless, it still remains unclear whether chaos requires specific cellular properties or network architectures, or whether it is a generic property of neuronal circuits. We investigate the dynamics of networks of excitatory-inhibitory (EI) spiking neurons with random sparse connectivity operating in the regime of balance of excitation and inhibition. Combining Dynamical Mean-Field Theory with numerical simulations, we show that chaotic, asynchronous firing rate fluctuations emerge generically for sufficiently strong synapses. Two different mechanisms can lead to these chaotic fluctuations. One mechanism relies on slow I-I inhibition which gives rise to slow subthreshold voltage and rate fluctuations. The decorrelation time of these fluctuations is proportional to the time constant of the inhibition. The second mechanism relies on the recurrent E-I-E feedback loop. It requires slow excitation but the inhibition can be fast. In the corresponding dynamical regime all neurons exhibit rate fluctuations on the time scale of the excitation. Another feature of this regime is that the population-averaged firing rate is substantially smaller in the excitatory population than in the inhibitory population. This is not necessarily the case in the I-I mechanism. Finally, we discuss the neurophysiological and computational significance of our results.
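The transition to rate chaos at sufficiently strong coupling can be reproduced in the classic random rate-network caricature (a sketch in the spirit of dynamical mean-field analyses, not the authors' spiking E-I model): below gain 1 a tiny perturbation between two copies of the network decays; above it, the perturbation grows until it saturates at the attractor scale.

```python
import numpy as np

def perturbation_growth(g, n=200, dt=0.05, settle=2000, horizon=3000, seed=3):
    """Random rate network dx/dt = -x + g * J @ tanh(x), J_ij ~ N(0, 1/n).
    Returns the factor by which a tiny perturbation between two copies of
    the network grows (or shrinks) over the simulation horizon."""
    rng = np.random.default_rng(seed)
    J = rng.standard_normal((n, n)) / np.sqrt(n)
    x = rng.standard_normal(n)
    for _ in range(settle):              # relax onto the attractor first
        x += dt * (-x + g * J @ np.tanh(x))
    y = x + 1e-8 * rng.standard_normal(n)
    d0 = np.linalg.norm(x - y)
    for _ in range(horizon):             # evolve both copies side by side
        x += dt * (-x + g * J @ np.tanh(x))
        y += dt * (-y + g * J @ np.tanh(y))
    return np.linalg.norm(x - y) / d0

growth_weak = perturbation_growth(0.5)   # g < 1: perturbation shrinks
growth_strong = perturbation_growth(2.0) # g > 1: perturbation explodes
```

The explosive divergence for strong coupling is the rate-chaos signature; the paper's contribution is showing how analogous fluctuations arise generically in spiking E-I circuits through slow inhibitory or excitatory feedback.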
Collapse
Affiliation(s)
- Omri Harish
- Center for Neurophysics, Physiology and Pathologies, CNRS UMR8119 and Institute of Neuroscience and Cognition, Université Paris Descartes, Paris, France
| | - David Hansel
- Center for Neurophysics, Physiology and Pathologies, CNRS UMR8119 and Institute of Neuroscience and Cognition, Université Paris Descartes, Paris, France
- The Alexander Silberman Institute of Life Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
| |
Collapse
|
38
|
Thomas PJ. Commentary on Structured chaos shapes spike-response noise entropy in balanced neural networks, by Lajoie, Thivierge, and Shea-Brown. Front Comput Neurosci 2015; 9:23. [PMID: 25805988 PMCID: PMC4354338 DOI: 10.3389/fncom.2015.00023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2014] [Accepted: 02/08/2015] [Indexed: 11/13/2022] Open
|
39
|
Lajoie G, Thivierge JP, Shea-Brown E. Structured chaos shapes spike-response noise entropy in balanced neural networks. Front Comput Neurosci 2014; 8:123. [PMID: 25324772 PMCID: PMC4183092 DOI: 10.3389/fncom.2014.00123] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2014] [Accepted: 09/11/2014] [Indexed: 11/13/2022] Open
Abstract
Large networks of sparsely coupled, excitatory and inhibitory cells occur throughout the brain. For many models of these networks, a striking feature is that their dynamics are chaotic and thus, are sensitive to small perturbations. How does this chaos manifest in the neural code? Specifically, how variable are the spike patterns that such a network produces in response to an input signal? To answer this, we derive a bound for a general measure of variability: spike-train entropy. This leads to important insights on the variability of multi-cell spike pattern distributions in large recurrent networks of spiking neurons responding to fluctuating inputs. The analysis is based on results from random dynamical systems theory and is complemented by detailed numerical simulations. We find that the spike pattern entropy is an order of magnitude lower than what would be extrapolated from single cells. This holds despite the fact that network coupling becomes vanishingly sparse as network size grows, a phenomenon that depends on "extensive chaos," as previously discovered for balanced networks without stimulus drive. Moreover, we show how spike pattern entropy is controlled by temporal features of the inputs. Our findings provide insight into how neural networks may encode stimuli in the presence of inherently chaotic dynamics.
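Why joint spike-pattern entropy can fall far below the single-cell extrapolation is easy to demonstrate on a toy correlated population (a plug-in estimate on synthetic spike words, not the paper's random-dynamical-systems bound):

```python
import numpy as np

def entropy_bits(samples):
    """Plug-in Shannon entropy (bits) of a sequence of discrete symbols."""
    _, counts = np.unique(samples, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(4)
n_cells, n_words = 10, 200_000

# Correlated population: all cells fire together whenever a shared driver is
# active; otherwise each cell spikes independently at a low rate.
shared = rng.random(n_words) < 0.5
spikes = (rng.random((n_words, n_cells)) < 0.1) | shared[:, None]

# Entropy of the joint 10-cell spike word vs. the sum of single-cell entropies
# (the naive extrapolation that ignores correlations).
words = (spikes * (1 << np.arange(n_cells))).sum(axis=1)
h_joint = entropy_bits(words)
h_single_sum = sum(entropy_bits(spikes[:, i]) for i in range(n_cells))
```

With the shared driver in place, the joint entropy comes out at roughly a third of the single-cell sum; in the paper, the structured (low-dimensional) chaos of the balanced network plays the role of the shared driver.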
Collapse
Affiliation(s)
- Guillaume Lajoie
- Nonlinear Dynamics Department, Max Planck Institute for Dynamics and Self-Organization, Goettingen, Germany; Bernstein Center for Computational Neuroscience, Max Planck Institute for Dynamics and Self-Organization, Goettingen, Germany; Applied Mathematics Department, University of Washington, Seattle, WA, USA
| | - Jean-Philippe Thivierge
- School of Psychology and Center for Neural Dynamics, University of Ottawa, Ottawa, ON, Canada
| | - Eric Shea-Brown
- Applied Mathematics Department, University of Washington, Seattle, WA, USA; Physiology and Biophysics Department, University of Washington, Seattle, WA, USA
| |
Collapse
|
40
|
Lajoie G, Thivierge JP, Shea-Brown E. Structured chaos shapes joint spike-response noise entropy in temporally driven balanced networks. BMC Neurosci 2014. [PMCID: PMC4126493 DOI: 10.1186/1471-2202-15-s1-p48] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
|
41
|
Doiron B, Litwin-Kumar A. Balanced neural architecture and the idling brain. Front Comput Neurosci 2014; 8:56. [PMID: 24904394 PMCID: PMC4034496 DOI: 10.3389/fncom.2014.00056] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2013] [Accepted: 05/07/2014] [Indexed: 12/05/2022] Open
Abstract
A signature feature of cortical spike trains is their trial-to-trial variability. This variability is large in the spontaneous state and is reduced when cortex is driven by a stimulus or task. Models of recurrent cortical networks with unstructured, yet balanced, excitation and inhibition generate variability consistent with evoked conditions. However, these models produce spike trains which lack the long timescale fluctuations and large variability exhibited during spontaneous cortical dynamics. We propose that global network architectures which support a large number of stable states (attractor networks) allow balanced networks to capture key features of neural variability in both spontaneous and evoked conditions. We illustrate this using balanced spiking networks with clustered assembly, feedforward chain, and ring structures. By assuming that global network structure is related to stimulus preference, we show that signal correlations are related to the magnitude of correlations in the spontaneous state. Finally, we contrast the impact of stimulation on the trial-to-trial variability in attractor networks with that of strongly coupled spiking networks with chaotic firing rate instabilities, recently investigated by Ostojic (2014). We find that only attractor networks replicate an experimentally observed stimulus-induced quenching of trial-to-trial variability. In total, the comparison of the trial-variable dynamics of single neurons or neuron pairs during spontaneous and evoked activity can be a window into the global structure of balanced cortical networks.
Collapse
Affiliation(s)
- Brent Doiron
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, PA, USA
| | - Ashok Litwin-Kumar
- Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, PA, USA; Program for Neural Computation, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, PA, USA
| |
Collapse
|
42
|
Wolf F, Engelken R, Puelma-Touzel M, Weidinger JDF, Neef A. Dynamical models of cortical circuits. Curr Opin Neurobiol 2014; 25:228-36. [PMID: 24658059 DOI: 10.1016/j.conb.2014.01.017] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2013] [Revised: 01/21/2014] [Accepted: 01/22/2014] [Indexed: 11/27/2022]
Abstract
Cortical neurons operate within recurrent neuronal circuits. Dissecting their operation is key to understanding information processing in the cortex and requires transparent and adequate dynamical models of circuit function. Convergent evidence from experimental and theoretical studies indicates that strong feedback inhibition shapes the operating regime of cortical circuits. For circuits operating in inhibition-dominated regimes, mathematical and computational studies over the past several years achieved substantial advances in understanding response modulation and heterogeneity, emergent stimulus selectivity, inter-neuron correlations, and microstate dynamics. The latter indicate a surprisingly strong dependence of the collective circuit dynamics on the features of single-neuron action potential generation. New approaches are needed to definitively characterize the cortical operating regime.
Collapse
Affiliation(s)
- Fred Wolf
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany; Bernstein Center for Computational Neuroscience, Göttingen, Germany; Bernstein Focus Neurotechnology, Göttingen, Germany; Faculty of Physics, Göttingen University, Göttingen, Germany.
| | - Rainer Engelken
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany; Bernstein Center for Computational Neuroscience, Göttingen, Germany; Bernstein Focus Neurotechnology, Göttingen, Germany; Faculty of Physics, Göttingen University, Göttingen, Germany
| | - Maximilian Puelma-Touzel
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany; Bernstein Center for Computational Neuroscience, Göttingen, Germany; Bernstein Focus Neurotechnology, Göttingen, Germany; Faculty of Physics, Göttingen University, Göttingen, Germany
| | - Juan Daniel Flórez Weidinger
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany; Bernstein Center for Computational Neuroscience, Göttingen, Germany; Bernstein Focus Neurotechnology, Göttingen, Germany; Faculty of Physics, Göttingen University, Göttingen, Germany
| | - Andreas Neef
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany; Bernstein Center for Computational Neuroscience, Göttingen, Germany; Bernstein Focus Neurotechnology, Göttingen, Germany; Faculty of Physics, Göttingen University, Göttingen, Germany
| |
Collapse
|
43
|
Ostojic S. Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons. Nat Neurosci 2014; 17:594-600. [PMID: 24561997 DOI: 10.1038/nn.3658] [Citation(s) in RCA: 174] [Impact Index Per Article: 15.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2013] [Accepted: 01/23/2014] [Indexed: 12/14/2022]
Abstract
Asynchronous activity in balanced networks of excitatory and inhibitory neurons is believed to constitute the primary medium for the propagation and transformation of information in the neocortex. Here we show that an unstructured, sparsely connected network of model spiking neurons can display two fundamentally different types of asynchronous activity that imply vastly different computational properties. For weak synaptic couplings, the network at rest is in the well-studied asynchronous state, in which individual neurons fire irregularly at constant rates. In this state, an external input leads to a highly redundant response of different neurons that favors information transmission but hinders more complex computations. For strong couplings, we find that the network at rest displays rich internal dynamics, in which the firing rates of individual neurons fluctuate strongly in time and across neurons. In this regime, the internal dynamics interact with incoming stimuli to provide a substrate for complex information processing and learning.
Affiliation(s)
- Srdjan Ostojic
- Group for Neural Theory, Laboratoire de Neurosciences Cognitives, INSERM U960, École Normale Supérieure, Paris, France
44
Farkhooi F, Froese A, Muller E, Menzel R, Nawrot MP. Cellular adaptation facilitates sparse and reliable coding in sensory pathways. PLoS Comput Biol 2013; 9:e1003251. [PMID: 24098101 PMCID: PMC3789775 DOI: 10.1371/journal.pcbi.1003251] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2013] [Accepted: 08/16/2013] [Indexed: 11/30/2022] Open
Abstract
Most neurons in peripheral sensory pathways initially respond vigorously when a preferred stimulus is presented, but adapt as stimulation continues. It is unclear how this phenomenon affects stimulus coding in the later stages of sensory processing. Here, we show that a temporally sparse and reliable stimulus representation develops naturally in sequential stages of a sensory network with adapting neurons. As a modeling framework we employ a mean-field approach together with an adaptive population density treatment, accompanied by numerical simulations of spiking neural networks. We find that cellular adaptation plays a critical role in the dynamic reduction of the trial-by-trial variability of cortical spike responses by transiently suppressing self-generated fast fluctuations in the cortical balanced network. This provides an explanation for a widespread cortical phenomenon by a simple mechanism. We further show that in the insect olfactory system cellular adaptation is sufficient to explain the emergence of the temporally sparse and reliable stimulus representation in the mushroom body. Our results reveal a generic, biophysically plausible mechanism that can explain the emergence of a temporally sparse and reliable stimulus representation within a sequential processing architecture.
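The single-cell ingredient of this mechanism, spike-triggered adaptation, can be sketched with a leaky integrate-and-fire neuron (the network-level variability reduction in the paper is not reproduced here; all parameter values below are illustrative assumptions, not the paper's):

```python
import numpy as np

def lif_spikes(b, i_ext=1.5, t_total=2000.0, dt=0.1,
               tau_m=10.0, tau_a=100.0, v_th=1.0):
    """Leaky integrate-and-fire neuron with a spike-triggered adaptation
    current `a`: each spike increments a by b, and a decays with time
    constant tau_a while subtracting from the input drive."""
    v, a, spikes = 0.0, 0.0, []
    for step in range(int(t_total / dt)):
        v += dt / tau_m * (-v + i_ext - a)
        a += dt / tau_a * (-a)
        if v >= v_th:
            v = 0.0
            a += b
            spikes.append(step * dt)
    return np.array(spikes)

no_adapt = lif_spikes(b=0.0)   # fires at a constant rate
adapt = lif_spikes(b=0.05)     # rate drops as the adaptation current builds
# inter-spike intervals at the start vs. the end of the response
early_isi = np.diff(adapt[:6]).mean()
late_isi = np.diff(adapt[-6:]).mean()
```

The adapting neuron responds vigorously at stimulus onset and then slows down, the temporal sparsening that the abstract describes at the single-cell level.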
Affiliation(s)
- Farzad Farkhooi
- Neuroinformatics & Theoretical Neuroscience, Freie Universität Berlin, and Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Anja Froese
- Institut für Biologie-Neurobiologie, Freie Universität Berlin, Berlin, Germany
- Eilif Muller
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Randolf Menzel
- Institut für Biologie-Neurobiologie, Freie Universität Berlin, Berlin, Germany
- Martin P. Nawrot
- Neuroinformatics & Theoretical Neuroscience, Freie Universität Berlin, and Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
45
Three generic bistable scenarios of the interplay of voltage pulses and gene expression in neurons. Neural Netw 2013; 44:51-63. [DOI: 10.1016/j.neunet.2013.02.004] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2011] [Revised: 01/05/2013] [Accepted: 02/25/2013] [Indexed: 12/28/2022]
46
Luccioli S, Olmi S, Politi A, Torcini A. Critical connectivity for emergence of collective oscillations in strongly diluted neural networks. BMC Neurosci 2013. [PMCID: PMC3704778 DOI: 10.1186/1471-2202-14-s1-p394] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
47
Engelken R, Monteforte M, Wolf F. Dynamical entropy production in cortical circuits with different network topologies. BMC Neurosci 2013. [PMCID: PMC3704888 DOI: 10.1186/1471-2202-14-s1-p421] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022] Open
48
Wallace E, Maei HR, Latham PE. Randomly Connected Networks Have Short Temporal Memory. Neural Comput 2013; 25:1408-39. [DOI: 10.1162/neco_a_00449] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The brain is easily able to process and categorize complex time-varying signals. For example, the two sentences, “It is cold in London this time of year” and “It is hot in London this time of year,” have different meanings, even though the words hot and cold appear several seconds before the ends of the two sentences. Any network that can tell these sentences apart must therefore have a long temporal memory. In other words, the current state of the network must depend on events that happened several seconds ago. This is a difficult task, as neurons are dominated by relatively short time constants—tens to hundreds of milliseconds. Nevertheless, it was recently proposed that randomly connected networks could exhibit the long memories necessary for complex temporal processing. This is an attractive idea, both for its simplicity and because little tuning of recurrent synaptic weights is required. However, we show that when connectivity is high, as it is in the mammalian brain, randomly connected networks cannot exhibit temporal memory much longer than the time constants of their constituent neurons.
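How quickly a trace fades in a dense random network can be probed with a minimal linear sketch (this only illustrates impulse decay governed by the spectral radius; it is an assumed toy setup, not the paper's analysis):

```python
import numpy as np

def impulse_memory(rho, n=300, t_max=400, seed=1):
    """Number of steps until a linear network's response to an impulse
    decays below 1% of its initial size, for a dense random connectivity
    matrix rescaled to spectral radius rho."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))
    W *= rho / np.abs(np.linalg.eigvals(W)).max()
    x = rng.normal(0.0, 1.0, size=n)   # impulse at t = 0
    x0_norm = np.linalg.norm(x)
    for t in range(t_max):
        if np.linalg.norm(x) < 0.01 * x0_norm:
            return t
        x = W @ x
    return t_max

short_mem = impulse_memory(rho=0.5)    # trace dies within a few steps
long_mem = impulse_memory(rho=0.95)    # longer, but still bounded
```

Even with the spectral radius tuned close to one, the memory of the impulse remains finite and modest, consistent with the abstract's claim that dense random connectivity cannot stretch temporal memory far beyond the constituent time constants.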
Affiliation(s)
- Edward Wallace
- Department of Biochemistry and Molecular Biophysics, University of Chicago, Chicago, IL 60637, U.S.A., and FAS Center for Systems Biology, Harvard University, Cambridge, MA 02138, U.S.A
- Hamid Reza Maei
- Electrical Engineering Department, Stanford University, Stanford, CA, U.S.A
- Peter E. Latham
- Gatsby Computational Neuroscience Unit, University College London, London WC1N 3AR, U.K.
49
Lajoie G, Lin KK, Shea-Brown E. Chaos and reliability in balanced spiking networks with temporal drive. Phys Rev E Stat Nonlin Soft Matter Phys 2013; 87:052901. [PMID: 23767592 PMCID: PMC4124755 DOI: 10.1103/physreve.87.052901] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/13/2012] [Revised: 12/21/2012] [Indexed: 06/02/2023]
Abstract
Biological information processing is often carried out by complex networks of interconnected dynamical units. A basic question about such networks is that of reliability: If the same signal is presented many times with the network in different initial states, will the system entrain to the signal in a repeatable way? Reliability is of particular interest in neuroscience, where large, complex networks of excitatory and inhibitory cells are ubiquitous. These networks are known to autonomously produce strongly chaotic dynamics, an obvious threat to reliability. Here, we show that such chaos persists in the presence of weak and strong stimuli, but that even in the presence of chaos, intermittent periods of highly reliable spiking often coexist with unreliable activity. We elucidate the local dynamical mechanisms involved in this intermittent reliability, and investigate the relationship between this phenomenon and certain time-dependent attractors arising from the dynamics. A conclusion is that chaotic dynamics do not have to be an obstacle to precise spike responses, a fact with implications for signal coding in large networks.
Affiliation(s)
- Guillaume Lajoie
- Department of Applied Mathematics, University of Washington, Seattle, Washington 98195, USA
50
Luccioli S, Olmi S, Politi A, Torcini A. Collective dynamics in sparse networks. Phys Rev Lett 2012; 109:138103. [PMID: 23030123 DOI: 10.1103/physrevlett.109.138103] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/24/2012] [Revised: 07/23/2012] [Indexed: 06/01/2023]
Abstract
The microscopic and macroscopic dynamics of random networks is investigated in the strong-dilution limit (i.e., for sparse networks). By simulating chaotic maps, Stuart-Landau oscillators, and leaky integrate-and-fire neurons, we show that a finite connectivity (of the order of a few tens) is able to sustain a nontrivial collective dynamics even in the thermodynamic limit. Although the network structure implies a nonadditive dynamics, the microscopic evolution is extensive (i.e., the number of active degrees of freedom is proportional to the number of network elements).
Affiliation(s)
- Stefano Luccioli
- CNR-Consiglio Nazionale delle Ricerche-Istituto dei Sistemi Complessi, Sesto Fiorentino, Italy