1.
Senk J, Hagen E, van Albada SJ, Diesmann M. Reconciliation of weak pairwise spike-train correlations and highly coherent local field potentials across space. Cereb Cortex 2024; 34:bhae405. PMID: 39462814; PMCID: PMC11513197; DOI: 10.1093/cercor/bhae405.
Abstract
Multi-electrode arrays covering several square millimeters of neural tissue provide simultaneous access to population signals such as extracellular potentials and spiking activity of one hundred or more individual neurons. The interpretation of the recorded data calls for multiscale computational models with corresponding spatial dimensions and signal predictions. Multi-layer spiking neuron network models of local cortical circuits covering about $1\,{\text{mm}^{2}}$ have been developed, integrating experimentally obtained neuron-type-specific connectivity data and reproducing features of observed in-vivo spiking statistics. Local field potentials can be computed from the simulated spiking activity. We here extend a local network and local field potential model to an area of $4\times 4\,{\text{mm}^{2}}$, preserving the neuron density and introducing distance-dependent connection probabilities and conduction delays. We find that the upscaling procedure preserves the overall spiking statistics of the original model and reproduces asynchronous irregular spiking across populations and weak pairwise spike-train correlations in agreement with experimental recordings from sensory cortex. Also compatible with experimental observations, the correlation of local field potential signals is strong and decays over a distance of several hundred micrometers. Enhanced spatial coherence in the low-gamma band around $50\,\text{Hz}$ may explain the recent report of an apparent band-pass filter effect in the spatial reach of the local field potential.
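The upscaling step described above can be sketched in a few lines: neurons are scattered over a 4 x 4 mm sheet and connected with a probability that decays with distance, with conduction delays growing linearly with distance. All parameter values below (p0, lam, the conduction speed v, the neuron count) are illustrative placeholders, not the model's actual numbers:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical parameters -- the actual model uses population-specific
# values and a far higher neuron density; these are placeholders.
L = 4.0          # side length of the sheet in mm
N = 400          # number of neurons in this toy sketch
p0 = 0.1         # connection probability at zero distance
lam = 0.3        # spatial decay constant in mm

# Scatter neurons uniformly over the 4 x 4 mm sheet.
pos = rng.uniform(0.0, L, size=(N, 2))

# Pairwise distances and distance-dependent connection probabilities.
d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
p = p0 * np.exp(-d / lam)
np.fill_diagonal(p, 0.0)          # no self-connections

# Draw the adjacency matrix; conduction delays grow linearly with distance.
A = rng.random((N, N)) < p
v = 0.3                           # assumed conduction speed in mm/ms
delays = 0.5 + d / v              # base synaptic delay plus propagation time

mean_dist_connected = d[A].mean()
mean_dist_all = d[d > 0].mean()
```

The sketch only shows the spatial sampling step; the full model additionally distinguishes layers and neuron types, each with its own connectivity profile.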
Affiliation(s)
- Johanna Senk
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Wilhelm-Johnen-Str., 52428 Jülich, Germany
- Sussex AI, School of Engineering and Informatics, University of Sussex, Chichester, Falmer, Brighton BN1 9QJ, United Kingdom
- Espen Hagen
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Wilhelm-Johnen-Str., 52428 Jülich, Germany
- Centre for Precision Psychiatry, Institute of Clinical Medicine, University of Oslo, and Division of Mental Health and Addiction, Oslo University Hospital, Ullevål Hospital, 0424 Oslo, Norway
- Sacha J van Albada
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Wilhelm-Johnen-Str., 52428 Jülich, Germany
- Institute of Zoology, University of Cologne, Zülpicher Str., 50674 Cologne, Germany
- Markus Diesmann
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Wilhelm-Johnen-Str., 52428 Jülich, Germany
- JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Wilhelm-Johnen-Str., 52428 Jülich, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Pauwelsstr., 52074 Aachen, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Otto-Blumenthal-Str., 52074 Aachen, Germany
2.
Spaeth A, Haussler D, Teodorescu M. Model-agnostic neural mean field with a data-driven transfer function. Neuromorphic Computing and Engineering 2024; 4:034013. PMID: 39310743; PMCID: PMC11413991; DOI: 10.1088/2634-4386/ad787f.
Abstract
As one of the most complex systems known to science, modeling brain behavior and function is both fascinating and extremely difficult. Empirical data is increasingly available from ex vivo human brain organoids and surgical samples, as well as in vivo animal models, so the problem of modeling the behavior of large-scale neuronal systems is more relevant than ever. The statistical physics concept of a mean-field model offers a tractable way to bridge the gap between single-neuron and population-level descriptions of neuronal activity, by modeling the behavior of a single representative neuron and extending this to the population. However, existing neural mean-field methods typically either take the limit of small interaction sizes, or are applicable only to the specific neuron models for which they were derived. This paper derives a mean-field model by fitting a transfer function called Refractory SoftPlus, which is simple yet applicable to a broad variety of neuron types. The transfer function is fitted numerically to simulated spike time data, and is entirely agnostic to the underlying neuronal dynamics. The resulting mean-field model predicts the response of a network of randomly connected neurons to a time-varying external stimulus with a high degree of accuracy. Furthermore, it enables an accurate approximate bifurcation analysis as a function of the level of recurrent input. This model does not assume large presynaptic rates or small postsynaptic potential size, allowing mean-field models to be developed even for populations with large interaction terms.
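The idea of a simulation-fitted transfer function can be illustrated with a small sketch. The functional form below (a softplus rate saturated by a refractory period) is one plausible parameterization, not necessarily the paper's exact Refractory SoftPlus definition, and the "spike data" are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def refractory_softplus(mu, a, b, t_ref):
    """Hypothetical refractory-saturated softplus: rate grows like a
    softplus of the mean input mu and is capped by a refractory period
    t_ref.  The paper's exact parameterization may differ."""
    r = np.logaddexp(0.0, a * mu + b)     # numerically stable softplus
    return r / (1.0 + t_ref * r)          # refractoriness caps the rate

# Synthetic "simulated spike data": rates from a noisy ground-truth curve.
rng = np.random.default_rng(0)
mu = np.linspace(-2.0, 10.0, 50)          # mean input (arbitrary units)
true = refractory_softplus(mu, 1.5, -2.0, 0.002)
rates = true + rng.normal(0.0, 0.05, mu.size)

# Numerical fit, agnostic to whatever neuron model produced the rates.
popt, _ = curve_fit(refractory_softplus, mu, rates, p0=[1.0, 0.0, 0.001])
fit = refractory_softplus(mu, *popt)
rmse = np.sqrt(np.mean((fit - true) ** 2))
```

In the paper's workflow, `rates` would come from single-neuron simulations rather than a synthetic curve, which is what makes the resulting mean-field model agnostic to the underlying dynamics.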
Affiliation(s)
- Alex Spaeth
- Electrical and Computer Engineering Department, University of California, Santa Cruz, Santa Cruz, CA, United States of America
- Genomics Institute, University of California, Santa Cruz, Santa Cruz, CA, United States of America
- David Haussler
- Genomics Institute, University of California, Santa Cruz, Santa Cruz, CA, United States of America
- Biomolecular Engineering Department, University of California, Santa Cruz, Santa Cruz, CA, United States of America
- Mircea Teodorescu
- Electrical and Computer Engineering Department, University of California, Santa Cruz, Santa Cruz, CA, United States of America
- Genomics Institute, University of California, Santa Cruz, Santa Cruz, CA, United States of America
- Biomolecular Engineering Department, University of California, Santa Cruz, Santa Cruz, CA, United States of America
3.
Spaeth A, Haussler D, Teodorescu M. Model-Agnostic Neural Mean Field With The Refractory SoftPlus Transfer Function. bioRxiv [Preprint] 2024:2024.02.05.579047. PMID: 38370695; PMCID: PMC10871173; DOI: 10.1101/2024.02.05.579047.
Abstract
Due to the complexity of neuronal networks and the nonlinear dynamics of individual neurons, it is challenging to develop a systems-level model which is accurate enough to be useful yet tractable enough to apply. Mean-field models which extrapolate from single-neuron descriptions to large-scale models can be derived from the neuron's transfer function, which gives its firing rate as a function of its synaptic input. However, analytically derived transfer functions are applicable only to the neurons and noise models from which they were originally derived. In recent work, approximate transfer functions have been empirically derived by fitting a sigmoidal curve, which imposes a maximum firing rate and applies only in the diffusion limit, restricting applications. In this paper, we propose an approximate transfer function called Refractory SoftPlus, which is simple yet applicable to a broad variety of neuron types. Refractory SoftPlus activation functions allow the derivation of simple empirically approximated mean-field models using simulation results, which enables prediction of the response of a network of randomly connected neurons to a time-varying external stimulus with a high degree of accuracy. These models also support an accurate approximate bifurcation analysis as a function of the level of recurrent input. Finally, the model works without assuming large presynaptic rates or small postsynaptic potential size, allowing mean-field models to be developed even for populations with large interaction terms.
Affiliation(s)
- Alex Spaeth
- Electrical and Computer Engineering Department, University of California, Santa Cruz, Santa Cruz, CA, United States
- Genomics Institute, University of California, Santa Cruz, Santa Cruz, CA, United States
- David Haussler
- Genomics Institute, University of California, Santa Cruz, Santa Cruz, CA, United States
- Biomolecular Engineering Department, University of California, Santa Cruz, Santa Cruz, CA, United States
- Mircea Teodorescu
- Electrical and Computer Engineering Department, University of California, Santa Cruz, Santa Cruz, CA, United States
- Genomics Institute, University of California, Santa Cruz, Santa Cruz, CA, United States
- Biomolecular Engineering Department, University of California, Santa Cruz, Santa Cruz, CA, United States
4.
Skaar JEW, Haug N, Stasik AJ, Einevoll GT, Tøndel K. Metamodelling of a two-population spiking neural network. PLoS Comput Biol 2023; 19:e1011625. PMID: 38032904; PMCID: PMC10688753; DOI: 10.1371/journal.pcbi.1011625.
Abstract
In computational neuroscience, hypotheses are often formulated as bottom-up mechanistic models of the systems in question, consisting of differential equations that can be numerically integrated forward in time. Candidate models can then be validated by comparison against experimental data. The outputs of neural network models depend on neuron parameters, connectivity parameters, and other model inputs. Successful model fitting requires sufficient exploration of the model parameter space, which can be computationally demanding. Additionally, identifying degeneracy in the parameters, i.e. different combinations of parameter values that produce similar outputs, is of interest, as these define the subset of parameter values consistent with the data. In this computational study, we apply metamodels to a two-population recurrent spiking network of point-neurons, the so-called Brunel network. Metamodels are data-driven approximations to more complex models with more desirable computational properties, which can be run considerably faster than the original model. Specifically, we apply and compare two different metamodelling techniques, masked autoregressive flows (MAF) and deep Gaussian process regression (DGPR), to estimate the power spectra of two different signals: the population spiking activities and the local field potential (LFP). We find that the metamodels are able to accurately model the power spectra in the asynchronous irregular regime, and that the DGPR metamodel provides a more accurate representation of the simulator than the MAF metamodel. Using the metamodels, we estimate the posterior probability distributions over parameters given observed simulator outputs, separately for both the LFP and the population spiking activities. We find that these distributions correctly identify parameter combinations that give similar model outputs, and that some parameters are significantly more constrained by observing the LFP than by observing the population spiking activities.
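The metamodelling workflow (sample parameters, run the simulator, fit a fast surrogate, query it) can be sketched with a deliberately simple stand-in: polynomial ridge regression instead of MAF or DGPR, and a toy spectrum generator instead of the Brunel network. All names, ranges, and the "simulator" itself are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "simulator": maps two network parameters (placeholders, not the
# Brunel model's g and eta) to a 20-bin power spectrum whose peak
# location depends on the parameters.
freqs = np.linspace(1.0, 100.0, 20)
def simulator(g, eta):
    peak = 20.0 + 5.0 * g                     # peak frequency shifts with g
    return eta / (1.0 + ((freqs - peak) / 10.0) ** 2)

# Training set: sample the parameter space, run the "simulator".
theta = rng.uniform([2.0, 0.5], [8.0, 4.0], size=(200, 2))
spectra = np.array([simulator(g, eta) for g, eta in theta])

# Metamodel: ridge regression on quadratic polynomial features -- a far
# simpler stand-in for the MAF / DGPR surrogates used in the paper.
g, eta = theta[:, 0], theta[:, 1]
X = np.column_stack([np.ones_like(g), g, eta, g * eta, g**2, eta**2])
W = np.linalg.solve(X.T @ X + 1e-6 * np.eye(6), X.T @ spectra)

# Query the metamodel at a held-out parameter point: this evaluation is
# essentially free compared with rerunning the simulator.
gq, eq = 5.0, 2.0
xq = np.array([1.0, gq, eq, gq * eq, gq**2, eq**2])
pred = xq @ W
err = np.max(np.abs(pred - simulator(gq, eq)))
```

The flow-based and GP-based metamodels of the paper additionally give distributions rather than point predictions, which is what enables the posterior estimation described in the abstract.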
Affiliation(s)
- Jan-Eirik W. Skaar
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Nicolai Haug
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Alexander J. Stasik
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Department of Physics, University of Oslo, Oslo, Norway
- Gaute T. Einevoll
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Department of Physics, University of Oslo, Oslo, Norway
- Kristin Tøndel
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
5.
Layer M, Senk J, Essink S, van Meegen A, Bos H, Helias M. NNMT: Mean-Field Based Analysis Tools for Neuronal Network Models. Front Neuroinform 2022; 16:835657. PMID: 35712677; PMCID: PMC9196133; DOI: 10.3389/fninf.2022.835657.
Abstract
Mean-field theory of neuronal networks has led to numerous advances in our analytical and intuitive understanding of their dynamics during the past decades. In order to make mean-field based analysis tools more accessible, we implemented an extensible, easy-to-use open-source Python toolbox that collects a variety of mean-field methods for the leaky integrate-and-fire neuron model. The Neuronal Network Mean-field Toolbox (NNMT) in its current state allows for estimating properties of large neuronal networks, such as firing rates, power spectra, and dynamical stability in mean-field and linear response approximation, without running simulations. In this article, we describe how the toolbox is implemented, show how it is used to reproduce results of previous studies, and discuss different use cases, such as parameter space explorations or the mapping of different network models. Although the initial version of the toolbox focuses on methods for leaky integrate-and-fire neurons, its structure is designed to be open and extensible. It aims to provide a platform for collecting analytical methods for neuronal network model analysis, such that the neuroscientific community can take maximal advantage of them.
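As a flavor of the kind of quantity such a toolbox computes, the following evaluates the classical diffusion-approximation (Siegert) formula for the stationary firing rate of a leaky integrate-and-fire neuron by direct numerical integration. Parameter values are illustrative, and NNMT's actual API and conventions differ:

```python
import numpy as np
from scipy.special import erf

def lif_rate(mu, sigma, tau_m=0.02, tau_ref=0.002, V_r=10.0, theta=20.0):
    """Stationary firing rate (spikes/s) of a leaky integrate-and-fire
    neuron driven by white noise with mean mu and noise amplitude sigma
    (both in mV), via the classical diffusion-approximation (Siegert)
    formula.  Parameters are illustrative; NNMT exposes this and many
    related mean-field quantities through its own interface."""
    lo = (V_r - mu) / sigma
    hi = (theta - mu) / sigma
    s = np.linspace(lo, hi, 10001)
    integrand = np.exp(s ** 2) * (1.0 + erf(s))
    # Trapezoidal rule over the integration variable s.
    integral = float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(s)) / 2.0)
    return 1.0 / (tau_ref + tau_m * np.sqrt(np.pi) * integral)

r_low = lif_rate(mu=15.0, sigma=5.0)   # subthreshold mean: noise-driven firing
r_high = lif_rate(mu=25.0, sigma=5.0)  # suprathreshold mean: mean-driven firing
```

Evaluating such closed-form expressions takes microseconds, which is the point of mean-field tools: network properties can be scanned over parameter spaces without running a single simulation.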
Affiliation(s)
- Moritz Layer
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
- Johanna Senk
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Simon Essink
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
- Alexander van Meegen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Institute of Zoology, Faculty of Mathematics and Natural Sciences, University of Cologne, Cologne, Germany
- Hannah Bos
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
6.
Knoll G, Lindner B. Information transmission in recurrent networks: Consequences of network noise for synchronous and asynchronous signal encoding. Phys Rev E 2022; 105:044411. PMID: 35590546; DOI: 10.1103/PhysRevE.105.044411.
Abstract
Information about natural time-dependent stimuli encoded by the sensory periphery or communication between cortical networks may span a large frequency range or be localized to a smaller frequency band. Biological systems have been shown to multiplex such disparate broadband and narrow-band signals and then discriminate them in later populations by employing either an integration (low-pass) or coincidence detection (bandpass) encoding strategy. Analytical expressions have been developed for both encoding methods in feedforward populations of uncoupled neurons and confirm that the integration of a population's output low-pass filters the information, whereas synchronous output encodes less information overall and retains signal information in a selected frequency band. The present study extends the theory to recurrent networks and shows that recurrence may sharpen the synchronous bandpass filter. The frequency of the pass band is significantly influenced by the synaptic strengths, especially for inhibition-dominated networks. Synchronous information transfer is also increased when network models take into account heterogeneity that arises from the stochastic distribution of the synaptic weights.
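The contrast between an integration readout and a coincidence readout can be sketched with a toy feedforward population (not the paper's recurrent network): a shared broadband stimulus drives noisy threshold units, and the spectral coherence with the stimulus is compared for the summed population output versus a synchrony-gated output. All sizes, gains, and thresholds below are arbitrary choices:

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(7)

# Illustrative feedforward population sharing a broadband Gaussian stimulus.
fs = 1000.0                        # sampling rate, Hz
T = 20.0                           # duration, s
t = np.arange(0, T, 1 / fs)
stim = rng.normal(0, 1, t.size)    # broadband signal

N = 200
drive = 0.5 * stim[None, :] + rng.normal(0, 1, (N, t.size))
spikes = (drive > 1.5).astype(float)      # crude threshold-crossing "spikes"

# Integration readout: total population activity per time bin.
summed = spikes.sum(axis=0)
# Coincidence readout: flag only bins with unusually many simultaneous spikes.
sync = (summed > summed.mean() + 2 * summed.std()).astype(float)

f, C_int = coherence(stim, summed, fs=fs, nperseg=1024)
f, C_syn = coherence(stim, sync, fs=fs, nperseg=1024)
```

In this memoryless toy the integration readout retains more of the stimulus information overall, in line with the abstract's statement that synchronous output encodes less information; reproducing the band-pass selectivity and its sharpening by recurrence requires the spiking dynamics and network coupling analyzed in the paper.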
Affiliation(s)
- Gregory Knoll
- Bernstein Center for Computational Neuroscience Berlin, Philippstr. 13, Haus 2, 10115 Berlin, Germany
- Physics Department, Humboldt University Berlin, Newtonstr. 15, 12489 Berlin, Germany
- Benjamin Lindner
- Bernstein Center for Computational Neuroscience Berlin, Philippstr. 13, Haus 2, 10115 Berlin, Germany
- Physics Department, Humboldt University Berlin, Newtonstr. 15, 12489 Berlin, Germany
7.
Ter Wal M, Tiesinga PHE. Comprehensive characterization of oscillatory signatures in a model circuit with PV- and SOM-expressing interneurons. Biol Cybern 2021; 115:487-517. PMID: 34628539; PMCID: PMC8551150; DOI: 10.1007/s00422-021-00894-6.
Abstract
Neural circuits contain a wide variety of interneuron types, which differ in their biophysical properties and connectivity patterns. The two most common interneuron types, parvalbumin (PV)-expressing and somatostatin (SOM)-expressing cells, have been shown to be differentially involved in many cognitive functions. These cell types also show different relationships with the power and phase of oscillations in local field potentials. The mechanisms that underlie the emergence of different oscillatory rhythms in neural circuits with more than one interneuron subtype, and the roles specific interneurons play in those mechanisms, are not fully understood. Here, we present a comprehensive analysis of all possible circuit motifs and input regimes that can be achieved in circuits composed of excitatory cells, PV-like fast-spiking interneurons, and SOM-like low-threshold spiking interneurons. We identify 18 unique motifs and simulate their dynamics over a range of input strengths. Using several characteristics, such as oscillation frequency, firing rates, phase of firing, and burst fraction, we cluster the resulting circuit dynamics across motifs in order to identify patterns of activity and compare these patterns to behaviors that were generated in circuits with one interneuron type. In addition to the well-known PING and ING gamma oscillations and an asynchronous state, our analysis identified three oscillatory behaviors that were generated by the three-cell-type motifs only: theta-nested gamma oscillations, stable beta oscillations, and theta-locked bursting behavior, which have also been observed in experiments. Our characterization provides a map to interpret experimental activity patterns and suggests pharmacological manipulations or optogenetics approaches to validate these conclusions.
Affiliation(s)
- Marije Ter Wal
- Department of Neuroinformatics, Donders Institute, Radboud University, Heyendaalseweg 135, 6525 AJ, Nijmegen, The Netherlands.
- School of Psychology, University of Birmingham, Edgbaston, B15 2TT, UK.
- Paul H E Tiesinga
- Department of Neuroinformatics, Donders Institute, Radboud University, Heyendaalseweg 135, 6525 AJ, Nijmegen, The Netherlands
8.
Romaro C, Najman FA, Lytton WW, Roque AC, Dura-Bernal S. NetPyNE Implementation and Scaling of the Potjans-Diesmann Cortical Microcircuit Model. Neural Comput 2021; 33:1993-2032. PMID: 34411272; PMCID: PMC8382011; DOI: 10.1162/neco_a_01400.
Abstract
The Potjans-Diesmann cortical microcircuit model is a widely used model originally implemented in NEST. Here, we reimplemented the model using NetPyNE, a high-level Python interface to the NEURON simulator, and reproduced the findings of the original publication. We also implemented a method for scaling the network size that preserves first- and second-order statistics, building on existing work on network theory. Our new implementation enabled the use of more detailed neuron models with multicompartmental morphologies and multiple biophysically realistic ion channels. This opens the model to new research, including the study of dendritic processing, the influence of individual channel parameters, the relation to local field potentials, and other multiscale interactions. The scaling method we used provides flexibility to increase or decrease the network size as needed when running these CPU-intensive detailed simulations. Finally, NetPyNE facilitates modifying or extending the model using its declarative language; optimizing model parameters; running efficient, large-scale parallelized simulations; and analyzing the model through built-in methods, including local field potential calculation and information flow measures.
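A common scaling rule in this line of network theory reduces indegrees by a factor k, rescales synaptic weights by 1/sqrt(k) to preserve the input variance, and adds a compensating DC drive to restore the mean input. The sketch below illustrates that rule with made-up numbers; the paper's exact procedure may differ in detail:

```python
import numpy as np

def scale_network(K, J, nu, k, tau_m=0.01):
    """Scale indegrees K by factor k while approximately preserving the
    first- and second-order input statistics of a population.  The input
    variance scales as K * J**2 * nu, so J' = J / sqrt(k) keeps it fixed;
    the mean input (~ K * J * nu) then changes, and a DC term makes up
    the difference.  One textbook scaling rule, not necessarily the
    paper's exact implementation."""
    K_s = k * K
    J_s = J / np.sqrt(k)
    mu_full = tau_m * K * J * nu        # mean input of the full network
    mu_scaled = tau_m * K_s * J_s * nu  # mean input after scaling
    I_dc = mu_full - mu_scaled          # compensating DC drive
    return K_s, J_s, I_dc

# Illustrative values: indegree, weight (mV), presynaptic rate (Hz).
K, J, nu = 1000, 0.1, 8.0
K_s, J_s, I_dc = scale_network(K, J, nu, k=0.1)

var_full = K * J**2 * nu
var_scaled = K_s * J_s**2 * nu
```

Downscaling by k = 0.1 here cuts the synapse count tenfold while leaving the input variance untouched, which is what makes CPU-intensive multicompartment simulations of the scaled circuit feasible.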
Affiliation(s)
- Cecilia Romaro
- Department of Physics, School of Philosophy, Sciences and Letters of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP 14049, Brazil
- Fernando Araujo Najman
- Institute of Mathematics and Statistics, University of São Paulo, São Paulo, SP 05508, Brazil
- William W Lytton
- Department of Physiology and Pharmacology, State University of New York Downstate Health Sciences University, New York, NY 11203, U.S.A.
- Antonio C Roque
- Department of Physics, School of Philosophy, Sciences and Letters of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP 14049, Brazil
- Salvador Dura-Bernal
- Department of Physiology and Pharmacology, State University of New York Downstate Health Sciences University, New York, NY 11203, U.S.A., and Nathan Kline Institute for Psychiatric Research, New York, NY 10962, U.S.A.
9.
An Integrate-and-Fire Spiking Neural Network Model Simulating Artificially Induced Cortical Plasticity. eNeuro 2021; 8:ENEURO.0333-20.2021. PMID: 33632810; PMCID: PMC7986529; DOI: 10.1523/ENEURO.0333-20.2021.
Abstract
We describe an integrate-and-fire (IF) spiking neural network that incorporates spike-timing-dependent plasticity (STDP) and simulates the experimental outcomes of four different conditioning protocols that produce cortical plasticity. The original conditioning experiments were performed in freely moving non-human primates (NHPs) with an autonomous head-fixed bidirectional brain-computer interface (BCI). Three protocols involved closed-loop stimulation triggered from (1) spike activity of single cortical neurons, (2) electromyographic (EMG) activity from forearm muscles, and (3) cycles of spontaneous cortical beta activity. A fourth protocol involved open-loop delivery of pairs of stimuli at neighboring cortical sites. The IF network that replicates the experimental results consists of 360 units with simulated membrane potentials produced by synaptic inputs and triggering a spike when reaching threshold. The 240 cortical units produce either excitatory or inhibitory postsynaptic potentials (PSPs) in their target units. In addition to the experimentally observed conditioning effects, the model also allows computation of underlying network behavior not originally documented. Furthermore, the model makes predictions about outcomes from protocols not yet investigated, including spike-triggered inhibition, γ-triggered stimulation and disynaptic conditioning. The success of the simulations suggests that a simple voltage-based IF model incorporating STDP can capture the essential mechanisms mediating targeted plasticity with closed-loop stimulation.
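The conditioning logic rests on pair-based STDP: pre-before-post spike pairings strengthen a synapse, while the reverse timing weakens it. A minimal sketch with illustrative (not fitted) amplitudes and time constants:

```python
import numpy as np

# Pair-based STDP rule of the kind used in such conditioning models;
# amplitudes and time constants are illustrative, not the paper's values.
A_plus, A_minus = 0.01, 0.012       # potentiation / depression amplitudes
tau_plus, tau_minus = 0.020, 0.020  # STDP time constants (s)

def stdp_dw(dt):
    """Weight change for one pre/post spike pair, dt = t_post - t_pre (s).
    Pre-before-post (dt >= 0) potentiates; the reverse timing depresses."""
    if dt >= 0:
        return A_plus * np.exp(-dt / tau_plus)
    return -A_minus * np.exp(dt / tau_minus)

def condition(w0, dt, pairings=100, w_max=2.0):
    """Apply repeated spike pairings with fixed timing, clipping the
    weight to [0, w_max] as in most bounded STDP implementations."""
    w = w0
    for _ in range(pairings):
        w = min(max(w + stdp_dw(dt), 0.0), w_max)
    return w

# Spike-triggered stimulation makes the post fire 5 ms after the pre:
w_potentiated = condition(0.5, +0.005)
# Reversed timing (post leads pre by 5 ms) depresses the same synapse:
w_depressed = condition(0.5, -0.005)
```

In the closed-loop protocols the spike timings are set by the recorded activity and the stimulation, rather than being fixed as in this sketch, but the per-pair update has this general shape.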
10.
Correlation Transfer by Layer 5 Cortical Neurons Under Recreated Synaptic Inputs In Vitro. J Neurosci 2019; 39:7648-7663. PMID: 31346031; DOI: 10.1523/JNEUROSCI.3169-18.2019.
Abstract
Correlated electrical activity in neurons is a prominent characteristic of cortical microcircuits. Despite a growing amount of evidence concerning both spike-count and subthreshold membrane potential pairwise correlations, little is known about how different types of cortical neurons convert correlated inputs into correlated outputs. We studied pyramidal neurons and two classes of GABAergic interneurons of layer 5 in neocortical brain slices obtained from rats of both sexes, and we stimulated them with biophysically realistic correlated inputs, generated using dynamic clamp. We found that the physiological differences between cell types manifested unique features in their capacity to transfer correlated inputs. We used linear response theory and computational modeling to gain clear insights into how cellular properties determine both the gain and timescale of correlation transfer, thus tying single-cell features with network interactions. Our results provide further ground for the functionally distinct roles played by various types of neuronal cells in the cortical microcircuit.

Significance Statement: No matter how we probe the brain, we find correlated neuronal activity over a variety of spatial and temporal scales. For the cerebral cortex, significant evidence has accumulated on trial-to-trial covariability in synaptic input activation, subthreshold membrane potential fluctuations, and output spike trains. Although we do not yet fully understand their origin, and whether they are detrimental or beneficial for information processing, we believe that clarifying how correlations emerge is pivotal for understanding large-scale neuronal network dynamics and computation. Here, we report quantitative differences between excitatory and inhibitory cells as they relay input correlations into output correlations. We explain this heterogeneity by simple biophysical models and provide the most experimentally validated test of a theory for the emergence of correlations.
11.
Dumont G, Gutkin B. Macroscopic phase resetting-curves determine oscillatory coherence and signal transfer in inter-coupled neural circuits. PLoS Comput Biol 2019; 15:e1007019. PMID: 31071085; PMCID: PMC6529019; DOI: 10.1371/journal.pcbi.1007019.
Abstract
Macroscopic oscillations of different brain regions show multiple phase relationships that are persistent across time and have been implicated in routing information. While multiple cellular mechanisms influence the network oscillatory dynamics and structure the macroscopic firing motifs, one of the key questions is to identify the biophysical neuronal and synaptic properties that permit such motifs to arise. A second important issue is how the different neural activity coherence states determine the communication between the neural circuits. Here we analyse the emergence of phase-locking within bidirectionally delayed-coupled spiking circuits in which global gamma band oscillations arise from synaptic coupling among largely excitable neurons. We consider both the interneuronal (ING) and the pyramidal-interneuronal (PING) population gamma rhythms and the inter coupling targeting the pyramidal or the inhibitory neurons. Using a mean-field approach together with an exact reduction method, we reduce each spiking network to a low dimensional nonlinear system and derive the macroscopic phase resetting-curves (mPRCs) that determine how the phase of the global oscillation responds to incoming perturbations. This is made possible by the use of the quadratic integrate-and-fire model together with a Lorentzian distribution of the bias current. Depending on the type of gamma (PING vs. ING), we show that incoming excitatory inputs can either speed up the macroscopic oscillation (phase advance; type I PRC) or induce both a phase advance and a delay (type II PRC). From there we determine the structure of macroscopic coherence states (phase-locking) of two weakly synaptically-coupled networks. To do so we derive a phase equation for the coupled system which links the synaptic mechanisms to the coherence states of the system. We show that a synaptic transmission delay is a necessary condition for symmetry breaking, i.e. 
a non-symmetric phase lag between the macroscopic oscillations. This potentially provides an explanation to the experimentally observed variety of gamma phase-locking modes. Our analysis further shows that symmetry-broken coherence states can lead to a preferred direction of signal transfer between the oscillatory networks where this directionality also depends on the timing of the signal. Hence we suggest a causal theory for oscillatory modulation of functional connectivity between cortical circuits. Large scale brain oscillations emerge from synaptic interactions within neuronal circuits. Over the past years, such macroscopic rhythms have been suggested to play a crucial role in routing the flow of information across cortical regions, resulting in a functional connectome. The underlying mechanism is cortical oscillations that bind together following a well-known motif called phase-locking. While there is significant experimental support for multiple phase-locking modes in the brain, it is still unclear what is the underlying mechanism that permits macroscopic rhythms to phase lock. In the present paper we take up with this issue, and to show that, one can study the emergent macroscopic phase-locking within the mathematical framework of weakly coupled oscillators. We find that under synaptic delays, fully symmetrically coupled networks can display symmetry-broken states of activity, where one network starts to lead in phase the second (also sometimes known as stuttering states). When we analyse how incoming transient signals affect the coupled system, we find that in the symmetry-broken state, the effect depends strongly on which network is targeted (the leader or the follower) as well as the timing of the input. Hence we show how the dynamics of the emergent phase-locked activity imposes a functional directionality on how signals are processed. 
We thus offer clarification on the synaptic and circuit properties responsible for the emergence of multiple phase-locking patterns and provide support for their functional implications in information transfer.
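The exact reduction the abstract refers to can be made concrete. For a network of quadratic integrate-and-fire neurons whose bias currents follow a Lorentzian distribution, the population dynamics collapse exactly onto two ODEs for the firing rate r and the mean membrane potential v. The sketch below is our own illustration of that reduced system; the parameter values (DELTA, ETA, J) are illustrative assumptions, not taken from the paper:

```python
import math

# Macroscopic (mean-field) equations for a QIF network with Lorentzian
# bias currents of half-width DELTA centred at ETA and recurrent weight J:
#   dr/dt = DELTA/pi + 2*r*v
#   dv/dt = v**2 + ETA - (pi*r)**2 + J*r + i_ext
DELTA = 1.0   # half-width of the bias-current distribution (assumed)
ETA = 5.0     # centre of the distribution (assumed)
J = 15.0      # recurrent synaptic weight (assumed)
DT = 1e-4     # Euler time step

def step(r, v, i_ext=0.0):
    """Advance the two macroscopic variables by one Euler step."""
    dr = DELTA / math.pi + 2.0 * r * v
    dv = v * v + ETA - (math.pi * r) ** 2 + J * r + i_ext
    return r + DT * dr, v + DT * dv

r, v = 0.1, -2.0
rates = []
for _ in range(50_000):   # 5 units of model time
    r, v = step(r, v)
    rates.append(r)
```

Transiently perturbing `i_ext` and recording the induced shift of the oscillation phase is the numerical counterpart of the macroscopic phase-resetting curves the authors derive analytically.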
Affiliation(s)
- Grégory Dumont
- Group for Neural Theory, LNC INSERM U960, DEC, Ecole Normale Supérieure PSL* University, Paris, France
- Boris Gutkin
- Group for Neural Theory, LNC INSERM U960, DEC, Ecole Normale Supérieure PSL* University, Paris, France
- Center for Cognition and Decision Making, Institute for Cognitive Neuroscience, NRU Higher School of Economics, Moscow, Russia

12
Einevoll GT, Destexhe A, Diesmann M, Grün S, Jirsa V, de Kamps M, Migliore M, Ness TV, Plesser HE, Schürmann F. The Scientific Case for Brain Simulations. Neuron 2019; 102:735-744. [DOI: 10.1016/j.neuron.2019.03.027]

13
Krauss P, Schuster M, Dietrich V, Schilling A, Schulze H, Metzner C. Weight statistics controls dynamics in recurrent neural networks. PLoS One 2019; 14:e0214541. [PMID: 30964879] [PMCID: PMC6456246] [DOI: 10.1371/journal.pone.0214541]
Abstract
Recurrent neural networks are complex non-linear systems, capable of ongoing activity in the absence of driving inputs. The dynamical properties of these systems, in particular their long-time attractor states, are determined on the microscopic level by the connection strengths wij between the individual neurons. However, little is known about the extent to which network dynamics is tunable on a more coarse-grained level by the statistical features of the weight matrix. In this work, we investigate the dynamics of recurrent networks of Boltzmann neurons. In particular we study the impact of three statistical parameters: density (the fraction of non-zero connections), balance (the ratio of excitatory to inhibitory connections), and symmetry (the fraction of neuron pairs with wij = wji). By computing a 'phase diagram' of network dynamics, we find that balance is the essential control parameter: its gradual increase from negative to positive values drives the system from oscillatory behavior into a chaotic regime, and eventually into stationary fixed points. Only directly at the border of the chaotic regime do the neural networks display rich but regular dynamics, thus enabling actual information processing. These results suggest that the brain, too, is fine-tuned to the 'edge of chaos' by assuring a proper balance between excitatory and inhibitory neural connections.
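The three control parameters can be illustrated with a toy weight-matrix generator. This is our own sketch under stated assumptions, not the paper's construction: `balance` in [-1, 1] is mapped to an excitatory fraction (1 + balance)/2, weights are ±1, and `symmetry` is the probability that a connected pair is mirrored:

```python
import random

def make_weights(n, density, balance, symmetry, seed=0):
    """Draw an n x n weight matrix with three coarse statistics:
    density  - fraction of potential connections that are non-zero,
    balance  - in [-1, 1]; mapped here to excitatory fraction (1+balance)/2,
    symmetry - probability that a pair (i, j) satisfies w[i][j] == w[j][i].
    """
    rng = random.Random(seed)
    exc = (1.0 + balance) / 2.0          # fraction of +1 (excitatory) weights
    w = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):        # iterate over unordered pairs
            if rng.random() < density:
                w[i][j] = 1 if rng.random() < exc else -1
            if rng.random() < symmetry:
                w[j][i] = w[i][j]        # mirrored (symmetric) pair
            elif rng.random() < density:
                w[j][i] = 1 if rng.random() < exc else -1
    return w

w = make_weights(100, density=0.5, balance=0.0, symmetry=1.0)
```

Sweeping `balance` from -1 to 1 while simulating the resulting network is how a 'phase diagram' in the paper's sense would be traced.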
Affiliation(s)
- Patrick Krauss
- Cognitive Computational Neuroscience Group at the Chair of English Philology and Linguistics, Department of English and American Studies, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
- Experimental Otolaryngology, Neuroscience Group, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
- Marc Schuster
- Experimental Otolaryngology, Neuroscience Group, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
- Verena Dietrich
- Experimental Otolaryngology, Neuroscience Group, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
- Achim Schilling
- Cognitive Computational Neuroscience Group at the Chair of English Philology and Linguistics, Department of English and American Studies, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
- Experimental Otolaryngology, Neuroscience Group, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
- Holger Schulze
- Experimental Otolaryngology, Neuroscience Group, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
- Claus Metzner
- Experimental Otolaryngology, Neuroscience Group, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
- Biophysics Group, Department of Physics, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany

14
di Volo M, Romagnoni A, Capone C, Destexhe A. Biologically Realistic Mean-Field Models of Conductance-Based Networks of Spiking Neurons with Adaptation. Neural Comput 2019; 31:653-680. [DOI: 10.1162/neco_a_01173]
Abstract
Accurate population models are needed to build very large-scale neural models, but their derivation is difficult for realistic networks of neurons, in particular when nonlinear properties are involved, such as conductance-based interactions and spike-frequency adaptation. Here, we consider such models based on networks of adaptive exponential integrate-and-fire excitatory and inhibitory neurons. Using a master equation formalism, we derive a mean-field model of such networks and compare it to the full network dynamics. The mean-field model is capable of correctly predicting the average spontaneous activity levels in asynchronous irregular regimes similar to in vivo activity. It also captures the transient temporal response of the network to complex external inputs. Finally, the mean-field model is also able to quantitatively describe regimes where high- and low-activity states alternate (up-down state dynamics), leading to slow oscillations. We conclude that such mean-field models are biologically realistic in the sense that they can capture both spontaneous and evoked activity, and they naturally appear as candidates to build very large-scale models involving multiple brain areas.
Affiliation(s)
- Matteo di Volo
- Unité de Neuroscience, Information et Complexité, CNRS FRE 3693, 91198 Gif sur Yvette, France
- Alberto Romagnoni
- Centre de Recherche sur l'inflammation UMR 1149, Inserm-Université Paris Diderot, 75018 Paris, France, and Data Team, Departement d'informatique de l'Ecole normale supérieure, CNRS, PSL Research University, 75005 Paris, France, and European Institute for Theoretical Neuroscience, 75012 Paris, France
- Cristiano Capone
- European Institute for Theoretical Neuroscience, 75012 Paris, France, and INFN Sezione di Roma, Rome 00185, Italy
- Alain Destexhe
- Unité de Neuroscience, Information et Complexité, CNRS FRE 3693, 91198 Gif sur Yvette, France, and European Institute for Theoretical Neuroscience, 75012 Paris, France

15
Gutzen R, von Papen M, Trensch G, Quaglio P, Grün S, Denker M. Reproducible Neural Network Simulations: Statistical Methods for Model Validation on the Level of Network Activity Data. Front Neuroinform 2018; 12:90. [PMID: 30618696] [PMCID: PMC6305903] [DOI: 10.3389/fninf.2018.00090]
Abstract
Computational neuroscience relies on simulations of neural network models to bridge the gap between the theory of neural networks and the experimentally observed activity dynamics in the brain. The rigorous validation of simulation results against reference data is thus an indispensable part of any simulation workflow. Moreover, the availability of different simulation environments and levels of model description also requires validation of model implementations against each other to evaluate their equivalence. Despite rapid advances in the formalized description of models, data, and analysis workflows, there is no accepted consensus regarding the terminology and practical implementation of validation workflows in the context of neural simulations. This situation prevents the generic, unbiased comparison between published models, which is a key element of enhancing reproducibility of computational research in neuroscience. In this study, we argue for the establishment of standardized statistical test metrics that enable the quantitative validation of network models on the level of the population dynamics. Despite the importance of validating the elementary components of a simulation, such as single cell dynamics, building networks from validated building blocks does not entail the validity of the simulation on the network scale. Therefore, we introduce a corresponding set of validation tests and present an example workflow that practically demonstrates the iterative model validation of a spiking neural network model against its reproduction on the SpiNNaker neuromorphic hardware system. We formally implement the workflow using a generic Python library that we introduce for validation tests on neural network activity data. Together with the companion study (Trensch et al., 2018), the work presents a consistent definition, formalization, and implementation of the verification and validation process for neural network simulations.
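As a minimal illustration of such a population-level test statistic (our own sketch, not the Python library the authors introduce), one can compare the per-neuron firing-rate distributions of a reference simulation and its reproduction with a two-sample Kolmogorov-Smirnov distance; the rate values below are hypothetical:

```python
import bisect

def ks_distance(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov distance: the maximum absolute
    difference between the two empirical cumulative distributions."""
    xs, ys = sorted(sample_a), sorted(sample_b)
    d = 0.0
    for v in xs + ys:                    # ECDFs can only jump at sample points
        fx = bisect.bisect_right(xs, v) / len(xs)
        fy = bisect.bisect_right(ys, v) / len(ys)
        d = max(d, abs(fx - fy))
    return d

# Hypothetical per-neuron firing rates (spikes/s) from two simulations
rates_reference = [2.1, 3.4, 0.8, 5.0, 1.2, 2.7]
rates_reproduction = [2.0, 3.6, 0.9, 4.8, 1.1, 2.9]
d = ks_distance(rates_reference, rates_reproduction)
```

A validation workflow in the spirit of the paper would pre-register an acceptance threshold on such distances: identical samples give a distance of 0.0, fully disjoint samples a distance of 1.0.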
Affiliation(s)
- Robin Gutzen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
- Michael von Papen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Guido Trensch
- Simulation Lab Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, JARA, Jülich Research Centre, Jülich, Germany
- Pietro Quaglio
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
- Sonja Grün
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
- Michael Denker
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany

16
Arkhipov A, Gouwens NW, Billeh YN, Gratiy S, Iyer R, Wei Z, Xu Z, Abbasi-Asl R, Berg J, Buice M, Cain N, da Costa N, de Vries S, Denman D, Durand S, Feng D, Jarsky T, Lecoq J, Lee B, Li L, Mihalas S, Ocker GK, Olsen SR, Reid RC, Soler-Llavina G, Sorensen SA, Wang Q, Waters J, Scanziani M, Koch C. Visual physiology of the layer 4 cortical circuit in silico. PLoS Comput Biol 2018; 14:e1006535. [PMID: 30419013] [PMCID: PMC6258373] [DOI: 10.1371/journal.pcbi.1006535]
Abstract
Despite advances in experimental techniques and accumulation of large datasets concerning the composition and properties of the cortex, quantitative modeling of cortical circuits under in-vivo-like conditions remains challenging. Here we report and publicly release a biophysically detailed circuit model of layer 4 in the mouse primary visual cortex, receiving thalamo-cortical visual inputs. The 45,000-neuron model was subjected to a battery of visual stimuli, and results were compared to published work and new in vivo experiments. Simulations reproduced a variety of observations, including effects of optogenetic perturbations. Critical to the agreement between responses in silico and in vivo were the rules of functional synaptic connectivity between neurons. Interestingly, after extreme simplification the model still performed satisfactorily on many measurements, although quantitative agreement with experiments suffered. These results emphasize the importance of functional rules of cortical wiring and enable a next generation of data-driven models of in vivo neural activity and computations. How can we capture the incredible complexity of brain circuits in quantitative models, and what can such models teach us about mechanisms underlying brain activity? To answer these questions, we set out to build extensive, bio-realistic models of brain circuitry by employing systematic datasets on brain structure and function. Here we report the first modeling results of this project, focusing on layer 4 of the primary visual cortex (V1) of the mouse. Our simulations reproduced a variety of experimental observations in response to a large battery of visual stimuli. The results elucidated circuit mechanisms determining patterns of neuronal activity in layer 4, in particular the roles of feedforward thalamic inputs and specific patterns of intracortical connectivity in producing tuning of neuronal responses to the orientation of motion.
Simplification of neuronal models led to specific deficiencies in reproducing experimental data, giving insights into how biological details contribute to various aspects of brain activity. To enable future development of more sophisticated models, we make the software code, the model, and simulation results publicly available.
Affiliation(s)
- Anton Arkhipov
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Nathan W Gouwens
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Yazan N Billeh
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Sergey Gratiy
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Ramakrishnan Iyer
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Ziqiang Wei
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, United States of America
- Zihao Xu
- University of California San Diego, La Jolla, CA, United States of America
- Reza Abbasi-Asl
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Jim Berg
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Michael Buice
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Nicholas Cain
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Nuno da Costa
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Saskia de Vries
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Daniel Denman
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Severine Durand
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- David Feng
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Tim Jarsky
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Jérôme Lecoq
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Brian Lee
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Lu Li
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Stefan Mihalas
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Gabriel K Ocker
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Shawn R Olsen
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- R Clay Reid
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Staci A Sorensen
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Quanxin Wang
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Jack Waters
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Massimo Scanziani
- Howard Hughes Medical Institute and Department of Physiology, University of California San Francisco, San Francisco, California, United States of America
- Christof Koch
- Allen Institute for Brain Science, Seattle, Washington, United States of America

17
Schmidt M, Bakker R, Shen K, Bezgin G, Diesmann M, van Albada SJ. A multi-scale layer-resolved spiking network model of resting-state dynamics in macaque visual cortical areas. PLoS Comput Biol 2018; 14:e1006359. [PMID: 30335761] [PMCID: PMC6193609] [DOI: 10.1371/journal.pcbi.1006359]
Abstract
Cortical activity has distinct features across scales, from the spiking statistics of individual cells to global resting-state networks. We here describe the first full-density multi-area spiking network model of cortex, using macaque visual cortex as a test system. The model represents each area by a microcircuit with area-specific architecture and features layer- and population-resolved connectivity between areas. Simulations reveal a structured asynchronous irregular ground state. In a metastable regime, the network reproduces spiking statistics from electrophysiological recordings and cortico-cortical interaction patterns in fMRI functional connectivity under resting-state conditions. Stable inter-area propagation is supported by cortico-cortical synapses that are moderately strong onto excitatory neurons and stronger onto inhibitory neurons. Causal interactions depend on both cortical structure and the dynamical state of populations. Activity propagates mainly in the feedback direction, similar to experimental results associated with visual imagery and sleep. The model unifies local and large-scale accounts of cortex, and clarifies how the detailed connectivity of cortex shapes its dynamics on multiple scales. Based on our simulations, we hypothesize that in the spontaneous condition the brain operates in a metastable regime where cortico-cortical projections target excitatory and inhibitory populations in a balanced manner that produces substantial inter-area interactions while maintaining global stability. The mammalian cortex fulfills its complex tasks by operating on multiple temporal and spatial scales from single cells to entire areas comprising millions of cells. These multi-scale dynamics are supported by specific network structures at all levels of organization. Since models of cortex hitherto tend to concentrate on a single scale, little is known about how cortical structure shapes the multi-scale dynamics of the network. 
We here present dynamical simulations of a multi-area network model at neuronal and synaptic resolution with population-specific connectivity based on extensive experimental data which accounts for a wide range of dynamical phenomena. Our model elucidates relationships between local and global scales in cortex and provides a platform for future studies of cortical function.
Affiliation(s)
- Maximilian Schmidt
- Laboratory for Neural Coding and Brain Computing, RIKEN Center for Brain Science, Wako-Shi, Saitama, Japan
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Rembrandt Bakker
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen, Nijmegen, Netherlands
- Kelly Shen
- Rotman Research Institute, Baycrest, Toronto, Ontario, Canada
- Gleb Bezgin
- McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal, Canada
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Department of Physics, RWTH Aachen University, Aachen, Germany
- Sacha Jennifer van Albada
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany

18
Nowke C, Diaz-Pier S, Weyers B, Hentschel B, Morrison A, Kuhlen TW, Peyser A. Toward Rigorous Parameterization of Underconstrained Neural Network Models Through Interactive Visualization and Steering of Connectivity Generation. Front Neuroinform 2018; 12:32. [PMID: 29937723] [PMCID: PMC5992991] [DOI: 10.3389/fninf.2018.00032]
Abstract
Simulation models in many scientific fields can have non-unique solutions or unique solutions which can be difficult to find. Moreover, in evolving systems, unique final state solutions can be reached by multiple different trajectories. Neuroscience is no exception. Often, neural network models are subject to parameter fitting to obtain desirable output comparable to experimental data. Parameter fitting without sufficient constraints and a systematic exploration of the possible solution space can lead to conclusions valid only around local minima or around non-minima. To address this issue, we have developed an interactive tool for visualizing and steering parameters in neural network simulation models. In this work, we focus particularly on connectivity generation, since finding suitable connectivity configurations for neural network models constitutes a complex parameter search scenario. The development of the tool has been guided by several use cases: the tool allows researchers to steer the parameters of the connectivity generation during the simulation, thus quickly growing networks composed of multiple populations with a targeted mean activity. The flexibility of the software allows scientists to explore other connectivity and neuron variables apart from the ones presented as use cases. With this tool, we enable an interactive exploration of parameter spaces, foster a better understanding of neural network models, and grapple with the crucial problem of non-unique network solutions and trajectories. In addition, we observe a reduction in turnaround times for the assessment of these models, due to interactive visualization while the simulation is running.
Affiliation(s)
- Christian Nowke
- Visual Computing Institute, RWTH Aachen University, JARA-HPC, Aachen, Germany
- Sandra Diaz-Pier
- SimLab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Benjamin Weyers
- Visual Computing Institute, RWTH Aachen University, JARA-HPC, Aachen, Germany
- Bernd Hentschel
- Visual Computing Institute, RWTH Aachen University, JARA-HPC, Aachen, Germany
- Abigail Morrison
- SimLab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Institute of Neuroscience and Medicine, Institute for Advanced Simulation, JARA Institute Brain Structure-Function Relationships, Forschungszentrum Jülich GmbH, Jülich, Germany
- Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr-University Bochum, Bochum, Germany
- Torsten W Kuhlen
- Visual Computing Institute, RWTH Aachen University, JARA-HPC, Aachen, Germany
- Alexander Peyser
- SimLab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany

19
Kass RE, Amari SI, Arai K, Brown EN, Diekman CO, Diesmann M, Doiron B, Eden UT, Fairhall AL, Fiddyment GM, Fukai T, Grün S, Harrison MT, Helias M, Nakahara H, Teramae JN, Thomas PJ, Reimers M, Rodu J, Rotstein HG, Shea-Brown E, Shimazaki H, Shinomoto S, Yu BM, Kramer MA. Computational Neuroscience: Mathematical and Statistical Perspectives. Annu Rev Stat Appl 2018; 5:183-214. [PMID: 30976604] [PMCID: PMC6454918] [DOI: 10.1146/annurev-statistics-041715-033733]
Abstract
Mathematical and statistical models have played important roles in neuroscience, especially by describing the electrical activity of neurons recorded individually, or collectively across large networks. As the field moves forward rapidly, new challenges are emerging. For maximal effectiveness, those working to advance computational neuroscience will need to appreciate and exploit the complementary strengths of mechanistic theory and the statistical paradigm.
Affiliation(s)
- Robert E Kass
- Carnegie Mellon University, Pittsburgh, PA, USA, 15213
- Shun-Ichi Amari
- RIKEN Brain Science Institute, Wako, Saitama Prefecture, Japan, 351-0198
- Emery N Brown
- Massachusetts Institute of Technology, Cambridge, MA, USA, 02139
- Harvard Medical School, Boston, MA, USA, 02115
- Markus Diesmann
- Jülich Research Centre, Jülich, Germany, 52428
- RWTH Aachen University, Aachen, Germany, 52062
- Brent Doiron
- University of Pittsburgh, Pittsburgh, PA, USA, 15260
- Uri T Eden
- Boston University, Boston, MA, USA, 02215
- Tomoki Fukai
- RIKEN Brain Science Institute, Wako, Saitama Prefecture, Japan, 351-0198
- Sonja Grün
- Jülich Research Centre, Jülich, Germany, 52428
- RWTH Aachen University, Aachen, Germany, 52062
- Moritz Helias
- Jülich Research Centre, Jülich, Germany, 52428
- RWTH Aachen University, Aachen, Germany, 52062
- Hiroyuki Nakahara
- RIKEN Brain Science Institute, Wako, Saitama Prefecture, Japan, 351-0198
- Peter J Thomas
- Case Western Reserve University, Cleveland, OH, USA, 44106
- Mark Reimers
- Michigan State University, East Lansing, MI, USA, 48824
- Jordan Rodu
- Carnegie Mellon University, Pittsburgh, PA, USA, 15213
- Hideaki Shimazaki
- Honda Research Institute Japan, Wako, Saitama Prefecture, Japan, 351-0188
- Kyoto University, Kyoto, Kyoto Prefecture, Japan, 606-8502
- Byron M Yu
- Carnegie Mellon University, Pittsburgh, PA, USA, 15213

20
Völker M, Fiederer LDJ, Berberich S, Hammer J, Behncke J, Kršek P, Tomášek M, Marusič P, Reinacher PC, Coenen VA, Helias M, Schulze-Bonhage A, Burgard W, Ball T. The dynamics of error processing in the human brain as reflected by high-gamma activity in noninvasive and intracranial EEG. Neuroimage 2018; 173:564-579. [PMID: 29471099] [DOI: 10.1016/j.neuroimage.2018.01.059]
Abstract
Error detection in motor behavior is a fundamental cognitive function heavily relying on local cortical information processing. Neural activity in the high-gamma frequency band (HGB) closely reflects such local cortical processing, but little is known about its role in error processing, particularly in the healthy human brain. Here we characterize the error-related response of the human brain based on data obtained with noninvasive EEG optimized for HGB mapping in 31 healthy subjects (15 females, 16 males), and additional intracranial EEG data from 9 epilepsy patients (4 females, 5 males). Our findings reveal a multiscale picture of the global and local dynamics of error-related HGB activity in the human brain. On the global level as reflected in the noninvasive EEG, the error-related response started with an early component dominated by anterior brain regions, followed by a shift to parietal regions, and a subsequent phase characterized by sustained parietal HGB activity. This phase lasted for more than 1 s after the error onset. On the local level reflected in the intracranial EEG, a cascade of both transient and sustained error-related responses involved an even more extended network, spanning beyond frontal and parietal regions to the insula and the hippocampus. HGB mapping appeared especially well suited to investigate late, sustained components of the error response, possibly linked to downstream functional stages such as error-related learning and behavioral adaptation. Our findings establish the basic spatio-temporal properties of HGB activity as a neural correlate of error processing, complementing traditional error-related potential studies.
Collapse
Affiliation(s)
- Martin Völker
- Translational Neurotechnology Lab, Medical Center - University of Freiburg, 79106, Freiburg, Germany; Graduate School of Robotics, University of Freiburg, 79106, Freiburg, Germany; Department of Computer Science, University of Freiburg, 79110, Freiburg, Germany; BrainLinks-BrainTools, University of Freiburg, 79110, Freiburg, Germany.
- Lukas D J Fiederer
- Translational Neurotechnology Lab, Medical Center - University of Freiburg, 79106, Freiburg, Germany; BrainLinks-BrainTools, University of Freiburg, 79110, Freiburg, Germany; Faculty of Biology, University of Freiburg, 79104, Freiburg, Germany; Bernstein Center, University of Freiburg, 79104, Freiburg, Germany
- Sofie Berberich
- Translational Neurotechnology Lab, Medical Center - University of Freiburg, 79106, Freiburg, Germany; Faculty of Medicine, University of Freiburg, 79106, Freiburg, Germany
- Jiří Hammer
- Translational Neurotechnology Lab, Medical Center - University of Freiburg, 79106, Freiburg, Germany; BrainLinks-BrainTools, University of Freiburg, 79110, Freiburg, Germany; Department of Neurology, 2nd Faculty of Medicine and Motol University Hospital, Charles University, 15006, Prague, Czech Republic
- Joos Behncke
- Translational Neurotechnology Lab, Medical Center - University of Freiburg, 79106, Freiburg, Germany; Department of Computer Science, University of Freiburg, 79110, Freiburg, Germany; BrainLinks-BrainTools, University of Freiburg, 79110, Freiburg, Germany
- Pavel Kršek
- Department of Paediatric Neurology, 2nd Faculty of Medicine and Motol University Hospital, Charles University, 15006, Prague, Czech Republic
- Martin Tomášek
- Department of Neurology, 2nd Faculty of Medicine and Motol University Hospital, Charles University, 15006, Prague, Czech Republic
- Petr Marusič
- Department of Neurology, 2nd Faculty of Medicine and Motol University Hospital, Charles University, 15006, Prague, Czech Republic
- Peter C Reinacher
- Faculty of Medicine, University of Freiburg, 79106, Freiburg, Germany; Stereotactic and Functional Neurosurgery, Medical Center - University of Freiburg, 79106, Freiburg, Germany
- Volker A Coenen
- Faculty of Medicine, University of Freiburg, 79106, Freiburg, Germany; Stereotactic and Functional Neurosurgery, Medical Center - University of Freiburg, 79106, Freiburg, Germany
- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), Jülich Research Centre and JARA, 52428, Jülich, Germany
- Andreas Schulze-Bonhage
- BrainLinks-BrainTools, University of Freiburg, 79110, Freiburg, Germany; Faculty of Medicine, University of Freiburg, 79106, Freiburg, Germany; Epilepsy Center, Medical Center - University of Freiburg, 79106, Freiburg, Germany
- Wolfram Burgard
- Department of Computer Science, University of Freiburg, 79110, Freiburg, Germany; BrainLinks-BrainTools, University of Freiburg, 79110, Freiburg, Germany; Autonomous Intelligent Systems, University of Freiburg, 79110, Freiburg, Germany
- Tonio Ball
- Translational Neurotechnology Lab, Medical Center - University of Freiburg, 79106, Freiburg, Germany; BrainLinks-BrainTools, University of Freiburg, 79110, Freiburg, Germany; Bernstein Center, University of Freiburg, 79104, Freiburg, Germany; Faculty of Medicine, University of Freiburg, 79106, Freiburg, Germany
|
21
|
Tartaglia EM, Brunel N. Bistability and up/down state alternations in inhibition-dominated randomly connected networks of LIF neurons. Sci Rep 2017; 7:11916. [PMID: 28931930 PMCID: PMC5607291 DOI: 10.1038/s41598-017-12033-y] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2017] [Accepted: 08/30/2017] [Indexed: 11/09/2022] Open
Abstract
Electrophysiological recordings in cortex in vivo have revealed a rich variety of dynamical regimes ranging from irregular asynchronous states to a diversity of synchronized states, depending on species, anesthesia, and external stimulation. The average population firing rate in these states is typically low. We study analytically and numerically a network of sparsely connected excitatory and inhibitory integrate-and-fire neurons in the inhibition-dominated, low firing rate regime. For sufficiently high values of the external input, the network exhibits an asynchronous low firing frequency state (L). Depending on synaptic time constants, we show that two scenarios may occur when external inputs are decreased: (1) the L state can destabilize through a Hopf bifurcation as the external input is decreased, leading to synchronized oscillations spanning δ to β frequencies; (2) the network can reach a bistable region, between the low firing frequency network state (L) and a quiescent one (Q). Adding an adaptation current to excitatory neurons leads to spontaneous alternations between L and Q states, similar to experimental observations of UP and DOWN state alternations.
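The model class summarized in this abstract can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation; the network size and all parameters below are toy values chosen for a small demonstration of a sparsely connected excitatory-inhibitory LIF network with relative inhibitory strength g > 4 (inhibition-dominated), delta synapses, and external Poisson drive:

```python
import numpy as np

# Toy sparse E/I LIF network in the inhibition-dominated regime (g > 4),
# loosely following the classic Brunel-style parameterization.
rng = np.random.default_rng(42)

N_E, N_I = 400, 100            # excitatory / inhibitory neurons
N = N_E + N_I
eps = 0.1                      # connection probability
C_E, C_I = int(eps * N_E), int(eps * N_I)
J = 0.1                        # EPSP amplitude (mV)
g = 5.0                        # relative inhibitory strength (g > 4)
tau_m, t_ref = 20.0, 2.0       # membrane time constant, refractory period (ms)
theta, V_reset = 20.0, 0.0     # spike threshold and reset (mV)
nu_ext = 300.0                 # external rate per synapse (Hz), above threshold
dt, T = 0.1, 500.0             # time step and simulated time (ms)

# Random sparse connectivity: W[i, j] is the weight from neuron j onto i.
W = np.zeros((N, N))
for i in range(N):
    W[i, rng.choice(N_E, C_E, replace=False)] = J
    W[i, N_E + rng.choice(N_I, C_I, replace=False)] = -g * J

V = np.zeros(N)
refr = np.zeros(N)                     # remaining refractory time (ms)
lam = C_E * nu_ext * 1e-3 * dt         # expected external spikes per step
spike_count = np.zeros(N)

for _ in range(int(T / dt)):
    ext = rng.poisson(lam, N) * J      # external Poisson drive (voltage jumps)
    spiked = V >= theta
    spike_count += spiked
    V[spiked] = V_reset
    refr[spiked] = t_ref
    rec = W @ spiked                   # recurrent input from this step's spikes
    active = refr <= 0.0
    V[active] += dt / tau_m * (-V[active]) + ext[active] + rec[active]
    refr[~active] -= dt

rates = spike_count / (T * 1e-3)       # firing rates in Hz
print(f"mean rate: {rates.mean():.1f} Hz")
```

With these settings the net recurrent feedback is inhibitory, so the network should settle into a low-rate asynchronous state; lowering nu_ext moves it toward the quiescent regime discussed in the abstract.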
|
22
|
Hahne J, Dahmen D, Schuecker J, Frommer A, Bolten M, Helias M, Diesmann M. Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator. Front Neuroinform 2017; 11:34. [PMID: 28596730 PMCID: PMC5442232 DOI: 10.3389/fninf.2017.00034] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2016] [Accepted: 05/01/2017] [Indexed: 01/21/2023] Open
Abstract
Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.
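The core technical requirement named in this abstract, time-continuous (and possibly delayed) interactions between rate-based units, can be sketched independently of any particular simulator. The following hypothetical two-unit linear rate network with delayed coupling, integrated by forward Euler, is an illustration of the model class, not code from the framework itself:

```python
import numpy as np

# Delayed linear rate dynamics: dr/dt = (-r(t) + W r(t - d) + I) / tau,
# integrated with forward Euler on a fixed grid. Toy parameters.
tau, d, dt, T = 10.0, 1.0, 0.1, 200.0          # all in ms
W = np.array([[0.0, 0.5], [0.5, 0.0]])         # symmetric coupling
I = np.array([1.0, 1.0])                       # constant drive
steps, delay_steps = int(T / dt), int(d / dt)

r = np.zeros((steps + 1, 2))
for t in range(steps):
    # before the first delay has elapsed, use the initial condition
    r_delayed = r[t - delay_steps] if t >= delay_steps else r[0]
    r[t + 1] = r[t] + dt / tau * (-r[t] + W @ r_delayed + I)

# Fixed point solves r* = W r* + I, i.e. r* = (1 - W)^(-1) I = [2, 2]
print(r[-1])
```

The fixed point is independent of the delay; the delay only affects the transient and, for stronger coupling, stability, which is why event-based spiking simulators need dedicated infrastructure for such interactions.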
Affiliation(s)
- Jan Hahne
- School of Mathematics and Natural Sciences, Bergische Universität Wuppertal, Wuppertal, Germany
- David Dahmen
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Jannis Schuecker
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Andreas Frommer
- School of Mathematics and Natural Sciences, Bergische Universität Wuppertal, Wuppertal, Germany
- Matthias Bolten
- School of Mathematics and Natural Sciences, Bergische Universität Wuppertal, Wuppertal, Germany
- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
|
23
|
Schwalger T, Deger M, Gerstner W. Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size. PLoS Comput Biol 2017; 13:e1005507. [PMID: 28422957 PMCID: PMC5415267 DOI: 10.1371/journal.pcbi.1005507] [Citation(s) in RCA: 72] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2016] [Revised: 05/03/2017] [Accepted: 04/07/2017] [Indexed: 11/22/2022] Open
Abstract
Neural population equations such as neural mass or field models are widely used to study brain activity on a large scale. However, the relation of these models to the properties of single neurons is unclear. Here we derive an equation for several interacting populations at the mesoscopic scale starting from a microscopic model of randomly connected generalized integrate-and-fire neuron models. Each population consists of 50–2000 neurons of the same type but different populations account for different neuron types. The stochastic population equations that we find reveal how spike-history effects in single-neuron dynamics such as refractoriness and adaptation interact with finite-size fluctuations on the population level. Efficient integration of the stochastic mesoscopic equations reproduces the statistical behavior of the population activities obtained from microscopic simulations of a full spiking neural network model. The theory describes nonlinear emergent dynamics such as finite-size-induced stochastic transitions in multistable networks and synchronization in balanced networks of excitatory and inhibitory neurons. The mesoscopic equations are employed to rapidly integrate a model of a cortical microcircuit consisting of eight neuron types, which allows us to predict spontaneous population activities as well as evoked responses to thalamic input. Our theory establishes a general framework for modeling finite-size neural population dynamics based on single cell and synapse parameters and offers an efficient approach to analyzing cortical circuits and computations. Understanding the brain requires mathematical models on different spatial scales. On the “microscopic” level of nerve cells, neural spike trains can be well predicted by phenomenological spiking neuron models. On a coarse scale, neural activity can be modeled by phenomenological equations that summarize the total activity of many thousands of neurons. 
Such population models are widely used to model neuroimaging data such as EEG, MEG or fMRI data. However, it is largely unknown how large-scale models are connected to an underlying microscale model. Linking the scales is vital for a correct description of rapid changes and fluctuations of the population activity, and is crucial for multiscale brain models. The challenge is to treat realistic spiking dynamics as well as fluctuations arising from the finite number of neurons. We obtained such a link by deriving stochastic population equations on the mesoscopic scale of 100–1000 neurons from an underlying microscopic model. These equations can be efficiently integrated and reproduce results of a microscopic simulation while achieving a high speed-up factor. We expect that our novel population theory on the mesoscopic scale will be instrumental for understanding experimental data on information processing in the brain, and ultimately link microscopic and macroscopic activity patterns.
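The finite-size fluctuations central to this abstract have a simple signature that can be demonstrated without the full mesoscopic theory: for N neurons spiking independently at rate λ, the empirical population rate in bins of width Δt fluctuates around λ with standard deviation of order sqrt(λ / (N Δt)). A toy Poisson illustration (not the authors' stochastic population equations):

```python
import numpy as np

# Population rate estimated from binned spike counts of N independent
# Poisson neurons; fluctuations shrink roughly as 1/sqrt(N).
rng = np.random.default_rng(0)
lam, dt, bins = 10.0, 0.01, 5000     # rate (Hz), bin width (s), number of bins

def population_rate(N):
    # total spike count per bin ~ Poisson(N * lam * dt), converted to Hz
    counts = rng.poisson(N * lam * dt, bins)
    return counts / (N * dt)

std_small = population_rate(100).std()     # mesoscopic population
std_large = population_rate(10000).std()   # near the N -> infinity limit
print(std_small, std_large)
```

Increasing N by a factor of 100 should shrink the fluctuations by roughly a factor of 10, which is why deterministic (infinite-N) population equations miss the stochastic transitions described above.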
|
24
|
Herreras O. Local Field Potentials: Myths and Misunderstandings. Front Neural Circuits 2016; 10:101. [PMID: 28018180 PMCID: PMC5156830 DOI: 10.3389/fncir.2016.00101] [Citation(s) in RCA: 195] [Impact Index Per Article: 21.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2016] [Accepted: 11/28/2016] [Indexed: 12/02/2022] Open
Abstract
The intracerebral local field potential (LFP) is a measure of brain activity that reflects the highly dynamic flow of information across neural networks. This is a composite signal that receives contributions from multiple neural sources, yet interpreting its nature and significance may be hindered by several confounding factors and technical limitations. By and large, the main factor defining the amplitude of LFPs is the geometry of the current sources, over and above the degree of synchronization or the properties of the media. As such, similar levels of activity may result in potentials that differ in several orders of magnitude in different populations. The geometry of these sources has been experimentally inaccessible until intracerebral high density recordings enabled the co-activating sources to be revealed. Without this information, it has proven difficult to interpret a century's worth of recordings that used temporal cues alone, such as event or spike related potentials and frequency bands. Meanwhile, a collection of biophysically ill-founded concepts have been considered legitimate, which can now be corrected in the light of recent advances. The relationship of LFPs to their sources is often counterintuitive. For instance, most LFP activity is not local but remote, it may be larger further from rather than close to the source, the polarity does not define its excitatory or inhibitory nature, and the amplitude may increase when source's activity is reduced. As technological developments foster the use of LFPs, the time is now ripe to raise awareness of the need to take into account spatial aspects of these signals and of the errors derived from neglecting to do so.
Affiliation(s)
- Oscar Herreras
- Department of Translational Neuroscience, Cajal Institute-CSIC, Madrid, Spain
|
25
|
Hagen E, Dahmen D, Stavrinou ML, Lindén H, Tetzlaff T, van Albada SJ, Grün S, Diesmann M, Einevoll GT. Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks. Cereb Cortex 2016; 26:4461-4496. [PMID: 27797828 PMCID: PMC6193674 DOI: 10.1093/cercor/bhw237] [Citation(s) in RCA: 55] [Impact Index Per Article: 6.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2016] [Revised: 05/31/2016] [Accepted: 07/12/2016] [Indexed: 12/21/2022] Open
Abstract
With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allows for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network model for a ∼1 mm2 patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its public implementation in hybridLFPy form the basis for LFP predictions from other and larger point-neuron network models, as well as extensions of the current application with additional biological detail.
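The separation of network simulation and signal prediction described in this abstract can be caricatured in a few lines: each population's pooled spike train is convolved with a population-specific kernel and the contributions are summed into an LFP proxy. The kernels below are arbitrary placeholders, not the biophysically derived ones computed by hybridLFPy from multicompartment neuron models:

```python
import numpy as np

# LFP proxy: sum over populations of (pooled spike counts * temporal kernel).
# Rates, kernel shapes, and amplitudes are made-up toy values.
rng = np.random.default_rng(1)
dt, T = 0.001, 2.0                       # time step and duration (s)
steps = int(T / dt)
t_k = np.arange(0, 0.05, dt)             # 50 ms kernel support

def exp_kernel(amp, tau):
    # placeholder exponential kernel; sign encodes the population's net effect
    return amp * np.exp(-t_k / tau)

populations = {
    "E": (20.0, exp_kernel(+1.0, 0.005)),   # (rate in Hz, kernel)
    "I": (40.0, exp_kernel(-0.5, 0.010)),
}

lfp = np.zeros(steps)
for rate, kernel in populations.values():
    # pooled Poisson spike counts of 1000 neurons per population
    spikes = rng.poisson(rate * dt * 1000, steps)
    lfp += np.convolve(spikes, kernel, mode="full")[:steps]

print(lfp.shape, float(lfp.mean()))
```

Because the spiking simulation only enters through the pooled spike counts, the same network run can be reused with different kernels, which is the "full separation of simulated network dynamics and LFPs" the scheme exploits.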
Affiliation(s)
- Espen Hagen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, 52425 Jülich, Germany; Department of Mathematical Sciences and Technology, Norwegian University of Life Sciences, 1430 Ås, Norway
- David Dahmen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, 52425 Jülich, Germany
- Maria L Stavrinou
- Department of Mathematical Sciences and Technology, Norwegian University of Life Sciences, 1430 Ås, Norway; Department of Psychology, University of Oslo, 0373 Oslo, Norway
- Henrik Lindén
- Department of Neuroscience and Pharmacology, University of Copenhagen, 2200 Copenhagen, Denmark; Department of Computational Biology, School of Computer Science and Communication, Royal Institute of Technology, 100 44 Stockholm, Sweden
- Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, 52425 Jülich, Germany
- Sacha J van Albada
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, 52425 Jülich, Germany
- Sonja Grün
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, 52425 Jülich, Germany; Theoretical Systems Neurobiology, RWTH Aachen University, 52056 Aachen, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, 52425 Jülich, Germany; Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, 52074 Aachen, Germany; Department of Physics, Faculty 1, RWTH Aachen University, 52062 Aachen, Germany
- Gaute T Einevoll
- Department of Mathematical Sciences and Technology, Norwegian University of Life Sciences, 1430 Ås, Norway; Department of Physics, University of Oslo, 0316 Oslo, Norway
|