51.
Abstract
In this work, we address the neuronal encoding problem from a Bayesian perspective. Specifically, we ask whether neuronal responses in an in vitro neuronal network are consistent with ideal Bayesian observer responses under the free energy principle. In brief, we stimulated an in vitro cortical cell culture with stimulus trains that had a known statistical structure. We then asked whether recorded neuronal responses were consistent with variational message passing based upon free energy minimisation (i.e., evidence maximisation). Effectively, this required us to solve two problems: first, we had to formulate the Bayes-optimal encoding of the causes or sources of sensory stimulation, and then show that these idealised responses could account for observed electrophysiological responses. We describe a simulation of an optimal neural network (i.e., the ideal Bayesian neural code) and then consider the mapping from idealised in silico responses to recorded in vitro responses. Our objective was to find evidence for functional specialisation and segregation in the in vitro neural network that reproduced in silico learning via free energy minimisation. Finally, we combined the in vitro and in silico results to characterise learning in terms of trajectories in a variational information plane of accuracy and complexity.
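The accuracy and complexity coordinates of the variational information plane follow from the standard decomposition of variational free energy, F = KL[q(s)||p(s)] − E_q[ln p(o|s)]. A minimal numerical sketch for a two-state discrete model (all numbers illustrative, not taken from the paper's generative model):

```python
import numpy as np

def free_energy(q, prior, lik):
    """Variational free energy F = complexity - accuracy for discrete states.
    q: posterior beliefs q(s); prior: p(s); lik: p(o|s) for the observed outcome o."""
    eps = 1e-12
    complexity = float(np.sum(q * np.log((q + eps) / (prior + eps))))  # KL[q(s) || p(s)]
    accuracy = float(np.sum(q * np.log(lik + eps)))                    # E_q[ln p(o|s)]
    return complexity - accuracy, complexity, accuracy

# Two hidden states, uniform prior, and an observation favouring state 0
prior = np.array([0.5, 0.5])
lik = np.array([0.9, 0.1])
post = prior * lik / np.sum(prior * lik)   # exact Bayesian posterior
F, C, A = free_energy(post, prior, lik)
```

At the exact posterior the two terms combine to give the negative log evidence, which is what minimising free energy (i.e., maximising evidence) achieves.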
52. Rocha RP, Koçillari L, Suweis S, Corbetta M, Maritan A. Homeostatic plasticity and emergence of functional networks in a whole-brain model at criticality. Sci Rep 2018; 8:15682. [PMID: 30356174; PMCID: PMC6200722; DOI: 10.1038/s41598-018-33923-9]
Abstract
Understanding the relationship between large-scale structural and functional brain networks remains a crucial issue in modern neuroscience. Recently, there has been growing interest in the role of homeostatic plasticity mechanisms, across different spatiotemporal scales, in regulating network activity and brain functioning under a wide range of environmental conditions and brain states (e.g., during learning, development, ageing, and neurological disease). In the present study, we investigate how the inclusion of homeostatic plasticity in a stochastic whole-brain model, implemented as a normalization of each node's incoming excitatory input, affects macroscopic activity during rest and the formation of functional networks. Importantly, we address the structure-function relationship at both the group and individual levels. We show that normalization of each node's excitatory input improves the correspondence between the simulated neural patterns of the model and various functional brain data. Indeed, we find that the best match is achieved when the model's control parameter is at its critical value, and that normalization minimizes both the variability of the critical points and the variability of neuronal activity patterns among subjects. Our results therefore suggest that the inclusion of homeostatic principles leads to more realistic brain activity consistent with the hallmarks of criticality. Our theoretical framework opens new perspectives in personalized brain modeling, with potential applications for investigating deviations from criticality due to structural lesions (e.g. stroke) or brain disorders.
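The homeostatic normalization used here, scaling each node's incoming excitatory input, can be sketched in a few lines. The matrix convention and values below are illustrative, not taken from the paper's code:

```python
import numpy as np

def normalize_incoming(W):
    """Homeostatic normalization: scale each row of W (the excitatory weights
    incoming to node i, with W[i, j] the weight from node j onto node i) to
    unit sum. Rows with no incoming connections are left untouched."""
    W = np.asarray(W, dtype=float).copy()
    sums = W.sum(axis=1, keepdims=True)
    nonzero = sums[:, 0] > 0
    W[nonzero] /= sums[nonzero]
    return W

W = np.array([[0.0, 2.0, 2.0],
              [1.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])   # toy structural connectome
Wn = normalize_incoming(W)
```

After normalization, every connected node receives the same total excitatory input regardless of its structural degree, which is the mechanism the paper credits with reducing inter-subject variability.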
Affiliation(s)
- Rodrigo P Rocha
- Department of Physics, School of Philosophy, Sciences and Letters of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP, Brazil
- Dipartimento di Fisica e Astronomia, Università di Padova and INFN, via Marzolo 8, I-35131, Padova, Italy
- Padova Neuroscience Center, Università di Padova, Padova, Italy
- Loren Koçillari
- Dipartimento di Fisica e Astronomia, Università di Padova and INFN, via Marzolo 8, I-35131, Padova, Italy
- Padova Neuroscience Center, Università di Padova, Padova, Italy
- Samir Suweis
- Dipartimento di Fisica e Astronomia, Università di Padova and INFN, via Marzolo 8, I-35131, Padova, Italy
- Padova Neuroscience Center, Università di Padova, Padova, Italy
- Maurizio Corbetta
- Padova Neuroscience Center, Università di Padova, Padova, Italy
- Dipartimento di Neuroscienze, Università di Padova, Padova, Italy
- Departments of Neurology, Radiology, Neuroscience, and Bioengineering, Washington University School of Medicine, St. Louis, USA
- Amos Maritan
- Dipartimento di Fisica e Astronomia, Università di Padova and INFN, via Marzolo 8, I-35131, Padova, Italy
- Padova Neuroscience Center, Università di Padova, Padova, Italy
53. Kossio FYK, Goedeke S, van den Akker B, Ibarz B, Memmesheimer RM. Growing Critical: Self-Organized Criticality in a Developing Neural System. Phys Rev Lett 2018; 121:058301. [PMID: 30118252; DOI: 10.1103/PhysRevLett.121.058301]
Abstract
Experiments in various neural systems found avalanches: bursts of activity with characteristics typical for critical dynamics. A possible explanation for their occurrence is an underlying network that self-organizes into a critical state. We propose a simple spiking model for developing neural networks, showing how these may "grow into" criticality. Avalanches generated by our model correspond to clusters of widely applied Hawkes processes. We analytically derive the cluster size and duration distributions and find that they agree with those of experimentally observed neuronal avalanches.
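The avalanche statistics referred to here can be reproduced with a few lines of branching-process simulation. This is a generic Galton-Watson sketch, not the Hawkes-process model analysed in the paper:

```python
import numpy as np

def avalanche_size(m, rng, cap=1_000_000):
    """Total number of events in one avalanche of a Galton-Watson branching
    process in which each event triggers Poisson(m) successors. m = 1 is
    critical (heavy-tailed sizes ~ s^-3/2); m < 1 is subcritical, with
    mean avalanche size 1 / (1 - m)."""
    active, size = 1, 1
    while active and size < cap:
        active = int(rng.poisson(m * active))  # offspring of the whole current generation
        size += active
    return size

rng = np.random.default_rng(0)
subcritical = [avalanche_size(0.5, rng) for _ in range(20_000)]
critical = [avalanche_size(1.0, rng) for _ in range(5_000)]
```

At m = 1 the size distribution develops the power-law tail characteristic of criticality, while subcritical avalanches stay exponentially bounded.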
Affiliation(s)
- Sven Goedeke
- Neural Network Dynamics and Computation, Institute of Genetics, University of Bonn, Bonn, Germany
- Borja Ibarz
- Nonlinear Dynamics and Chaos Group, Departamento de Fisica, Universidad Rey Juan Carlos, Madrid, Spain
- Raoul-Martin Memmesheimer
- Neural Network Dynamics and Computation, Institute of Genetics, University of Bonn, Bonn, Germany
- Department of Neuroinformatics, Radboud University Nijmegen, Nijmegen, Netherlands
54. Manninen T, Aćimović J, Havela R, Teppola H, Linne ML. Challenges in Reproducibility, Replicability, and Comparability of Computational Models and Tools for Neuronal and Glial Networks, Cells, and Subcellular Structures. Front Neuroinform 2018; 12:20. [PMID: 29765315; PMCID: PMC5938413; DOI: 10.3389/fninf.2018.00020]
Abstract
The possibility to replicate and reproduce published research results is one of the biggest challenges in all areas of science. In computational neuroscience, thousands of models are available. However, it is rarely possible to reimplement a model based on the information in the original publication, let alone rerun it, simply because the model implementation has not been made publicly available. We evaluate and discuss the comparability of a diverse selection of simulation tools: tools for biochemical reactions and spiking neuronal networks, and relatively new tools for growth in cell cultures. Replicability and reproducibility issues are considered for computational models that are equally diverse, including models of intracellular signal transduction in neurons and glial cells, as well as single glial cells, neuron-glia interactions, and selected examples of spiking neuronal networks. We also address the comparability of the simulation results with one another, to assess whether the studied models can be used to answer similar research questions. In addition to presenting the challenges in reproducibility and replicability of published results in computational neuroscience, we highlight the need for recommendations and good practices for publishing simulation tools and computational models. Model validation and flexible model description must be an integral part of the tools used to simulate and develop computational models. Constant improvement in experimental techniques and recording protocols leads to increasing knowledge about the biophysical mechanisms in neural systems. This poses new challenges for computational neuroscience: extended or completely new computational methods and models may be required. Careful evaluation and categorization of the existing models and tools provide a foundation for these future needs, for constructing multiscale models or extending models to incorporate additional or more detailed biophysical mechanisms. Improving the quality of publications in computational neuroscience, and enabling the progressive building of advanced computational models and tools, can be achieved only by adopting publishing standards that emphasize the replicability and reproducibility of research results.
Affiliation(s)
- Tiina Manninen
- Computational Neuroscience Group, BioMediTech Institute and Faculty of Biomedical Sciences and Engineering, Tampere University of Technology, Tampere, Finland
- Laboratory of Signal Processing, Tampere University of Technology, Tampere, Finland
- Jugoslava Aćimović
- Computational Neuroscience Group, BioMediTech Institute and Faculty of Biomedical Sciences and Engineering, Tampere University of Technology, Tampere, Finland
- Laboratory of Signal Processing, Tampere University of Technology, Tampere, Finland
- Riikka Havela
- Computational Neuroscience Group, BioMediTech Institute and Faculty of Biomedical Sciences and Engineering, Tampere University of Technology, Tampere, Finland
- Laboratory of Signal Processing, Tampere University of Technology, Tampere, Finland
- Heidi Teppola
- Computational Neuroscience Group, BioMediTech Institute and Faculty of Biomedical Sciences and Engineering, Tampere University of Technology, Tampere, Finland
- Laboratory of Signal Processing, Tampere University of Technology, Tampere, Finland
- Marja-Leena Linne
- Computational Neuroscience Group, BioMediTech Institute and Faculty of Biomedical Sciences and Engineering, Tampere University of Technology, Tampere, Finland
- Laboratory of Signal Processing, Tampere University of Technology, Tampere, Finland
55. Priesemann V, Shriki O. Can a time varying external drive give rise to apparent criticality in neural systems? PLoS Comput Biol 2018; 14:e1006081. [PMID: 29813052; PMCID: PMC6002119; DOI: 10.1371/journal.pcbi.1006081]
Abstract
The finding of power law scaling in neural recordings lends support to the hypothesis of critical brain dynamics. However, power laws are not unique to critical systems and can arise from alternative mechanisms. Here, we investigate whether a common time-varying external drive to a set of Poisson units can give rise to neuronal avalanches and apparent criticality. To this end, we analytically derive the avalanche size and duration distributions, as well as additional measures, first for homogeneous Poisson activity and then for slowly varying inhomogeneous Poisson activity. We show that homogeneous Poisson activity cannot give rise to power law distributions. Inhomogeneous activity cannot generate perfect power laws either, but it can exhibit approximate power laws with cutoffs comparable to those typically observed in experiments. This mechanism of generating apparent criticality through time-varying external fields, forces or input may generalize to many other systems, such as the dynamics of swarms, diseases or extinction cascades. Here, we illustrate the analytically derived effects for spike recordings in vivo and discuss approaches to distinguish true from apparent criticality. Ultimately, this requires causal interventions, which allow separating internal system properties from externally imposed ones.
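The binned-avalanche construction underlying such analyses is easy to sketch. The sinusoidal rate profile below is a made-up stand-in for a slowly varying external drive:

```python
import numpy as np

def avalanche_sizes(counts):
    """Standard avalanche definition on binned activity: an avalanche is a run
    of non-empty time bins bounded by empty bins; its size is the run's total count."""
    sizes, current = [], 0
    for c in counts:
        if c > 0:
            current += int(c)
        elif current:
            sizes.append(current)
            current = 0
    if current:
        sizes.append(current)
    return sizes

# Slowly varying (inhomogeneous) Poisson drive; the rate profile is illustrative
rng = np.random.default_rng(1)
t = np.arange(20_000)
rate = 0.5 + 0.45 * np.sin(2 * np.pi * t / 2_000)  # mean spikes per bin
counts = rng.poisson(rate)
sizes = avalanche_sizes(counts)
```

High-rate epochs merge many bins into large "avalanches" while low-rate epochs fragment activity into small ones, which is how a drive with no internal cascade dynamics can broaden the size distribution.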
Affiliation(s)
- Viola Priesemann
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Oren Shriki
- Department of Cognitive and Brain Sciences, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva, Israel
56. Daffertshofer A, Ton R, Pietras B, Kringelbach ML, Deco G. Scale-freeness or partial synchronization in neural mass phase oscillator networks: Pick one of two? Neuroimage 2018; 180:428-441. [PMID: 29625237; DOI: 10.1016/j.neuroimage.2018.03.070]
Abstract
Modeling and interpreting (partial) synchronous neural activity can be a challenge. We illustrate this by deriving the phase dynamics of two seminal neural mass models: the Wilson-Cowan firing rate model and the voltage-based Freeman model. We established that the phase dynamics of these models differed qualitatively, owing to an attractive coupling in the former and a repulsive coupling in the latter. Using empirical structural connectivity matrices, we determined that both dynamics can capture the functional connectivity observed in resting state activity. We further searched for two pivotal dynamical features that have been reported in many experimental studies: (1) partial phase synchrony with the possibility of a transition towards either a desynchronized or a (fully) synchronized state; (2) long-term autocorrelations indicative of scale-free temporal dynamics of phase synchronization. Only the Freeman phase model exhibited scale-free behavior. Its repulsive coupling, however, caused the individual phases to disperse and did not allow for a transition into a synchronized state. The Wilson-Cowan phase model, by contrast, could switch into a (partially) synchronized state, but it did not generate long-term correlations, despite being located close to the onset of synchronization, i.e. in its critical regime. That is, each phase-reduced model can display one of the two dynamical features, but not both.
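The attractive-versus-repulsive distinction can be illustrated with a generic Kuramoto-type phase model (identical oscillators, uniform coupling; this is not the derived Wilson-Cowan or Freeman phase dynamics):

```python
import numpy as np

def final_order(K, N=100, steps=2000, dt=0.05, seed=0):
    """Euler-integrate identical phase oscillators
    dtheta_i/dt = (K/N) * sum_j sin(theta_j - theta_i)
    and return the final Kuramoto order parameter r in [0, 1]
    (0 = incoherent, 1 = fully synchronized)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, N)
    for _ in range(steps):
        z = np.mean(np.exp(1j * theta))               # mean field r * exp(i*psi)
        theta += dt * K * np.imag(z * np.exp(-1j * theta))  # = K*r*sin(psi - theta_i)
    return float(np.abs(np.mean(np.exp(1j * theta))))
```

With K > 0 (attractive) the phases lock and r approaches 1; with K < 0 (repulsive) they disperse and the order parameter stays low, mirroring the two regimes described above.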
Affiliation(s)
- Andreas Daffertshofer
- Institute for Brain and Behavior Amsterdam & Amsterdam Movement Sciences, Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, van der Boechorststraat 9, 1081BT, Amsterdam, The Netherlands.
- Robert Ton
- Institute for Brain and Behavior Amsterdam & Amsterdam Movement Sciences, Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, van der Boechorststraat 9, 1081BT, Amsterdam, The Netherlands; Center for Brain and Cognition, Computational Neuroscience Group, Universitat Pompeu Fabra, Carrer Tanger 122-140, 08018, Barcelona, Spain
- Bastian Pietras
- Institute for Brain and Behavior Amsterdam & Amsterdam Movement Sciences, Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, van der Boechorststraat 9, 1081BT, Amsterdam, The Netherlands; Department of Physics, Lancaster University, Lancaster, LA1 4YB, UK
- Morten L Kringelbach
- University Department of Psychiatry, University of Oxford, Oxford, OX3 7JX, UK; Center for Music in the Brain, Department of Clinical Medicine, Aarhus University, Denmark
- Gustavo Deco
- Center for Brain and Cognition, Computational Neuroscience Group, Universitat Pompeu Fabra, Carrer Tanger 122-140, 08018, Barcelona, Spain; Institució Catalana de la Recerca i Estudis Avanats (ICREA), Universitat Pompeu Fabra, Carrer Tanger 122-140, 08018, Barcelona, Spain
57. Gallinaro JV, Rotter S. Associative properties of structural plasticity based on firing rate homeostasis in recurrent neuronal networks. Sci Rep 2018; 8:3754. [PMID: 29491474; PMCID: PMC5830542; DOI: 10.1038/s41598-018-22077-3]
Abstract
Correlation-based Hebbian plasticity is thought to shape neuronal connectivity during development and learning, whereas homeostatic plasticity is thought to stabilize network activity. Here we investigate a new aspect of this dichotomy: can Hebbian associative properties also emerge as a network effect from a plasticity rule based on homeostatic principles at the level of individual neurons? To address this question, we simulated a recurrent network of leaky integrate-and-fire neurons in which excitatory connections are subject to a structural plasticity rule based on firing rate homeostasis. We show that a subgroup of neurons develops stronger within-group connectivity as a consequence of receiving stronger external stimulation. In an experimentally well-documented scenario, we show that feature-specific connectivity, similar to what has been observed in rodent visual cortex, can emerge from such a plasticity rule. The experience-dependent structural changes triggered by stimulation are long-lasting and decay only slowly when the neurons are again exposed to unspecific external inputs.
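The flavour of a firing-rate-homeostatic structural rule can be caricatured as grow-when-quiet, prune-when-active. Everything below (function names, step size, partner selection) is illustrative, not the paper's actual model:

```python
import numpy as np

def structural_step(W, rates, target, eta=0.1, rng=None):
    """One caricature step of structural plasticity driven by firing-rate
    homeostasis: a neuron firing below its target rate grows a new incoming
    excitatory contact; one firing above it prunes an existing contact.
    W[i, j] is the weight from presynaptic j onto postsynaptic i."""
    rng = rng or np.random.default_rng(0)
    n = len(rates)
    for i in range(n):
        if rates[i] < target:
            j = int(rng.integers(n - 1))
            j = j if j < i else j + 1          # any presynaptic partner != i
            W[i, j] += eta                     # grow a contact
        elif rates[i] > target and W[i].any():
            j = int(rng.choice(np.flatnonzero(W[i])))
            W[i, j] = max(0.0, W[i, j] - eta)  # prune a contact
    return W

W = np.zeros((5, 5))
W = structural_step(W, rates=np.zeros(5), target=1.0)   # all too quiet: total weight grows
```

Because stimulated neurons all cross their target together, contacts created during their quiet periods preferentially connect co-stimulated cells, which is the intuition behind the associative effect reported above.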
Affiliation(s)
- Júlia V Gallinaro
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Freiburg im Breisgau, Germany.
- Stefan Rotter
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Freiburg im Breisgau, Germany
58. Yaghoubi M, de Graaf T, Orlandi JG, Girotto F, Colicos MA, Davidsen J. Neuronal avalanche dynamics indicates different universality classes in neuronal cultures. Sci Rep 2018; 8:3417. [PMID: 29467426; PMCID: PMC5821811; DOI: 10.1038/s41598-018-21730-1]
Abstract
Neuronal avalanches have become a ubiquitous tool for describing the activity of large neuronal assemblies. The emergence of scale-free statistics with well-defined exponents has led to the belief that the brain might operate near a critical point. Yet little is known about how the different exponents arise or how robust they are. Using calcium imaging recordings of dissociated neuronal cultures, we show that the exponents are not universal and that significantly different exponents arise with different culture preparations, implying the existence of different universality classes. Naturally developing cultures show avalanche statistics consistent with those of a mean-field branching process; however, cultures grown in the presence of folic acid metabolites appear to be in a distinct universality class with significantly different critical exponents. Given the increased synaptic density and number of feedback loops in folate-reared cultures, our results suggest that network topology plays a leading role in shaping avalanche dynamics. We also show that for both types of cultures pronounced correlations exist in the sizes of neuronal avalanches, indicating size clustering, which is much stronger in folate-reared cultures.
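Avalanche exponents of the kind compared here are typically estimated by maximum likelihood rather than by fitting a straight line to a histogram. A sketch of the standard continuous MLE (in the spirit of Clauset et al.), checked on a synthetic power-law sample:

```python
import math
import random

def powerlaw_mle(data, x_min):
    """Hill/Clauset maximum-likelihood estimate of a continuous power-law
    exponent: alpha = 1 + n / sum(ln(x / x_min)) over all x >= x_min."""
    xs = [x for x in data if x >= x_min]
    return 1.0 + len(xs) / sum(math.log(x / x_min) for x in xs)

# Synthetic check: inverse-transform sampling from p(x) ~ x^-2.5, x >= 1
rng = random.Random(42)
sample = [(1.0 - rng.random()) ** (-1.0 / 1.5) for _ in range(50_000)]
alpha_hat = powerlaw_mle(sample, 1.0)
```

In practice the cutoff x_min is itself chosen by minimising a goodness-of-fit distance; that step is omitted here.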
Affiliation(s)
- Mohammad Yaghoubi
- Complexity Science Group, Department of Physics and Astronomy, Faculty of Science, University of Calgary, Calgary, AB, T2N 1N4, Canada
- Ty de Graaf
- Complexity Science Group, Department of Physics and Astronomy, Faculty of Science, University of Calgary, Calgary, AB, T2N 1N4, Canada
- Javier G Orlandi
- Complexity Science Group, Department of Physics and Astronomy, Faculty of Science, University of Calgary, Calgary, AB, T2N 1N4, Canada
- Fernando Girotto
- Complexity Science Group, Department of Physics and Astronomy, Faculty of Science, University of Calgary, Calgary, AB, T2N 1N4, Canada
- Michael A Colicos
- Department of Physiology & Pharmacology, Faculty of Medicine, and the Hotchkiss Brain Institute, University of Calgary, Calgary, AB, T2N 1N4, Canada.
- Jörn Davidsen
- Complexity Science Group, Department of Physics and Astronomy, Faculty of Science, University of Calgary, Calgary, AB, T2N 1N4, Canada.
59. Fractal Analyses of Networks of Integrate-and-Fire Stochastic Spiking Neurons. Complex Networks IX 2018. [DOI: 10.1007/978-3-319-73198-8_14]
60. Gosak M, Stožer A, Markovič R, Dolenšek J, Perc M, Rupnik MS, Marhl M. Critical and Supercritical Spatiotemporal Calcium Dynamics in Beta Cells. Front Physiol 2017; 8:1106. [PMID: 29312008; PMCID: PMC5743929; DOI: 10.3389/fphys.2017.01106]
Abstract
A coordinated functioning of beta cells within pancreatic islets is mediated by oscillatory membrane depolarization and subsequent changes in cytoplasmic calcium concentration. While gap junctions allow for intraislet information exchange, beta cells within islets form complex syncytia that are intrinsically nonlinear and highly heterogeneous. To study spatiotemporal calcium dynamics within these syncytia, we make use of computational modeling and confocal high-speed functional multicellular imaging. We show that model predictions are in good agreement with experimental data, especially if a high degree of heterogeneity in the intercellular coupling term is assumed. In particular, during the first few minutes after stimulation, the probability distribution of calcium wave sizes is characterized by a power law, thus indicating critical behavior. After this period, the dynamics changes qualitatively such that the number of global intercellular calcium events increases to the point where the behavior becomes supercritical. To better mimic normal in vivo conditions, we compare the described behavior during supraphysiological non-oscillatory stimulation with the behavior during exposure to a slightly lower and oscillatory glucose challenge. In the case of this protocol, we observe only critical behavior in both experiment and model. Our results indicate that the loss of oscillatory changes, along with the rise in plasma glucose observed in diabetes, could be associated with a switch to supercritical calcium dynamics and loss of beta cell functionality.
Affiliation(s)
- Marko Gosak
- Faculty of Medicine, Institute of Physiology, University of Maribor, Maribor, Slovenia
- Faculty of Natural Sciences and Mathematics, University of Maribor, Maribor, Slovenia
- Andraž Stožer
- Faculty of Medicine, Institute of Physiology, University of Maribor, Maribor, Slovenia
- Rene Markovič
- Faculty of Natural Sciences and Mathematics, University of Maribor, Maribor, Slovenia
- Faculty of Education, University of Maribor, Maribor, Slovenia
- Faculty of Energy Technology, University of Maribor, Krško, Slovenia
- Jurij Dolenšek
- Faculty of Medicine, Institute of Physiology, University of Maribor, Maribor, Slovenia
- Matjaž Perc
- Faculty of Natural Sciences and Mathematics, University of Maribor, Maribor, Slovenia
- Center for Applied Mathematics and Theoretical Physics, University of Maribor, Maribor, Slovenia
- Complexity Science Hub, Vienna, Austria
- Marjan S. Rupnik
- Faculty of Medicine, Institute of Physiology, University of Maribor, Maribor, Slovenia
- Institute of Physiology and Pharmacology, Medical University of Vienna, Vienna, Austria
- Marko Marhl
- Faculty of Natural Sciences and Mathematics, University of Maribor, Maribor, Slovenia
- Faculty of Education, University of Maribor, Maribor, Slovenia
61. Optimizing information processing in neuronal networks beyond critical states. PLoS One 2017; 12:e0184367. [PMID: 28922366; PMCID: PMC5603180; DOI: 10.1371/journal.pone.0184367]
Abstract
Critical dynamics have been postulated as an ideal regime for neuronal networks in the brain, considering optimal dynamic range and information processing. Herein, we focused on how the information entropy encoded in spatiotemporal activity patterns may vary in critical networks. We employed branching-process-based models to investigate how entropy can be embedded in spatiotemporal patterns. We determined that the information capacity of critical networks may vary depending on the manipulation of microscopic parameters. Specifically, the mean number of connections governed the number of spatiotemporal patterns in the networks. These findings are compatible with those of real neuronal networks observed in specific brain circuitries, where critical behavior is necessary for an optimal dynamic range response but the uncertainty provided by high entropy, as coded by spatiotemporal patterns, is not required. With this, we reveal that information processing can be optimized in neuronal networks beyond critical states.
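The "information entropy encoded in spatiotemporal activity patterns" can be made concrete as the Shannon entropy of the empirical pattern distribution. A small self-contained sketch (the toy frames below are illustrative, not output of the paper's branching model):

```python
import math
from collections import Counter

def pattern_entropy(frames):
    """Shannon entropy (in bits) of the empirical distribution of spatiotemporal
    activity patterns; each frame is a tuple/list of binary unit states."""
    counts = Counter(map(tuple, frames))
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Four time steps on three units: two distinct patterns, one twice as frequent
frames = [(1, 0, 0), (1, 0, 0), (0, 1, 1), (1, 0, 0)]
H = pattern_entropy(frames)
```

A network that revisits few patterns has low entropy even if its avalanche statistics are critical, which is the distinction the abstract draws between dynamic range and pattern uncertainty.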
62. Self-Organized Supercriticality and Oscillations in Networks of Stochastic Spiking Neurons. Entropy 2017. [DOI: 10.3390/e19080399]
63. Välkki IA, Lenk K, Mikkonen JE, Kapucu FE, Hyttinen JAK. Network-Wide Adaptive Burst Detection Depicts Neuronal Activity with Improved Accuracy. Front Comput Neurosci 2017; 11:40. [PMID: 28620291; PMCID: PMC5450420; DOI: 10.3389/fncom.2017.00040]
Abstract
Neuronal networks are often characterized by their spiking and bursting statistics. Previously, we introduced an adaptive burst analysis method which enhances the analysis power for neuronal networks with highly varying firing dynamics. The adaptation is based on single channels, analyzing each element of a network separately. This kind of analysis is adequate for the assessment of local behavior, where the focus is on neuronal activity in the vicinity of a single electrode. However, the assessment of the whole network may be hampered if parts of the network are analyzed using different rules. Here, we test how using multiple channels and measurement time points affects adaptive burst detection. The main question is whether network-wide adaptive burst detection can provide new insights into the assessment of network activity. We therefore propose a modification of the previously introduced inter-spike interval (ISI) histogram based cumulative moving average (CMA) algorithm to analyze multiple spike trains simultaneously. The network size can be freely defined, e.g., to include all the electrodes in a microelectrode array (MEA) recording. Additionally, the method can be applied to a series of measurements on the same network to pool the data for statistical analysis. First, we apply both the original CMA algorithm and our proposed network-wide CMA algorithm to artificial spike trains to investigate how the modification changes burst detection. Thereafter, we use the algorithms on MEA data from spontaneously active, chemically manipulated in vitro rat cortical networks. Moreover, we compare the synchrony of the detected bursts, introducing a new burst synchrony measure. Finally, we demonstrate how the bursting statistics can be used to classify networks by applying k-means clustering to these statistics. The results show that the proposed network-wide adaptive burst detection provides a method to unify the burst definition across the whole network and thus improves the assessment and classification of neuronal activity, e.g., the effects of different pharmaceuticals. The results indicate that the novel method is adaptive enough to be usable on networks with different dynamics, and it is especially useful when comparing the behavior of differently spiking networks, for example in developing networks.
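The core of the ISI-histogram/CMA idea can be sketched for a single spike train as follows. This is a rough illustration only; the published algorithm's skewness-based scaling of the threshold, and the network-wide pooling proposed in this paper, are omitted:

```python
import numpy as np

def cma_threshold(isis, bin_width=1.0):
    """Pick a burst ISI threshold adaptively from the ISI histogram: take the
    bin at which the cumulative moving average (CMA) of the histogram peaks."""
    isis = np.asarray(isis, dtype=float)
    edges = np.arange(0.0, isis.max() + 2 * bin_width, bin_width)
    hist, edges = np.histogram(isis, bins=edges)
    cma = np.cumsum(hist) / np.arange(1, len(hist) + 1)
    return float(edges[int(np.argmax(cma)) + 1])

def detect_bursts(spikes, thr, min_spikes=2):
    """Group consecutive spikes whose inter-spike interval is <= thr into bursts."""
    bursts, current = [], [spikes[0]]
    for a, b in zip(spikes, spikes[1:]):
        if b - a <= thr:
            current.append(b)
        else:
            if len(current) >= min_spikes:
                bursts.append(current)
            current = [b]
    if len(current) >= min_spikes:
        bursts.append(current)
    return bursts

spikes = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2, 20.0]   # toy spike train (seconds)
thr = cma_threshold(np.diff(spikes))
bursts = detect_bursts(spikes, thr)
```

The network-wide variant described above would, in spirit, pool the ISIs from all channels before computing the CMA, so that every electrode shares one burst definition.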
Affiliation(s)
- Inkeri A Välkki
- BioMediTech Institute and Faculty of Biomedical Sciences and Engineering, Tampere University of Technology, Tampere, Finland
- Kerstin Lenk
- BioMediTech Institute and Faculty of Biomedical Sciences and Engineering, Tampere University of Technology, Tampere, Finland
- Jarno E Mikkonen
- Department of Computer Science and Information Systems, University of Jyväskylä, Jyväskylä, Finland
- Fikret E Kapucu
- BioMediTech Institute and Faculty of Biomedical Sciences and Engineering, Tampere University of Technology, Tampere, Finland
- Pervasive Computing, Faculty of Computing and Electrical Engineering, Tampere University of Technology, Tampere, Finland
- Jari A K Hyttinen
- BioMediTech Institute and Faculty of Biomedical Sciences and Engineering, Tampere University of Technology, Tampere, Finland
64. Del Papa B, Priesemann V, Triesch J. Criticality meets learning: Criticality signatures in a self-organizing recurrent neural network. PLoS One 2017; 12:e0178683. [PMID: 28552964; PMCID: PMC5446191; DOI: 10.1371/journal.pone.0178683]
Abstract
Many experiments have suggested that the brain operates close to a critical state, based on signatures of criticality such as power-law distributed neuronal avalanches. In neural network models, criticality is a dynamical state that maximizes information processing capacities, e.g. sensitivity to input, dynamical range and storage capacity, which makes it a favorable candidate state for brain function. Although models that self-organize towards a critical state have been proposed, the relation between criticality signatures and learning is still unclear. Here, we investigate signatures of criticality in a self-organizing recurrent neural network (SORN). Investigating criticality in the SORN is of particular interest because it has not been developed to show criticality. Instead, the SORN has been shown to exhibit spatio-temporal pattern learning through a combination of neural plasticity mechanisms and it reproduces a number of biological findings on neural variability and the statistics and fluctuations of synaptic efficacies. We show that, after a transient, the SORN spontaneously self-organizes into a dynamical state that shows criticality signatures comparable to those found in experiments. The plasticity mechanisms are necessary to attain that dynamical state, but not to maintain it. Furthermore, onset of external input transiently changes the slope of the avalanche distributions – matching recent experimental findings. Interestingly, the membrane noise level necessary for the occurrence of the criticality signatures reduces the model’s performance in simple learning tasks. Overall, our work shows that the biologically inspired plasticity and homeostasis mechanisms responsible for the SORN’s spatio-temporal learning abilities can give rise to criticality signatures in its activity when driven by random input, but these break down under the structured input of short repeating sequences.
Affiliation(s)
- Bruno Del Papa: Frankfurt Institute for Advanced Studies, Johann Wolfgang Goethe University, Frankfurt am Main, Germany; International Max Planck Research School for Neural Circuits, Max Planck Institute for Brain Research, Frankfurt am Main, Germany
- Viola Priesemann: Department of Non-linear Dynamics, Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany; Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Jochen Triesch: Frankfurt Institute for Advanced Studies, Johann Wolfgang Goethe University, Frankfurt am Main, Germany
65
Abstract
In real-world applications, observations are often constrained to a small fraction of a system. Such spatial subsampling can be caused by the inaccessibility or the sheer size of the system, and cannot be overcome by longer sampling. Spatial subsampling can strongly bias inferences about a system's aggregated properties. To overcome the bias, we derive analytically a subsampling scaling framework that is applicable to different observables, including distributions of neuronal avalanches, of number of people infected during an epidemic outbreak, and of node degrees. We demonstrate how to infer the correct distributions of the underlying full system, how to apply it to distinguish critical from subcritical systems, and how to disentangle subsampling and finite size effects. Lastly, we apply subsampling scaling to neuronal avalanche models and to recordings from developing neural networks. We show that only mature, but not young networks follow power-law scaling, indicating self-organization to criticality during development.
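The subsampling bias addressed here is easy to reproduce schematically (assuming, for illustration only, Pareto-distributed avalanche sizes and independent per-unit observation; this is not the paper's scaling ansatz):

```python
import random

rng = random.Random(1)
p_obs = 0.1  # fraction of the system covered by the recording (subsampling)

# Full-system avalanche sizes drawn from a heavy-tailed (Pareto-like) distribution.
full = [max(1, int(rng.random() ** (-1 / 1.5))) for _ in range(5000)]

# Spatial subsampling: each unit taking part in an avalanche is observed
# only with probability p_obs, so observed sizes shrink and many small
# avalanches are missed entirely.
observed = [sum(1 for _ in range(s) if rng.random() < p_obs) for s in full]

missed = sum(1 for s in observed if s == 0) / len(observed)
print(f"mean full size:     {sum(full) / len(full):.2f}")
print(f"mean observed size: {sum(observed) / len(observed):.2f}")
print(f"avalanches missed entirely: {missed:.0%}")
```

The naive observed distribution is therefore strongly distorted at small sizes, which is the bias the subsampling-scaling framework is designed to undo.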
Affiliation(s)
- A Levina: Institute of Science and Technology Austria, Am Campus 1, 3400 Klosterneuburg, Austria; Bernstein Center for Computational Neuroscience, Am Fassberg 17, 37077 Göttingen, Germany
- V Priesemann: Bernstein Center for Computational Neuroscience, Am Fassberg 17, 37077 Göttingen, Germany; Max Planck Institute for Dynamics and Self-Organization, Am Fassberg 17, 37077 Göttingen, Germany
66
Kanders K, Lorimer T, Stoop R. Avalanche and edge-of-chaos criticality do not necessarily co-occur in neural networks. Chaos 2017; 27:047408. [PMID: 28456175 DOI: 10.1063/1.4978998]
Abstract
There are indications that for optimizing neural computation, neural networks may operate at criticality. Previous approaches have used distinct fingerprints of criticality, leaving open the question whether the different notions would necessarily reflect different aspects of one and the same instance of criticality, or whether they could potentially refer to distinct instances of criticality. In this work, we choose avalanche criticality and edge-of-chaos criticality and demonstrate for a recurrent spiking neural network that avalanche criticality does not necessarily entrain dynamical edge-of-chaos criticality. This suggests that the different fingerprints may pertain to distinct phenomena.
Affiliation(s)
- Karlis Kanders, Tom Lorimer, Ruedi Stoop: Institute of Neuroinformatics and Institute for Computational Science, University of Zurich and ETH Zurich, Winterthurerstr. 190, 8057 Zurich, Switzerland
67
Abstract
Extended numerical simulations of threshold models have been performed on a human brain network with N=836733 connected nodes available from the Open Connectome Project. While in the case of simple threshold models a sharp discontinuous phase transition without any critical dynamics arises, variable threshold models exhibit extended power-law scaling regions. This is attributed to the fact that Griffiths effects, stemming from the topological or interaction heterogeneity of the network, can become relevant if the input sensitivity of nodes is equalized. I have studied the effects of link directness, as well as the consequence of inhibitory connections. Nonuniversal power-law avalanche size and time distributions have been found with exponents agreeing with the values obtained in electrode experiments of the human brain. The dynamical critical region occurs in an extended control parameter space without the assumption of self-organized criticality.
Affiliation(s)
- Géza Ódor: Institute of Technical Physics and Materials Science, Centre for Energy Research of the Hungarian Academy of Sciences, P.O. Box 49, H-1525 Budapest, Hungary
68
Yada Y, Mita T, Sanada A, Yano R, Kanzaki R, Bakkum DJ, Hierlemann A, Takahashi H. Development of neural population activity toward self-organized criticality. Neuroscience 2016; 343:55-65. [PMID: 27915209 DOI: 10.1016/j.neuroscience.2016.11.031]
Abstract
Self-organized criticality (SoC), a spontaneous dynamic state established and maintained in networks of moderate complexity, is a universal characteristic of neural systems. Such systems produce cascades of spontaneous activity that are typically characterized by power-law distributions and rich, stable spatiotemporal patterns (i.e., neuronal avalanches). Since the dynamics of the critical state confer advantages in information processing within neuronal networks, it is of great interest to determine how criticality emerges during development. One possible mechanism is developmental, and includes axonal elongation during synaptogenesis and subsequent synaptic pruning in combination with the maturation of GABAergic inhibition (i.e., the integration then fragmentation process). Because experimental evidence for this mechanism remains inconclusive, we studied the developmental variation of neuronal avalanches in dissociated cortical neurons using high-density complementary metal-oxide semiconductor (CMOS) microelectrode arrays (MEAs). The spontaneous activities of nine cultures were monitored using CMOS MEAs from 4 to 30 days in vitro (DIV) at single-cell spatial resolution. While cells were immature, cultures demonstrated random-like patterns of activity and an exponential avalanche size distribution; this distribution was followed by a bimodal distribution, and finally a power-law-like distribution. The bimodal distribution was associated with a large-scale avalanche with a homogeneous spatiotemporal pattern, while the subsequent power-law distribution was associated with diverse patterns. These results suggest that the SoC emerges through a two-step process: the integration process accompanying the characteristic large-scale avalanche and the fragmentation process associated with diverse middle-size avalanches.
Affiliation(s)
- Yuichiro Yada: Research Center for Advanced Science and Technology, The University of Tokyo, 4-6-1, Komaba, Meguro-ku, Tokyo 153-8904, Japan; Department of Mechano-Informatics, Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1, Hongo, Bunkyo-ku, Tokyo 113-8654, Japan; Japan Society for the Promotion of Science (JSPS) Research Fellow, 5-3-1, Koji-machi, Chiyoda-ku, Tokyo 102-0083, Japan
- Takeshi Mita, Akihiro Sanada, Ryuichi Yano, Ryohei Kanzaki, Hirokazu Takahashi: Research Center for Advanced Science and Technology, The University of Tokyo, 4-6-1, Komaba, Meguro-ku, Tokyo 153-8904, Japan; Department of Mechano-Informatics, Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1, Hongo, Bunkyo-ku, Tokyo 113-8654, Japan
- Douglas J Bakkum, Andreas Hierlemann: Department of Biosystems Science and Engineering, ETH, Mattenstrasse 26, 4058 Basel, Switzerland
69
Timme NM, Marshall NJ, Bennett N, Ripp M, Lautzenhiser E, Beggs JM. Criticality Maximizes Complexity in Neural Tissue. Front Physiol 2016; 7:425. [PMID: 27729870 PMCID: PMC5037237 DOI: 10.3389/fphys.2016.00425]
Abstract
The analysis of neural systems leverages tools from many different fields. Drawing on techniques from the study of critical phenomena in statistical mechanics, several studies have reported signatures of criticality in neural systems, including power-law distributions, shape collapses, and optimized quantities under tuning. Independently, neural complexity-an information theoretic measure-has been introduced in an effort to quantify the strength of correlations across multiple scales in a neural system. This measure represents an important tool in complex systems research because it allows for the quantification of the complexity of a neural system. In this analysis, we studied the relationships between neural complexity and criticality in neural culture data. We analyzed neural avalanches in 435 recordings from dissociated hippocampal cultures produced from rats, as well as neural avalanches from a cortical branching model. We utilized recently developed maximum likelihood estimation power-law fitting methods that account for doubly truncated power-laws, an automated shape collapse algorithm, and neural complexity and branching ratio calculation methods that account for sub-sampling, all of which are implemented in the freely available Neural Complexity and Criticality MATLAB toolbox. We found evidence that neural systems operate at or near a critical point and that neural complexity is optimized in these neural systems at or near the critical point. Surprisingly, we found evidence that complexity in neural systems is dependent upon avalanche profiles and neuron firing rate, but not precise spiking relationships between neurons. In order to facilitate future research, we made all of the culture data utilized in this analysis freely available online.
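The maximum-likelihood power-law fitting mentioned above can be sketched for the simplest continuous, untruncated case (the toolbox's doubly truncated estimators are more elaborate; all numbers here are synthetic):

```python
import math
import random

rng = random.Random(2)
alpha_true, x_min = 2.5, 1.0

# Synthetic power-law data via inverse-CDF sampling:
# for P(x) proportional to x**(-alpha) above x_min, x = x_min * u**(-1/(alpha-1)).
xs = [x_min * rng.random() ** (-1 / (alpha_true - 1)) for _ in range(20_000)]

# Continuous maximum-likelihood estimator of the exponent (Hill estimator):
# alpha_hat = 1 + n / sum(ln(x_i / x_min)).
n = len(xs)
alpha_hat = 1 + n / sum(math.log(x / x_min) for x in xs)
print(f"true exponent {alpha_true}, ML estimate {alpha_hat:.3f}")
```

Unlike least-squares fits to log-log histograms, this estimator is unbiased for large samples, which is why ML fitting is the standard in the avalanche literature.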
Affiliation(s)
- Nicholas M. Timme: Department of Psychology, Indiana University - Purdue University Indianapolis, Indianapolis, IN, USA
- Monica Ripp: Department of Physics, Syracuse University, Syracuse, NY, USA
- John M. Beggs: Department of Physics, Indiana University, Bloomington, IN, USA; Biocomplexity Institute, Indiana University, Bloomington, IN, USA
70
Li Y, Kulvicius T, Tetzlaff C. Induction and Consolidation of Calcium-Based Homo- and Heterosynaptic Potentiation and Depression. PLoS One 2016; 11:e0161679. [PMID: 27560350 PMCID: PMC4999190 DOI: 10.1371/journal.pone.0161679]
Abstract
The adaptive mechanisms of homo- and heterosynaptic plasticity play an important role in learning and memory. In order to maintain plasticity-induced changes for longer time scales (up to several days), they have to be consolidated by transferring them from a short-lasting early-phase to a long-lasting late-phase state. The underlying processes of this synaptic consolidation are already well-known for homosynaptic plasticity; however, it is not clear whether the same processes also enable the induction and consolidation of heterosynaptic plasticity. In this study, by extending a generic calcium-based plasticity model with the processes of synaptic consolidation, we show in simulations that indeed heterosynaptic plasticity can be induced and, furthermore, consolidated by the same underlying processes as for homosynaptic plasticity. Furthermore, we show that by local diffusion processes the heterosynaptic effect can be restricted to a few synapses neighboring the homosynaptically changed ones. Taken together, this generic model reproduces many experimental results of synaptic tagging and consolidation, provides several predictions for heterosynaptic induction and consolidation, and yields insights into the complex interactions between homo- and heterosynaptic plasticity over a broad variety of time (minutes to days) and spatial scales (several micrometers).
Affiliation(s)
- Yinyun Li: III. Institute of Physics – Biophysics, Georg-August-University, 37077 Göttingen, Germany; Bernstein Center for Computational Neuroscience, Georg-August-University, 37077 Göttingen, Germany; School of System Science, Beijing Normal University, 100875 Beijing, China
- Tomas Kulvicius: III. Institute of Physics – Biophysics, Georg-August-University, 37077 Göttingen, Germany; Maersk Mc-Kinney Moller Institute, University of Southern Denmark, 5230 Odense, Denmark
- Christian Tetzlaff: Bernstein Center for Computational Neuroscience, Georg-August-University, 37077 Göttingen, Germany; Max Planck Institute for Dynamics and Self-Organization, 37077 Göttingen, Germany; Department of Neurobiology, Weizmann Institute of Science, 76100 Rehovot, Israel
71
Minati L, de Candia A, Scarpetta S. Critical phenomena at a first-order phase transition in a lattice of glow lamps: Experimental findings and analogy to neural activity. Chaos 2016; 26:073103. [PMID: 27475063 DOI: 10.1063/1.4954879]
Abstract
Networks of non-linear electronic oscillators have shown potential as physical models of neural dynamics. However, two properties of brain activity, namely, criticality and metastability, remain under-investigated with this approach. Here, we present a simple circuit that exhibits both phenomena. The apparatus consists of a two-dimensional square lattice of capacitively coupled glow (neon) lamps. The dynamics of lamp breakdown (flash) events are controlled by a DC voltage globally connected to all nodes via fixed resistors. Depending on this parameter, two phases having distinct event rate and degree of spatiotemporal order are observed. The transition between them is hysteretic, thus a first-order one, and it is possible to enter a metastability region, wherein, approaching a spinodal point, critical phenomena emerge. Avalanches of events occur according to power-law distributions having exponents ≈3/2 for size and ≈2 for duration, and fractal structure is evident as power-law scaling of the Fano factor. These critical exponents overlap observations in biological neural networks; hence, this circuit may have value as building block to realize corresponding physical models.
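The Fano-factor signature of clustered (fractal) event trains mentioned above rests on a simple statistic; the sketch below uses a toy burst process (an illustrative stand-in, not the lamp-lattice data) to show how clustering pushes the Fano factor well above the Poisson value of 1 and makes it grow with the counting window:

```python
import random

def fano(events, total_time, window):
    """Fano factor (variance/mean) of event counts in consecutive windows."""
    counts = [0] * (total_time // window)
    for t in events:
        if t // window < len(counts):
            counts[t // window] += 1
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    return var / mean

rng = random.Random(3)
T = 100_000

# Memoryless reference process: independent events, Fano factor near 1.
poisson = [t for t in range(T) if rng.random() < 0.05]

# Toy clustered process: long bursts of contiguous events, giving
# super-Poisson variability that grows with the counting window.
clustered, t = [], 0
while t < T:
    if rng.random() < 0.005:  # burst onset
        burst = rng.randint(10, 200)
        clustered.extend(range(t, min(t + burst, T)))
        t += burst
    t += 1

for w in (10, 100, 1000):
    print(w, round(fano(poisson, T, w), 2), round(fano(clustered, T, w), 2))
```

Power-law growth of the Fano factor with window size is the scaling the authors use as evidence of fractal temporal structure.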
Affiliation(s)
- Ludovico Minati: Center for Mind/Brain Sciences, University of Trento, 38123 Mattarello, Italy
- Antonio de Candia: Department of Physics "E. Pancini," University of Naples "Federico II," Napoli, Italy
72
Fauth M, Tetzlaff C. Opposing Effects of Neuronal Activity on Structural Plasticity. Front Neuroanat 2016; 10:75. [PMID: 27445713 PMCID: PMC4923203 DOI: 10.3389/fnana.2016.00075]
Abstract
The connectivity of the brain is continuously adjusted to new environmental influences by several activity-dependent adaptive processes. The most investigated adaptive mechanism is activity-dependent functional or synaptic plasticity regulating the transmission efficacy of existing synapses. Another important but less prominently discussed adaptive process is structural plasticity, which changes the connectivity by the formation and deletion of synapses. In this review, we show, based on experimental evidence, that structural plasticity can be classified, similarly to synaptic plasticity, into two categories: (i) Hebbian structural plasticity, which leads to an increase (decrease) of the number of synapses during phases of high (low) neuronal activity and (ii) homeostatic structural plasticity, which balances these changes by removing and adding synapses. Furthermore, based on experimental and theoretical insights, we argue that each type of structural plasticity fulfills a different function. While Hebbian structural changes enhance memory lifetime, storage capacity, and memory robustness, homeostatic structural plasticity self-organizes the connectivity of the neural network to assure stability. However, the link between functional synaptic and structural plasticity as well as the detailed interactions between Hebbian and homeostatic structural plasticity are more complex. This implies even richer dynamics requiring further experimental and theoretical investigations.
Affiliation(s)
- Michael Fauth: Department of Computational Neuroscience, Third Institute of Physics - Biophysics, Georg-August University, Göttingen, Germany; Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Christian Tetzlaff: Bernstein Center for Computational Neuroscience, Göttingen, Germany; Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
73
Abstract
A simple theory of health has recently been proposed: while poor quality regulation corresponds to poor quality health so that improving regulation should improve health, optimal regulation optimizes function and optimizes health. Examining the term 'optimal regulation' in biological systems leads to a straightforward definition in terms of 'criticality' in complexity biology, a concept that seems to apply universally throughout biology. Criticality maximizes information processing and sensitivity of response to external stimuli, and for these reasons may be held to optimize regulation. In this way a definition of health has been given in terms of regulation, a scientific concept, which ties into detailed properties of complex systems, including brain cortices, and mental health. Models of experience and meditation built on complexity also point to criticality: it represents the condition making self-awareness possible, and is strengthened by meditation practices leading to the state of pure consciousness, the content-free state of mind in deep meditation. From this it follows that healthy function of the brain cortex, its sensitivity and consistency of response to external challenges, should improve by practicing techniques leading to content-free awareness, transcending the original focus introduced during practice. Evidence for this is reviewed.
Affiliation(s)
- Alex Hankey, Rashmi Shetkar: Vivekananda Yoga Anusandhana Samsthana, Department of Yoga and Physical Science, Karnataka, India
74
Schneidman E. Towards the design principles of neural population codes. Curr Opin Neurobiol 2016; 37:133-140. [PMID: 27016639 DOI: 10.1016/j.conb.2016.03.001]
Abstract
The ability to record the joint activity of large groups of neurons would allow for direct study of information representation and computation at the level of whole circuits in the brain. The combinatorial space of potential population activity patterns and neural noise imply that it would be impossible to directly map the relations between stimuli and population responses. Understanding of large neural population codes therefore depends on identifying simplifying design principles. We review recent results showing that strongly correlated population codes can be explained using minimal models that rely on low order relations among cells. We discuss the implications for large populations, and how such models allow for mapping the semantic organization of the neural codebook and stimulus space, and decoding.
Affiliation(s)
- Elad Schneidman: Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
75
Ribeiro TL, Ribeiro S, Copelli M. Repertoires of Spike Avalanches Are Modulated by Behavior and Novelty. Front Neural Circuits 2016; 10:16. [PMID: 27047341 PMCID: PMC4802163 DOI: 10.3389/fncir.2016.00016]
Abstract
Neuronal avalanches measured as consecutive bouts of thresholded field potentials represent a statistical signature that the brain operates near a critical point. In theory, criticality optimizes stimulus sensitivity, information transmission, computational capability and mnemonic repertoire size. Field potential avalanches recorded via multielectrode arrays from cortical slice cultures are repeatable spatiotemporal activity patterns. It remains unclear whether avalanches of action potentials observed in forebrain regions of freely-behaving rats also form recursive repertoires, and whether these have any behavioral relevance. Here, we show that spike avalanches, recorded from hippocampus (HP) and sensory neocortex of freely-behaving rats, constitute distinct families of recursive spatiotemporal patterns. A significant number of those patterns were specific to a behavioral state. Although avalanches produced during sleep were mostly similar to others that occurred during waking, the repertoire of patterns recruited during sleep differed significantly from that of waking. More importantly, exposure to novel objects increased the rate at which new patterns arose, also leading to changes in post-exposure repertoires, which were significantly different from those before the exposure. A significant number of families occurred exclusively during periods of whisker contact with objects, but few were associated with specific objects. Altogether, the results provide original evidence linking behavior and criticality at the spike level: spike avalanches form repertoires that emerge in waking, recur during sleep, are diversified by novelty and contribute to object representation.
Affiliation(s)
- Tiago L Ribeiro: Section on Critical Brain Dynamics, National Institute of Mental Health (NIMH), National Institutes of Health (NIH), Bethesda, MD, USA; Physics Department, Federal University of Pernambuco (UFPE), Recife, PE, Brazil
- Sidarta Ribeiro: Brain Institute, Federal University of Rio Grande do Norte (UFRN), Natal, RN, Brazil
- Mauro Copelli: Physics Department, Federal University of Pernambuco (UFPE), Recife, PE, Brazil
76
Cultured Cortical Neurons Can Perform Blind Source Separation According to the Free-Energy Principle. PLoS Comput Biol 2015; 11:e1004643. [PMID: 26690814 PMCID: PMC4686348 DOI: 10.1371/journal.pcbi.1004643]
Abstract
Blind source separation is the computation underlying the cocktail party effect: a partygoer can distinguish a particular talker's voice from the ambient noise. Early studies indicated that the brain might use blind source separation as a signal processing strategy for sensory perception and numerous mathematical models have been proposed; however, it remains unclear how the neural networks extract particular sources from a complex mixture of inputs. We discovered that neurons in cultures of dissociated rat cortical cells could learn to represent particular sources while filtering out other signals. Specifically, the distinct classes of neurons in the culture learned to respond to the distinct sources after repeated training stimulation. Moreover, the neural network structures changed to reduce free energy, as predicted by the free-energy principle, a candidate unified theory of learning and memory, and by Jaynes' principle of maximum entropy. This implicit learning can only be explained by some form of Hebbian plasticity. These results are the first in vitro (as opposed to in silico) demonstration of neural networks performing blind source separation, and the first formal demonstration of neuronal self-organization under the free energy principle.
77
Hankey A. A complexity basis for phenomenology: How information states at criticality offer a new approach to understanding experience of self, being and time. Prog Biophys Mol Biol 2015; 119:288-302. [DOI: 10.1016/j.pbiomolbio.2015.07.010]
78
Isomura T, Shimba K, Takayama Y, Takeuchi A, Kotani K, Jimbo Y. Signal transfer within a cultured asymmetric cortical neuron circuit. J Neural Eng 2015; 12:066023. [DOI: 10.1088/1741-2560/12/6/066023]
79
Poli D, Pastore VP, Massobrio P. Functional connectivity in in vitro neuronal assemblies. Front Neural Circuits 2015; 9:57. [PMID: 26500505 PMCID: PMC4595785 DOI: 10.3389/fncir.2015.00057]
Abstract
Complex network topologies represent the necessary substrate to support complex brain functions. In this work, we reviewed in vitro neuronal networks coupled to Micro-Electrode Arrays (MEAs) as a biological substrate. Networks of dissociated neurons developing in vitro and coupled to MEAs represent a valid experimental model for studying the mechanisms governing the formation, organization and conservation of neuronal cell assemblies. In this review, we present some examples of the use of statistical Cluster Coefficients and Small World indices to infer topological rules underlying the dynamics exhibited by homogeneous and engineered neuronal networks.
Affiliation(s)
- Daniele Poli, Vito P Pastore, Paolo Massobrio: Department of Informatics, Bioengineering, Robotics and System Engineering, University of Genova, Genova, Italy
80
Ness TV, Chintaluri C, Potworowski J, Łęski S, Głąbska H, Wójcik DK, Einevoll GT. Modelling and Analysis of Electrical Potentials Recorded in Microelectrode Arrays (MEAs). Neuroinformatics 2015; 13:403-26. [PMID: 25822810 PMCID: PMC4626530 DOI: 10.1007/s12021-015-9265-6]
Abstract
Microelectrode arrays (MEAs), substrate-integrated planar arrays of up to thousands of closely spaced metal electrode contacts, have long been used to record neuronal activity in in vitro brain slices with high spatial and temporal resolution. However, the analysis of the MEA potentials has generally been qualitative. Here we use a biophysical forward-modelling formalism based on the finite element method (FEM) to establish quantitatively accurate links between neural activity in the slice and potentials recorded in the MEA set-up. Then we develop a simpler approach based on the method of images (MoI) from electrostatics, which allows for computation of MEA potentials by simple formulas similar to what is used for homogeneous volume conductors. As we find MoI to give accurate results in most situations of practical interest, including anisotropic slices covered with highly conductive saline and MEA-electrode contacts of sizable physical extensions, a Python software package (ViMEAPy) has been developed to facilitate forward-modelling of MEA potentials generated by biophysically detailed multicompartmental neurons. We apply our scheme to investigate the influence of the MEA set-up on single-neuron spikes as well as on potentials generated by a cortical network comprising more than 3000 model neurons. The generated MEA potentials are substantially affected by both the saline bath covering the brain slice and a (putative) inadvertent saline layer at the interface between the MEA chip and the brain slice. We further explore methods for estimation of current-source density (CSD) from MEA potentials, and find the results to be much less sensitive to the experimental set-up.
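The method-of-images idea can be sketched in its simplest form, assuming a perfectly insulating chip and ignoring the saline bath (the paper's formalism, implemented in ViMEAPy, treats the full slice-saline-chip geometry): at the chip surface the image term exactly doubles the homogeneous-medium potential.

```python
import math

def phi_homogeneous(src, meas, current=1e-9, sigma=0.3):
    """Point-source potential (V) in an infinite homogeneous conductor
    of conductivity sigma (S/m)."""
    return current / (4 * math.pi * sigma * math.dist(src, meas))

def phi_moi(src, meas, current=1e-9, sigma=0.3):
    """Method of images for an insulating MEA chip filling z < 0:
    the chip is replaced by a mirror source reflected in the z = 0 plane."""
    mirror = (src[0], src[1], -src[2])
    return (phi_homogeneous(src, meas, current, sigma)
            + phi_homogeneous(mirror, meas, current, sigma))

source = (0.0, 0.0, 50e-6)     # current source 50 um above the chip
electrode = (10e-6, 0.0, 0.0)  # electrode contact in the chip plane

ratio = phi_moi(source, electrode) / phi_homogeneous(source, electrode)
print(f"potential at the chip surface is boosted by a factor {ratio:.1f}")
```

A conductive saline layer above the slice would add further, attenuated image terms; the factor of 2 is the limiting case for a source sitting over a perfect insulator.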
Affiliation(s)
- Torbjørn V Ness: Department of Mathematical Sciences and Technology, Norwegian University of Life Sciences, Ås, Norway
- Chaitanya Chintaluri, Jan Potworowski, Szymon Łęski, Helena Głąbska, Daniel K Wójcik: Department of Neurophysiology, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, Warsaw, Poland
- Gaute T Einevoll: Department of Mathematical Sciences and Technology, Norwegian University of Life Sciences, Ås, Norway; Department of Physics, University of Oslo, Oslo, Norway
81
Ódor G, Dickman R, Ódor G. Griffiths phases and localization in hierarchical modular networks. Sci Rep 2015; 5:14451. [PMID: 26399323] [PMCID: PMC4585858] [DOI: 10.1038/srep14451]
Abstract
We study variants of hierarchical modular network models suggested by Kaiser and Hilgetag [Front. Neuroinform. 4 (2010) 8] to model functional brain connectivity, using extensive simulations and quenched mean-field theory (QMF), focusing on structures with a connection probability that decays exponentially with the level index. Such networks can be embedded in two-dimensional Euclidean space. We explore the dynamic behavior of the contact process (CP) and threshold models on networks of this kind, including hierarchical trees. While in the small-world networks originally proposed to model brain connectivity the topological heterogeneities are not strong enough to induce deviations from mean-field behavior, we show that a Griffiths phase can emerge under reduced connection probabilities, approaching the percolation threshold. In this case the topological dimension of the networks is finite, and extended regions of bursty, power-law dynamics are observed. Localization in the steady state is also shown via QMF. We investigate the effects of link asymmetry and coupling disorder, and show that localization can occur even in small-world networks with high connectivity in the case of link disorder.
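The contact process placed on these networks is easy to sketch on the simplest substrate. The following toy discrete-time version on a ring (not the hierarchical modular graphs of the paper; the function name and update rule are illustrative assumptions) shows the basic active/absorbing behavior underlying the Griffiths-phase analysis:

```python
import random

def contact_process(n=200, lam=3.0, steps=400, seed=0):
    """Toy contact process on a ring of n sites, starting fully active:
    at each update a random active site either recovers (prob 1/(1+lam))
    or activates a random nearest neighbour (prob lam/(1+lam)).
    Returns the density of active sites after each sweep."""
    rng = random.Random(seed)
    active = set(range(n))
    density = []
    for _ in range(steps):
        for _ in range(max(1, len(active))):
            if not active:
                break
            site = rng.choice(tuple(active))
            if rng.random() < 1.0 / (1.0 + lam):
                active.discard(site)              # recovery
            else:
                active.add((site + rng.choice((-1, 1))) % n)  # infection
        density.append(len(active) / n)
    return density
```

For large lam the activity is sustained; for small lam the system falls into the absorbing (all-inactive) state, the transition the Griffiths-phase analysis is built around.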
Affiliation(s)
- Géza Ódor
- MTA-MFA-EK Research Institute for Technical Physics and Materials Science, H-1121 Budapest, P.O. Box 49, Hungary
- Ronald Dickman
- Departamento de Fisica and National Institute of Science and Technology of Complex Systems, ICEx, Universidade Federal de Minas Gerais, Caixa Postal 702, 30161-970, Belo Horizonte, Minas Gerais, Brazil
- Gergely Ódor
- Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139-4307, USA
82
Abstract
We illustrate here how a well-founded study of the brain may originate in assuming analogies with phase-transition phenomena. Analyzing the extent to which a weak signal endures in noisy environments, we identify the underlying mechanisms, arriving at a description of how the excitability associated with (non-equilibrium) phase changes and criticality optimizes the processing of the signal. Our setting is a network of integrate-and-fire nodes in which connections are heterogeneous, with rapidly time-varying intensities mimicking fatigue and potentiation. Emergence then becomes quite robust against modifications of the wiring topology (we considered networks ranging from fully connected to the Homo sapiens connectome), showing the essential role of synaptic flickering in computations. We also suggest how to reveal significant changes during actual brain operation experimentally.
83
Bellay T, Klaus A, Seshadri S, Plenz D. Irregular spiking of pyramidal neurons organizes as scale-invariant neuronal avalanches in the awake state. eLife 2015; 4:e07224. [PMID: 26151674] [PMCID: PMC4492006] [DOI: 10.7554/elife.07224]
Abstract
Spontaneous fluctuations in neuronal activity emerge at many spatial and temporal scales in cortex. Population measures found these fluctuations to organize as scale-invariant neuronal avalanches, suggesting cortical dynamics to be critical. Macroscopic dynamics, though, depend on physiological states and are ambiguous as to their cellular composition, spatiotemporal origin, and contributions from synaptic input or action potential (AP) output. Here, we study spontaneous firing in pyramidal neurons (PNs) from rat superficial cortical layers in vivo and in vitro using 2-photon imaging. As the animal transitions from the anesthetized to awake state, spontaneous single neuron firing increases in irregularity and assembles into scale-invariant avalanches at the group level. In vitro spike avalanches emerged naturally yet required balanced excitation and inhibition. This demonstrates that neuronal avalanches are linked to the global physiological state of wakefulness and that cortical resting activity organizes as avalanches from firing of local PN groups to global population activity.

Even when we are not engaged in any specific task, the brain shows coordinated patterns of spontaneous activity that can be monitored using electrodes placed on the scalp. This resting activity shapes the way that the brain responds to subsequent stimuli. Changes in resting activity patterns are seen in various neurological and psychiatric disorders, as well as in healthy individuals following sleep deprivation. The brain's outer layer is known as the cortex. On a large scale, when monitoring many thousands of neurons, resting activity in the cortex propagates through the brain in an organized manner. Specifically, resting activity was found to organize as so-called neuronal avalanches, in which large bursts of neuronal activity are grouped with medium-sized and smaller bursts in a very characteristic order. In fact, the sizes of these bursts—that is, the number of neurons that fire—are found to be scale-invariant: the ratio of large bursts to medium-sized bursts is the same as that of medium-sized to small bursts. Such scale-invariance suggests that neuronal bursts are not independent of one another. However, it is largely unclear how neuronal avalanches arise from individual neurons, which fire simply in a noisy, irregular manner. Bellay, Klaus et al. have now provided insights into this process by examining patterns of firing of a particular type of neuron—known as a pyramidal cell—in the cortex of rats as they recover from anesthesia. As the animals awaken, the firing of individual pyramidal cells in the cortex becomes even more irregular than under anesthesia. However, by considering the activity of a group of these neurons, Bellay, Klaus et al. realized that it is this more irregular firing that gives rise to neuronal avalanches, and that this occurs only when the animals are awake. Further experiments on individual pyramidal cells grown in the laboratory confirmed that neuronal avalanches emerge spontaneously from the irregular firing of individual neurons. These avalanches depend on there being a balance between two types of activity among the cells: 'excitatory' activity that causes other neurons to fire, and 'inhibitory' activity that prevents neuronal firing. Given that resting activity influences the brain's responses to the outside world, the origins of neuronal avalanches are likely to provide clues about the way the brain processes information. Future experiments should also examine the possibility that the emergence of neuronal avalanches marks the transition from unconsciousness to wakefulness within the brain.
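Operationally, avalanches like these are usually extracted from a binned population spike raster: runs of consecutive non-empty time bins, bounded by empty bins, with size equal to the total spike count in the run. A minimal sketch (the bin width and the zero-activity threshold are analysis choices, not prescribed by the study):

```python
def avalanche_sizes(spike_counts):
    """Split a sequence of per-bin population spike counts into avalanches
    (runs of non-empty bins separated by empty bins) and return the list
    of avalanche sizes, i.e. total spikes per run."""
    sizes, current = [], 0
    for c in spike_counts:
        if c > 0:
            current += c
        elif current > 0:
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return sizes

print(avalanche_sizes([0, 2, 1, 0, 0, 5, 0, 1, 1, 1, 0]))  # → [3, 5, 3]
```

A histogram of these sizes is what is then tested for the scale-invariant (power-law) form discussed above.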
Affiliation(s)
- Timothy Bellay
- Section on Critical Brain Dynamics, National Institute of Mental Health, Bethesda, United States
- Andreas Klaus
- Section on Critical Brain Dynamics, National Institute of Mental Health, Bethesda, United States
- Saurav Seshadri
- Section on Critical Brain Dynamics, National Institute of Mental Health, Bethesda, United States
- Dietmar Plenz
- Section on Critical Brain Dynamics, National Institute of Mental Health, Bethesda, United States
84
Massobrio P, Pasquale V, Martinoia S. Self-organized criticality in cortical assemblies occurs in concurrent scale-free and small-world networks. Sci Rep 2015; 5:10578. [PMID: 26030608] [PMCID: PMC4450754] [DOI: 10.1038/srep10578]
Abstract
The spontaneous activity of cortical networks is characterized by the emergence of different dynamic states. Although several attempts have been made to understand the origin of these dynamics, the underlying factors remain elusive. In this work, we specifically investigated the interplay between network topology and spontaneous dynamics within the framework of self-organized criticality (SOC). The obtained results support the hypothesis that the emergence of critical states occurs in specific complex network topologies. By combining multi-electrode recordings of spontaneous activity of in vitro cortical assemblies with theoretical models, we demonstrate that different 'connectivity rules' drive the network towards different dynamic states. In particular, scale-free architectures with different degrees of small-worldness account better for the variability observed in experimental data, giving rise to different dynamic states. Moreover, depending on the balance between excitation and inhibition and on the percentage of inhibitory hubs, the simulated cortical networks fall into a critical regime.
Affiliation(s)
- Paolo Massobrio
- Neuroengineering and Bio-nano Technology Lab (NBT), Department of Informatics, Bioengineering, Robotics, System Engineering (DIBRIS), University of Genova, Genova, Italy
- Valentina Pasquale
- Department of Neuroscience and Brain Technologies, Istituto Italiano di Tecnologia (IIT), Genova, Italy
- Sergio Martinoia
- Neuroengineering and Bio-nano Technology Lab (NBT), Department of Informatics, Bioengineering, Robotics, System Engineering (DIBRIS), University of Genova, Genova, Italy
85
Abstract
Complex cognitive processes require neuronal activity to be coordinated across multiple scales, ranging from local microcircuits to cortex-wide networks. However, multiscale cortical dynamics are not well understood because few experimental approaches have provided sufficient support for hypotheses involving multiscale interactions. To address these limitations, we used genetically encoded voltage indicator imaging in mice, which measures cortex-wide electrical activity at high spatiotemporal resolution. Here we show that, as mice recovered from anesthesia, scale-invariant spatiotemporal patterns of neuronal activity gradually emerged. We show for the first time that this scale-invariant activity spans four orders of magnitude in awake mice. In contrast, we found that the cortical dynamics of anesthetized mice were not scale invariant. Our results bridge empirical evidence from disparate scales and support theoretical predictions that the awake cortex operates in a dynamical regime known as criticality. The criticality hypothesis predicts that small-scale cortical dynamics are governed by the same principles as those governing larger-scale dynamics. Importantly, these scale-invariant principles also optimize certain aspects of information processing. Our results suggest that during the emergence from anesthesia, criticality arises as information processing demands increase. We expect that, as measurement tools advance toward larger scales and greater resolution, the multiscale framework offered by criticality will continue to provide quantitative predictions and insight into how neurons, microcircuits, and large-scale networks are dynamically coordinated in the brain.
86
Patel TP, Man K, Firestein BL, Meaney DF. Automated quantification of neuronal networks and single-cell calcium dynamics using calcium imaging. J Neurosci Methods 2015; 243:26-38. [PMID: 25629800] [PMCID: PMC5553047] [DOI: 10.1016/j.jneumeth.2015.01.020]
Abstract
BACKGROUND: Recent advances in genetically engineered calcium and membrane potential indicators provide the potential to estimate the activation dynamics of individual neurons within larger, mesoscale networks (100s to 1000+ neurons). However, a fully integrated automated workflow for the analysis and visualization of neural microcircuits from high-speed fluorescence imaging data is lacking.
NEW METHOD: Here we introduce FluoroSNNAP, the Fluorescence Single Neuron and Network Analysis Package. FluoroSNNAP is an open-source, interactive software package developed in MATLAB for automated quantification of numerous biologically relevant features of both the calcium dynamics of single cells and network activity patterns. FluoroSNNAP integrates and improves upon existing tools for spike detection, synchronization analysis, and inference of functional connectivity, making it most useful to experimentalists with little or no programming knowledge.
RESULTS: We apply FluoroSNNAP to characterize the activity patterns of neuronal microcircuits undergoing developmental maturation in vitro. Separately, we highlight the utility of single-cell analysis for phenotyping a mixed population of neurons expressing a human mutant variant of the microtubule-associated protein tau and wild-type tau.
COMPARISON WITH EXISTING METHOD(S): We show the performance of semi-automated cell segmentation using spatiotemporal independent component analysis and significant improvement in detecting calcium transients using a template-based algorithm in comparison to peak-based or wavelet-based detection methods. Our software further enables automated analysis of microcircuits, which is an improvement over existing methods.
CONCLUSIONS: We expect the dissemination of this software will facilitate comprehensive analysis of neuronal networks, promoting the rapid interrogation of circuits in health and disease.
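The template-based detection favored in the comparison can be illustrated schematically: slide a transient-shaped template along the fluorescence trace and flag windows whose Pearson correlation with the template crosses a threshold. This is a bare-bones sketch, not FluoroSNNAP's MATLAB implementation; the 0.85 threshold, the refractory rule, and the function names are illustrative assumptions.

```python
import math

def pearson(a, b):
    """Pearson correlation of two equal-length sequences (0.0 if either is flat)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb) if va > 0 and vb > 0 else 0.0

def detect_transients(trace, template, threshold=0.85):
    """Return onset indices where the sliding-window correlation between
    the trace and the transient template exceeds the threshold, skipping
    windows that overlap a previous detection."""
    m = len(template)
    onsets = []
    for i in range(len(trace) - m + 1):
        if onsets and i - onsets[-1] <= m:
            continue                      # refractory: skip overlapping hits
        if pearson(trace[i:i + m], template) >= threshold:
            onsets.append(i)
    return onsets
```

Because the correlation is shape-based rather than amplitude-based, small transients that match the template's rise-and-decay profile are still detected, which is the cited advantage over simple peak thresholds.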
Affiliation(s)
- Tapan P Patel
- Department of Bioengineering, University of Pennsylvania, United States
- Karen Man
- Department of Bioengineering, University of Pennsylvania, United States
- Bonnie L Firestein
- Department of Cell Biology and Neuroscience, Rutgers University, United States
- David F Meaney
- Department of Bioengineering, University of Pennsylvania, United States; Department of Neurosurgery, University of Pennsylvania, United States
87
Butz M, Steenbuck ID, van Ooyen A. Homeostatic structural plasticity can account for topology changes following deafferentation and focal stroke. Front Neuroanat 2014; 8:115. [PMID: 25360087] [PMCID: PMC4199279] [DOI: 10.3389/fnana.2014.00115]
Abstract
After brain lesions caused by tumors or stroke, or after lasting loss of input (deafferentation), inter- and intra-regional brain networks respond with complex changes in topology. Not only areas directly affected by the lesion but also regions remote from the lesion may alter their connectivity—a phenomenon known as diaschisis. Changes in network topology after brain lesions can lead to cognitive decline and increasing functional disability. However, the principles governing changes in network topology are poorly understood. Here, we investigated whether homeostatic structural plasticity can account for changes in network topology after deafferentation and brain lesions. Homeostatic structural plasticity postulates that neurons aim to maintain a desired level of electrical activity by deleting synapses when neuronal activity is too high and by providing new synaptic contacts when activity is too low. Using our Model of Structural Plasticity, we explored how local changes in connectivity induced by a focal loss of input affected global network topology. In accordance with experimental and clinical data, we found that after partial deafferentation the network as a whole became more random, although it maintained its small-world topology, while deafferentated neurons increased their betweenness centrality as they rewired and returned to the homeostatic range of activity. Furthermore, deafferentated neurons increased their global but decreased their local efficiency and developed longer-tailed degree distributions, indicating the emergence of hub neurons. Together, our results suggest that homeostatic structural plasticity may be an important driving force for lesion-induced network reorganization and that the increase in betweenness centrality of deafferentated areas may serve as a biomarker for brain repair.
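The negative-feedback core of homeostatic structural plasticity (grow synaptic elements below the activity set-point, prune them above it) can be caricatured in a few lines. This toy loop uses in-degree as a crude stand-in for electrical activity and ignores the distance-dependent synapse formation of the authors' Model of Structural Plasticity; all names and constants are illustrative:

```python
import random

def homeostatic_rewiring(n=10, target=1.0, gain=0.5, steps=50, seed=0):
    """Each neuron grows one synaptic element per step while its activity
    proxy (gain * in-degree) is below the set-point, and prunes one while
    above it.  Returns the final in-degrees."""
    rng = random.Random(seed)
    indegree = [rng.randint(0, 6) for _ in range(n)]
    for _ in range(steps):
        for i in range(n):
            activity = gain * indegree[i]
            if activity < target:
                indegree[i] += 1          # form a new synaptic contact
            elif activity > target and indegree[i] > 0:
                indegree[i] -= 1          # delete a synapse
    return indegree
```

Every neuron converges to in-degree target/gain (here 2): the rewiring analogue of returning to the homeostatic range of activity described above.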
Affiliation(s)
- Markus Butz
- Simulation Lab Neuroscience - Bernstein Facility for Simulation and Database Technology, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Forschungszentrum Jülich, Jülich, Germany
- Ines D Steenbuck
- Student of the Medical Faculty, University of Freiburg, Freiburg, Germany
- Arjen van Ooyen
- Department of Integrative Neurophysiology, VU University Amsterdam, Amsterdam, Netherlands
88
Pizzella V, Marzetti L, Penna SD, de Pasquale F, Zappasodi F, Romani GL. Magnetoencephalography in the study of brain dynamics. Functional Neurology 2014; 29:241-253. [PMID: 25764254] [PMCID: PMC4370437]
Abstract
To progress toward understanding of the mechanisms underlying the functional organization of the human brain, either a bottom-up or a top-down approach may be adopted. The former starts from the study of the detailed functioning of a small number of neuronal assemblies, while the latter tries to decode brain functioning by considering the brain as a whole. This review discusses the top-down approach and the use of magnetoencephalography (MEG) to describe global brain properties. The main idea behind this approach is that the concurrence of several areas is required for the brain to instantiate a specific behavior/functioning. A central issue is therefore the study of brain functional connectivity and the concept of brain networks as ensembles of distant brain areas that preferentially exchange information. Importantly, the human brain is a dynamic device, and MEG is ideally suited to investigate phenomena on behaviorally relevant timescales, also offering the possibility of capturing behaviorally-related brain connectivity dynamics.
89
Hesse J, Gross T. Self-organized criticality as a fundamental property of neural systems. Front Syst Neurosci 2014; 8:166. [PMID: 25294989] [PMCID: PMC4171833] [DOI: 10.3389/fnsys.2014.00166]
Abstract
The neural criticality hypothesis states that the brain may be poised in a critical state at a boundary between different types of dynamics. Theoretical and experimental studies show that critical systems often exhibit optimal computational properties, suggesting the possibility that criticality has been evolutionarily selected as a useful trait for our nervous system. Evidence for criticality has been found in cell cultures, brain slices, and anesthetized animals. Yet, inconsistent results were reported for recordings in awake animals and humans, and current results point to open questions about the exact nature and mechanism of criticality, as well as its functional role. Therefore, the criticality hypothesis has remained a controversial proposition. Here, we provide an account of the mathematical and physical foundations of criticality. In the light of this conceptual framework, we then review and discuss recent experimental studies with the aim of identifying important next steps to be taken and connections to other fields that should be explored.
Affiliation(s)
- Janina Hesse
- Computational Neurophysiology Group, Institute for Theoretical Biology, Humboldt Universität zu Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany; École Normale Supérieure, Paris, France
- Thilo Gross
- Department of Engineering Mathematics, Merchant Venturers School of Engineering, University of Bristol, Bristol, UK
90
Roberts JA, Iyer KK, Vanhatalo S, Breakspear M. Critical role for resource constraints in neural models. Front Syst Neurosci 2014; 8:154. [PMID: 25309349] [PMCID: PMC4163687] [DOI: 10.3389/fnsys.2014.00154]
Abstract
Criticality has emerged as a leading dynamical candidate for healthy and pathological neuronal activity. At the heart of criticality in neural systems is the need for parameters to be tuned to specific values or for the existence of self-organizing mechanisms. Existing models lack precise physiological descriptions for how the brain maintains its tuning near a critical point. In this paper we argue that a key ingredient missing from the field is a formulation of reciprocal coupling between neural activity and metabolic resources. We propose that the constraint of optimizing the balance between energy use and activity plays a major role in tuning brain states to lie near criticality. Important recent findings aligned with our viewpoint have emerged from analyses of disorders that involve severe metabolic disturbances and alter scale-free properties of brain dynamics, including burst suppression. Moreover, we argue that average shapes of neuronal avalanches are a signature of scale-free activity that offers sharper insights into underlying mechanisms than afforded by traditional analyses of avalanche statistics.
Affiliation(s)
- James A Roberts
- Systems Neuroscience Group, QIMR Berghofer Medical Research Institute, Brisbane, QLD, Australia
- Kartik K Iyer
- Systems Neuroscience Group, QIMR Berghofer Medical Research Institute, Brisbane, QLD, Australia; Faculty of Health Sciences, School of Medicine, University of Queensland, Brisbane, QLD, Australia
- Sampsa Vanhatalo
- Department of Clinical Neurophysiology, Children's Hospital, Helsinki University Central Hospital, University of Helsinki, Helsinki, Finland
- Michael Breakspear
- Systems Neuroscience Group, QIMR Berghofer Medical Research Institute, Brisbane, QLD, Australia; Royal Brisbane and Women's Hospital, Herston, QLD, Australia
91
Thivierge JP, Tauskela JS. Development of avalanches and efficient communication in neuronal networks. BMC Neurosci 2014. [PMCID: PMC4126459] [DOI: 10.1186/1471-2202-15-s1-p31]
92
Priesemann V, Wibral M, Valderrama M, Pröpper R, Le Van Quyen M, Geisel T, Triesch J, Nikolić D, Munk MHJ. Spike avalanches in vivo suggest a driven, slightly subcritical brain state. Front Syst Neurosci 2014; 8:108. [PMID: 25009473] [PMCID: PMC4068003] [DOI: 10.3389/fnsys.2014.00108]
Abstract
In self-organized critical (SOC) systems avalanche size distributions follow power-laws. Power-laws have also been observed for neural activity, and so it has been proposed that SOC underlies brain organization as well. Surprisingly, for spiking activity in vivo, evidence for SOC is still lacking. Therefore, we analyzed highly parallel spike recordings from awake rats and monkeys, anesthetized cats, and also local field potentials from humans. We compared these to spiking activity from two established critical models: the Bak-Tang-Wiesenfeld model, and a stochastic branching model. We found fundamental differences between the neural and the model activity. These differences could be overcome for both models through a combination of three modifications: (1) subsampling, (2) increasing the input to the model (this way eliminating the separation of time scales, which is fundamental to SOC and its avalanche definition), and (3) making the model slightly sub-critical. The match between the neural activity and the modified models held not only for the classical avalanche size distributions and estimated branching parameters, but also for two novel measures (mean avalanche size, and frequency of single spikes), and for the dependence of all these measures on the temporal bin size. Our results suggest that neural activity in vivo shows a mélange of avalanches, and not temporally separated ones, and that their global activity propagation can be approximated by the principle that one spike on average triggers a little less than one spike in the next step. This implies that neural activity does not reflect a SOC state but a slightly sub-critical regime without a separation of time scales. Potential advantages of this regime may be faster information processing, and a safety margin from super-criticality, which has been linked to epilepsy.
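The closing intuition (one spike triggers on average slightly less than one spike) corresponds to a Galton-Watson branching process with mean offspring m just below 1, for which the mean avalanche size is 1/(1 - m). A minimal sketch, not the authors' subsampled model; the Poisson offspring choice and the function names are assumptions:

```python
import math
import random

def _poisson(rng, lam):
    """Knuth's algorithm; the stdlib random module has no Poisson sampler."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def avalanche_size(rng, m, max_gen=100_000):
    """Total number of spikes in one avalanche when each spike triggers
    Poisson(m) spikes in the next time step (m < 1: subcritical)."""
    active, size = 1, 1
    for _ in range(max_gen):
        if active == 0:
            break
        active = sum(_poisson(rng, m) for _ in range(active))
        size += active
    return size

rng = random.Random(42)
sizes = [avalanche_size(rng, m=0.95) for _ in range(2000)]
mean_size = sum(sizes) / len(sizes)   # theory: 1 / (1 - 0.95) = 20
```

Pushing m toward 1 makes the mean diverge and the size distribution approach the critical power law, while m slightly below 1 gives the finite, driven regime the paper argues for.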
Affiliation(s)
- Viola Priesemann
- Department of Non-linear Dynamics, Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany; Bernstein Center for Computational Neuroscience, Göttingen, Germany; Frankfurt Institute for Advanced Studies, Frankfurt, Germany; Department of Neurophysiology, Max Planck Institute for Brain Research, Frankfurt, Germany
- Michael Wibral
- Magnetoencephalography Unit, Brain Imaging Center, Johann Wolfgang Goethe University, Frankfurt, Germany; Ernst Strüngmann Institute for Neuroscience in Cooperation with Max Planck Society, Frankfurt, Germany
- Mario Valderrama
- Department of Biomedical Engineering, University of Los Andes, Bogotá, Colombia
- Robert Pröpper
- Neural Information Processing Group, Department of Software Engineering and Theoretical Computer Science, TU Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience, Berlin, Germany
- Michel Le Van Quyen
- Centre de Recherche de l'Institut du Cerveau et de la Moelle épinière, Hôpital de la Pitié-Salpêtrière, INSERM UMRS 975-CNRS UMR 7225-UPMC, Paris, France
- Theo Geisel
- Department of Non-linear Dynamics, Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany; Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Jochen Triesch
- Frankfurt Institute for Advanced Studies, Frankfurt, Germany
- Danko Nikolić
- Frankfurt Institute for Advanced Studies, Frankfurt, Germany; Department of Neurophysiology, Max Planck Institute for Brain Research, Frankfurt, Germany; Ernst Strüngmann Institute for Neuroscience in Cooperation with Max Planck Society, Frankfurt, Germany; Department of Psychology, Faculty of Humanities and Social Sciences, University of Zagreb, Zagreb, Croatia
- Matthias H J Munk
- Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
93
Orlandi JG, Stetter O, Soriano J, Geisel T, Battaglia D. Transfer entropy reconstruction and labeling of neuronal connections from simulated calcium imaging. PLoS One 2014; 9:e98842. [PMID: 24905689] [PMCID: PMC4048312] [DOI: 10.1371/journal.pone.0098842]
Abstract
Neuronal dynamics are fundamentally constrained by the underlying structural network architecture, yet many of the details of this synaptic connectivity are still unknown, even in neuronal cultures in vitro. Here we extend a previous approach based on information theory, the Generalized Transfer Entropy, to the reconstruction of connectivity in simulated neuronal networks of both excitatory and inhibitory neurons. We show that, due to the model-free nature of the developed measure, both kinds of connections can be reliably inferred if the average firing rate between synchronous burst events exceeds a small minimum frequency. Furthermore, we suggest, based on systematic simulations, that even lower spontaneous inter-burst rates could be raised to meet the requirements of our reconstruction algorithm by applying a weak, spatially homogeneous stimulation to the entire network. By combining multiple recordings of the same in silico network before and after pharmacologically blocking inhibitory synaptic transmission, we then show how it becomes possible to infer with high confidence the excitatory or inhibitory nature of each individual neuron.
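For discretized (for instance binarized fluorescence) signals, a plug-in lag-1 transfer entropy already captures the directed-coupling idea that the generalized measure builds on. A schematic sketch, far simpler than the Generalized Transfer Entropy of the paper, which conditions on the network's bursting state and uses longer embeddings; the demo series at the bottom are synthetic:

```python
import math
import random
from collections import Counter

def transfer_entropy(x, y):
    """Lag-1 transfer entropy TE(X -> Y) in bits for two equal-length
    discrete series: how much x_t improves prediction of y_{t+1}
    beyond what y_t already provides."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles = Counter(y[:-1])
    n = len(x) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_full = c / pairs_yx[(y0, x0)]          # p(y1 | y0, x0)
        p_self = pairs_yy[(y1, y0)] / singles[y0]  # p(y1 | y0)
        te += p_joint * math.log2(p_full / p_self)
    return te

rng = random.Random(0)
x = [rng.randint(0, 1) for _ in range(3000)]
y = [0] + x[:-1]                       # y is x delayed by one step
z = [rng.randint(0, 1) for _ in range(3000)]
te_coupled = transfer_entropy(x, y)    # near 1 bit
te_none = transfer_entropy(x, z)       # near 0
```

The plug-in estimate is non-negative and slightly biased upward on finite data, which is one reason the paper's reconstruction relies on thresholding and state conditioning rather than raw values.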
Affiliation(s)
- Javier G. Orlandi
- Departament d'Estructura i Constituents de la Matèria, Universitat de Barcelona, Barcelona, Spain
- Olav Stetter
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
- Georg-August-Universität, Physics Department, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Jordi Soriano
- Departament d'Estructura i Constituents de la Matèria, Universitat de Barcelona, Barcelona, Spain
- Theo Geisel
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
- Georg-August-Universität, Physics Department, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Demian Battaglia
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Institut de Neurosciences des Systèmes, Inserm UMR1106, Aix-Marseille Université, Marseille, France
94
Cabessa J, Villa AEP. An attractor-based complexity measurement for Boolean recurrent neural networks. PLoS One 2014; 9:e94204. [PMID: 24727866] [PMCID: PMC3984152] [DOI: 10.1371/journal.pone.0094204]
Abstract
We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and some specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights into the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results bear new founding elements for the understanding of the complexity of real brain circuits.
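The hierarchy above is built on attractor dynamics. As a hypothetical minimal illustration (not the paper's ω-automata construction), the attractor reached from a given state of a deterministic Boolean network can be found by iterating until a state repeats:

```python
def attractor(update, state):
    """Iterate a deterministic Boolean network from `state` until a state
    repeats, and return the attractor (fixed point or limit cycle) reached."""
    seen = {}          # state -> position in the trajectory
    trajectory = []
    while state not in seen:
        seen[state] = len(trajectory)
        trajectory.append(state)
        state = update(state)
    return trajectory[seen[state]:]
```

For example, a two-neuron network whose update rule exchanges the two states has a two-state limit cycle from state (0, 1) and a fixed point at (1, 1).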
Affiliation(s)
- Jérémie Cabessa
- Neuroheuristic Research Group, Faculty of Business and Economics, University of Lausanne, Lausanne, Switzerland
- Laboratory of Mathematical Economics (LEMMA), University of Paris 2 – Panthéon-Assas, Paris, France
- Alessandro E. P. Villa
- Neuroheuristic Research Group, Faculty of Business and Economics, University of Lausanne, Lausanne, Switzerland
- Grenoble Institute of Neuroscience, Faculty of Medicine, University Joseph Fourier, Grenoble, France
95
Butz M, Steenbuck ID, van Ooyen A. Homeostatic structural plasticity increases the efficiency of small-world networks. Front Synaptic Neurosci 2014; 6:7. [PMID: 24744727 PMCID: PMC3978244 DOI: 10.3389/fnsyn.2014.00007] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.6] [Received: 12/12/2013] [Accepted: 03/10/2014] [Indexed: 11/24/2022]
Abstract
In networks with small-world topology, which are characterized by a high clustering coefficient and a short characteristic path length, information can be transmitted efficiently and at relatively low costs. The brain is composed of small-world networks, and evolution may have optimized brain connectivity for efficient information processing. Despite many studies on the impact of topology on information processing in neuronal networks, little is known about the development of network topology and the emergence of efficient small-world networks. We investigated how a simple growth process that favors short-range connections over long-range connections in combination with a synapse formation rule that generates homeostasis in post-synaptic firing rates shapes neuronal network topology. Interestingly, we found that small-world networks benefited from homeostasis by an increase in efficiency, defined as the averaged inverse of the shortest paths through the network. Efficiency particularly increased as small-world networks approached the desired level of electrical activity. Ultimately, homeostatic small-world networks became almost as efficient as random networks. The increase in efficiency was caused by the emergent property of the homeostatic growth process that neurons started forming more long-range connections, albeit at a low rate, when their electrical activity was close to the homeostatic set-point. Although global network topology continued to change when neuronal activities were around the homeostatic equilibrium, the small-world property of the network was maintained over the entire course of development. Our results may help understand how complex systems such as the brain could set up an efficient network topology in a self-organizing manner. Insights from our work may also lead to novel techniques for constructing large-scale neuronal networks by self-organization.
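The efficiency measure used here, the average inverse shortest-path length, can be sketched for an unweighted network with a breadth-first search; `adj` is an assumed adjacency-list representation, not the authors' code:

```python
from collections import deque

def global_efficiency(adj):
    """Average inverse shortest-path length over all ordered node pairs
    (disconnected pairs contribute 0). `adj` is an adjacency list."""
    n = len(adj)
    total = 0.0
    for src in range(n):
        dist = {src: 0}
        queue = deque([src])
        while queue:                      # breadth-first search from src
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(1.0 / d for node, d in dist.items() if node != src)
    return total / (n * (n - 1))
```

A fully connected triangle reaches the maximum efficiency of 1, while a three-node chain falls below it because one pair sits at distance 2.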
Affiliation(s)
- Markus Butz
- Simulation Lab Neuroscience, Bernstein Facility for Simulation and Database Technology, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Forschungszentrum Jülich, Jülich, Germany
- Ines D Steenbuck
- Student of the Medical Faculty, University of Freiburg, Freiburg, Germany
- Arjen van Ooyen
- Department of Integrative Neurophysiology, VU University Amsterdam, Amsterdam, Netherlands
96
Tibau E, Valencia M, Soriano J. Identification of neuronal network properties from the spectral analysis of calcium imaging signals in neuronal cultures. Front Neural Circuits 2013; 7:199. [PMID: 24385953 PMCID: PMC3866384 DOI: 10.3389/fncir.2013.00199] [Citation(s) in RCA: 42] [Impact Index Per Article: 3.5] [Received: 10/14/2013] [Accepted: 12/01/2013] [Indexed: 11/13/2022]
Abstract
Neuronal networks in vitro are prominent systems to study the development of connections in living neuronal networks and the interplay between connectivity, activity and function. These cultured networks show a rich spontaneous activity that evolves concurrently with the connectivity of the underlying network. In this work we monitor the development of neuronal cultures, and record their activity using calcium fluorescence imaging. We use spectral analysis to characterize global dynamical and structural traits of the neuronal cultures. We first observe that the power spectrum can be used as a signature of the state of the network, for instance when inhibition is active or silent, as well as a measure of the network's connectivity strength. Second, the power spectrum identifies prominent developmental changes in the network such as the GABAA switch. And third, the analysis of the spatial distribution of the spectral density, in experiments with a controlled disintegration of the network through CNQX, an AMPA-glutamate receptor antagonist in excitatory neurons, reveals the existence of communities of strongly connected, highly active neurons that display synchronous oscillations. Our work illustrates the value of spectral analysis for the study of in vitro networks, and its potential use as a network-state indicator, for instance to compare healthy and diseased neuronal networks.
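As an illustration of the kind of analysis described (a bare periodogram, rather than the authors' exact pipeline), the power spectrum of a fluorescence trace can be estimated with an FFT:

```python
import numpy as np

def power_spectrum(signal, fs):
    """One-sided periodogram: frequencies (Hz) and power spectral density
    for a real-valued signal sampled at rate fs."""
    sig = np.asarray(signal, dtype=float)
    sig = sig - sig.mean()                      # drop the DC component
    psd = np.abs(np.fft.rfft(sig)) ** 2 / (fs * len(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    return freqs, psd
```

For a pure 2 Hz sinusoid sampled at 100 Hz, the spectral peak lands at the 2 Hz bin.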
Affiliation(s)
- Elisenda Tibau
- Neurophysics Laboratory, Departament d'Estructura i Constituents de la Matèria, Universitat de Barcelona, Barcelona, Spain
- Miguel Valencia
- Neurophysiology Laboratory, Division of Neurosciences, CIMA, Universidad de Navarra, Pamplona, Spain
- Jordi Soriano
- Neurophysics Laboratory, Departament d'Estructura i Constituents de la Matèria, Universitat de Barcelona, Barcelona, Spain
97
Dennis M, Spiegler BJ, Juranek JJ, Bigler ED, Snead OC, Fletcher JM. Age, plasticity, and homeostasis in childhood brain disorders. Neurosci Biobehav Rev 2013; 37:2760-73. [PMID: 24096190 PMCID: PMC3859812 DOI: 10.1016/j.neubiorev.2013.09.010] [Citation(s) in RCA: 73] [Impact Index Per Article: 6.1] [Received: 01/21/2013] [Revised: 07/29/2013] [Accepted: 09/19/2013] [Indexed: 12/26/2022]
Abstract
It has been widely accepted that the younger the age and/or the more immature the organism, the greater the brain plasticity (the "young age plasticity privilege"). This paper examines the relation of young age to plasticity, reviewing human pediatric brain disorders as well as selected animal models and studies of human development and adult brain disorders. We also review developmental and childhood-acquired disorders that involve a failure of regulatory homeostasis. Our core arguments are as follows:
Affiliation(s)
- Maureen Dennis
- Program in Neurosciences and Mental Health, The Hospital for Sick Children, Toronto; Department of Surgery, Faculty of Medicine, University of Toronto, Toronto, ON M5G 1X8, Canada.
98
Masquelier T, Deco G. Network bursting dynamics in excitatory cortical neuron cultures results from the combination of different adaptive mechanisms. PLoS One 2013; 8:e75824. [PMID: 24146781 PMCID: PMC3795681 DOI: 10.1371/journal.pone.0075824] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.8] [Received: 04/11/2013] [Accepted: 08/16/2013] [Indexed: 11/25/2022]
Abstract
In the brain, synchronization among cells of an assembly is a common phenomenon, and thought to be functionally relevant. Here we used an in vitro experimental model of cell assemblies, cortical cultures, combined with numerical simulations of a spiking neural network (SNN) to investigate how and why spontaneous synchronization occurs. In order to deal with excitation only, we pharmacologically blocked GABAAergic transmission using bicuculline. Synchronous events in cortical cultures tend to involve almost every cell and to display relatively constant durations. We have thus named these “network spikes” (NS). The inter-NS-intervals (INSIs) proved to be a more interesting phenomenon. In most cortical cultures NSs typically come in series or bursts (“bursts of NSs”, BNS), with short (∼1 s) INSIs and separated by long silent intervals (tens of s), which leads to bimodal INSI distributions. This suggests that a facilitating mechanism is at work, presumably short-term synaptic facilitation, as well as two fatigue mechanisms: one with a short timescale, presumably short-term synaptic depression, and another one with a longer timescale, presumably cellular adaptation. We thus incorporated these three mechanisms into the SNN, which, indeed, produced realistic BNSs. Next, we systematically varied the recurrent excitation for various adaptation timescales. Strong excitability led to frequent, quasi-periodic BNSs (CV∼0), and weak excitability led to rare BNSs, approaching a Poisson process (CV∼1). Experimental cultures appear to operate within an intermediate weakly-synchronized regime (CV∼0.5), with an adaptation timescale in the 2–8 s range, and well described by a Poisson-with-refractory-period model. Taken together, our results demonstrate that the INSI statistics are indeed informative: they allowed us to infer the mechanisms at work, and many parameters that we cannot access experimentally.
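The CV statistic used to place cultures between the quasi-periodic (CV ∼ 0) and Poisson (CV ∼ 1) regimes is the coefficient of variation of the inter-event intervals; a minimal sketch:

```python
import numpy as np

def interval_cv(event_times):
    """Coefficient of variation of inter-event intervals: ~0 for a
    quasi-periodic event train, ~1 for a Poisson process."""
    intervals = np.diff(np.sort(np.asarray(event_times, dtype=float)))
    return intervals.std() / intervals.mean()
```

A perfectly periodic train gives CV = 0, while events generated with exponentially distributed waiting times give CV close to 1.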
Affiliation(s)
- Timothée Masquelier
- Unit for Brain and Cognition, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Laboratory of Neurobiology of Adaptive Processes (UMR 7102), Centre National de la Recherche Scientifique and University Pierre and Marie Curie, Paris, France
- Gustavo Deco
- Unit for Brain and Cognition, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Institució Catalana de la Recerca i Estudis Avançats, Universitat Pompeu Fabra, Barcelona, Spain
99
Timme N, Alford W, Flecker B, Beggs JM. Synergy, redundancy, and multivariate information measures: an experimentalist’s perspective. J Comput Neurosci 2013; 36:119-40. [DOI: 10.1007/s10827-013-0458-4] [Citation(s) in RCA: 130] [Impact Index Per Article: 10.8] [Received: 08/09/2012] [Revised: 04/26/2013] [Accepted: 04/29/2013] [Indexed: 11/29/2022]
100
Tetzlaff C, Kolodziejski C, Markelic I, Wörgötter F. Time scales of memory, learning, and plasticity. Biol Cybern 2012; 106:715-726. [PMID: 23160712 DOI: 10.1007/s00422-012-0529-z] [Citation(s) in RCA: 44] [Impact Index Per Article: 3.4] [Received: 03/12/2012] [Accepted: 10/10/2012] [Indexed: 06/01/2023]
Abstract
If we stored every bit of input, the storage capacity of our nervous system would be reached after only about 10 days. The nervous system relies on at least two mechanisms that counteract this capacity limit: compression and forgetting. But the latter mechanism needs to know how long an entity should be stored: some memories are relevant only for the next few minutes, while others remain important even after the passage of several years. Psychology and physiology have identified and described many different memory mechanisms, and these mechanisms indeed operate on different time scales. In this prospect, we review these mechanisms with respect to their time scales and propose relations between the mechanisms of learning and memory and their underlying physiological basis.
Affiliation(s)
- Christian Tetzlaff
- Bernstein Centre for Computational Neuroscience, III. Institute of Physics-Biophysics, Georg-August-Universität, Göttingen, Germany.