1
Kobayashi R, Shinomoto S. Inference of monosynaptic connections from parallel spike trains: A review. Neurosci Res 2024:S0168-0102(24)00097-X. PMID: 39098768. DOI: 10.1016/j.neures.2024.07.006.
Abstract
This article presents a mini-review about the progress in inferring monosynaptic connections from spike trains of multiple neurons over the past twenty years. First, we explain a variety of meanings of "neuronal connectivity" in different research areas of neuroscience, such as structural connectivity, monosynaptic connectivity, and functional connectivity. Among these, we focus on the methods used to infer the monosynaptic connectivity from spike data. We then summarize the inference methods based on two main approaches, i.e., correlation-based and model-based approaches. Finally, we describe available source codes for connectivity inference and future challenges. Although inference will never be perfect, the accuracy of identifying the monosynaptic connections has improved dramatically in recent years due to continuous efforts.
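A rough illustration of the correlation-based approach mentioned above: build a cross-correlogram between a candidate presynaptic and postsynaptic spike train and look for a narrow, short-latency peak above baseline. This is a minimal numpy sketch with illustrative thresholds, not a method taken from the review.

```python
import numpy as np

def cross_correlogram(pre, post, window=0.05, bin_size=0.001):
    """Histogram of post-spike times relative to each pre-spike (seconds)."""
    edges = np.arange(-window, window + bin_size, bin_size)
    lags = []
    for t in pre:
        nearby = post[(post >= t - window) & (post <= t + window)]
        lags.extend(nearby - t)
    counts, _ = np.histogram(lags, bins=edges)
    return counts, edges

# Toy data: the "post" neuron tends to fire ~2 ms after the "pre" neuron.
rng = np.random.default_rng(0)
pre = np.sort(rng.uniform(0, 100, 2000))                    # ~20 Hz for 100 s
post = np.sort(np.concatenate([rng.uniform(0, 100, 1500),
                               pre[rng.random(pre.size) < 0.3] + 0.002]))

counts, edges = cross_correlogram(pre, post)
centers = edges[:-1] + 0.0005
peak_bin = np.argmax(counts)
# Crude detection rule: flag a putative connection if the peak sits at a short,
# positive latency and exceeds the mean bin count by several standard deviations.
if 0 < centers[peak_bin] < 0.004 and counts[peak_bin] > counts.mean() + 4 * counts.std():
    print(f"putative excitatory connection, latency ~{centers[peak_bin]*1000:.1f} ms")
```

Model-based approaches discussed in the review fit an explicit spiking model to the data instead of thresholding such histograms, but the correlogram remains the common starting point.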
Affiliation(s)
- Ryota Kobayashi
  - Graduate School of Frontier Sciences, The University of Tokyo, Chiba 277-8561, Japan; Mathematics and Informatics Center, The University of Tokyo, Tokyo 113-8656, Japan
- Shigeru Shinomoto
  - Graduate School of Biostudies, Kyoto University, Kyoto 606-8501, Japan; Research Organization of Open Innovation and Collaboration, Ritsumeikan University, Osaka 567-8570, Japan
2
Tai P, Ding P, Wang F, Gong A, Li T, Zhao L, Su L, Fu Y. Brain-computer interface paradigms and neural coding. Front Neurosci 2024; 17:1345961. PMID: 38287988. PMCID: PMC10822902. DOI: 10.3389/fnins.2023.1345961.
Abstract
Brain signal patterns generated in the central nervous system of brain-computer interface (BCI) users are closely related to BCI paradigms and neural coding. In BCI systems, paradigms and neural coding are critical elements of BCI research. However, few references have clearly and systematically elaborated on the definition and design principles of BCI paradigms, or on the definition and modeling principles of BCI neural coding. This review therefore expounds these topics and introduces the main existing BCI paradigms and neural coding schemes. Finally, the challenges and future research directions of BCI paradigms and neural coding are discussed, including user-centered design and evaluation of BCI paradigms and neural coding, revolutionizing traditional BCI paradigms, moving beyond existing techniques for collecting brain signals, and combining BCI technology with advanced AI technology to improve brain-signal decoding performance. The review is intended to inspire innovative research and development of BCI paradigms and neural coding.
Affiliation(s)
- Pengrui Tai
  - Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China
  - Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, China
- Peng Ding
  - Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China
  - Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, China
- Fan Wang
  - Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China
  - Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, China
- Anmin Gong
  - School of Information Engineering, Chinese People’s Armed Police Force Engineering University, Xi’an, China
- Tianwen Li
  - Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, China
  - Faculty of Science, Kunming University of Science and Technology, Kunming, China
- Lei Zhao
  - Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, China
  - Faculty of Science, Kunming University of Science and Technology, Kunming, China
- Lei Su
  - Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China
  - Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, China
- Yunfa Fu
  - Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China
  - Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, China
3
Boscaglia M, Gastaldi C, Gerstner W, Quian Quiroga R. A dynamic attractor network model of memory formation, reinforcement and forgetting. PLoS Comput Biol 2023; 19:e1011727. PMID: 38117859. PMCID: PMC10766193. DOI: 10.1371/journal.pcbi.1011727.
Abstract
Empirical evidence shows that memories that are frequently revisited are easy to recall, and that familiar items involve larger hippocampal representations than less familiar ones. In line with these observations, here we develop a modelling approach to provide a mechanistic understanding of how hippocampal neural assemblies evolve differently, depending on the frequency of presentation of the stimuli. For this, we added an online Hebbian learning rule, background firing activity, neural adaptation and heterosynaptic plasticity to a rate attractor network model, thus creating dynamic memory representations that can persist, increase or fade according to the frequency of presentation of the corresponding memory patterns. Specifically, we show that a dynamic interplay between Hebbian learning and background firing activity can explain the relationship between the memory assembly sizes and their frequency of stimulation. Frequently stimulated assemblies increase their size independently from each other (i.e. creating orthogonal representations that do not share neurons, thus avoiding interference). Importantly, connections between neurons of assemblies that are not further stimulated become labile so that these neurons can be recruited by other assemblies, providing a neuronal mechanism of forgetting.
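To make the core mechanism concrete, the sketch below combines an online Hebbian rule, weight decay, and background firing in a toy rate network, so that a frequently presented pattern grows a strongly interconnected assembly. The network size, learning rate, and decay are illustrative choices, not the parameters of the published model.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200                      # rate units
W = np.zeros((N, N))         # recurrent weights
eta, decay, w_max = 0.05, 0.001, 0.5

def present(r_ext):
    """One presentation: rates from input + recurrence, then an online Hebbian update."""
    global W
    r = np.clip(r_ext + W @ r_ext, 0.0, 1.0)      # simple one-step rate response
    W += eta * np.outer(r, r) - decay * W         # Hebbian growth vs. slow decay
    np.fill_diagonal(W, 0.0)
    np.clip(W, 0.0, w_max, out=W)
    return r

assembly = rng.choice(N, 20, replace=False)       # neurons of one memory pattern
for trial in range(50):
    stim = 0.05 * rng.random(N)                   # background firing activity
    if trial % 2 == 0:                            # frequently presented pattern
        stim[assembly] += 0.8
    present(stim)

within = W[np.ix_(assembly, assembly)].mean()
print(f"mean within-assembly weight: {within:.3f}, overall mean weight: {W.mean():.3f}")
```

In the same spirit, patterns that stop being presented would see their weights decay back toward baseline, freeing those neurons for recruitment by other assemblies.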
Affiliation(s)
- Marta Boscaglia
  - Centre for Systems Neuroscience, University of Leicester, United Kingdom
  - School of Psychology and Vision Sciences, University of Leicester, United Kingdom
- Chiara Gastaldi
  - School of Computer and Communication Sciences and School of Life Sciences, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
- Wulfram Gerstner
  - School of Computer and Communication Sciences and School of Life Sciences, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
- Rodrigo Quian Quiroga
  - Centre for Systems Neuroscience, University of Leicester, United Kingdom
  - Hospital del Mar Medical Research Institute (IMIM), Barcelona, Spain
  - Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
  - Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, People’s Republic of China
4
Li J, Abbas H, Ang DS, Ali A, Ju X. Emerging memristive artificial neuron and synapse devices for the neuromorphic electronics era. Nanoscale Horiz 2023; 8:1456-1484. PMID: 37615055. DOI: 10.1039/d3nh00180f.
Abstract
The growth of data eases access to the world but requires increasing amounts of energy to store and process it. Neuromorphic electronics has emerged in the last decade, inspired by biological neurons and synapses and endowed with in-memory computing ability, alleviating the 'von Neumann bottleneck' between memory and processor and offering a promising way to reduce the effort of both data storage and processing, thanks to multi-bit non-volatility, biology-emulating characteristics, and silicon compatibility. This work reviews recent advances in emerging memristive devices for artificial neuron and synapse applications, including their memory and data-processing abilities. The underlying physics and characteristics are discussed first, i.e., valence change, electrochemical metallization, phase change, interface control, charge trapping, ferroelectric tunnelling, and spin-transfer torque. Next, we propose a universal benchmark for artificial synapse and neuron devices covering spiking energy consumption, standby power consumption, and spike timing. Based on the benchmark, we address the challenges, suggest guidelines for intra-device and inter-device design, and provide an outlook for neuromorphic applications of resistive-switching-based artificial neuron and synapse devices.
Affiliation(s)
- Jiayi Li
  - School of Electrical and Electronics Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798
- Haider Abbas
  - School of Electrical and Electronics Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798
- Diing Shenp Ang
  - School of Electrical and Electronics Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798
- Asif Ali
  - School of Electrical and Electronics Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798
- Xin Ju
  - Institute of Materials Research and Engineering (IMRE), Agency for Science, Technology and Research (A*STAR), 2 Fusionopolis Way, Singapore 138634
5
Ma G, Yan R, Tang H. Exploiting noise as a resource for computation and learning in spiking neural networks. Patterns (N Y) 2023; 4:100831. PMID: 37876899. PMCID: PMC10591140. DOI: 10.1016/j.patter.2023.100831.
Abstract
Networks of spiking neurons underpin the extraordinary information-processing capabilities of the brain and have become pillar models in neuromorphic artificial intelligence. Despite extensive research on spiking neural networks (SNNs), most studies are established on deterministic models, overlooking the inherent non-deterministic, noisy nature of neural computations. This study introduces the noisy SNN (NSNN) and the noise-driven learning (NDL) rule by incorporating noisy neuronal dynamics to exploit the computational advantages of noisy neural processing. The NSNN provides a theoretical framework that yields scalable, flexible, and reliable computation and learning. We demonstrate that this framework leads to spiking neural models with competitive performance, improved robustness against challenging perturbations compared with deterministic SNNs, and better reproducing probabilistic computation in neural coding. Generally, this study offers a powerful and easy-to-use tool for machine learning, neuromorphic intelligence practitioners, and computational neuroscience researchers.
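The general contrast between deterministic and noisy spiking dynamics can be illustrated with a leaky integrate-and-fire unit whose firing is probabilistic rather than a hard threshold crossing. This is a minimal sketch of that idea only; it is not the NSNN or NDL formulation from the paper, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, tau, v_th = 1e-3, 20e-3, 1.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate(i_input, beta=10.0, noisy=True):
    """Noisy LIF: fire with probability sigmoid(beta * (v - v_th)) in each time step."""
    v, spikes = 0.0, []
    for i_t in i_input:
        v += dt / tau * (-v + i_t)
        p = sigmoid(beta * (v - v_th)) if noisy else float(v >= v_th)
        if rng.random() < p:
            spikes.append(1)
            v = 0.0                      # reset after a spike
        else:
            spikes.append(0)
    return np.array(spikes)

current = np.full(1000, 1.05)            # input just above threshold for 1 s
print("noisy spike count (1 s):        ", simulate(current, noisy=True).sum())
print("deterministic spike count (1 s):", simulate(current, noisy=False).sum())
```

The noisy unit produces variable spike counts across repetitions of the same input, which is exactly the kind of response distribution a probabilistic readout can exploit.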
Affiliation(s)
- Gehua Ma
  - College of Computer Science and Technology, Zhejiang University, Hangzhou, PRC
- Rui Yan
  - College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, PRC
- Huajin Tang
  - College of Computer Science and Technology, Zhejiang University, Hangzhou, PRC
  - State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou, PRC
6
Malakasis N, Chavlis S, Poirazi P. Synaptic turnover promotes efficient learning in bio-realistic spiking neural networks. bioRxiv 2023:2023.05.22.541722. PMID: 37292929. PMCID: PMC10245885. DOI: 10.1101/2023.05.22.541722.
Abstract
While artificial machine learning systems achieve superhuman performance in specific tasks such as language processing and image and video recognition, they do so using extremely large datasets and huge amounts of power. The brain, on the other hand, remains superior in several cognitively challenging tasks while operating with the energy of a small lightbulb. We use a biologically constrained spiking neural network model to explore how neural tissue achieves such high efficiency and assess its learning capacity on discrimination tasks. We found that synaptic turnover, a form of structural plasticity, which is the ability of the brain to continuously form and eliminate synapses, increases both the speed and the performance of our network on all tasks tested. Moreover, it allows accurate learning from a smaller number of examples. Importantly, these improvements are most significant under conditions of resource scarcity, such as when the number of trainable parameters is halved or the task difficulty is increased. Our findings provide new insights into the mechanisms that underlie efficient learning in the brain and can inspire the development of more efficient and flexible machine learning algorithms.
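A bare-bones sketch of synaptic turnover as a training-time operation: keep a fixed synapse budget, repeatedly prune the weakest connections, and regrow the same number at random sites. The pruning fraction, potentiation rule, and "task-relevant" targets below are hypothetical illustrations, not the paper's protocol.

```python
import numpy as np

rng = np.random.default_rng(3)
n_pre, n_post, n_syn = 100, 50, 400           # fixed synapse budget

pre_idx = rng.integers(0, n_pre, n_syn)        # presynaptic partner of each synapse
post_idx = rng.integers(0, n_post, n_syn)      # postsynaptic partner of each synapse
w = rng.random(n_syn) * 0.1

def turnover_step(w, pre_idx, post_idx, prune_frac=0.05):
    """Remove the weakest synapses and regrow the same number at random sites."""
    n_prune = int(prune_frac * w.size)
    weakest = np.argsort(w)[:n_prune]
    pre_idx[weakest] = rng.integers(0, n_pre, n_prune)   # new random partners
    post_idx[weakest] = rng.integers(0, n_post, n_prune)
    w[weakest] = 0.01                                     # nascent synapses start small
    return w, pre_idx, post_idx

for _ in range(20):
    # Toy "learning": synapses onto the first 10 postsynaptic cells get potentiated.
    w += 0.02 * rng.random(n_syn) * (post_idx < 10)
    w, pre_idx, post_idx = turnover_step(w, pre_idx, post_idx)

print("fraction of synapses onto task-relevant targets:", np.mean(post_idx < 10))
```

Starting from the chance level of 0.2, the budget migrates toward the targets that are actually potentiated, which is the basic sense in which turnover concentrates limited resources where they matter.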
Affiliation(s)
- Nikos Malakasis
  - School of Medicine, University of Crete, Heraklion 70013, Greece
  - Institute of Molecular Biology and Biotechnology, Foundation for Research and Technology-Hellas, Heraklion 70013, Greece
- Spyridon Chavlis
  - Institute of Molecular Biology and Biotechnology, Foundation for Research and Technology-Hellas, Heraklion 70013, Greece
- Panayiota Poirazi
  - Institute of Molecular Biology and Biotechnology, Foundation for Research and Technology-Hellas, Heraklion 70013, Greece
7
Joshi SN, Joshi AN, Joshi ND. Interplay between biochemical processes and network properties generates neuronal up and down states at the tripartite synapse. Phys Rev E 2023; 107:024415. PMID: 36932559. DOI: 10.1103/physreve.107.024415.
Abstract
Neuronal up and down states have long been known to exist both in vitro and in vivo. A variety of functions and mechanisms have been proposed for their generation, but there has not been a clear connection between the functions and mechanisms. We explore the potential contribution of cellular-level biochemistry to the network-level mechanisms thought to underlie the generation of up and down states. We develop a neurochemical model of a single tripartite synapse, assumed to be within a network of similar tripartite synapses, to investigate possible function-mechanism links for the appearance of up and down states. We characterize the behavior of our model in different regions of parameter space and show that resource limitation at the tripartite synapse affects its ability to faithfully transmit input signals, leading to extinction-down states. Recovery of resources allows for "reignition" into up states. The tripartite synapse exhibits distinctive "regimes" of operation depending on whether ATP, neurotransmitter (glutamate), both, or neither, is limiting. Our model qualitatively matches the behavior of six disparate experimental systems, including both in vitro and in vivo models, without changing any model parameters except those related to the experimental conditions. We also explore the effects of varying different critical parameters within the model. Here we show that availability of energy, represented by ATP, and glutamate for neurotransmission at the cellular level are intimately related, and are capable of promoting state transitions at the network level as ignition and extinction phenomena. Our model is complementary to existing models of neuronal up and down states in that it focuses on cellular-level dynamics while still retaining essential network-level processes. Our model predicts the existence of a "final common pathway" of behavior at the tripartite synapse arising from scarcity of resources and may explain use dependence in the phenomenon of "local sleep." Ultimately, sleeplike behavior may be a fundamental property of networks of tripartite synapses.
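The extinction/reignition logic described above rests on a limited, slowly recovering resource gating transmission. The toy depletion/recovery sketch below illustrates only that generic logic; it is not the authors' biochemical model of ATP and glutamate handling at the tripartite synapse, and the rates are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
dt = 1e-3
tau_rec, use = 5.0, 0.2        # slow recovery (s); fraction of resource used per transmitted spike
r = 1.0                        # available resource (releasable transmitter / energy), 0..1

def run(rate_hz, seconds, r):
    """Drive the synapse at a given input rate; return transmission probability and final resource."""
    sent = transmitted = 0
    for _ in range(int(seconds / dt)):
        if rng.random() < rate_hz * dt:
            sent += 1
            if rng.random() < r:          # transmission succeeds only if resources remain
                transmitted += 1
                r -= use * r
        r += dt * (1.0 - r) / tau_rec     # slow recovery toward the full pool
    return transmitted / max(sent, 1), r

p_drive, r = run(50.0, 10.0, r)   # sustained drive depletes the pool ("extinction-down")
_, r = run(0.0, 10.0, r)          # a silent period lets the pool recover
p_after, r = run(50.0, 1.0, r)    # brief drive after recovery transmits reliably ("reignition")
print(f"transmission probability during sustained drive: {p_drive:.2f}")
print(f"transmission probability right after recovery:   {p_after:.2f}")
```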
Affiliation(s)
- Shubhada N Joshi
  - National Center for Adaptive Neurotechnologies (NCAN), David Axelrod Institute, Wadsworth Center, New York State Department of Health, 120 New Scotland Ave., Albany, New York 12208, USA
- Aditya N Joshi
  - Stanford University School of Medicine, 300 Pasteur Dr., Stanford, California 94305, USA
- Narendra D Joshi
  - General Electric Global Research, 1 Research Circle, Niskayuna, New York 12309, USA
8
The molecular memory code and synaptic plasticity: A synthesis. Biosystems 2023; 224:104825. PMID: 36610586. DOI: 10.1016/j.biosystems.2022.104825.
Abstract
The most widely accepted view of memory in the brain holds that synapses are the storage sites of memory, and that memories are formed through associative modification of synapses. This view has been challenged on conceptual and empirical grounds. As an alternative, it has been proposed that molecules within the cell body are the storage sites of memory, and that memories are formed through biochemical operations on these molecules. This paper proposes a synthesis of these two views, grounded in a computational model of memory. Synapses are conceived as storage sites for the parameters of an approximate posterior probability distribution over latent causes. Intracellular molecules are conceived as storage sites for the parameters of a generative model. The model stipulates how these two components work together as part of an integrated algorithm for learning and inference.
9
Bittar A, Garner PN. A surrogate gradient spiking baseline for speech command recognition. Front Neurosci 2022; 16:865897. PMID: 36117617. PMCID: PMC9479696. DOI: 10.3389/fnins.2022.865897.
Abstract
Artificial neural networks (ANNs) are the basis of recent advances in artificial intelligence (AI); they typically use real-valued neuron responses. By contrast, biological neurons are known to operate using spike trains. In principle, spiking neural networks (SNNs) may have a greater representational capability than ANNs, especially for time series such as speech; however, their adoption has been held back by both a lack of stable training algorithms and a lack of compatible baselines. We begin with a fairly thorough review of the literature around the conjunction of ANNs and SNNs. Focusing on surrogate gradient approaches, we proceed to define a simple but relevant evaluation based on recent speech command tasks. After evaluating a representative selection of architectures, we show that a combination of adaptation, recurrence and surrogate gradients can yield light spiking architectures that are not only able to compete with ANN solutions, but also retain a high degree of compatibility with them in modern deep learning frameworks. We conclude tangibly that SNNs are appropriate for future research in AI, in particular for speech processing applications, and more speculatively that they may also assist in inference about biological function.
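The surrogate gradient idea the paper builds on can be shown in a few lines of PyTorch: the forward pass uses a hard spike threshold, while the backward pass substitutes a smooth surrogate derivative so the network remains trainable by backpropagation. The fast-sigmoid surrogate, its scale, and the toy objective below are common generic choices, not necessarily those used by the authors.

```python
import torch

class SurrGradSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; fast-sigmoid derivative in the backward pass."""
    scale = 10.0

    @staticmethod
    def forward(ctx, mem):
        ctx.save_for_backward(mem)
        return (mem > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (mem,) = ctx.saved_tensors
        surrogate = 1.0 / (SurrGradSpike.scale * mem.abs() + 1.0) ** 2
        return grad_output * surrogate

spike_fn = SurrGradSpike.apply

# One leaky integrate-and-fire layer unrolled over time, trainable end to end.
T, n_in, n_out, beta = 100, 40, 10, 0.9
w = (0.1 * torch.randn(n_in, n_out)).requires_grad_()
inputs = (torch.rand(T, n_in) < 0.05).float()          # Poisson-like input spikes

mem = torch.zeros(n_out)
spikes = []
for t in range(T):
    mem = beta * mem + inputs[t] @ w
    out = spike_fn(mem - 1.0)                          # threshold at 1
    mem = mem * (1.0 - out.detach())                   # reset where a spike occurred
    spikes.append(out)

loss = torch.stack(spikes).sum(0).var()                # toy objective on spike counts
loss.backward()
print("gradient norm on input weights:", w.grad.norm().item())
```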
Affiliation(s)
- Alexandre Bittar
  - Idiap Research Institute, Martigny, Switzerland
  - École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
10
Zeng Y, Bao W, Tao L, Hu D, Yang Z, Yang L, Shang D. Regularized Spectral Spike Response Model: A Neuron Model for Robust Parameter Reduction. Brain Sci 2022; 12:1008. PMID: 36009071. PMCID: PMC9405574. DOI: 10.3390/brainsci12081008.
Abstract
The modeling procedure of current biological neuron models is hindered by either hyperparameter optimization or overparameterization, which limits their application to a variety of biologically realistic tasks. This article proposes a novel neuron model called the Regularized Spectral Spike Response Model (RSSRM) to address these issues. The selection of hyperparameters is avoided by the model structure and fitting strategy, while the number of parameters is constrained by regularization techniques. Twenty firing simulation experiments indicate the superiority of RSSRM. In particular, after pruning more than 99% of its parameters, RSSRM with 100 parameters achieves an RMSE of 5.632 in membrane potential prediction, a VRD of 47.219, and an F1-score of 0.95 in spike train forecasting with correct timing (±1.4 ms), which are 25%, 99%, 55%, and 24% better than the average of other neuron models with the same number of parameters in RMSE, VRD, F1-score, and correct timing, respectively. Moreover, RSSRM with 100 parameters achieves a memory use of 10 KB and a runtime of 1 ms during inference, which is more efficient than the Izhikevich model.
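A generic ingredient behind spike-response-type models with regularized parameter reduction is fitting the membrane potential as a convolution of the spike train with a kernel, estimated by penalized least squares. The sketch below uses a ridge penalty for simplicity; it illustrates the fitting idea only and is not the RSSRM spectral fitting or pruning procedure.

```python
import numpy as np

rng = np.random.default_rng(5)
T, K = 5000, 50                                     # time bins, kernel length

true_kernel = np.exp(-np.arange(K) / 10.0)          # ground-truth post-spike filter
spikes = (rng.random(T) < 0.02).astype(float)
v = np.convolve(spikes, true_kernel)[:T] + 0.05 * rng.standard_normal(T)

# Design matrix: column k holds the spike train delayed by k bins.
X = np.zeros((T, K))
for k in range(K):
    X[k:, k] = spikes[:T - k]

lam = 1.0                                           # ridge regularization strength
kernel_hat = np.linalg.solve(X.T @ X + lam * np.eye(K), X.T @ v)

err = np.linalg.norm(kernel_hat - true_kernel) / np.linalg.norm(true_kernel)
print(f"relative kernel estimation error: {err:.3f}")
```

Swapping the ridge penalty for a sparsity-promoting one is what allows most kernel coefficients to be pruned while keeping prediction accuracy, which is the spirit of the parameter-reduction result described above.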
Affiliation(s)
- Yinuo Zeng
  - Nanjing Institute of Intelligent Technology, Nanjing 210000, China
- Wendi Bao
  - Nanjing Institute of Intelligent Technology, Nanjing 210000, China
- Liying Tao
  - Institute of Microelectronics of the Chinese Academy of Sciences, Beijing 100000, China
  - University of Chinese Academy of Sciences, Beijing 100000, China
- Die Hu
  - Nanjing Institute of Intelligent Technology, Nanjing 210000, China
- Zonglin Yang
  - Nanjing Institute of Intelligent Technology, Nanjing 210000, China
- Liren Yang
  - Nanjing Institute of Intelligent Technology, Nanjing 210000, China
- Delong Shang
  - Nanjing Institute of Intelligent Technology, Nanjing 210000, China
  - Institute of Microelectronics of the Chinese Academy of Sciences, Beijing 100000, China
  - University of Chinese Academy of Sciences, Beijing 100000, China
11
12
Korcsak-Gorzo A, Müller MG, Baumbach A, Leng L, Breitwieser OJ, van Albada SJ, Senn W, Meier K, Legenstein R, Petrovici MA. Cortical oscillations support sampling-based computations in spiking neural networks. PLoS Comput Biol 2022; 18:e1009753. PMID: 35324886. PMCID: PMC8947809. DOI: 10.1371/journal.pcbi.1009753.
Abstract
Being permanently confronted with an uncertain world, brains have faced evolutionary pressure to represent this uncertainty in order to respond appropriately. Often, this requires visiting multiple interpretations of the available information or multiple solutions to an encountered problem. This gives rise to the so-called mixing problem: since all of these "valid" states represent powerful attractors, but between themselves can be very dissimilar, switching between such states can be difficult. We propose that cortical oscillations can be effectively used to overcome this challenge. By acting as an effective temperature, background spiking activity modulates exploration. Rhythmic changes induced by cortical oscillations can then be interpreted as a form of simulated tempering. We provide a rigorous mathematical discussion of this link and study some of its phenomenological implications in computer simulations. This identifies a new computational role of cortical oscillations and connects them to various phenomena in the brain, such as sampling-based probabilistic inference, memory replay, multisensory cue combination, and place cell flickering.
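The tempering analogy can be made concrete with a toy sampler: a periodically modulated "temperature" (standing in for the background-activity modulation discussed above) lets a Metropolis chain hop between the modes of a double-well energy far more often than sampling at a fixed low temperature. This is a sketch of the mixing argument only, not the paper's spiking implementation.

```python
import numpy as np

rng = np.random.default_rng(6)

def energy(x):
    """Toy double-well energy: two 'valid interpretations' at x = -2 and x = +2."""
    return (x ** 2 - 4.0) ** 2 / 8.0

def metropolis(n_steps, temperature):
    """Metropolis sampling; `temperature` may be a scalar or a per-step array."""
    temps = np.broadcast_to(temperature, (n_steps,))
    x, samples = -2.0, []
    for t in range(n_steps):
        prop = x + 0.3 * rng.standard_normal()
        if rng.random() < np.exp((energy(x) - energy(prop)) / temps[t]):
            x = prop
        samples.append(x)
    return np.array(samples)

n = 20000
flat = metropolis(n, 0.3)                                        # constant low temperature
oscillating = metropolis(n, 0.3 + (1 + np.sin(2 * np.pi * np.arange(n) / 500)) / 2)

def mode_switches(s):
    signs = np.sign(s[np.abs(s) > 1.0])
    return int(np.sum(signs[1:] != signs[:-1]))

print("mode switches, constant temperature:   ", mode_switches(flat))
print("mode switches, oscillating temperature:", mode_switches(oscillating))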
Affiliation(s)
- Agnes Korcsak-Gorzo
  - Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
  - Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
  - RWTH Aachen University, Aachen, Germany
- Michael G. Müller
  - Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Andreas Baumbach
  - Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
  - Department of Physiology, University of Bern, Bern, Switzerland
- Luziwei Leng
  - Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Sacha J. van Albada
  - Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
  - Institute of Zoology, University of Cologne, Cologne, Germany
- Walter Senn
  - Department of Physiology, University of Bern, Bern, Switzerland
- Karlheinz Meier
  - Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Robert Legenstein
  - Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Mihai A. Petrovici
  - Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
  - Department of Physiology, University of Bern, Bern, Switzerland
13
Jegminat J, Surace SC, Pfister JP. Learning as filtering: Implications for spike-based plasticity. PLoS Comput Biol 2022; 18:e1009721. PMID: 35196324. PMCID: PMC8865661. DOI: 10.1371/journal.pcbi.1009721.
Abstract
Most normative models in computational neuroscience describe the task of learning as the optimisation of a cost function with respect to a set of parameters. However, learning as optimisation fails to account for a time-varying environment during the learning process and the resulting point estimate in parameter space does not account for uncertainty. Here, we frame learning as filtering, i.e., a principled method for including time and parameter uncertainty. We derive the filtering-based learning rule for a spiking neuronal network—the Synaptic Filter—and show its computational and biological relevance. For the computational relevance, we show that filtering improves the weight estimation performance compared to a gradient learning rule with optimal learning rate. The dynamics of the mean of the Synaptic Filter is consistent with spike-timing dependent plasticity (STDP) while the dynamics of the variance makes novel predictions regarding spike-timing dependent changes of EPSP variability. Moreover, the Synaptic Filter explains experimentally observed negative correlations between homo- and heterosynaptic plasticity. The task of learning is commonly framed as parameter optimisation. Here, we adopt the framework of learning as filtering where the task is to continuously estimate the uncertainty about the parameters to be learned. We apply this framework to synaptic plasticity in a spiking neuronal network. Filtering includes a time-varying environment and parameter uncertainty on the level of the learning task. We show that learning as filtering can qualitatively explain two biological experiments on synaptic plasticity that cannot be explained by learning as optimisation. Moreover, we make a new prediction and improve performance with respect to a gradient learning rule. Thus, learning as filtering is a promising candidate for learning models.
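The core "learning as filtering" idea, tracking a drifting parameter while maintaining both a mean and an uncertainty, is easiest to see in the scalar linear-Gaussian case, where the filter is an ordinary Kalman filter. The sketch below is that simplified case, not the spiking Synaptic Filter derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
T = 2000
drift_sd, obs_sd = 0.02, 0.5

# A hidden "true" weight that slowly drifts (time-varying environment).
w_true = np.cumsum(drift_sd * rng.standard_normal(T)) + 1.0
x = rng.standard_normal(T)                          # presynaptic activity
y = w_true * x + obs_sd * rng.standard_normal(T)    # noisy postsynaptic observation

# Kalman filter over the weight: mean mu and variance sig2 are both updated.
mu, sig2 = 0.0, 1.0
mu_trace = np.empty(T)
for t in range(T):
    sig2 += drift_sd ** 2                                 # predict: uncertainty grows with drift
    k = sig2 * x[t] / (sig2 * x[t] ** 2 + obs_sd ** 2)    # Kalman gain
    mu += k * (y[t] - mu * x[t])                          # update mean from the prediction error
    sig2 *= 1.0 - k * x[t]                                # shrink uncertainty after the observation
    mu_trace[t] = mu

print("mean absolute tracking error (last 500 steps):",
      np.mean(np.abs(mu_trace[-500:] - w_true[-500:])))
```

The mean update is a prediction-error rule, while the variance sets an effective, uncertainty-dependent learning rate, which is the feature a point-estimate gradient rule lacks.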
Affiliation(s)
- Jannes Jegminat
  - Department of Physiology, University of Bern, Bern, Switzerland
  - Institute of Neuroinformatics and Neuroscience Center Zurich, ETH and the University of Zurich, Zurich, Switzerland
- Jean-Pascal Pfister
  - Department of Physiology, University of Bern, Bern, Switzerland
  - Institute of Neuroinformatics and Neuroscience Center Zurich, ETH and the University of Zurich, Zurich, Switzerland
14
Wen S, Yin A, Tseng PH, Itti L, Lebedev MA, Nicolelis M. Capturing spike train temporal pattern with wavelet average coefficient for brain machine interface. Sci Rep 2021; 11:19020. PMID: 34561503. PMCID: PMC8463672. DOI: 10.1038/s41598-021-98578-5.
Abstract
Motor brain machine interfaces (BMIs) directly link the brain to artificial actuators and have the potential to mitigate severe body paralysis caused by neurological injury or disease. Most BMI systems involve a decoder that analyzes neural spike counts to infer movement intent. However, many classical BMI decoders (1) fail to take advantage of temporal patterns of spike trains, possibly over long time horizons; (2) are insufficient to achieve good BMI performance at high temporal resolution, as the underlying Gaussian assumption of decoders based on spike counts is violated. Here, we propose a new statistical feature, wavelet average coefficients (WAC), that represents temporal patterns or temporal codes of spike events with a richer description, to be used as decoder input instead of spike counts. We constructed a wavelet decoder framework by using WAC features with a sliding-window approach, and compared the resulting decoder against classical decoders (Wiener and Kalman family) and newer deep-learning-based decoders (Long Short-Term Memory) using spike count features. We found that the sliding-window approach boosts decoding temporal resolution, and that using WAC features significantly improves decoding performance over using spike count features.
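To show the kind of feature involved, the sketch below computes Haar-style multi-scale averages of a binned spike-train window, which capture when within the window the spikes occurred rather than only how many there were. This hand-rolled version illustrates the idea; it is not the exact WAC definition or preprocessing used in the paper.

```python
import numpy as np

def multiscale_averages(binned, n_levels=4):
    """Haar-style approximation coefficients: repeatedly average adjacent bins."""
    features, current = [], binned.astype(float)
    for _ in range(n_levels):
        if current.size % 2:                      # pad to an even length
            current = np.append(current, 0.0)
        current = 0.5 * (current[0::2] + current[1::2])
        features.append(current)
    return np.concatenate(features)

rng = np.random.default_rng(8)
window = (rng.random(64) < 0.2).astype(int)        # 64 bins of one spike-train window

count_feature = window.sum()                       # classical decoder input
wavelet_features = multiscale_averages(window)     # temporal-pattern features

print("spike count feature:", count_feature)
print("number of multi-scale features:", wavelet_features.size)
```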
Affiliation(s)
- Shixian Wen
  - Department of Computer Science, University of Southern California, Los Angeles, CA 90089, USA
- Allen Yin
  - Department of Neurobiology, Duke University, Durham, NC 27710, USA
- Po-He Tseng
  - Department of Neurobiology, Duke University, Durham, NC 27710, USA
- Laurent Itti
  - Department of Computer Science, University of Southern California, Los Angeles, CA 90089, USA
  - Department of Psychology, University of Southern California, Los Angeles, CA 90089, USA
  - Neuroscience Graduate Program, University of Southern California, Los Angeles, CA 90089, USA
- Mikhail A Lebedev
  - V. Zelman Center for Neurobiology and Brain Restoration, Skolkovo Institute of Science and Technology, Moscow, Russia
  - Department of Neurobiology, Duke University, Durham, NC 27710, USA
- Miguel Nicolelis
  - Department of Neurobiology, Duke University, Durham, NC 27710, USA
15
Harkin EF, Shen PR, Goel A, Richards BA, Naud R. Parallel and Recurrent Cascade Models as a Unifying Force for Understanding Sub-cellular Computation. Neuroscience 2021; 489:200-215. PMID: 34358629. DOI: 10.1016/j.neuroscience.2021.07.026.
Abstract
Neurons are very complicated computational devices, incorporating numerous non-linear processes, particularly in their dendrites. Biophysical models capture these processes directly by explicitly modelling physiological variables, such as ion channels, current flow, membrane capacitance, etc. However, another option for capturing the complexities of real neural computation is to use cascade models, which treat individual neurons as a cascade of linear and non-linear operations, akin to a multi-layer artificial neural network. Recent research has shown that cascade models can capture single-cell computation well, but there are still a number of sub-cellular, regenerative dendritic phenomena that they cannot capture, such as the interaction between sodium, calcium, and NMDA spikes in different compartments. Here, we propose that it is possible to capture these additional phenomena using parallel, recurrent cascade models, wherein an individual neuron is modelled as a cascade of parallel linear and non-linear operations that can be connected recurrently, akin to a multi-layer, recurrent, artificial neural network. Given their tractable mathematical structure, we show that neuron models expressed in terms of parallel recurrent cascades can themselves be integrated into multi-layered artificial neural networks and trained to perform complex tasks. We go on to discuss potential implications and uses of these models for artificial intelligence. Overall, we argue that parallel, recurrent cascade models provide an important, unifying tool for capturing single-cell computation and exploring the algorithmic implications of physiological phenomena.
Affiliation(s)
- Emerson F Harkin
  - uOttawa Brain and Mind Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada
- Peter R Shen
  - Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Anish Goel
  - Lisgar Collegiate Institute, Ottawa, ON, Canada
- Blake A Richards
  - Mila, Montréal, QC, Canada; Montreal Neurological Institute, Montréal, QC, Canada; Department of Neurology and Neurosurgery, McGill University, Montréal, QC, Canada; School of Computer Science, McGill University, Montréal, QC, Canada
- Richard Naud
  - uOttawa Brain and Mind Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada; Department of Physics, University of Ottawa, Ottawa, ON, Canada
16
Weber AI, Shea-Brown E, Rieke F. Identification of Multiple Noise Sources Improves Estimation of Neural Responses across Stimulus Conditions. eNeuro 2021; 8:ENEURO.0191-21.2021. PMID: 34083382. PMCID: PMC8260275. DOI: 10.1523/eneuro.0191-21.2021.
Abstract
Most models of neural responses are constructed to reproduce the average response to inputs but lack the flexibility to capture observed variability in responses. The origins and structure of this variability have significant implications for how information is encoded and processed in the nervous system, both by limiting information that can be conveyed and by determining processing strategies that are favorable for minimizing its negative effects. Here, we present a new modeling framework that incorporates multiple sources of noise to better capture observed features of neural response variability across stimulus conditions. We apply this model to retinal ganglion cells at two different ambient light levels and demonstrate that it captures the full distribution of responses. Further, the model reveals light level-dependent changes that could not be seen with previous models, showing both large changes in rectification of nonlinear circuit elements and systematic differences in the contributions of different noise sources under different conditions.
Affiliation(s)
- Alison I Weber
  - Graduate Program in Neuroscience, University of Washington, Seattle, WA 98195
- Eric Shea-Brown
  - Department of Applied Mathematics, University of Washington, Seattle, WA 98195
  - Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195
- Fred Rieke
  - Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195
17
Pokorny C, Ison MJ, Rao A, Legenstein R, Papadimitriou C, Maass W. STDP Forms Associations between Memory Traces in Networks of Spiking Neurons. Cereb Cortex 2021; 30:952-968. PMID: 31403679. PMCID: PMC7132978. DOI: 10.1093/cercor/bhz140.
Abstract
Memory traces and associations between them are fundamental for cognitive brain function. Neuron recordings suggest that distributed assemblies of neurons in the brain serve as memory traces for spatial information, real-world items, and concepts. However, there is conflicting evidence regarding neural codes for associated memory traces. Some studies suggest the emergence of overlaps between assemblies during an association, while others suggest that the assemblies themselves remain largely unchanged and new assemblies emerge as neural codes for associated memory items. Here we study the emergence of neural codes for associated memory items in a generic computational model of recurrent networks of spiking neurons with a data-constrained rule for spike-timing-dependent plasticity. The model depends critically on 2 parameters, which control the excitability of neurons and the scale of initial synaptic weights. By modifying these 2 parameters, the model can reproduce both experimental data from the human brain on the fast formation of associations through emergent overlaps between assemblies, and rodent data where new neurons are recruited to encode the associated memories. Hence, our findings suggest that the brain can use both of these 2 neural codes for associations, and dynamically switch between them during consolidation.
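For readers unfamiliar with the plasticity rule the model builds on, a minimal pair-based STDP update is sketched below: presynaptic-before-postsynaptic spike pairs potentiate the synapse, the reverse order depresses it. The amplitudes and time constant are generic textbook values; the paper uses a data-constrained variant of this rule.

```python
import numpy as np

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: potentiate if pre precedes post, depress otherwise (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * np.exp(-dt / tau)
    return -a_minus * np.exp(dt / tau)

rng = np.random.default_rng(9)
w = 0.5
for _ in range(200):
    t_pre = rng.uniform(0, 100)
    t_post = t_pre + rng.normal(5.0, 3.0)      # the post neuron tends to follow pre by ~5 ms
    w = np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0)

print(f"weight after correlated pre-then-post firing: {w:.2f}")
```

Because correlated firing within and between assemblies drives this rule, the two association codes discussed above (overlapping assemblies versus newly recruited neurons) fall out of the same plasticity mechanism under different excitability and initial-weight settings.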
Affiliation(s)
- Christoph Pokorny
  - Institute for Theoretical Computer Science, Graz University of Technology, 8010 Graz, Austria
- Matias J Ison
  - School of Psychology, University of Nottingham, Nottingham NG7 2RD, UK
- Arjun Rao
  - Institute for Theoretical Computer Science, Graz University of Technology, 8010 Graz, Austria
- Robert Legenstein
  - Institute for Theoretical Computer Science, Graz University of Technology, 8010 Graz, Austria
- Christos Papadimitriou
  - Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94720-1770, USA
- Wolfgang Maass
  - Institute for Theoretical Computer Science, Graz University of Technology, 8010 Graz, Austria
18
Zeldenrust F, Gutkin B, Denéve S. Efficient and robust coding in heterogeneous recurrent networks. PLoS Comput Biol 2021; 17:e1008673. PMID: 33930016. PMCID: PMC8115785. DOI: 10.1371/journal.pcbi.1008673.
Abstract
Cortical networks show a large heterogeneity of neuronal properties. However, traditional coding models have focused on homogeneous populations of excitatory and inhibitory neurons. Here, we analytically derive a class of recurrent networks of spiking neurons that close to optimally track a continuously varying input online, based on two assumptions: 1) every spike is decoded linearly and 2) the network aims to reduce the mean-squared error between the input and the estimate. From this we derive a class of predictive coding networks, that unifies encoding and decoding and in which we can investigate the difference between homogeneous networks and heterogeneous networks, in which each neurons represents different features and has different spike-generating properties. We find that in this framework, 'type 1' and 'type 2' neurons arise naturally and networks consisting of a heterogeneous population of different neuron types are both more efficient and more robust against correlated noise. We make two experimental predictions: 1) we predict that integrators show strong correlations with other integrators and resonators are correlated with resonators, whereas the correlations are much weaker between neurons with different coding properties and 2) that 'type 2' neurons are more coherent with the overall network activity than 'type 1' neurons.
Affiliation(s)
- Fleur Zeldenrust
  - Department of Neurophysiology, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Boris Gutkin
  - Group for Neural Theory, INSERM U960, Département d’Études Cognitives, École Normale Supérieure PSL University, Paris, France
  - Center for Cognition and Decision Making, National Research University Higher School of Economics, Moscow, Russia
- Sophie Denéve
  - Group for Neural Theory, INSERM U960, Département d’Études Cognitives, École Normale Supérieure PSL University, Paris, France
19
Rossbroich J, Trotter D, Beninger J, Tóth K, Naud R. Linear-nonlinear cascades capture synaptic dynamics. PLoS Comput Biol 2021; 17:e1008013. PMID: 33720935. PMCID: PMC7993775. DOI: 10.1371/journal.pcbi.1008013.
Abstract
Short-term synaptic dynamics differ markedly across connections and strongly regulate how action potentials communicate information. To model the range of synaptic dynamics observed in experiments, we have developed a flexible mathematical framework based on a linear-nonlinear operation. This model can capture various experimentally observed features of synaptic dynamics and different types of heteroskedasticity. Despite its conceptual simplicity, we show that it is more adaptable than previous models. Combined with a standard maximum likelihood approach, synaptic dynamics can be accurately and efficiently characterized using naturalistic stimulation patterns. These results make explicit that synaptic processing bears algorithmic similarities with information processing in convolutional neural networks.
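The linear-nonlinear view of short-term synaptic dynamics can be sketched directly: the efficacy of each spike is a static nonlinearity applied to the presynaptic spike history convolved with a kernel. The exponential kernel, sigmoid, and parameter values below are illustrative stand-ins, not the fitted components from the paper.

```python
import numpy as np

def ln_synapse(spike_times, tau=50.0, k_amp=-0.8, baseline=0.5, slope=4.0):
    """Linear stage: exponential kernel over past spikes. Nonlinear stage: sigmoid readout."""
    efficacies = []
    for i, t in enumerate(spike_times):
        past = spike_times[:i]
        drive = baseline + k_amp * np.sum(np.exp(-(t - past) / tau))    # linear filter of history
        efficacies.append(1.0 / (1.0 + np.exp(-slope * drive)))         # static nonlinearity
    return np.array(efficacies)

# A 20 Hz train: with a negative kernel amplitude the synapse depresses over the train;
# a positive amplitude would instead produce facilitation.
burst = np.arange(0.0, 500.0, 50.0)          # spike times in ms
print(np.round(ln_synapse(burst), 2))        # efficacy of each successive spike
```

Fitting the kernel and nonlinearity by maximum likelihood to recorded postsynaptic responses is what the framework above does at scale, including heteroskedastic noise models.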
Affiliation(s)
- Julian Rossbroich
  - Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland
- Daniel Trotter
  - Department of Physics, University of Ottawa, Ottawa, ON, Canada
- John Beninger
  - uOttawa Brain Mind Institute, Center for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada
- Katalin Tóth
  - uOttawa Brain Mind Institute, Center for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada
- Richard Naud
  - Department of Physics, University of Ottawa, Ottawa, ON, Canada
  - uOttawa Brain Mind Institute, Center for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada
20
Wang K, Hu Q, Gao B, Lin Q, Zhuge FW, Zhang DY, Wang L, He YH, Scheicher RH, Tong H, Miao XS. Threshold switching memristor-based stochastic neurons for probabilistic computing. Mater Horiz 2021; 8:619-629. PMID: 34821279. DOI: 10.1039/d0mh01759k.
Abstract
Biological neurons exhibit dynamic excitation behavior in the form of stochastic firing, rather than stiffly giving out spikes upon reaching a fixed threshold voltage, which empowers the brain to perform probabilistic inference in the face of uncertainty. However, owing to the complexity of the stochastic firing process in biological neurons, the challenge of fabricating and applying stochastic neurons with bio-realistic dynamics to probabilistic scenarios remains to be fully addressed. In this work, a novel CuS/GeSe conductive-bridge threshold switching memristor is fabricated and singled out to realize electronic stochastic neurons, which is ascribed to the similarity between the stochastic switching behavior observed in the device and that of biological ion channels. The corresponding electric circuit of a stochastic neuron is then constructed and the probabilistic firing capacity of the neuron is utilized to implement Bayesian inference in a spiking neural network (SNN). The application prospects are demonstrated on the example of a tumor diagnosis task, where common fatal diagnostic errors of a conventional artificial neural network are successfully circumvented. Moreover, in comparison to deterministic neuron-based SNNs, the stochastic neurons enable SNNs to deliver an estimate of the uncertainty in their predictions, and the fidelity of the judgement is drastically improved by 81.2%.
Affiliation(s)
- Kuan Wang
  - Wuhan National Laboratory for Optoelectronics, School of Optical and Electronic Information, Huazhong University of Science and Technology, Wuhan 430074, China
21
Van Pottelbergh T, Drion G, Sepulchre R. From Biophysical to Integrate-and-Fire Modeling. Neural Comput 2021; 33:563-589. PMID: 33400899. DOI: 10.1162/neco_a_01353.
Abstract
This article proposes a methodology to extract a low-dimensional integrate-and-fire model from an arbitrarily detailed single-compartment biophysical model. The method aims at relating the modulation of maximal conductance parameters in the biophysical model to the modulation of parameters in the proposed integrate-and-fire model. The approach is illustrated on two well-documented examples of cellular neuromodulation: the transition between type I and type II excitability and the transition between spiking and bursting.
Affiliation(s)
- Guillaume Drion
  - Department of Electrical Engineering and Computer Science, University of Liège, 4000 Liège, Belgium
- Rodolphe Sepulchre
  - Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, U.K.
22
A generative spiking neural-network model of goal-directed behaviour and one-step planning. PLoS Comput Biol 2020; 16:e1007579. PMID: 33290414. PMCID: PMC7748287. DOI: 10.1371/journal.pcbi.1007579.
Abstract
In mammals, goal-directed and planning processes support flexible behaviour used to face new situations that cannot be tackled through more efficient but rigid habitual behaviours. Within the Bayesian modelling approach of brain and behaviour, models have been proposed to perform planning as probabilistic inference but this approach encounters a crucial problem: explaining how such inference might be implemented in brain spiking networks. Recently, the literature has proposed some models that face this problem through recurrent spiking neural networks able to internally simulate state trajectories, the core function at the basis of planning. However, the proposed models have relevant limitations that make them biologically implausible, namely their world model is trained ‘off-line’ before solving the target tasks, and they are trained with supervised learning procedures that are biologically and ecologically not plausible. Here we propose two novel hypotheses on how brain might overcome these problems, and operationalise them in a novel architecture pivoting on a spiking recurrent neural network. The first hypothesis allows the architecture to learn the world model in parallel with its use for planning: to this purpose, a new arbitration mechanism decides when to explore, for learning the world model, or when to exploit it, for planning, based on the entropy of the world model itself. The second hypothesis allows the architecture to use an unsupervised learning process to learn the world model by observing the effects of actions. The architecture is validated by reproducing and accounting for the learning profiles and reaction times of human participants learning to solve a visuomotor learning task that is new for them. Overall, the architecture represents the first instance of a model bridging probabilistic planning and spiking-processes that has a degree of autonomy analogous to the one of real organisms. Goal-directed behaviour relies on brain processes supporting planning of actions based on their expected consequences before performing them in the environment. An important computational modelling approach proposes that the brain performs goal-directed processes on the basis of probability distributions and computations on them. A key challenge of this approach is to explain how these probabilistic processes can rely on the spiking processes of the brain. The literature has recently proposed some models that do so by ‘thinking ahead’ alternative possible action-outcomes based on low-level neuronal stochastic events. However, these models have a limited autonomy as they require to learn how the environment works (‘world model’) before solving the tasks, and use a biologically implausible learning process requiring an ‘external teacher’ to tell how their internal units should respond. Here we present a novel architecture proposing how organisms might overcome these challenging problems. First, the architecture can decide if exploring, to learn the world model, or planning, using such model, by evaluating how confident it is on the model knowledge. Second, the architecture can autonomously learn the world model based on experience. The architecture represents a first fully autonomous planning model relying on a spiking neural network.
23
Synthesis of recurrent neural dynamics for monotone inclusion with application to Bayesian inference. Neural Netw 2020; 131:231-241. PMID: 32818873. DOI: 10.1016/j.neunet.2020.07.037.
Abstract
We propose a top-down approach to construct recurrent neural circuit dynamics for the mathematical problem of monotone inclusion (MoI). MoI is a general optimization framework that encompasses a wide range of contemporary problems, including Bayesian inference and Markov decision making. We show that in a recurrent neural circuit/network with Poisson neurons, each neuron's firing curve can be understood as a proximal operator of a local objective function, while the overall circuit dynamics constitutes an operator-splitting system of ordinary differential equations whose equilibrium point corresponds to the solution of the MoI problem. Our analysis thus establishes that neural circuits are a substrate for solving a broad class of computational tasks. In this regard, we provide an explicit synthesis procedure for building neural circuits for specific MoI problems and demonstrate it for the specific case of Bayesian inference and sparse neural coding.
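As a concrete instance of a "firing curve as proximal operator" inside an operator-splitting iteration, the sketch below solves a small sparse-coding problem with proximal-gradient (forward-backward) splitting, where the soft-threshold, the proximal operator of the L1 norm, plays the role of the nonlinearity. This is a conventional optimization example chosen for clarity; the paper's construction with Poisson neurons and general monotone operators is broader.

```python
import numpy as np

rng = np.random.default_rng(10)
m, n = 30, 60
A = rng.standard_normal((m, n)) / np.sqrt(m)        # dictionary
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2               # step size from the Lipschitz constant

def prox_l1(v, t):
    """Soft threshold: the proximal operator of t*||.||_1 (the 'firing curve' in this analogy)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n)
for _ in range(500):                                  # forward-backward operator splitting
    grad = A.T @ (A @ x - b)
    x = prox_l1(x - step * grad, step * lam)

print("nonzero coefficients recovered:", np.count_nonzero(np.abs(x) > 1e-3))
print("reconstruction error:", np.linalg.norm(A @ x - b))
```

Running the same splitting in continuous time gives the operator-splitting ODE system referred to above, whose fixed point solves the inclusion problem.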
24
Limbacher T, Legenstein R. Emergence of Stable Synaptic Clusters on Dendrites Through Synaptic Rewiring. Front Comput Neurosci 2020; 14:57. PMID: 32848681. PMCID: PMC7424032. DOI: 10.3389/fncom.2020.00057.
Abstract
The connectivity structure of neuronal networks in cortex is highly dynamic. This ongoing cortical rewiring is assumed to serve important functions for learning and memory. We analyze in this article a model for the self-organization of synaptic inputs onto dendritic branches of pyramidal cells. The model combines a generic stochastic rewiring principle with a simple synaptic plasticity rule that depends on local dendritic activity. In computer simulations, we find that this synaptic rewiring model leads to synaptic clustering, that is, temporally correlated inputs become locally clustered on dendritic branches. This empirical finding is backed up by a theoretical analysis which shows that rewiring in our model favors network configurations with synaptic clustering. We propose that synaptic clustering plays an important role in the organization of computation and memory in cortical circuits: we find that synaptic clustering through the proposed rewiring mechanism can serve as a mechanism to protect memories from subsequent modifications on a medium time scale. Rewiring of synaptic connections onto specific dendritic branches may thus counteract the general problem of catastrophic forgetting in neural networks.
Affiliation(s)
- Robert Legenstein
  - Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
25
Whitwell HJ, Bacalini MG, Blyuss O, Chen S, Garagnani P, Gordleeva SY, Jalan S, Ivanchenko M, Kanakov O, Kustikova V, Mariño IP, Meyerov I, Ullner E, Franceschi C, Zaikin A. The Human Body as a Super Network: Digital Methods to Analyze the Propagation of Aging. Front Aging Neurosci 2020; 12:136. PMID: 32523526. PMCID: PMC7261843. DOI: 10.3389/fnagi.2020.00136.
Abstract
Biological aging is a complex process involving multiple biological processes. These can be understood theoretically though considering them as individual networks-e.g., epigenetic networks, cell-cell networks (such as astroglial networks), and population genetics. Mathematical modeling allows the combination of such networks so that they may be studied in unison, to better understand how the so-called "seven pillars of aging" combine and to generate hypothesis for treating aging as a condition at relatively early biological ages. In this review, we consider how recent progression in mathematical modeling can be utilized to investigate aging, particularly in, but not exclusive to, the context of degenerative neuronal disease. We also consider how the latest techniques for generating biomarker models for disease prediction, such as longitudinal analysis and parenclitic analysis can be applied to as both biomarker platforms for aging, as well as to better understand the inescapable condition. This review is written by a highly diverse and multi-disciplinary team of scientists from across the globe and calls for greater collaboration between diverse fields of research.
Collapse
Affiliation(s)
- Harry J Whitwell
- Department of Chemical Engineering, Imperial College London, London, United Kingdom
| | | | - Oleg Blyuss
- School of Physics, Astronomy and Mathematics, University of Hertfordshire, Hatfield, United Kingdom.,Department of Paediatrics and Paediatric Infectious Diseases, Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
| | - Shangbin Chen
- Britton Chance Centre for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China
| | - Paolo Garagnani
- Department of Experimental, Diagnostic and Specialty Medicine (DIMES), University of Bologna, Bologna, Italy
| | - Susan Yu Gordleeva
- Laboratory of Systems Medicine of Healthy Aging, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
| | - Sarika Jalan
- Complex Systems Laboratory, Discipline of Physics, Indian Institute of Technology Indore, Indore, India.,Centre for Bio-Science and Bio-Medical Engineering, Indian Institute of Technology Indore, Indore, India
| | - Mikhail Ivanchenko
- Institute of Information Technologies, Mathematics and Mechanics, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
| | - Oleg Kanakov
- Laboratory of Systems Medicine of Healthy Aging, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
| | - Valentina Kustikova
- Institute of Information Technologies, Mathematics and Mechanics, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
| | - Ines P Mariño
- Department of Biology and Geology, Physics and Inorganic Chemistry, Universidad Rey Juan Carlos, Madrid, Spain
| | - Iosif Meyerov
- Institute of Information Technologies, Mathematics and Mechanics, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
| | - Ekkehard Ullner
- Department of Physics (SUPA), Institute for Complex Systems and Mathematical Biology, University of Aberdeen, Aberdeen, United Kingdom
| | - Claudio Franceschi
- Laboratory of Systems Medicine of Healthy Aging, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia.,Institute of Information Technologies, Mathematics and Mechanics, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
| | - Alexey Zaikin
- Department of Paediatrics and Paediatric Infectious Diseases, Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia.,Institute of Information Technologies, Mathematics and Mechanics, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia.,Department of Mathematics, Institute for Women's Health, University College London, London, United Kingdom
| |
Collapse
|
26
|
Fang Y, Yu Z, Chen F. Noise Helps Optimization Escape From Saddle Points in the Synaptic Plasticity. Front Neurosci 2020; 14:343. [PMID: 32410937 PMCID: PMC7201302 DOI: 10.3389/fnins.2020.00343] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2019] [Accepted: 03/23/2020] [Indexed: 11/20/2022] Open
Abstract
Numerous experimental studies suggest that noise is inherent in the human brain. However, the functional importance of noise remains unknown. In particular, from a computational perspective, such stochasticity is potentially harmful to brain function. In machine learning, a large number of saddle points are surrounded by high error plateaus and give the illusion of local minima. As a result, being trapped at saddle points can dramatically impair learning, whereas adding noise can counteract such saddle point problems in high-dimensional optimization, especially under the strict saddle condition. Motivated by these arguments, we propose a biologically plausible noise structure and demonstrate that noise can efficiently improve the optimization performance of spiking neural networks based on stochastic gradient descent. The strict saddle condition for synaptic plasticity is deduced, and under such conditions noise can help optimization escape from saddle points on high-dimensional domains. The theoretical results explain the stochasticity of synapses and guide us on how to make use of noise. In addition, we provide biological interpretations of the proposed noise structures from two angles: one based on the free energy principle in neuroscience and another based on observations from in vivo experiments. Our simulation results show that in the learning and test phases, the accuracy of synaptic sampling with noise is almost 20% higher than that without noise for a synthetic dataset, and the gain in accuracy from adding noise is at least 10% for the MNIST and CIFAR-10 datasets. Our study provides a new learning framework for the brain and sheds new light on deep noisy spiking neural networks.
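The core optimization argument can be illustrated with a toy example that is independent of the paper's spiking-network setting: on the strict-saddle function f(x, y) = x^2 - y^2, noiseless gradient descent started on the stable manifold stalls at the saddle, while adding small isotropic noise to each update lets the iterate find the descending direction. A minimal sketch of this generic effect (not the synaptic-sampling rule proposed in the paper):

```python
import numpy as np

# Toy strict-saddle objective f(x, y) = x^2 - y^2; the origin is a saddle point.
def f(p):
    return p[0] ** 2 - p[1] ** 2

def grad(p):
    return np.array([2.0 * p[0], -2.0 * p[1]])

def steps_to_escape(noise_std, lr=0.05, max_steps=2000, seed=0):
    """Gradient descent from (1, 0); count steps until f drops below -1."""
    rng = np.random.default_rng(seed)
    p = np.array([1.0, 0.0])          # start on the stable manifold of the saddle
    for step in range(max_steps):
        if f(p) < -1.0:
            return step
        p = p - lr * grad(p) + noise_std * rng.standard_normal(2)
    return None                       # never escaped within the step budget

print("noise-free descent escapes after:", steps_to_escape(0.0))   # None: stuck at the saddle
print("noisy descent escapes after:    ", steps_to_escape(0.01))   # a finite number of steps
```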
Collapse
Affiliation(s)
- Ying Fang
- Department of Automation, Center for Brain-Inspired Computing Research, Tsinghua University, Beijing, China
- Beijing Innovation Center for Future Chip, Beijing, China
- Beijing Key Laboratory of Security in Big Data Processing and Application, Beijing, China
| | - Zhaofei Yu
- National Engineering Laboratory for Video Technology, School of Electronics Engineering and Computer Science, Peking University, Beijing, China
| | - Feng Chen
- Department of Automation, Center for Brain-Inspired Computing Research, Tsinghua University, Beijing, China
- Beijing Innovation Center for Future Chip, Beijing, China
- Beijing Key Laboratory of Security in Big Data Processing and Application, Beijing, China
| |
Collapse
|
27
|
Matzner A, Gorodetski L, Korngreen A, Bar-Gad I. Dynamic input-dependent encoding of individual basal ganglia neurons. Sci Rep 2020; 10:5833. [PMID: 32242059 PMCID: PMC7118110 DOI: 10.1038/s41598-020-62750-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2019] [Accepted: 03/16/2020] [Indexed: 11/09/2022] Open
Abstract
Computational models are crucial to studying the encoding of individual neurons. Static models are composed of a fixed set of parameters, thus resulting in static encoding properties that do not change under different inputs. Here, we challenge this basic concept which underlies these models. Using generalized linear models, we quantify the encoding and information processing properties of basal ganglia neurons recorded in-vitro. These properties are highly sensitive to the internal state of the neuron due to factors such as dependency on the baseline firing rate. Verification of these experimental results with simulations provides insights into the mechanisms underlying this input-dependent encoding. Thus, static models, which are not context dependent, represent only part of the neuronal encoding capabilities, and are not sufficient to represent the dynamics of a neuron over varying inputs. Input-dependent encoding is crucial for expanding our understanding of neuronal behavior in health and disease and underscores the need for a new generation of dynamic neuronal models.
Collapse
Affiliation(s)
- Ayala Matzner
- The Leslie & Susan Goldschmied (Gonda) Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat-Gan, Israel
| | - Lilach Gorodetski
- Goodman Faculty of Life Sciences, Bar-Ilan University, Ramat-Gan, Israel
| | - Alon Korngreen
- The Leslie & Susan Goldschmied (Gonda) Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat-Gan, Israel.,Goodman Faculty of life sciences, Bar-Ilan University, Ramat-Gan, Israel
| | - Izhar Bar-Gad
- The Leslie & Susan Goldschmied (Gonda) Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat-Gan, Israel.
| |
Collapse
|
28
|
Yu Z, Guo S, Deng F, Yan Q, Huang K, Liu JK, Chen F. Emergent Inference of Hidden Markov Models in Spiking Neural Networks Through Winner-Take-All. IEEE TRANSACTIONS ON CYBERNETICS 2020; 50:1347-1354. [PMID: 30295641 DOI: 10.1109/tcyb.2018.2871144] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Hidden Markov models (HMMs) underpin the solution to many problems in computational neuroscience. However, it is still unclear how to implement inference of HMMs with a network of neurons in the brain. The existing methods suffer from the problem of being nonspiking and inaccurate. Here, we build a precise equivalence between the inference equation of HMMs with time-invariant hidden variables and the dynamics of spiking winner-take-all (WTA) neural networks. We show that the membrane potential of each spiking neuron in the WTA circuit encodes the logarithm of the posterior probability of the hidden variable in each state, and the firing rate of each neuron is proportional to the posterior probability of the HMMs. We prove that the time course of the neural firing rate can implement posterior inference of HMMs. Theoretical analysis and experimental results show that the proposed WTA circuit can get accurate inference results of HMMs.
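The computation that the WTA circuit is shown to implement is Bayesian filtering for a hidden variable that does not change over time: membrane potentials track log-posteriors and divisive normalization plays the role of a softmax. A minimal non-spiking sketch of that underlying recursion, with an assumed categorical observation model rather than the network implementation described in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hidden variable z with K states, fixed over time; categorical observations x.
K, n_obs = 3, 4
prior = np.full(K, 1.0 / K)
lik = rng.dirichlet(np.ones(n_obs), size=K)   # lik[k, x] = p(x | z = k), assumed known

true_z = 1
obs = rng.choice(n_obs, size=50, p=lik[true_z])

# "Membrane potentials" u_k accumulate log-evidence; the softmax step plays the
# role of the divisive (winner-take-all) normalization across the population.
u = np.log(prior)
for x in obs:
    u = u + np.log(lik[:, x])
    posterior = np.exp(u - u.max())
    posterior /= posterior.sum()              # "firing rates" ~ p(z | x_1..t)

print("posterior after 50 observations:", np.round(posterior, 3))
print("MAP state:", int(posterior.argmax()), "| true state:", true_z)
```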
Collapse
|
29
|
Latimer KW, Rieke F, Pillow JW. Inferring synaptic inputs from spikes with a conductance-based neural encoding model. eLife 2019; 8:47012. [PMID: 31850846 PMCID: PMC6989090 DOI: 10.7554/elife.47012] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2019] [Accepted: 12/17/2019] [Indexed: 01/15/2023] Open
Abstract
Descriptive statistical models of neural responses generally aim to characterize the mapping from stimuli to spike responses while ignoring biophysical details of the encoding process. Here, we introduce an alternative approach, the conductance-based encoding model (CBEM), which describes a mapping from stimuli to excitatory and inhibitory synaptic conductances governing the dynamics of sub-threshold membrane potential. Remarkably, we show that the CBEM can be fit to extracellular spike train data and then used to predict excitatory and inhibitory synaptic currents. We validate these predictions with intracellular recordings from macaque retinal ganglion cells. Moreover, we offer a novel quasi-biophysical interpretation of the Poisson generalized linear model (GLM) as a special case of the CBEM in which excitation and inhibition are perfectly balanced. This work forges a new link between statistical and biophysical models of neural encoding and sheds new light on the biophysical variables that underlie spiking in the early visual pathway.
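The generative structure of a conductance-based encoding model can be sketched as follows: the stimulus is passed through two linear filters, rectified into excitatory and inhibitory conductances, and these drive a passive membrane whose voltage sets the instantaneous spike rate. Filter shapes, scales, and the output nonlinearity below are illustrative assumptions, not the fitted CBEM parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 1.0, 2000.0                        # ms
t = np.arange(0.0, T, dt)
stim = rng.standard_normal(t.size)         # white-noise stimulus

# Hypothetical stimulus filters driving excitation and inhibition (assumed shapes).
kern = lambda tau: np.exp(-np.arange(0.0, 5 * tau, dt) / tau)
drive_e = np.convolve(stim, kern(10.0))[: t.size]
drive_i = np.convolve(-stim, kern(30.0))[: t.size]

softplus = lambda x: np.log1p(np.exp(x))   # keeps conductances non-negative
g_e = 0.5 * softplus(drive_e)              # relative to the leak conductance
g_i = 0.5 * softplus(drive_i)

# Subthreshold membrane dynamics driven by the two conductances (Euler steps).
tau_m, E_l, E_e, E_i = 10.0, -70.0, 0.0, -80.0   # ms, mV
V = np.empty(t.size)
V[0] = E_l
for i in range(1, t.size):
    dV = (-(V[i-1] - E_l) - g_e[i-1] * (V[i-1] - E_e)
          - g_i[i-1] * (V[i-1] - E_i)) / tau_m
    V[i] = V[i-1] + dt * dV

rate = softplus((V + 60.0) / 2.0)          # voltage-to-rate nonlinearity (illustrative)
print("mean V = %.1f mV, mean rate (a.u.) = %.2f" % (V.mean(), rate.mean()))
```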
Collapse
Affiliation(s)
- Kenneth W Latimer
- Department of Physiology and Biophysics, University of Washington, Seattle, United States
| | - Fred Rieke
- Department of Physiology and Biophysics, University of Washington, Seattle, United States
| | - Jonathan W Pillow
- Princeton Neuroscience Institute, Department of Psychology, Princeton University, Princeton, United States
| |
Collapse
|
30
|
Lubejko ST, Fontaine B, Soueidan SE, MacLeod KM. Spike threshold adaptation diversifies neuronal operating modes in the auditory brain stem. J Neurophysiol 2019; 122:2576-2590. [PMID: 31577531 DOI: 10.1152/jn.00234.2019] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Single neurons function along a spectrum of neuronal operating modes whose properties determine how the output firing activity is generated from synaptic input. The auditory brain stem contains a diversity of neurons, from pure coincidence detectors to pure integrators and those with intermediate properties. We investigated how intrinsic spike initiation mechanisms regulate neuronal operating mode in the avian cochlear nucleus. Although the neurons in one division of the avian cochlear nucleus, nucleus magnocellularis, have been studied in depth, the spike threshold dynamics of the tonically firing neurons of a second division of cochlear nucleus, nucleus angularis (NA), remained unexplained. The input-output functions of tonically firing NA neurons were interrogated with directly injected in vivo-like current stimuli during whole cell patch-clamp recordings in vitro. Increasing the amplitude of the noise fluctuations in the current stimulus enhanced the firing rates in one subset of tonically firing neurons ("differentiators") but not another ("integrators"). We found that spike thresholds showed significantly greater adaptation and variability in the differentiator neurons. A leaky integrate-and-fire neuronal model with an adaptive spike initiation process derived from sodium channel dynamics was fit to the firing responses and could recapitulate >80% of the precise temporal firing across a range of fluctuation and mean current levels. Greater threshold adaptation explained the frequency-current curve changes due to a hyperpolarized shift in the effective adaptation voltage range and longer-lasting threshold adaptation in differentiators. The fine-tuning of the intrinsic properties of different NA neurons suggests they may have specialized roles in spectrotemporal processing.
NEW & NOTEWORTHY Avian cochlear nucleus angularis (NA) neurons are responsible for encoding sound intensity for sound localization and spectrotemporal processing. An adaptive spike threshold mechanism fine-tunes a subset of repetitive-spiking neurons in NA to confer coincidence detector-like properties. A model based on sodium channel inactivation properties reproduced the activity via a hyperpolarized shift in adaptation conferring fluctuation sensitivity.
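The adaptive-threshold mechanism invoked here can be sketched with a leaky integrate-and-fire neuron whose threshold jumps after every spike and relaxes back; with a near-threshold mean drive, the firing rate then depends on the amplitude of the input fluctuations, qualitatively like the "differentiator" cells. A generic sketch with assumed parameters, not the model fitted in the paper:

```python
import numpy as np

def lif_adaptive_threshold(I, dt=0.1, tau_m=20.0, v_rest=-65.0, v_reset=-65.0,
                           theta0=-50.0, d_theta=8.0, tau_theta=80.0):
    """Leaky integrate-and-fire neuron whose threshold jumps by d_theta after
    each spike and relaxes back to theta0 with time constant tau_theta."""
    v, theta = v_rest, theta0
    spike_times = []
    for i, drive in enumerate(I):
        v += dt * (-(v - v_rest) + drive) / tau_m
        theta += dt * (theta0 - theta) / tau_theta
        if v >= theta:
            spike_times.append(i * dt)
            v = v_reset
            theta += d_theta
    return np.array(spike_times)

rng = np.random.default_rng(0)
T, dt = 5000.0, 0.1                               # ms
n = int(T / dt)
mean_drive = 14.0                                 # keeps v just below threshold on average
for sigma in (2.0, 8.0):                          # weak vs. strong fluctuations
    I = mean_drive + sigma / np.sqrt(dt) * rng.standard_normal(n)
    rate = 1000.0 * lif_adaptive_threshold(I).size / T
    print("fluctuation sigma = %.0f -> firing rate = %.1f Hz" % (sigma, rate))
```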
Collapse
Affiliation(s)
- Susan T Lubejko
- Department of Biology, University of Maryland, College Park, Maryland
| | - Bertrand Fontaine
- Laboratory of Auditory Neurophysiology, University of Leuven, Leuven, Belgium
| | - Sara E Soueidan
- Department of Biology, University of Maryland, College Park, Maryland
| | - Katrina M MacLeod
- Department of Biology, University of Maryland, College Park, Maryland.,Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland.,Center for the Comparative and Evolutionary Biology of Hearing, University of Maryland, College Park, Maryland
| |
Collapse
|
31
|
Ujfalussy BB, Makara JK, Lengyel M, Branco T. Global and Multiplexed Dendritic Computations under In Vivo-like Conditions. Neuron 2019; 100:579-592.e5. [PMID: 30408443 PMCID: PMC6226578 DOI: 10.1016/j.neuron.2018.08.032] [Citation(s) in RCA: 44] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2017] [Revised: 07/07/2018] [Accepted: 08/21/2018] [Indexed: 10/27/2022]
Abstract
Dendrites integrate inputs nonlinearly, but it is unclear how these nonlinearities contribute to the overall input-output transformation of single neurons. We developed statistically principled methods using a hierarchical cascade of linear-nonlinear subunits (hLN) to model the dynamically evolving somatic response of neurons receiving complex, in vivo-like spatiotemporal synaptic input patterns. We used the hLN to predict the somatic membrane potential of an in vivo-validated detailed biophysical model of a L2/3 pyramidal cell. Linear input integration with a single global dendritic nonlinearity achieved above 90% prediction accuracy. A novel hLN motif, input multiplexing into parallel processing channels, could improve predictions as much as conventionally used additional layers of local nonlinearities. We obtained similar results in two other cell types. This approach provides a data-driven characterization of a key component of cortical circuit computations: the input-output transformation of neurons during in vivo-like conditions.
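The hLN idea, linear integration within subunits followed by subunit-level nonlinearities feeding a global output nonlinearity, can be sketched in a few lines. Here two "dendritic branches" each filter their Poisson inputs with an exponential synaptic kernel and saturate through a sigmoid before the "soma" combines them; kernel shapes, weights, and scalings are assumptions for illustration, not the fitted models of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 1.0, 1000.0                                  # ms
n_steps = int(T / dt)

# Presynaptic spike trains for two dendritic branches (10 Poisson inputs each, ~20 Hz).
spikes = (rng.random((2, 10, n_steps)) < 0.02).astype(float)

kern = np.exp(-np.arange(0.0, 50.0, dt) / 10.0)      # exponential synaptic kernel
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def filter_train(s):
    return np.convolve(s, kern)[:n_steps]

# Subunit stage: each branch linearly sums its filtered inputs, then saturates.
branch_drive = np.array([[filter_train(s) for s in branch]
                         for branch in spikes]).sum(axis=1)
branch_out = sigmoid(0.5 * branch_drive - 2.0)       # local dendritic nonlinearity

# Output stage: the "soma" combines branch outputs through a global nonlinearity
# to predict the somatic membrane potential (illustrative scaling).
v_soma = -65.0 + 10.0 * sigmoid(2.0 * branch_out.sum(axis=0) - 2.0)
print("predicted somatic potential range: %.1f to %.1f mV" % (v_soma.min(), v_soma.max()))
```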
Collapse
Affiliation(s)
- Balázs B Ujfalussy
- MRC Laboratory of Molecular Biology, Cambridge, UK; Laboratory of Neuronal Signaling, Institute of Experimental Medicine, Budapest, Hungary; Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK; MTA Wigner Research Center for Physics, Budapest, Hungary.
| | - Judit K Makara
- Laboratory of Neuronal Signaling, Institute of Experimental Medicine, Budapest, Hungary
| | - Máté Lengyel
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK; Department of Cognitive Science, Central European University, Budapest, Hungary
| | - Tiago Branco
- MRC Laboratory of Molecular Biology, Cambridge, UK; Sainsbury Wellcome Centre, University College London, London, UK
| |
Collapse
|
32
|
Levakova M, Kostal L, Monsempès C, Lucas P, Kobayashi R. Adaptive integrate-and-fire model reproduces the dynamics of olfactory receptor neuron responses in a moth. J R Soc Interface 2019; 16:20190246. [PMID: 31387478 PMCID: PMC6731495 DOI: 10.1098/rsif.2019.0246] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/12/2023] Open
Abstract
In order to understand how olfactory stimuli are encoded and processed in the brain, it is important to build a computational model for olfactory receptor neurons (ORNs). Here, we present a simple and reliable mathematical model of a moth ORN generating spikes. The model incorporates a simplified description of the chemical kinetics leading to olfactory receptor activation and action potential generation. We show that an adaptive spike threshold regulated by prior spike history is an effective mechanism for reproducing the typical phasic-tonic time course of ORN responses. Our model reproduces the response dynamics of individual neurons to a fluctuating stimulus that approximates odorant fluctuations in nature. The parameters of the spike threshold are essential for reproducing the response heterogeneity in ORNs. The model provides a valuable tool for efficient simulations of olfactory circuits.
Collapse
Affiliation(s)
- Marie Levakova
- Department of Computational Neuroscience, Institute of Physiology of the Czech Academy of Sciences, Videnska 1083, 14220 Prague 4, Czech Republic
| | - Lubomir Kostal
- Department of Computational Neuroscience, Institute of Physiology of the Czech Academy of Sciences, Videnska 1083, 14220 Prague 4, Czech Republic
| | - Christelle Monsempès
- Institute of Ecology and Environmental Sciences, INRA, route de St Cyr, 78000 Versailles, France
| | - Philippe Lucas
- Institute of Ecology and Environmental Sciences, INRA, route de St Cyr, 78000 Versailles, France
| | - Ryota Kobayashi
- Principles of Informatics Research Division, National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo, Japan.,Department of Informatics, SOKENDAI (The Graduate University for Advanced Studies), 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo, Japan
| |
Collapse
|
33
|
Geminiani A, Casellato C, D'Angelo E, Pedrocchi A. Complex Electroresponsive Dynamics in Olivocerebellar Neurons Represented With Extended-Generalized Leaky Integrate and Fire Models. Front Comput Neurosci 2019; 13:35. [PMID: 31244635 PMCID: PMC6563830 DOI: 10.3389/fncom.2019.00035] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2019] [Accepted: 05/20/2019] [Indexed: 11/24/2022] Open
Abstract
The neurons of the olivocerebellar circuit exhibit complex electroresponsive dynamics, which are thought to play a fundamental role for network entraining, plasticity induction, signal processing, and noise filtering. In order to reproduce these properties in single-point neuron models, we have optimized the Extended-Generalized Leaky Integrate and Fire (E-GLIF) neuron through a multi-objective gradient-based algorithm targeting the desired input–output relationships. In this way, E-GLIF was tuned toward the unique input–output properties of Golgi cells, granule cells, Purkinje cells, molecular layer interneurons, deep cerebellar nuclei cells, and inferior olivary cells. E-GLIF proved able to simulate the complex cell-specific electroresponsive dynamics of the main olivocerebellar neurons including pacemaking, adaptation, bursting, post-inhibitory rebound excitation, subthreshold oscillations, resonance, and phase reset. The integration of these E-GLIF point-neuron models into olivocerebellar Spiking Neural Networks will allow to evaluate the impact of complex electroresponsive dynamics at the higher scales, up to motor behavior, in closed-loop simulations of sensorimotor tasks.
Collapse
Affiliation(s)
- Alice Geminiani
- NEARLab, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
| | - Claudia Casellato
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
| | - Egidio D'Angelo
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
| | - Alessandra Pedrocchi
- NEARLab, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
| |
Collapse
|
34
|
Naud R, Longtin A. Linking demyelination to compound action potential dispersion with a spike-diffuse-spike approach. JOURNAL OF MATHEMATICAL NEUROSCIENCE 2019; 9:3. [PMID: 31147800 PMCID: PMC6542900 DOI: 10.1186/s13408-019-0071-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/10/2018] [Accepted: 05/20/2019] [Indexed: 06/09/2023]
Abstract
To establish and exploit novel biomarkers of demyelinating diseases requires a mechanistic understanding of axonal propagation. Here, we present a novel computational framework called the stochastic spike-diffuse-spike (SSDS) model for assessing the effects of demyelination on axonal transmission. It models transmission through nodal and internodal compartments with two types of operations: a stochastic integrate-and-fire operation captures nodal excitability and a linear filtering operation describes internodal propagation. The effects of demyelinated segments on the probability of transmission, transmission delay and spike time jitter are explored. We argue that demyelination-induced impedance mismatch prevents propagation mostly when the action potential leaves a demyelinated region, not when it enters a demyelinated region. In addition, we model sodium channel remodeling as a homeostatic control of nodal excitability. We find that the effects of mild demyelination on transmission probability and delay can be largely counterbalanced by an increase in excitability at the nodes surrounding the demyelination. The spike timing jitter, however, reflects the level of demyelination whether excitability is fixed or is allowed to change in compensation. This jitter can accumulate over long axons and leads to a broadening of the compound action potential, linking microscopic defects to a mesoscopic observable. Our findings articulate why action potential jitter and compound action potential dispersion can serve as potential markers of weak and sporadic demyelination.
Collapse
Affiliation(s)
- Richard Naud
- Ottawa Brain and Mind Research Institute, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Canada
- Department of Physics, University of Ottawa, Ottawa, Canada
| | - André Longtin
- Department of Physics, University of Ottawa, Ottawa, Canada
| |
Collapse
|
35
|
Geminiani A, Casellato C, Locatelli F, Prestori F, Pedrocchi A, D'Angelo E. Complex Dynamics in Simplified Neuronal Models: Reproducing Golgi Cell Electroresponsiveness. Front Neuroinform 2018; 12:88. [PMID: 30559658 PMCID: PMC6287018 DOI: 10.3389/fninf.2018.00088] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2018] [Accepted: 11/13/2018] [Indexed: 11/21/2022] Open
Abstract
Brain neurons exhibit complex electroresponsive properties – including intrinsic subthreshold oscillations and pacemaking, resonance and phase-reset – which are thought to play a critical role in controlling neural network dynamics. Although these properties emerge from detailed representations of molecular-level mechanisms in “realistic” models, they cannot usually be generated by simplified neuronal models (although these may show spike-frequency adaptation and bursting). We report here that this whole set of properties can be generated by the extended generalized leaky integrate-and-fire (E-GLIF) neuron model. E-GLIF derives from the GLIF model family and is therefore mono-compartmental, keeps the limited computational load typical of a linear low-dimensional system, admits analytical solutions and can be tuned through gradient-descent algorithms. Importantly, E-GLIF is designed to maintain a correspondence between model parameters and neuronal membrane mechanisms through a minimum set of equations. In order to test its potential, E-GLIF was used to model a specific neuron showing rich and complex electroresponsiveness, the cerebellar Golgi cell, and was validated against experimental electrophysiological data recorded from Golgi cells in acute cerebellar slices. During simulations, E-GLIF was activated by stimulus patterns, including current steps and synaptic inputs, identical to those used for the experiments. The results demonstrate that E-GLIF can reproduce the whole set of complex neuronal dynamics typical of these neurons – including intensity-frequency curves, spike-frequency adaptation, post-inhibitory rebound bursting, spontaneous subthreshold oscillations, resonance, and phase-reset – providing a new effective tool to investigate brain dynamics in large-scale simulations.
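Of the behaviors listed above, pacemaking and spike-frequency adaptation are the easiest to illustrate with a single-point model: an intrinsic depolarizing current produces autonomous firing, and a spike-triggered adaptation current slows firing down after a depolarizing step. The sketch below is a generic GLIF in the spirit of E-GLIF, with assumed parameters and without the depolarizing spike-triggered current needed for rebound bursting, subthreshold oscillations, resonance, or phase reset:

```python
import numpy as np

def glif_with_adaptation(I_ext, dt=0.1, tau_m=20.0, E_l=-65.0, v_thr=-50.0,
                         v_reset=-60.0, I_e=16.0, tau_w=120.0, b=3.0):
    """Single-compartment GLIF: an intrinsic current I_e produces pacemaking,
    and a spike-triggered adaptation current w (incremented by b at each spike,
    decaying with tau_w) produces spike-frequency adaptation. A simplified
    sketch in the spirit of E-GLIF, not its exact equations or parameters."""
    v, w = E_l, 0.0
    spike_times = []
    for i, I in enumerate(I_ext):
        v += dt * (-(v - E_l) + I_e + I - w) / tau_m
        w += dt * (-w) / tau_w
        if v >= v_thr:
            spike_times.append(i * dt)
            v = v_reset
            w += b
    return np.array(spike_times)

dt, T = 0.1, 2000.0                          # ms
n = int(T / dt)
I_ext = np.zeros(n)
I_ext[n // 2:] = 10.0                        # depolarizing step at t = 1 s
spikes = glif_with_adaptation(I_ext, dt=dt)
isi = np.diff(spikes)
print("spontaneous rate ~ %.1f Hz" % (1000.0 / isi[spikes[:-1] < 900].mean()))
print("first ISI after step: %.1f ms, adapted ISI: %.1f ms"
      % (isi[spikes[:-1] >= 1000][0], isi[spikes[:-1] >= 1000][-5:].mean()))
```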
Collapse
Affiliation(s)
- Alice Geminiani
- NEARLab, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
| | - Claudia Casellato
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
| | - Francesca Locatelli
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
| | - Francesca Prestori
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
| | - Alessandra Pedrocchi
- NEARLab, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
| | - Egidio D'Angelo
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
| |
Collapse
|
36
|
Lin J, Yuan JS. Analysis and Simulation of Capacitor-Less ReRAM-Based Stochastic Neurons for the in-Memory Spiking Neural Network. IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS 2018; 12:1004-1017. [PMID: 30010591 DOI: 10.1109/tbcas.2018.2843286] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
The stochastic neuron is a key element of event-based probabilistic neural networks. We propose a stochastic neuron using a metal-oxide resistive random-access memory (ReRAM). The ReRAM's conducting filament with built-in stochasticity is used to mimic the neuron's membrane capacitor, which temporally integrates input spikes. A capacitor-less neuron circuit is designed, laid out, and simulated. The output spike train of the neuron obeys the Poisson distribution. Using the 65-nm CMOS technology node, the area of the neuron is one ninth the size of a 1-pF capacitor. The average power consumption of the neuron is 1.289 W. We introduce the neural array, a modified one-transistor-one-ReRAM (1T1R) crossbar that integrates the ReRAM neurons with ReRAM synapses to form a compact and energy-efficient in-memory spiking neural network. A spiking deep belief network (DBN) with a noisy rectified linear unit (NReLU) is trained and mapped to the spiking DBN using the proposed ReRAM neurons. Simulation results show that the ReRAM neuron-based DBN is able to recognize handwritten digits with 94.7% accuracy and is robust against the ReRAM process variation effect.
Collapse
|
37
|
Heiberg T, Kriener B, Tetzlaff T, Einevoll GT, Plesser HE. Firing-rate models for neurons with a broad repertoire of spiking behaviors. J Comput Neurosci 2018; 45:103-132. [PMID: 30146661 PMCID: PMC6208914 DOI: 10.1007/s10827-018-0693-9] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2018] [Revised: 08/01/2018] [Accepted: 08/02/2018] [Indexed: 11/29/2022]
Abstract
Capturing the response behavior of spiking neuron models with rate-based models facilitates the investigation of neuronal networks using powerful methods for rate-based network dynamics. To this end, we investigate the responses of two widely used neuron model types, the Izhikevich and augmented multi-adaptive threshold (AMAT) models, to a range of inputs, from step responses to natural spike data. We find (i) that linear-nonlinear firing rate models fitted to test data can be used to describe the firing-rate responses of AMAT and Izhikevich spiking neuron models in many cases; (ii) that firing-rate responses are generally too complex to be captured by first-order low-pass filters but require bandpass filters instead; (iii) that linear-nonlinear models capture the response of AMAT models better than of Izhikevich models; (iv) that the wide range of response types evoked by current-injection experiments collapses to a few response types when neurons are driven by stationary or sinusoidally modulated Poisson input; and (v) that AMAT and Izhikevich models show different responses to spike input despite identical responses to current injections. Together, these findings suggest that rate-based models of network dynamics may capture a wider range of neuronal response properties by incorporating second-order bandpass filters fitted to responses of spiking model neurons. These models may contribute to bringing rate-based network modeling closer to the reality of biological neuronal networks.
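Finding (ii), that firing-rate responses call for bandpass rather than first-order low-pass filters, can be illustrated with a linear-nonlinear rate model in which the linear stage is a difference of two exponentials: the bandpass version produces a transient overshoot to a step input that a single low-pass filter cannot. Kernel time constants and the output nonlinearity below are assumptions for illustration:

```python
import numpy as np

dt = 1.0                                        # ms
t_k = np.arange(0.0, 300.0, dt)

def lowpass_kernel(tau):
    k = np.exp(-t_k / tau)
    return k / k.sum()

# Second-order bandpass filter as a difference of two normalized low-passes.
bandpass = lowpass_kernel(10.0) - lowpass_kernel(80.0)

relu = lambda x: np.maximum(x, 0.0)

def rate_response(stim, kernel, gain=50.0, offset=5.0):
    """Linear-nonlinear rate model: filter the input, then rectify."""
    drive = np.convolve(stim, kernel)[: stim.size]
    return relu(gain * drive + offset)           # firing rate in Hz

# Step input: the bandpass filter yields a transient overshoot that relaxes back,
# whereas a single low-pass filter only yields a monotonic rise.
step = np.zeros(1000)
step[200:] = 1.0
r_band = rate_response(step, bandpass)
r_low = rate_response(step, lowpass_kernel(10.0))
print("bandpass: peak %.1f Hz -> steady %.1f Hz" % (r_band.max(), r_band[-1]))
print("low-pass: peak %.1f Hz -> steady %.1f Hz" % (r_low.max(), r_low[-1]))
```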
Collapse
Affiliation(s)
- Thomas Heiberg
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
| | - Birgit Kriener
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway.,Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
| | - Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6), Jülich Research Centre, Jülich, Germany.,Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany.,JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
| | - Gaute T Einevoll
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway.,Department of Physics, University of Oslo, Oslo, Norway
| | - Hans E Plesser
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway. .,Institute of Neuroscience and Medicine (INM-6), Jülich Research Centre, Jülich, Germany.
| |
Collapse
|
38
|
Ullner E, Politi A, Torcini A. Ubiquity of collective irregular dynamics in balanced networks of spiking neurons. CHAOS (WOODBURY, N.Y.) 2018; 28:081106. [PMID: 30180628 DOI: 10.1063/1.5049902] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/26/2018] [Accepted: 08/09/2018] [Indexed: 06/08/2023]
Abstract
We revisit the dynamics of a prototypical model of balanced activity in networks of spiking neurons. A detailed investigation of the thermodynamic limit for fixed density of connections (massive coupling) shows that, when inhibition prevails, the asymptotic regime is not asynchronous but rather characterized by a self-sustained irregular, macroscopic (collective) dynamics. So long as the connectivity is massive, this regime is found in many different setups: leaky as well as quadratic integrate-and-fire neurons; large and small coupling strength; and weak and strong external currents.
Collapse
Affiliation(s)
- Ekkehard Ullner
- Institute for Complex Systems and Mathematical Biology and Department of Physics (SUPA), Old Aberdeen, Aberdeen AB24 3UE, United Kingdom
| | - Antonio Politi
- Institute for Complex Systems and Mathematical Biology and Department of Physics (SUPA), Old Aberdeen, Aberdeen AB24 3UE, United Kingdom
| | - Alessandro Torcini
- Max Planck Institut für Physik komplexer Systeme, Nöthnitzer Str. 38, 01187 Dresden, Germany
| |
Collapse
|
39
|
Neftci EO. Data and Power Efficient Intelligence with Neuromorphic Learning Machines. iScience 2018; 5:52-68. [PMID: 30240646 PMCID: PMC6123858 DOI: 10.1016/j.isci.2018.06.010] [Citation(s) in RCA: 35] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2018] [Revised: 06/04/2018] [Accepted: 06/26/2018] [Indexed: 11/22/2022] Open
Abstract
The success of deep networks and recent industry involvement in brain-inspired computing is igniting a widespread interest in neuromorphic hardware that emulates the biological processes of the brain on an electronic substrate. This review explores interdisciplinary approaches anchored in machine learning theory that enable the applicability of neuromorphic technologies to real-world, human-centric tasks. We find that (1) recent work in binary deep networks and approximate gradient descent learning are strikingly compatible with a neuromorphic substrate; (2) where real-time adaptability and autonomy are necessary, neuromorphic technologies can achieve significant advantages over mainstream ones; and (3) challenges in memory technologies, compounded by a tradition of bottom-up approaches in the field, block the road to major breakthroughs. We suggest that a neuromorphic learning framework, tuned specifically for the spatial and temporal constraints of the neuromorphic substrate, will help guide hardware-algorithm co-design and the deployment of neuromorphic hardware for proactive learning of real-world data.
Collapse
Affiliation(s)
- Emre O Neftci
- Department of Cognitive Sciences, UC Irvine, Irvine, CA 92697-5100, USA; Department of Computer Science, UC Irvine, Irvine, CA 92697-5100, USA.
| |
Collapse
|
40
|
Spike and burst coding in thalamocortical relay cells. PLoS Comput Biol 2018; 14:e1005960. [PMID: 29432418 PMCID: PMC5834212 DOI: 10.1371/journal.pcbi.1005960] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2017] [Revised: 03/02/2018] [Accepted: 01/08/2018] [Indexed: 11/19/2022] Open
Abstract
Mammalian thalamocortical relay (TCR) neurons switch their firing activity between a tonic spiking and a bursting regime. In a combined experimental and computational study, we investigated the features in the input signal that single spikes and bursts in the output spike train represent and how this code is influenced by the membrane voltage state of the neuron. Identical frozen Gaussian noise current traces were injected into TCR neurons in rat brain slices as well as in a validated three-compartment TCR model cell. The resulting membrane voltage traces and spike trains were analyzed by calculating the coherence and impedance. Reverse correlation techniques gave the Event-Triggered Average (ETA) and the Event-Triggered Covariance (ETC). This demonstrated that the feature selectivity started relatively long before the events (up to 300 ms) and showed a clear distinction between spikes (selective for fluctuations) and bursts (selective for integration). The model cell was fine-tuned to mimic the frozen noise initiated spike and burst responses to within experimental accuracy, especially for the mixed mode regimes. The information content carried by the various types of events in the signal as well as by the whole signal was calculated. Bursts phase-lock to and transfer information at lower frequencies than single spikes. On depolarization the neuron transits smoothly from the predominantly bursting regime to a spiking regime, in which it is more sensitive to high-frequency fluctuations. The model was then used to elucidate properties that could not be assessed experimentally, in particular the role of two important subthreshold voltage-dependent currents: the low threshold activated calcium current (IT) and the cyclic nucleotide modulated h current (Ih). The ETAs of those currents and their underlying activation/inactivation states not only explained the state dependence of the firing regime but also the long-lasting concerted dynamic action of the two currents. Finally, the model was used to investigate the more realistic “high-conductance state”, where fluctuations are caused by (synaptic) conductance changes instead of current injection. Under “standard” conditions bursts are difficult to initiate, given the high degree of inactivation of the T-type calcium current. Strong and/or precisely timed inhibitory currents were able to remove this inactivation.
Neurons in the brain respond to (sensory) stimuli by generating electrical pulses called ‘spikes’ or ‘action potentials’. Spikes are organized in different temporal patterns, such as ‘bursts’ in which they occur at a high frequency followed by a period of silence. Bursts are ubiquitous in the nervous system: they occur in different parts of the brain and in different species. Different mechanisms that generate them have been pointed out. Why the nervous system uses bursts in its communication, or what type of information is represented by bursts, remains largely unknown. Here, we looked at bursting in thalamocortical relay (TCR) cells, neurons that form a bridge between early sensory processing and higher-order structures (cortex). These cells fire bursts as a result of the activation of two distinct subthreshold ionic currents: the T-type calcium current and the h-type current. We investigated experimentally and computationally what features in the input make TCR cells respond with bursts, and what features with single spikes. Bursts are a response to low-frequency slowly increasing input; single spikes are a response to faster fluctuations. Moreover, bursts are rare and highly informative, in line with an earlier hypothesis that bursts could play a ‘wake-up call’ role in the nervous system.
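The reverse-correlation step used above (the ETA) amounts to averaging the frozen-noise stimulus in a window preceding each event. A minimal sketch with a stand-in leaky-integrator neuron in place of the recorded TCR cells, and without the burst versus single-spike separation or the ETC analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1.0                                           # ms
stim = rng.standard_normal(50000)                  # frozen Gaussian noise "current"

# Stand-in neuron: leaky integration of the stimulus plus a hard spike threshold.
tau_m, gain, thr = 20.0, 6.0, 1.5
v = 0.0
spike_idx = []
for i, s in enumerate(stim):
    v += dt * (-v + gain * s) / tau_m
    if v > thr:
        spike_idx.append(i)
        v = 0.0

# Event-triggered average: mean stimulus in a 300-ms window preceding each spike.
win = 300
sta = np.zeros(win)
count = 0
for i in spike_idx:
    if i >= win:
        sta += stim[i - win:i]
        count += 1
sta /= max(1, count)

print("%d spikes; ETA peaks %d ms before the spike" % (len(spike_idx), win - sta.argmax()))
```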
Collapse
|
41
|
Florescu D, Coca D. Identification of Linear and Nonlinear Sensory Processing Circuits from Spiking Neuron Data. Neural Comput 2018; 30:670-707. [PMID: 29342394 DOI: 10.1162/neco_a_01051] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Inferring mathematical models of sensory processing systems directly from input-output observations, while making the fewest assumptions about the model equations and the types of measurements available, is still a major issue in computational neuroscience. This letter introduces two new approaches for identifying sensory circuit models consisting of linear and nonlinear filters in series with spiking neuron models, based only on the sampled analog input to the filter and the recorded spike train output of the spiking neuron. For an ideal integrate-and-fire neuron model, the first algorithm can identify the spiking neuron parameters as well as the structure and parameters of an arbitrary nonlinear filter connected to it. The second algorithm can identify the parameters of the more general leaky integrate-and-fire spiking neuron model, as well as the parameters of an arbitrary linear filter connected to it. Numerical studies involving simulated and real experimental recordings are used to demonstrate the applicability and evaluate the performance of the proposed algorithms.
Collapse
Affiliation(s)
- Dorian Florescu
- Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield, S1 3JD, U.K.
| | - Daniel Coca
- Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield, S1 3JD, U.K.
| |
Collapse
|
42
|
Weber AI, Pillow JW. Capturing the Dynamical Repertoire of Single Neurons with Generalized Linear Models. Neural Comput 2017; 29:3260-3289. [PMID: 28957020 DOI: 10.1162/neco_a_01021] [Citation(s) in RCA: 42] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
A key problem in computational neuroscience is to find simple, tractable models that are nevertheless flexible enough to capture the response properties of real neurons. Here we examine the capabilities of recurrent point process models known as Poisson generalized linear models (GLMs). These models are defined by a set of linear filters and a point nonlinearity and are conditionally Poisson spiking. They have desirable statistical properties for fitting and have been widely used to analyze spike trains from electrophysiological recordings. However, the dynamical repertoire of GLMs has not been systematically compared to that of real neurons. Here we show that GLMs can reproduce a comprehensive suite of canonical neural response behaviors, including tonic and phasic spiking, bursting, spike rate adaptation, type I and type II excitation, and two forms of bistability. GLMs can also capture stimulus-dependent changes in spike timing precision and reliability that mimic those observed in real neurons, and can exhibit varying degrees of stochasticity, from virtually deterministic responses to greater-than-Poisson variability. These results show that Poisson GLMs can exhibit a wide range of dynamic spiking behaviors found in real neurons, making them well suited for qualitative dynamical as well as quantitative statistical studies of single-neuron and population response properties.
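A Poisson GLM of the kind examined here consists of a stimulus filter, a spike-history filter, and an exponential nonlinearity; the history filter is what generates refractoriness, adaptation, or burst-like facilitation. A minimal simulation sketch with illustrative filter shapes (a fast negative plus a slower positive history component), not the parameter sets explored in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1.0                                        # ms
T = 5000
stim = 0.5 * rng.standard_normal(T)

# Generic GLM filters (illustrative shapes, not fitted parameters).
t_k = np.arange(0.0, 100.0, dt)
stim_filter = 0.8 * np.exp(-t_k / 15.0)         # integrates the stimulus
hist_filter = -6.0 * np.exp(-t_k / 3.0) + 1.5 * np.exp(-t_k / 25.0)
baseline = np.log(0.02)                         # ~20 Hz baseline rate

stim_drive = np.convolve(stim, stim_filter)[:T]

spikes = np.zeros(T)
hist_drive = np.zeros(T)
for i in range(T):
    rate = np.exp(baseline + stim_drive[i] + hist_drive[i])     # spikes per ms
    spikes[i] = rng.random() < 1.0 - np.exp(-rate * dt)         # Bernoulli approximation
    if spikes[i]:
        # Add this spike's history-filter influence on future log-rates.
        end = min(T, i + 1 + t_k.size)
        hist_drive[i + 1:end] += hist_filter[: end - i - 1]

isi = np.diff(np.flatnonzero(spikes))
print("n spikes = %d, mean rate = %.1f Hz, CV of ISI = %.2f"
      % (spikes.sum(), 1000.0 * spikes.mean(), isi.std() / isi.mean()))
```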
Collapse
Affiliation(s)
- Alison I Weber
- Graduate Program in Neuroscience, University of Washington, Seattle, WA 98195, U.S.A.
| | - Jonathan W Pillow
- Princeton Neuroscience Institute, Princeton University, and Department of Psychology, Princeton University, Princeton, NJ 08540, U.S.A.
| |
Collapse
|
43
|
Geng K, Marmarelis VZ. Methodology of Recurrent Laguerre-Volterra Network for Modeling Nonlinear Dynamic Systems. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2017; 28:2196-2208. [PMID: 27352401 PMCID: PMC5596897 DOI: 10.1109/tnnls.2016.2581141] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
In this paper, we have introduced a general modeling approach for dynamic nonlinear systems that utilizes a variant of the simulated annealing algorithm for training the Laguerre-Volterra network (LVN) to overcome the local minima and convergence problems and employs a pruning technique to achieve sparse LVN representations with l1 regularization. We tested this new approach with computer simulated systems and extended it to autoregressive sparse LVN (ASLVN) model structures that are suitable for input-output modeling of nonlinear systems that exhibit transitions in dynamic states, such as the Hodgkin-Huxley (H-H) equations of neuronal firing. Application of the proposed ASLVN to the H-H equations yields a more parsimonious input-output model with improved predictive capability that is amenable to more insightful physiological/biological interpretation.
Collapse
|
44
|
Jonke Z, Legenstein R, Habenschuss S, Maass W. Feedback Inhibition Shapes Emergent Computational Properties of Cortical Microcircuit Motifs. J Neurosci 2017; 37:8511-8523. [PMID: 28760861 PMCID: PMC6596876 DOI: 10.1523/jneurosci.2078-16.2017] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2016] [Revised: 07/18/2017] [Accepted: 07/23/2017] [Indexed: 01/28/2023] Open
Abstract
Cortical microcircuits are very complex networks, but they are composed of a relatively small number of stereotypical motifs. Hence, one strategy for throwing light on the computational function of cortical microcircuits is to analyze emergent computational properties of these stereotypical microcircuit motifs. We are addressing here the question how spike timing-dependent plasticity shapes the computational properties of one motif that has frequently been studied experimentally: interconnected populations of pyramidal cells and parvalbumin-positive inhibitory cells in layer 2/3. Experimental studies suggest that these inhibitory neurons exert some form of divisive inhibition on the pyramidal cells. We show that this data-based form of feedback inhibition, which is softer than that of winner-take-all models that are commonly considered in theoretical analyses, contributes to the emergence of an important computational function through spike timing-dependent plasticity: The capability to disentangle superimposed firing patterns in upstream networks, and to represent their information content through a sparse assembly code.
SIGNIFICANCE STATEMENT We analyze emergent computational properties of a ubiquitous cortical microcircuit motif: populations of pyramidal cells that are densely interconnected with inhibitory neurons. Simulations of this model predict that sparse assembly codes emerge in this microcircuit motif under spike timing-dependent plasticity. Furthermore, we show that different assemblies will represent different hidden sources of upstream firing activity. Hence, we propose that spike timing-dependent plasticity enables this microcircuit motif to perform a fundamental computational operation on neural activity patterns.
Collapse
Affiliation(s)
- Zeno Jonke
- Institute for Theoretical Computer Science, Graz University of Technology, Inffeldgasse 16b/I, 8010 Graz, Austria
| | - Robert Legenstein
- Institute for Theoretical Computer Science, Graz University of Technology, Inffeldgasse 16b/I, 8010 Graz, Austria
| | - Stefan Habenschuss
- Institute for Theoretical Computer Science, Graz University of Technology, Inffeldgasse 16b/I, 8010 Graz, Austria
| | - Wolfgang Maass
- Institute for Theoretical Computer Science, Graz University of Technology, Inffeldgasse 16b/I, 8010 Graz, Austria
| |
Collapse
|
45
|
Kobayashi R, Nishimaru H, Nishijo H, Lansky P. A single spike deteriorates synaptic conductance estimation. Biosystems 2017; 161:41-45. [PMID: 28756162 DOI: 10.1016/j.biosystems.2017.07.007] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2017] [Revised: 07/19/2017] [Accepted: 07/20/2017] [Indexed: 11/19/2022]
Abstract
We investigated the estimation accuracy of synaptic conductances by analyzing simulated voltage traces generated by a Hodgkin-Huxley type model. We show that even a single spike substantially deteriorates the estimation. We also demonstrate that two approaches, namely, negative current injection and spike removal, can ameliorate this deterioration.
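The estimation problem studied here can be sketched with a passive membrane: if the voltage trace obeys C dV/dt = -g_l(V - E_l) - g_e(V - E_e) - g_i(V - E_i) + I_inj, then constant g_e and g_i can be recovered by least squares, and a spike waveform left in the trace violates that passive assumption and biases the fit, which is what motivates spike removal. A toy sketch with assumed parameters, not the Hodgkin-Huxley simulations analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.1, 5000                                   # ms, samples (0.5 s)
C, g_l, E_l, E_e, E_i = 1.0, 0.1, -70.0, 0.0, -80.0
g_e_true, g_i_true = 0.03, 0.06                     # conductances to recover
I_inj = 1.5 * np.sin(2 * np.pi * np.arange(n) * dt / 100.0)   # known probe current

# Simulate a subthreshold voltage trace (passive membrane + small process noise).
V = np.empty(n)
V[0] = -65.0
for i in range(1, n):
    dV = (-g_l * (V[i-1] - E_l) - g_e_true * (V[i-1] - E_e)
          - g_i_true * (V[i-1] - E_i) + I_inj[i-1]) / C
    V[i] = V[i-1] + dt * dV + 0.02 * rng.standard_normal()

def estimate(V):
    """Least-squares fit of (g_e, g_i) from the passive membrane equation."""
    dVdt = np.diff(V) / dt
    y = C * dVdt + g_l * (V[:-1] - E_l) - I_inj[:-1]
    X = np.column_stack([-(V[:-1] - E_e), -(V[:-1] - E_i)])
    return np.linalg.lstsq(X, y, rcond=None)[0]

print("true (g_e, g_i):             ", (g_e_true, g_i_true))
print("estimate, subthreshold trace:", np.round(estimate(V), 3))

# Contaminate the trace with a single spike-like waveform and re-estimate:
# the passive-membrane assumption is violated and the fit degrades.
V_spike = V.copy()
V_spike[2500:2520] += 80.0 * np.exp(-np.arange(20) * dt / 1.0)
print("estimate, with one spike:    ", np.round(estimate(V_spike), 3))
```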
Collapse
Affiliation(s)
- Ryota Kobayashi
- Principles of Informatics Research Division, National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo, Japan; Department of Informatics, Graduate University for Advanced Studies (Sokendai), 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo, Japan.
| | - Hiroshi Nishimaru
- System Emotional Science, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Sugitani 2630, Toyama 930-0194, Japan
| | - Hisao Nishijo
- System Emotional Science, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Sugitani 2630, Toyama 930-0194, Japan
| | - Petr Lansky
- Institute of Physiology, The Czech Academy of Sciences, 142 20 Prague 4, Czech Republic
| |
Collapse
|
46
|
Setareh H, Deger M, Petersen CCH, Gerstner W. Cortical Dynamics in Presence of Assemblies of Densely Connected Weight-Hub Neurons. Front Comput Neurosci 2017; 11:52. [PMID: 28690508 PMCID: PMC5480278 DOI: 10.3389/fncom.2017.00052] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2017] [Accepted: 05/29/2017] [Indexed: 01/21/2023] Open
Abstract
Experimental measurements of pairwise connection probability of pyramidal neurons together with the distribution of synaptic weights have been used to construct randomly connected model networks. However, several experimental studies suggest that both the wiring and the synaptic weight structure between neurons show statistics that differ from random networks. Here we study a network containing a subset of neurons, which we call weight-hub neurons, characterized by strong inward synapses. We propose a connectivity structure for excitatory neurons that contains assemblies of densely connected weight-hub neurons, while the pairwise connection probability and synaptic weight distribution remain consistent with experimental data. Simulations of such a network with generalized integrate-and-fire neurons display regular and irregular slow oscillations akin to experimentally observed up/down state transitions in the activity of cortical neurons, with a broad distribution of pairwise spike correlations. Moreover, stimulation of a model network in the presence or absence of assembly structure exhibits responses similar to light-evoked responses of cortical layers in optogenetically modified animals. We conclude that a high connection probability into and within assemblies of excitatory weight-hub neurons, as is likely present in some but not all cortical layers, changes the dynamics of a layer of cortical microcircuitry significantly.
Collapse
Affiliation(s)
- Hesam Setareh
- Laboratory of Computational Neuroscience, School of Computer and Communication Sciences and Brain Mind Institute, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Moritz Deger
- Laboratory of Computational Neuroscience, School of Computer and Communication Sciences and Brain Mind Institute, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.,Faculty of Mathematics and Natural Sciences, Institute for Zoology, University of Cologne, Cologne, Germany
| | - Carl C H Petersen
- Laboratory of Sensory Processing, Brain Mind Institute, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Wulfram Gerstner
- Laboratory of Computational Neuroscience, School of Computer and Communication Sciences and Brain Mind Institute, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| |
Collapse
|
47
|
Exact firing time statistics of neurons driven by discrete inhibitory noise. Sci Rep 2017; 7:1577. [PMID: 28484244 PMCID: PMC5431561 DOI: 10.1038/s41598-017-01658-8] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2017] [Accepted: 03/29/2017] [Indexed: 12/15/2022] Open
Abstract
Neurons in the intact brain receive a continuous and irregular synaptic bombardment from excitatory and inhibitory pre-synaptic neurons, which determines the firing activity of the stimulated neuron. In order to investigate the influence of inhibitory stimulation on the firing time statistics, we consider Leaky Integrate-and-Fire neurons subject to inhibitory instantaneous post-synaptic potentials. In particular, we report exact results for the firing rate, the coefficient of variation and the spike train spectrum for various synaptic weight distributions. Our results are not limited to stimulations of infinitesimal amplitude, but they apply as well to finite amplitude post-synaptic potentials, thus being able to capture the effect of rare and large spikes. The developed methods are able to reproduce also the average firing properties of heterogeneous neuronal populations.
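The setting can be mimicked in simulation: a leaky integrate-and-fire neuron driven by a constant suprathreshold input receives inhibitory post-synaptic potentials of finite amplitude arriving as a Poisson process, and the firing rate and coefficient of variation are read off the interspike intervals. A Monte Carlo sketch with assumed parameters, offered as a numerical counterpart to (not a derivation of) the exact results:

```python
import numpy as np

def lif_inhibitory_shot_noise(mu, nu_i, ipsp_amps, T=50.0, tau=0.02,
                              v_thr=1.0, v_reset=0.0, dt=1e-4, seed=0):
    """LIF neuron with constant suprathreshold drive mu and inhibitory
    post-synaptic potentials of finite amplitude arriving at Poisson rate nu_i."""
    rng = np.random.default_rng(seed)
    v, t, spikes = 0.0, 0.0, []
    while t < T:
        v += dt * (mu - v) / tau
        n_ev = rng.poisson(nu_i * dt)                 # inhibitory events in this step
        if n_ev:
            v -= rng.choice(ipsp_amps, size=n_ev).sum()
        if v >= v_thr:
            spikes.append(t)
            v = v_reset
        t += dt
    isi = np.diff(spikes)
    return 1.0 / isi.mean(), isi.std() / isi.mean()   # firing rate (Hz), CV of ISI

# Same mean inhibitory input, delivered either as many small or as few large IPSPs.
for nu_i, amps in [(2000.0, [0.01]), (100.0, [0.2])]:
    rate, cv = lif_inhibitory_shot_noise(mu=1.5, nu_i=nu_i, ipsp_amps=amps)
    print("nu_i = %6.0f Hz, IPSP amplitude %s -> rate %.1f Hz, CV %.2f"
          % (nu_i, amps, rate, cv))
```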
Collapse
|
48
|
Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size. PLoS Comput Biol 2017; 13:e1005507. [PMID: 28422957 PMCID: PMC5415267 DOI: 10.1371/journal.pcbi.1005507] [Citation(s) in RCA: 68] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2016] [Revised: 05/03/2017] [Accepted: 04/07/2017] [Indexed: 11/22/2022] Open
Abstract
Neural population equations such as neural mass or field models are widely used to study brain activity on a large scale. However, the relation of these models to the properties of single neurons is unclear. Here we derive an equation for several interacting populations at the mesoscopic scale starting from a microscopic model of randomly connected generalized integrate-and-fire neuron models. Each population consists of 50–2000 neurons of the same type but different populations account for different neuron types. The stochastic population equations that we find reveal how spike-history effects in single-neuron dynamics such as refractoriness and adaptation interact with finite-size fluctuations on the population level. Efficient integration of the stochastic mesoscopic equations reproduces the statistical behavior of the population activities obtained from microscopic simulations of a full spiking neural network model. The theory describes nonlinear emergent dynamics such as finite-size-induced stochastic transitions in multistable networks and synchronization in balanced networks of excitatory and inhibitory neurons. The mesoscopic equations are employed to rapidly integrate a model of a cortical microcircuit consisting of eight neuron types, which allows us to predict spontaneous population activities as well as evoked responses to thalamic input. Our theory establishes a general framework for modeling finite-size neural population dynamics based on single cell and synapse parameters and offers an efficient approach to analyzing cortical circuits and computations. Understanding the brain requires mathematical models on different spatial scales. On the “microscopic” level of nerve cells, neural spike trains can be well predicted by phenomenological spiking neuron models. On a coarse scale, neural activity can be modeled by phenomenological equations that summarize the total activity of many thousands of neurons. Such population models are widely used to model neuroimaging data such as EEG, MEG or fMRI data. However, it is largely unknown how large-scale models are connected to an underlying microscale model. Linking the scales is vital for a correct description of rapid changes and fluctuations of the population activity, and is crucial for multiscale brain models. The challenge is to treat realistic spiking dynamics as well as fluctuations arising from the finite number of neurons. We obtained such a link by deriving stochastic population equations on the mesoscopic scale of 100–1000 neurons from an underlying microscopic model. These equations can be efficiently integrated and reproduce results of a microscopic simulation while achieving a high speed-up factor. We expect that our novel population theory on the mesoscopic scale will be instrumental for understanding experimental data on information processing in the brain, and ultimately link microscopic and macroscopic activity patterns.
Collapse
|
49
|
Gerhard F, Deger M, Truccolo W. On the stability and dynamics of stochastic spiking neuron models: Nonlinear Hawkes process and point process GLMs. PLoS Comput Biol 2017; 13:e1005390. [PMID: 28234899 PMCID: PMC5325182 DOI: 10.1371/journal.pcbi.1005390] [Citation(s) in RCA: 48] [Impact Index Per Article: 6.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2016] [Accepted: 01/28/2017] [Indexed: 01/12/2023] Open
Abstract
Point process generalized linear models (PP-GLMs) provide an important statistical framework for modeling spiking activity in single neurons and neuronal networks. Stochastic stability is essential when sampling from these models, as done in computational neuroscience to analyze statistical properties of neuronal dynamics and in neuro-engineering to implement closed-loop applications. Here we show, however, that despite passing common goodness-of-fit tests, PP-GLMs estimated from data are often unstable, leading to divergent firing rates. The inclusion of absolute refractory periods is not a satisfactory solution since the activity then typically settles into unphysiological rates. To address these issues, we derive a framework for determining the existence and stability of fixed points of the expected conditional intensity function (CIF) for general PP-GLMs. Specifically, in nonlinear Hawkes PP-GLMs, the CIF is expressed as a function of the previous spike history and exogenous inputs. We use a mean-field quasi-renewal (QR) approximation that decomposes spike history effects into the contribution of the last spike and an average of the CIF over all spike histories prior to the last spike. Fixed points for stationary rates are derived as self-consistent solutions of integral equations. Bifurcation analysis and the number of fixed points predict that the original models can show stable, divergent, and metastable (fragile) dynamics. For fragile models, fluctuations of the single-neuron dynamics predict expected divergence times after which rates approach unphysiologically high values. This metric can be used to estimate the probability that rates remain physiological for given time periods, e.g., for simulation purposes. We demonstrate the use of the stability framework using simulated single-neuron examples and neurophysiological recordings. Finally, we show how to adapt PP-GLM estimation procedures to guarantee model stability. Overall, our results provide a stability framework for data-driven PP-GLMs and shed new light on the stochastic dynamics of state-of-the-art statistical models of neuronal spiking activity.
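The self-consistency idea behind the fixed-point analysis can be sketched for the simplest case: an exponential-link Hawkes model whose excitatory history kernel integrates to K. The snippet below scans for stationary rates satisfying lambda = exp(b + K * lambda). It is a crude mean-field stand-in, not the quasi-renewal calculation of the paper (the last-spike contribution and refractoriness are ignored), and b, K, and the search range are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq

def stationary_rates(b=-2.0, K=0.2, lam_max=200.0, n_grid=4001):
    """Self-consistent stationary rates of an exponential-link Hawkes model.

    Crude mean-field version of the fixed-point condition: a stationary rate
    lam must satisfy lam = exp(b + K * lam), where b is the baseline log-rate
    and K the integral of the spike-history kernel.  This is not the full
    quasi-renewal treatment; b, K and lam_max are illustrative assumptions.
    """
    def g(lam):
        return np.exp(b + K * lam) - lam

    grid = np.linspace(0.0, lam_max, n_grid)
    vals = g(grid)
    roots = []
    for i in range(n_grid - 1):
        if vals[i] * vals[i + 1] < 0:               # bracketed sign change
            roots.append(brentq(g, grid[i], grid[i + 1]))
    return roots

# Two roots: a stable low-rate fixed point and an unstable one above which
# self-excitation leads to runaway ("fragile"/metastable) dynamics.
print(stationary_rates())
```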
Collapse
Affiliation(s)
- Felipe Gerhard
- Department of Neuroscience, Brown University, Providence, Rhode Island, United States of America
| | - Moritz Deger
- School of Computer and Communication Sciences and School of Life Sciences, Brain Mind Institute, École polytechnique fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Institute for Zoology, Faculty of Mathematics and Natural Sciences, University of Cologne, Cologne, Germany
| | - Wilson Truccolo
- Department of Neuroscience, Brown University, Providence, Rhode Island, United States of America
- Institute for Brain Science, Brown University, Providence, Rhode Island, United States of America
- Center for Neurorestoration & Neurotechnology, U. S. Department of Veterans Affairs, Providence, Rhode Island, United States of America
- * E-mail:
| |
Collapse
|
50
|
Schuecker J, Schmidt M, van Albada SJ, Diesmann M, Helias M. Fundamental Activity Constraints Lead to Specific Interpretations of the Connectome. PLoS Comput Biol 2017; 13:e1005179. [PMID: 28146554 PMCID: PMC5287462 DOI: 10.1371/journal.pcbi.1005179] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2016] [Accepted: 10/03/2016] [Indexed: 01/11/2023] Open
Abstract
The continuous integration of experimental data into coherent models of the brain is an increasing challenge of modern neuroscience. Such models provide a bridge between structure and activity, and identify the mechanisms giving rise to experimental observations. Nevertheless, structurally realistic network models of spiking neurons are necessarily underconstrained even if experimental data on brain connectivity are incorporated to the best of our knowledge. Guided by physiological observations, any model must therefore explore the parameter ranges within the uncertainty of the data. Based on simulation results alone, however, the mechanisms underlying stable and physiologically realistic activity often remain obscure. We here employ a mean-field reduction of the dynamics, which allows us to include activity constraints into the process of model construction. We shape the phase space of a multi-scale network model of the vision-related areas of macaque cortex by systematically refining its connectivity. Fundamental constraints on the activity, i.e., prohibiting quiescence and requiring global stability, prove sufficient to obtain realistic layer- and area-specific activity. Only small adaptations of the structure are required, showing that the network operates close to an instability. The procedure identifies components of the network critical to its collective dynamics and creates hypotheses for structural data and future experiments. The method can be applied to networks involving any neuron model with a known gain function.
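A minimal version of such an activity-constrained check can be written for a generic firing-rate network: find a fixed point of r = phi(W r + I_ext), then reject it if it is quiescent or linearly unstable. The sketch below assumes a sigmoidal gain and an arbitrary two-population connectivity matrix; it is a generic stand-in for this kind of mean-field screening, not the authors' procedure for the multi-area macaque model.

```python
import numpy as np
from scipy.optimize import fsolve

def phi(x, r_max=50.0):
    """Sigmoidal gain function (illustrative choice, not from the paper)."""
    return r_max / (1.0 + np.exp(-x))

def dphi(x, r_max=50.0):
    s = 1.0 / (1.0 + np.exp(-x))
    return r_max * s * (1.0 - s)

def fixed_point_and_stability(W, I_ext, tau=0.01):
    """Fixed point r* = phi(W r* + I_ext) and its linear stability.

    Generic rate-model sketch of an activity-constrained check (reject
    quiescent or unstable solutions).  W, I_ext, tau and phi are assumptions.
    """
    n = len(I_ext)
    r_star = fsolve(lambda r: -r + phi(W @ r + I_ext), x0=np.full(n, 10.0))
    # Jacobian of tau * dr/dt = -r + phi(W r + I_ext) at the fixed point
    J = (-np.eye(n) + np.diag(dphi(W @ r_star + I_ext)) @ W) / tau
    stable = bool(np.all(np.linalg.eigvals(J).real < 0))
    quiescent = bool(np.all(r_star < 1e-3))
    return r_star, stable, quiescent

# Example: a two-population excitatory/inhibitory motif (arbitrary numbers)
W = np.array([[0.05, -0.08],
              [0.04, -0.02]])
I_ext = np.array([1.0, 0.8])
print(fixed_point_and_stability(W, I_ext))
```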
Collapse
Affiliation(s)
- Jannis Schuecker
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
| | - Maximilian Schmidt
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
| | - Sacha J. van Albada
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
| | - Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
| | - Moritz Helias
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
| |
Collapse
|