1. Pham T, Hansel C. Intrinsic threshold plasticity: cholinergic activation and role in the neuronal recognition of incomplete input patterns. J Physiol 2023; 601:3221-3239. [PMID: 35879872] [PMCID: PMC9873838] [DOI: 10.1113/jp283473]
Abstract
Activity-dependent changes in membrane excitability are observed in neurons across brain areas and represent a cell-autonomous form of plasticity (intrinsic plasticity; IP) that in itself does not involve alterations in synaptic strength (synaptic plasticity; SP). Non-homeostatic IP may play an essential role in learning, e.g. by changing the action potential threshold near the soma. A computational problem, however, arises from the implication that such amplification does not discriminate between synaptic inputs and therefore may reduce the resolution of input representation. Here, we investigate consequences of IP for the performance of an artificial neural network in (a) the discrimination of unknown input patterns and (b) the recognition of known/learned patterns. While negative changes in threshold potentials in the output layer indeed reduce its ability to discriminate patterns, they benefit the recognition of known but incompletely presented patterns. An analysis of thresholds and IP-induced threshold changes in published sets of physiological data obtained from whole-cell patch-clamp recordings from L2/3 pyramidal neurons in (a) the primary visual cortex (V1) of awake macaques and (b) the primary somatosensory cortex (S1) of mice in vitro, respectively, reveals a difference between resting and threshold potentials of ∼15 mV for V1 and ∼25 mV for S1, and a total plasticity range of ∼10 mV (S1). The most efficient activity pattern for lowering the threshold is paired cholinergic and electrical activation. Our findings show that threshold reduction promotes a shift in neural coding strategies from faithful representation to interpretative assignment of input patterns to learned object categories.

Key points:
- Intrinsic plasticity may change the action potential threshold near the soma of neurons (threshold plasticity), thus altering the input-output function for all synaptic inputs 'upstream' of the plasticity location.
- A potential problem arising from this shared amplification is that it may reduce the ability to discriminate between different input patterns.
- Here, we assess the performance of an artificial neural network in the discrimination of unknown input patterns as well as the recognition of known patterns subsequent to changes in the spike threshold.
- We observe that negative changes in threshold potentials do reduce discrimination performance, but at the same time improve performance in an object recognition task, in particular when patterns are incompletely presented.
- Analysis of whole-cell patch-clamp recordings from pyramidal neurons in the primary somatosensory cortex (S1) of mice reveals that negative threshold changes preferentially result from electrical stimulation of neurons paired with the activation of muscarinic acetylcholine receptors.
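
The recognition/discrimination trade-off described above can be made concrete with a toy model. The sketch below is not the authors' network; layer sizes, pattern density, deletion fraction and threshold values are all illustrative assumptions, chosen only to show how lowering the output threshold improves recognition of incomplete stored patterns while raising false alarms to unknown ones.

```python
# Toy illustration (not the authors' model): lowering a spike threshold
# helps recognize stored-but-incomplete patterns at the cost of
# discriminating novel ones. All sizes and values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_inputs, n_patterns = 100, 5
patterns = (rng.random((n_patterns, n_inputs)) < 0.3).astype(float)
weights = patterns.copy()  # one output unit tuned to each stored pattern

for threshold in (20.0, 12.0):  # baseline vs. IP-lowered threshold
    # Recognition: stored patterns with ~40% of their inputs deleted.
    partial = patterns * (rng.random(patterns.shape) > 0.4)
    recognized = np.mean(np.diag(weights @ partial.T) >= threshold)
    # Discrimination: unknown random patterns should leave outputs silent.
    novel = (rng.random((500, n_inputs)) < 0.3).astype(float)
    false_alarms = np.mean(weights @ novel.T >= threshold)
    print(f"threshold {threshold}: recognition {recognized:.2f}, "
          f"false-alarm rate {false_alarms:.2f}")
```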

Affiliation(s)
- Tuan Pham: Committee on Computational Neuroscience, The University of Chicago
- Christian Hansel: Committee on Computational Neuroscience, The University of Chicago; Department of Neurobiology, The University of Chicago

2. Scott DN, Frank MJ. Adaptive control of synaptic plasticity integrates micro- and macroscopic network function. Neuropsychopharmacology 2023; 48:121-144. [PMID: 36038780] [PMCID: PMC9700774] [DOI: 10.1038/s41386-022-01374-6]
Abstract
Synaptic plasticity configures interactions between neurons and is therefore likely to be a primary driver of behavioral learning and development. How this microscopic-macroscopic interaction occurs is poorly understood, as researchers frequently examine models within particular ranges of abstraction and scale. Computational neuroscience and machine learning models offer theoretically powerful analyses of plasticity in neural networks, but results are often siloed and only coarsely linked to biology. In this review, we examine connections between these areas, asking how network computations change as a function of diverse features of plasticity and vice versa. We review how plasticity can be controlled at synapses by calcium dynamics and neuromodulatory signals, the manifestation of these changes in networks, and their impacts in specialized circuits. We conclude that metaplasticity, defined broadly as the adaptive control of plasticity, forges connections across scales by governing what groups of synapses can and can't learn about, when, and to what ends. The metaplasticity we discuss acts by co-opting Hebbian mechanisms, shifting network properties, and routing activity within and across brain systems. Asking how these operations can go awry should also be useful for understanding pathology, which we address in the context of autism, schizophrenia and Parkinson's disease.
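
As a concrete illustration of one mechanism reviewed here, the sketch below implements a calcium-threshold plasticity rule with a multiplicative neuromodulatory gate. The thresholds, gains and the specific functional form are illustrative assumptions, not a model taken from the review.

```python
# Hypothetical sketch: calcium-dependent plasticity with a
# neuromodulatory gate. Thresholds and gains are illustrative.

THETA_D, THETA_P = 0.35, 0.55  # depression / potentiation calcium levels

def dw(calcium, neuromod=1.0, eta=0.1):
    """Weight change for a synaptic calcium transient.

    Sub-threshold calcium leaves the synapse unchanged; intermediate
    levels depress; high levels potentiate. `neuromod` stands in for a
    modulatory signal (e.g. dopamine) scaling how much is learned.
    """
    if calcium >= THETA_P:
        return eta * neuromod * (calcium - THETA_P)
    if calcium >= THETA_D:
        return -eta * neuromod * (calcium - THETA_D)
    return 0.0

for ca in (0.2, 0.45, 0.8):
    print(f"Ca={ca:.2f}: dw={dw(ca):+.3f} (gated: {dw(ca, neuromod=0.1):+.4f})")
```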

Affiliation(s)
- Daniel N Scott: Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA; Carney Institute for Brain Science, Brown University, Providence, RI, USA
- Michael J Frank: Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA; Carney Institute for Brain Science, Brown University, Providence, RI, USA

3. Wong EC. Distributed Phase Oscillatory Excitation Efficiently Produces Attractors Using Spike-Timing-Dependent Plasticity. Neural Comput 2021; 34:415-436. [PMID: 34915556] [DOI: 10.1162/neco_a_01466]
Abstract
The brain is thought to represent information in the form of activity in distributed groups of neurons known as attractors. We show here that in a randomly connected network of simulated spiking neurons, periodic stimulation of neurons with distributed phase offsets, along with standard spike-timing-dependent plasticity (STDP), efficiently creates distributed attractors. These attractors may have a consistent ordered firing pattern or become irregular, depending on the conditions. We also show that when two such attractors are stimulated in sequence, the same STDP mechanism can create a directed association between them, forming the basis of an associative network. We find that for an STDP time constant of 20 ms, the dependence of the efficiency of attractor creation on the driving frequency has a broad peak centered around 8 Hz. Upon restimulation, the attractors self-oscillate, but with an oscillation frequency that is higher than the driving frequency, ranging from 10 to 100 Hz.
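
The pairwise STDP rule assumed in the abstract (exponential windows with a 20 ms time constant) can be sketched as follows; the amplitudes and the 8 Hz pairing example are illustrative choices, not the paper's simulation code.

```python
# Sketch of standard pairwise STDP with the 20 ms time constant named
# in the abstract; amplitudes are illustrative assumptions.
import numpy as np

TAU = 20.0                     # STDP time constant, ms
A_PLUS, A_MINUS = 0.01, 0.012  # potentiation / depression amplitudes

def stdp(dt_ms):
    """Weight change for spike-time difference dt = t_post - t_pre."""
    if dt_ms > 0:   # pre before post: potentiate
        return A_PLUS * np.exp(-dt_ms / TAU)
    if dt_ms < 0:   # post before pre: depress
        return -A_MINUS * np.exp(dt_ms / TAU)
    return 0.0

# Phase-offset periodic drive at 8 Hz: two neurons firing 10 ms apart are
# repeatedly potentiated in the pre->post direction, one route by which
# looped, ordered firing (an attractor) could be carved out.
period = 1000.0 / 8.0
print("net dw per cycle, 10 ms offset:", stdp(10.0) + stdp(10.0 - period))
```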

Affiliation(s)
- Eric C Wong: Departments of Radiology and Psychiatry, University of California, San Diego, La Jolla, CA 92093, USA

4. Aljadeff J, Gillett M, Pereira Obilinovic U, Brunel N. From synapse to network: models of information storage and retrieval in neural circuits. Curr Opin Neurobiol 2021; 70:24-33. [PMID: 34175521] [DOI: 10.1016/j.conb.2021.05.005]
Abstract
The mechanisms of information storage and retrieval in brain circuits are still the subject of debate. It is widely believed that information is stored at least in part through changes in synaptic connectivity in networks that encode this information and that these changes lead in turn to modifications of network dynamics, such that the stored information can be retrieved at a later time. Here, we review recent progress in deriving synaptic plasticity rules from experimental data and in understanding how plasticity rules affect the dynamics of recurrent networks. We show that the dynamics generated by such networks exhibit a large degree of diversity, depending on parameters, similar to experimental observations in vivo during delayed response tasks.

Affiliation(s)
- Johnatan Aljadeff: Neurobiology Section, Division of Biological Sciences, UC San Diego, USA
- Nicolas Brunel: Department of Neurobiology, Duke University, USA; Department of Physics, Duke University, USA

5. Frölich S, Marković D, Kiebel SJ. Neuronal Sequence Models for Bayesian Online Inference. Front Artif Intell 2021; 4:530937. [PMID: 34095815] [PMCID: PMC8176225] [DOI: 10.3389/frai.2021.530937]
Abstract
Various imaging and electrophysiological studies in a number of different species and brain regions have revealed that neuronal dynamics associated with diverse behavioral patterns and cognitive tasks take on a sequence-like structure, even when encoding stationary concepts. These neuronal sequences are characterized by robust and reproducible spatiotemporal activation patterns. This suggests that the role of neuronal sequences may be much more fundamental for brain function than is commonly believed. Furthermore, the idea that the brain is not simply a passive observer but an active predictor of its sensory input is supported by an enormous amount of evidence in fields as diverse as human ethology and physiology, in addition to neuroscience. Hence, a central aspect of this review is to illustrate how neuronal sequences can be understood as critical for probabilistic predictive information processing, and what dynamical principles can be used as generators of neuronal sequences. Moreover, since different lines of evidence from neuroscience and computational modeling suggest that the brain is organized in a functional hierarchy of time scales, we also review how models based on sequence-generating principles can be embedded in such a hierarchy, to form a generative model for recognition and prediction of sensory input. We briefly introduce the Bayesian brain hypothesis as a prominent mathematical description of how online (i.e., fast) recognition and predictions may be computed by the brain. Finally, we discuss some recent advances in machine learning, where spatiotemporally structured methods (akin to neuronal sequences) and hierarchical networks have independently been developed for a wide range of tasks. We conclude that the investigation of specific dynamical and structural principles of sequential brain activity not only helps us understand how the brain processes information and generates predictions, but also informs us about neuroscientific principles potentially useful for designing more efficient artificial neuronal networks for machine learning tasks.
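
A minimal illustration of Bayesian online inference over a neuronal sequence is a hidden Markov model updated recursively, one observation at a time. The transition and emission matrices below are toy assumptions standing in for a sequence-generating model.

```python
# Toy hidden Markov model: hidden states are sequence elements that tend
# to advance 1 -> 2 -> 3 -> 1; the posterior is updated online after each
# observation. Matrices are illustrative assumptions.
import numpy as np

T = np.array([[0.1, 0.8, 0.1],   # T[i, j] = P(next state j | state i)
              [0.1, 0.1, 0.8],
              [0.8, 0.1, 0.1]])
O = np.array([[0.8, 0.1, 0.1],   # O[i, k] = P(observe symbol k | state i)
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])

posterior = np.ones(3) / 3.0
for obs in (0, 1, 2, 0):                      # a noisy observed sequence
    posterior = O[:, obs] * (T.T @ posterior)  # predict, then weigh by evidence
    posterior /= posterior.sum()               # normalize: Bayes' rule
    print(np.round(posterior, 2))
```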

Affiliation(s)
- Sascha Frölich: Department of Psychology, Technische Universität Dresden, Dresden, Germany

6. Maes A, Barahona M, Clopath C. Learning compositional sequences with multiple time scales through a hierarchical network of spiking neurons. PLoS Comput Biol 2021; 17:e1008866. [PMID: 33764970] [PMCID: PMC8023498] [DOI: 10.1371/journal.pcbi.1008866]
Abstract
Sequential behaviour is often compositional and organised across multiple time scales: individual elements that develop on short time scales (motifs) are combined to form longer functional sequences (syntax). Such organisation leads to a natural hierarchy that can be used advantageously for learning, since the motifs and the syntax can be acquired independently. Despite mounting experimental evidence for hierarchical structures in neuroscience, models for temporal learning based on neuronal networks have mostly focused on serial methods. Here, we introduce a network model of spiking neurons with a hierarchical organisation aimed at sequence learning on multiple time scales. Using biophysically motivated neuron dynamics and local plasticity rules, the model can learn motifs and syntax independently. Furthermore, the model can relearn sequences efficiently and store multiple sequences. Compared to serial learning, the hierarchical model displays faster learning, more flexible relearning, increased capacity, and higher robustness to perturbations. The hierarchical model redistributes the variability: it achieves high motif fidelity at the cost of higher variability in the between-motif timings.
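
The compositional idea, slow syntax over fast motifs, can be illustrated without any network machinery; the motif contents and syntax strings below are made-up examples showing why the two levels can be learned and relearned independently.

```python
# Toy two-level composition: a slow "syntax" level orders motif
# identities; a fast level unrolls each motif's elements. All contents
# are made-up examples, not the paper's spiking implementation.
motifs = {            # fast level: short element sequences
    "A": [1, 2, 3],
    "B": [4, 5],
    "C": [6, 7, 8, 9],
}
syntax = ["A", "B", "A", "C"]   # slow level: order of motifs

print([el for m in syntax for el in motifs[m]])
# -> [1, 2, 3, 4, 5, 1, 2, 3, 6, 7, 8, 9]

# Relearning only the syntax reuses the stored motifs unchanged:
syntax = ["C", "A"]
print([el for m in syntax for el in motifs[m]])
```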

Affiliation(s)
- Amadeus Maes: Bioengineering Department, Imperial College London, London, United Kingdom
- Mauricio Barahona: Mathematics Department, Imperial College London, London, United Kingdom
- Claudia Clopath: Bioengineering Department, Imperial College London, London, United Kingdom

7. Michaelis C, Lehr AB, Tetzlaff C. Robust Trajectory Generation for Robotic Control on the Neuromorphic Research Chip Loihi. Front Neurorobot 2020; 14:589532. [PMID: 33324191] [PMCID: PMC7726255] [DOI: 10.3389/fnbot.2020.589532]
Abstract
Neuromorphic hardware has several promising advantages compared to von Neumann architectures and is highly interesting for robot control. However, despite the high speed and energy efficiency of neuromorphic computing, algorithms utilizing this hardware in control scenarios are still rare. One problem is the transition from fast spiking activity on the hardware, which acts on a timescale of a few milliseconds, to a control-relevant timescale on the order of hundreds of milliseconds. Another problem is the execution of complex trajectories, which requires spiking activity to contain sufficient variability, while at the same time, for reliable performance, network dynamics must be adequately robust against noise. In this study, we exploit a recently developed biologically inspired spiking neural network model, the so-called anisotropic network. We identified and transferred the core principles of the anisotropic network to neuromorphic hardware using Intel's neuromorphic research chip Loihi and validated the system on trajectories from a motor-control task performed by a robot arm. We developed a network architecture including the anisotropic network and a pooling layer, which allows fast spike read-out from the chip and performs an inherent regularization. With this, we show that the anisotropic network on Loihi reliably encodes sequential patterns of neural activity, each representing a robotic action, and that the patterns allow the generation of multidimensional trajectories on control-relevant timescales. Taken together, our study presents a new algorithm that allows the generation of complex robotic movements as a building block for robotic control using state-of-the-art neuromorphic hardware.
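
The timescale-bridging step can be sketched abstractly: pool many fast spike trains per read-out unit and low-pass filter the pooled count into a smooth control signal. Pool size, rates and the 200 ms time constant below are illustrative assumptions, not the Loihi implementation.

```python
# Sketch: pooling plus exponential low-pass filtering turns millisecond
# spiking into a smooth, control-relevant signal. Values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
dt, t_max, tau = 1.0, 1000.0, 200.0   # ms; slow read-out time constant
n_pool, rate_hz = 50, 20.0            # neurons pooled per read-out unit

steps = int(t_max / dt)
spikes = rng.random((steps, n_pool)) < rate_hz * dt / 1000.0

readout, trace = np.zeros(steps), 0.0
for t in range(steps):
    pooled = spikes[t].sum()              # pooling regularizes spike noise
    trace += dt / tau * (pooled - trace)  # exponential low-pass filter
    readout[t] = trace

print("filtered read-out, start vs. end:", readout[0], round(readout[-1], 2))
```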

Affiliation(s)
- Carlo Michaelis: Department of Computational Neuroscience, University of Göttingen, Göttingen, Germany

8. Limbacher T, Legenstein R. Emergence of Stable Synaptic Clusters on Dendrites Through Synaptic Rewiring. Front Comput Neurosci 2020; 14:57. [PMID: 32848681] [PMCID: PMC7424032] [DOI: 10.3389/fncom.2020.00057]
Abstract
The connectivity structure of neuronal networks in cortex is highly dynamic. This ongoing cortical rewiring is assumed to serve important functions for learning and memory. In this article, we analyze a model for the self-organization of synaptic inputs onto the dendritic branches of pyramidal cells. The model combines a generic stochastic rewiring principle with a simple synaptic plasticity rule that depends on local dendritic activity. In computer simulations, we find that this synaptic rewiring model leads to synaptic clustering, that is, temporally correlated inputs become locally clustered on dendritic branches. This empirical finding is backed up by a theoretical analysis which shows that rewiring in our model favors network configurations with synaptic clustering. We propose that synaptic clustering plays an important role in the organization of computation and memory in cortical circuits: we find that synaptic clustering through the proposed rewiring mechanism can serve as a mechanism to protect memories from subsequent modifications on a medium time scale. Rewiring of synaptic connections onto specific dendritic branches may thus counteract the general problem of catastrophic forgetting in neural networks.
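
A compact, hypothetical sketch of the rewiring principle: each synapse's survival depends on how well its input correlates with the other inputs on its branch, and pruned synapses re-form at random locations. The group structure, rewiring schedule and support rule below are illustrative stand-ins for the paper's model; under these assumptions the branches typically segregate by input group.

```python
# Hypothetical rewiring toy: synapses supported by same-group peers on
# their branch stabilize; weakly supported ones re-place at random.
import numpy as np

rng = np.random.default_rng(2)
n_syn, n_branches, n_groups = 40, 2, 2
group = rng.integers(n_groups, size=n_syn)     # correlated input groups
branch = rng.integers(n_branches, size=n_syn)  # initial random placement
ids = np.arange(n_syn)

for step in range(4000):
    s = rng.integers(n_syn)
    peers = (branch == branch[s]) & (ids != s)
    # "Local dendritic activity" support: fraction of same-branch peers
    # driven by the same correlated input group as synapse s.
    support = np.mean(group[peers] == group[s]) if peers.any() else 0.0
    if rng.random() > support:                 # weakly supported -> rewire
        branch[s] = rng.integers(n_branches)

for b in range(n_branches):
    counts = np.bincount(group[branch == b], minlength=n_groups)
    print(f"branch {b}: synapses per input group = {counts}")
```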

Affiliation(s)
- Robert Legenstein: Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria

9. Cabessa J, Tchaptchet A. Automata complete computation with Hodgkin-Huxley neural networks composed of synfire rings. Neural Netw 2020; 126:312-334. [PMID: 32278841] [DOI: 10.1016/j.neunet.2020.03.019]
Abstract
Synfire rings are neural circuits capable of conveying synchronous, temporally precise and self-sustained activities in a robust manner. We propose a cell assembly based paradigm for abstract neural computation centered on the concept of synfire rings. More precisely, we empirically show that Hodgkin-Huxley neural networks modularly composed of synfire rings are automata complete. We provide an algorithmic construction which, starting from any given finite state automaton, builds a corresponding Hodgkin-Huxley neural network modularly composed of synfire rings and capable of simulating it. We illustrate the correctness of the construction on two specific examples. We further analyze the stability and robustness of the construction as a function of changes in the ring topologies as well as with respect to cell death and synaptic failure mechanisms, respectively. These results establish the possibility of achieving abstract computation with bio-inspired neural networks. They might constitute a theoretical ground for the realization of biological neural computers.

Affiliation(s)
- Jérémie Cabessa: Laboratory of Mathematical Economics and Applied Microeconomics (LEMMA), Université Paris 2, Panthéon-Assas, 75005 Paris, France; Institute of Computer Science of the Czech Academy of Sciences, P. O. Box 5, 18207 Prague 8, Czech Republic
- Aubin Tchaptchet: Institute of Physiology, Philipps University of Marburg, 35037 Marburg, Germany

10. Maes A, Barahona M, Clopath C. Learning spatiotemporal signals using a recurrent spiking network that discretizes time. PLoS Comput Biol 2020; 16:e1007606. [PMID: 31961853] [PMCID: PMC7028299] [DOI: 10.1371/journal.pcbi.1007606]
Abstract
Learning to produce spatiotemporal sequences is a common task that the brain has to solve. The same neurons may be used to produce different sequential behaviours. The way the brain learns and encodes such tasks remains unknown, as current computational models do not typically use realistic biologically plausible learning. Here, we propose a model in which a recurrent network of excitatory and inhibitory spiking neurons drives a read-out layer: the dynamics of the driving recurrent network are trained to encode time, which is then mapped through the read-out neurons to encode another dimension, such as space or a phase. Different spatiotemporal patterns can be learned and encoded through the synaptic weights to the read-out neurons, which follow common Hebbian learning rules. We demonstrate that the model is able to learn spatiotemporal dynamics on time scales that are behaviourally relevant and we show that the learned sequences are robustly replayed during a regime of spontaneous activity.
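
The architecture can be caricatured with an idealized one-hot "clock" and a read-out layer trained by a Hebbian-style (delta) update; the sizes, learning rate and supervised form of the rule are simplifying assumptions relative to the paper's spiking implementation.

```python
# Caricature: a one-hot clock stands in for the recurrent time-coding
# network; read-out weights learn to replay a target spatiotemporal
# pattern. Sizes and the supervised update are simplifying assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_time, n_readout, eta = 20, 5, 0.5
target = (rng.random((n_time, n_readout)) < 0.3).astype(float)

W = np.zeros((n_readout, n_time))
for epoch in range(20):
    for t in range(n_time):
        clock = np.zeros(n_time)
        clock[t] = 1.0                         # clock state at time t
        # Hebbian-style update pulling the read-out toward the target.
        W += eta * np.outer(target[t] - W @ clock, clock)

replay = W.T > 0.5                             # replay by running the clock
print("replayed pattern matches target:", np.array_equal(replay, target > 0.5))
```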

Affiliation(s)
- Amadeus Maes: Department of Bioengineering, Imperial College London, London, United Kingdom
- Mauricio Barahona: Department of Mathematics, Imperial College London, London, United Kingdom
- Claudia Clopath: Department of Bioengineering, Imperial College London, London, United Kingdom

11. Cabessa J. Turing complete neural computation based on synaptic plasticity. PLoS One 2019; 14:e0223451. [PMID: 31618230] [PMCID: PMC6795493] [DOI: 10.1371/journal.pone.0223451]
Abstract
In neural computation, the essential information is generally encoded into the neurons via their spiking configurations, activation values or (attractor) dynamics. The synapses and their associated plasticity mechanisms are, by contrast, mainly used to process this information and implement the crucial learning features. Here, we propose a novel Turing complete paradigm of neural computation where the essential information is encoded into discrete synaptic states, and the updating of this information achieved via synaptic plasticity mechanisms. More specifically, we prove that any 2-counter machine—and hence any Turing machine—can be simulated by a rational-weighted recurrent neural network employing spike-timing-dependent plasticity (STDP) rules. The computational states and counter values of the machine are encoded into discrete synaptic strengths. The transitions between those synaptic weights are then achieved via STDP. These considerations show that a Turing complete synaptic-based paradigm of neural computation is theoretically possible and potentially exploitable. They support the idea that synapses are not only crucially involved in information processing and learning features, but also in the encoding of essential information. This approach represents a paradigm shift in the field of neural computation.

Affiliation(s)
- Jérémie Cabessa: Laboratory of Mathematical Economics and Applied Microeconomics (LEMMA), University Paris 2, Panthéon-Assas, 75005 Paris, France; Institute of Computer Science, Czech Academy of Sciences, 18207 Prague 8, Czech Republic

12. Gallinaro JV, Rotter S. Associative properties of structural plasticity based on firing rate homeostasis in recurrent neuronal networks. Sci Rep 2018; 8:3754. [PMID: 29491474] [PMCID: PMC5830542] [DOI: 10.1038/s41598-018-22077-3]
Abstract
Correlation-based Hebbian plasticity is thought to shape neuronal connectivity during development and learning, whereas homeostatic plasticity would stabilize network activity. Here we investigate a further aspect of this dichotomy: can Hebbian associative properties also emerge as a network effect from a plasticity rule based on homeostatic principles at the neuronal level? To address this question, we simulated a recurrent network of leaky integrate-and-fire neurons, in which excitatory connections are subject to a structural plasticity rule based on firing rate homeostasis. We show that a subgroup of neurons develops stronger within-group connectivity as a consequence of receiving stronger external stimulation. In an experimentally well-documented scenario, we show that feature-specific connectivity, similar to what has been observed in rodent visual cortex, can emerge from such a plasticity rule. The experience-dependent structural changes triggered by stimulation are long-lasting and decay only slowly when the neurons are exposed again to unspecific external inputs.
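
The sketch below shows only the core homeostatic ingredient: growth and retraction of synaptic elements as a function of firing rate, with free elements paired at random. The associative, feature-specific connectivity reported in the paper emerges from this rule embedded in a full recurrent LIF network, which this toy omits; all rates and constants are illustrative assumptions.

```python
# Toy homeostatic structural plasticity: neurons below their target rate
# grow synaptic elements, neurons above retract them; free elements are
# paired at random to create synapses. The rate is a crude stand-in.
import numpy as np

rng = np.random.default_rng(0)
n, target, nu = 30, 5.0, 0.1
drive = np.full(n, 4.0)
drive[:10] += 2.0                        # one subgroup gets extra input
elements = np.zeros(n)                   # free synaptic elements per neuron
W = np.zeros((n, n))                     # synapse counts

for step in range(500):
    rate = drive + 0.1 * W.sum(axis=1)   # crude firing-rate estimate
    elements += nu * (target - rate)     # homeostatic growth/retraction
    elements = np.clip(elements, 0.0, None)
    free = np.flatnonzero(elements >= 1.0)
    rng.shuffle(free)
    for pre, post in zip(free[::2], free[1::2]):  # random pairing
        W[post, pre] += 1
        elements[[pre, post]] -= 1.0

print("mean in-degree, over-driven vs. under-driven neurons:",
      W[:10].sum() / 10, W[10:].sum() / 20)
```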

Affiliation(s)
- Júlia V Gallinaro: Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Freiburg im Breisgau, Germany
- Stefan Rotter: Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Freiburg im Breisgau, Germany

13. Neuronal Intrinsic Physiology Changes During Development of a Learned Behavior. eNeuro 2017; 4:eN-NWR-0297-17. [PMID: 29062887] [PMCID: PMC5649544] [DOI: 10.1523/eneuro.0297-17.2017]
Abstract
Juvenile male zebra finches learn their songs over distinct auditory and sensorimotor stages, the former requiring exposure to an adult tutor song pattern. The cortical premotor nucleus HVC (acronym is name) plays a necessary role in both learning stages, as well as the production of adult song. Consistent with neural network models where synaptic plasticity mediates developmental forms of learning, exposure to tutor song drives changes in the turnover, density, and morphology of HVC synapses during vocal development. A network's output, however, is also influenced by the intrinsic properties (e.g., ion channels) of the component neurons, which could change over development. Here, we use patch clamp recordings to show cell-type-specific changes in the intrinsic physiology of HVC projection neurons as a function of vocal development. Developmental changes in HVC neurons that project to the basal ganglia include an increased voltage sag response to hyperpolarizing currents and an increased rebound depolarization following hyperpolarization. Developmental changes in HVC neurons that project to vocal-motor cortex include a decreased resting membrane potential and an increased spike amplitude. HVC interneurons, however, show a relatively stable range of intrinsic features across vocal development. We used mathematical models to deduce possible changes in ionic currents that underlie the physiological changes and to show that the magnitude of the observed changes could alter HVC circuit function. The results demonstrate developmental plasticity in the intrinsic physiology of HVC projection neurons and suggest that intrinsic plasticity may have a role in the process of song learning.

14. Vidybida A, Shchur O. Information reduction in a reverberatory neuronal network through convergence to complex oscillatory firing patterns. Biosystems 2017; 161:24-30. [PMID: 28756163] [DOI: 10.1016/j.biosystems.2017.07.008]
Abstract
The dynamics of a reverberating neural net are studied by means of computer simulation. The net, which is composed of 9 leaky integrate-and-fire (LIF) neurons arranged in a square lattice, is fully connected, with interneuronal communication delays proportional to the corresponding distances. The network is initially stimulated with different stimuli and then evolves freely. For each stimulus, in the course of free evolution, activity either dies out completely or the network converges to a periodic trajectory, which may be different for different stimuli. The latter is observed for a set of 285290 initial stimuli, which constitutes 83% of all stimuli applied. Across the stimuli in this set, 102 different periodic end-states are found. After analyzing the trajectories, we conclude that neuronal firing is the necessary prerequisite for merging different trajectories into a single one, which eventually transforms into a periodic regime. The observed phenomena of self-organization in the time domain are discussed as a possible model for processes taking place during perception. The repetitive firing in the periodic regimes could underpin memory formation.

Affiliation(s)
- A Vidybida: Bogolyubov Institute for Theoretical Physics, Metrologichna Str., 14-B, Kyiv 03680, Ukraine
- O Shchur: Taras Shevchenko National University of Kyiv, Volodymyrska Str., 60, Kyiv 01033, Ukraine

15. Zheng P, Kozloski J. Striatal Network Models of Huntington's Disease Dysfunction Phenotypes. Front Comput Neurosci 2017; 11:70. [PMID: 28798680] [PMCID: PMC5529396] [DOI: 10.3389/fncom.2017.00070]
Abstract
We present a network model of striatum, which generates "winnerless" dynamics typical for a network of sparse, unidirectionally connected inhibitory units. We observe that these dynamics, while interesting and a good match to normal striatal electrophysiological recordings, are fragile. Specifically, we find that randomly initialized networks often show dynamics more resembling "winner-take-all," and relate this "unhealthy" model activity to dysfunctional physiological and anatomical phenotypes in the striatum of Huntington's disease animal models. We report plasticity as a potent mechanism to refine randomly initialized networks and create a healthy winnerless dynamic in our model, and we explore perturbations to a healthy network, modeled on changes observed in Huntington's disease, such as neuron cell death and increased bidirectional connectivity. We report the effect of these perturbations on the conversion risk of the network to an unhealthy state. Finally we discuss the relationship between structural and functional phenotypes observed at the level of simulated network dynamics as a promising means to model disease progression in different patient populations.
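
Why sparse, unidirectional inhibition favors "winnerless" switching can be seen in the minimal motif: a three-unit inhibitory ring in a rate model, where activity rotates among units instead of settling on a single winner. The weights, drive and time step below are illustrative, not fit to striatal data.

```python
# Minimal winnerless motif: a unidirectional inhibitory ring in a simple
# rate model. Activity rotates among units rather than one unit winning.
import numpy as np

W = np.array([[0.0, 2.0, 0.0],    # unit 0 is inhibited by unit 1, etc.
              [0.0, 0.0, 2.0],
              [2.0, 0.0, 0.0]])
x = np.array([0.6, 0.5, 0.4])     # slightly asymmetric initial rates
dt, drive = 0.05, 1.0

for step in range(1, 401):        # Euler integration of the rate dynamics
    x += dt * (-x + np.maximum(drive - W @ x, 0.0))
    if step % 100 == 0:
        print(np.round(x, 2))     # the dominant unit changes over time
```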

Affiliation(s)
- James Kozloski: Computational Neuroscience and Multiscale Brain Modeling, Computational Biology Center, IBM Research Division, IBM T. J. Watson Research Center, New York, NY, United States

16. Del Papa B, Priesemann V, Triesch J. Criticality meets learning: Criticality signatures in a self-organizing recurrent neural network. PLoS One 2017; 12:e0178683. [PMID: 28552964] [PMCID: PMC5446191] [DOI: 10.1371/journal.pone.0178683]
Abstract
Many experiments have suggested that the brain operates close to a critical state, based on signatures of criticality such as power-law distributed neuronal avalanches. In neural network models, criticality is a dynamical state that maximizes information processing capacities, e.g. sensitivity to input, dynamical range and storage capacity, which makes it a favorable candidate state for brain function. Although models that self-organize towards a critical state have been proposed, the relation between criticality signatures and learning is still unclear. Here, we investigate signatures of criticality in a self-organizing recurrent neural network (SORN). Investigating criticality in the SORN is of particular interest because it has not been developed to show criticality. Instead, the SORN has been shown to exhibit spatio-temporal pattern learning through a combination of neural plasticity mechanisms and it reproduces a number of biological findings on neural variability and the statistics and fluctuations of synaptic efficacies. We show that, after a transient, the SORN spontaneously self-organizes into a dynamical state that shows criticality signatures comparable to those found in experiments. The plasticity mechanisms are necessary to attain that dynamical state, but not to maintain it. Furthermore, onset of external input transiently changes the slope of the avalanche distributions – matching recent experimental findings. Interestingly, the membrane noise level necessary for the occurrence of the criticality signatures reduces the model’s performance in simple learning tasks. Overall, our work shows that the biologically inspired plasticity and homeostasis mechanisms responsible for the SORN’s spatio-temporal learning abilities can give rise to criticality signatures in its activity when driven by random input, but these break down under the structured input of short repeating sequences.
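
The standard avalanche analysis behind such criticality signatures can be sketched in a few lines: population activity is binned, an avalanche is a run of nonzero bins bounded by silent bins, and its size is the total activity in the run. The Poisson toy data below merely stand in for network activity.

```python
# Sketch of avalanche extraction from binned population activity; the
# resulting size distribution is what gets tested against a power law.
import numpy as np

rng = np.random.default_rng(0)
activity = rng.poisson(0.9, size=10_000)   # spikes per time bin (toy data)

sizes, current = [], 0
for a in activity:
    if a > 0:
        current += a                # avalanche continues
    elif current > 0:
        sizes.append(current)       # a silent bin ends the avalanche
        current = 0

sizes = np.array(sizes)
print(f"{sizes.size} avalanches, mean size {sizes.mean():.1f}, "
      f"max size {sizes.max()}")
```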

Affiliation(s)
- Bruno Del Papa: Frankfurt Institute for Advanced Studies, Johann Wolfgang Goethe University, Frankfurt am Main, Germany; International Max Planck Research School for Neural Circuits, Max Planck Institute for Brain Research, Frankfurt am Main, Germany
- Viola Priesemann: Department of Non-linear Dynamics, Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany; Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Jochen Triesch: Frankfurt Institute for Advanced Studies, Johann Wolfgang Goethe University, Frankfurt am Main, Germany

17. Ravid Tannenbaum N, Burak Y. Shaping Neural Circuits by High Order Synaptic Interactions. PLoS Comput Biol 2016; 12:e1005056. [PMID: 27517461] [PMCID: PMC4982676] [DOI: 10.1371/journal.pcbi.1005056]
Abstract
Spike-timing-dependent plasticity (STDP) is believed to play an important role in shaping the structure of neural circuits. Here we show that STDP generates effective interactions between synapses of different neurons, which were neglected in previous theoretical treatments, and can be described as a sum over contributions from structural motifs. These interactions can have a pivotal influence on the connectivity patterns that emerge under the influence of STDP. In particular, we consider two highly ordered forms of structure: wide synfire chains, in which groups of neurons project to each other sequentially, and self-connected assemblies. We show that high order synaptic interactions can enable the formation of both structures, depending on the form of the STDP function and the time course of synaptic currents. Furthermore, within a certain regime of biophysical parameters, emergence of the ordered connectivity occurs robustly and autonomously in a stochastic network of spiking neurons, without a need to expose the neural network to structured inputs during learning.

Plasticity between neural connections plays a key role in our ability to process and store information. One of the fundamental questions about plasticity is the extent to which local processes, affecting individual synapses, are responsible for large-scale structures of neural connectivity. Here we focus on two types of structures: synfire chains and self-connected assemblies. These structures are often proposed as forms of neural connectivity that can support brain functions such as memory and generation of motor activity. We show that an important plasticity mechanism, spike-timing-dependent plasticity, can lead to autonomous emergence of these large-scale structures in the brain: in contrast to previous theoretical proposals, we show that the emergence can occur autonomously even if instructive signals are not fed into the neural network while its form is shaped by synaptic plasticity.

Affiliation(s)
- Neta Ravid Tannenbaum: Edmond and Lily Safra Center for Brain Sciences, Hebrew University, Jerusalem, Israel
- Yoram Burak: Edmond and Lily Safra Center for Brain Sciences, Hebrew University, Jerusalem, Israel; Racah Institute of Physics, Hebrew University, Jerusalem, Israel

18. Fauth M, Tetzlaff C. Opposing Effects of Neuronal Activity on Structural Plasticity. Front Neuroanat 2016; 10:75. [PMID: 27445713] [PMCID: PMC4923203] [DOI: 10.3389/fnana.2016.00075]
Abstract
The connectivity of the brain is continuously adjusted to new environmental influences by several activity-dependent adaptive processes. The most investigated adaptive mechanism is activity-dependent functional or synaptic plasticity regulating the transmission efficacy of existing synapses. Another important but less prominently discussed adaptive process is structural plasticity, which changes the connectivity by the formation and deletion of synapses. In this review, we show, based on experimental evidence, that structural plasticity can be classified similar to synaptic plasticity into two categories: (i) Hebbian structural plasticity, which leads to an increase (decrease) of the number of synapses during phases of high (low) neuronal activity and (ii) homeostatic structural plasticity, which balances these changes by removing and adding synapses. Furthermore, based on experimental and theoretical insights, we argue that each type of structural plasticity fulfills a different function. While Hebbian structural changes enhance memory lifetime, storage capacity, and memory robustness, homeostatic structural plasticity self-organizes the connectivity of the neural network to assure stability. However, the link between functional synaptic and structural plasticity as well as the detailed interactions between Hebbian and homeostatic structural plasticity are more complex. This implies even richer dynamics requiring further experimental and theoretical investigations.

Affiliation(s)
- Michael Fauth: Department of Computational Neuroscience, Third Institute of Physics - Biophysics, Georg-August University, Göttingen, Germany; Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Christian Tetzlaff: Bernstein Center for Computational Neuroscience, Göttingen, Germany; Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany

19. Kozloski J. Closed-Loop Brain Model of Neocortical Information-Based Exchange. Front Neuroanat 2016; 10:3. [PMID: 26834573] [PMCID: PMC4716663] [DOI: 10.3389/fnana.2016.00003]
Abstract
Here we describe an “information-based exchange” model of brain function that ascribes to neocortex, basal ganglia, and thalamus distinct network functions. The model allows us to analyze whole-brain system set-point measures, such as the rate and heterogeneity of transitions in striatum and neocortex, in the context of neuromodulation and other perturbations. Our closed-loop model is grounded in neuroanatomical observations, proposing a novel “Grand Loop” through neocortex, and invokes different forms of plasticity at specific tissue interfaces and their principal cell synapses to achieve these transitions. By implementing a system for maximum information-based exchange of action potentials between modeled neocortical areas, we observe changes to these measures in simulation. We hypothesize that similar dynamic set points and modulations exist in the brain's resting state activity, and that different modifications to information-based exchange may shift the risk profile of different component tissues, resulting in different neurodegenerative diseases. This model is targeted for further development using IBM's Neural Tissue Simulator, which allows scalable elaboration of networks, tissues, and their neural and synaptic components toward ever greater complexity and biological realism.

Affiliation(s)
- James Kozloski: IBM Research Division, Computational Biology Center, IBM T.J. Watson Research Center, Yorktown Heights, NY, USA

20. Where's the Noise? Key Features of Spontaneous Activity and Neural Variability Arise through Learning in a Deterministic Network. PLoS Comput Biol 2015; 11:e1004640. [PMID: 26714277] [PMCID: PMC4694925] [DOI: 10.1371/journal.pcbi.1004640]
Abstract
Even in the absence of sensory stimulation the brain is spontaneously active. This background “noise” seems to be the dominant cause of the notoriously high trial-to-trial variability of neural recordings. Recent experimental observations have extended our knowledge of trial-to-trial variability and spontaneous activity in several directions:
1. Trial-to-trial variability systematically decreases following the onset of a sensory stimulus or the start of a motor act.
2. Spontaneous activity states in sensory cortex outline the region of evoked sensory responses.
3. Across development, spontaneous activity aligns itself with typical evoked activity patterns.
4. The spontaneous brain activity prior to the presentation of an ambiguous stimulus predicts how the stimulus will be interpreted.
At present it is unclear how these observations relate to each other and how they arise in cortical circuits. Here we demonstrate that all of these phenomena can be accounted for by a deterministic self-organizing recurrent neural network model (SORN), which learns a predictive model of its sensory environment. The SORN comprises recurrently coupled populations of excitatory and inhibitory threshold units and learns via a combination of spike-timing dependent plasticity (STDP) and homeostatic plasticity mechanisms. Similar to balanced network architectures, units in the network show irregular activity and variable responses to inputs. Additionally, however, the SORN exhibits sequence learning abilities matching recent findings from visual cortex and the network’s spontaneous activity reproduces the experimental findings mentioned above. Intriguingly, the network’s behaviour is reminiscent of sampling-based probabilistic inference, suggesting that correlates of sampling-based inference can develop from the interaction of STDP and homeostasis in deterministic networks. We conclude that key observations on spontaneous brain activity and the variability of neural responses can be accounted for by a simple deterministic recurrent neural network which learns a predictive model of its sensory environment via a combination of generic neural plasticity mechanisms.

Neural recordings seem very noisy. If the exact same stimulus is shown to an animal multiple times, the neural response will vary substantially. In fact, the activity of a single neuron shows many features of a random process. Furthermore, the spontaneous activity occurring in the absence of any sensory stimulus, which is usually considered a kind of background noise, often has a magnitude comparable to the activity evoked by stimulus presentation and interacts with sensory inputs in interesting ways. Here we show that the key features of neural variability and spontaneous activity can all be accounted for by a simple and completely deterministic neural network learning a predictive model of its sensory inputs. The network’s deterministic dynamics give rise to structured but variable responses matching key experimental findings obtained in different mammalian species with different recording techniques. Our results suggest that the notorious variability of neural recordings and the complex features of spontaneous brain activity could reflect the dynamics of a largely deterministic but highly adaptive network learning a predictive model of its sensory environment.
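
A compressed sketch of a SORN-style update, deterministic binary threshold units with discrete-time STDP, synaptic normalization and intrinsic plasticity, is given below. Network sizes, sparsity and learning rates are illustrative, and the published model additionally includes structured input and further plasticity mechanisms.

```python
# Compressed SORN-style network: deterministic binary units with three
# interacting plasticity rules. Sizes and rates are illustrative.
import numpy as np

rng = np.random.default_rng(0)
Ne, Ni, eta, eta_ip, h_target = 40, 8, 0.01, 0.01, 0.1
Wee = rng.random((Ne, Ne)) * (rng.random((Ne, Ne)) < 0.1)  # sparse E->E
np.fill_diagonal(Wee, 0.0)
Wei = rng.random((Ne, Ni)) * 0.5                           # I -> E
Wie = rng.random((Ni, Ne)) * 0.5                           # E -> I
Te, Ti = rng.random(Ne) * 0.5, rng.random(Ni) * 0.5        # thresholds
x = (rng.random(Ne) < 0.1).astype(float)
y = np.zeros(Ni)

for t in range(1000):
    x_new = ((Wee @ x - Wei @ y - Te) > 0).astype(float)
    y = ((Wie @ x_new - Ti) > 0).astype(float)
    # Discrete-time STDP: strengthen pre(t) -> post(t+1), weaken reverse.
    Wee += eta * (np.outer(x_new, x) - np.outer(x, x_new)) * (Wee > 0)
    Wee = np.clip(Wee, 0.0, None)
    # Synaptic normalization: incoming excitatory weights sum to one.
    Wee /= np.maximum(Wee.sum(axis=1, keepdims=True), 1e-12)
    # Intrinsic plasticity: thresholds track a target firing rate.
    Te += eta_ip * (x_new - h_target)
    x = x_new

print("mean excitatory rate after self-organization:", x.mean())
```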

21. Gilson M, Savin C, Zenke F. Editorial: Emergent Neural Computation from the Interaction of Different Forms of Plasticity. Front Comput Neurosci 2015; 9:145. [PMID: 26648864] [PMCID: PMC4663259] [DOI: 10.3389/fncom.2015.00145]

Affiliation(s)
- Matthieu Gilson: Computational Neuroscience Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Cristina Savin: Institute of Science and Technology Austria, Klosterneuburg, Austria
- Friedemann Zenke: Neural Dynamics and Computation Lab, Department of Applied Physics, Stanford University, Stanford, CA, USA

22. Hartmann C, Miner DC, Triesch J. Precise Synaptic Efficacy Alignment Suggests Potentiation Dominated Learning. Front Neural Circuits 2015; 9:90. [PMID: 26793070] [PMCID: PMC4711154] [DOI: 10.3389/fncir.2015.00090]
Abstract
Recent evidence suggests that parallel synapses from the same axonal branch onto the same dendritic branch have almost identical strength. It has been proposed that this alignment is only possible through learning rules that integrate activity over long time spans. However, learning mechanisms such as spike-timing-dependent plasticity (STDP) are commonly assumed to be temporally local. Here, we propose that the combination of temporally local STDP and a multiplicative synaptic normalization mechanism is sufficient to explain the alignment of parallel synapses. To address this issue, we introduce three increasingly complex models: First, we model the idealized interaction of STDP and synaptic normalization in a single neuron as a simple stochastic process and derive analytically that the alignment effect can be described by a so-called Kesten process. From this we can derive that synaptic efficacy alignment requires potentiation-dominated learning regimes. We verify these conditions in a single-neuron model with independent spiking activities but more realistic synapses. As expected, we only observe synaptic efficacy alignment for long-term potentiation-biased STDP. Finally, we explore how well the findings transfer to recurrent neural networks where the learning mechanisms interact with the correlated activity of the network. We find that due to the self-reinforcing correlations in recurrent circuits under STDP, alignment occurs for both long-term potentiation- and depression-biased STDP, because the learning will be potentiation dominated in both cases due to the potentiating events induced by correlated activity. This is in line with recent results demonstrating a dominance of potentiation over depression during waking and normalization during sleep. This leads us to predict that individual spine pairs will be more similar after sleep than after sleep deprivation. In conclusion, we show that synaptic normalization in conjunction with coordinated potentiation (in this case, from STDP in the presence of correlated pre- and post-synaptic activity) naturally leads to an alignment of parallel synapses.
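
The Kesten-process argument can be demonstrated directly: if both parallel synapses experience the same multiplicative factor a < 1 (normalization) and the same additive potentiation b, their difference contracts by a factor a per step, whatever their initial efficacies. The event statistics below are illustrative assumptions.

```python
# Two parallel synapses following the same Kesten step w <- a*w + b:
# the shared multiplicative factor contracts their difference, so the
# efficacies align. Event probabilities and magnitudes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
w1, w2 = 0.2, 1.5                # parallel synapses, very different start
for t in range(200):
    b = 0.02 if rng.random() < 0.5 else 0.0  # shared potentiation events
    a = 0.98                                 # multiplicative normalization
    w1, w2 = a * w1 + b, a * w2 + b          # same Kesten step for both
    if t % 50 == 0:
        print(f"t={t:3d}  w1={w1:.3f}  w2={w2:.3f}  |diff|={abs(w1 - w2):.4f}")
```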

Affiliation(s)
- Christoph Hartmann: Department of Neuroscience, Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany; International Max Planck Research School for Neural Circuits, Max Planck Institute for Brain Research, Frankfurt am Main, Germany
- Daniel C. Miner: Department of Neuroscience, Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
- Jochen Triesch: Department of Neuroscience, Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany