1. Devalle F, Roxin A. How plasticity shapes the formation of neuronal assemblies driven by oscillatory and stochastic inputs. J Comput Neurosci 2025;53:9-23. PMID: 39661297; DOI: 10.1007/s10827-024-00885-z.
Abstract
Synaptic connections in neuronal circuits are modulated by pre- and post-synaptic spiking activity. Previous theoretical work has studied how such Hebbian plasticity rules shape network connectivity when firing rates are constant, or slowly varying in time. However, oscillations and fluctuations, which can arise through sensory inputs or intrinsic brain mechanisms, are ubiquitous in neuronal circuits. Here we study how oscillatory and fluctuating inputs shape recurrent network connectivity given a temporally asymmetric plasticity rule. We do this analytically using a separation of time scales approach for pairs of neurons, and then show that the analysis can be extended to understand the structure in large networks. In the case of oscillatory inputs, the resulting network structure is strongly affected by the phase relationship between drive to different neurons. In large networks, distributed phases tend to lead to hierarchical clustering. The analysis for stochastic inputs reveals a rich phase plane in which there is multistability between different possible connectivity motifs. Our results may be of relevance for understanding the effect of sensory-driven inputs, which are by nature time-varying, on synaptic plasticity, and hence on learning and memory.
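The key ingredient, a temporally asymmetric pair-based plasticity window driven by phase-lagged oscillatory firing, can be illustrated with a toy calculation. The sketch below is not the paper's model; the window shape and all parameter values are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (assumed parameters, not the paper's model): a
# temporally asymmetric pair-based plasticity window with exponential
# potentiation/depression lobes, summed over all spike pairs of two
# periodically firing neurons with a fixed phase lag.
A_PLUS, A_MINUS = 0.010, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # window time constants (ms)

def stdp_window(dt):
    """Weight change for dt = t_post - t_pre (ms); dt > 0 potentiates."""
    return np.where(dt >= 0,
                    A_PLUS * np.exp(-dt / TAU_PLUS),
                    -A_MINUS * np.exp(dt / TAU_MINUS))

def weight_drift(pre_spikes, post_spikes):
    """Total pairwise plasticity over all pre/post spike pairs."""
    dts = post_spikes[:, None] - pre_spikes[None, :]
    return float(stdp_window(dts).sum())

# Two 5 Hz oscillators with the postsynaptic neuron lagging by 20 ms:
# pre-before-post pairs dominate, so this synapse drifts upward, while
# the reciprocal synapse (roles swapped) drifts downward.
period = 200.0
pre = np.arange(0.0, 2000.0, period)
post = pre + 20.0
print(weight_drift(pre, post) > 0)   # True: phase lag -> net potentiation
print(weight_drift(post, pre) < 0)   # True: reversed roles -> depression
```

The sign flip between the two directions is the pairwise mechanism behind the phase-dependent network structures the paper analyzes.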
Affiliation(s)
- Federico Devalle: Computational Neuroscience Group, Centre de Recerca Matemàtica, Campus de Bellaterra, Edifici C, 08193 Bellaterra, Spain
- Alex Roxin: Computational Neuroscience Group, Centre de Recerca Matemàtica, Campus de Bellaterra, Edifici C, 08193 Bellaterra, Spain
2. Madadi Asl M, Ramezani Akbarabadi S. Delay-dependent transitions of phase synchronization and coupling symmetry between neurons shaped by spike-timing-dependent plasticity. Cogn Neurodyn 2023;17:523-536. PMID: 37007192; PMCID: PMC10050303; DOI: 10.1007/s11571-022-09850-x.
Abstract
Synchronization plays a key role in learning and memory by facilitating the communication between neurons promoted by synaptic plasticity. Spike-timing-dependent plasticity (STDP) is a form of synaptic plasticity that modifies the strength of synaptic connections between neurons based on the relative timing of pre- and postsynaptic spikes. In this way, STDP simultaneously shapes the neuronal activity and synaptic connectivity in a feedback loop. However, transmission delays due to the physical distance between neurons affect neuronal synchronization and the symmetry of synaptic coupling. To address the question of how transmission delays and STDP jointly determine the emergent pairwise activity-connectivity patterns, we studied phase synchronization properties and coupling symmetry between two bidirectionally coupled neurons using both phase oscillator and conductance-based neuron models. We show that depending on the range of transmission delays, the activity of the two-neuron motif can achieve an in-phase/anti-phase synchronized state and its connectivity can attain a symmetric/asymmetric coupling regime. The coevolutionary dynamics of the neuronal system and the synaptic weights due to STDP stabilizes the motif in either one of these states by transitions between in-phase/anti-phase synchronization states and symmetric/asymmetric coupling regimes at particular transmission delays. These transitions crucially depend on the phase response curve (PRC) of the neurons, but they are relatively robust to the heterogeneity of transmission delays and potentiation-depression imbalance of the STDP profile.
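The delay-dependent transitions can be previewed with a back-of-the-envelope sketch (not the authors' phase-oscillator or conductance-based models; the wrap-around bookkeeping and numbers are illustrative): for an in-phase pair firing with period T, a transmission delay d makes each synapse see a spike-time difference of -d modulo T, so short and long delays land both synapses on opposite lobes of the STDP window.

```python
# Qualitative sketch (assumed setup, not the paper's model): for two
# neurons firing in phase with period T, a presynaptic spike arrives at
# each synapse one transmission delay late, so the timing difference seen
# there is -delay wrapped to the nearest-spike interval.
def wrap(dt, period):
    """Map a timing difference to the nearest-spike interval [-T/2, T/2)."""
    return (dt + period / 2) % period - period / 2

def sign_of_update(delay, period=100.0):
    dt_at_synapse = wrap(-delay, period)   # in-phase firing: somatic lag = 0
    return "potentiate" if dt_at_synapse > 0 else "depress"

print(sign_of_update(10.0))   # depress: short delay, both synapses weaken
print(sign_of_update(90.0))   # potentiate: long delay flips the sign
```

Both synapses see the same shifted timing, which is why delays can switch the motif between symmetric-weak and symmetric-strong coupling regimes of the kind the paper maps out.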
Affiliation(s)
- Mojtaba Madadi Asl: School of Biological Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran 19395-5531, Iran
3. Madadi Asl M, Asadi A, Enayati J, Valizadeh A. Inhibitory Spike-Timing-Dependent Plasticity Can Account for Pathological Strengthening of Pallido-Subthalamic Synapses in Parkinson's Disease. Front Physiol 2022;13:915626. PMID: 35665225; PMCID: PMC9160312; DOI: 10.3389/fphys.2022.915626.
Abstract
Parkinson's disease (PD) is a neurodegenerative brain disorder associated with dysfunction of the basal ganglia (BG) circuitry. Dopamine (DA) depletion in experimental PD models leads to the pathological strengthening of pallido-subthalamic synaptic connections, contributing to the emergence of abnormally synchronized neuronal activity in the external segment of the globus pallidus (GPe) and subthalamic nucleus (STN). Augmented GPe-STN transmission following loss of DA was attributed to heterosynaptic plasticity mechanisms induced by cortico-subthalamic inputs. However, synaptic plasticity may also play a role in this process. Here, by employing computational modeling, we show that assuming inhibitory spike-timing-dependent plasticity (iSTDP) at pallido-subthalamic synapses can account for pathological strengthening of pallido-subthalamic synapses in PD by further promoting correlated neuronal activity in the GPe-STN network. In addition, we show that GPe-STN transmission delays can shape bistable activity-connectivity states due to iSTDP, characterized by strong connectivity and strong synchronized activity (pathological states) as opposed to weak connectivity and desynchronized activity (physiological states). Our results may shed light on how abnormal reshaping of GPe-STN connectivity by synaptic plasticity during parkinsonism is related to the PD pathophysiology and contribute to the development of therapeutic brain stimulation techniques targeting plasticity-induced rewiring of network connectivity.
Affiliation(s)
- Mojtaba Madadi Asl: Department of Physics, Institute for Advanced Studies in Basic Sciences (IASBS), Zanjan, Iran
- Atefeh Asadi: Department of Physics, Institute for Advanced Studies in Basic Sciences (IASBS), Zanjan, Iran
- Jamil Enayati: Physics Department, College of Education, University of Garmian, Kalar, Iraq
- Alireza Valizadeh: Department of Physics, Institute for Advanced Studies in Basic Sciences (IASBS), Zanjan, Iran
4. Shorten DP, Priesemann V, Wibral M, Lizier JT. Early lock-in of structured and specialised information flows during neural development. eLife 2022;11:e74651. PMID: 35286256; PMCID: PMC9064303; DOI: 10.7554/eLife.74651.
Abstract
The brains of many organisms are capable of complicated distributed computation underpinned by a highly advanced information processing capacity. Although substantial progress has been made towards characterising the information flow component of this capacity in mature brains, there is a distinct lack of work characterising its emergence during neural development. This lack of progress has been largely driven by the lack of effective estimators of information processing operations for spiking data. Here, we leverage recent advances in this estimation task in order to quantify the changes in transfer entropy during development. We do so by studying the changes in the intrinsic dynamics of the spontaneous activity of developing dissociated neural cell cultures. We find that the quantity of information flowing across these networks undergoes a dramatic increase across development. Moreover, the spatial structure of these flows exhibits a tendency to lock-in at the point when they arise. We also characterise the flow of information during the crucial periods of population bursts. We find that, during these bursts, nodes tend to undertake specialised computational roles as either transmitters, mediators, or receivers of information, with these roles tending to align with their average spike ordering. Further, we find that these roles are regularly locked-in when the information flows are established. Finally, we compare these results to information flows in a model network developing according to a spike-timing-dependent plasticity learning rule. Similar temporal patterns in the development of information flows were observed in these networks, hinting at the broader generality of these phenomena.
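For intuition about the quantity being tracked, here is a minimal plug-in transfer entropy estimator for discrete-time binary spike trains with one-bin histories. This is a deliberate simplification: the paper relies on far more sophisticated continuous-time estimators, and the toy data below are our own illustration.

```python
import numpy as np
from collections import Counter

# Plug-in transfer entropy for binary spike trains, history length 1:
# TE(X -> Y) = sum p(y_n, y_p, x_p) * log2[ p(y_n | y_p, x_p) / p(y_n | y_p) ]
def transfer_entropy(source, target):
    x = np.asarray(source, dtype=int)
    y = np.asarray(target, dtype=int)
    n = len(y) - 1
    joint = Counter(zip(y[1:].tolist(), y[:-1].tolist(), x[:-1].tolist()))
    p_joint = {k: v / n for k, v in joint.items()}
    p_ypxp, p_yp, p_ynyp = Counter(), Counter(), Counter()
    for (yn, yp, xp), p in p_joint.items():
        p_ypxp[(yp, xp)] += p
        p_yp[yp] += p
        p_ynyp[(yn, yp)] += p
    te = 0.0
    for (yn, yp, xp), p in p_joint.items():
        te += p * np.log2(p * p_yp[yp] / (p_ypxp[(yp, xp)] * p_ynyp[(yn, yp)]))
    return te

# A target that copies the source with a one-bin lag carries ~1 bit of
# transfer entropy per bin; an unrelated train carries almost none.
rng = np.random.default_rng(0)
src = rng.integers(0, 2, 10_000)
tgt = np.roll(src, 1)
print(transfer_entropy(src, tgt) > 0.9)                       # True
print(transfer_entropy(src, rng.integers(0, 2, 10_000)) < 0.05)  # True
```

In practice spike trains are sparse and history effects are long, which is exactly why the estimation advances the paper leverages are needed; this sketch only shows what the number means.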
Affiliation(s)
- David P Shorten: Centre for Complex Systems, Faculty of Engineering, The University of Sydney, Sydney, Australia
- Viola Priesemann: Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
- Michael Wibral: Campus Institute for Dynamics of Biological Networks, Georg August University, Göttingen, Germany
- Joseph T Lizier: Centre for Complex Systems, Faculty of Engineering, The University of Sydney, Sydney, Australia
5. Madadi Asl M, Vahabie AH, Valizadeh A, Tass PA. Spike-Timing-Dependent Plasticity Mediated by Dopamine and its Role in Parkinson's Disease Pathophysiology. Front Netw Physiol 2022;2:817524. PMID: 36926058; PMCID: PMC10013044; DOI: 10.3389/fnetp.2022.817524.
Abstract
Parkinson's disease (PD) is a multi-systemic neurodegenerative brain disorder. Motor symptoms of PD are linked to the significant dopamine (DA) loss in substantia nigra pars compacta (SNc) followed by basal ganglia (BG) circuit dysfunction. Increasing experimental and computational evidence indicates that (synaptic) plasticity plays a key role in the emergence of PD-related pathological changes following DA loss. Spike-timing-dependent plasticity (STDP) mediated by DA provides a mechanistic model for synaptic plasticity to modify synaptic connections within the BG according to the neuronal activity. To shed light on how DA-mediated STDP can shape neuronal activity and synaptic connectivity in the PD condition, we reviewed experimental and computational findings addressing the modulatory effect of DA on STDP as well as other plasticity mechanisms and discussed their potential role in PD pathophysiology and related network dynamics and connectivity. In particular, reshaping of STDP profiles together with other plasticity-mediated processes following DA loss may abnormally modify synaptic connections in competing pathways of the BG. The cascade of plasticity-induced maladaptive or compensatory changes can impair the excitation-inhibition balance towards the BG output nuclei, leading to the emergence of pathological activity-connectivity patterns in PD. Pre-clinical, clinical as well as computational studies reviewed here provide an understanding of the impact of synaptic plasticity and other plasticity mechanisms on PD pathophysiology, especially PD-related network activity and connectivity, after DA loss. This review may provide further insights into the abnormal structure-function relationship within the BG contributing to the emergence of pathological states in PD. Specifically, this review is intended to provide detailed information for the development of computational network models for PD, serving as testbeds for the development and optimization of invasive and non-invasive brain stimulation techniques. Computationally derived hypotheses may accelerate the development of therapeutic stimulation techniques and potentially reduce the number of related animal experiments.
Affiliation(s)
- Mojtaba Madadi Asl: Department of Physics, Institute for Advanced Studies in Basic Sciences (IASBS), Zanjan, Iran
- Abdol-Hossein Vahabie: School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran; Department of Psychology, Faculty of Psychology and Education, University of Tehran, Tehran, Iran
- Alireza Valizadeh: Department of Physics, Institute for Advanced Studies in Basic Sciences (IASBS), Zanjan, Iran
- Peter A Tass: Department of Neurosurgery, Stanford University School of Medicine, Stanford, CA, United States
6. Akil AE, Rosenbaum R, Josić K. Balanced networks under spike-time dependent plasticity. PLoS Comput Biol 2021;17:e1008958. PMID: 33979336; PMCID: PMC8143429; DOI: 10.1371/journal.pcbi.1008958.
Abstract
The dynamics of local cortical networks are irregular, but correlated. Dynamic excitatory-inhibitory balance is a plausible mechanism that generates such irregular activity, but it remains unclear how balance is achieved and maintained in plastic neural networks. In particular, it is not fully understood how plasticity induced changes in the network affect balance, and in turn, how correlated, balanced activity impacts learning. How do the dynamics of balanced networks change under different plasticity rules? How does correlated spiking activity in recurrent networks change the evolution of weights, their eventual magnitude, and structure across the network? To address these questions, we develop a theory of spike-timing dependent plasticity in balanced networks. We show that balance can be attained and maintained under plasticity-induced weight changes. We find that correlations in the input mildly affect the evolution of synaptic weights. Under certain plasticity rules, we find an emergence of correlations between firing rates and synaptic weights. Under these rules, synaptic weights converge to a stable manifold in weight space with their final configuration dependent on the initial state of the network. Lastly, we show that our framework can also describe the dynamics of plastic balanced networks when subsets of neurons receive targeted optogenetic input.

Animals are able to learn complex tasks through changes in individual synapses between cells. Such changes lead to the coevolution of neural activity patterns and the structure of neural connectivity, but the consequences of these interactions are not fully understood. We consider plasticity in model neural networks which achieve an average balance between the excitatory and inhibitory synaptic inputs to different cells, and display cortical-like, irregular activity. We extend the theory of balanced networks to account for synaptic plasticity and show which rules can maintain balance, and which will drive the network into a different state. This theory of plasticity can provide insights into the relationship between stimuli, network dynamics, and synaptic circuitry.
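The balance scaling at the heart of this framework can be sketched numerically. This is a schematic, not the paper's analysis: rates, the inhibitory gain, and the Poisson-style variance estimate below are all assumed for illustration.

```python
import numpy as np

# Schematic excitatory-inhibitory balance scaling (not the paper's
# derivation): with K inputs per neuron, weights of order 1/sqrt(K), and
# roughly Poisson inputs, the mean E current grows like sqrt(K), the net
# (E + I) current stays small when rates nearly cancel, and the input
# fluctuations are exactly O(1). All parameter values are illustrative.
def input_stats(K, r_e=0.01, r_i=0.01, g=1.05, tau=20.0):
    j = 1.0 / np.sqrt(K)                           # synaptic weight scale
    mean_e = j * K * r_e * tau                     # grows like sqrt(K)
    mean_net = j * K * (r_e - g * r_i) * tau       # near-cancellation
    sd = j * np.sqrt(K * (r_e + g**2 * r_i) * tau) # K-independent
    return mean_e, mean_net, sd

for K in (100, 1_000, 10_000):
    mean_e, mean_net, sd = input_stats(K)
    print(K, round(mean_e, 2), round(mean_net, 2), round(sd, 2))
```

The fluctuation term is what keeps firing irregular; the paper's question is whether STDP-driven weight changes preserve this cancellation, and the sketch shows why the cancellation is delicate: the individual E and I means grow without bound as K increases.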
Affiliation(s)
- Alan Eric Akil: Department of Mathematics, University of Houston, Houston, Texas, United States of America
- Robert Rosenbaum: Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, Indiana, United States of America; Interdisciplinary Center for Network Science and Applications, University of Notre Dame, Notre Dame, Indiana, United States of America
- Krešimir Josić: Department of Mathematics, University of Houston, Houston, Texas, United States of America; Department of Biology and Biochemistry, University of Houston, Houston, Texas, United States of America
7. An Integrate-and-Fire Spiking Neural Network Model Simulating Artificially Induced Cortical Plasticity. eNeuro 2021;8:ENEURO.0333-20.2021. PMID: 33632810; PMCID: PMC7986529; DOI: 10.1523/ENEURO.0333-20.2021.
Abstract
We describe an integrate-and-fire (IF) spiking neural network that incorporates spike-timing-dependent plasticity (STDP) and simulates the experimental outcomes of four different conditioning protocols that produce cortical plasticity. The original conditioning experiments were performed in freely moving non-human primates (NHPs) with an autonomous head-fixed bidirectional brain-computer interface (BCI). Three protocols involved closed-loop stimulation triggered from (1) spike activity of single cortical neurons, (2) electromyographic (EMG) activity from forearm muscles, and (3) cycles of spontaneous cortical beta activity. A fourth protocol involved open-loop delivery of pairs of stimuli at neighboring cortical sites. The IF network that replicates the experimental results consists of 360 units with simulated membrane potentials produced by synaptic inputs and triggering a spike when reaching threshold. The 240 cortical units produce either excitatory or inhibitory postsynaptic potentials (PSPs) in their target units. In addition to the experimentally observed conditioning effects, the model also allows computation of underlying network behavior not originally documented. Furthermore, the model makes predictions about outcomes from protocols not yet investigated, including spike-triggered inhibition, γ-triggered stimulation and disynaptic conditioning. The success of the simulations suggests that a simple voltage-based IF model incorporating STDP can capture the essential mechanisms mediating targeted plasticity with closed-loop stimulation.
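The basic unit of such a model can be written in a few lines. The sketch below is a generic voltage-based leaky integrate-and-fire neuron with assumed parameter values, not the 360-unit network described above.

```python
import numpy as np

# Generic leaky integrate-and-fire unit (illustrative parameters, not the
# paper's model): membrane potential integrates input, fires on crossing
# threshold, and resets.
def simulate_lif(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Forward-Euler integration of dV/dt = (-V + I)/tau; returns spike times (ms)."""
    v, spikes = 0.0, []
    for step, i_ext in enumerate(input_current):
        v += dt * (-v + i_ext) / tau
        if v >= v_thresh:          # threshold crossing -> spike and reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

# Constant suprathreshold drive yields perfectly regular firing.
spikes = simulate_lif(np.full(500, 1.5))
print(len(spikes) > 5, len(set(np.diff(spikes))) == 1)  # True True
```

In a network model of this kind, each unit's input current is the sum of excitatory and inhibitory postsynaptic potentials from its presynaptic partners, and an STDP rule adjusts those connection strengths from the resulting spike times.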
8. Montangie L, Miehl C, Gjorgjieva J. Autonomous emergence of connectivity assemblies via spike triplet interactions. PLoS Comput Biol 2020;16:e1007835. PMID: 32384081; PMCID: PMC7239496; DOI: 10.1371/journal.pcbi.1007835.
Abstract
Non-random connectivity can emerge without structured external input driven by activity-dependent mechanisms of synaptic plasticity based on precise spiking patterns. Here we analyze the emergence of global structures in recurrent networks based on a triplet model of spike timing dependent plasticity (STDP), which depends on the interactions of three precisely-timed spikes, and can describe plasticity experiments with varying spike frequency better than the classical pair-based STDP rule. We derive synaptic changes arising from correlations up to third-order and describe them as the sum of structural motifs, which determine how any spike in the network influences a given synaptic connection through possible connectivity paths. This motif expansion framework reveals novel structural motifs under the triplet STDP rule, which support the formation of bidirectional connections and ultimately the spontaneous emergence of global network structure in the form of self-connected groups of neurons, or assemblies. We propose that under triplet STDP assembly structure can emerge without the need for externally patterned inputs or assuming a symmetric pair-based STDP rule common in previous studies. The emergence of non-random network structure under triplet STDP occurs through internally-generated higher-order correlations, which are ubiquitous in natural stimuli and neuronal spiking activity, and important for coding. We further demonstrate how neuromodulatory mechanisms that modulate the shape of the triplet STDP rule or the synaptic transmission function differentially promote structural motifs underlying the emergence of assemblies, and quantify the differences using graph theoretic measures.

Emergent non-random connectivity structures in different brain regions are tightly related to specific patterns of neural activity and support diverse brain functions. For instance, self-connected groups of neurons, known as assemblies, have been proposed to represent functional units in brain circuits and can emerge even without patterned external instruction. Here we investigate the emergence of non-random connectivity in recurrent networks using a particular plasticity rule, triplet STDP, which relies on the interaction of spike triplets and can capture higher-order statistical dependencies in neural activity. We derive the evolution of the synaptic strengths in the network and explore the conditions for the self-organization of connectivity into assemblies. We demonstrate key differences of the triplet STDP rule compared to the classical pair-based rule in terms of how assemblies are formed, including the realistic asymmetric shape and influence of novel connectivity motifs on network plasticity driven by higher-order correlations. Assembly formation depends on the specific shape of the STDP window and synaptic transmission function, pointing towards an important role of neuromodulatory signals on formation of intrinsically generated assemblies.
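A minimal triplet update in the spirit of Pfister and Gerstner's (2006) rule, which this line of work builds on, can be sketched as follows. Depression at each presynaptic spike is pair-based (a fast postsynaptic trace), while potentiation at each postsynaptic spike is gated by a presynaptic trace and a slow second postsynaptic trace, so three spikes interact. Parameter values below are illustrative assumptions, not the paper's.

```python
import numpy as np

# Triplet STDP sketch (illustrative parameters): traces decay continuously,
# spikes increment them, and weight updates read the traces.
TAU_PLUS, TAU_MINUS, TAU_Y = 16.8, 33.7, 114.0   # trace time constants (ms)
A2_MINUS, A3_PLUS = 7.0e-3, 6.5e-3               # pair / triplet amplitudes

def run_triplet(pre_times, post_times, t_max, dt=0.1, w0=0.5):
    pre = {int(round(t / dt)) for t in pre_times}
    post = {int(round(t / dt)) for t in post_times}
    w, r1, o1, o2 = w0, 0.0, 0.0, 0.0
    for step in range(int(t_max / dt) + 1):
        r1 *= np.exp(-dt / TAU_PLUS)     # presynaptic trace
        o1 *= np.exp(-dt / TAU_MINUS)    # fast postsynaptic trace
        o2 *= np.exp(-dt / TAU_Y)        # slow postsynaptic trace
        if step in pre:
            w -= A2_MINUS * o1           # pair-based depression
            r1 += 1.0
        if step in post:
            w += A3_PLUS * r1 * o2       # triplet potentiation: o2 is read
            o1 += 1.0                    # *before* this spike's increment
            o2 += 1.0
    return w

# Pre->post pairings (post lags pre by 5 ms) potentiate, and much more so
# at 50 Hz than at 5 Hz, because the slow trace o2 accumulates.
w_hi = run_triplet(np.arange(0.0, 1000.0, 20.0),
                   np.arange(5.0, 1000.0, 20.0), t_max=1000.0)
w_lo = run_triplet(np.arange(0.0, 1000.0, 200.0),
                   np.arange(5.0, 1000.0, 200.0), t_max=1000.0)
print(w_hi - 0.5 > w_lo - 0.5 > 0)   # True: frequency-dependent potentiation
```

The frequency dependence carried by the slow trace is exactly the feature the pair-based rule lacks, and it is what generates the higher-order-correlation-sensitive motifs analyzed in the paper.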
Affiliation(s)
- Lisandro Montangie: Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
- Christoph Miehl: Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany; Technical University of Munich, School of Life Sciences, Freising, Germany
- Julijana Gjorgjieva: Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany; Technical University of Munich, School of Life Sciences, Freising, Germany
9. Rhythmogenesis evolves as a consequence of long-term plasticity of inhibitory synapses. Sci Rep 2018;8:13050. PMID: 30158555; PMCID: PMC6115462; DOI: 10.1038/s41598-018-31412-7.
Abstract
Brain rhythms are widely believed to reflect numerous cognitive processes. Changes in rhythmicity have been associated with pathological states. However, the mechanism underlying these rhythms remains unknown. Here, we present a theoretical analysis of the evolution of rhythm-generating capabilities in neuronal circuits. We tested the hypothesis that brain rhythms can be acquired via an intrinsic unsupervised learning process of activity dependent plasticity. Specifically, we focused on spike timing dependent plasticity (STDP) of inhibitory synapses. We detail the conditions under which rhythmicity can develop via STDP; these conditions serve as natural predictions of the hypothesis. We show how global features of the STDP rule govern and stabilize the resultant rhythmic activity. Finally, we demonstrate how rhythmicity is retained even in the face of synaptic variability. This study suggests a role for inhibitory plasticity that is beyond homeostatic processes.
10. Delay-Induced Multistability and Loop Formation in Neuronal Networks with Spike-Timing-Dependent Plasticity. Sci Rep 2018;8:12068. PMID: 30104713; PMCID: PMC6089910; DOI: 10.1038/s41598-018-30565-9.
Abstract
Spike-timing-dependent plasticity (STDP) adjusts synaptic strengths according to the precise timing of pre- and postsynaptic spike pairs. Theoretical and computational studies have revealed that STDP may contribute to the emergence of a variety of structural and dynamical states in plastic neuronal populations. In this manuscript, we show that when dendritic and axonal propagation delays are incorporated in recurrent networks of oscillatory neurons, the asymptotic connectivity displays multistability, where different structures emerge depending on the initial distribution of the synaptic strengths. In particular, we show that the standard deviation of the initial distribution of synaptic weights, besides its mean, determines the main properties of the emergent structural connectivity such as the mean final synaptic weight, the number of two-neuron loops and the symmetry of the final structure. We also show that the firing rates of the neurons affect the evolution of the network, and a more symmetric configuration of the synapses emerges at higher firing rates. We justify the network results based on a two-neuron framework and show how the results translate to large recurrent networks.
11. Min B, Zhou D, Cai D. Effects of Firing Variability on Network Structures with Spike-Timing-Dependent Plasticity. Front Comput Neurosci 2018;12:1. PMID: 29410621; PMCID: PMC5787127; DOI: 10.3389/fncom.2018.00001.
Abstract
Synaptic plasticity is believed to be the biological substrate underlying learning and memory. One of the most widespread forms of synaptic plasticity, spike-timing-dependent plasticity (STDP), uses the spike timing information of presynaptic and postsynaptic neurons to induce synaptic potentiation or depression. An open question is how STDP organizes the connectivity patterns in neuronal circuits. Previous studies have placed much emphasis on the role of firing rate in shaping connectivity patterns. Here, we go beyond the firing rate description to develop a self-consistent linear response theory that incorporates the information of both firing rate and firing variability. By decomposing the pairwise spike correlation into one component associated with local direct connections and the other associated with indirect connections, we identify two distinct regimes regarding the network structures learned through STDP. In one regime, the contribution of the direct-connection correlations dominates over that of the indirect-connection correlations in the learning dynamics; this gives rise to a network structure consistent with the firing rate description. In the other regime, the contribution of the indirect-connection correlations dominates in the learning dynamics, leading to a network structure different from the firing rate description. We demonstrate that the heterogeneity of firing variability across neuronal populations induces a temporally asymmetric structure of indirect-connection correlations. This temporally asymmetric structure underlies the emergence of the second regime. Our study provides a new perspective that emphasizes the role of high-order statistics of spiking activity in the spike-correlation-sensitive learning dynamics.
Affiliation(s)
- Bin Min: Center for Neural Science, Courant Institute of Mathematical Sciences, New York University, New York, NY, United States; NYUAD Institute, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Douglas Zhou: School of Mathematical Sciences, MOE-LSC, Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China
- David Cai: Center for Neural Science, Courant Institute of Mathematical Sciences, New York University, New York, NY, United States; NYUAD Institute, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates; School of Mathematical Sciences, MOE-LSC, Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China
12. Ocker GK, Hu Y, Buice MA, Doiron B, Josić K, Rosenbaum R, Shea-Brown E. From the statistics of connectivity to the statistics of spike times in neuronal networks. Curr Opin Neurobiol 2017;46:109-119. PMID: 28863386; DOI: 10.1016/j.conb.2017.07.011.
Abstract
An essential step toward understanding neural circuits is linking their structure and their dynamics. In general, this relationship can be almost arbitrarily complex. Recent theoretical work has, however, begun to identify some broad principles underlying collective spiking activity in neural circuits. The first is that local features of network connectivity can be surprisingly effective in predicting global statistics of activity across a network. The second is that, for the important case of large networks with excitatory-inhibitory balance, correlated spiking persists or vanishes depending on the spatial scales of recurrent and feedforward connectivity. We close by showing how these ideas, together with plasticity rules, can help to close the loop between network structure and activity statistics.
Affiliation(s)
- Yu Hu: Center for Brain Science, Harvard University, United States
- Michael A Buice: Allen Institute for Brain Science, United States; Department of Applied Mathematics, University of Washington, United States
- Brent Doiron: Department of Mathematics, University of Pittsburgh, United States; Center for the Neural Basis of Cognition, Pittsburgh, United States
- Krešimir Josić: Department of Mathematics, University of Houston, United States; Department of Biology and Biochemistry, University of Houston, United States; Department of BioSciences, Rice University, United States
- Robert Rosenbaum: Department of Mathematics, University of Notre Dame, United States
- Eric Shea-Brown: Allen Institute for Brain Science, United States; Department of Applied Mathematics, University of Washington, United States; Department of Physiology and Biophysics, and University of Washington Institute for Neuroengineering, United States
13. Lajoie G, Krouchev NI, Kalaska JF, Fairhall AL, Fetz EE. Correlation-based model of artificially induced plasticity in motor cortex by a bidirectional brain-computer interface. PLoS Comput Biol 2017;13:e1005343. PMID: 28151957; PMCID: PMC5313237; DOI: 10.1371/journal.pcbi.1005343.
Abstract
Experiments show that spike-triggered stimulation performed with Bidirectional Brain-Computer-Interfaces (BBCI) can artificially strengthen connections between separate neural sites in motor cortex (MC). When spikes from a neuron recorded at one MC site trigger stimuli at a second target site after a fixed delay, the connections between sites eventually strengthen. It was also found that effective spike-stimulus delays are consistent with experimentally derived spike-timing-dependent plasticity (STDP) rules, suggesting that STDP is key to drive these changes. However, the impact of STDP at the level of circuits, and the mechanisms governing its modification with neural implants remain poorly understood. The present work describes a recurrent neural network model with probabilistic spiking mechanisms and plastic synapses capable of capturing both neural and synaptic activity statistics relevant to BBCI conditioning protocols. Our model successfully reproduces key experimental results, both established and new, and offers mechanistic insights into spike-triggered conditioning. Using analytical calculations and numerical simulations, we derive optimal operational regimes for BBCIs, and formulate predictions concerning the efficacy of spike-triggered conditioning in different regimes of cortical activity.
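The role of the spike-stimulus delay can be illustrated with a toy calculation (illustrative window parameters, not the paper's fitted model): each trigger spike is paired with a stimulus-evoked spike at the target site one delay later, so the expected change in the trigger-to-target connection is roughly the STDP window evaluated at that delay.

```python
import numpy as np

# Toy calculation (assumed window parameters, not the paper's model): a
# stimulus delivered `delay_ms` after each trigger spike evokes a spike at
# the target site, producing a pre->post pairing with dt = delay_ms.
A_PLUS, A_MINUS, TAU = 0.010, 0.012, 20.0

def expected_change(delay_ms):
    dt = delay_ms   # t_post (evoked) - t_pre (trigger)
    if dt >= 0:
        return A_PLUS * np.exp(-dt / TAU)
    return -A_MINUS * np.exp(dt / TAU)

# Short spike-stimulus delays fall deep in the potentiation lobe; by
# 100 ms the expected per-pairing effect is negligible.
for d in (5.0, 25.0, 100.0):
    print(d, expected_change(d))
```

This is the first-order intuition behind the experimentally effective delay range; the paper's recurrent network model adds the circuit-level effects (ongoing correlations, indirect pathways) that this pairwise picture leaves out.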
Affiliation(s)
- Guillaume Lajoie
- University of Washington Institute for Neuroengineering, University of Washington, Seattle, WA, USA
- John F. Kalaska
- Groupe de recherche sur le système nerveux central, Département de neurosciences, Université de Montreal, Montreal, QC, Canada
- Adrienne L. Fairhall
- University of Washington Institute for Neuroengineering, University of Washington, Seattle, WA, USA
- Dept. of Physiology and Biophysics, University of Washington, Seattle, WA, USA
- Dept. of Physics, University of Washington, Seattle, WA, USA
- Eberhard E. Fetz
- University of Washington Institute for Neuroengineering, University of Washington, Seattle, WA, USA
- Dept. of Physiology and Biophysics, University of Washington, Seattle, WA, USA
14
Madadi Asl M, Valizadeh A, Tass PA. Dendritic and Axonal Propagation Delays Determine Emergent Structures of Neuronal Networks with Plastic Synapses. Sci Rep 2017; 7:39682. [PMID: 28045109 PMCID: PMC5206725 DOI: 10.1038/srep39682] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2016] [Accepted: 11/25/2016] [Indexed: 11/09/2022] Open
Abstract
Spike-timing-dependent plasticity (STDP) modifies synaptic strengths based on the relative timing of pre- and postsynaptic spikes. The temporal order of spikes turned out to be crucial. We here take into account how propagation delays, composed of dendritic and axonal delay times, may affect the temporal order of spikes. In a minimal setting, characterized by neglecting dendritic and axonal propagation delays, STDP eliminates bidirectional connections between two coupled neurons and turns them into unidirectional connections. In this paper, however, we show that depending on the dendritic and axonal propagation delays, the temporal order of spikes at the synapses can be different from those in the cell bodies and, consequently, qualitatively different connectivity patterns emerge. In particular, we show that for a system of two coupled oscillatory neurons, bidirectional synapses can be preserved and potentiated. Intriguingly, this finding also translates to large networks of type-II phase oscillators and, hence, crucially impacts on the overall hierarchical connectivity patterns of oscillatory neuronal networks.
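The delay argument summarized above reduces to simple bookkeeping: the spike-time difference seen at the synapse is the somatic difference shifted by the dendritic delay minus the axonal delay, which can flip the sign of the timing. A minimal sketch, with illustrative delay values that are assumptions of this example rather than the paper's parameters:

```python
# Illustrative sketch (not the authors' code): dendritic and axonal
# propagation delays shift the spike-time difference seen at a synapse,
# and can even reverse the order of spikes relative to the somata.

def dt_at_synapse(t_pre, t_post, d_axon, d_dend):
    """Post-minus-pre spike-time difference as seen at the synapse.

    The presynaptic spike reaches the synapse after the axonal delay;
    the postsynaptic spike back-propagates to it after the dendritic delay.
    """
    return (t_post + d_dend) - (t_pre + d_axon)

# At the somata, pre fires 2 ms before post (the potentiating order).
t_pre, t_post = 0.0, 2.0
dt_soma = t_post - t_pre                                       # +2 ms
dt_syn = dt_at_synapse(t_pre, t_post, d_axon=5.0, d_dend=1.0)

print(dt_soma, dt_syn)  # 2.0 -2.0: the order is reversed at the synapse
```

With a long axonal delay the synapse sees post-before-pre timing even though the somata fire in the Hebbian order, which is the mechanism behind the preserved bidirectional connections described in the abstract.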
Affiliation(s)
- Mojtaba Madadi Asl
- Institute for Advanced Studies in Basic Sciences (IASBS), Department of Physics, Zanjan, 45195-1159, Iran
- Alireza Valizadeh
- Institute for Advanced Studies in Basic Sciences (IASBS), Department of Physics, Zanjan, 45195-1159, Iran; Institute for Research in Fundamental Sciences (IPM), School of Cognitive Sciences, Tehran, 19395-5746, Iran
- Peter A Tass
- Institute of Neuroscience and Medicine - Neuromodulation (INM-7), Research Center Jülich, Jülich, 52425, Germany; Stanford University, Department of Neurosurgery, Stanford, CA, 94305, USA; University of Cologne, Department of Neuromodulation, Cologne, 50937, Germany
15
Kuczala A, Sharpee TO. Eigenvalue spectra of large correlated random matrices. Phys Rev E 2016; 94:050101. [PMID: 27967175 PMCID: PMC5161118 DOI: 10.1103/physreve.94.050101] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2016] [Indexed: 11/07/2022]
Abstract
Using the diagrammatic method, we derive a set of self-consistent equations that describe eigenvalue distributions of large correlated asymmetric random matrices. The matrix elements can have different variances and be correlated with each other. The analytical results are confirmed by numerical simulations. The results have implications for the dynamics of neural and other biological networks where plasticity induces correlations in the connection strengths within the network. We find that the presence of correlations can have a major impact on network stability.
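The impact of such correlations can be checked numerically in the homogeneous special case (all elements sharing one variance), where the eigenvalues fill an ellipse with semi-axes 1 + tau and 1 - tau for transpose-pair correlation tau (the classic elliptic law). The sketch below is an illustration of that special case, not the paper's diagrammatic calculation:

```python
# Numerical illustration of the homogeneous special case: build a random
# matrix with Corr(J_ij, J_ji) = tau and element variance 1/N, and check
# that its eigenvalues fill the predicted ellipse.
import numpy as np

rng = np.random.default_rng(0)
N, tau = 500, 0.5

A = rng.normal(size=(N, N))
B = rng.normal(size=(N, N))
sym = (A + A.T) / np.sqrt(2)            # symmetric part, unit variance
asym = (B - B.T) / np.sqrt(2)           # antisymmetric part, unit variance
# Mixing them gives Corr(J_ij, J_ji) = tau and variance 1/N per element.
J = (np.sqrt((1 + tau) / 2) * sym + np.sqrt((1 - tau) / 2) * asym) / np.sqrt(N)

ev = np.linalg.eigvals(J)
# Normalized ellipse coordinate: <= 1 on the predicted support,
# with semi-axes (1 + tau) along the real axis and (1 - tau) imaginary.
r = (ev.real / (1 + tau)) ** 2 + (ev.imag / (1 - tau)) ** 2
print(round(r.max(), 2))  # close to 1 for large N
```

Positive tau stretches the spectrum along the real axis, which is why correlations induced by plasticity can push eigenvalues past the stability boundary, as the abstract notes.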
Affiliation(s)
- Alexander Kuczala
- Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, California 92037, USA and Department of Physics, University of California, San Diego, California 92161, USA
- Tatyana O Sharpee
- Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, California 92037, USA and Department of Physics, University of California, San Diego, California 92161, USA
16
Luz Y, Shamir M. Oscillations via Spike-Timing Dependent Plasticity in a Feed-Forward Model. PLoS Comput Biol 2016; 12:e1004878. [PMID: 27082118 PMCID: PMC4833372 DOI: 10.1371/journal.pcbi.1004878] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2015] [Accepted: 03/16/2016] [Indexed: 12/18/2022] Open
Abstract
Neuronal oscillatory activity has been reported in relation to a wide range of cognitive processes including the encoding of external stimuli, attention, and learning. Although the specific role of these oscillations has yet to be determined, it is clear that neuronal oscillations are abundant in the central nervous system. This raises the question of the origin of these oscillations: are the mechanisms for generating these oscillations genetically hard-wired or can they be acquired via a learning process? Here, we study the conditions under which oscillatory activity emerges through a process of spike timing dependent plasticity (STDP) in a feed-forward architecture. First, we analyze the effect of oscillations on STDP-driven synaptic dynamics of a single synapse, and study how the parameters that characterize the STDP rule and the oscillations affect the resultant synaptic weight. Next, we analyze STDP-driven synaptic dynamics of a pre-synaptic population of neurons onto a single post-synaptic cell. The pre-synaptic neural population is assumed to be oscillating at the same frequency, albeit with different phases, such that the net activity of the pre-synaptic population is constant in time. Thus, in the homogeneous case in which all synapses are equal, the post-synaptic neuron receives constant input and hence does not oscillate. To investigate the transition to oscillatory activity, we develop a mean-field Fokker-Planck approximation of the synaptic dynamics. We analyze the conditions causing the homogeneous solution to lose its stability. The findings show that oscillatory activity appears through a mechanism of spontaneous symmetry breaking. However, in the general case the homogeneous solution is unstable, and the synaptic dynamics does not converge to a different fixed point, but rather to a limit cycle. We show how the temporal structure of the STDP rule determines the stability of the homogeneous solution and the drift velocity of the limit cycle. 
Oscillatory activity in the brain has been described in relation to many cognitive states and tasks, including the encoding of external stimuli, attention, learning and consolidation of memory. However, without tuning of synaptic weights with the preferred phase of firing the oscillatory signal may not be able to propagate downstream—due to distractive interference. Here we investigate how synaptic plasticity can facilitate the transmission of oscillatory signal downstream along the information processing pathway in the brain. We show that basic synaptic plasticity rules, that have been reported empirically, are sufficient to generate the required tuning that enables the propagation of the oscillatory signal. In addition, our work presents a synaptic learning process that does not converge to a stationary state, but rather remains dynamic. We demonstrate how the functionality of the system, i.e., transmission of oscillatory activity, can be maintained in the face of constant remodeling of synaptic weights.
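The single-synapse part of this analysis can be sketched numerically: the mean drift of a weight is the STDP kernel integrated against the oscillatory pre-post correlation, so its sign follows the phase lag between the pre- and postsynaptic neurons. The kernel shape, amplitudes, and time constant below are illustrative assumptions, not the paper's parameters:

```python
# Hedged sketch: mean weight drift under additive STDP when the pre-post
# spike correlation oscillates as cos(2*pi*f*s - phi).
import numpy as np

A_plus, A_minus, tau = 1.0, 1.0, 20e-3    # balanced kernel, 20 ms
f = 10.0                                   # 10 Hz oscillation

def kernel(s):
    """Additive STDP kernel; s = t_post - t_pre, s > 0 potentiates."""
    return np.where(s > 0, A_plus * np.exp(-s / tau),
                    -A_minus * np.exp(s / tau))

def drift(phi, n=20001):
    """Mean drift for a pre-post phase lag phi (simple Riemann sum)."""
    s = np.linspace(-0.5, 0.5, n)
    ds = s[1] - s[0]
    return float((kernel(s) * np.cos(2 * np.pi * f * s - phi)).sum() * ds)

# The sign of the drift follows the phase lag, so synapses are
# potentiated or depressed according to their phase of firing:
print(drift(0.25 * np.pi) > 0, drift(-0.25 * np.pi) < 0)  # True True
```

This phase-dependent drift is what lets plasticity tune synaptic weights to the preferred firing phase and thereby propagate the oscillatory signal downstream.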
Affiliation(s)
- Yotam Luz
- Department of Physiology and Cell Biology Faculty of Health Sciences, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Maoz Shamir
- Department of Physiology and Cell Biology Faculty of Health Sciences, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Department of Physics Faculty of Natural Sciences, Ben-Gurion University of the Negev, Beer-Sheva, Israel
17
Bi Z, Zhou C. Spike Pattern Structure Influences Synaptic Efficacy Variability under STDP and Synaptic Homeostasis. I: Spike Generating Models on Converging Motifs. Front Comput Neurosci 2016; 10:14. [PMID: 26941634 PMCID: PMC4763167 DOI: 10.3389/fncom.2016.00014] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2015] [Accepted: 02/01/2016] [Indexed: 11/26/2022] Open
Abstract
In neural systems, synaptic plasticity is usually driven by spike trains. Due to the inherent noises of neurons and synapses as well as the randomness of connection details, spike trains typically exhibit variability such as spatial randomness and temporal stochasticity, resulting in variability of synaptic changes under plasticity, which we call efficacy variability. How the variability of spike trains influences the efficacy variability of synapses remains unclear. In this paper, we try to understand this influence under pair-wise additive spike-timing dependent plasticity (STDP) when the mean strength of plastic synapses into a neuron is bounded (synaptic homeostasis). Specifically, we systematically study, analytically and numerically, how four aspects of statistical features, i.e., synchronous firing, burstiness/regularity, heterogeneity of rates and heterogeneity of cross-correlations, as well as their interactions influence the efficacy variability in converging motifs (simple networks in which one neuron receives from many other neurons). Neurons (including the post-synaptic neuron) in a converging motif generate spikes according to statistical models with tunable parameters. In this way, we can explicitly control the statistics of the spike patterns, and investigate their influence onto the efficacy variability, without worrying about the feedback from synaptic changes onto the dynamics of the post-synaptic neuron. We separate efficacy variability into two parts: the drift part (DriftV) induced by the heterogeneity of change rates of different synapses, and the diffusion part (DiffV) induced by weight diffusion caused by stochasticity of spike trains. Our main findings are: (1) synchronous firing and burstiness tend to increase DiffV, (2) heterogeneity of rates induces DriftV when potentiation and depression in STDP are not balanced, and (3) heterogeneity of cross-correlations induces DriftV together with heterogeneity of rates. 
We anticipate that our work will be important for understanding functional processes of neuronal networks (such as memory) and neural development.
Affiliation(s)
- Zedong Bi
- State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing, China; Department of Physics, Hong Kong Baptist University, Kowloon Tong, Hong Kong
- Changsong Zhou
- Department of Physics, Hong Kong Baptist University, Kowloon Tong, Hong Kong; Centre for Nonlinear Studies, Beijing-Hong Kong-Singapore Joint Centre for Nonlinear and Complex Systems, Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Kowloon Tong, Hong Kong; Beijing Computational Science Research Center, Beijing, China; Research Centre, HKBU Institute of Research and Continuing Education, Shenzhen, China
18
Ocker GK, Litwin-Kumar A, Doiron B. Self-Organization of Microcircuits in Networks of Spiking Neurons with Plastic Synapses. PLoS Comput Biol 2015; 11:e1004458. [PMID: 26291697 PMCID: PMC4546203 DOI: 10.1371/journal.pcbi.1004458] [Citation(s) in RCA: 46] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2014] [Accepted: 07/19/2015] [Indexed: 11/18/2022] Open
Abstract
The synaptic connectivity of cortical networks features an overrepresentation of certain wiring motifs compared to simple random-network models. This structure is shaped, in part, by synaptic plasticity that promotes or suppresses connections between neurons depending on their joint spiking activity. Frequently, theoretical studies focus on how feedforward inputs drive plasticity to create this network structure. We study the complementary scenario of self-organized structure in a recurrent network, with spike timing-dependent plasticity driven by spontaneous dynamics. We develop a self-consistent theory for the evolution of network structure by combining fast spiking covariance with a slow evolution of synaptic weights. Through a finite-size expansion of network dynamics we obtain a low-dimensional set of nonlinear differential equations for the evolution of two-synapse connectivity motifs. With this theory in hand, we explore how the form of the plasticity rule drives the evolution of microcircuits in cortical networks. When potentiation and depression are in approximate balance, synaptic dynamics depend on weighted divergent, convergent, and chain motifs. For additive, Hebbian STDP these motif interactions create instabilities in synaptic dynamics that either promote or suppress the initial network structure. Our work provides a consistent theoretical framework for studying how spiking activity in recurrent networks interacts with synaptic plasticity to determine network structure.
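The two-synapse motifs entering such a theory can be read off a weighted adjacency matrix. The sketch below uses conventions assumed here rather than taken from the paper: W[i, j] is the weight of the synapse j -> i, and the raw sums include degenerate pairs (a synapse paired with itself):

```python
# Minimal sketch of raw two-synapse motif strengths: divergent (shared
# source), convergent (shared target), and chain (two-step path).
import numpy as np

def two_synapse_motifs(W):
    """Raw divergent, convergent, and chain motif strengths of W."""
    div = int((W.sum(axis=0) ** 2).sum())   # two synapses sharing a source
    conv = int((W.sum(axis=1) ** 2).sum())  # two synapses sharing a target
    chain = int((W @ W).sum())              # two-step paths j -> k -> i
    return div, conv, chain

# A 3-neuron example: neuron 1 reciprocally coupled to neurons 0 and 2.
W = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
print(two_synapse_motifs(W))  # (6, 6, 6)
```

Tracking how these three quantities evolve under a plasticity rule is the low-dimensional description the abstract refers to.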
Affiliation(s)
- Gabriel Koch Ocker
- Department of Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
- Ashok Litwin-Kumar
- Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
- Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- Brent Doiron
- Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
- Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
19
On Stationary Distributions of Stochastic Neural Networks. J Appl Probab 2014. [DOI: 10.1017/s0021900200011700] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
The paper deals with nonlinear Poisson neuron network models with bounded memory dynamics, which can include both Hebbian learning mechanisms and refractory periods. The state of the network is described by the times elapsed since its neurons fired within the post-synaptic transfer kernel memory span, and the current strengths of synaptic connections, the state spaces of our models being hierarchies of finite-dimensional components. We prove the ergodicity of the stochastic processes describing the behaviour of the networks, establish the existence of continuously differentiable stationary distribution densities (with respect to the Lebesgue measures of corresponding dimensionality) on the components of the state space, and find upper bounds for them. For the density components, we derive a system of differential equations that can be solved in a few simplest cases only. Approaches to approximate computation of the stationary density are discussed. One approach is to reduce the dimensionality of the problem by modifying the network so that each neuron cannot fire if the number of spikes it emitted within the post-synaptic transfer kernel memory span reaches a given threshold. We show that the stationary distribution of this ‘truncated’ network converges to that of the unrestricted network as the threshold increases, and that the convergence is at a superexponential rate. A complementary approach uses discrete Markov chain approximations to the network process.
20
Borovkov K, Decrouez G, Gilson M. On Stationary Distributions of Stochastic Neural Networks. J Appl Probab 2014. [DOI: 10.1239/jap/1409932677] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
21
Strack B, Jacobs KM, Cios KJ. Simulating vertical and horizontal inhibition with short-term dynamics in a multi-column multi-layer model of neocortex. Int J Neural Syst 2014; 24:1440002. [PMID: 24875787 PMCID: PMC9422346 DOI: 10.1142/s0129065714400024] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/04/2024]
Abstract
The paper introduces a multi-layer multi-column model of the cortex that uses four different neuron types and short-term plasticity dynamics. It was designed with details of neuronal connectivity available in the literature and meets these conditions: (1) biologically accurate laminar and columnar flows of activity, (2) normal function of low-threshold spiking and fast spiking neurons, and (3) ability to generate different stages of epileptiform activity. With these characteristics the model allows for modeling lesioned or malformed cortex, i.e., for examining properties of developmentally malformed cortex in which the balance between inhibitory neuron subtypes is disturbed.
Affiliation(s)
- Beata Strack
- Department of Computer Science, Virginia Commonwealth University, Richmond, VA, USA
22
Rudolph-Lilith M, Muller LE. Aspects of randomness in neural graph structures. BIOLOGICAL CYBERNETICS 2014; 108:381-396. [PMID: 24824724 DOI: 10.1007/s00422-014-0606-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/17/2013] [Accepted: 04/21/2014] [Indexed: 06/03/2023]
Abstract
In the past two decades, significant advances have been made in understanding the structural and functional properties of biological networks, via graph-theoretic analysis. In general, most graph-theoretic studies are conducted in the presence of serious uncertainties, such as major undersampling of the experimental data. In the specific case of neural systems, however, a few moderately robust experimental reconstructions have been reported, and these have long served as fundamental prototypes for studying connectivity patterns in the nervous system. In this paper, we provide a comparative analysis of these "historical" graphs, both in their directed (original) and symmetrized (a common preprocessing step) forms, and provide a set of measures that can be consistently applied across graphs (directed or undirected, with or without self-loops). We focus on simple structural characterizations of network connectivity and find that in many measures, the networks studied are captured by simple random graph models. In a few key measures, however, we observe a marked departure from the random graph prediction. Our results suggest that the mechanism of graph formation in the networks studied is not well captured by existing abstract graph models in their first- and second-order connectivity.
Affiliation(s)
- Michelle Rudolph-Lilith
- CNRS, Unité de Neurosciences, Information et Complexité (UNIC), 1 Ave de la Terrasse, 91198 Gif-sur-Yvette, France
23
Interplay between short- and long-term plasticity in cell-assembly formation. PLoS One 2014; 9:e101535. [PMID: 25007209 PMCID: PMC4090127 DOI: 10.1371/journal.pone.0101535] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2013] [Accepted: 06/08/2014] [Indexed: 11/19/2022] Open
Abstract
Various hippocampal and neocortical synapses of mammalian brain show both short-term plasticity and long-term plasticity, which are considered to underlie learning and memory by the brain. According to Hebb’s postulate, synaptic plasticity encodes memory traces of past experiences into cell assemblies in cortical circuits. However, it remains unclear how the various forms of long-term and short-term synaptic plasticity cooperatively create and reorganize such cell assemblies. Here, we investigate the mechanism in which the three forms of synaptic plasticity known in cortical circuits, i.e., spike-timing-dependent plasticity (STDP), short-term depression (STD) and homeostatic plasticity, cooperatively generate, retain and reorganize cell assemblies in a recurrent neuronal network model. We show that multiple cell assemblies generated by external stimuli can survive noisy spontaneous network activity for an adequate range of the strength of STD. Furthermore, our model predicts that a symmetric temporal window of STDP, such as observed in dopaminergic modulations on hippocampal neurons, is crucial for the retention and integration of multiple cell assemblies. These results may have implications for the understanding of cortical memory processes.
24
Savin C, Triesch J. Emergence of task-dependent representations in working memory circuits. Front Comput Neurosci 2014; 8:57. [PMID: 24904395 PMCID: PMC4035833 DOI: 10.3389/fncom.2014.00057] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2014] [Accepted: 05/10/2014] [Indexed: 01/31/2023] Open
Abstract
A wealth of experimental evidence suggests that working memory circuits preferentially represent information that is behaviorally relevant. Still, we are missing a mechanistic account of how these representations come about. Here we provide a simple explanation for a range of experimental findings, in light of prefrontal circuits adapting to task constraints by reward-dependent learning. In particular, we model a neural network shaped by reward-modulated spike-timing dependent plasticity (r-STDP) and homeostatic plasticity (intrinsic excitability and synaptic scaling). We show that the experimentally-observed neural representations naturally emerge in an initially unstructured circuit as it learns to solve several working memory tasks. These results point to a critical, and previously unappreciated, role for reward-dependent learning in shaping prefrontal cortex activity.
Affiliation(s)
- Cristina Savin
- Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
- Jochen Triesch
- Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany; Physics Department, Goethe University, Frankfurt am Main, Germany
25
Chrol-Cannon J, Jin Y. Computational modeling of neural plasticity for self-organization of neural networks. Biosystems 2014; 125:43-54. [PMID: 24769242 DOI: 10.1016/j.biosystems.2014.04.003] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2013] [Revised: 04/03/2014] [Accepted: 04/04/2014] [Indexed: 11/28/2022]
Abstract
Self-organization in biological nervous systems during the lifetime is known to largely occur through a process of plasticity that is dependent upon the spike-timing activity in connected neurons. In the field of computational neuroscience, much effort has been dedicated to building up computational models of neural plasticity to replicate experimental data. Most recently, increasing attention has been paid to understanding the role of neural plasticity in functional and structural neural self-organization, as well as its influence on the learning performance of neural networks for accomplishing machine learning tasks such as classification and regression. Although many ideas and hypotheses have been suggested, the relationship between the structure, dynamics and learning performance of neural networks remains elusive. The purpose of this article is to review the most important computational models for neural plasticity and discuss various ideas about neural plasticity's role. Finally, we suggest a few promising research directions, in particular those along the line that combines findings in computational neuroscience and systems biology, and their synergetic roles in understanding learning, memory and cognition, thereby bridging the gap between computational neuroscience, systems biology and computational intelligence.
Affiliation(s)
- Joseph Chrol-Cannon
- Department of Computing, University of Surrey, Guildford GU2 7XH, United Kingdom
- Yaochu Jin
- Department of Computing, University of Surrey, Guildford GU2 7XH, United Kingdom.
26
Jimenez Rezende D, Gerstner W. Stochastic variational learning in recurrent spiking networks. Front Comput Neurosci 2014; 8:38. [PMID: 24772078 PMCID: PMC3983494 DOI: 10.3389/fncom.2014.00038] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2013] [Accepted: 03/17/2014] [Indexed: 11/15/2022] Open
Abstract
The ability to learn and perform statistical inference with biologically plausible recurrent networks of spiking neurons is an important step toward understanding perception and reasoning. Here we derive and investigate a new learning rule for recurrent spiking networks with hidden neurons, combining principles from variational learning and reinforcement learning. Our network defines a generative model over spike train histories and the derived learning rule has the form of a local Spike Timing Dependent Plasticity rule modulated by global factors (neuromodulators) conveying information about “novelty” on a statistically rigorous ground. Simulations show that our model is able to learn both stationary and non-stationary patterns of spike trains. We also propose one experiment that could potentially be performed with animals in order to test the dynamics of the predicted novelty signal.
Affiliation(s)
- Danilo Jimenez Rezende
- Laboratory of Cognitive Neuroscience, School of Life Sciences, Brain Mind Institute, Ecole Polytechnique Federale de Lausanne, Lausanne, Vaud, Switzerland; Laboratory of Computational Neuroscience, School of Computer and Communication Sciences, Ecole Polytechnique Federale de Lausanne, Lausanne, Vaud, Switzerland
- Wulfram Gerstner
- Laboratory of Cognitive Neuroscience, School of Life Sciences, Brain Mind Institute, Ecole Polytechnique Federale de Lausanne, Lausanne, Vaud, Switzerland; Laboratory of Computational Neuroscience, School of Computer and Communication Sciences, Ecole Polytechnique Federale de Lausanne, Lausanne, Vaud, Switzerland
27
Coexistence of reward and unsupervised learning during the operant conditioning of neural firing rates. PLoS One 2014; 9:e87123. [PMID: 24475240 PMCID: PMC3903641 DOI: 10.1371/journal.pone.0087123] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2013] [Accepted: 12/21/2013] [Indexed: 11/24/2022] Open
Abstract
A fundamental goal of neuroscience is to understand how cognitive processes, such as operant conditioning, are performed by the brain. Typical and well studied examples of operant conditioning, in which the firing rates of individual cortical neurons in monkeys are increased using rewards, provide an opportunity for insight into this. Studies of reward-modulated spike-timing-dependent plasticity (RSTDP), and of other models such as R-max, have reproduced this learning behavior, but they have assumed that no unsupervised learning is present (i.e., no learning occurs without, or independent of, rewards). We show that these models cannot elicit firing rate reinforcement while exhibiting both reward learning and ongoing, stable unsupervised learning. To fix this issue, we propose a new RSTDP model of synaptic plasticity based upon the observed effects that dopamine has on long-term potentiation and depression (LTP and LTD). We show, both analytically and through simulations, that our new model can exhibit unsupervised learning and lead to firing rate reinforcement. This requires that the strengthening of LTP by the reward signal is greater than the strengthening of LTD and that the reinforced neuron exhibits irregular firing. We show the robustness of our findings to spike-timing correlations, to the synaptic weight dependence that is assumed, and to changes in the mean reward. We also consider our model in the differential reinforcement of two nearby neurons. Our model aligns more strongly with experimental studies than previous models and makes testable predictions for future experiments.
28
Babadi B, Abbott LF. Pairwise analysis can account for network structures arising from spike-timing dependent plasticity. PLoS Comput Biol 2013; 9:e1002906. [PMID: 23436986 PMCID: PMC3578766 DOI: 10.1371/journal.pcbi.1002906] [Citation(s) in RCA: 32] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2012] [Accepted: 12/14/2012] [Indexed: 11/18/2022] Open
Abstract
Spike timing-dependent plasticity (STDP) modifies synaptic strengths based on timing information available locally at each synapse. Despite this, it induces global structures within a recurrently connected network. We study such structures both through simulations and by analyzing the effects of STDP on pair-wise interactions of neurons. We show how conventional STDP acts as a loop-eliminating mechanism and organizes neurons into in- and out-hubs. Loop-elimination increases when depression dominates and turns into loop-generation when potentiation dominates. STDP with a shifted temporal window such that coincident spikes cause depression enhances recurrent connections and functions as a strict buffering mechanism that maintains a roughly constant average firing rate. STDP with the opposite temporal shift functions as a loop eliminator at low rates and as a potent loop generator at higher rates. In general, studying pairwise interactions of neurons provides important insights about the structures that STDP can produce in large networks.
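The pairwise treatment above builds on the conventional temporally asymmetric STDP window. A minimal sketch in Python (the exponential form is standard; the parameter values are illustrative assumptions, not taken from the paper, with depression slightly dominating as in the loop-eliminating regime discussed above):

```python
import numpy as np

def stdp_window(dt, A_plus=0.01, A_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms).

    Positive dt (pre before post) gives potentiation; negative dt gives
    depression. With A_minus > A_plus, depression dominates on average.
    """
    if dt >= 0:
        return A_plus * np.exp(-dt / tau_plus)
    return -A_minus * np.exp(dt / tau_minus)
```

Shifting this window so that coincident spikes (dt near zero) fall on the depression side yields the variant discussed above that buffers the firing rate.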
Collapse
Affiliation(s)
- Baktash Babadi
- Center for Theoretical Neuroscience, Department of Neuroscience, Columbia University, New York, New York, United States of America.
Collapse
|
29
|
Kerr RR, Burkitt AN, Thomas DA, Gilson M, Grayden DB. Delay selection by spike-timing-dependent plasticity in recurrent networks of spiking neurons receiving oscillatory inputs. PLoS Comput Biol 2013; 9:e1002897. [PMID: 23408878 PMCID: PMC3567188 DOI: 10.1371/journal.pcbi.1002897] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2012] [Accepted: 12/10/2012] [Indexed: 11/28/2022] Open
Abstract
Learning rules, such as spike-timing-dependent plasticity (STDP), change the structure of networks of neurons based on the firing activity. A network level understanding of these mechanisms can help infer how the brain learns patterns and processes information. Previous studies have shown that STDP selectively potentiates feed-forward connections that have specific axonal delays, and that this underlies behavioral functions such as sound localization in the auditory brainstem of the barn owl. In this study, we investigate how STDP leads to the selective potentiation of recurrent connections with different axonal and dendritic delays during oscillatory activity. We develop analytical models of learning with additive STDP in recurrent networks driven by oscillatory inputs, and support the results using simulations with leaky integrate-and-fire neurons. Our results show selective potentiation of connections with specific axonal delays, which depended on the input frequency. In addition, we demonstrate how this can lead to a network becoming selective in the amplitude of its oscillatory response to this frequency. We extend this model of axonal delay selection within a single recurrent network in two ways. First, we show the selective potentiation of connections with a range of both axonal and dendritic delays. Second, we show axonal delay selection between multiple groups receiving out-of-phase, oscillatory inputs. We discuss the application of these models to the formation and activation of neuronal ensembles or cell assemblies in the cortex, and also to missing fundamental pitch perception in the auditory brainstem. Our brain's ability to perform cognitive processes, such as object identification, problem solving, and decision making, comes from the specific connections between neurons. The neurons carry information as spikes that are transmitted to other neurons via connections with different strengths and propagation delays. 
Experimentally observed learning rules can modify the strengths of connections between neurons based on the timing of their spikes. The learning that occurs in neuronal networks due to these rules is thought to be vital to creating the structures necessary for different cognitive processes as well as for memory. The spiking rate of populations of neurons has been observed to oscillate at particular frequencies in various brain regions, and there is evidence that these oscillations play a role in cognition. Here, we use analytical and numerical methods to investigate the changes to the network structure caused by a specific learning rule during oscillatory neural activity. We find the conditions under which connections with propagation delays that resonate with the oscillations are strengthened relative to the other connections. We demonstrate that networks learn to oscillate more strongly to oscillations at the frequency they were presented with during learning. We discuss the possible application of these results to specific areas of the brain.
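The frequency dependence of delay selection can be illustrated with a small sketch (this is an illustration of the general principle, not the paper's model): for sinusoidally modulated firing rates at frequency f, the mean drift of a connection with axonal delay d is proportional to Re[W(f) exp(-i 2πf d)], where W(f) is the Fourier transform of the STDP window. The window shape and parameters below are assumptions.

```python
import numpy as np

def mean_drift(delay_ms, freq_hz, A_plus=1.0, A_minus=1.0,
               tau_plus=20.0, tau_minus=10.0):
    """Relative drift of a weight with the given axonal delay under
    sinusoidal rate modulation at freq_hz (exponential STDP window)."""
    w = 2.0 * np.pi * freq_hz / 1000.0  # angular frequency in rad/ms
    # Fourier transform of A+ e^{-t/tau+} (t>0) minus A- e^{t/tau-} (t<0)
    Wf = (A_plus * tau_plus / (1.0 + 1j * w * tau_plus)
          - A_minus * tau_minus / (1.0 - 1j * w * tau_minus))
    return float(np.real(Wf * np.exp(-1j * w * delay_ms)))
```

Scanning `delay_ms` at a fixed `freq_hz` then shows which delays drift upward fastest, i.e. which connections are selectively potentiated at that input frequency.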
Collapse
Affiliation(s)
- Robert R. Kerr
- NeuroEngineering Laboratory, Department of Electrical and Electronic Engineering, The University of Melbourne, Melbourne, Victoria, Australia
- Centre for Neural Engineering, The University of Melbourne, Melbourne, Victoria, Australia
| | - Anthony N. Burkitt
- NeuroEngineering Laboratory, Department of Electrical and Electronic Engineering, The University of Melbourne, Melbourne, Victoria, Australia
- Centre for Neural Engineering, The University of Melbourne, Melbourne, Victoria, Australia
- Bionics Institute, Melbourne, Victoria, Australia
| | - Doreen A. Thomas
- Department of Mechanical Engineering, The University of Melbourne, Melbourne, Victoria, Australia
| | - Matthieu Gilson
- NeuroEngineering Laboratory, Department of Electrical and Electronic Engineering, The University of Melbourne, Melbourne, Victoria, Australia
- Centre for Neural Engineering, The University of Melbourne, Melbourne, Victoria, Australia
- Laboratory for Neural Circuit Theory, RIKEN Brain Science Institute, Saitama, Japan
| | - David B. Grayden
- NeuroEngineering Laboratory, Department of Electrical and Electronic Engineering, The University of Melbourne, Melbourne, Victoria, Australia
- Centre for Neural Engineering, The University of Melbourne, Melbourne, Victoria, Australia
- Bionics Institute, Melbourne, Victoria, Australia
| |
Collapse
|
30
|
Bol K, Marsat G, Mejias JF, Maler L, Longtin A. Modeling cancelation of periodic inputs with burst-STDP and feedback. Neural Netw 2013; 47:120-33. [PMID: 23332545 DOI: 10.1016/j.neunet.2012.12.011] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2012] [Revised: 10/17/2012] [Accepted: 12/17/2012] [Indexed: 11/15/2022]
Abstract
Prediction and cancelation of redundant information is an important feature that many neural systems must display in order to efficiently code external signals. We develop an analytic framework for such cancelation in sensory neurons produced by a cerebellar-like structure in wave-type electric fish. Our biologically plausible mechanism is motivated by experimental evidence of cancelation of periodic input arising from the proximity of conspecifics as well as tail motion. This mechanism involves elements present in a wide range of systems: (1) stimulus-driven feedback to the neurons acting as detectors, (2) a large variety of temporal delays in the pathways transmitting such feedback, responsible for producing frequency channels, and (3) burst-induced long-term plasticity. The bursting arises from back-propagating action potentials. Bursting events drive the input frequency-dependent learning rule, which in turn affects the feedback input and thus the burst rate. We show how the mean firing rate and the rate of production of 2- and 4-spike bursts (the main learning events) can be estimated analytically for a leaky integrate-and-fire model driven by (slow) sinusoidal, back-propagating and feedback inputs as well as rectified filtered noise. The effect of bursts on the average synaptic strength is also derived. Our results shed light on why bursts rather than single spikes can drive learning in such networks "online", i.e. in the absence of a correlative discharge. Phase locked spiking in frequency specific channels together with a frequency-dependent STDP window size regulate burst probability and duration self-consistently to implement cancelation.
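A leaky integrate-and-fire model of the kind analyzed above can be sketched as follows (simple Euler integration; the parameters and the slow sinusoidal drive are illustrative, and the back-propagating, feedback, and noise inputs of the paper are omitted):

```python
import numpy as np

def lif_spike_times(I, dt=0.1, tau_m=10.0, v_th=1.0, v_reset=0.0):
    """Simulate dv/dt = (-v + I(t)) / tau_m with threshold-and-reset.

    I is the input sampled every dt (ms); returns spike times in ms.
    """
    v, spikes = 0.0, []
    for k, i_k in enumerate(I):
        v += dt * (-v + i_k) / tau_m
        if v >= v_th:
            spikes.append(k * dt)
            v = v_reset
    return spikes

# Slow sinusoidal drive, loosely analogous to the periodic input above
t = np.arange(0, 1000.0, 0.1)                              # 1 s at 0.1 ms steps
drive = 1.2 + 0.5 * np.sin(2 * np.pi * 5.0 * t / 1000.0)   # 5 Hz modulation
spikes = lif_spike_times(drive)
```

Phase locking to the slow drive appears as spikes clustering around the peaks of the sinusoid; in the paper this phase-locked spiking, together with bursts, drives the learning rule.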
Collapse
Affiliation(s)
- K Bol
- Department of Physics, University of Ottawa, K1N 6N5 Ottawa, Canada
Collapse
|
31
|
Kerr RR, Burkitt AN, Thomas DA, Grayden DB. STDP encodes oscillation frequencies in the connections of recurrent networks of spiking neurons. BMC Neurosci 2012. [PMCID: PMC3403623 DOI: 10.1186/1471-2202-13-s1-p130] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
|
32
|
Davies S, Galluppi F, Rast AD, Furber SB. A forecast-based STDP rule suitable for neuromorphic implementation. Neural Netw 2012; 32:3-14. [PMID: 22386500 DOI: 10.1016/j.neunet.2012.02.018] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2011] [Revised: 01/15/2012] [Accepted: 02/07/2012] [Indexed: 11/17/2022]
Abstract
Artificial neural networks increasingly involve spiking dynamics to permit greater computational efficiency. This becomes especially attractive for on-chip implementation using dedicated neuromorphic hardware. However, both spiking neural networks and neuromorphic hardware have historically found difficulties in implementing efficient, effective learning rules. The best-known spiking neural network learning paradigm is Spike Timing Dependent Plasticity (STDP) which adjusts the strength of a connection in response to the time difference between the pre- and post-synaptic spikes. Approaches that relate learning features to the membrane potential of the post-synaptic neuron have emerged as possible alternatives to the more common STDP rule, with various implementations and approximations. Here we use a new type of neuromorphic hardware, SpiNNaker, which represents the flexible "neuromimetic" architecture, to demonstrate a new approach to this problem. Based on the standard STDP algorithm with modifications and approximations, a new rule, called STDP TTS (Time-To-Spike) relates the membrane potential with the Long Term Potentiation (LTP) part of the basic STDP rule. Meanwhile, we use the standard STDP rule for the Long Term Depression (LTD) part of the algorithm. We show that on the basis of the membrane potential it is possible to make a statistical prediction of the time needed by the neuron to reach the threshold, and therefore the LTP part of the STDP algorithm can be triggered when the neuron receives a spike. In our system these approximations allow efficient memory access, reducing the overall computational time and the memory bandwidth required. The improvements here presented are significant for real-time applications such as the ones for which the SpiNNaker system has been designed. We present simulation results that show the efficacy of this algorithm using one or more input patterns repeated over the whole time of the simulation. 
On-chip results show that the STDP TTS algorithm allows the neural network to adapt and detect the incoming pattern with improvements both in the reliability of, and the time required for, consistent output. Through the approximations we suggest in this paper, we introduce a learning rule that is easy to implement both in event-driven simulators and in dedicated hardware, reducing computational complexity relative to the standard STDP rule. Such a rule offers a promising solution, complementary to standard STDP evaluation algorithms, for real-time learning using spiking neural networks in time-critical applications.
Collapse
Affiliation(s)
- S Davies
- School of Computer Science, The University of Manchester, Oxford Road, Manchester, M13 9PL, United Kingdom.
Collapse
|
33
|
Luz Y, Shamir M. Balancing feed-forward excitation and inhibition via Hebbian inhibitory synaptic plasticity. PLoS Comput Biol 2012; 8:e1002334. [PMID: 22291583 PMCID: PMC3266879 DOI: 10.1371/journal.pcbi.1002334] [Citation(s) in RCA: 52] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2011] [Accepted: 11/16/2011] [Indexed: 12/02/2022] Open
Abstract
It has been suggested that excitatory and inhibitory inputs to cortical cells are balanced, and that this balance is important for the highly irregular firing observed in the cortex. There are two hypotheses as to the origin of this balance. One assumes that it results from a stable solution of the recurrent neuronal dynamics. This model can account for a balance of steady state excitation and inhibition without fine tuning of parameters, but not for transient inputs. The second hypothesis suggests that the feed forward excitatory and inhibitory inputs to a postsynaptic cell are already balanced. This latter hypothesis thus does account for the balance of transient inputs. However, it remains unclear what mechanism underlies the fine tuning required for balancing feed forward excitatory and inhibitory inputs. Here we investigated whether inhibitory synaptic plasticity is responsible for the balance of transient feed forward excitation and inhibition. We address this issue in the framework of a model characterizing the stochastic dynamics of temporally anti-symmetric Hebbian spike timing dependent plasticity of feed forward excitatory and inhibitory synaptic inputs to a single post-synaptic cell. Our analysis shows that inhibitory Hebbian plasticity generates 'negative feedback' that balances excitation and inhibition, which contrasts with the 'positive feedback' of excitatory Hebbian synaptic plasticity. As a result, this balance may increase the sensitivity of the learning dynamics to the correlation structure of the excitatory inputs.
Collapse
Affiliation(s)
- Yotam Luz
- Department of Physiology and Neurobiology, Ben-Gurion University of the Negev, Beer-Sheva, Israel
| | - Maoz Shamir
- Department of Physiology and Neurobiology, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Department of Physics, Ben-Gurion University of the Negev, Beer-Sheva, Israel
| |
Collapse
|
34
|
Gilson M, Masquelier T, Hugues E. STDP allows fast rate-modulated coding with Poisson-like spike trains. PLoS Comput Biol 2011; 7:e1002231. [PMID: 22046113 PMCID: PMC3203056 DOI: 10.1371/journal.pcbi.1002231] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2011] [Accepted: 09/01/2011] [Indexed: 11/18/2022] Open
Abstract
Spike timing-dependent plasticity (STDP) has been shown to enable single neurons to detect repeatedly presented spatiotemporal spike patterns. This holds even when such patterns are embedded in equally dense random spiking activity, that is, in the absence of external reference times such as a stimulus onset. Here we demonstrate, both analytically and numerically, that STDP can also learn repeating rate-modulated patterns, which have received more experimental evidence, for example, through post-stimulus time histograms (PSTHs). Each input spike train is generated from a rate function using a stochastic sampling mechanism, chosen to be an inhomogeneous Poisson process here. Learning is feasible provided significant covarying rate modulations occur within the typical timescale of STDP (∼10–20 ms) for sufficiently many inputs (∼100 among 1000 in our simulations), a condition that is met by many experimental PSTHs. Repeated pattern presentations induce spike-time correlations that are captured by STDP. Despite imprecise input spike times and even variable spike counts, a single trained neuron robustly detects the pattern just a few milliseconds after its presentation. Therefore, temporal imprecision and Poisson-like firing variability are not an obstacle to fast temporal coding. STDP provides an appealing mechanism to learn such rate patterns, which, beyond sensory processing, may also be involved in many cognitive tasks. In vivo neural responses to stimuli are known to have a lot of variability across trials. If the same number of spikes is emitted from trial to trial, the neuron is said to be reliable. If the timing of such spikes is roughly preserved across trials, the neuron is said to be precise. 
Here we demonstrate both analytically and numerically that the well-established Hebbian learning rule of spike-timing-dependent plasticity (STDP) can learn response patterns despite relatively low reliability (Poisson-like variability) and low temporal precision (10–20 ms). These features are in line with many experimental observations, in which a poststimulus time histogram (PSTH) is evaluated over multiple trials. In our model, however, information is extracted from the relative spike times between afferents without the need of an absolute reference time, such as a stimulus onset. Relevantly, recent experiments show that relative timing is often more informative than the absolute timing. Furthermore, the scope of application for our study is not restricted to sensory systems. Taken together, our results suggest a fine temporal resolution for the neural code, and that STDP is an appropriate candidate for encoding and decoding such activity.
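The stochastic sampling mechanism described above, an inhomogeneous Poisson process, can be sketched with the standard thinning method (the rate function, its ~20 ms modulation timescale, and the seed are illustrative assumptions):

```python
import numpy as np

def inhomogeneous_poisson(rate_fn, t_max, rate_max, seed=0):
    """Sample spike times on [0, t_max) from a rate function bounded by rate_max.

    Thinning: draw candidate events from a homogeneous process at the ceiling
    rate, then keep each with probability rate_fn(t) / rate_max.
    """
    rng = np.random.default_rng(seed)
    t, spikes = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t >= t_max:
            return np.array(spikes)
        if rng.random() < rate_fn(t) / rate_max:
            spikes.append(t)

# A rate modulation on the ~10-20 ms timescale that STDP can pick up
rate = lambda t: 20.0 + 15.0 * np.sin(2 * np.pi * t / 0.02)  # t in seconds, Hz
train = inhomogeneous_poisson(rate, t_max=1.0, rate_max=35.0)
```

Repeating this sampling for many afferents sharing the same rate functions yields trial-to-trial variable spike trains whose covarying rate modulations are what STDP captures in the study above.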
Collapse
Affiliation(s)
- Matthieu Gilson
- Department of Electrical and Electronic Engineering, The University of Melbourne, Melbourne, Australia
- Laboratory for Neural Circuit Theory, RIKEN Brain Science Institute, Wako-shi, Saitama, Japan
- * E-mail: (MG); (TM)
| | - Timothée Masquelier
- Unit for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain
- * E-mail: (MG); (TM)
| | - Etienne Hugues
- Unit for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain
| |
Collapse
|
35
|
Pernice V, Staude B, Cardanobile S, Rotter S. How structure determines correlations in neuronal networks. PLoS Comput Biol 2011; 7:e1002059. [PMID: 21625580 PMCID: PMC3098224 DOI: 10.1371/journal.pcbi.1002059] [Citation(s) in RCA: 150] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2010] [Accepted: 04/01/2011] [Indexed: 11/19/2022] Open
Abstract
Networks are becoming a ubiquitous metaphor for the understanding of complex biological systems, spanning the range between molecular signalling pathways, neural networks in the brain, and interacting species in a food web. In many models, we face an intricate interplay between the topology of the network and the dynamics of the system, which is generally very hard to disentangle. A dynamical feature that has been subject of intense research in various fields are correlations between the noisy activity of nodes in a network. We consider a class of systems, where discrete signals are sent along the links of the network. Such systems are of particular relevance in neuroscience, because they provide models for networks of neurons that use action potentials for communication. We study correlations in dynamic networks with arbitrary topology, assuming linear pulse coupling. With our novel approach, we are able to understand in detail how specific structural motifs affect pairwise correlations. Based on a power series decomposition of the covariance matrix, we describe the conditions under which very indirect interactions will have a pronounced effect on correlations and population dynamics. In random networks, we find that indirect interactions may lead to a broad distribution of activation levels with low average but highly variable correlations. This phenomenon is even more pronounced in networks with distance dependent connectivity. In contrast, networks with highly connected hubs or patchy connections often exhibit strong average correlations. Our results are particularly relevant in view of new experimental techniques that enable the parallel recording of spiking activity from a large number of neurons, an appropriate interpretation of which is hampered by the currently limited understanding of structure-dynamics relations in complex networks.
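The power-series decomposition of the covariance matrix mentioned above can be illustrated for a linear model (a sketch under simplifying assumptions: unit-variance independent noise sources and a connectivity matrix with spectral radius below one; the example weights are illustrative):

```python
import numpy as np

def covariance_by_paths(W, n_terms=50):
    """Approximate C = B @ B.T with B = (I - W)^{-1} = sum_k W^k.

    Each power W^k collects interaction paths of length k, so truncating
    the series shows how much of the covariance is carried by direct
    connections versus increasingly indirect routes.
    """
    B = np.zeros_like(W)
    Wk = np.eye(W.shape[0])
    for _ in range(n_terms):
        B += Wk
        Wk = Wk @ W
    return B @ B.T

# Two units coupled in both directions (illustrative weights)
W = np.array([[0.0, 0.3],
              [0.2, 0.0]])
C = covariance_by_paths(W)
```

Comparing truncations at small `n_terms` against the full series makes the paper's point concrete: even weak indirect paths can contribute appreciably to pairwise covariances.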
Collapse
|
36
|
Helias M, Deger M, Rotter S, Diesmann M. Finite post synaptic potentials cause a fast neuronal response. Front Neurosci 2011; 5:19. [PMID: 21427776 PMCID: PMC3047297 DOI: 10.3389/fnins.2011.00019] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2010] [Accepted: 02/07/2011] [Indexed: 01/23/2023] Open
Abstract
A generic property of the communication between neurons is the exchange of pulses at discrete time points, the action potentials. However, the prevalent theory of spiking neuronal networks of integrate-and-fire model neurons relies on two assumptions: the superposition of many afferent synaptic impulses is approximated by Gaussian white noise, equivalent to a vanishing magnitude of the synaptic impulses, and the transfer of time varying signals by neurons is assessable by linearization. Going beyond both approximations, we find that in the presence of synaptic impulses the response to transient inputs differs qualitatively from previous predictions. It is instantaneous rather than exhibiting low-pass characteristics, depends non-linearly on the amplitude of the impulse, is asymmetric for excitation and inhibition and is promoted by a characteristic level of synaptic background noise. These findings resolve contradictions between the earlier theory and experimental observations. Here we review the recent theoretical progress that enabled these insights. We explain why the membrane potential near threshold is sensitive to properties of the afferent noise and show how this shapes the neural response. A further extension of the theory to time evolution in discrete steps quantifies simulation artifacts and yields improved methods to cross check results.
Collapse
Affiliation(s)
| | - Moritz Deger
- Bernstein Center Freiburg, Albert-Ludwig University, Freiburg, Germany
| | - Stefan Rotter
- Bernstein Center Freiburg, Albert-Ludwig University, Freiburg, Germany
- Computational Neuroscience, Faculty of Biology, Albert-Ludwig University, Freiburg, Germany
| | - Markus Diesmann
- RIKEN Brain Science Institute, Wako City, Japan
- Bernstein Center Freiburg, Albert-Ludwig University, Freiburg, Germany
- Institute for Neuroscience and Medicine (INM-6), Computational and Systems Neuroscience, Research Center Jülich, Germany
- Brain and Neural Systems Team, Computational Science Research Program, RIKEN, Wako City, Japan
| |
Collapse
|
37
|
Kunkel S, Diesmann M, Morrison A. Limits to the development of feed-forward structures in large recurrent neuronal networks. Front Comput Neurosci 2011; 4:160. [PMID: 21415913 PMCID: PMC3042733 DOI: 10.3389/fncom.2010.00160] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2010] [Accepted: 12/25/2010] [Indexed: 11/25/2022] Open
Abstract
Spike-timing dependent plasticity (STDP) has traditionally been of great interest to theoreticians, as it seems to provide an answer to the question of how the brain can develop functional structure in response to repeated stimuli. However, despite this high level of interest, convincing demonstrations of this capacity in large, initially random networks have not been forthcoming. Such demonstrations as there are typically rely on constraining the problem artificially. Techniques include employing additional pruning mechanisms or STDP rules that enhance symmetry breaking, simulating networks with low connectivity that magnify competition between synapses, or combinations of the above. In this paper, we first review modeling choices that carry particularly high risks of producing non-generalizable results in the context of STDP in recurrent networks. We then develop a theory for the development of feed-forward structure in random networks and conclude that an unstable fixed point in the dynamics prevents the stable propagation of structure in recurrent networks with weight-dependent STDP. We demonstrate that the key predictions of the theory hold in large-scale simulations. The theory provides insight into the reasons why such development does not take place in unconstrained systems and enables us to identify biologically motivated candidate adaptations to the balanced random network model that might enable it.
Collapse
Affiliation(s)
- Susanne Kunkel
- Functional Neural Circuits Group, Faculty of Biology, Albert-Ludwig University of Freiburg, Germany
Collapse
|
38
|
Gilson M, Burkitt AN, Grayden DB, Thomas DA, van Hemmen JL. Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks V: self-organization schemes and weight dependence. Biol Cybern 2010; 103:365-386. [PMID: 20882297 DOI: 10.1007/s00422-010-0405-7] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/25/2009] [Accepted: 08/23/2010] [Indexed: 05/29/2023]
Abstract
Spike-timing-dependent plasticity (STDP) determines the evolution of the synaptic weights according to their pre- and post-synaptic activity, which in turn changes the neuronal activity on a (much) slower time scale. This paper examines the effect of STDP in a recurrently connected network stimulated by external pools of input spike trains, where both input and recurrent synapses are plastic. Our previously developed theoretical framework is extended to incorporate weight-dependent STDP and dendritic delays. The weight dynamics is determined by an interplay between the neuronal activation mechanisms, the input spike-time correlations, and the learning parameters. For the case of two external input pools, the resulting learning scheme can exhibit a symmetry breaking of the input connections such that two neuronal groups emerge, each specialized to one input pool only. In addition, we show how the recurrent connections within each neuronal group can be strengthened by STDP at the expense of those between the two groups. This neuronal self-organization can be seen as a basic dynamical ingredient for the emergence of neuronal maps induced by activity-dependent plasticity.
Collapse
Affiliation(s)
- Matthieu Gilson
- Department of Electrical and Electronic Engineering, University of Melbourne, Melbourne, VIC 3010, Australia.
Collapse
|
39
|
Kolodziejski C, Tetzlaff C, Wörgötter F. Closed-Form Treatment of the Interactions between Neuronal Activity and Timing-Dependent Plasticity in Networks of Linear Neurons. Front Comput Neurosci 2010; 4:134. [PMID: 21152348 PMCID: PMC2998049 DOI: 10.3389/fncom.2010.00134] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2010] [Accepted: 08/23/2010] [Indexed: 11/30/2022] Open
Abstract
Network activity and network connectivity mutually influence each other. Especially for fast processes like spike-timing-dependent plasticity (STDP), which depends on the interaction of just two signals, the question arises how these interactions continuously alter the behavior and structure of the network. Addressing this question requires a time-continuous treatment of plasticity, which is currently not possible even in simple recurrent network structures. Here, therefore, we develop for a linear differential Hebbian learning system a method by which we can analytically investigate the dynamics and stability of the connections in recurrent networks. We use noisy periodic external input signals, which through the recurrent connections lead to complex actual ongoing inputs, and observe that large stable ranges emerge in these networks without boundaries or weight normalization. Somewhat counter-intuitively, we find that about 40% of these cases are obtained with a long-term potentiation-dominated STDP curve. Noise can reduce stability in some cases, but generally this does not occur; instead, stable domains are often enlarged. This study is a first step toward a better understanding of the ongoing interactions between activity and plasticity in recurrent networks using STDP. The results suggest that stability of (sub-)networks should generically be present also in larger structures.
Collapse
|
40
|
Gilson M, Burkitt A, van Hemmen JL. STDP in Recurrent Neuronal Networks. Front Comput Neurosci 2010; 4. [PMID: 20890448 PMCID: PMC2947928 DOI: 10.3389/fncom.2010.00023] [Citation(s) in RCA: 46] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2010] [Accepted: 06/28/2010] [Indexed: 11/13/2022] Open
Abstract
Recent results about spike-timing-dependent plasticity (STDP) in recurrently connected neurons are reviewed, with a focus on the relationship between the weight dynamics and the emergence of network structure. In particular, the evolution of synaptic weights in the two cases of incoming connections for a single neuron and recurrent connections are compared and contrasted. A theoretical framework is used that is based upon Poisson neurons with a temporally inhomogeneous firing rate and the asymptotic distribution of weights generated by the learning dynamics. Different network configurations examined in recent studies are discussed and an overview of the current understanding of STDP in recurrently connected neuronal networks is presented.
Collapse
|
41
|
Helias M, Deger M, Rotter S, Diesmann M. Instantaneous non-linear processing by pulse-coupled threshold units. PLoS Comput Biol 2010; 6. [PMID: 20856583 PMCID: PMC2936519 DOI: 10.1371/journal.pcbi.1000929] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2010] [Accepted: 08/10/2010] [Indexed: 11/18/2022] Open
Abstract
Contemporary theory of spiking neuronal networks is based on the linear response of the integrate-and-fire neuron model derived in the diffusion limit. We find that for non-zero synaptic weights, the response to transient inputs differs qualitatively from this approximation: it is instantaneous rather than low-pass, depends non-linearly on the input amplitude, is asymmetric for excitation and inhibition, and is promoted by a characteristic level of synaptic background noise. We show that at threshold the probability density of the membrane potential drops to zero within the range of one synaptic weight, and explain how this shapes the response. The novel mechanism also appears at the network level and is a generic property of pulse-coupled networks of threshold units. Our work demonstrates a fast firing response of nerve cells that has remained unconsidered in network analysis because it is inaccessible to the otherwise successful linear response theory. For the sake of analytic tractability, that theory assumes infinitesimally weak synaptic coupling; realistic synaptic impulses, however, cause a measurable deflection of the membrane potential. Here we quantify the effect of this pulse coupling on the firing rate and the membrane-potential distribution, and demonstrate how the postsynaptic potentials give rise to a fast, non-linear rate transient present for excitatory, but not for inhibitory, inputs. This transient is particularly pronounced in the presence of a characteristic level of synaptic background noise. We show that feed-forward inhibition enhances the fast response at the network level, enabling a mode of information processing based on short-lived activity transients. Moreover, the non-linear neural response appears on a time scale that critically interacts with spike-timing-dependent synaptic plasticity rules. Our results are derived for biologically realistic synaptic amplitudes, but also extend earlier work based on Gaussian white noise.
The novel theoretical framework is generically applicable to any threshold unit governed by a stochastic differential equation driven by finite jumps. Our results are therefore relevant for a wide range of biological, physical, and technical systems.
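The mechanism described above hinges on the membrane potential being driven by finite synaptic jumps (shot noise) rather than by the Gaussian white noise of the diffusion limit. A minimal, illustrative sketch of such a pulse-coupled leaky integrate-and-fire unit follows; the function name `simulate_lif` and all parameter values are assumptions chosen for illustration, not taken from the cited paper.

```python
import random

def simulate_lif(n_steps, dt=0.1, tau=10.0, v_th=1.0, v_reset=0.0,
                 p_ex=0.5, w=0.1, seed=0):
    """Leaky integrate-and-fire neuron driven by finite synaptic jumps
    (Bernoulli approximation of Poisson input), not a diffusion process.
    Returns the number of output spikes; all values are illustrative."""
    rng = random.Random(seed)
    v = 0.0
    spikes = 0
    for _ in range(n_steps):
        v += -v / tau * dt          # leaky decay toward rest
        if rng.random() < p_ex:     # one presynaptic event this step?
            v += w                  # finite jump of one synaptic weight
        if v >= v_th:               # threshold crossing by a discrete jump
            spikes += 1
            v = v_reset
    return spikes
```

Because the potential crosses threshold via a discrete jump of size `w` rather than by continuous diffusion, a single well-timed excitatory pulse can trigger a spike immediately, which is the intuition behind the instantaneous response.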
|
42
|
Pfister JP, Tass PA. STDP in Oscillatory Recurrent Networks: Theoretical Conditions for Desynchronization and Applications to Deep Brain Stimulation. Front Comput Neurosci 2010; 4. [PMID: 20802859 PMCID: PMC2928668 DOI: 10.3389/fncom.2010.00022] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2010] [Accepted: 06/18/2010] [Indexed: 11/13/2022] Open
Abstract
Highly synchronized neural networks can be the source of various pathologies such as Parkinson's disease or essential tremor. Therefore, it is crucial to better understand the dynamics of such networks and the conditions under which a high level of synchronization can be observed. One of the key factors that influences the level of synchronization is the type of learning rule that governs synaptic plasticity. Most of the existing work on synchronization in recurrent networks with synaptic plasticity is based on numerical simulations, and there is a clear lack of a theoretical framework for studying the effects of various synaptic plasticity rules. In this paper we derive analytically the conditions for spike-timing dependent plasticity (STDP) to lead a network into a synchronized or a desynchronized state. We also show that under appropriate conditions bistability occurs in recurrent networks governed by STDP. Indeed, a pathological regime with strong connections and therefore strongly synchronized activity is found to coexist with a physiological regime with weaker connections and lower levels of synchronization. Furthermore, we show that with appropriate stimulation, the network dynamics can be pushed to the low-synchronization stable state. This type of therapeutic stimulation is very different from the existing high-frequency stimulation used in deep brain stimulation since, once the stimulation is stopped, the network stays in the low-synchronization regime.
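The pairwise STDP rules analyzed in this line of work are commonly written as an exponential window over the pre-post spike-time difference. A minimal sketch of such a window is given below; the function name `stdp_dw` and the parameter values are illustrative assumptions, not the specific rule or constants used in the cited paper.

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for one spike pair, dt = t_post - t_pre (ms).
    Exponential-window STDP; all parameter values are illustrative."""
    if dt > 0:    # pre before post: potentiation
        return a_plus * math.exp(-dt / tau_plus)
    if dt < 0:    # post before pre: depression
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0
```

In a recurrent network, whether such a rule strengthens or weakens synchrony depends on how the network's phase relationships sample the positive and negative lobes of this window, which is what the analytical conditions in the paper characterize.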
Affiliation(s)
- Jean-Pascal Pfister
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge Cambridge, UK
|
43
|
Iannella NL, Launey T, Tanaka S. Spike timing-dependent plasticity as the origin of the formation of clustered synaptic efficacy engrams. Front Comput Neurosci 2010; 4. [PMID: 20725522 PMCID: PMC2914531 DOI: 10.3389/fncom.2010.00021] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2010] [Accepted: 06/14/2010] [Indexed: 12/03/2022] Open
Abstract
Synapse location, dendritic active properties, and synaptic plasticity are all known to play some role in shaping the different input streams impinging onto a neuron. It remains unclear, however, how the magnitude and spatial distribution of synaptic efficacies emerge from this interplay. Here, we investigate this interplay using a biophysically detailed neuron model of a reconstructed layer 2/3 pyramidal cell and spike timing-dependent plasticity (STDP). Specifically, we focus on the issue of how the efficacies of synapses contributed by different input streams are spatially represented in dendrites after STDP learning. We construct a simple feed-forward network in which a detailed model neuron receives synaptic inputs independently from multiple, equally sized groups of afferent fibers with correlated activity, mimicking the spike activity from different neuronal populations encoding, for example, different sensory modalities. Interestingly, following STDP learning, we observe that for all afferent groups, STDP leads to synaptic efficacies arranged into spatially segregated clusters that effectively partition the dendritic tree. These segregated clusters possess a characteristic global organization in space, where they form a tessellation in which each group dominates mutually exclusive regions of the dendrite. Put simply, the dendritic imprint left by different input streams after STDP learning effectively forms what we term a “dendritic efficacy mosaic.” Furthermore, we show how variations of the inputs and the STDP rule affect such an organization. Our model suggests that STDP may be an important mechanism for creating a clustered plasticity engram, which shapes how different input streams are spatially represented in dendrites.
|
44
|
Gilson M, Burkitt AN, Grayden DB, Thomas DA, van Hemmen JL. Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks III: Partially connected neurons driven by spontaneous activity. BIOLOGICAL CYBERNETICS 2009; 101:411-426. [PMID: 19937071 DOI: 10.1007/s00422-009-0343-4] [Citation(s) in RCA: 31] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/23/2009] [Accepted: 10/19/2009] [Indexed: 05/28/2023]
Abstract
In contrast to a feed-forward architecture, the weight dynamics induced by spike-timing-dependent plasticity (STDP) in a recurrent neuronal network are not yet well understood. In this article, we extend a previous study of the impact of additive STDP in a recurrent network that is driven by spontaneous activity (no external stimulating inputs) from a fully connected network to one that is only partially connected. The asymptotic state of the network is analyzed, and it is found that the equilibrium and stability conditions for the firing rates are similar for both full and partial connectivity: STDP causes the firing rates to converge toward the same value and remain quasi-homogeneous. However, when STDP induces strong weight competition, the connectivity affects the weight dynamics in that the distribution of the weights disperses more quickly for lower connection density than for higher density. The asymptotic weight distribution depends strongly upon that at the beginning of the learning epoch; consequently, homogeneous connectivity alone is not sufficient to obtain homogeneous neuronal activity. In the absence of external inputs, STDP can nevertheless generate structure in the network through autocorrelation effects, for example, by introducing asymmetry in network topology.
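The "additive" qualifier above matters: because the update does not scale with the current weight, hard bounds are required, and strong competition pushes weights toward those bounds, producing the dispersion the abstract describes. A minimal sketch of one additive update with clipping follows; the function names and bound values are illustrative assumptions, not the paper's exact formulation.

```python
def clip(w, w_min=0.0, w_max=1.0):
    """Hard bounds required by an additive rule."""
    return max(w_min, min(w_max, w))

def additive_stdp_step(w, dw):
    """Additive STDP: the update dw is independent of the current
    weight w, so repeated correlated updates drive w to a bound.
    Bound values [0, 1] are illustrative."""
    return clip(w + dw)
```

Under a weight-dependent (multiplicative) rule, by contrast, updates shrink near the bounds and weights tend to settle at intermediate values rather than dispersing.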
Affiliation(s)
- Matthieu Gilson
- Department of Electrical and Electronic Engineering, The University of Melbourne, Melbourne, VIC 3010, Australia.
|