1. Aceituno PV, Farinha MT, Loidl R, Grewe BF. Learning cortical hierarchies with temporal Hebbian updates. Front Comput Neurosci 2023; 17:1136010. PMID: 37293353; PMCID: PMC10244748; DOI: 10.3389/fncom.2023.1136010.
Abstract
A key driver of mammalian intelligence is the ability to represent incoming sensory information across multiple abstraction levels. For example, in the visual ventral stream, incoming signals are first represented as low-level edge filters and then transformed into high-level object representations. Similar hierarchical structures routinely emerge in artificial neural networks (ANNs) trained for object recognition tasks, suggesting that similar structures may underlie biological neural networks. However, the classical ANN training algorithm, backpropagation, is considered biologically implausible, and thus alternative biologically plausible training methods have been developed, such as Equilibrium Propagation, Deep Feedback Control, Supervised Predictive Coding, and Dendritic Error Backpropagation. Several of those models propose that local errors are calculated for each neuron by comparing apical and somatic activities. However, from a neuroscience perspective, it is not clear how a neuron could compare compartmental signals. Here, we propose a solution to this problem: we let the apical feedback signal change the postsynaptic firing rate and combine this with a differential Hebbian update, a rate-based version of classical spike-timing-dependent plasticity (STDP). We prove that weight updates of this form minimize two alternative loss functions, the inference latency and the amount of top-down feedback required, which we show to be equivalent to the error-based losses used in machine learning. Moreover, we show that differential Hebbian updates work similarly well in other feedback-based deep learning frameworks such as Predictive Coding or Equilibrium Propagation. Finally, our work removes a key requirement of biologically plausible models for deep learning and proposes a learning mechanism that explains how temporal Hebbian learning rules can implement supervised hierarchical learning.
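The rate-based differential Hebbian update at the heart of this abstract (presynaptic rate times the temporal derivative of the postsynaptic rate) can be sketched in a few lines. This is an illustrative toy, not the authors' code; the function name, learning rate, and rate traces are invented for the example:

```python
import numpy as np

def differential_hebbian_update(w, r_pre, r_post, dt, eta=0.01):
    """Rate-based differential Hebbian step: dw ~ eta * r_pre * d(r_post)/dt,
    integrated over the trial with a finite-difference derivative."""
    dr_post = np.diff(r_post) / dt
    return w + eta * np.sum(r_pre[:-1] * dr_post) * dt

t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]
r_pre = np.ones_like(t)  # tonically active presynaptic neuron

# A rising postsynaptic rate while the presynapse is active potentiates
# the weight; a falling rate depresses it.
w_up = differential_hebbian_update(0.5, r_pre, t, dt)          # post rate ramps up
w_down = differential_hebbian_update(0.5, r_pre, 1.0 - t, dt)  # post rate ramps down
```

Because the update depends on the sign of the postsynaptic rate change rather than an explicit error signal, apical feedback that nudges the firing rate up or down is enough to steer the weight.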
Affiliation(s)
- Pau Vilimelis Aceituno
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- ETH AI Center, ETH Zurich, Zurich, Switzerland
- Reinhard Loidl
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
| | - Benjamin F. Grewe
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- ETH AI Center, ETH Zurich, Zurich, Switzerland
2. Zhou W, Wen S, Liu Y, Liu L, Liu X, Chen L. Forgetting memristor based STDP learning circuit for neural networks. Neural Netw 2023; 158:293-304. PMID: 36493532; DOI: 10.1016/j.neunet.2022.11.023.
Abstract
The circuit implementation of STDP based on memristors is of great significance for neural network applications. However, pure circuit implementations of STDP with forgetting memristors remain rare. This paper proposes a new STDP learning rule implementation circuit based on the forgetting memristor. This kind of forgetting memristive synapse gives the neural network a time-division multiplexing capability, but the instability of short-term memory affects the network's learning ability. This paper analyzes and discusses the influence of synapses with long-term and short-term memory on the STDP learning characteristics of neural networks, which lays a foundation for constructing time-division-multiplexed neural networks with long- and short-term memory synapses. Through this circuit, we find that the volatile memristor responds differently to stimulus signals depending on its initial state, and that the resulting LTP phenomenon better matches the forgetting effect observed in biology. The circuit has multiple adjustable parameters and can fit STDP learning rules under different conditions. A neural network application demonstrates the usability of the circuit.
Affiliation(s)
- Wenhao Zhou
- Electronic Information and Engineering, Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, 400715, China.
- Shiping Wen
- Centre for Artificial Intelligence, Faculty of Engineering and Information Technology, University of Technology Sydney, Australia.
- Yi Liu
- Electronic Information and Engineering, Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, 400715, China
- Lu Liu
- Electronic Information and Engineering, Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, 400715, China
- Xin Liu
- Computer Vision and Pattern Recognition Laboratory, School of Engineering Science, Lappeenranta-Lahti University of Technology LUT, Finland.
- Ling Chen
- Electronic Information and Engineering, Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, 400715, China; Computer Vision and Pattern Recognition Laboratory, School of Engineering Science, Lappeenranta-Lahti University of Technology LUT, Finland.
3. Differential Hebbian learning with time-continuous signals for active noise reduction. PLoS One 2022; 17:e0266679. PMID: 35617161; PMCID: PMC9135254; DOI: 10.1371/journal.pone.0266679.
Abstract
Spike-timing-dependent plasticity, related to differential Hebb rules, has become a leading paradigm in neuronal learning, because weights can grow or shrink depending on the timing of pre- and post-synaptic signals. Here we use this paradigm to reduce unwanted (acoustic) noise. Our system relies on heterosynaptic differential Hebbian learning, and we show that it can efficiently eliminate noise, attenuating it by up to 140 dB in multi-microphone setups under various conditions. The system learns quickly, most often within a few seconds, and is robust to different geometrical microphone configurations. Hence, this theoretical study demonstrates that differential Hebbian learning, derived from the neurosciences, can be successfully transferred into a technical domain.
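The core mechanism can be illustrated with a minimal single-reference adaptive noise canceller in which the filter weight is updated by the reference input times the temporal derivative of the residual output. Everything below (the two-microphone setup, gains, and signal shapes) is invented for illustration and is far simpler than the paper's multi-microphone system:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
noise = rng.standard_normal(n)                     # noise source
signal = np.sin(2 * np.pi * np.arange(n) / 200.0)  # wanted signal
mic1 = signal + 0.8 * noise                        # primary microphone
mic2 = noise                                       # reference microphone (noise only)

w, eta = 0.0, 5e-4
out = np.empty(n)
prev = 0.0
for i in range(n):
    out[i] = mic1[i] - w * mic2[i]  # residual after subtracting scaled reference
    # Differential Hebbian update: reference input times the temporal
    # derivative (finite difference) of the residual output.
    w += eta * mic2[i] * (out[i] - prev)
    prev = out[i]
```

On average the update is proportional to the remaining noise correlation, so `w` drifts toward the true coupling (0.8 here) and the residual converges to the clean signal.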
4. Nagarajan K, Li J, Ensan SS, Kannan S, Ghosh S. Fault Injection Attacks in Spiking Neural Networks and Countermeasures. Front Nanotechnol 2022. DOI: 10.3389/fnano.2021.801999.
Abstract
Spiking Neural Networks (SNNs) are fast emerging as an alternative to Deep Neural Networks (DNNs). They are computationally more powerful and provide higher energy efficiency than DNNs. While exciting at first glance, SNNs contain security-sensitive assets (e.g., neuron threshold voltage) and vulnerabilities (e.g., sensitivity of classification accuracy to neuron threshold voltage change) that adversaries can exploit. We explore global fault injection attacks using an external power supply and laser-induced local power glitches on SNNs designed using common analog neurons, corrupting critical training parameters such as spike amplitude and the neuron's membrane threshold potential. We also analyze the impact of power-based attacks on an SNN digit classification task and observe a worst-case classification accuracy degradation of 85.65%. We explore the impact of various SNN design parameters (e.g., learning rate, spike trace decay constant, and number of neurons) and identify design choices for robust SNN implementation. We recover classification accuracy degradation by 30-47% for a subset of power-based attacks by modifying SNN training parameters such as learning rate, trace decay constant, and neurons per layer. We also propose hardware-level defenses, e.g., a robust current driver design that is immune to power-oriented attacks, and improved circuit sizing of neuron components to reduce or recover the adversarial accuracy degradation at the cost of negligible area and a 25% power overhead. We also propose a dummy neuron-based detection of voltage fault injection at roughly 1% power and area overhead each.
5. Espinosa-Ramos JI, Capecci E, Kasabov N. A Computational Model of Neuroreceptor-Dependent Plasticity (NRDP) Based on Spiking Neural Networks. IEEE Trans Cogn Dev Syst 2019. DOI: 10.1109/tcds.2017.2776863.
6. Zappacosta S, Mannella F, Mirolli M, Baldassarre G. General differential Hebbian learning: Capturing temporal relations between events in neural networks and the brain. PLoS Comput Biol 2018; 14:e1006227. PMID: 30153263; PMCID: PMC6130884; DOI: 10.1371/journal.pcbi.1006227.
Abstract
Learning in biologically relevant neural-network models usually relies on Hebb learning rules. The typical implementations of these rules change the synaptic strength on the basis of the co-occurrence of the neural events taking place at a certain time in the pre- and post-synaptic neurons. Differential Hebbian learning (DHL) rules, instead, update the synapse by taking into account the temporal relation, captured with derivatives, between the neural events happening in the recent past. The few DHL rules proposed so far can update the synaptic weights in only a few ways: this is a limitation for the study of dynamical neurons and neural-network models. Moreover, empirical evidence on brain spike-timing-dependent plasticity (STDP) shows that different neurons express a surprisingly rich repertoire of learning processes going far beyond existing DHL rules. This opens up a second problem: how to capture such processes with DHL rules. Here we propose a general DHL (G-DHL) rule that generates the existing rules and many others. The rule is highly expressive, as it combines in different ways the pre- and post-synaptic neuron signals and their derivatives. We show the rule's flexibility by applying it to various signals of artificial neurons and by fitting several different STDP experimental data sets. To this end, we propose techniques to pre-process the neural signals and capture the temporal relations between the neural events of interest. We also propose a procedure to automatically identify the rule components and parameters that best fit different STDP data sets, and show how the identified components might be used to heuristically guide the search for the biophysical mechanisms underlying STDP. Overall, the results show that the G-DHL rule is a useful means to study time-sensitive learning processes in both artificial neural networks and the brain.
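As a toy illustration of the idea (not the paper's actual G-DHL formulation), a rule that linearly combines the four products of the pre/post signals and their derivatives already reproduces an antisymmetric, STDP-like window when only the "pre times post-derivative" component is active. The coefficient names and Gaussian signal shapes below are invented:

```python
import numpy as np

def dhl_update(s_pre, s_post, dt, k_ss, k_sd, k_ds, k_dd):
    """Weight change from a generalized differential Hebbian rule:
    a linear combination of the products of the pre/post signals and
    their temporal derivatives, integrated over the trial."""
    d_pre = np.gradient(s_pre, dt)
    d_post = np.gradient(s_post, dt)
    integrand = (k_ss * s_pre * s_post + k_sd * s_pre * d_post
                 + k_ds * d_pre * s_post + k_dd * d_pre * d_post)
    return np.sum(integrand) * dt

t = np.linspace(0.0, 1.0, 2001)
dt = t[1] - t[0]
bump = lambda c: np.exp(-((t - c) ** 2) / (2 * 0.05 ** 2))  # Gaussian activity bump

# With only the "pre times d(post)/dt" component, the window is
# antisymmetric: potentiation when pre leads, depression when it lags.
dw_causal = dhl_update(bump(0.45), bump(0.55), dt, 0, 1, 0, 0)
dw_anticausal = dhl_update(bump(0.55), bump(0.45), dt, 0, 1, 0, 0)
```

Turning on the other coefficients reshapes the window, which is the sense in which such a rule can be fitted to different experimental STDP curves.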
Affiliation(s)
- Stefano Zappacosta
- Laboratory of Computational Embodied Neuroscience, Institute of Cognitive Sciences and Technologies, National Research Council of Italy (LOCEN-ISTC-CNR), Roma, Italy
- Francesco Mannella
- Laboratory of Computational Embodied Neuroscience, Institute of Cognitive Sciences and Technologies, National Research Council of Italy (LOCEN-ISTC-CNR), Roma, Italy
- Marco Mirolli
- Laboratory of Computational Embodied Neuroscience, Institute of Cognitive Sciences and Technologies, National Research Council of Italy (LOCEN-ISTC-CNR), Roma, Italy
- Gianluca Baldassarre
- Laboratory of Computational Embodied Neuroscience, Institute of Cognitive Sciences and Technologies, National Research Council of Italy (LOCEN-ISTC-CNR), Roma, Italy
7. Transient response characteristic of memristor circuits and biological-like current spikes. Neural Comput Appl 2016. DOI: 10.1007/s00521-016-2248-1.
8. Saudargienė A, Graham BP. Inhibitory control of site-specific synaptic plasticity in a model CA1 pyramidal neuron. Biosystems 2015; 130:37-50. PMID: 25769669; DOI: 10.1016/j.biosystems.2015.03.001.
Abstract
A computational model of a biochemical network underlying synaptic plasticity is combined with simulated ongoing electrical activity in a model hippocampal pyramidal neuron to study the impact of synapse location and inhibition on synaptic plasticity. The simulated pyramidal neuron is activated by a realistic stimulation protocol of causal and anticausal pairings of presynaptic and postsynaptic action potentials, in the presence and absence of spatially targeted inhibition provided by basket, bistratified and oriens-lacunosum moleculare (OLM) interneurons. The resulting spike-timing-dependent plasticity (STDP) curves depend strongly on the number of pairing repetitions, the synapse location, and the timing and strength of inhibition.
Affiliation(s)
- Aušra Saudargienė
- Department of Informatics, Vytautas Magnus University, Kaunas LT-44404, Lithuania.
- Bruce P Graham
- Computer Science and Mathematics, School of Natural Sciences, University of Stirling, Stirling FK9 4LA, UK
9. Graham BP, Saudargiene A, Cobb S. Spine Head Calcium as a Measure of Summed Postsynaptic Activity for Driving Synaptic Plasticity. Neural Comput 2014; 26:2194-222. DOI: 10.1162/neco_a_00640.
Abstract
We use a computational model of a hippocampal CA1 pyramidal cell to demonstrate that spine head calcium provides an instantaneous readout at each synapse of the postsynaptic weighted sum of all presynaptic activity impinging on the cell. The form of the readout is equivalent to the functions of weighted, summed inputs used in neural network learning rules. Within a dendritic layer, peak spine head calcium levels are either a linear or sigmoidal function of the number of coactive synapses, with nonlinearity depending on the ability of voltage spread in the dendrites to reach calcium spike threshold. This is strongly controlled by the potassium A-type current, with calcium spikes and the consequent sigmoidal increase in peak spine head calcium present only when the A-channel density is low. Other membrane characteristics influence the gain of the relationship between peak calcium and the number of active synapses. In particular, increasing spine neck resistance increases the gain due to increased voltage responses to synaptic input in spine heads. Colocation of stimulated synapses on a single dendritic branch also increases the gain of the response. Input pathways cooperate: CA3 inputs to the proximal apical dendrites can strongly amplify peak calcium levels due to weak EC input to the distal dendrites, but not so strongly vice versa. CA3 inputs to the basal dendrites can boost calcium levels in the proximal apical dendrites, but the relative electrical compactness of the basal dendrites results in the reverse effect being less significant. These results give pointers as to how to better describe the contributions of pre- and postsynaptic activity in the learning “rules” that apply in these cells. The calcium signal is closer in form to the activity measures used in traditional neural network learning rules than to the spike times used in spike-timing-dependent plasticity.
Affiliation(s)
- Bruce P. Graham
- Computing Science and Mathematics, School of Natural Sciences, University of Stirling, Stirling, FK9 4LA, U.K
- Ausra Saudargiene
- Department of Informatics, Vytautas Magnus University, Kaunas, LT-44404, Lithuania
- Stuart Cobb
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, G12 8QB, U.K
10. Krieg D, Triesch J. A unifying theory of synaptic long-term plasticity based on a sparse distribution of synaptic strength. Front Synaptic Neurosci 2014; 6:3. PMID: 24624080; PMCID: PMC3941589; DOI: 10.3389/fnsyn.2014.00003.
Abstract
Long-term synaptic plasticity is fundamental to learning and network function. It has been studied under various induction protocols and depends on firing rates, membrane voltage, and the precise timing of action potentials. These protocols show different facets of a common underlying mechanism, but they are mostly modeled as distinct phenomena. Here, we show that all of these different dependencies can be explained from a single computational principle. The objective is a sparse distribution of excitatory synaptic strength, which may help to reduce the metabolic costs associated with synaptic transmission. Based on this objective we derive a stochastic gradient ascent learning rule, which is of differential-Hebbian type. It is formulated in biophysical quantities and can be related to current mechanistic theories of synaptic plasticity. The learning rule accounts for experimental findings from all major induction protocols and explains a classic phenomenon of metaplasticity. Furthermore, our model predicts the existence of metaplasticity for spike-timing-dependent plasticity. Thus, we provide a theory of long-term synaptic plasticity that unifies different induction protocols and provides a connection between functional and mechanistic levels of description.
Affiliation(s)
- Daniel Krieg
- Frankfurt Institute for Advanced Studies, Goethe University Frankfurt, Germany
- Jochen Triesch
- Frankfurt Institute for Advanced Studies, Goethe University Frankfurt, Germany
11. The effects of NMDA subunit composition on calcium influx and spike timing-dependent plasticity in striatal medium spiny neurons. PLoS Comput Biol 2012; 8:e1002493. PMID: 22536151; PMCID: PMC3334887; DOI: 10.1371/journal.pcbi.1002493.
Abstract
Calcium through NMDA receptors (NMDARs) is necessary for the long-term potentiation (LTP) of synaptic strength; however, NMDARs differ in several properties that can influence the amount of calcium influx into the spine. These properties, such as sensitivity to magnesium block and conductance decay kinetics, change the receptor's response to spike timing dependent plasticity (STDP) protocols, and thereby shape synaptic integration and information processing. This study investigates the role of GluN2 subunit differences on spine calcium concentration during several STDP protocols in a model of a striatal medium spiny projection neuron (MSPN). The multi-compartment, multi-channel model exhibits firing frequency, spike width, and latency to first spike similar to current clamp data from mouse dorsal striatum MSPN. We find that NMDAR-mediated calcium is dependent on GluN2 subunit type, action potential timing, duration of somatic depolarization, and number of action potentials. Furthermore, the model demonstrates that in MSPNs, GluN2A and GluN2B control which STDP intervals allow for substantial calcium elevation in spines. The model predicts that blocking GluN2B subunits would modulate the range of intervals that cause long term potentiation. We confirmed this prediction experimentally, demonstrating that blocking GluN2B in the striatum, narrows the range of STDP intervals that cause long term potentiation. This ability of the GluN2 subunit to modulate the shape of the STDP curve could underlie the role that GluN2 subunits play in learning and development. The striatum of the basal ganglia plays a key role in fluent motor control; pathology in this structure causes the motor symptoms of Parkinson's Disease and Huntington's Chorea. A putative cellular mechanism underlying learning of motor control is synaptic plasticity, which is an activity dependent change in synaptic strength. 
A known mediator of synaptic potentiation is calcium influx through the NMDA-type glutamate receptor. The NMDA receptor is sensitive to the timing of neuronal activity, allowing calcium influx only when glutamate release and a post-synaptic depolarization coincide temporally. The NMDA receptor is comprised of specific subunits that modify its sensitivity to neuronal activity, and these subunits are altered in animal models of Parkinson's disease. Here we use a multi-compartmental model of a striatal neuron to investigate the effect of different NMDA subunits on calcium influx through the NMDA receptor. Simulations show that the subunit composition changes the temporal intervals that allow coincidence detection and strong calcium influx. Our experiments manipulating the dominant subunit in brain slices show that the subunit effect on calcium influx predicted by our computational model is mirrored by a change in the amount of potentiation that occurs in our experimental preparation.
12. How feedback inhibition shapes spike-timing-dependent plasticity and its implications for recent Schizophrenia models. Neural Netw 2011; 24:560-7. DOI: 10.1016/j.neunet.2011.03.004.
13. Zamarreño-Ramos C, Camuñas-Mesa LA, Pérez-Carrasco JA, Masquelier T, Serrano-Gotarredona T, Linares-Barranco B. On spike-timing-dependent-plasticity, memristive devices, and building a self-learning visual cortex. Front Neurosci 2011; 5:26. PMID: 21442012; PMCID: PMC3062969; DOI: 10.3389/fnins.2011.00026.
Abstract
In this paper we present a very exciting overlap between emergent nanotechnology and neuroscience, which has been discovered by neuromorphic engineers. Specifically, we link one type of memristor nanotechnology device to the biological synaptic update rule known as spike-timing-dependent plasticity (STDP) found in real biological synapses. Understanding this link allows neuromorphic engineers to develop circuit architectures that use this type of memristor to artificially emulate parts of the visual cortex. We focus on the type of memristors referred to as voltage- or flux-driven memristors and base our discussion on a behavioral macro-model for such devices. The implementations result in fully asynchronous architectures with neurons sending their action potentials not only forward but also backward. One critical aspect is to use neurons that generate spikes of specific shapes. We show how, by changing the shapes of the neuron action potential spikes, we can tune and manipulate the STDP learning rules for both excitatory and inhibitory synapses. We show how neurons and memristors can be interconnected to achieve large-scale spiking learning systems that follow a type of multiplicative STDP learning rule. We briefly extend the architectures to use three-terminal transistors with similar memristive behavior. We illustrate how a V1 visual cortex layer can be assembled and how it is capable of learning to extract orientations from visual data coming from a real artificial CMOS spiking retina observing real-life scenes. Finally, we discuss limitations of currently available memristors. The results presented are based on behavioral simulations and do not take into account non-idealities of devices and interconnects. The aim of this paper is to present, in a tutorial manner, an initial framework for the possible development of fully asynchronous STDP learning neuromorphic architectures exploiting two- or three-terminal memristive devices.
All files used for the simulations are made available through the journal web site.
Affiliation(s)
- Carlos Zamarreño-Ramos
- Mixed Signal Design, Instituto de Microelectrónica de Sevilla (IMSE–CNM–CSIC)Sevilla, Spain
- Luis A. Camuñas-Mesa
- Mixed Signal Design, Instituto de Microelectrónica de Sevilla (IMSE–CNM–CSIC)Sevilla, Spain
- Jose A. Pérez-Carrasco
- Mixed Signal Design, Instituto de Microelectrónica de Sevilla (IMSE–CNM–CSIC)Sevilla, Spain
14. Roberts PD, Leen TK. Anti-Hebbian spike-timing-dependent plasticity and adaptive sensory processing. Front Comput Neurosci 2010; 4:156. PMID: 21228915; PMCID: PMC3018773; DOI: 10.3389/fncom.2010.00156.
Abstract
Adaptive sensory processing influences the central nervous system's interpretation of incoming sensory information. One of the functions of this adaptive sensory processing is to allow the nervous system to ignore predictable sensory information so that it may focus on important novel information needed to improve performance of specific tasks. The mechanism of spike-timing-dependent plasticity (STDP) has proven to be intriguing in this context because of its dual role in long-term memory and ongoing adaptation to maintain optimal tuning of neural responses. Some of the clearest links between STDP and adaptive sensory processing have come from in vitro, in vivo, and modeling studies of the electrosensory systems of weakly electric fish. Plasticity in these systems is anti-Hebbian, so that presynaptic inputs that repeatedly precede, and possibly could contribute to, a postsynaptic neuron's firing are weakened. The learning dynamics of anti-Hebbian STDP learning rules are stable if the timing relations obey strict constraints. The stability of these learning rules leads to clear predictions of how functional consequences can arise from the detailed structure of the plasticity. Here we review the connection between theoretical predictions and functional consequences of anti-Hebbian STDP, focusing on adaptive processing in the electrosensory system of weakly electric fish. After introducing electrosensory adaptive processing and the dynamics of anti-Hebbian STDP learning rules, we address issues of predictive sensory cancelation and novelty detection, descending control of plasticity, synaptic scaling, and optimal sensory tuning. We conclude with examples in other systems where these principles may apply.
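The anti-Hebbian window described here (causal, pre-before-post pairings are weakened) can be written as a simple exponential kernel, the mirror image of the classical Hebbian one. The amplitudes and time constants below are illustrative placeholders, not electrosensory measurements:

```python
import math

def anti_hebbian_stdp(delta_t, a_minus=0.01, a_plus=0.005, tau=0.02):
    """Anti-Hebbian STDP kernel, delta_t = t_post - t_pre (seconds).
    Presynaptic spikes that precede the postsynaptic spike (delta_t > 0)
    are depressed; the reverse ordering is (more weakly) potentiated."""
    if delta_t > 0:
        return -a_minus * math.exp(-delta_t / tau)  # pre leads -> depression
    return a_plus * math.exp(delta_t / tau)         # pre lags -> potentiation

dw_causal = anti_hebbian_stdp(0.005)      # pre 5 ms before post
dw_anticausal = anti_hebbian_stdp(-0.005) # post 5 ms before pre
```

Repeatedly applying such a kernel to inputs that reliably predict the postsynaptic response drives their weights down, which is exactly the predictive-cancelation behavior the review discusses.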
Affiliation(s)
- Patrick D Roberts
- Biomedical Engineering, Oregon Health and Science University Portland, OR, USA
15. Kolodziejski C, Tetzlaff C, Wörgötter F. Closed-Form Treatment of the Interactions between Neuronal Activity and Timing-Dependent Plasticity in Networks of Linear Neurons. Front Comput Neurosci 2010; 4:134. PMID: 21152348; PMCID: PMC2998049; DOI: 10.3389/fncom.2010.00134.
Abstract
Network activity and network connectivity mutually influence each other. Especially for fast processes like spike-timing-dependent plasticity (STDP), which depends on the interaction of few (two) signals, the question arises how these interactions continuously alter the behavior and structure of the network. To address this question a time-continuous treatment of plasticity is required. However, this is currently not possible, even in simple recurrent network structures. Thus, here we develop, for a linear differential Hebbian learning system, a method by which we can analytically investigate the dynamics and stability of the connections in recurrent networks. We use noisy periodic external input signals, which through the recurrent connections lead to complex actual ongoing inputs, and observe that large stable ranges emerge in these networks without boundaries or weight normalization. Somewhat counter-intuitively, we find that about 40% of these cases are obtained with a long-term potentiation-dominated STDP curve. Noise can reduce stability in some cases, but generally this does not occur; instead, stable domains are often enlarged. This study is a first step toward a better understanding of the ongoing interactions between activity and plasticity in recurrent networks using STDP. The results suggest that stability of (sub-)networks should generically be present also in larger structures.
16. Kulvicius T, Kolodziejski C, Tamosiunaite M, Porr B, Wörgötter F. Behavioral analysis of differential Hebbian learning in closed-loop systems. Biol Cybern 2010; 103:255-271. PMID: 20556620; DOI: 10.1007/s00422-010-0396-4.
Abstract
Understanding closed loop behavioral systems is a non-trivial problem, especially when they change during learning. Descriptions of closed loop systems in terms of information theory date back to the 1950s, however, there have been only a few attempts which take into account learning, mostly measuring information of inputs. In this study we analyze a specific type of closed loop system by looking at the input as well as the output space. For this, we investigate simulated agents that perform differential Hebbian learning (STDP). In the first part we show that analytical solutions can be found for the temporal development of such systems for relatively simple cases. In the second part of this study we try to answer the following question: How can we predict which system from a given class would be the best for a particular scenario? This question is addressed using energy, input/output ratio and entropy measures and investigating their development during learning. This way we can show that within well-specified scenarios there are indeed agents which are optimal with respect to their structure and adaptive properties.
Affiliation(s)
- Tomas Kulvicius
- Bernstein Center for Computational Neuroscience, Department for Computational Neuroscience, III Physikalisches Institut - Biophysik, Georg-August-Universität Göttingen, Friedrich-Hund Platz 1, 37077, Göttingen, Germany.
|
17
|
Mayr CG, Partzsch J. Rate and pulse based plasticity governed by local synaptic state variables. Front Synaptic Neurosci 2010; 2:33. [PMID: 21423519 PMCID: PMC3059700 DOI: 10.3389/fnsyn.2010.00033] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2010] [Accepted: 07/08/2010] [Indexed: 11/17/2022] Open
Abstract
Classically, action-potential-based learning paradigms such as the Bienenstock–Cooper–Munro (BCM) rule for pulse rates or spike-timing-dependent plasticity for pulse pairings have been experimentally demonstrated to evoke long-lasting synaptic weight changes (i.e., plasticity). However, several recent experiments have shown that plasticity also depends on the local dynamics at the synapse, such as membrane voltage, calcium time course and level, or dendritic spikes. In this paper, we introduce a formulation of the BCM rule which is based on the instantaneous postsynaptic membrane potential as well as the transmission profile of the presynaptic spike. While this rule incorporates only simple local voltage and current dynamics and is thus neither directly rate nor timing based, it can replicate a range of experiments, such as various rate and spike-pairing protocols, combinations of the two, as well as voltage-dependent plasticity. A detailed comparison of current plasticity models with respect to this range of experiments also demonstrates the efficacy of the new plasticity rule. All experiments can be replicated with a limited set of parameters, avoiding the overfitting problem of more involved plasticity rules.
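As a point of reference, the classical rate-based BCM rule that this membrane-potential formulation generalizes can be sketched as follows. This is a minimal illustration with hypothetical parameter values; in the full rule the threshold theta itself slides with the average squared postsynaptic rate:

```python
def bcm_step(w, x, theta, eta=0.01):
    """One BCM update: dw = eta * x * y * (y - theta), with linear rate y = w * x."""
    y = w * x
    return w + eta * x * y * (y - theta)

# Postsynaptic rates above the threshold theta potentiate, rates below depress:
w_up = bcm_step(w=1.0, x=1.0, theta=0.5)    # y = 1.0 > theta: weight grows
w_down = bcm_step(w=1.0, x=0.4, theta=0.5)  # y = 0.4 < theta: weight shrinks
```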
Affiliation(s)
- Christian G Mayr
- Endowed Chair of Highly Parallel VLSI Systems and Neural Microelectronics, Institute of Circuits and Systems, Faculty of Electrical Engineering and Information Science, University of Technology Dresden Dresden, Sachsen, Germany
|
18
|
Clopath C, Gerstner W. Voltage and Spike Timing Interact in STDP - A Unified Model. Front Synaptic Neurosci 2010; 2:25. [PMID: 21423511 PMCID: PMC3059665 DOI: 10.3389/fnsyn.2010.00025] [Citation(s) in RCA: 54] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2010] [Accepted: 06/07/2010] [Indexed: 11/13/2022] Open
Abstract
A phenomenological model of synaptic plasticity is able to account for a large body of experimental data on spike-timing-dependent plasticity (STDP). The basic ingredient of the model is the correlation of presynaptic spike arrival with postsynaptic voltage. The local membrane voltage is used twice: a first term accounts for the instantaneous voltage and the second one for a low-pass filtered voltage trace. Spike-timing effects emerge as a special case. We hypothesize that the voltage dependence can explain differential effects of STDP in dendrites, since the amplitude and time course of backpropagating action potentials or dendritic spikes influence the plasticity results in the model. The dendritic effects are simulated by variable choices of voltage time course at the site of the synapse, i.e., without an explicit model of the spatial structure of the neuron.
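The key ingredient, using the local voltage twice, can be sketched as below. This is an illustrative simplification with hypothetical thresholds and amplitudes, not the published rule or its parameters: depression is gated by the low-pass filtered trace at presynaptic spike arrival, while potentiation requires the instantaneous voltage to exceed a higher threshold while the filtered trace is elevated.

```python
def plasticity(u, pre_spike, ubar, th_minus=-65.0, th_plus=-45.0,
               A_ltd=1e-3, A_ltp=1e-3):
    """Weight change from instantaneous voltage u and low-pass trace ubar (mV)."""
    dw = 0.0
    if pre_spike and ubar > th_minus:        # LTD term, gated by the filtered voltage
        dw -= A_ltd * (ubar - th_minus)
    if u > th_plus and ubar > th_minus:      # LTP term, gated by both voltage signals
        dw += A_ltp * (u - th_plus) * (ubar - th_minus)
    return dw

dw_ltd = plasticity(u=-70.0, pre_spike=True, ubar=-60.0)   # pre on a quiet cell: depression
dw_ltp = plasticity(u=-30.0, pre_spike=False, ubar=-50.0)  # strong depolarization: potentiation
```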
Affiliation(s)
- Claudia Clopath
- Laboratory of Computational Neuroscience, Brain-Mind Institute, Ecole Polytechnique Fédérale de Lausanne Lausanne, Switzerland
|
19
|
Froemke RC, Letzkus JJ, Kampa BM, Hang GB, Stuart GJ. Dendritic synapse location and neocortical spike-timing-dependent plasticity. Front Synaptic Neurosci 2010; 2:29. [PMID: 21423515 PMCID: PMC3059711 DOI: 10.3389/fnsyn.2010.00029] [Citation(s) in RCA: 46] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2010] [Accepted: 06/27/2010] [Indexed: 11/30/2022] Open
Abstract
While it has been appreciated for decades that synapse location in the dendritic tree has a powerful influence on signal processing in neurons, the role of dendritic synapse location on the induction of long-term synaptic plasticity has only recently been explored. Here, we review recent work revealing how learning rules for spike-timing-dependent plasticity (STDP) in cortical neurons vary with the spatial location of synaptic input. A common principle appears to be that proximal synapses show conventional STDP, whereas distal inputs undergo plasticity according to novel learning rules. One crucial factor determining location-dependent STDP is the backpropagating action potential, which tends to decrease in amplitude and increase in width as it propagates into the dendritic tree of cortical neurons. We discuss additional location-dependent mechanisms as well as the functional implications of heterogeneous learning rules at different dendritic locations for the organization of synaptic inputs.
Affiliation(s)
- Robert C Froemke
- Departments of Otolaryngology and Physiology/Neuroscience, Molecular Neurobiology Program, The Helen and Martin Kimmel Center for Biology and Medicine, Skirball Institute of Biomolecular Medicine, New York University School of Medicine New York, NY, USA
|
20
|
Clopath C, Büsing L, Vasilaki E, Gerstner W. Connectivity reflects coding: a model of voltage-based STDP with homeostasis. Nat Neurosci 2010; 13:344-52. [PMID: 20098420 DOI: 10.1038/nn.2479] [Citation(s) in RCA: 347] [Impact Index Per Article: 24.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2009] [Accepted: 12/01/2009] [Indexed: 11/10/2022]
Abstract
Electrophysiological connectivity patterns in cortex often have a few strong connections, which are sometimes bidirectional, among many weak connections. To explain these connectivity patterns, we created a model of spike-timing-dependent plasticity (STDP) in which synaptic changes depend on presynaptic spike arrival and the postsynaptic membrane potential, filtered with two different time constants. Our model describes several nonlinear effects that are observed in STDP experiments, as well as the voltage dependence of plasticity. We found that, in a simulated recurrent network of spiking neurons, our plasticity rule led not only to the development of localized receptive fields but also to connectivity patterns that reflect the neural code. For temporal coding procedures with spatio-temporal input correlations, strong connections were predominantly unidirectional, whereas they were bidirectional under rate-coded input with spatial correlations only. Thus, variable connectivity patterns in the brain could reflect different coding principles across brain areas; moreover, our simulations suggested that plasticity is fast.
Affiliation(s)
- Claudia Clopath
- Laboratory of Computational Neuroscience, Brain-Mind Institute and School of Computer and Communication Sciences, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.
|
21
|
Manoonpong P, Geng T, Kulvicius T, Porr B, Wörgötter F. Adaptive, fast walking in a biped robot under neuronal control and learning. PLoS Comput Biol 2008; 3:e134. [PMID: 17630828 PMCID: PMC1914373 DOI: 10.1371/journal.pcbi.0030134] [Citation(s) in RCA: 77] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2007] [Accepted: 05/30/2007] [Indexed: 11/22/2022] Open
Abstract
Human walking is a dynamic, partly self-stabilizing process relying on the interaction of the biomechanical design with its neuronal control. The coordination of this process is a very difficult problem, and it has been suggested that it involves a hierarchy of levels, where the lower ones, e.g., interactions between muscles and the spinal cord, are largely autonomous, and where higher-level control (e.g., cortical) arises only pointwise, as needed. This requires an architecture of several nested sensorimotor loops, where the walking process provides feedback signals to the walker's sensory systems, which can be used to coordinate its movements. To complicate the situation, at a maximal walking speed of more than four leg-lengths per second, the cycle period available to coordinate all these loops is rather short. In this study we present a planar biped robot which uses the design principle of nested loops to combine the self-stabilizing properties of its biomechanical design with several levels of neuronal control. Specifically, we show how to adapt control by including online learning mechanisms based on simulated synaptic plasticity. This robot can walk at high speed (>3.0 leg lengths/s), self-adapting to minor disturbances and reacting robustly to abruptly induced gait changes. At the same time, it can learn to walk on different terrains, requiring only a few learning experiences. This study shows that the tight coupling of physical with neuronal control, guided by sensory feedback from the walking pattern itself and combined with synaptic learning, may be a way forward to better understand and solve coordination problems in other complex motor tasks. The problem of motor coordination of complex multi-joint movements has been recognized as very difficult in biological as well as in technical systems. The high degree of redundancy of such movements and the complexity of their dynamics make it hard to arrive at robust solutions.
Biological systems, however, are able to move with elegance and efficiency, and they have solved this problem by a combination of appropriate biomechanics, neuronal control, and adaptivity. Human walking is a prominent example of this, combining dynamic control with the physics of the body and letting it interact with the terrain in a highly energy-efficient way during walking or running. The current study is the first to use a similar hybrid and adaptive, mechano-neuronal design strategy to build and control a small, fast biped walking robot and to make it learn, to a certain degree, to adapt to changes in the terrain. This study thus presents a proof of concept for a design principle suggested by physiological findings and may help us to better understand the interplay of these different components in human walking as well as in other complex movement patterns.
Affiliation(s)
- Poramate Manoonpong
- Bernstein Center for Computational Neuroscience, University of Göttingen, Göttingen, Germany
- Tao Geng
- Department of Psychology, University of Stirling, Stirling, Scotland, United Kingdom
- Tomas Kulvicius
- Bernstein Center for Computational Neuroscience, University of Göttingen, Göttingen, Germany
- Bernd Porr
- Department of Electronics and Electrical Engineering, University of Glasgow, Glasgow, Scotland, United Kingdom
- Florentin Wörgötter
- Bernstein Center for Computational Neuroscience, University of Göttingen, Göttingen, Germany
- Department of Psychology, University of Stirling, Stirling, Scotland, United Kingdom
|
22
|
Sjöström PJ, Rancz EA, Roth A, Häusser M. Dendritic excitability and synaptic plasticity. Physiol Rev 2008; 88:769-840. [PMID: 18391179 DOI: 10.1152/physrev.00016.2007] [Citation(s) in RCA: 418] [Impact Index Per Article: 26.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022] Open
Abstract
Most synaptic inputs are made onto the dendritic tree. Recent work has shown that dendrites play an active role in transforming synaptic input into neuronal output and in defining the relationships between active synapses. In this review, we discuss how these dendritic properties influence the rules governing the induction of synaptic plasticity. We argue that the location of synapses in the dendritic tree, and the type of dendritic excitability associated with each synapse, play decisive roles in determining the plastic properties of that synapse. Furthermore, since the electrical properties of the dendritic tree are not static, but can be altered by neuromodulators and by synaptic activity itself, we discuss how learning rules may be dynamically shaped by tuning dendritic function. We conclude by describing how this reciprocal relationship between plasticity of dendritic excitability and synaptic plasticity has changed our view of information processing and memory storage in neuronal networks.
Affiliation(s)
- P Jesper Sjöström
- Wolfson Institute for Biomedical Research and Department of Physiology, University College London, London, United Kingdom
|
23
|
Morrison A, Diesmann M, Gerstner W. Phenomenological models of synaptic plasticity based on spike timing. Biol Cybern 2008; 98:459-78. [PMID: 18491160 PMCID: PMC2799003 DOI: 10.1007/s00422-008-0233-1] [Citation(s) in RCA: 277] [Impact Index Per Article: 17.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/16/2008] [Accepted: 04/09/2008] [Indexed: 05/20/2023]
Abstract
Synaptic plasticity is considered to be the biological substrate of learning and memory. In this document we review phenomenological models of short-term and long-term synaptic plasticity, in particular spike-timing dependent plasticity (STDP). The aim of the document is to provide a framework for classifying and evaluating different models of plasticity. We focus on phenomenological synaptic models that are compatible with integrate-and-fire type neuron models where each neuron is described by a small number of variables. This implies that synaptic update rules for short-term or long-term plasticity can only depend on spike timing and, potentially, on membrane potential, as well as on the value of the synaptic weight, or on low-pass filtered (temporally averaged) versions of the above variables. We examine the ability of the models to account for experimental data and to fulfill expectations derived from theoretical considerations. We further discuss their relations to teacher-based rules (supervised learning) and reward-based rules (reinforcement learning). All models discussed in this paper are suitable for large-scale network simulations.
Affiliation(s)
- Abigail Morrison
- Computational Neuroscience Group, RIKEN Brain Science Institute, Wako City, Japan
- Markus Diesmann
- Computational Neuroscience Group, RIKEN Brain Science Institute, Wako City, Japan
- Bernstein Center for Computational Neuroscience, Albert-Ludwigs-University, Freiburg, Germany
- Wulfram Gerstner
- Laboratory of Computational Neuroscience, LCN, Brain Mind Institute and School of Computer and Communication Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Station 15, 1015 Lausanne, Switzerland
|
24
|
Kolodziejski C, Porr B, Wörgötter F. Mathematical properties of neuronal TD-rules and differential Hebbian learning: a comparison. Biol Cybern 2008; 98:259-272. [PMID: 18196266 PMCID: PMC2798052 DOI: 10.1007/s00422-007-0209-6] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/19/2007] [Accepted: 12/19/2007] [Indexed: 05/25/2023]
Abstract
A confusingly wide variety of temporally asymmetric learning rules exists, related to reinforcement learning and/or to spike-timing-dependent plasticity, many of which look exceedingly similar while displaying strongly different behavior. These rules often find their use in control tasks, for example in robotics, and for this rigorous convergence and numerical stability are required. The goal of this article is to review and compare these rules to provide a better overview of their different properties. Two main classes will be discussed: temporal-difference (TD) rules and correlation-based (differential Hebbian) rules, plus some transition cases. In general we focus on neuronal implementations with changeable synaptic weights and a time-continuous representation of activity. In a machine-learning (non-neuronal) context, a solid mathematical theory of TD learning has existed for several years; this can partly be transferred to a neuronal framework, too. On the other hand, only now has a more complete theory also emerged for differential Hebbian rules. In general, rules differ by their convergence conditions and their numerical stability, which can lead to very undesirable behavior when applying them. For TD, convergence can be enforced with a certain output condition assuring that the delta-error drops on average to zero (output control). Correlation-based rules, on the other hand, converge when one input drops to zero (input control). Temporally asymmetric learning rules treat situations where incoming stimuli follow each other in time; thus, it is necessary to remember the first stimulus in order to relate it to the later-occurring second one. To this end, different types of so-called eligibility traces are used by these two types of rules, which again leads to different properties of TD and differential Hebbian learning, as discussed here.
Thus, this paper, while also presenting several novel mathematical results, is mainly meant to provide a road map through the different neuronally emulated temporally asymmetric learning rules and their behavior, to provide some guidance for possible applications.
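The two rule classes the review compares can be sketched side by side; all names and parameter values below are illustrative. TD is output-controlled (the delta-error shrinks as the value estimate converges), while the differential Hebbian update is input-controlled (it stops when the input drops to zero):

```python
def td_update(V, s, s_next, r, alpha=0.1, gamma=0.9):
    """TD(0): V[s] += alpha * (r + gamma * V[s_next] - V[s]); returns the delta-error."""
    delta = r + gamma * V[s_next] - V[s]
    V[s] += alpha * delta
    return delta

def diff_hebb_update(w, u, v, v_prev, mu=0.1):
    """Differential Hebbian: dw = mu * u * dv, with dv a discrete time derivative."""
    return w + mu * u * (v - v_prev)

V = {0: 0.0, 1: 1.0}
delta = td_update(V, s=0, s_next=1, r=0.0)            # delta = 0.9, V[0] becomes 0.09
w = diff_hebb_update(1.0, u=1.0, v=0.5, v_prev=0.0)   # weight grows while the output rises
```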
Affiliation(s)
- Christoph Kolodziejski
- Bernstein Center for Computational Neuroscience, University of Göttingen, Bunsenstr. 10, 37073 Göttingen, Germany
- Bernd Porr
- Department of Electronics and Electrical Engineering, University of Glasgow, Glasgow, GT12 8LT Scotland
- Florentin Wörgötter
- Bernstein Center for Computational Neuroscience, University of Göttingen, Bunsenstr. 10, 37073 Göttingen, Germany
|
25
|
Brader JM, Senn W, Fusi S. Learning Real-World Stimuli in a Neural Network with Spike-Driven Synaptic Dynamics. Neural Comput 2007; 19:2881-912. [PMID: 17883345 DOI: 10.1162/neco.2007.19.11.2881] [Citation(s) in RCA: 143] [Impact Index Per Article: 8.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
We present a model of spike-driven synaptic plasticity inspired by experimental observations and motivated by the desire to build an electronic hardware device that can learn to classify complex stimuli in a semisupervised fashion. During training, patterns of activity are sequentially imposed on the input neurons, and an additional instructor signal drives the output neurons toward the desired activity. The network is made of integrate-and-fire neurons with constant leak and a floor. The synapses are bistable, and they are modified by the arrival of presynaptic spikes. The sign of the change is determined by both the depolarization and the state of a variable that integrates the postsynaptic action potentials. Following the training phase, the instructor signal is removed, and the output neurons are driven purely by the activity of the input neurons weighted by the plastic synapses. In the absence of stimulation, the synapses preserve their internal state indefinitely. Memories are also very robust to the disruptive action of spontaneous activity. A network of 2000 input neurons is shown to be able to classify correctly a large number (thousands) of highly overlapping patterns (300 classes of preprocessed LaTeX characters, 30 patterns per class, and a subset of the NIST characters data set) and to generalize with performances that are better than or comparable to those of artificial neural networks. Finally we show that the synaptic dynamics is compatible with many of the experimental observations on the induction of long-term modifications (spike-timing-dependent plasticity and its dependence on both the postsynaptic depolarization and the frequency of pre- and postsynaptic neurons).
|
26
|
Porr B, Wörgötter F. Learning with "relevance": using a third factor to stabilize Hebbian learning. Neural Comput 2007; 19:2694-719. [PMID: 17716008 DOI: 10.1162/neco.2007.19.10.2694] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
It is a well-known fact that Hebbian learning is inherently unstable because of its self-amplifying terms: the more a synapse grows, the stronger the postsynaptic activity, and therefore the faster the synaptic growth. This unwanted weight growth is driven by the autocorrelation term of Hebbian learning where the same synapse drives its own growth. On the other hand, the cross-correlation term performs actual learning where different inputs are correlated with each other. Consequently, we would like to minimize the autocorrelation and maximize the cross-correlation. Here we show that we can achieve this with a third factor that switches on learning when the autocorrelation is minimal or zero and the cross-correlation is maximal. The biological counterpart of such a third factor is a neuromodulator that switches on learning at a certain moment in time. We show in a behavioral experiment that our three-factor learning clearly outperforms classical Hebbian learning.
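The three-factor idea described above can be sketched in a few lines: a modulatory signal M gates the Hebbian correlation, so learning is switched on only when the third factor signals relevance. The names and values here are illustrative, not the paper's:

```python
def three_factor_update(w, x, y, M, eta=0.1):
    """Gated Hebbian update: dw = eta * M * x * y, with M the modulatory third factor."""
    return w + eta * M * x * y

w = 1.0
w = three_factor_update(w, x=1.0, y=0.8, M=0.0)  # modulator off: no weight change
w = three_factor_update(w, x=1.0, y=0.8, M=1.0)  # modulator on: weight grows to 1.08
```

By keeping M at zero whenever the postsynaptic activity is dominated by the synapse's own autocorrelation, the rule lets cross-correlations drive learning while suppressing self-amplification.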
Affiliation(s)
- Bernd Porr
- Department of Electronics and Electrical Engineering, University of Glasgow, Glasgow, GT12 8LT, Scotland.
|
27
|
Abstract
The quest to build an electronic computer based on the operational principles of biological brains has attracted attention over many years. The hope is that, by emulating the brain, it will be possible to capture some of its capabilities and thereby bridge the very large gulf that separates mankind from machines. At present, however, knowledge about the operational principles of the brain is far from complete, so attempts at emulation must employ a great deal of assumption and guesswork to fill the gaps in the experimental evidence. The sheer scale and complexity of the human brain still defies attempts to model it in its entirety at the neuronal level, but Moore's Law is closing this gap and machines with the potential to emulate the brain (so far as we can estimate the computing power required) are no more than a decade or so away. Do computer engineers have something to contribute, alongside neuroscientists, psychologists, mathematicians and others, to the understanding of brain and mind, which remains one of the great frontiers of science?
Affiliation(s)
- Steve Furber
- School of Computer Science, University of Manchester, Oxford Road, Manchester M13 9PL, UK.
|
28
|
Porr B, Wörgötter F. Fast heterosynaptic learning in a robot food retrieval task inspired by the limbic system. Biosystems 2007; 89:294-9. [PMID: 17292537 DOI: 10.1016/j.biosystems.2006.04.026] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2005] [Accepted: 04/20/2006] [Indexed: 11/27/2022]
Abstract
Hebbian learning is the most prominent paradigm in correlation-based learning: if pre- and postsynaptic activity coincides, the weight of the synapse is strengthened. Hebbian learning, however, is not stable because of an autocorrelation term which causes the weights to grow exponentially. The standard solution would be to compensate for the autocorrelation term. Here, instead, we present a heterosynaptic learning rule which does not have an autocorrelation term and therefore does not show the instability of Hebbian learning; consequently, our heterosynaptic learning is much more stable than classical Hebbian learning. The performance of our learning rule is demonstrated in a model, inspired by the limbic system, in which an agent has to retrieve food.
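The instability argument can be made concrete with a toy contrast; the rules below are illustrative simplifications, not the paper's model. In plain Hebbian learning the weight drives its own growth through the output, whereas in the heterosynaptic variant the change of one weight is driven by a different input, so the self-amplifying term disappears:

```python
def hebbian(w, x, eta=0.5, steps=20):
    """Plain Hebbian: dw = eta * x * y with y = w * x; the w inside y self-amplifies."""
    for _ in range(steps):
        y = w * x
        w += eta * x * y        # dw ~ eta * x^2 * w: exponential growth
    return w

def heterosynaptic(w1, x1, x2, eta=0.5, steps=20):
    """Heterosynaptic: the change of w1 is driven by the other input x2, not by w1."""
    for _ in range(steps):
        w1 += eta * x2 * x1     # no autocorrelation term: growth is linear and bounded
    return w1

w_hebb = hebbian(1.0, 1.0)              # blows up (about 3325 after 20 steps)
w_het = heterosynaptic(1.0, 0.1, 0.1)   # grows linearly to 1.1
```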
Affiliation(s)
- Bernd Porr
- Department of Electronics & Electrical Engineering, University of Glasgow, Oakfield Avenue, Glasgow GT12 8LT, UK.
|
29
|
Tamosiunaite M, Porr B, Wörgötter F. Developing velocity sensitivity in a model neuron by local synaptic plasticity. Biol Cybern 2007; 96:507-18. [PMID: 17431665 DOI: 10.1007/s00422-007-0146-4] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/30/2006] [Accepted: 02/12/2007] [Indexed: 05/14/2023]
Abstract
Sensory neurons, like those in the visual cortex, display specific functional properties, e.g., tuning for the orientation, direction, and velocity of a moving stimulus. It is still unclear how these properties arise from the processing of the inputs that converge at a given cell; specifically, little is known about how such properties can develop by way of synaptic plasticity. In this study we investigate the hypothesis that velocity sensitivity can develop at a neuron from different types of synaptic plasticity at different dendritic sub-structures. Specifically, we implement spike-timing-dependent plasticity at one dendritic branch and conventional long-term potentiation at another branch, both driven by dendritic spikes triggered by moving inputs. In the first part of the study, we show how velocity sensitivity can arise from such a spatially localized difference in plasticity. In the second part we show how this scenario is augmented by the interaction between dendritic spikes and back-propagating spikes, also at different dendritic branches. Recent theoretical (Saudargiene et al. in Neural Comput 16:595-626, 2004) and experimental (Froemke et al. in Nature 434:221-225, 2005) results on spatially localized plasticity suggest that such processes may play a major role in determining how synapses change depending on their site. The current study suggests that such mechanisms could be used to develop the functional specificities of a neuron.
|
30
|
Bohte SM, Mozer MC. Reducing the Variability of Neural Responses: A Computational Theory of Spike-Timing-Dependent Plasticity. Neural Comput 2007; 19:371-403. [PMID: 17206869 DOI: 10.1162/neco.2007.19.2.371] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Experimental studies have observed synaptic potentiation when a presynaptic neuron fires shortly before a postsynaptic neuron and synaptic depression when the presynaptic neuron fires shortly after. The dependence of synaptic modulation on the precise timing of the two action potentials is known as spike-timing dependent plasticity (STDP). We derive STDP from a simple computational principle: synapses adapt so as to minimize the postsynaptic neuron's response variability to a given presynaptic input, causing the neuron's output to become more reliable in the face of noise. Using an objective function that minimizes response variability and the biophysically realistic spike-response model of Gerstner (2001), we simulate neurophysiological experiments and obtain the characteristic STDP curve along with other phenomena, including the reduction in synaptic plasticity as synaptic efficacy increases. We compare our account to other efforts to derive STDP from computational principles and argue that our account provides the most comprehensive coverage of the phenomena. Thus, reliability of neural response in the face of noise may be a key goal of unsupervised cortical adaptation.
Affiliation(s)
- Sander M Bohte
- Netherlands Centre for Mathematics and Computer Science (CWI), 1098 SJ Amsterdam, The Netherlands.
|
31
|
Tamosiunaite M, Porr B, Wörgötter F. Self-influencing synaptic plasticity: recurrent changes of synaptic weights can lead to specific functional properties. J Comput Neurosci 2007; 23:113-27. [PMID: 17265145 DOI: 10.1007/s10827-007-0021-2] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2006] [Revised: 12/21/2006] [Accepted: 01/10/2007] [Indexed: 10/23/2022]
Abstract
Recent experimental results suggest that dendritic and back-propagating spikes can influence synaptic plasticity in different ways (Holthoff, 2004; Holthoff et al., 2005). In this study we investigate how these signals could interact at dendrites in space and time leading to changing plasticity properties at local synapse clusters. Similar to a previous study (Saudargiene et al., 2004) we employ a differential Hebbian learning rule to emulate spike-timing dependent plasticity and investigate how the interaction of dendritic and back-propagating spikes, as the post-synaptic signals, could influence plasticity. Specifically, we will show that local synaptic plasticity driven by spatially confined dendritic spikes can lead to the emergence of synaptic clusters with different properties. If one of these clusters can drive the neuron into spiking, plasticity may change and the now arising global influence of a back-propagating spike can lead to a further segregation of the clusters and possibly the dying-off of some of them leading to more functional specificity. These results suggest that through plasticity being a spatial and temporal local process, the computational properties of dendrites or complete neurons can be substantially augmented.
Affiliation(s)
- Minija Tamosiunaite
- Department of Psychology, University of Stirling, Stirling, FK9 4LA, Scotland.
|
32
|
|
33
|
Zhu L, Lai YC, Hoppensteadt FC, He J. Cooperation of spike timing-dependent and heterosynaptic plasticities in neural networks: a Fokker-Planck approach. Chaos 2006; 16:023105. [PMID: 16822008 DOI: 10.1063/1.2189969] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
It is believed that both Hebbian and homeostatic mechanisms are essential in neural learning. While Hebbian plasticity selectively modifies synaptic connectivity according to activity experienced, homeostatic plasticity constrains this change so that neural activity is always within reasonable physiological limits. Recent experiments reveal spike timing-dependent plasticity (STDP) as a new type of Hebbian learning with high time precision and heterosynaptic plasticity (HSP) as a new homeostatic mechanism acting directly on synapses. Here, we study the effect of STDP and HSP on randomly connected neural networks. Despite the reported successes of STDP to account for neural activities at the single-cell level, we find that, surprisingly, at the network level, networks trained using STDP alone cannot seem to generate realistic neural activities. For instance, STDP would stipulate that past sensory experience be maintained forever if it is no longer activated. To overcome this difficulty, motivated by the fact that HSP can induce strong competition between sensory experiences, we propose a biophysically plausible learning rule by combining STDP and HSP. Based on the Fokker-Planck theory and extensive numerical computations, we demonstrate that HSP and STDP operated on different time scales can complement each other, resulting in more realistic network activities. Our finding may provide fresh insight into the learning mechanism of the brain.
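Why a homeostatic mechanism complements Hebbian/STDP learning can be illustrated with a rate-based toy, not the paper's Fokker-Planck model: after each Hebbian step the weights are rescaled to conserve their total, which forces the competition between inputs that Hebbian learning alone lacks.

```python
import numpy as np

def step(w, x, total, eta=0.1):
    """One Hebbian update followed by homeostatic weight-sum normalization."""
    y = float(np.dot(w, x))       # linear postsynaptic rate
    w = w + eta * y * x           # Hebbian term: the more active input grows faster
    return w * (total / w.sum())  # homeostatic term: rescale so the total is conserved

w = np.array([0.5, 0.5])
for _ in range(50):
    w = step(w, np.array([1.0, 0.2]), total=1.0)  # input 1 is consistently more active
```

Without the normalization the weights would grow without bound; with it, the total stays fixed while the more active input captures an ever larger share.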
Affiliation(s)
- Liqiang Zhu
- School of Mechanical, Electronic and Control Engineering, Beijing Jiaotong University, Beijing, China
34
Abstract
Timing of cellular and subcellular events contributes to spiking-induced modification of synapses in a variety of ways. Initially, the timing of presynaptic and postsynaptic action potentials must be translated into signals that can initiate intracellular processes. Recent experimental and computational findings suggest that the spatiotemporal details of such signals, in particular the time courses and locations of postsynaptic Ca(2+) transients, might themselves be crucial for driving potentiation and depression modules that interact in a time-dependent way to determine plasticity outcomes. On longer timescales, the effects of multiple spikes are integrated in a nonlinear manner, yielding non-intuitive plasticity results that are likely to be sensitive to local conditions and, finally, additional elements must be called into action to stabilize changes in synaptic strengths. This review is part of the TINS Synaptic Connectivity series.
Affiliation(s)
- Guo-Qiang Bi
- Department of Neurobiology and Center for the Neural Basis of Cognition, University of Pittsburgh School of Medicine, Pittsburgh, PA 15261, USA.
35
Saudargiene A, Porr B, Wörgötter F. Synaptic modifications depend on synapse location and activity: a biophysical model of STDP. Biosystems 2005; 79:3-10. [PMID: 15649584] [DOI: 10.1016/j.biosystems.2004.09.010]
Abstract
In spike-timing-dependent plasticity (STDP), synapses are potentiated or depressed depending on the temporal order and temporal difference of the pre- and post-synaptic signals. We present a biophysical model of STDP which assumes that not only the timing, but also the shapes of these signals influence the synaptic modifications. The model is based on a Hebbian learning rule which correlates the NMDA synaptic conductance, as the presynaptic quantity, with the postsynaptic signal at the synapse location. As compared to a previous paper [Saudargiene, A., Porr, B., Wörgötter, F., 2004. How the shape of pre- and post-synaptic signals can influence STDP: a biophysical model. Neural Comput.], here we show that this rule reproduces the generic STDP weight-change curve using real neuronal input signals and combinations of more than two (pre- and post-synaptic) spikes. We demonstrate that the shape of the STDP curve strongly depends on the shape of the depolarising membrane potentials which induce learning. As these potentials vary at different locations of the dendritic tree, the model predicts that synaptic changes are location dependent. The model is extended to account for patterns of more than two spikes of the pre- and post-synaptic cells. The results show that the STDP weight-change curve is also activity dependent.
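The rule's core ingredient, correlating a presynaptic trace with the temporal derivative of the postsynaptic signal, can be sketched as follows. The alpha-function kernels and time constants are hypothetical stand-ins, not the paper's fitted conductances and recorded membrane traces.

```python
import numpy as np

def alpha_kernel(t, tau):
    """Alpha-function signal shape; zero before onset."""
    return np.where(t >= 0, (t / tau) * np.exp(1.0 - t / tau), 0.0)

def dhebb_dw(delta_t, tau_post, tau_pre=5.0, dt=0.1, T=200.0):
    """Differential Hebbian update: dw ~ integral of pre(t) * d(post)/dt.
    delta_t = t_post - t_pre in ms; positive means pre leads post."""
    t = np.arange(-T, T, dt)
    pre = alpha_kernel(t, tau_pre)               # presynaptic (NMDA-like) trace
    post = alpha_kernel(t - delta_t, tau_post)   # postsynaptic depolarization
    return float(np.sum(pre * np.gradient(post, dt)) * dt)

# Pre-before-post yields potentiation, post-before-pre yields depression;
# varying tau_post (the depolarization's rise/decay) reshapes the curve,
# which is the sense in which plasticity becomes location dependent.
ltp = dhebb_dw(+10.0, tau_post=2.0)
ltd = dhebb_dw(-10.0, tau_post=2.0)
```

Because the update integrates against the *derivative* of the postsynaptic signal, a sharp back-propagating spike and a slow dendritic depolarization produce differently shaped weight-change curves even for identical spike timings.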
Affiliation(s)
- A Saudargiene
- Department of Psychology, University of Stirling, Stirling FK9 4LA, Scotland, UK.
36
Rubin JE, Gerkin RC, Bi GQ, Chow CC. Calcium time course as a signal for spike-timing-dependent plasticity. J Neurophysiol 2005; 93:2600-13. [PMID: 15625097] [DOI: 10.1152/jn.00803.2004]
Abstract
Calcium has been proposed as a postsynaptic signal underlying synaptic spike-timing–dependent plasticity (STDP). We examine this hypothesis with computational modeling based on experimental results from hippocampal cultures, some of which are presented here, in which pairs and triplets of pre- and postsynaptic spikes induce potentiation and depression in a temporally asymmetric way. Specifically, we present a set of model biochemical detectors, based on plausible molecular pathways, which make direct use of the time course of the calcium signal to reproduce these experimental STDP results. Our model features a modular structure, in which long-term potentiation (LTP) and depression (LTD) components compete to determine final plasticity outcomes; one aspect of this competition is a veto through which appropriate calcium time courses suppress LTD. Simulations of our model are also shown to be consistent with classical LTP and LTD induced by several presynaptic stimulation paradigms. Overall, our results provide computational evidence that, while the postsynaptic calcium time course contains sufficient information to distinguish various experimental long-term plasticity paradigms, small changes in the properties of back-propagation of action potentials or in synaptic dynamics can alter the calcium time course in ways that will significantly affect STDP induction by any detector based exclusively on postsynaptic calcium. This may account for the variability of STDP outcomes seen within hippocampal cultures, under repeated application of a single experimental protocol, as well as for that seen in multiple spike experiments across different systems.
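The competition between LTP and LTD components keyed to the calcium signal is often caricatured with a two-threshold rule on peak calcium (in the spirit of Shouval-style calcium control, far simpler than the paper's time-course-sensitive detectors with a veto). The threshold values here are hypothetical:

```python
def calcium_plasticity(ca_peak_uM):
    """Two-threshold calcium rule: sub-threshold Ca -> no change,
    moderate Ca -> depression (LTD), high Ca -> potentiation (LTP)."""
    THETA_D, THETA_P = 0.35, 0.55   # hypothetical LTD and LTP thresholds (uM)
    if ca_peak_uM < THETA_D:
        return 0.0
    if ca_peak_uM < THETA_P:
        return -0.1   # LTD component wins at moderate calcium
    return 0.2        # LTP component wins at high calcium
```

Note the abstract's point, however: a detector keyed only to peak calcium discards time-course information, which is exactly why the authors use interacting LTP/LTD modules plus a veto that lets appropriate calcium time courses suppress LTD.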
Affiliation(s)
- Jonathan E Rubin
- Department of Mathematics, University of Pittsburgh, 301 Thackeray Hall, Pittsburgh, PA 15260, USA.
37
Wörgötter F, Porr B. Temporal sequence learning, prediction, and control: a review of different models and their relation to biological mechanisms. Neural Comput 2005; 17:245-319. [PMID: 15720770] [DOI: 10.1162/0899766053011555]
Abstract
In this review, we compare methods for temporal sequence learning (TSL) across the disciplines of machine control, classical conditioning, neuronal models of TSL, and spike-timing-dependent plasticity (STDP). We introduce the most influential models and focus on two questions: To what degree are reward-based (e.g., TD) learning and correlation-based (Hebbian) learning related? And how do the different models correspond to possibly underlying biological mechanisms of synaptic plasticity? We first compare the different models in an open-loop condition, where behavioral feedback does not alter the learning. Here we observe that reward-based and correlation-based learning are indeed very similar. Machine control is then used to introduce the problem of closed-loop control (e.g., actor-critic architectures). Here the problem of evaluative (reward) versus nonevaluative (correlation) feedback from the environment is discussed, showing that the two learning approaches are fundamentally different in the closed-loop condition. In trying to answer the second question, we compare neuronal versions of the different learning architectures to the anatomy of the involved brain structures (basal ganglia, thalamus, and cortex) and to the molecular biophysics of glutamatergic and dopaminergic synapses. Finally, we discuss the different algorithms used to model STDP and compare them to reward-based learning rules. Certain similarities are found in spite of the strongly different timescales; here we focus on the biophysics of the different calcium-release mechanisms known to be involved in STDP.
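The open-loop similarity claimed above can be sketched in a few lines: for a cue that reliably precedes a reward-like signal, a TD(0) update and a correlation-based (differential Hebbian) update both strengthen the cue's weight. The stimulus timing, learning rates, and one-weight setup are hypothetical simplifications.

```python
import numpy as np

T = 50
x = np.zeros(T); x[10:20] = 1.0   # cue signal
r = np.zeros(T); r[20] = 1.0      # reward-like signal just after cue offset

def td0(passes=30, alpha=0.1, gamma=1.0):
    """Reward-based TD(0): w learns to predict future reward from the cue."""
    w = 0.0
    for _ in range(passes):
        for t in range(T - 1):
            delta = r[t + 1] + gamma * w * x[t + 1] - w * x[t]  # TD error
            w += alpha * delta * x[t]
    return w

def diff_hebb(passes=30, eta=0.1):
    """Correlation-based: weight grows when the cue overlaps a rising signal."""
    w = 0.0
    for _ in range(passes):
        for t in range(T - 1):
            w += eta * x[t] * (r[t + 1] - r[t])
    return w

w_td, w_hebb = td0(), diff_hebb()   # both end up positive (potentiated)
```

In this open-loop setting both rules move the weight in the same direction; the review's point is that the distinction only becomes fundamental once the learned output feeds back on the environment (closed loop), where evaluative and nonevaluative feedback diverge.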
Affiliation(s)
- Florentin Wörgötter
- Department of Psychology, University of Stirling, Stirling FK9 4LA, Scotland.
38
Saudargiene A, Porr B, Wörgötter F. Local learning rules: predicted influence of dendritic location on synaptic modification in spike-timing-dependent plasticity. Biol Cybern 2005; 92:128-138. [PMID: 15696313] [DOI: 10.1007/s00422-004-0525-z]
Abstract
Recent indirect experimental evidence suggests that synaptic plasticity changes along the dendrites of a neuron. Here we present a synaptic plasticity rule that is controlled by the properties of the pre- and postsynaptic signals. Using recorded membrane traces of back-propagating and dendritic spikes, we demonstrate that LTP and LTD depend specifically on the shape of the postsynaptic depolarization at a given dendritic site. We find that asymmetrical spike-timing-dependent plasticity (STDP) can be replaced by temporally symmetrical plasticity within physiologically relevant time windows if the postsynaptic depolarization rises shallowly. Presynaptically, the rule depends on the NMDA channel characteristic, and the model predicts that an increase in Mg(2+) will attenuate the STDP curve without changing its shape. Furthermore, the model suggests that the profile of LTD should be governed by the postsynaptic signal, while that of LTP depends mainly on the presynaptic signal shape.
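The Mg(2+) prediction is plausible from the voltage-dependent magnesium block of the NMDA channel: in the commonly used Jahr-Stevens form, external [Mg2+] enters as a multiplicative factor, so raising it scales the unblocked fraction down at every voltage. A sketch, using the commonly quoted constants rather than values from this paper:

```python
import math

def nmda_unblocked(v_mv, mg_mM=1.0):
    """Fraction of NMDA conductance not blocked by Mg2+ (Jahr-Stevens form):
    1 / (1 + exp(-0.062 * V) * [Mg] / 3.57), V in mV, [Mg] in mM."""
    return 1.0 / (1.0 + math.exp(-0.062 * v_mv) * mg_mM / 3.57)

# Doubling Mg2+ attenuates the NMDA signal at every membrane potential,
# which in the model scales the whole STDP curve down.
voltages = (-70.0, -50.0, -30.0)
low_mg  = [nmda_unblocked(v, 1.0) for v in voltages]
high_mg = [nmda_unblocked(v, 2.0) for v in voltages]
```

Because the block relieves with depolarization, the unblocked fraction also rises monotonically from -70 mV to -30 mV at either Mg(2+) concentration.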
39
Suri RE. A computational framework for cortical learning. Biol Cybern 2004; 90:400-409. [PMID: 15316786] [DOI: 10.1007/s00422-004-0487-1]
Abstract
Recent physiological findings have revealed that long-term adaptation of the synaptic strengths between cortical pyramidal neurons depends on the temporal order of presynaptic and postsynaptic spikes, which is called spike-timing-dependent plasticity (STDP) or temporally asymmetric Hebbian (TAH) learning. Here I prove by analytical means that a physiologically plausible variant of STDP adapts synaptic strengths such that the presynaptic spikes predict the postsynaptic spikes with minimal error. This prediction-error model of STDP implies a mechanism for cortical memory: cortical tissue learns temporal spike patterns if these patterns are repeatedly elicited in a set of pyramidal neurons. The trained network completes these patterns when their beginnings are presented, thereby recalling the memory. Implementations of the proposed algorithms may be useful for applications in voice recognition and computer vision.
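The recall mechanism described, learning a repeated spike sequence with a temporally asymmetric Hebbian rule and then completing it from its beginning, can be sketched with a toy network. One-hot patterns and a hard-threshold readout are illustrative simplifications, not the paper's analytical model.

```python
import numpy as np

N = 3
seq = np.eye(N)   # one-hot activity patterns for the sequence A -> B -> C

# Temporally asymmetric Hebbian (STDP-like) learning: strengthen connections
# from the pattern active at time t onto the pattern active at t+1.
W = np.zeros((N, N))
for _ in range(20):                      # repeated elicitation of the sequence
    for t in range(N - 1):
        W += 0.1 * np.outer(seq[t + 1], seq[t])   # post (t+1) x pre (t)

# Recall: present the beginning (A) and let the network finish the pattern.
x = seq[0]
recalled = []
for _ in range(N - 1):
    x = (W @ x > 0.5).astype(float)      # simple threshold units
    recalled.append(int(np.argmax(x)))   # recovers B, then C
```

Because the learned weights point strictly forward in time, presenting the first element replays the rest of the stored sequence in order, which is the memory-recall behavior the abstract describes.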
Affiliation(s)
- Roland E Suri
- 3641 Midvale Ave. #205, Los Angeles, CA 90034, USA.