1
Kunz L, Staresina BP, Reinacher PC, Brandt A, Guth TA, Schulze-Bonhage A, Jacobs J. Ripple-locked coactivity of stimulus-specific neurons and human associative memory. Nat Neurosci 2024;27:587-599. PMID: 38366143; PMCID: PMC10917673; DOI: 10.1038/s41593-023-01550-x.
Abstract
Associative memory enables the encoding and retrieval of relations between different stimuli. To better understand its neural basis, we investigated whether associative memory involves temporally correlated spiking of medial temporal lobe (MTL) neurons that exhibit stimulus-specific tuning. Using single-neuron recordings from patients with epilepsy performing an associative object-location memory task, we identified the object-specific and place-specific neurons that represented the separate elements of each memory. When patients encoded and retrieved particular memories, the relevant object-specific and place-specific neurons activated together during hippocampal ripples. This ripple-locked coactivity of stimulus-specific neurons emerged over time as the patients' associative learning progressed. Between encoding and retrieval, the ripple-locked timing of coactivity shifted, suggesting flexibility in the interaction between MTL neurons and hippocampal ripples according to behavioral demands. Our results are consistent with a cellular account of associative memory, in which hippocampal ripples coordinate the activity of specialized cellular populations to facilitate links between stimuli.
Affiliation(s)
- Lukas Kunz
- Department of Epileptology, University Hospital Bonn, Bonn, Germany
- Epilepsy Center, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Bernhard P Staresina
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
- Peter C Reinacher
- Department of Stereotactic and Functional Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Fraunhofer Institute for Laser Technology, Aachen, Germany
- Armin Brandt
- Epilepsy Center, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Tim A Guth
- Department of Epileptology, University Hospital Bonn, Bonn, Germany
- Epilepsy Center, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Andreas Schulze-Bonhage
- Epilepsy Center, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Joshua Jacobs
- Department of Biomedical Engineering, Columbia University, New York, NY, USA
- Department of Neurological Surgery, Columbia University Medical Center, New York, NY, USA
2
Echeverria V, Mendoza C, Iarkov A. Nicotinic acetylcholine receptors and learning and memory deficits in neuroinflammatory diseases. Front Neurosci 2023;17:1179611. PMID: 37255751; PMCID: PMC10225599; DOI: 10.3389/fnins.2023.1179611.
Abstract
Animal survival depends on cognitive abilities such as learning and memory to adapt to environmental changes. Memory functions require enhanced activity and connectivity of a particular arrangement of engram neurons, supported by the concerted action of neurons, glia, and vascular cells. Deterioration of the cholinergic system is common in neurological conditions exacerbated by aging, such as traumatic brain injury (TBI), posttraumatic stress disorder (PTSD), Alzheimer's disease (AD), and Parkinson's disease (PD). Cotinine is a cholinergic modulator with neuroprotective, antidepressant, anti-inflammatory, antioxidant, and memory-enhancing effects. Current evidence suggests that cotinine's beneficial effects on cognition result from positive modulation of α7-nicotinic acetylcholine receptors (α7nAChRs) and inhibition of toll-like receptors (TLRs). The α7nAChR affects brain function by modulating neurons, glia, endothelial, immune, and dendritic cells, and it regulates inhibitory and excitatory neurotransmission through GABAergic interneurons. In addition, cotinine, acting on α7nAChRs and TLRs, reduces neuroinflammation by inhibiting the release of pro-inflammatory cytokines by immune cells. α7nAChRs also stimulate signaling pathways supporting the structural, biochemical, electrochemical, and cellular changes that occur in the central nervous system during cognitive processes, including neurogenesis. Here, we discuss the mechanisms of memory formation as well as potential mechanisms by which cotinine preserves memory in aging and neurological diseases.
Affiliation(s)
- Valentina Echeverria
- Facultad de Medicina y Ciencia, Universidad San Sebastián, Concepción, Chile
- Research and Development Department, Bay Pines VAHCS, Bay Pines, FL, United States
- Cristhian Mendoza
- Facultad de Odontologia y Ciencias de la Rehabilitacion, Universidad San Sebastián, Concepción, Chile
- Alex Iarkov
- Facultad de Medicina y Ciencia, Universidad San Sebastián, Concepción, Chile
3
Liu CQ, Qu XC, He MF, Liang DH, Xie SM, Zhang XX, Lin YM, Zhang WJ, Wu KC, Qiao JD. Efficient strategies based on behavioral and electrophysiological methods for epilepsy-related gene screening in the Drosophila model. Front Mol Neurosci 2023;16:1121877. PMID: 37152436; PMCID: PMC10157486; DOI: 10.3389/fnmol.2023.1121877.
Abstract
Introduction: With the advent of trio-based whole-exome sequencing, identifying epilepsy candidate genes has become easier, yielding a large number of potential genes that need to be validated in a whole-organism context. However, conducting animal experiments systematically and efficiently remains a challenge because they are laborious and time-consuming. This study aims to develop optimized strategies for validating epilepsy candidate genes using the Drosophila model.
Methods: This study incorporates behavioral, morphological, and electrophysiological methods for genetic manipulation and phenotypic examination. We utilized the Gal4/UAS system in combination with RNAi techniques to generate loss-of-function models. We performed a range of behavioral tests, including two previously unreported seizure phenotypes, to evaluate the seizure behavior of mutant and wild-type flies. We used Gal4/UAS-mGFP flies to observe morphological alterations in the brain under a confocal microscope. We also implemented patch-clamp recordings, including a novel electrophysiological method for studying synapse function and improved methods for recording action potential currents and spontaneous EPSCs in targeted neurons.
Results: We applied these techniques to investigate four epilepsy-associated genes, namely Tango14, Klp3A, Cac, and Sbf, based on their genotype-phenotype correlations. Our findings demonstrate the feasibility and efficiency of our screening system for confirming epilepsy candidate genes in the Drosophila model.
Discussion: This efficient screening system has the potential to significantly accelerate and optimize the identification of epilepsy candidate genes, particularly in conjunction with trio-based whole-exome sequencing.
Affiliation(s)
- Chu-Qiao Liu
- Department of Neurology, Institute of Neuroscience, Key Laboratory of Neurogenetics and Channelopathies of Guangdong Province and the Ministry of Education of China, The Second Affiliated Hospital, Guangzhou Medical University, Guangzhou, China
- The Second Clinical Medicine School of Guangzhou Medical University, Guangzhou, China
- Xiao-Chong Qu
- Department of Neurology, Institute of Neuroscience, Key Laboratory of Neurogenetics and Channelopathies of Guangdong Province and the Ministry of Education of China, The Second Affiliated Hospital, Guangzhou Medical University, Guangzhou, China
- Ming-Feng He
- Department of Neurology, Institute of Neuroscience, Key Laboratory of Neurogenetics and Channelopathies of Guangdong Province and the Ministry of Education of China, The Second Affiliated Hospital, Guangzhou Medical University, Guangzhou, China
- De-Hai Liang
- Department of Neurology, Institute of Neuroscience, Key Laboratory of Neurogenetics and Channelopathies of Guangdong Province and the Ministry of Education of China, The Second Affiliated Hospital, Guangzhou Medical University, Guangzhou, China
- Shi-Ming Xie
- The First Clinical Medicine School of Guangzhou Medical University, Guangzhou, China
- Xi-Xing Zhang
- The Second Clinical Medicine School of Guangzhou Medical University, Guangzhou, China
- Yong-Miao Lin
- The Second Clinical Medicine School of Guangzhou Medical University, Guangzhou, China
- Wen-Jun Zhang
- Department of Neurology, Institute of Neuroscience, Key Laboratory of Neurogenetics and Channelopathies of Guangdong Province and the Ministry of Education of China, The Second Affiliated Hospital, Guangzhou Medical University, Guangzhou, China
- Ka-Chun Wu
- School of Clinical Medicine, LKS Faculty of Medicine, The University of Hong Kong, Hong Kong, Hong Kong SAR, China
- Jing-Da Qiao
- Department of Neurology, Institute of Neuroscience, Key Laboratory of Neurogenetics and Channelopathies of Guangdong Province and the Ministry of Education of China, The Second Affiliated Hospital, Guangzhou Medical University, Guangzhou, China
- *Correspondence: Jing-Da Qiao; ORCID: 0000-0002-4693-8390
4
Abstract
Neuromorphic devices and systems have attracted attention as next-generation computing due to their high efficiency in processing complex data. So far, they have been demonstrated using both machine-learning software and complementary metal-oxide-semiconductor-based hardware. However, these approaches have drawbacks in power consumption and learning speed. An energy-efficient neuromorphic computing system requires hardware that can mimic the functions of a brain. Therefore, various materials have been introduced for the development of neuromorphic devices. Here, recent advances in neuromorphic devices are reviewed. First, the functions of biological synapses and neurons are discussed. Also, deep neural networks and spiking neural networks are described. Then, the operation mechanism and the neuromorphic functions of emerging devices are reviewed. Finally, the challenges and prospects for developing neuromorphic devices that use emerging materials are discussed.
Affiliation(s)
- Min-Kyu Kim
- Department of Materials Science and Engineering, Pohang University of Science and Technology (POSTECH), Pohang 37673, Republic of Korea
- Youngjun Park
- Department of Materials Science and Engineering, Pohang University of Science and Technology (POSTECH), Pohang 37673, Republic of Korea
- Ik-Jyae Kim
- Department of Materials Science and Engineering, Pohang University of Science and Technology (POSTECH), Pohang 37673, Republic of Korea
- Jang-Sik Lee
- Department of Materials Science and Engineering, Pohang University of Science and Technology (POSTECH), Pohang 37673, Republic of Korea
5
Kim SY, Lim W. Effect of interpopulation spike-timing-dependent plasticity on synchronized rhythms in neuronal networks with inhibitory and excitatory populations. Cogn Neurodyn 2020;14:535-567. PMID: 32655716; DOI: 10.1007/s11571-020-09580-y.
Abstract
We consider a two-population network consisting of both inhibitory (I) interneurons and excitatory (E) pyramidal cells. This I-E neuronal network has adaptive dynamic I-to-E and E-to-I interpopulation synaptic strengths, governed by interpopulation spike-timing-dependent plasticity (STDP). In previous works without STDP, fast sparsely synchronized rhythms, related to diverse cognitive functions, were found to appear in a range of noise intensities D for static synaptic strengths. Here, by varying D, we investigate the effect of interpopulation STDP on the fast sparsely synchronized rhythms that emerge in both the I- and the E-populations. Depending on the value of D, long-term potentiation (LTP) or long-term depression (LTD) of the population-averaged saturated interpopulation synaptic strengths occurs, and the degree of fast sparse synchronization varies accordingly. In a broad region of intermediate D, good synchronization (with a higher synchronization degree) is weakened, while in a region of large D, bad synchronization (with a lower synchronization degree) is enhanced. Consequently, in each I- or E-population, the synchronization degree becomes nearly the same across a wide range of D (covering both the intermediate and the large-D regions). This kind of "equalization effect" occurs via a cooperative interplay between the average occupation and pacing degrees of spikes (i.e., the average fraction of firing neurons and the average degree of phase coherence between spikes in each synchronized stripe of the raster plot) in fast sparsely synchronized rhythms. Finally, the emergence of LTP and LTD of interpopulation synaptic strengths (which leads to the equalization effect) is investigated intensively via a microscopic method based on the distributions of time delays between pre- and post-synaptic spike times.
Affiliation(s)
- Sang-Yoon Kim
- Institute for Computational Neuroscience and Department of Science Education, Daegu National University of Education, Daegu 42411, Republic of Korea
- Woochang Lim
- Institute for Computational Neuroscience and Department of Science Education, Daegu National University of Education, Daegu 42411, Republic of Korea
6
Röhr V, Berner R, Lameu EL, Popovych OV, Yanchuk S. Frequency cluster formation and slow oscillations in neural populations with plasticity. PLoS One 2019;14:e0225094. PMID: 31725782; DOI: 10.1371/journal.pone.0225094.
Abstract
We report the phenomenon of frequency clustering in a network of Hodgkin-Huxley neurons with spike timing-dependent plasticity. The clustering leads to a splitting of a neural population into a few groups synchronized at different frequencies. In this regime, the amplitude of the mean field undergoes low-frequency modulations, which may contribute to the mechanism of the emergence of slow oscillations of neural activity observed in spectral power of local field potentials or electroencephalographic signals at high frequencies. In addition to numerical simulations of such multi-clusters, we investigate the mechanisms of the observed phenomena using the simplest case of two clusters. In particular, we propose a phenomenological model which describes the dynamics of two clusters taking into account the adaptation of coupling weights. We also determine the set of plasticity functions (update rules), which lead to multi-clustering.
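The link between fast frequency clusters and slow mean-field modulation described above can be illustrated with a toy calculation (a deliberate simplification, not the paper's Hodgkin-Huxley network; the frequencies below are hypothetical): the mean field of two internally synchronized clusters locked at different frequencies is amplitude-modulated at their difference frequency.

```python
import numpy as np

# Two idealized frequency clusters, each perfectly synchronized internally,
# oscillating at f1 and f2 (hypothetical values in Hz).
f1, f2 = 10.0, 8.0
t = np.linspace(0.0, 5.0, 50_000)
mean_field = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)

# Product-to-sum identity: the sum equals a fast carrier at (f1 + f2)/2
# multiplied by a slow envelope at (f1 - f2)/2, so the mean-field amplitude
# is modulated at the slow difference frequency f1 - f2 = 2 Hz.
envelope = 2.0 * np.abs(np.cos(np.pi * (f1 - f2) * t))
assert np.all(np.abs(mean_field) <= envelope + 1e-9)
```

With more than two clusters the envelope is no longer a single cosine, but the same mechanism produces the low-frequency modulation of the mean-field amplitude that the abstract relates to slow oscillations in field potentials.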
7
Kim SY, Lim W. Burst synchronization in a scale-free neuronal network with inhibitory spike-timing-dependent plasticity. Cogn Neurodyn 2018;13:53-73. PMID: 30728871; DOI: 10.1007/s11571-018-9505-1.
Abstract
We study burst synchronization (BS), related to neural information processing in health and disease, in a Barabási-Albert scale-free network (SFN) composed of inhibitory bursting Hindmarsh-Rose neurons. This inhibitory neuronal population has adaptive dynamic synaptic strengths governed by inhibitory spike-timing-dependent plasticity (iSTDP). In previous works without iSTDP, BS was found to appear in a range of noise intensities for fixed synaptic inhibition strengths. In contrast, here we take iSTDP into consideration and investigate its effect on BS by varying the noise intensity. Our main new result is the occurrence of a Matthew effect in inhibitory synaptic plasticity: good BS gets better via LTD, while bad BS gets worse via LTP. This is in contrast to the Matthew effect in excitatory synaptic plasticity, where good (bad) synchronization gets better (worse) via LTP (LTD); due to inhibition, the roles of LTD and LTP in inhibitory synaptic plasticity are reversed relative to the excitatory case. Moreover, the emergence of LTD and LTP of synaptic inhibition strengths is investigated intensively via a microscopic method based on the distributions of time delays between pre- and post-synaptic burst onset times. Finally, in the presence of iSTDP, we investigate the effects of network architecture on BS by varying the symmetric attachment degree l* and the asymmetry parameter Δl in the SFN.
Affiliation(s)
- Sang-Yoon Kim
- Institute for Computational Neuroscience and Department of Science Education, Daegu National University of Education, Daegu 42411, Republic of Korea
- Woochang Lim
- Institute for Computational Neuroscience and Department of Science Education, Daegu National University of Education, Daegu 42411, Republic of Korea
8
Kim SY, Lim W. Effect of inhibitory spike-timing-dependent plasticity on fast sparsely synchronized rhythms in a small-world neuronal network. Neural Netw 2018;106:50-66. PMID: 30025272; DOI: 10.1016/j.neunet.2018.06.013.
Abstract
We consider a Watts-Strogatz small-world network (SWN) consisting of inhibitory fast-spiking Izhikevich interneurons. This inhibitory neuronal population has adaptive dynamic synaptic strengths governed by inhibitory spike-timing-dependent plasticity (iSTDP). In previous works without iSTDP, fast sparsely synchronized rhythms, associated with diverse cognitive functions, were found to appear in a range of large noise intensities for fixed strong synaptic inhibition strengths. Here, we investigate the effect of iSTDP on fast sparse synchronization (FSS) by varying the noise intensity D. We employ an asymmetric anti-Hebbian time window for the iSTDP update rule, in contrast to the Hebbian time window for excitatory STDP (eSTDP). Depending on the value of D, the population-averaged saturated synaptic inhibition strengths are potentiated [long-term potentiation (LTP)] or depressed [long-term depression (LTD)] relative to the initial mean value, and their dispersion is much increased relative to the initial dispersion, independently of D. In most cases of LTD, where the effect of the mean dominates the effect of dispersion, good synchronization (with a higher spiking measure) gets better via LTD, while bad synchronization (with a lower spiking measure) gets worse via LTP. This kind of Matthew effect in inhibitory synaptic plasticity is in contrast to that in excitatory synaptic plasticity, where good (bad) synchronization gets better (worse) via LTP (LTD). The emergence of LTD and LTP of synaptic inhibition strengths is investigated intensively via a microscopic method based on the distributions of time delays between pre- and post-synaptic spike times. Furthermore, we also investigate the effects of network architecture on FSS by changing the rewiring probability p of the SWN in the presence of iSTDP.
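The abstract contrasts an asymmetric anti-Hebbian iSTDP time window with the Hebbian eSTDP window but does not give a functional form. A common exponential parameterization can sketch the sign reversal (the function name and all parameter values below are assumptions, not taken from the paper):

```python
import math

def anti_hebbian_istdp(dt_ms, a_plus=0.005, a_minus=0.005, tau_ms=20.0):
    """Anti-Hebbian iSTDP window sketch; dt_ms = t_post - t_pre.

    The sign convention is reversed relative to a Hebbian eSTDP window:
    pre-before-post (dt > 0) depresses the inhibitory synapse (LTD),
    while post-before-pre (dt < 0) potentiates it (LTP).
    """
    if dt_ms > 0:
        return -a_minus * math.exp(-dt_ms / tau_ms)  # LTD branch
    return a_plus * math.exp(dt_ms / tau_ms)         # LTP branch
```

A Hebbian window would flip the two signs; in either case the magnitude of the weight change decays exponentially with the spike-time difference |dt|.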
Affiliation(s)
- Sang-Yoon Kim
- Institute for Computational Neuroscience and Department of Science Education, Daegu National University of Education, Daegu 42411, Republic of Korea
- Woochang Lim
- Institute for Computational Neuroscience and Department of Science Education, Daegu National University of Education, Daegu 42411, Republic of Korea
9
Kim SY, Lim W. Effect of spike-timing-dependent plasticity on stochastic burst synchronization in a scale-free neuronal network. Cogn Neurodyn 2018;12:315-342. PMID: 29765480; DOI: 10.1007/s11571-017-9470-0.
Abstract
We consider an excitatory population of subthreshold Izhikevich neurons which cannot fire spontaneously without noise. As the coupling strength passes a threshold, individual neurons exhibit noise-induced burstings. This neuronal population has adaptive dynamic synaptic strengths governed by spike-timing-dependent plasticity (STDP). However, STDP was not considered in previous works on stochastic burst synchronization (SBS) between noise-induced burstings of subthreshold neurons. Here, we study the effect of additive STDP on SBS by varying the noise intensity D in the Barabási-Albert scale-free network (SFN). One of our main findings is a Matthew effect in synaptic plasticity, which occurs due to a positive-feedback process: good burst synchronization (with a higher bursting measure) gets better via long-term potentiation (LTP) of synaptic strengths, while bad burst synchronization (with a lower bursting measure) gets worse via long-term depression (LTD). Consequently, a step-like rapid transition to SBS occurs as D is changed, in contrast to the relatively smooth transition in the absence of STDP. We also investigate the effects of network architecture on SBS by varying the symmetric attachment degree l* and the asymmetry parameter Δl in the SFN, and Matthew effects are also found to occur under these variations. Furthermore, the emergence of LTP and LTD of synaptic strengths is investigated in detail via our own microscopic methods based on both the distributions of time delays between the burst onset times of pre- and post-synaptic neurons and the pair correlations between pre- and post-synaptic instantaneous individual burst rates (IIBRs). Finally, a multiplicative STDP case (depending on states) with soft bounds is investigated in comparison with the additive STDP case (independent of states) with hard bounds. Due to the soft bounds, a Matthew effect with some quantitative differences is also found to occur in the multiplicative case.
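The additive/multiplicative distinction in the final sentences of the abstract can be made concrete with a generic sketch (the bound values are hypothetical, not the paper's parameters): an additive update ignores the current weight and relies on hard clipping, while a multiplicative (state-dependent) update scales the change by the distance to the bound, giving soft bounds.

```python
def additive_update(w, dw, w_min=0.0, w_max=1.0):
    # Weight-independent change, then hard clipping to [w_min, w_max].
    return min(max(w + dw, w_min), w_max)

def multiplicative_update(w, dw, w_min=0.0, w_max=1.0):
    # Change scales with the remaining headroom (soft bounds): updates
    # slow down smoothly as w approaches either bound.
    if dw >= 0:
        return w + dw * (w_max - w)
    return w + dw * (w - w_min)
```

With dw in [-1, 1] and w in [w_min, w_max], the multiplicative rule stays inside the bounds without clipping, whereas the additive rule piles weights up at the bounds; this difference in the steady-state weight distribution is one source of the quantitative differences the abstract mentions.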
10
Kim SY, Lim W. Stochastic spike synchronization in a small-world neural network with spike-timing-dependent plasticity. Neural Netw 2017;97:92-106. PMID: 29096205; DOI: 10.1016/j.neunet.2017.09.016.
Abstract
We consider the Watts-Strogatz small-world network (SWN) consisting of subthreshold neurons which exhibit noise-induced spikings. This neuronal network has adaptive dynamic synaptic strengths governed by the spike-timing-dependent plasticity (STDP). In previous works without STDP, stochastic spike synchronization (SSS) between noise-induced spikings of subthreshold neurons was found to occur in a range of intermediate noise intensities. Here, we investigate the effect of additive STDP on the SSS by varying the noise intensity. Occurrence of a "Matthew" effect in synaptic plasticity is found due to a positive feedback process. As a result, good synchronization gets better via long-term potentiation of synaptic strengths, while bad synchronization gets worse via long-term depression. Emergences of long-term potentiation and long-term depression of synaptic strengths are intensively investigated via microscopic studies based on the pair-correlations between the pre- and the post-synaptic IISRs (instantaneous individual spike rates) as well as the distributions of time delays between the pre- and the post-synaptic spike times. Furthermore, the effects of multiplicative STDP (which depends on states) on the SSS are studied and discussed in comparison with the case of additive STDP (independent of states). These effects of STDP on the SSS in the SWN are also compared with those in the regular lattice and the random graph.
Affiliation(s)
- Sang-Yoon Kim
- Institute for Computational Neuroscience and Department of Science Education, Daegu National University of Education, Daegu 42411, Republic of Korea
- Woochang Lim
- Institute for Computational Neuroscience and Department of Science Education, Daegu National University of Education, Daegu 42411, Republic of Korea
11
Suen JY, Navlakha S. Using inspiration from synaptic plasticity rules to optimize traffic flow in distributed engineered networks. Neural Comput 2017;29:1204-1228. DOI: 10.1162/neco_a_00945.
Abstract
Controlling the flow and routing of data is a fundamental problem in many distributed networks, including transportation systems, integrated circuits, and the Internet. In the brain, synaptic plasticity rules have been discovered that regulate network activity in response to environmental inputs, which enable circuits to be stable yet flexible. Here, we develop a new neuro-inspired model for network flow control that depends only on modifying edge weights in an activity-dependent manner. We show how two fundamental plasticity rules, long-term potentiation and long-term depression, can be cast as a distributed gradient descent algorithm for regulating traffic flow in engineered networks. We then characterize, both by simulation and analytically, how different forms of edge-weight-update rules affect network routing efficiency and robustness. We find a close correspondence between certain classes of synaptic weight update rules derived experimentally in the brain and rules commonly used in engineering, suggesting common principles to both.
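The activity-dependent edge-weight idea can be sketched minimally (my own illustration under assumed rules and learning rates, not the paper's algorithm): strengthen edges in proportion to their recent load, decay idle ones, using only information locally available at each edge.

```python
def step_edge_weights(weights, traffic, lr_pot=0.1, lr_dep=0.05):
    """One activity-dependent update of edge weights (hypothetical rates).

    weights: {edge: weight}; traffic: {edge: recent load}. Edges that
    carried traffic are strengthened (LTP-like), idle edges decay
    multiplicatively (LTD-like). Each edge uses only its own activity,
    so the rule is fully distributed.
    """
    updated = {}
    for edge, w in weights.items():
        load = traffic.get(edge, 0.0)
        if load > 0.0:
            updated[edge] = w + lr_pot * load      # potentiate active edges
        else:
            updated[edge] = (1.0 - lr_dep) * w     # depress idle edges
    return updated
```

Iterating such local updates shifts routing preference toward frequently used edges while letting unused capacity fade, which is the qualitative behavior the paper analyzes in gradient-descent terms.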
Affiliation(s)
- Jonathan Y. Suen
- Department of Electrical and Computer Engineering, Duke University, Durham, NC 27708, U.S.A.
- Saket Navlakha
- Integrative Biology Laboratory, Salk Institute for Biological Studies, La Jolla, CA 92037, U.S.A.
12
Bouchard KE, Ganguli S, Brainard MS. Role of the site of synaptic competition and the balance of learning forces for Hebbian encoding of probabilistic Markov sequences. Front Comput Neurosci 2015;9:92. PMID: 26257637; PMCID: PMC4508839; DOI: 10.3389/fncom.2015.00092.
Abstract
The majority of distinct sensory and motor events occur as temporally ordered sequences with rich probabilistic structure. Sequences can be characterized by the probability of transitioning from the current state to upcoming states (forward probability), as well as the probability of having transitioned to the current state from previous states (backward probability). Despite the prevalence of probabilistic sequencing of both sensory and motor events, the Hebbian mechanisms that mold synapses to reflect the statistics of experienced probabilistic sequences are not well understood. Here, we show through analytic calculations and numerical simulations that Hebbian plasticity (correlation, covariance, and STDP) with pre-synaptic competition can develop synaptic weights equal to the conditional forward transition probabilities present in the input sequence. In contrast, post-synaptic competition can develop synaptic weights proportional to the conditional backward probabilities of the same input sequence. We demonstrate that to stably reflect the conditional probability of a neuron's inputs and outputs, local Hebbian plasticity requires balance between competitive learning forces that promote synaptic differentiation and homogenizing learning forces that promote synaptic stabilization. The balance between these forces dictates a prior over the distribution of learned synaptic weights, strongly influencing both the rate at which structure emerges and the entropy of the final distribution of synaptic weights. Together, these results demonstrate a simple correspondence between the biophysical organization of neurons, the site of synaptic competition, and the temporal flow of information encoded in synaptic weights by Hebbian plasticity while highlighting the utility of balancing learning forces to accurately encode probability distributions, and prior expectations over such probability distributions.
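The core claim above — that pre-synaptic competition drives weights toward conditional forward transition probabilities — can be illustrated with an idealized counting model (an illustration of the limit point, not the paper's plasticity dynamics; the function name is hypothetical):

```python
from collections import defaultdict

def forward_transition_weights(sequence):
    """Hebbian co-activation counts normalized over each source state's
    outgoing connections (pre-synaptic competition). The result equals
    the conditional forward probabilities P(next | current) observed in
    the sequence; normalizing over incoming connections (post-synaptic
    competition) would instead yield backward probabilities."""
    counts = defaultdict(lambda: defaultdict(float))
    for pre, post in zip(sequence, sequence[1:]):
        counts[pre][post] += 1.0  # Hebbian: pre fires, then post fires
    weights = {}
    for pre, outgoing in counts.items():
        total = sum(outgoing.values())  # competition among pre's outputs
        weights[pre] = {post: c / total for post, c in outgoing.items()}
    return weights
```

For the sequence A, B, A, C, A, B the weights leaving A are P(B|A) = 2/3 and P(C|A) = 1/3, matching the forward transition statistics of the input.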
Affiliation(s)
- Kristofer E Bouchard
- Life Sciences and Computational Research Divisions, Lawrence Berkeley National Laboratory, Berkeley, CA, USA
- Surya Ganguli
- Department of Applied Physics, Stanford University, Stanford, CA, USA
- Michael S Brainard
- Department of Physiology and Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, CA, USA; Howard Hughes Medical Institute, Chevy Chase, MD, USA
13
Abstract
Objective: Continuous application of high-frequency deep brain stimulation (DBS) often effectively reduces motor symptoms of Parkinson's disease patients. While there is a growing need for more effective and less traumatic stimulation, the exact mechanism of DBS is still unknown. Here, we present a methodology to exploit the plasticity of GABAergic synapses inside the external globus pallidus (GPe) for the optimization of DBS.
Approach: Assuming the existence of spike-timing-dependent plasticity (STDP) at GABAergic GPe-GPe synapses, we simulate neural activity in a network model of the subthalamic nucleus and GPe. In particular, we test different DBS protocols in our model and quantify their influence on neural synchrony.
Main results: For an exemplary set of biologically plausible model parameters, we show that STDP in the GPe has a direct influence on neural activity and especially on the stability of firing patterns. STDP stabilizes both uncorrelated firing in the healthy state and correlated firing in the parkinsonian state. Alternative stimulation protocols such as coordinated reset stimulation can clearly profit from the stabilizing effect of STDP. These results are largely independent of the STDP learning rule.
Significance: Once the model settings, e.g., connection architectures, have been described experimentally, our model can be adjusted and directly applied in the development of novel stimulation protocols. More efficient stimulation leads both to minimization of side effects and to savings in battery power.
Affiliation(s)
- Marcel A J Lourens
- MIRA: Institute for Biomedical Technology and Technical Medicine, University of Twente, Enschede, 7500 AE, The Netherlands
14
Abstract
Single spikes and their timing matter in changing synaptic efficacy, which is known as spike-timing-dependent plasticity (STDP). Most previous studies treated spikes as all-or-none events and considered their duration and magnitude negligible. Here we explore the effects of action potential (AP) duration on synaptic plasticity in a simplified model neuron using computer simulations. We propose a novel STDP model (dSTDP) that depresses synapses using an AP-duration-dependent LTD window and potentiates synaptic strength when presynaptic spikes arrive before or during a postsynaptic AP. We demonstrate that AP duration is another key factor for desensitizing postsynaptic firing and for controlling the shape of the synaptic weight distribution. Extended AP durations produce a wide unimodal weight distribution resembling those reported experimentally and leave the postsynaptic neuron quiescent when disturbed by Poisson noise spike trains, while keeping it equally sensitive to synchronized input. Our results suggest that the impact of AP duration on synaptic plasticity, modeled here as an AP-dependent STDP window, can be dramatic and should motivate future STDP studies.
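A rough sketch of such a duration-aware pair rule; all constants and the exact window shape are assumptions for illustration, not the paper's fitted dSTDP model:

```python
import math

def dstdp_dw(dt_ms, ap_dur_ms, a_plus=0.01, a_minus=0.012,
             tau_plus=20.0, tau_minus_base=20.0):
    """Duration-aware STDP sketch, with dt_ms = t_post_onset - t_pre.
    Presynaptic spikes arriving before AP onset (dt >= 0) or during the
    AP (dt in [-ap_dur_ms, 0)) potentiate; later arrivals are depressed
    through an LTD window whose time constant grows with AP duration."""
    if dt_ms >= 0.0:                     # pre before AP onset: decaying LTP
        return a_plus * math.exp(-dt_ms / tau_plus)
    if dt_ms >= -ap_dur_ms:              # pre during the AP: full LTP
        return a_plus
    # pre after the AP has ended: AP-duration-dependent LTD window
    tau_minus = tau_minus_base + ap_dur_ms
    return -a_minus * math.exp((dt_ms + ap_dur_ms) / tau_minus)
```

The key qualitative property is that broadening the AP both widens the potentiation region and stretches the depression window.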
Affiliation(s)
- Youwei Zheng
- Faculty of Computer Science and Electrical Engineering, University of Rostock, Rostock, Germany
- Lars Schwabe
- Faculty of Computer Science and Electrical Engineering, University of Rostock, Rostock, Germany
15
Abstract
Although sleep is a fundamental behavior observed in virtually all animal species, its functions remain unclear. One leading proposal, known as the synaptic renormalization hypothesis, suggests that sleep is necessary to counteract a global strengthening of synapses that occurs during wakefulness. Evidence for sleep-dependent synaptic downscaling (or synaptic renormalization) has been observed experimentally, but the physiological mechanisms which generate this phenomenon are unknown. In this study, we propose that changes in neuronal membrane excitability induced by acetylcholine may provide a dynamical mechanism for both wake-dependent synaptic upscaling and sleep-dependent downscaling. We show in silico that cholinergically-induced changes in network firing patterns alter overall network synaptic potentiation when synaptic strengths evolve through spike-timing dependent plasticity mechanisms. Specifically, network synaptic potentiation increases dramatically with high cholinergic concentration and decreases dramatically with low levels of acetylcholine. We demonstrate that this phenomenon is robust across variation of many different network parameters.
Affiliation(s)
- Christian G Fink
- Department of Physics, University of Michigan, Ann Arbor, Michigan, USA.
16
Bol K, Marsat G, Mejias JF, Maler L, Longtin A. Modeling cancelation of periodic inputs with burst-STDP and feedback. Neural Netw 2013; 47:120-33. [PMID: 23332545] [DOI: 10.1016/j.neunet.2012.12.011]
Abstract
Prediction and cancelation of redundant information is an important feature that many neural systems must display in order to efficiently code external signals. We develop an analytic framework for such cancelation in sensory neurons produced by a cerebellar-like structure in wave-type electric fish. Our biologically plausible mechanism is motivated by experimental evidence of cancelation of periodic input arising from the proximity of conspecifics as well as tail motion. This mechanism involves elements present in a wide range of systems: (1) stimulus-driven feedback to the neurons acting as detectors, (2) a large variety of temporal delays in the pathways transmitting such feedback, responsible for producing frequency channels, and (3) burst-induced long-term plasticity. The bursting arises from back-propagating action potentials. Bursting events drive the input frequency-dependent learning rule, which in turn affects the feedback input and thus the burst rate. We show how the mean firing rate and the rate of production of 2- and 4-spike bursts (the main learning events) can be estimated analytically for a leaky integrate-and-fire model driven by (slow) sinusoidal, back-propagating and feedback inputs as well as rectified filtered noise. The effect of bursts on the average synaptic strength is also derived. Our results shed light on why bursts rather than single spikes can drive learning in such networks "online", i.e. in the absence of a correlative discharge. Phase locked spiking in frequency specific channels together with a frequency-dependent STDP window size regulate burst probability and duration self-consistently to implement cancelation.
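The analytic results above concern a leaky integrate-and-fire (LIF) neuron under slow sinusoidal drive. A bare-bones numerical version, with illustrative parameters that are not taken from the paper:

```python
import math

def lif_spike_times(t_max_s=1.0, dt_s=1e-4, tau_s=0.02, v_th=1.0,
                    i_dc=1.2, i_amp=0.5, f_hz=5.0):
    """Euler-integrated leaky integrate-and-fire neuron,
    dv/dt = (-v + I(t)) / tau, driven by a DC plus slow sinusoidal
    current; returns the spike times. All parameters are illustrative."""
    v, spikes = 0.0, []
    for k in range(int(t_max_s / dt_s)):
        t = k * dt_s
        i_t = i_dc + i_amp * math.sin(2 * math.pi * f_hz * t)
        v += dt_s / tau_s * (-v + i_t)   # leaky integration step
        if v >= v_th:
            spikes.append(t)
            v = 0.0                      # reset after a spike
    return spikes

spikes = lif_spike_times()
```

Because the drive only crosses threshold on part of each cycle, firing phase-locks to the slow sinusoid — the raw material for the frequency-channel analysis described in the abstract.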
Affiliation(s)
- K Bol
- Department of Physics, University of Ottawa, K1N 6N5 Ottawa, Canada
17
Chersi F, Mirolli M, Pezzulo G, Baldassarre G. A spiking neuron model of the cortico-basal ganglia circuits for goal-directed and habitual action learning. Neural Netw 2012; 41:212-24. [PMID: 23266482] [DOI: 10.1016/j.neunet.2012.11.009]
Abstract
Dual-system theories postulate that actions are supported either by a goal-directed or by a habit-driven response system. Neuroimaging and anatomo-functional studies have provided evidence that the prefrontal cortex plays a fundamental role in the first type of action control, while internal brain areas such as the basal ganglia are more active during habitual and overtrained responses. Additionally, it has been shown that areas of the cortex and the basal ganglia are connected through multiple parallel "channels", which are thought to function as an action selection mechanism resolving competitions between alternative options available in a given context. In this paper we propose a multi-layer network of spiking neurons that implements in detail the thalamo-cortical circuits that are believed to be involved in action learning and execution. A key feature of this model is that neurons are organized in small pools in the motor cortex and form independent loops with specific pools of the basal ganglia, where inhibitory circuits implement a multistep selection mechanism. The model has been validated by using it to control the actions of a virtual monkey that must learn to turn on briefly flashing lights by pressing the corresponding buttons on a board. Once the animal can fluently execute the task, the button-light associations are remapped so that it has to suppress its habitual behavior in order to execute goal-directed actions. The model nicely shows how sensory-motor associations for action sequences are formed at the cortico-basal ganglia level and how goal-directed decisions may override automatic motor responses.
Affiliation(s)
- Fabian Chersi
- Institute of Cognitive Sciences and Technologies, National Research Council, Via San Martino della Battaglia 44, 00185 Roma, Italy.
18
Kubota S. Activity-dependent competition regulated by nonlinear interspike interaction in STDP: a model for visual cortical plasticity. Artif Life Robotics 2012; 17:152-157. [DOI: 10.1007/s10015-012-0029-1]
19
Abstract
Coordinated reset (CR) stimulation is a desynchronizing stimulation technique based on timely coordinated phase resets of sub-populations of a synchronized neuronal ensemble. It has initially been computationally developed for electrical deep brain stimulation (DBS), to enable an effective desynchronization and unlearning of pathological synchrony and connectivity (anti-kindling). Here we computationally show for ensembles of spiking and bursting model neurons interacting via excitatory and inhibitory adaptive synapses that a phase reset of neuronal populations as well as a desynchronization and an anti-kindling can robustly be achieved by direct electrical stimulation or indirect (synaptically-mediated) excitatory and inhibitory stimulation. Our findings are relevant for DBS as well as for sensory stimulation in neurological disorders characterized by pathological neuronal synchrony. Based on the obtained results, we may expect that the local effects in the vicinity of a depth electrode (realized by direct stimulation of the neurons' somata or stimulation of axon terminals) and the non-local CR effects (realized by stimulation of excitatory or inhibitory efferent fibers) of deep brain CR neuromodulation may be similar or even identical. Furthermore, our results indicate that an effective desynchronization and anti-kindling can even be achieved by non-invasive, sensory CR neuromodulation. We discuss the concept of sensory CR neuromodulation in the context of neurological disorders.
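The timing pattern that defines coordinated reset — one staggered reset pulse per sub-population within each stimulation period — can be written down directly. This is a schematic schedule generator with illustrative values, not the authors' protocol:

```python
def cr_schedule(n_subpops=4, period_ms=50.0, n_cycles=3):
    """Coordinated-reset schedule: within each stimulation period, each
    of the n_subpops sub-populations receives one reset pulse, staggered
    by period/n_subpops, so the resets spread the sub-clusters evenly in
    phase. Returns (pulse_time_ms, target_subpopulation) pairs."""
    slot = period_ms / n_subpops
    return [(cycle * period_ms + s * slot, s)
            for cycle in range(n_cycles)
            for s in range(n_subpops)]

pulses = cr_schedule()
```

Whether the pulses are delivered directly (electrically) or indirectly via excitatory or inhibitory fibers changes the physiology, not this timing skeleton — which is the point the abstract makes.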
Affiliation(s)
- Oleksandr V Popovych
- Research Center Jülich, Institute of Neuroscience and Medicine - Neuromodulation (INM-7), Jülich, Germany
20
Leen TK, Friel R, Nielsen D. Approximating distributions in stochastic learning. Neural Netw 2012; 32:219-28. [PMID: 22418034] [DOI: 10.1016/j.neunet.2012.02.006]
Abstract
On-line machine learning algorithms, many biological spike-timing-dependent plasticity (STDP) learning rules, and stochastic neural dynamics evolve by Markov processes. A complete description of such systems gives the probability densities for the variables. The evolution and equilibrium state of these densities are given by a Chapman-Kolmogorov equation in discrete time, or a master equation in continuous time. These formulations are analytically intractable for most cases of interest, and to make progress a nonlinear Fokker-Planck equation (FPE) is often used in their place. The FPE is limited, and some argue that its application to describe jump processes (such as in these problems) is fundamentally flawed. We develop a well-grounded perturbation expansion that provides approximations for both the density and its moments. The approach is based on the system size expansion in statistical physics (which does not give approximations for the density), but our simple development makes the methods accessible and invites application to diverse problems. We apply the method to calculate the equilibrium distributions for two biologically-observed STDP learning rules and for a simple nonlinear machine-learning problem. In all three examples, we show that our perturbation series provides good agreement with Monte-Carlo simulations in regimes where the FPE breaks down.
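In standard notation (not necessarily the paper's), the master equation for the weight density under a jump process with transition rate $W(w' \mid w)$, and its second-order Fokker-Planck truncation via the first two jump moments, read:

```latex
\frac{\partial P(w,t)}{\partial t}
  = \int \left[ W(w \mid w')\,P(w',t) - W(w' \mid w)\,P(w,t) \right] \mathrm{d}w'
  \;\approx\;
  -\frac{\partial}{\partial w}\bigl[ A_1(w)\,P(w,t) \bigr]
  + \frac{1}{2}\,\frac{\partial^2}{\partial w^2}\bigl[ A_2(w)\,P(w,t) \bigr],
\qquad
A_n(w) = \int (w' - w)^n\, W(w' \mid w)\,\mathrm{d}w'.
```

The perturbation expansion advocated in the abstract instead expands systematically in a small parameter (such as the learning rate), which keeps the density approximation controlled in exactly the regimes where this truncation breaks down.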
21
Burbank KS, Kreiman G. Depression-biased reverse plasticity rule is required for stable learning at top-down connections. PLoS Comput Biol 2012; 8:e1002393. [PMID: 22396630] [PMCID: PMC3291526] [DOI: 10.1371/journal.pcbi.1002393]
Abstract
Top-down synapses are ubiquitous throughout neocortex and play a central role in cognition, yet little is known about their development and specificity. During sensory experience, lower neocortical areas are activated before higher ones, causing top-down synapses to experience a preponderance of post-synaptic activity preceding pre-synaptic activity. This timing pattern is the opposite of that experienced by bottom-up synapses, which suggests that different versions of spike-timing dependent synaptic plasticity (STDP) rules may be required at top-down synapses. We consider a two-layer neural network model and investigate which STDP rules can lead to a distribution of top-down synaptic weights that is stable, diverse and avoids strong loops. We introduce a temporally reversed rule (rSTDP) where top-down synapses are potentiated if post-synaptic activity precedes pre-synaptic activity. Combining analytical work and integrate-and-fire simulations, we show that only depression-biased rSTDP (and not classical STDP) produces stable and diverse top-down weights. The conclusions did not change upon addition of homeostatic mechanisms, multiplicative STDP rules or weak external input to the top neurons. Our prediction for rSTDP at top-down synapses, which are distally located, is supported by recent neurophysiological evidence showing the existence of temporally reversed STDP in synapses that are distal to the post-synaptic cell body. The complex circuitry in the cerebral cortex is characterized by bottom-up connections, which carry feedforward information from the sensory periphery to higher areas, and top-down connections, where the information flow is reversed. Changes over time in the strength of synaptic connections between neurons underlie development, learning and memory. 
A fundamental mechanism to change synaptic strength is spike timing dependent plasticity, whereby synapses are strengthened whenever pre-synaptic spikes shortly precede post-synaptic spikes and are weakened otherwise; the relative timing of spikes therefore dictates the direction of plasticity. Spike timing dependent plasticity has been observed in multiple species and different brain areas. Here, we argue that top-down connections obey a learning rule with a reversed temporal dependence, which we call reverse spike timing dependent plasticity. We use mathematical analysis and computational simulations to show that this reverse time learning rule, and not previous learning rules, leads to a biologically plausible connectivity pattern with stable synaptic strengths. This reverse time learning rule is supported by recent neuroanatomical and neurophysiological experiments and can explain empirical observations about the development and function of top-down synapses in the brain.
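A minimal sketch of the rSTDP window (constants are illustrative; the paper's analysis uses its own parameterization):

```python
import math

def rstdp_dw(dt_ms, a_plus=0.010, a_minus=0.013, tau_ms=20.0):
    """Depression-biased reverse STDP (rSTDP) window sketch, with
    dt_ms = t_post - t_pre: potentiation when the POST-synaptic spike
    comes first (dt < 0), depression when the pre-synaptic spike comes
    first -- the mirror image of the classical window. Choosing
    a_minus > a_plus gives the depression bias."""
    if dt_ms < 0:
        return a_plus * math.exp(dt_ms / tau_ms)     # post-before-pre: LTP
    return -a_minus * math.exp(-dt_ms / tau_ms)      # pre-before-post: LTD
```

Because sensory activity reaches lower areas first, top-down synapses mostly see post-before-pre pairings, so this reversed window potentiates exactly the pairings those synapses routinely experience.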
Affiliation(s)
- Kendra S. Burbank
- Department of Neurology and Ophthalmology, Children's Hospital Boston, Harvard Medical School, Boston, Massachusetts, United States of America
- Gabriel Kreiman
- Department of Neurology and Ophthalmology, Children's Hospital Boston, Harvard Medical School, Boston, Massachusetts, United States of America
- Center for Brain Science, Harvard University, Cambridge, Massachusetts, United States of America
- Swartz Center for Theoretical Neuroscience, Harvard University, Cambridge, Massachusetts, United States of America
22
Abstract
Online machine learning rules and many biological spike-timing-dependent plasticity (STDP) learning rules generate jump process Markov chains for the synaptic weights. We give a perturbation expansion for the dynamics that, unlike the usual approximation by a Fokker-Planck equation (FPE), is well justified. Our approach extends the related system size expansion by giving an expansion for the probability density as well as its moments. We apply the approach to two observed STDP learning rules and show that in regimes where the FPE breaks down, the new perturbation expansion agrees well with Monte Carlo simulations. The methods are also applicable to the dynamics of stochastic neural activity. Like previous ensemble analyses of STDP, we focus on equilibrium solutions, although the methods can in principle be applied to transients as well.
Affiliation(s)
- Todd K Leen
- Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239, USA.
23
Abstract
The inferior part of the parietal lobe (IPL) is known to play a very important role in sensorimotor integration. Neurons in this region code goal-related motor acts performed with the mouth, with the hand and with the arm. It has been demonstrated that most IPL motor neurons coding a specific motor act (e.g., grasping) show markedly different activation patterns according to the final goal of the action sequence in which the act is embedded (grasping for eating or grasping for placing). Some of these neurons (parietal mirror neurons) show a similar selectivity also during the observation of the same action sequences when executed by others. Thus, it appears that the neuronal response occurring during the execution and the observation of a specific grasping act codes not only the executed motor act, but also the agent's final goal (intention). In this work we present a biologically inspired neural network architecture that models mechanisms of motor sequences execution and recognition. In this network, pools composed of motor and mirror neurons that encode motor acts of a sequence are arranged in form of action goal-specific neuronal chains. The execution and the recognition of actions is achieved through the propagation of activity bursts along specific chains modulated by visual and somatosensory inputs. The implemented spiking neuron network is able to reproduce the results found in neurophysiological recordings of parietal neurons during task performance and provides a biologically plausible implementation of the action selection and recognition process. Finally, the present paper proposes a mechanism for the formation of new neural chains by linking together in a sequential manner neurons that represent subsequent motor acts, thus producing goal-directed sequences.
Affiliation(s)
- Fabian Chersi
- Institute of Science and Technology of Cognition, CNR, Rome, Italy.
24
Rachmuth G, Shouval HZ, Bear MF, Poon CS. A biophysically-based neuromorphic model of spike rate- and timing-dependent plasticity. Proc Natl Acad Sci U S A 2011; 108:E1266-74. [PMID: 22089232] [DOI: 10.1073/pnas.1106161108]
Abstract
Current advances in neuromorphic engineering have made it possible to emulate complex neuronal ion channel and intracellular ionic dynamics in real time using highly compact and power-efficient complementary metal-oxide-semiconductor (CMOS) analog very-large-scale-integrated circuit technology. Recently, there has been growing interest in the neuromorphic emulation of the spike-timing-dependent plasticity (STDP) Hebbian learning rule by phenomenological modeling using CMOS, memristor or other analog devices. Here, we propose a CMOS circuit implementation of a biophysically grounded neuromorphic (iono-neuromorphic) model of synaptic plasticity that is capable of capturing both the spike rate-dependent plasticity (SRDP, of the Bienenstock-Cooper-Munro or BCM type) and STDP rules. The iono-neuromorphic model reproduces bidirectional synaptic changes with NMDA receptor-dependent and intracellular calcium-mediated long-term potentiation or long-term depression assuming retrograde endocannabinoid signaling as a second coincidence detector. Changes in excitatory or inhibitory synaptic weights are registered and stored in a nonvolatile and compact digital format analogous to the discrete insertion and removal of AMPA or GABA receptor channels. The versatile Hebbian synapse device is applicable to a variety of neuroprosthesis, brain-machine interface, neurorobotics, neuromimetic computation, machine learning, and neural-inspired adaptive control problems.
25
Monaco JD, Knierim JJ, Zhang K. Sensory feedback, error correction, and remapping in a multiple oscillator model of place-cell activity. Front Comput Neurosci 2011; 5:39. [PMID: 21994494] [PMCID: PMC3182374] [DOI: 10.3389/fncom.2011.00039]
Abstract
Mammals navigate by integrating self-motion signals ("path integration") and occasionally fixing on familiar environmental landmarks. The rat hippocampus is a model system of spatial representation in which place cells are thought to integrate both sensory and spatial information from entorhinal cortex. The localized firing fields of hippocampal place cells and entorhinal grid-cells demonstrate a phase relationship with the local theta (6-10 Hz) rhythm that may be a temporal signature of path integration. However, encoding self-motion in the phase of theta oscillations requires high temporal precision and is susceptible to idiothetic noise, neuronal variability, and a changing environment. We present a model based on oscillatory interference theory, previously studied in the context of grid cells, in which transient temporal synchronization among a pool of path-integrating theta oscillators produces hippocampal-like place fields. We hypothesize that a spatiotemporally extended sensory interaction with external cues modulates feedback to the theta oscillators. We implement a form of this cue-driven feedback and show that it can retrieve fixed points in the phase code of position. A single cue can smoothly reset oscillator phases to correct for both systematic errors and continuous noise in path integration. Further, simulations in which local and global cues are rotated against each other reveal a phase-code mechanism in which conflicting cue arrangements can reproduce experimentally observed distributions of "partial remapping" responses. This abstract model demonstrates that phase-code feedback can provide stability to the temporal coding of position during navigation and may contribute to the context-dependence of hippocampal spatial representations. While the anatomical substrates of these processes have not been fully characterized, our findings suggest several signatures that can be evaluated in future experiments.
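The core of oscillatory interference can be sketched in one dimension: a baseline theta oscillator plus an oscillator whose phase additionally integrates distance travelled. Where the two drift into antiphase the summed signal vanishes, producing a spatially periodic envelope (constants below are illustrative, not the model's fitted values):

```python
import math

def oi_sum(t_s, pos_cm, f_theta_hz=8.0, beta_per_cm=0.1):
    """Baseline theta oscillator plus a 'velocity-controlled' oscillator
    whose phase has accumulated 2*pi*beta per cm of travel; the sum has
    an envelope that repeats every 1/beta cm (here every 10 cm)."""
    baseline = math.cos(2 * math.pi * f_theta_hz * t_s)
    active = math.cos(2 * math.pi * f_theta_hz * t_s
                      + 2 * math.pi * beta_per_cm * pos_cm)
    return baseline + active
```

The paper's contribution is what happens on top of this: sensory cues feed back onto the oscillator phases, correcting the path-integration errors that would otherwise accumulate in the spatial phase code.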
Affiliation(s)
- Joseph D Monaco
- Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD, USA
26
Nathan A, Barbosa VC. Network algorithmics and the emergence of information integration in cortical models. Phys Rev E Stat Nonlin Soft Matter Phys 2011; 84:011904. [PMID: 21867210] [DOI: 10.1103/physreve.84.011904]
Abstract
An information-theoretic framework known as integrated information theory (IIT) has been introduced recently for the study of the emergence of consciousness in the brain [D. Balduzzi and G. Tononi, PLoS Comput. Biol. 4, e1000091 (2008)]. IIT purports that this phenomenon is to be equated with the generation of information by the brain surpassing the information that the brain's constituents already generate independently of one another. IIT is not fully plausible in its modeling assumptions, nor is it testable, owing to the severe combinatorial growth embedded in its key definitions. Here we introduce an alternative to IIT which, while inspired by similar information-theoretic principles, seeks to address some of IIT's shortcomings. Our alternative framework uses the same network-algorithmic cortical model we introduced earlier [A. Nathan and V. C. Barbosa, Phys. Rev. E 81, 021916 (2010)] and, to allow for somewhat improved testability relative to IIT, adopts the well-known notions of information gain and total correlation applied to a set of variables representing the reachability of neurons by messages in the model's dynamics. We argue that these two quantities relate to each other in such a way that can be used to quantify the system's efficiency in generating information beyond that which does not depend on integration. We give computational results on our cortical model and on variants thereof that are either structurally random in the sense of an Erdős-Rényi random directed graph or structurally deterministic. We have found that our cortical model stands out with respect to the others in the sense that many of its instances are capable of integrating information more efficiently than most of those others' instances.
Affiliation(s)
- Andre Nathan
- Programa de Engenharia de Sistemas e Computação, COPPE, Universidade Federal do Rio de Janeiro, Caixa Postal 68511, 21941-972 Rio de Janeiro, RJ, Brazil
27
Abstract
Plastic changes in synaptic efficacy can depend on the time ordering of presynaptic and postsynaptic spikes. This phenomenon is called spike-timing-dependent plasticity (STDP). One of the most striking aspects of this plasticity mechanism is that the STDP windows display a great variety of forms in different parts of the nervous system. We explore this issue from a theoretical point of view. We choose as the optimization principle the minimization of conditional entropy or maximization of reliability in the transmission of information. We apply this principle to two types of postsynaptic dynamics, designated type I and type II. The first is characterized as being an integrator, while the second is a resonator. We find that, depending on the parameters of the models, the optimization principle can give rise to a wide variety of STDP windows, such as antisymmetric Hebbian, predominantly depressing or symmetric with one positive region and two lateral negative regions. We can relate each of these forms to the dynamical behavior of the different models. We also propose experimental tests to assess the validity of the optimization principle.
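One of the window shapes mentioned — symmetric, with one positive region and two lateral negative regions — can be written as a difference of Gaussians. The constants here are illustrative, not derived from the paper's optimization principle:

```python
import math

def symmetric_window(dt_ms, a_ltp=0.020, sigma_ltp=10.0,
                     a_ltd=0.010, sigma_ltd=30.0):
    """Difference-of-Gaussians STDP window: narrow potentiation around
    coincident pre/post spikes, broader depression that dominates at
    larger |dt|, giving a central positive region flanked by two
    lateral negative regions."""
    ltp = a_ltp * math.exp(-(dt_ms / sigma_ltp) ** 2)
    ltd = a_ltd * math.exp(-(dt_ms / sigma_ltd) ** 2)
    return ltp - ltd
```

Antisymmetric Hebbian or predominantly depressing windows follow from the same approach with sign-dependent or unbalanced amplitude choices.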
Affiliation(s)
- R Rossi Pool
- Comisión Nacional de Energía Atómica and CONICET, Centro Atómico Bariloche and Instituto Balseiro, 8400 San Carlos de Bariloche, RN, Argentina.
28
Zheng Y, Schwabe L. Knowledge Representation Meets Simulation to Investigate Memory Problems after Seizures. Brain Inform 2011. [DOI: 10.1007/978-3-642-23605-1_11]
29
Abstract
Adaptive sensory processing influences the central nervous system's interpretation of incoming sensory information. One of the functions of this adaptive sensory processing is to allow the nervous system to ignore predictable sensory information so that it may focus on important novel information needed to improve performance of specific tasks. The mechanism of spike-timing-dependent plasticity (STDP) has proven to be intriguing in this context because of its dual role in long-term memory and ongoing adaptation to maintain optimal tuning of neural responses. Some of the clearest links between STDP and adaptive sensory processing have come from in vitro, in vivo, and modeling studies of the electrosensory systems of weakly electric fish. Plasticity in these systems is anti-Hebbian, so that presynaptic inputs that repeatedly precede, and possibly could contribute to, a postsynaptic neuron's firing are weakened. The learning dynamics of anti-Hebbian STDP learning rules are stable if the timing relations obey strict constraints. The stability of these learning rules leads to clear predictions of how functional consequences can arise from the detailed structure of the plasticity. Here we review the connection between theoretical predictions and functional consequences of anti-Hebbian STDP, focusing on adaptive processing in the electrosensory system of weakly electric fish. After introducing electrosensory adaptive processing and the dynamics of anti-Hebbian STDP learning rules, we address issues of predictive sensory cancelation and novelty detection, descending control of plasticity, synaptic scaling, and optimal sensory tuning. We conclude with examples in other systems where these principles may apply.
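The hallmark of stable anti-Hebbian STDP — convergence of time-locked feedback onto a "negative image" of a predictable signal — can be shown in a few lines. This is a toy rate-based sketch with assumed learning-rate and epoch values, not the spiking models reviewed:

```python
def learn_negative_image(signal, n_epochs=200, lr=0.05):
    """Anti-Hebbian sketch of predictive cancelation: on every repetition
    of the time-locked signal, each feedback weight is depressed in
    proportion to the residual postsynaptic response, so the feedback
    converges to the 'negative image' -signal and the prediction is
    subtracted from the input."""
    w = [0.0] * len(signal)
    for _ in range(n_epochs):
        residual = [s + wi for s, wi in zip(signal, w)]  # sensory + feedback
        w = [wi - lr * r for wi, r in zip(w, residual)]  # anti-Hebbian update
    return w

weights = learn_negative_image([1.0, -0.5, 2.0])
```

Once the residual is near zero for the predictable component, any novel input stands out — the novelty-detection function the review emphasizes.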
Affiliation(s)
- Patrick D Roberts
- Biomedical Engineering, Oregon Health and Science University, Portland, OR, USA
30
Abstract
Classically, action-potential-based learning paradigms such as the Bienenstock-Cooper-Munro (BCM) rule for pulse rates or spike-timing-dependent plasticity for pulse pairings have been experimentally demonstrated to evoke long-lasting synaptic weight changes (i.e., plasticity). However, several recent experiments have shown that plasticity also depends on the local dynamics at the synapse, such as membrane voltage, calcium time course and level, or dendritic spikes. In this paper, we introduce a formulation of the BCM rule which is based on the instantaneous postsynaptic membrane potential as well as the transmission profile of the presynaptic spike. While this rule incorporates only simple local voltage and current dynamics and is thus neither directly rate nor timing based, it can replicate a range of experiments, such as various rate and spike pairing protocols, combinations of the two, as well as voltage-dependent plasticity. A detailed comparison of current plasticity models with respect to this range of experiments also demonstrates the efficacy of the new plasticity rule. All experiments can be replicated with a limited set of parameters, avoiding the overfitting problem of more involved plasticity rules.
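For reference, the classical rate-based BCM rule that the paper reformulates in voltage terms looks like this (constants illustrative):

```python
def bcm_step(w, pre, post, theta, lr=0.01, tau_theta=50.0):
    """One step of the classical rate-based BCM rule: the weight change
    is proportional to pre * post * (post - theta), so postsynaptic
    activity above the sliding threshold theta potentiates and activity
    below it depresses; theta itself slowly tracks <post^2>."""
    w_new = w + lr * pre * post * (post - theta)
    theta_new = theta + (post ** 2 - theta) / tau_theta
    return w_new, theta_new

# high postsynaptic activity (above theta) potentiates, low activity depresses
w_up, _ = bcm_step(0.5, pre=1.0, post=2.0, theta=1.0)
w_down, _ = bcm_step(0.5, pre=1.0, post=0.5, theta=1.0)
```

The paper's variant replaces the postsynaptic rate with the instantaneous membrane potential and the presynaptic rate with the transmission profile of the presynaptic spike.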
Affiliation(s)
- Christian G Mayr
- Endowed Chair of Highly Parallel VLSI Systems and Neural Microelectronics, Institute of Circuits and Systems, Faculty of Electrical Engineering and Information Science, University of Technology Dresden, Dresden, Sachsen, Germany
31
32
Nathan A, Barbosa VC. Network algorithmics and the emergence of the cortical synaptic-weight distribution. Phys Rev E Stat Nonlin Soft Matter Phys 2010; 81:021916. [PMID: 20365604] [DOI: 10.1103/physreve.81.021916]
Abstract
When a neuron fires and the resulting action potential travels down its axon toward other neurons' dendrites, the effect on each of those neurons is mediated by the strength of the synapse that separates it from the firing neuron. This strength, in turn, is affected by the postsynaptic neuron's response through a mechanism that is thought to underlie important processes such as learning and memory. Although difficult to quantify, cortical synaptic strengths have been found to obey a long-tailed unimodal distribution peaking near the lowest values (approximately lognormal), thus confirming some of the predictive models built previously. Most of these models are causally local, in the sense that they refer to the situation in which a number of neurons all fire directly at the same postsynaptic neuron. Consequently, they necessarily embody assumptions regarding the generation of action potentials by the presynaptic neurons that have little biological interpretability. We introduce a network model of large groups of interconnected neurons and demonstrate, making none of the assumptions that characterize the causally local models, that its long-term behavior gives rise to a distribution of synaptic weights (the mathematical surrogates of synaptic strengths) with the same properties that were experimentally observed. In our model, the action potentials that create a neuron's input are, ultimately, the product of network-wide causal chains relating what happens at a neuron to the firings of others. Our model is then of a causally global nature and predicates the emergence of the synaptic-weight distribution on network structure and function. As such, it has the potential to become instrumental also in the study of other emergent cortical phenomena.
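A long-tailed, roughly lognormal weight distribution of the kind described arises generically whenever plasticity acts multiplicatively, since log-weights then perform a random walk. A toy demonstration (not the paper's causally global network model; all parameters assumed):

```python
import math
import random

def multiplicative_weights(n_synapses=5000, n_steps=400, sigma=0.05, seed=1):
    """Toy multiplicative-plasticity model: each event scales a weight
    by a random factor exp(N(0, sigma)), so log-weights random-walk and
    the population drifts toward a long-tailed, approximately lognormal
    shape like the reported cortical synaptic-strength distribution."""
    rng = random.Random(seed)
    weights = [1.0] * n_synapses
    for _ in range(n_steps):
        weights = [w * math.exp(rng.gauss(0.0, sigma)) for w in weights]
    return weights

ws = sorted(multiplicative_weights())
mean_w = sum(ws) / len(ws)       # pulled up by the long right tail
median_w = ws[len(ws) // 2]      # stays near the bulk of the distribution
```

The mean exceeding the median is the signature of the right skew; the paper's point is that the same shape can emerge from network-wide causal chains rather than from locally assumed update statistics.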
Collapse
Affiliation(s)
- Andre Nathan
- Programa de Engenharia de Sistemas e Computação, COPPE, Universidade Federal do Rio de Janeiro, Caixa Postal 68511, 21941-972 Rio de Janeiro, RJ, Brazil
Collapse
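As a minimal illustration of why multiplicative dynamics produce the long-tailed, approximately lognormal weight distribution described above (a sketch under assumed parameters, not the authors' network model): if each plasticity event rescales a weight by a small random factor, the log-weights accumulate many independent increments and, by the central limit theorem, the weights approach a lognormal.

```python
import math
import random

def evolve_weights(n_synapses=2000, n_steps=500, sigma=0.02, seed=0):
    """Multiplicative plasticity sketch: every event multiplies a weight
    by exp(gaussian noise), so each log-weight performs a random walk
    and the weight population drifts toward a lognormal shape."""
    rng = random.Random(seed)
    weights = [1.0] * n_synapses
    for _ in range(n_steps):
        for i in range(n_synapses):
            weights[i] *= math.exp(rng.gauss(0.0, sigma))
    return weights

weights = evolve_weights()
weights.sort()
median = weights[len(weights) // 2]
mean = sum(weights) / len(weights)
# A lognormal's long right tail pulls the arithmetic mean above the median.
print(mean, median)
```

The same additive updates applied directly to the weights (rather than to their logs) would instead give a symmetric, Gaussian-like distribution, which is the key contrast with the observed data.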
|
33
|
Kubota S, Kitajima T. Possible role of cooperative action of NMDA receptor and GABA function in developmental plasticity. J Comput Neurosci 2010; 28:347-59. [DOI: 10.1007/s10827-010-0212-0] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2009] [Revised: 12/04/2009] [Accepted: 01/05/2010] [Indexed: 11/27/2022]
|
34
|
Lin X, De Wilde P. Synchronization enhances synaptic efficacy through spike timing-dependent plasticity in the olfactory system. Neurocomputing 2009; 73:381-8. [DOI: 10.1016/j.neucom.2009.08.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
|
35
|
Abstract
Spike timing-dependent plasticity (STDP) is a form of Hebbian learning which is thought to underlie structure formation during development, and learning and memory in later life. In this paper we show that the intrinsic properties of the postsynaptic neuron might have a deep influence on STDP dynamics by shaping the causal correlation between the pre- and the postsynaptic spike trains. The cell-specific effect of STDP is particularly evident in the presence of an oscillatory component in a cell input. In this case, the cell-specific phase response to an oscillatory modulation biases the oscillating afferents towards potentiation or depression, depending upon the intrinsic dynamics of the postsynaptic neuron and the period of the modulation.
Collapse
Affiliation(s)
- Fabiano Baroni
- GNB, Dpto. de Ing. Informatica, Escuela Politecnica Superior, Universidad Autonoma de Madrid, Spain.
Collapse
|
36
|
Abstract
Proper wiring up of the nervous system is critical to the development of organisms capable of complex and adaptable behaviors. Besides the many experimental advances in determining the cellular and molecular machinery that carries out this remarkable task precisely and robustly, theoretical approaches have also proven to be useful tools in analyzing this machinery. A quantitative understanding of these processes can allow us to make predictions, test hypotheses, and appraise established concepts in a new light. Three areas that have been fruitful in this regard are axon guidance, retinotectal mapping, and activity-dependent development. This chapter reviews some of the contributions made by mathematical modeling in these areas, illustrated by important examples of models in each section. For axon guidance, we discuss models of how growth cones respond to their environment, and how this environment can place constraints on growth cone behavior. Retinotectal mapping looks at computational models for how topography can be generated in populations of neurons based on molecular gradients and other mechanisms such as competition. In activity-dependent development, we discuss theoretical approaches largely based on Hebbian synaptic plasticity rules, and how they can generate maps in the visual cortex very similar to those seen in vivo. We show how theoretical approaches have substantially contributed to the advancement of developmental neuroscience, and discuss future directions for mathematical modeling in the field.
Collapse
|
37
|
Nageswaran JM, Dutt N, Krichmar JL, Nicolau A, Veidenbaum AV. A configurable simulation environment for the efficient simulation of large-scale spiking neural networks on graphics processors. Neural Netw 2009; 22:791-800. [PMID: 19615853 DOI: 10.1016/j.neunet.2009.06.028] [Citation(s) in RCA: 90] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2009] [Revised: 06/04/2009] [Accepted: 06/25/2009] [Indexed: 10/20/2022]
|
38
|
Satel J, Trappenberg T, Fine A. Are binary synapses superior to graded weight representations in stochastic attractor networks? Cogn Neurodyn 2009; 3:243-50. [PMID: 19424822 DOI: 10.1007/s11571-009-9083-3] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2008] [Revised: 04/08/2009] [Accepted: 04/08/2009] [Indexed: 11/29/2022] Open
Abstract
Synaptic plasticity is an underlying mechanism of learning and memory in neural systems, but it is controversial whether synaptic efficacy is modulated in a graded or binary manner. It has been argued that binary synaptic weights would be less susceptible to noise than graded weights, which has impelled some theoretical neuroscientists to shift from the use of graded to binary weights in their models. We compare retrieval performance of models using both binary and graded weight representations through numerical simulations of stochastic attractor networks. We also investigate stochastic attractor models using multiple discrete levels of weight states, and then investigate the optimal threshold for dilution of binary weight representations. Our results show that a binary weight representation is not less susceptible to noise than a graded weight representation in stochastic attractor models, and we find that the load capacities with an increasing number of weight states rapidly reach the load capacity with graded weights. The optimal threshold for dilution of binary weight representations under stochastic conditions occurs when approximately 50% of the smallest weights are set to zero.
Collapse
Affiliation(s)
- Jason Satel
- Faculty of Computer Science, Dalhousie University, 6050 University Avenue, Halifax, NS, B3H 1W5, Canada
Collapse
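The binary-versus-graded comparison can be sketched in a toy deterministic Hopfield-style attractor network (much simpler than the stochastic networks studied in the paper; the network size, pattern count, and seed are arbitrary choices): store a few patterns with graded Hebbian weights, clip those weights to their sign, and test retrieval from a corrupted cue at low memory load.

```python
import random

def hebbian_weights(patterns):
    """Graded weights: standard Hebbian outer-product sum, zero diagonal."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def binarize(w):
    """Binary weights: keep only the sign of each graded weight."""
    return [[1.0 if x > 0 else -1.0 if x < 0 else 0.0 for x in row] for row in w]

def recall(w, cue, sweeps=5):
    """Deterministic asynchronous updates of +/-1 units."""
    s = list(cue)
    for _ in range(sweeps):
        for i in range(len(s)):
            h = sum(w[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if h >= 0 else -1
    return s

def overlap(a, b):
    return sum(x * y for x, y in zip(a, b)) / len(a)

rng = random.Random(1)
n, n_flips = 100, 10
patterns = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(3)]
w_graded = hebbian_weights(patterns)
w_binary = binarize(w_graded)
cue = list(patterns[0])
for i in rng.sample(range(n), n_flips):  # corrupt 10% of the cue
    cue[i] = -cue[i]
ov_graded = overlap(recall(w_graded, cue), patterns[0])
ov_binary = overlap(recall(w_binary, cue), patterns[0])
print(ov_graded, ov_binary)
```

At this low load (3 patterns in 100 units) both representations retrieve the stored pattern, echoing the paper's point that binarization is not automatically worse; the interesting differences appear near capacity and under noise.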
|
39
|
Takahashi YK, Kori H, Masuda N. Self-organization of feed-forward structure and entrainment in excitatory neural networks with spike-timing-dependent plasticity. Phys Rev E Stat Nonlin Soft Matter Phys 2009; 79:051904. [PMID: 19518477 DOI: 10.1103/physreve.79.051904] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/05/2008] [Indexed: 05/27/2023]
Abstract
Spike-timing-dependent plasticity (STDP) is an organizing principle of biological neural networks. While synchronous firing of neurons is considered to be an important functional block in the brain, how STDP shapes neural networks, possibly toward synchrony, is not entirely clear. We examine relations between STDP and synchronous firing in spontaneously firing neural populations. Using coupled heterogeneous phase oscillators placed on initial networks, we show numerically that STDP prunes some synapses and promotes the formation of a feedforward network. Eventually a pacemaker, which is the neuron with the fastest inherent frequency in our numerical simulations, emerges at the root of the feedforward network. In each oscillatory cycle, a packet of neural activity is propagated from the pacemaker to downstream neurons along the layers of the feedforward network. This occurs above a clear-cut threshold value of the initial synaptic weight. Below the threshold, neurons self-organize into separate clusters, each of which is a feedforward network.
Collapse
Affiliation(s)
- Yuko K Takahashi
- Faculty of Engineering, The University of Tokyo, 7-3-1, Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
Collapse
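Why the fastest oscillator ends up upstream can be illustrated with a pair-based STDP sketch (assumed window parameters; it ignores the feedback of the weights onto spike timing, which the full model includes): when two oscillators are entrained 1:1 with the faster one leading by a fixed lag, STDP potentiates the synapse from leader to follower and depresses the reverse synapse.

```python
import math

def stdp(dt, a=0.1, tau=20.0):
    """Pair-based STDP: dt = t_post - t_pre (ms). Causal pairs (dt > 0)
    potentiate, anti-causal pairs depress. Parameters are assumptions."""
    return a * math.exp(-dt / tau) if dt > 0 else -a * math.exp(dt / tau)

def simulate(period=100.0, lag=5.0, cycles=50):
    """Two 1:1 entrained oscillators; the pacemaker fires `lag` ms before
    the follower each cycle. Nearest spike pairs in adjacent cycles are
    applied to the synapse in each direction."""
    w_pm_to_f = 0.0  # pacemaker -> follower
    w_f_to_pm = 0.0  # follower -> pacemaker
    for k in range(cycles):
        t_pm = k * period          # pacemaker spike
        t_f = k * period + lag     # follower spike
        # pacemaker->follower: same-cycle causal pair, plus the previous
        # follower spike (anti-causal, nearly a full period away)
        w_pm_to_f += stdp(t_f - t_pm)
        w_pm_to_f += stdp((t_f - period) - t_pm)
        # follower->pacemaker: next pacemaker spike (causal but distant),
        # plus the same-cycle pacemaker spike (anti-causal, close)
        w_f_to_pm += stdp((t_pm + period) - t_f)
        w_f_to_pm += stdp(t_pm - t_f)
    return w_pm_to_f, w_f_to_pm

w_pm_to_f, w_f_to_pm = simulate()
print(w_pm_to_f, w_f_to_pm)
```

Because the short-latency pair dominates the long-latency one under an exponential window, the forward weight grows while the backward weight decays, which is the elementary step behind the pruning into a pacemaker-rooted feedforward structure.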
|
40
|
Hardie J, Spruston N. Synaptic depolarization is more effective than back-propagating action potentials during induction of associative long-term potentiation in hippocampal pyramidal neurons. J Neurosci 2009; 29:3233-41. [PMID: 19279260 DOI: 10.1523/JNEUROSCI.6000-08.2009] [Citation(s) in RCA: 52] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Long-term potentiation (LTP) requires postsynaptic depolarization that can result from EPSPs paired with action potentials or larger EPSPs that trigger dendritic spikes. We explored the relative contribution of these sources of depolarization to LTP induction during synaptically driven action potential firing in hippocampal CA1 pyramidal neurons. Pairing of a weak test input with a strong input resulted in large LTP (approximately 75% increase) when the weak and strong inputs were both located in the apical dendrites. This form of LTP did not require somatic action potentials. When the strong input was located in the basal dendrites, the resulting LTP was smaller (≤25% increase). Pairing the test input with somatically evoked action potentials mimicked this form of LTP. Thus, back-propagating action potentials may contribute to modest LTP, but local synaptic depolarization and/or dendritic spikes mediate a stronger form of LTP that requires spatial proximity of the associated synaptic inputs.
Collapse
|
41
|
Abstract
Memory systems should be plastic to allow for learning; however, they should also retain earlier memories. Here we explore how synaptic weights and memories are retained in models of single neurons and networks equipped with spike-timing-dependent plasticity. We show that for single neuron models, the precise learning rule has a strong effect on the memory retention time. In particular, a soft-bound, weight-dependent learning rule has a very short retention time as compared with a learning rule that is independent of the synaptic weights. Next, we explore how the retention time is reflected in receptive field stability in networks. As in the single neuron case, the weight-dependent learning rule yields less stable receptive fields than a weight-independent rule. However, receptive fields stabilize in the presence of sufficient lateral inhibition, demonstrating that plasticity in networks can be regulated by inhibition and suggesting a novel role for inhibition in neural circuits.
Collapse
Affiliation(s)
- Guy Billings
- Neuroinformatics Doctoral Training Centre, University of Edinburgh, Edinburgh, United Kingdom.
Collapse
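The retention contrast between soft-bound, weight-dependent updates and weight-independent updates with hard bounds can be sketched for a single potentiated weight buffeted by random plasticity events (a toy stand-in for ongoing background STDP; the amplitudes, threshold, and trial count are assumptions, not the paper's receptive-field model):

```python
import random

def retention_time(rule, w0=0.95, a=0.02, threshold=0.6, seed=0,
                   max_steps=100_000):
    """Count random plasticity events until the weight decays below
    threshold. Each event potentiates or depresses with equal
    probability; only the amplitude rule differs between conditions."""
    rng = random.Random(seed)
    w = w0
    for step in range(1, max_steps + 1):
        if rule == "additive":                 # weight-independent amplitudes
            w += a if rng.random() < 0.5 else -a
            w = min(1.0, max(0.0, w))          # hard bounds
        else:                                  # soft-bound, weight-dependent
            w += a * (1.0 - w) if rng.random() < 0.5 else -a * w
        if w < threshold:
            return step
    return max_steps

trials = range(100)
mean_additive = sum(retention_time("additive", seed=s) for s in trials) / 100
mean_soft = sum(retention_time("soft", seed=s) for s in trials) / 100
print(mean_additive, mean_soft)
```

The soft-bound weight drifts systematically toward its equilibrium (large depression steps at high w), so it forgets quickly, whereas the additive weight performs an unbiased random walk pinned at the upper bound and decays only by chance, which is the single-synapse version of the retention-time difference the abstract reports.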
|
42
|
Abstract
We show that the dynamical multistability of a network of bursting subthalamic neurons, caused by synaptic plasticity, has a strong impact on the stimulus-response properties when the network is exposed to weak and short desynchronizing stimuli. Intriguingly, such stimuli can reliably shift the network from a stable state with pathological synchrony and connectivity to a stable desynchronized state with down-regulated connectivity. However, unlike in the case of stronger coordinated reset stimulation, after termination of weaker stimulation the network may undergo a transient rebound of synchrony. When the coordinated reset stimulation is even weaker and/or shorter, so that a single stimulation epoch is not effective, the network dynamics and connectivity can still be reshaped in a cumulative manner by repetitive stimulation delivery.
Collapse
Affiliation(s)
- C Hauptmann
- Institute of Neuroscience and Biophysics 3, Medicine and Virtual Institute of Neuromodulation, Research Center Jülich, Leo-Brandt-Str., D-52425 Jülich, Germany.
Collapse
|
43
|
|
44
|
Abstract
Most synaptic inputs are made onto the dendritic tree. Recent work has shown that dendrites play an active role in transforming synaptic input into neuronal output and in defining the relationships between active synapses. In this review, we discuss how these dendritic properties influence the rules governing the induction of synaptic plasticity. We argue that the location of synapses in the dendritic tree, and the type of dendritic excitability associated with each synapse, play decisive roles in determining the plastic properties of that synapse. Furthermore, since the electrical properties of the dendritic tree are not static, but can be altered by neuromodulators and by synaptic activity itself, we discuss how learning rules may be dynamically shaped by tuning dendritic function. We conclude by describing how this reciprocal relationship between plasticity of dendritic excitability and synaptic plasticity has changed our view of information processing and memory storage in neuronal networks.
Collapse
Affiliation(s)
- P Jesper Sjöström
- Wolfson Institute for Biomedical Research and Department of Physiology, University College London, London, United Kingdom
Collapse
|
45
|
Farajidavar A, Saeb S, Behbehani K. Incorporating synaptic time-dependent plasticity and dynamic synapse into a computational model of wind-up. Neural Netw 2008; 21:241-9. [DOI: 10.1016/j.neunet.2007.12.021] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2007] [Revised: 11/29/2007] [Accepted: 12/03/2007] [Indexed: 11/16/2022]
|
46
|
Abstract
Spike timing-dependent synaptic plasticity (STDP) has emerged as the preferred framework linking patterns of pre- and postsynaptic activity to changes in synaptic strength. Although synaptic plasticity is widely believed to be a major component of learning, it is unclear how STDP itself could serve as a mechanism for general purpose learning. On the other hand, algorithms for reinforcement learning work on a wide variety of problems, but lack an experimentally established neural implementation. Here, we combine these paradigms in a novel model in which a modified version of STDP achieves reinforcement learning. We build this model in stages, identifying a minimal set of conditions needed to make it work. Using a performance-modulated modification of STDP in a two-layer feedforward network, we can train output neurons to generate arbitrarily selected spike trains or population responses. Furthermore, a given network can learn distinct responses to several different input patterns. We also describe in detail how this model might be implemented biologically. Thus our model offers a novel and biologically plausible implementation of reinforcement learning that is capable of training a neural population to produce a very wide range of possible mappings between synaptic input and spiking output.
Collapse
Affiliation(s)
- Michael A Farries
- Department of Biology, University of Texas at San Antonio, San Antonio, TX 78249, USA.
Collapse
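The core idea here, reward-gated STDP-like plasticity, can be sketched with a noisy threshold unit rather than the paper's spiking network (the patterns, noise level, and learning rate are arbitrary assumptions): a pre/post coincidence term plays the role of the eligibility, and a scalar performance signal decides whether that eligibility potentiates or depresses.

```python
import random

def train_reward_stdp(epochs=500, lr=0.05, seed=0):
    """Reward-modulated plasticity sketch: the unit should fire for
    pat_go and stay silent for pat_nogo. Noise in the threshold
    provides the exploration that reinforcement learning needs."""
    n = 20
    pat_go = [1] * 10 + [0] * 10     # should evoke a spike
    pat_nogo = [0] * 10 + [1] * 10   # should not
    rng = random.Random(seed)
    w = [0.0] * n
    for _ in range(epochs):
        for x, desired in ((pat_go, 1), (pat_nogo, 0)):
            h = sum(wi * xi for wi, xi in zip(w, x))
            y = 1 if h + rng.gauss(0.0, 0.5) > 0 else 0  # noisy spike
            reward = 1.0 if y == desired else -1.0
            for i in range(n):
                # eligibility = pre/post coincidence (the STDP-like term),
                # gated in sign and size by the scalar reward
                w[i] += lr * reward * x[i] * y
    return w, pat_go, pat_nogo

w, pat_go, pat_nogo = train_reward_stdp()
drive_go = sum(wi * xi for wi, xi in zip(w, pat_go))
drive_nogo = sum(wi * xi for wi, xi in zip(w, pat_nogo))
print(drive_go, drive_nogo)
```

Correct firings strengthen the active synapses and erroneous firings weaken them, so the two input patterns end up with opposite net drives; note that silence leaves the eligibility at zero, which is why exploration noise is essential.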
|
47
|
Abstract
Our nervous system can efficiently recognize objects in spite of changes in contextual variables such as perspective or lighting conditions. Several lines of research have proposed that this ability for invariant recognition is learned by exploiting the fact that object identities typically vary more slowly in time than contextual variables or noise. Here, we study the question of how this "temporal stability" or "slowness" approach can be implemented within the limits of biologically realistic spike-based learning rules. We first show that slow feature analysis, an algorithm that is based on slowness, can be implemented in linear continuous model neurons by means of a modified Hebbian learning rule. This approach provides a link to the trace rule, which is another implementation of slowness learning. Then, we show analytically that for linear Poisson neurons, slowness learning can be implemented by spike-timing-dependent plasticity (STDP) with a specific learning window. By studying the learning dynamics of STDP, we show that for functional interpretations of STDP, it is not the learning window alone that is relevant but rather the convolution of the learning window with the postsynaptic potential. We then derive STDP learning windows that implement slow feature analysis and the "trace rule." The resulting learning windows are compatible with physiological data both in shape and timescale. Moreover, our analysis shows that the learning window can be split into two functionally different components that are sensitive to reversible and irreversible aspects of the input statistics, respectively. The theory indicates that irreversible input statistics are not in favor of stable weight distributions but may generate oscillatory weight dynamics. Our analysis offers a novel interpretation for the functional role of STDP in physiological neurons.
Collapse
Affiliation(s)
- Henning Sprekeler
- Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Berlin, Germany.
Collapse
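The slowness objective underlying this analysis is easy to demonstrate outside the spiking setting (a gradient sketch, not the paper's STDP derivation; the input signals and learning rate are assumptions): a linear unit that minimizes the squared temporal derivative of its output under a norm constraint aligns with the slowly varying input channel.

```python
import math

def learn_slow_feature(steps=2000, lr=0.01):
    """Two input channels: a slow sinusoid (period 200 samples) and a
    fast one (period 11). Gradient descent on <(dy/dt)^2> for y = w.x,
    with w renormalized each step as a stand-in for the unit-variance
    constraint, rotates w toward the slow channel."""
    w = [math.sqrt(0.5), math.sqrt(0.5)]
    prev = None
    for t in range(steps):
        x = [math.sin(2 * math.pi * t / 200.0),
             math.sin(2 * math.pi * t / 11.0)]
        if prev is not None:
            dx = [a - b for a, b in zip(x, prev)]
            dy = w[0] * dx[0] + w[1] * dx[1]           # temporal derivative of y
            w = [wi - lr * dy * dxi for wi, dxi in zip(w, dx)]
            norm = math.sqrt(w[0] ** 2 + w[1] ** 2)
            w = [wi / norm for wi in w]                # enforce ||w|| = 1
        prev = x
    return w

w = learn_slow_feature()
print(w)  # weight concentrates on the slow channel
```

The update is Hebbian in the temporal differences, which is the continuous-time counterpart of the claim in the abstract that the relevant quantity is the learning window convolved with the postsynaptic potential, not the window alone.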
|
48
|
Kubota S, Kitajima T. A model for synaptic development regulated by NMDA receptor subunit expression. J Comput Neurosci 2007; 24:1-20. [PMID: 18202921 DOI: 10.1007/s10827-007-0036-8] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2006] [Revised: 03/28/2007] [Accepted: 03/29/2007] [Indexed: 11/26/2022]
Abstract
Activation of NMDA receptors (NMDARs) is highly involved in the potentiation and depression of synaptic transmission. NMDARs comprise NR1 and NR2B subunits in the neonatal forebrain, while the expression of the NR2A subunit increases over time, leading to a shortening of NMDAR-mediated synaptic currents. It has been suggested that the developmental switch in the NMDAR subunit composition regulates synaptic plasticity, but its physiological role remains unclear. In this study, we examine the effects of the NMDAR subunit switch on spike-timing-dependent plasticity and synaptic weight dynamics, and demonstrate that the subunit switch contributes to inducing two consecutive processes, the potentiation of weak synapses followed by the induction of competition between them, at an adequately rapid rate. Regulation of NMDAR subunit expression can therefore be considered a mechanism that promotes rapid and stable growth of immature synapses.
Collapse
Affiliation(s)
- Shigeru Kubota
- Department of Bio-System Engineering, Yamagata University, 4-3-16 Jonan, Yonezawa, Yamagata, 992-8510, Japan.
Collapse
|
49
|
Suzuki I, Yasuda K. Detection of tetanus-induced effects in linearly lined-up micropatterned neuronal networks: Application of a multi-electrode array chip combined with agarose microstructures. Biochem Biophys Res Commun 2007; 356:470-5. [PMID: 17362877 DOI: 10.1016/j.bbrc.2007.03.006] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2007] [Accepted: 03/01/2007] [Indexed: 10/23/2022]
Abstract
One of the best approaches to understanding the mechanism of information acquisition and storage is to characterize the plasticity of network activity by monitoring and stimulating individual neurons in a topologically defined network over extended periods of time. We therefore previously developed an on-chip multi-electrode array (MEA) system combined with an array of agarose microchambers (AMCs). This system makes it possible to record firing from multiple cells simultaneously over long periods and to topographically control cell positions and their connections. In the present study, we demonstrated the effect of tetanic stimulation on a linearly lined-up patterned network on the AMC/MEA chip. We detected reproducible activity changes induced by tetanic stimulation and found that these changes were maintained for 6-24 h. The results show the advantages of our AMC/MEA cultivation and measurement methods and suggest they will be useful for investigating long-term plasticity as a function of network topology and size.
Collapse
Affiliation(s)
- Ikurou Suzuki
- Department of Life Sciences, Graduate School of Arts and Sciences, University of Tokyo, 3-8-1 Komaba, Meguro, Tokyo 153-8902, Japan
Collapse
|
50
|
Abstract
Experimental studies have observed synaptic potentiation when a presynaptic neuron fires shortly before a postsynaptic neuron and synaptic depression when the presynaptic neuron fires shortly after. The dependence of synaptic modulation on the precise timing of the two action potentials is known as spike-timing dependent plasticity (STDP). We derive STDP from a simple computational principle: synapses adapt so as to minimize the postsynaptic neuron's response variability to a given presynaptic input, causing the neuron's output to become more reliable in the face of noise. Using an objective function that minimizes response variability and the biophysically realistic spike-response model of Gerstner (2001), we simulate neurophysiological experiments and obtain the characteristic STDP curve along with other phenomena, including the reduction in synaptic plasticity as synaptic efficacy increases. We compare our account to other efforts to derive STDP from computational principles and argue that our account provides the most comprehensive coverage of the phenomena. Thus, reliability of neural response in the face of noise may be a key goal of unsupervised cortical adaptation.
Collapse
Affiliation(s)
- Sander M Bohte
- Netherlands Centre for Mathematics and Computer Science (CWI), 1098 SJ Amsterdam, The Netherlands.
Collapse
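The characteristic STDP curve that such simulations reproduce is commonly summarized as a double exponential; the sketch below uses textbook amplitudes and time constants, not values fitted by the authors, and adds a simple weight-dependent scaling to mimic the reported reduction of plasticity as synaptic efficacy increases.

```python
import math

def stdp_window(dt, a_plus=1.0, a_minus=0.5, tau_plus=17.0, tau_minus=34.0):
    """Double-exponential STDP curve. dt = t_post - t_pre in ms:
    potentiation for causal pairs (dt > 0), depression otherwise.
    Amplitudes and time constants are common textbook assumptions."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)

def scaled_update(w, dt, w_max=1.0):
    """Scale potentiation by the remaining headroom (w_max - w), so a
    strong synapse potentiates less, one way to express the observed
    decline of plasticity with efficacy."""
    change = stdp_window(dt)
    return (w_max - w) * change if dt > 0 else change
```

Here the weight dependence is an illustrative soft bound; the paper instead derives the efficacy-dependent reduction from a response-variability objective.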
|