1
Ahn H, Kim Y, Seo S, Lee J, Lee S, Oh S, Kim B, Park J, Kang S, Kim Y, Ham A, Lee J, Park D, Kwon S, Lee D, Ryu JE, Shin JC, Sahasrabudhe A, Kim KS, Bae SH, Kang K, Kim J, Oh S, Park JH. Artificial Optoelectronic Synapse Featuring Bidirectional Post-Synaptic Current for Compact and Energy-Efficient Neural Hardware. Adv Mater 2025:e2418582. [PMID: 40434208 DOI: 10.1002/adma.202418582]
Abstract
Conventional hardware neural networks (HW-NNs) have relied on the unidirectional current flow of artificial synapses, necessitating a differential pair of synapses for weight-core implementation. Here, an artificial optoelectronic synapse capable of bidirectional post-synaptic current (IPSC) is presented, eliminating the need for differential synapse pairs. This is achieved through an asymmetric metal contact structure that induces a built-in electric field for directional flow of photogenerated carriers, and a charge trapping/de-trapping layer in the gate stack (h-BN/weight control layer) that can modulate the surface potential of the semiconductor channel (WSe2) using electrical signals. This structure enables precise control over the direction and magnitude of injected charge. The device demonstrates key synaptic behaviors, such as long-term potentiation/depression and spike-timing-dependent plasticity. A fabricated 3 × 2 artificial synapse array shows that the bidirectional IPSC characteristic is compatible with multiply-accumulate operations. Finally, the feasibility of these synapses in HW-NNs is demonstrated through training and inference simulations using the MNIST handwritten digits dataset, yielding competitive recognition rates and lower total energy consumption for updating weights of the weight core than unidirectional-IPSC-based systems. This approach paves the way toward more compact and energy-efficient brain-inspired computing systems.
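The device-count argument in this abstract comes down to simple linear algebra: with unidirectional synapses a signed weight must be stored as the difference of two positive conductances, whereas a bidirectional post-synaptic current lets a single device carry the sign. A toy NumPy check of that equivalence (generic array sizes and random values, not the authors' array model):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.uniform(-1.0, 1.0, size=(2, 3))   # signed weights of a 3-input, 2-output core
x = rng.uniform(0.0, 1.0, size=3)         # input vector (e.g. optical intensities)

# Unidirectional synapses: each signed weight is split into a differential
# pair of positive conductances, doubling the device count per weight.
g_plus, g_minus = np.clip(W, 0.0, None), np.clip(-W, 0.0, None)
y_pair = g_plus @ x - g_minus @ x

# Bidirectional post-synaptic current: one device per weight carries the
# sign itself, so the accumulated column current is the MAC result directly.
y_single = W @ x

print(np.allclose(y_pair, y_single))      # same multiply-accumulate result
```

Since g_plus - g_minus reconstructs W exactly, both readout schemes compute the same multiply-accumulate; the bidirectional scheme simply does it with half the synapses.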
Affiliation(s)
- Hogeun Ahn
  - Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon, 16419, Republic of Korea
  - Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, 02138, USA
- Yena Kim
  - Department of Semiconductor Convergence Engineering, Sungkyunkwan University, Suwon, 16419, Republic of Korea
- Seunghwan Seo
  - Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, 02138, USA
  - Department of Materials Science and Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Republic of Korea
  - Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA, 02138, USA
- Junseo Lee
  - Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon, 16419, Republic of Korea
- Sehee Lee
  - Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon, 16419, Republic of Korea
- Saeyoung Oh
  - Department of Materials Science and Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Republic of Korea
- Byeongchan Kim
  - Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon, 16419, Republic of Korea
- Jeongwon Park
  - Department of Materials Science and Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Republic of Korea
- Sumin Kang
  - Department of Materials Science and Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Republic of Korea
- Yuseok Kim
  - Department of Materials Science and Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Republic of Korea
- Ayoung Ham
  - Department of Materials Science and Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Republic of Korea
- Jaehyun Lee
  - Department of Materials Science and Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Republic of Korea
- Donggeon Park
  - Graduate School of Semiconductor Technology, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Republic of Korea
- Seongdae Kwon
  - Department of Materials Science and Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Republic of Korea
- Doyoon Lee
  - Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, 02138, USA
  - Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA, 02138, USA
- Jung-El Ryu
  - Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, 02138, USA
  - Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA, 02138, USA
- June-Chul Shin
  - Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, 02138, USA
  - Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA, 02138, USA
- Atharva Sahasrabudhe
  - Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, 02138, USA
  - Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA, 02138, USA
- Ki Seok Kim
  - Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, 02138, USA
  - Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA, 02138, USA
- Sang-Hoon Bae
  - Department of Mechanical Engineering and Materials Science, Washington University, Saint Louis, MO, 63130, USA
  - Institute of Materials Science and Engineering, Washington University, Saint Louis, MO, 63130, USA
- Kibum Kang
  - Department of Materials Science and Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Republic of Korea
  - Graduate School of Semiconductor Technology, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Republic of Korea
- Jeehwan Kim
  - Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, 02138, USA
  - Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA, 02138, USA
- Saeroonter Oh
  - Department of Semiconductor Convergence Engineering, Sungkyunkwan University, Suwon, 16419, Republic of Korea
- Jin-Hong Park
  - Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon, 16419, Republic of Korea
  - Department of Semiconductor Convergence Engineering, Sungkyunkwan University, Suwon, 16419, Republic of Korea
  - SKKU Advanced Institute of Nano-Technology, Sungkyunkwan University, Suwon, 16419, Republic of Korea
2
Ahmadipour M, Shakib MA, Gao Z, Sarles SA, Lamuta C, Montazami R. Scaled-down ionic liquid-functionalized geopolymer memristors. Mater Horiz 2025. [PMID: 40358460 DOI: 10.1039/d5mh00231a]
Abstract
Whereas most memristors are fabricated using sophisticated and expensive manufacturing methods, we recently introduced low-cost memristors constructed from sustainable, porous geopolymers (GP) at room temperature via simple casting processes. These devices exhibit resistive switching via electroosmosis and voltage-driven ion mobility inside water-filled channels within the porous material, enabling promising synaptic properties. However, GP memristors were previously fabricated at the centimeter scale, too large for space-efficient neuromorphic computing applications, and displayed limited memory retention durations due to water evaporation from the pores of the GP material. In this work, we overcome these limitations by implementing (i) an inexpensive manufacturing method that allows fabrication at the micron scale (99.998% smaller in volume than their centimeter-scale counterparts) and (ii) functionalization of GPs with EMIM+ Otf- ionic liquid (IL), which prolonged retention of the memristive switching properties by 50%. This improved class of GP-based memristors also demonstrated ideal synaptic properties in terms of paired-pulse facilitation (PPF), paired-pulse depression (PPD), and spike-timing-dependent plasticity (STDP). These improvements pave the way for using IL-functionalized GP memristors in neuromorphic computing applications, including reservoir computing, in-memory computing, memristor crossbar arrays, and more.
Affiliation(s)
- Maedeh Ahmadipour
  - Department of Mechanical Engineering, Iowa State University, Ames, Iowa 50011, USA
- Mahmudul Alam Shakib
  - Department of Mechanical Engineering, College of Engineering, University of Iowa, Iowa City, Iowa 52242, USA
- Zhaolin Gao
  - Department of Mechanical Engineering, College of Engineering, University of Iowa, Iowa City, Iowa 52242, USA
- Stephen A Sarles
  - Department of Mechanical, Aerospace, and Biomedical Engineering, University of Tennessee, Knoxville, Tennessee 37916, USA
- Caterina Lamuta
  - Department of Mechanical Engineering, College of Engineering, University of Iowa, Iowa City, Iowa 52242, USA
- Reza Montazami
  - Department of Mechanical Engineering, Iowa State University, Ames, Iowa 50011, USA
  - Department of Agricultural and Biosystems Engineering, Iowa State University, Ames, Iowa 50011, USA
3
Confavreux B, Agnes EJ, Zenke F, Sprekeler H, Vogels TP. Balancing complexity, performance and plausibility to meta learn plasticity rules in recurrent spiking networks. PLoS Comput Biol 2025; 21:e1012910. [PMID: 40273284 PMCID: PMC12021293 DOI: 10.1371/journal.pcbi.1012910]
Abstract
Synaptic plasticity is a key player in the brain's life-long learning abilities. However, due to experimental limitations, the mechanistic link between synaptic plasticity rules and the network-level computations they enable remains opaque. Here we use evolutionary strategies (ES) to meta learn local co-active plasticity rules in large recurrent spiking networks with excitatory (E) and inhibitory (I) neurons, using parameterizations of increasing complexity. We discover rules that robustly stabilize network dynamics for all four synapse types acting in isolation (E-to-E, E-to-I, I-to-E and I-to-I). More complex functions such as familiarity detection can also be included in the search constraints. However, our meta learning strategy begins to fail for co-active rules of increasing complexity, as it is challenging to devise loss functions that effectively constrain network dynamics to plausible solutions a priori. Moreover, in line with previous work, we can find multiple degenerate solutions with identical network behaviour. As a local optimization strategy, ES provides one solution at a time and makes exploration of this degeneracy cumbersome. Regardless, we can glean the interdependencies of various plasticity parameters by considering the covariance matrix learned alongside the optimal rule with ES. Our work provides a proof of principle for the success of machine-learning-guided discovery of plasticity rules in large spiking networks, and points at the necessity of more elaborate search strategies going forward.
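The abstract names the outer loop (evolutionary strategies over the parameters of a plasticity rule) without giving equations. The toy sketch below shows only the shape of that loop: a scalar one-unit "network" and an invented stability loss stand in for the paper's large recurrent spiking circuits, and the rule parameterization is a made-up two-parameter local update, purely to illustrate the score-weighted ES parameter step:

```python
import numpy as np

rng = np.random.default_rng(0)

def stability_loss(theta):
    """Toy inner loop: simulate a one-unit rate 'network' whose weight w is
    updated by a local rule with parameters theta, and penalize deviation
    of the rate from a target of 5 (an invented stand-in loss)."""
    w, r, loss = 0.5, 0.0, 0.0
    for _ in range(200):
        r = min(100.0, max(0.0, w * r + 1.0))              # driven rate, clipped
        w += 1e-3 * (theta[0] * r + theta[1] * (5.0 - r))  # candidate local rule
        loss += (r - 5.0) ** 2
    return loss / 200

baseline = stability_loss(np.zeros(2))

# Evolutionary strategies: perturb the rule parameters, score each sample on
# the loss, and step along the score-weighted average of the perturbations.
theta, sigma, lr, n_pop = np.zeros(2), 0.1, 0.05, 32
for _ in range(80):
    eps = rng.standard_normal((n_pop, 2))
    scores = np.array([stability_loss(theta + sigma * e) for e in eps])
    scores = (scores - scores.mean()) / (scores.std() + 1e-8)
    theta -= lr / (n_pop * sigma) * eps.T @ scores         # descend the ES estimate

learned = stability_loss(theta)
print(f"loss with zero rule {baseline:.2f} -> meta-learned rule {learned:.2f}")
```

In the paper the inner simulation is a full E/I spiking network and the loss encodes plausibility constraints; the outer ES update, however, has exactly this form.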
Affiliation(s)
- Basile Confavreux
  - Institute of Science and Technology Austria, Klosterneuburg, Austria
  - Gatsby Computational Neuroscience Unit, University College London, London, United Kingdom
- Friedemann Zenke
  - Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland
- Tim P. Vogels
  - Institute of Science and Technology Austria, Klosterneuburg, Austria
4
Xu M, Liu F, Hu Y, Li H, Wei Y, Zhong S, Pei J, Deng L. Adaptive Synaptic Scaling in Spiking Networks for Continual Learning and Enhanced Robustness. IEEE Trans Neural Netw Learn Syst 2025; 36:5151-5165. [PMID: 38536699 DOI: 10.1109/tnnls.2024.3373599]
Abstract
Synaptic plasticity plays a critical role in the expression power of brain neural networks. Among diverse plasticity rules, synaptic scaling presents indispensable effects on homeostasis maintenance and synaptic strength regulation. In the current modeling of brain-inspired spiking neural networks (SNN), backpropagation through time is widely adopted because it can achieve high performance using a small number of time steps. Nevertheless, the synaptic scaling mechanism has not yet been well explored. In this work, we propose an experience-dependent adaptive synaptic scaling mechanism (AS-SNN) for spiking neural networks. The learning process has two stages: First, in the forward path, adaptive short-term potentiation or depression is triggered for each synapse according to afferent stimuli intensity accumulated by presynaptic historical neural activities. Second, in the backward path, long-term consolidation is executed through gradient signals regulated by the corresponding scaling factor. This mechanism shapes the pattern selectivity of synapses and the information transfer they mediate. We theoretically prove that the proposed adaptive synaptic scaling function follows a contraction map and converges to an expected fixed point, and we demonstrate state-of-the-art results in three tasks: perturbation resistance, continual learning, and graph learning. Specifically, for the perturbation resistance and continual learning tasks, our approach improves the accuracy on the N-MNIST benchmark over the baseline by 44% and 25%, respectively. An expected firing rate callback and sparse coding can be observed in graph learning. Extensive ablation studies and cost evaluations confirm the effectiveness and efficiency of our nonparametric adaptive scaling method, which demonstrates the great potential of SNN in continual learning and robust learning.
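The convergence claim above rests on the scaling update being a contraction: if |f(a) - f(b)| ≤ L|a - b| with L < 1, the Banach fixed-point theorem forces every trajectory to one unique fixed point. The abstract does not give the actual AS-SNN function, so the update below is an invented stand-in with that property, shown only to illustrate the argument:

```python
def scale_update(s, activity, target=1.0, eta=0.4):
    """One scaling step: nudge factor s toward target/activity. The map
    s -> s + eta*(target/activity - s) has Lipschitz constant
    1 - eta = 0.6 < 1, i.e. it is a contraction (an illustrative stand-in,
    not the paper's scaling function)."""
    return s + eta * (target / activity - s)

# A contraction drags every initial condition to the same unique fixed
# point, here s* = target/activity = 0.5, regardless of where it starts.
trajectories = []
for s0 in (0.0, 5.0, -2.0):
    s = s0
    for _ in range(50):
        s = scale_update(s, activity=2.0)
    trajectories.append(s)

print(trajectories)
```

After 50 steps the distance to the fixed point has shrunk by 0.6^50 ≈ 8e-12, so all three trajectories are numerically indistinguishable from 0.5.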
5
Devalle F, Roxin A. How plasticity shapes the formation of neuronal assemblies driven by oscillatory and stochastic inputs. J Comput Neurosci 2025; 53:9-23. [PMID: 39661297 DOI: 10.1007/s10827-024-00885-z]
Abstract
Synaptic connections in neuronal circuits are modulated by pre- and post-synaptic spiking activity. Previous theoretical work has studied how such Hebbian plasticity rules shape network connectivity when firing rates are constant, or slowly varying in time. However, oscillations and fluctuations, which can arise through sensory inputs or intrinsic brain mechanisms, are ubiquitous in neuronal circuits. Here we study how oscillatory and fluctuating inputs shape recurrent network connectivity given a temporally asymmetric plasticity rule. We do this analytically using a separation of time scales approach for pairs of neurons, and then show that the analysis can be extended to understand the structure in large networks. In the case of oscillatory inputs, the resulting network structure is strongly affected by the phase relationship between drive to different neurons. In large networks, distributed phases tend to lead to hierarchical clustering. The analysis for stochastic inputs reveals a rich phase plane in which there is multistability between different possible connectivity motifs. Our results may be of relevance for understanding the effect of sensory-driven inputs, which are by nature time-varying, on synaptic plasticity, and hence on learning and memory.
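For a pair of neurons, the separation-of-timescales analysis mentioned above reduces to integrating the temporally asymmetric STDP kernel against the cross-correlation of the two firing rates. The kernel shape, rates, and parameters below are generic choices for illustration, not the paper's; the sketch reproduces the qualitative phase dependence (post lagging pre potentiates, post leading pre depresses):

```python
import numpy as np

tau, a_plus, a_minus = 20.0, 1.0, 1.05   # ms; slightly depression-dominated kernel
r0, f_hz = 5.0, 8.0                      # mean rate (Hz) and oscillation frequency

def drift(phi):
    """Mean weight drift under a temporally asymmetric STDP kernel when the
    pre and post rates are r0*(1+cos(w t)) and r0*(1+cos(w t - phi)):
    integrate kernel(s) against the rate cross-correlation over lag s
    (the separation-of-timescales approximation for a neuron pair)."""
    w = 2 * np.pi * f_hz / 1000.0        # rad per ms
    s = np.linspace(-200.0, 200.0, 40001)
    kernel = np.where(s > 0, a_plus * np.exp(-s / tau),
                      -a_minus * np.exp(s / tau))
    corr = r0**2 * (1.0 + 0.5 * np.cos(w * s - phi))   # <r_pre(t) r_post(t+s)>
    return float(np.sum(kernel * corr) * (s[1] - s[0]))

# Post lagging pre (phi > 0) yields net potentiation; post leading pre,
# net depression, so the phase relationship decides the motif that forms.
print(drift(+0.5), drift(-0.5))
```

This is the mechanism behind the abstract's observation that distributed oscillation phases sculpt the resulting network structure.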
Affiliation(s)
- Federico Devalle
  - Computational Neuroscience Group, Centre de Recerca Matemàtica, Campus de Bellaterra, Edifici C, 08193, Bellaterra, Spain
- Alex Roxin
  - Computational Neuroscience Group, Centre de Recerca Matemàtica, Campus de Bellaterra, Edifici C, 08193, Bellaterra, Spain
6
Vieth M, Triesch J. Stabilizing sequence learning in stochastic spiking networks with GABA-Modulated STDP. Neural Netw 2025; 183:106985. [PMID: 39667218 DOI: 10.1016/j.neunet.2024.106985]
Abstract
Cortical networks are capable of unsupervised learning and spontaneous replay of complex temporal sequences. Endowing artificial spiking neural networks with similar learning abilities remains a challenge. In particular, it is unresolved how different plasticity rules can contribute to both learning and the maintenance of network stability during learning. Here we introduce a biologically inspired form of GABA-Modulated Spike Timing-Dependent Plasticity (GMS) and demonstrate its ability to permit stable learning of complex temporal sequences including natural language in recurrent spiking neural networks. Motivated by biological findings, GMS utilizes the momentary level of inhibition onto excitatory cells to adjust both the magnitude and sign of Spike Timing-Dependent Plasticity (STDP) of connections between excitatory cells. In particular, high levels of inhibition in the network cause depression of excitatory-to-excitatory connections. We demonstrate the effectiveness of this mechanism during several sequence learning experiments with character- and token-based text inputs as well as visual input sequences. We show that GMS maintains stability during learning and spontaneous replay and permits the network to form a clustered hierarchical representation of its input sequences. Overall, we provide a biologically inspired model of unsupervised learning of complex sequences in recurrent spiking neural networks.
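The abstract specifies the qualitative mechanism (the momentary inhibition onto an excitatory cell adjusts both magnitude and sign of E-to-E STDP, with high inhibition causing depression) but not its exact functional form. A toy sketch of that gating, with an invented linear gate and threshold:

```python
import numpy as np

def stdp(dt, a_plus=0.010, a_minus=0.012, tau=20.0):
    """Plain pair-based STDP; dt = t_post - t_pre in ms."""
    return a_plus * np.exp(-dt / tau) if dt > 0 else -a_minus * np.exp(dt / tau)

def gms(dt, inhibition, threshold=0.5):
    """GABA-modulated STDP sketch: momentary inhibition onto the excitatory
    cell rescales plasticity, and above `threshold` every pairing depresses,
    matching the abstract's 'high levels of inhibition cause depression'.
    The linear gate and threshold value are illustrative assumptions."""
    gate = 1.0 - inhibition / threshold
    if gate >= 0.0:
        return gate * stdp(dt)          # weak inhibition: scaled-down STDP
    return gate * abs(stdp(dt))         # strong inhibition: depression only

print(gms(+10.0, 0.0), gms(+10.0, 1.0), gms(-10.0, 1.0))
```

Under this gate, causal pairings potentiate only while inhibition is low, which is the stabilizing feedback the paper exploits during sequence learning.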
Affiliation(s)
- Marius Vieth
  - Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
- Jochen Triesch
  - Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
7
Pemberton J, Chadderton P, Costa RP. Cerebellar-driven cortical dynamics can enable task acquisition, switching and consolidation. Nat Commun 2024; 15:10913. [PMID: 39738061 PMCID: PMC11686095 DOI: 10.1038/s41467-024-55315-6]
Abstract
The brain must maintain a stable world model while rapidly adapting to the environment, but the underlying mechanisms are not known. Here, we posit that cortico-cerebellar loops play a key role in this process. We introduce a computational model of cerebellar networks that learn to drive cortical networks with task-outcome predictions. First, using sensorimotor tasks, we show that cerebellar feedback in the presence of stable cortical networks is sufficient for rapid task acquisition and switching. Next, we demonstrate that, when trained in working memory tasks, the cerebellum can also underlie the maintenance of cognitive-specific dynamics in the cortex, explaining a range of optogenetic and behavioural observations. Finally, using our model, we introduce a systems consolidation theory in which task information is gradually transferred from the cerebellum to the cortex. In summary, our findings suggest that cortico-cerebellar loops are an important component of task acquisition, switching, and consolidation in the brain.
Affiliation(s)
- Joseph Pemberton
  - Computational Neuroscience Unit, Intelligent Systems Labs, Faculty of Engineering, University of Bristol, Bristol, UK
  - Centre for Neural Circuits and Behaviour, Department of Physiology, Anatomy and Genetics, Medical Sciences Division, University of Oxford, Oxford, UK
  - Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA
- Paul Chadderton
  - School of Physiology, Pharmacology and Neuroscience, Faculty of Life Sciences, University of Bristol, Bristol, UK
- Rui Ponte Costa
  - Computational Neuroscience Unit, Intelligent Systems Labs, Faculty of Engineering, University of Bristol, Bristol, UK
  - Centre for Neural Circuits and Behaviour, Department of Physiology, Anatomy and Genetics, Medical Sciences Division, University of Oxford, Oxford, UK
8
Zendrikov D, Paraskevov A. The vitals for steady nucleation maps of spontaneous spiking coherence in autonomous two-dimensional neuronal networks. Neural Netw 2024; 180:106589. [PMID: 39217864 DOI: 10.1016/j.neunet.2024.106589]
Abstract
Thin pancake-like neuronal networks cultured on top of a planar microelectrode array have been extensively tried out in neuroengineering, as a substrate for the mobile robot's control unit, i.e., as a cyborg's brain. Most of these attempts failed due to intricate self-organizing dynamics in the neuronal systems. In particular, the networks may exhibit an emergent spatial map of steady nucleation sites ("n-sites") of spontaneous population spikes. Being unpredictable and independent of the surface electrode locations, the n-sites drastically change local ability of the network to generate spikes. Here, using a spiking neuronal network model with generative spatially-embedded connectome, we systematically show in simulations that the number, location, and relative activity of spontaneously formed n-sites ("the vitals") crucially depend on the samplings of three distributions: (1) the network distribution of neuronal excitability, (2) the distribution of connections between neurons of the network, and (3) the distribution of maximal amplitudes of a single synaptic current pulse. Moreover, blocking the dynamics of a small fraction (about 4%) of non-pacemaker neurons having the highest excitability was enough to completely suppress the occurrence of population spikes and their n-sites. This key result is explained theoretically. Remarkably, the n-sites occur taking into account only short-term synaptic plasticity, i.e., without a Hebbian-type plasticity. As the spiking network model used in this study is strictly deterministic, all simulation results can be accurately reproduced. The model, which has already demonstrated a very high richness-to-complexity ratio, can also be directly extended into the three-dimensional case, e.g., for targeting peculiarities of spiking dynamics in cerebral (or brain) organoids. We recommend the model as an excellent illustrative tool for teaching network-level computational neuroscience, complementing a few benchmark models.
Affiliation(s)
- Dmitrii Zendrikov
  - Institute of Neuroinformatics, University of Zurich and ETH Zurich, 8057 Zurich, Switzerland
9
Kappel D, Tetzlaff C. Synapses learn to utilize stochastic pre-synaptic release for the prediction of postsynaptic dynamics. PLoS Comput Biol 2024; 20:e1012531. [PMID: 39495714 PMCID: PMC11534197 DOI: 10.1371/journal.pcbi.1012531]
Abstract
Synapses in the brain are highly noisy, which leads to a large trial-by-trial variability. Given how costly synapses are in terms of energy consumption, these high levels of noise are surprising. Here we propose that synapses use noise to represent uncertainties about the somatic activity of the postsynaptic neuron. To show this, we developed a mathematical framework, in which the synapse as a whole interacts with the soma of the postsynaptic neuron in a similar way to an agent that is situated and behaves in an uncertain, dynamic environment. This framework suggests that synapses use an implicit internal model of the somatic membrane dynamics that is being updated by a synaptic learning rule, which resembles experimentally well-established LTP/LTD mechanisms. In addition, this approach entails that a synapse utilizes its inherently noisy synaptic release to also encode its uncertainty about the state of the somatic potential. Although each synapse strives for predicting the somatic dynamics of its postsynaptic neuron, we show that the emergent dynamics of many synapses in a neuronal network resolve different learning problems such as pattern classification or closed-loop control in a dynamic environment. Hereby, synapses coordinate themselves to represent and utilize uncertainties on the network level in behaviorally ambiguous situations.
Affiliation(s)
- David Kappel
  - III. Physikalisches Institut – Biophysik, Georg-August Universität, Göttingen, Germany
  - Institut für Neuroinformatik, Ruhr-Universität Bochum, Bochum, Germany
- Christian Tetzlaff
  - III. Physikalisches Institut – Biophysik, Georg-August Universität, Göttingen, Germany
  - Group of Computational Synaptic Physiology, Department for Neuro- and Sensory Physiology, University Medical Center Göttingen, Göttingen, Germany
10
Sosis B, Rubin JE. Distinct dopaminergic spike-timing-dependent plasticity rules are suited to different functional roles. bioRxiv 2024:2024.06.24.600372. [PMID: 38979377 PMCID: PMC11230239 DOI: 10.1101/2024.06.24.600372]
Abstract
Various mathematical models have been formulated to describe the changes in synaptic strengths resulting from spike-timing-dependent plasticity (STDP). A subset of these models include a third factor, dopamine, which interacts with spike timing to contribute to plasticity at specific synapses, notably those from cortex to striatum at the input layer of the basal ganglia. Theoretical work to analyze these plasticity models has largely focused on abstract issues, such as the conditions under which they may promote synchronization and the weight distributions induced by inputs with simple correlation structures, rather than on scenarios associated with specific tasks, and has generally not considered dopamine-dependent forms of STDP. In this paper we introduce three forms of dopamine-modulated STDP adapted from previously proposed plasticity rules. We then analyze, mathematically and with simulations, their performance in three biologically relevant scenarios. We test the ability of each of the three models to maintain its weights in the face of noise and to complete simple reward prediction and action selection tasks, studying the learned weight distributions and corresponding task performance in each setting. Interestingly, we find that each plasticity rule is well suited to a subset of the scenarios studied but falls short in others. Different tasks may therefore require different forms of synaptic plasticity, yielding the prediction that the precise form of the STDP mechanism present may vary across regions of the striatum, and other brain areas impacted by dopamine, that are involved in distinct computational functions.
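The abstract does not spell out its three dopamine-modulated rules, but a common construction for this family of models (a plausible illustration here, not necessarily any of the paper's three) multiplies an STDP-driven eligibility trace by a delayed dopamine signal, so the weight only changes when reward arrives and reads the trace out:

```python
import numpy as np

def run_trial(dopamine, dt_spike=10.0, tau_e=200.0, steps=400):
    """Three-factor sketch: one pre/post pairing at t=0 writes an
    STDP-shaped eligibility trace that decays with time constant tau_e;
    the weight changes only when dopamine arrives and reads the trace,
    dw = lr * DA(t) * e(t). All constants are illustrative."""
    sign = 1.0 if dt_spike > 0 else -1.0
    e = sign * 0.01 * np.exp(-abs(dt_spike) / 20.0)   # eligibility from pairing
    w, lr = 0.0, 1.0
    for t in range(steps):                  # 1 ms steps after the pairing
        w += lr * dopamine(t) * e
        e *= np.exp(-1.0 / tau_e)           # trace decays toward zero
    return w

late_reward = lambda t: 1.0 if 100 <= t < 110 else 0.0  # dopamine 100 ms later
no_reward = lambda t: 0.0

print(run_trial(late_reward), run_trial(no_reward))
```

Without dopamine the weight never moves, which is what lets such rules solve the reward-prediction and action-selection tasks the abstract studies while plain STDP cannot.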
Affiliation(s)
- Baram Sosis
  - Department of Mathematics, University of Pittsburgh, 301 Thackeray Hall, Pittsburgh, 15260, PA, USA
- Jonathan E. Rubin
  - Department of Mathematics, University of Pittsburgh, 301 Thackeray Hall, Pittsburgh, 15260, PA, USA
  - Center for the Neural Basis of Cognition, University of Pittsburgh, 4400 Fifth Ave, Pittsburgh, 15213, PA, USA
11
Shakhawat AMD, Foltz JG, Nance AB, Bhateja J, Raymond JL. Systemic pharmacological suppression of neural activity reverses learning impairment in a mouse model of Fragile X syndrome. eLife 2024; 12:RP92543. [PMID: 38953282 PMCID: PMC11219043 DOI: 10.7554/elife.92543]
Abstract
The enhancement of associative synaptic plasticity often results in impaired rather than enhanced learning. Previously, we proposed that such learning impairments can result from saturation of the plasticity mechanism (Nguyen-Vu et al., 2017), or, more generally, from a history-dependent change in the threshold for plasticity. This hypothesis was based on experimental results from mice lacking two class I major histocompatibility molecules, MHCI H2-Kb and H2-Db (MHCI KbDb-/-), which have enhanced associative long-term depression at the parallel fiber-Purkinje cell synapses in the cerebellum (PF-Purkinje cell LTD). Here, we extend this work by testing predictions of the threshold metaplasticity hypothesis in a second mouse line with enhanced PF-Purkinje cell LTD, the Fmr1 knockout mouse model of Fragile X syndrome (FXS). Mice lacking Fmr1 gene expression in cerebellar Purkinje cells (L7-Fmr1 KO) were selectively impaired on two oculomotor learning tasks in which PF-Purkinje cell LTD has been implicated, with no impairment on LTD-independent oculomotor learning tasks. Consistent with the threshold metaplasticity hypothesis, behavioral pre-training designed to reverse LTD at the PF-Purkinje cell synapses eliminated the oculomotor learning deficit in the L7-Fmr1 KO mice, as previously reported in MHCI KbDb-/- mice. In addition, diazepam treatment to suppress neural activity and thereby limit the induction of associative LTD during the pre-training period also eliminated the learning deficits in L7-Fmr1 KO mice. These results support the hypothesis that cerebellar LTD-dependent learning is governed by an experience-dependent sliding threshold for plasticity. An increased threshold for LTD in response to elevated neural activity would tend to oppose firing rate stability, but could serve to stabilize synaptic weights and recently acquired memories. The metaplasticity perspective could inform the development of new clinical approaches for addressing learning impairments in autism and other disorders of the nervous system.
Affiliation(s)
- Amin MD Shakhawat
  - Department of Neurobiology, Stanford University, Stanford, United States
- Adam B Nance
  - Department of Neurobiology, Stanford University, Stanford, United States
- Jaydev Bhateja
  - Department of Neurobiology, Stanford University, Stanford, United States
12
Agnes EJ, Vogels TP. Co-dependent excitatory and inhibitory plasticity accounts for quick, stable and long-lasting memories in biological networks. Nat Neurosci 2024; 27:964-974. [PMID: 38509348 PMCID: PMC11089004 DOI: 10.1038/s41593-024-01597-4]
Abstract
The brain's functionality is developed and maintained through synaptic plasticity. As synapses undergo plasticity, they also affect each other. The nature of such 'co-dependency' is difficult to disentangle experimentally, because multiple synapses must be monitored simultaneously. To help understand the experimentally observed phenomena, we introduce a framework that formalizes synaptic co-dependency between different connection types. The resulting model explains how inhibition can gate excitatory plasticity while neighboring excitatory-excitatory interactions determine the strength of long-term potentiation. Furthermore, we show how the interplay between excitatory and inhibitory synapses can account for the quick rise and long-term stability of a variety of synaptic weight profiles, such as orientation tuning and dendritic clustering of co-active synapses. In recurrent neuronal networks, co-dependent plasticity produces rich and stable motor cortex-like dynamics with high input sensitivity. Our results suggest an essential role for the neighborly synaptic interaction during learning, connecting micro-level physiology with network-wide phenomena.
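The gating described in this abstract, where inhibition can veto excitatory plasticity, can be caricatured in a few lines. The sketch below is our own illustration, not the authors' model; the function name, the hard gating form, and all constants are assumptions:

```python
def codependent_step(w_exc, pre, post, g_inh, eta=0.05, gate=0.5):
    """Toy co-dependent update: the excitatory Hebbian term is expressed
    only to the degree that local inhibitory input g_inh stays below a
    gating level. At or above the gate, no potentiation occurs."""
    gating = max(0.0, 1.0 - g_inh / gate)   # inhibition closes the gate
    return w_exc + eta * gating * pre * post
```

With inhibition absent the full Hebbian step is applied; with inhibition at or above the gating level the weight is frozen, which is one simple way inhibition can decide when excitatory learning happens.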
Affiliation(s)
- Everton J Agnes
- Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, UK.
- Biozentrum, University of Basel, Basel, Switzerland.
- Tim P Vogels
- Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, UK
- Institute of Science and Technology Austria, Klosterneuburg, Austria
13
Albesa-González A, Clopath C. Learning with filopodia and spines: Complementary strong and weak competition lead to specialized, graded, and protected receptive fields. PLoS Comput Biol 2024; 20:e1012110. [PMID: 38743789 PMCID: PMC11125506 DOI: 10.1371/journal.pcbi.1012110] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2023] [Revised: 05/24/2024] [Accepted: 04/25/2024] [Indexed: 05/16/2024] Open
Abstract
Filopodia are thin synaptic protrusions that have long been known to play an important role in early development. Recently, they have been found to be more abundant in the adult cortex than previously thought, and more plastic than spines (button-shaped mature synapses). Inspired by these findings, we introduce a new model of synaptic plasticity that jointly describes learning of filopodia and spines. The model assumes that filopodia exhibit strongly competitive learning dynamics, similar to additive spike-timing-dependent plasticity (STDP). At the same time, it proposes that, if filopodia undergo sufficient potentiation, they consolidate into spines. Spines follow weakly competitive learning, classically associated with multiplicative, soft-bounded models of STDP. This makes spines more stable and sensitive to the fine structure of input correlations. We show that our learning rule has a selectivity comparable to additive STDP and captures input correlations as well as multiplicative models of STDP. We also show how it can protect previously formed memories and perform synaptic consolidation. Overall, our results can be seen as a phenomenological description of how filopodia and spines could cooperate to overcome the individual difficulties faced by strong and weak competition mechanisms.
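The two competition regimes contrasted in this abstract map onto a standard modeling choice: whether the STDP step depends on the current weight. As a loose illustration (our sketch, not the authors' joint filopodia/spine model; all parameter values are assumptions), a pair-based update can be switched between an additive, weight-independent step and a multiplicative, soft-bounded one:

```python
import numpy as np

def stdp_step(w, dt, mode, a_plus=0.01, a_minus=0.012, tau=20.0, w_max=1.0):
    """One pair-based STDP update for a pre/post spike-time difference dt (ms).

    mode='additive': weight-independent step (strong competition, filopodia-like).
    mode='multiplicative': soft-bounded, weight-dependent step (weak competition,
    spine-like). Constants here are illustrative, not from the paper.
    """
    if dt > 0:  # pre spike precedes post spike -> potentiation
        step = a_plus * np.exp(-dt / tau)
        if mode == "multiplicative":
            step *= (w_max - w)          # step shrinks as w approaches w_max
        return min(w + step, w_max)
    else:       # post precedes pre -> depression
        step = a_minus * np.exp(dt / tau)
        if mode == "multiplicative":
            step *= w                    # step shrinks as w approaches 0
        return max(w - step, 0.0)
```

Near the bounds the multiplicative step vanishes, which is what makes the spine-like regime stable and graded, while the additive step stays full-sized and strongly competitive.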
Affiliation(s)
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, United Kingdom
14
Zhou H, Bi GQ, Liu G. Intracellular magnesium optimizes transmission efficiency and plasticity of hippocampal synapses by reconfiguring their connectivity. Nat Commun 2024; 15:3406. [PMID: 38649706 PMCID: PMC11035601 DOI: 10.1038/s41467-024-47571-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2023] [Accepted: 04/02/2024] [Indexed: 04/25/2024] Open
Abstract
Synapses at dendritic branches exhibit specific properties for information processing. However, how the synapses are orchestrated to dynamically modify their properties, thus optimizing information processing, remains elusive. Here, we observed diverse configurations of synaptic connectivity at hippocampal dendritic branches, two extremes of which are characterized by low transmission efficiency, high plasticity and coding capacity, or the converse. The former favors information encoding, pertinent to learning, while the latter prefers information storage, relevant to memory. Presynaptic intracellular Mg2+ crucially mediates the dynamic transition continuously between the two extreme configurations. Consequently, varying intracellular Mg2+ levels endow individual branches with diverse synaptic computations, thus modulating their ability to process information. Notably, elevating brain Mg2+ levels in aging animals restores a synaptic configuration resembling that of young animals, coincident with improved learning and memory. These findings establish intracellular Mg2+ as a crucial factor reconfiguring synaptic connectivity at dendrites, thus optimizing their branch-specific properties in information processing.
Affiliation(s)
- Hang Zhou
- Faculty of Life and Health Sciences, Shenzhen University of Advanced Technology, Shenzhen, 518107, China.
- Interdisciplinary Center for Brain Information, Brain Cognition and Brain Disease Institute, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China.
- Guo-Qiang Bi
- Faculty of Life and Health Sciences, Shenzhen University of Advanced Technology, Shenzhen, 518107, China
- Interdisciplinary Center for Brain Information, Brain Cognition and Brain Disease Institute, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Shenzhen-Hong Kong Institute of Brain Science, Shenzhen, 518055, China
- Hefei National Laboratory for Physical Sciences at the Microscale, and School of Life Sciences, University of Science and Technology of China, Hefei, 230031, China
- Guosong Liu
- School of Medicine, Tsinghua University, Beijing, 100084, China.
- NeuroCentria Inc., Walnut Creek, CA, 94596, USA.
15
Gouda M, Abreu S, Bienstman P. Surrogate gradient learning in spiking networks trained on event-based cytometry dataset. Opt Express 2024; 32:16260-16272. [PMID: 38859258 DOI: 10.1364/oe.518323] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/18/2024] [Accepted: 04/03/2024] [Indexed: 06/12/2024]
Abstract
Spiking neural networks (SNNs) are bio-inspired neural networks that, to an extent, mimic the workings of our brains. In a similar fashion, event-based vision sensors try to replicate a biological eye as closely as possible. In this work, we integrate both technologies for the purpose of classifying micro-particles in the context of label-free flow cytometry. We follow up on our previous work, in which we used simple logistic regression with binary labels. Although this model was able to achieve an accuracy of over 98%, our goal is to utilize the system for a wider variety of cells, some of which may have less noticeable morphological variations. Therefore, a more advanced machine learning model like the SNNs discussed here would be required. This comes with the challenge of training such networks, since they typically suffer from vanishing gradients. We effectively apply the surrogate gradient method to overcome this issue, achieving over 99% classification accuracy on test data for a four-class problem. Finally, rather than treating the neural network as a black box, we explore the dynamics inside the network and make use of that to enhance its accuracy and sparsity.
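The surrogate gradient method mentioned above keeps the non-differentiable spike in the forward pass and swaps in a smooth stand-in for its derivative during backpropagation. A minimal NumPy sketch of the idea (our own illustration, not the paper's implementation; the fast-sigmoid form and the constant `beta` are assumptions):

```python
import numpy as np

def spike(v, v_th=1.0):
    """Forward pass: Heaviside spike nonlinearity (non-differentiable)."""
    return (v >= v_th).astype(float)

def surrogate_dspike_dv(v, v_th=1.0, beta=10.0):
    """Backward pass: fast-sigmoid surrogate used in place of the true
    derivative, which is zero almost everywhere (and undefined at v_th)."""
    return 1.0 / (beta * np.abs(v - v_th) + 1.0) ** 2

def grad_step(w, x, target, lr=0.5):
    """One illustrative gradient step for a single neuron with membrane
    v = w * x and loss (s - target)^2, using the surrogate in the chain rule."""
    v = w * x
    s = spike(np.array([v]))[0]
    dL_ds = 2.0 * (s - target)
    ds_dv = surrogate_dspike_dv(v)
    return w - lr * dL_ds * ds_dv * x
```

Even when the neuron is silent (sub-threshold, so the true gradient would be exactly zero), the surrogate still pushes the weight toward the target, which is what makes gradient descent workable in SNNs.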
16
Elliott T. Stability against fluctuations: a two-dimensional study of scaling, bifurcations and spontaneous symmetry breaking in stochastic models of synaptic plasticity. Biol Cybern 2024; 118:39-81. [PMID: 38583095 PMCID: PMC11602831 DOI: 10.1007/s00422-024-00985-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/12/2022] [Accepted: 02/12/2024] [Indexed: 04/08/2024]
Abstract
Stochastic models of synaptic plasticity must confront the corrosive influence of fluctuations in synaptic strength on patterns of synaptic connectivity. To solve this problem, we have proposed that synapses act as filters, integrating plasticity induction signals and expressing changes in synaptic strength only upon reaching filter threshold. Our earlier analytical study calculated the lifetimes of quasi-stable patterns of synaptic connectivity with synaptic filtering. We showed that the plasticity step size in a stochastic model of spike-timing-dependent plasticity (STDP) acts as a temperature-like parameter, exhibiting a critical value below which neuronal structure formation occurs. The filter threshold scales this temperature-like parameter downwards, cooling the dynamics and enhancing stability. A key step in this calculation was a resetting approximation, essentially reducing the dynamics to one-dimensional processes. Here, we revisit our earlier study to examine this resetting approximation, with the aim of understanding in detail why it works so well by comparing it, and a simpler approximation, to the system's full dynamics consisting of various embedded two-dimensional processes without resetting. Comparing the full system to the simpler approximation, to our original resetting approximation, and to a one-afferent system, we show that their equilibrium distributions of synaptic strengths and critical plasticity step sizes are all qualitatively similar, and increasingly quantitatively similar as the filter threshold increases. This increasing similarity is due to the decorrelation in changes in synaptic strength between different afferents caused by our STDP model, and the amplification of this decorrelation with larger synaptic filters.
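The synapse-as-filter mechanism described here can be sketched in a few lines: induction signals accumulate, and a strength change is expressed only when the integrated signal reaches the filter threshold, after which the filter resets. This is an illustrative toy with a hypothetical threshold, not the paper's full stochastic STDP process:

```python
def filtered_plasticity(induction_signals, theta=3):
    """Synapse-as-filter: accumulate +1 (potentiating) and -1 (depressing)
    induction signals; express a strength change only when the integrated
    signal reaches +/- theta, then reset. A larger theta suppresses
    fluctuation-driven changes, cooling the dynamics."""
    state, expressed = 0, []
    for s in induction_signals:
        state += s
        if state >= theta:
            expressed.append(+1)   # potentiation expressed
            state = 0
        elif state <= -theta:
            expressed.append(-1)   # depression expressed
            state = 0
    return expressed
```

Balanced noise (alternating +1/-1) never crosses the threshold and so produces no expressed changes, which is the stabilizing effect the abstract attributes to larger filter thresholds.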
Affiliation(s)
- Terry Elliott
- Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton, SO17 1BJ, UK.
17
Lagzi F, Fairhall AL. Emergence of co-tuning in inhibitory neurons as a network phenomenon mediated by randomness, correlations, and homeostatic plasticity. Sci Adv 2024; 10:eadi4350. [PMID: 38507489 DOI: 10.1126/sciadv.adi4350] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/26/2023] [Accepted: 02/15/2024] [Indexed: 03/22/2024]
Abstract
Cortical excitatory neurons show clear tuning to stimulus features, but the tuning properties of inhibitory interneurons are ambiguous. While inhibitory neurons have often been considered largely untuned, some studies show that parvalbumin-expressing (PV) neurons can exhibit feature selectivity and participate in co-tuned subnetworks with pyramidal neurons. In this study, we first use mean-field theory to demonstrate that a combination of homeostatic plasticity governing the synaptic dynamics of the connections from PV to excitatory neurons, heterogeneity in the excitatory postsynaptic potentials that impinge on PV neurons, and shared correlated input from layer 4 results in the functional and structural self-organization of PV subnetworks. Second, we show that structural and functional feature tuning of PV neurons emerges more clearly at the network level, i.e., that population-level measures identify functional and structural co-tuning of PV neurons that is not evident in pairwise individual-level measures. Finally, we show that such co-tuning can enhance network stability at the cost of reduced feature selectivity.
Affiliation(s)
- Fereshteh Lagzi
- Department of Physiology and Biophysics, University of Washington, 1705 NE Pacific Street, Seattle, WA 98195-7290, USA
- Computational Neuroscience Center, University of Washington, 1705 NE Pacific Street, Seattle, WA 98195-7290, USA
- Adrienne L Fairhall
- Department of Physiology and Biophysics, University of Washington, 1705 NE Pacific Street, Seattle, WA 98195-7290, USA
- Computational Neuroscience Center, University of Washington, 1705 NE Pacific Street, Seattle, WA 98195-7290, USA
18
Rich MT, Worobey SJ, Mankame S, Pang ZP, Swinford-Jackson SE, Pierce RC. Sex-dependent fear memory impairment in cocaine-sired rat offspring. Sci Adv 2023; 9:eadf6039. [PMID: 37851809 PMCID: PMC10584337 DOI: 10.1126/sciadv.adf6039] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Accepted: 09/14/2023] [Indexed: 10/20/2023]
Abstract
Cocaine self-administration by male rats results in neuronal and behavioral alterations in offspring, including altered responses to cocaine. Given the high degree of overlap between the brain systems underlying the pathological responses to cocaine and stress, we examined whether sire cocaine taking would influence fear-associated behavioral effects in drug-naïve adult male and female progeny. Sire cocaine exposure had no effect on contextual fear conditioning or its extinction in either male or female offspring. During cued fear conditioning, freezing behavior was enhanced in female, but not male, cocaine-sired progeny. In contrast, male cocaine-sired progeny exhibited enhanced expression of cue-conditioned fear during extinction. Long-term potentiation (LTP) in the basolateral amygdala (BLA), which encodes fear conditioning, was robust in female offspring but completely absent in male offspring of cocaine-exposed sires. Collectively, these results indicate that cued fear memory is enhanced in the male progeny of cocaine-exposed sires, which also have BLA synaptic plasticity deficits.
Affiliation(s)
- Matthew T. Rich
- Brain Health Institute and Department of Psychiatry, Rutgers University, Piscataway, NJ 08854, USA
- Samantha J. Worobey
- Brain Health Institute and Department of Psychiatry, Rutgers University, Piscataway, NJ 08854, USA
- Sharvari Mankame
- Brain Health Institute and Department of Psychiatry, Rutgers University, Piscataway, NJ 08854, USA
- Zhiping P. Pang
- Child Health Institute and Department of Neuroscience & Cell Biology, Rutgers University, New Brunswick, NJ 08901, USA
- R. Christopher Pierce
- Brain Health Institute and Department of Psychiatry, Rutgers University, Piscataway, NJ 08854, USA
19
Eggl MF, Chater TE, Petkovic J, Goda Y, Tchumatchenko T. Linking spontaneous and stimulated spine dynamics. Commun Biol 2023; 6:930. [PMID: 37696988 PMCID: PMC10495434 DOI: 10.1038/s42003-023-05303-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2023] [Accepted: 08/29/2023] [Indexed: 09/13/2023] Open
Abstract
Our brains continuously acquire and store memories through synaptic plasticity. However, spontaneous synaptic changes can also occur and pose a challenge for maintaining stable memories. Despite fluctuations in synapse size, recent studies have shown that key population-level synaptic properties remain stable over time. This raises the question of how local synaptic plasticity affects the global population-level synaptic size distribution and whether individual synapses undergoing plasticity escape the stable distribution to encode specific memories. To address this question, we (i) studied spontaneously evolving spines and (ii) induced synaptic potentiation at selected sites while observing the spine distribution pre- and post-stimulation. We designed a stochastic model to describe how the current size of a synapse affects its future size under baseline and stimulation conditions and how these local effects give rise to population-level synaptic shifts. Our study offers insights into how seemingly spontaneous synaptic fluctuations and local plasticity both contribute to population-level synaptic dynamics.
Affiliation(s)
- Maximilian F Eggl
- University of Mainz Medical Center, Anselm-Franz-von-Bentzel-Weg 3, 55128, Mainz, Germany
- Thomas E Chater
- Laboratory for Synaptic Plasticity and Connectivity, RIKEN Center for Brain Science, Wako-shi, Saitama, Japan
- Department of Physiology, Keio University School of Medicine, Tokyo, Japan
- Janko Petkovic
- University of Mainz Medical Center, Anselm-Franz-von-Bentzel-Weg 3, 55128, Mainz, Germany
- Yukiko Goda
- Laboratory for Synaptic Plasticity and Connectivity, RIKEN Center for Brain Science, Wako-shi, Saitama, Japan
- Synapse Biology Unit, Okinawa Institute of Science and Technology Graduate University, Onna-son, Kunigami-gun, Okinawa, Japan
- Tatjana Tchumatchenko
- University of Mainz Medical Center, Anselm-Franz-von-Bentzel-Weg 3, 55128, Mainz, Germany.
- Institute of Experimental Epileptology and Cognition Research, University of Bonn Medical Center, Venusberg-Campus 1, 53127, Bonn, Germany.
20
Li J, Gong M, Wang X, Fan F, Zhang B. Triphenylamine-Based Helical Polymer for Flexible Memristors. Biomimetics (Basel) 2023; 8:391. [PMID: 37754142 PMCID: PMC10526500 DOI: 10.3390/biomimetics8050391] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2023] [Revised: 08/15/2023] [Accepted: 08/24/2023] [Indexed: 09/28/2023] Open
Abstract
Flexible nonvolatile memristors have potential applications in wearable devices. In this work, a helical polymer, poly (N, N-diphenylanline isocyanide) (PPIC), was synthesized as the active layer, and flexible electronic devices with an Al/PPIC/ITO architecture were prepared on a polyethylene terephthalate (PET) substrate. The device showed typical nonvolatile rewritable memristor characteristics. The high-molecular-weight helical structure stabilized the active layer under different bending degrees, bending times, and number of bending cycles. The memristor was further employed to simulate the information transmission capability of neural fibers, providing new perspectives for the development of flexible wearable memristors and biomimetic neural synapses. This demonstration highlights the promising possibilities for the advancement of artificial intelligence skin and intelligent flexible robots in the future.
Affiliation(s)
- Jinyong Li
- Key Laboratory for Advanced Materials and Joint International Research Laboratory of Precision Chemistry and Molecular Engineering, School of Chemistry and Molecular Engineering, East China University of Science and Technology, Shanghai 200237, China
- Minglei Gong
- Shanghai i-Reader Biotech Co., Ltd., Shanghai 201100, China
- Xiaoyang Wang
- Guangxi Key Laboratory of Information Material, Engineering Research Center of Electronic Information Materials and Devices, School of Material Science and Engineering, Guilin University of Electronic Technology, Guilin 541200, China
- Fei Fan
- Shanghai i-Reader Biotech Co., Ltd., Shanghai 201100, China
- Bin Zhang
- Key Laboratory for Advanced Materials and Joint International Research Laboratory of Precision Chemistry and Molecular Engineering, School of Chemistry and Molecular Engineering, East China University of Science and Technology, Shanghai 200237, China
21
Yiling Y, Shapcott K, Peter A, Klon-Lipok J, Xuhui H, Lazar A, Singer W. Robust encoding of natural stimuli by neuronal response sequences in monkey visual cortex. Nat Commun 2023; 14:3021. [PMID: 37231014 DOI: 10.1038/s41467-023-38587-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2022] [Accepted: 05/08/2023] [Indexed: 05/27/2023] Open
Abstract
Parallel multisite recordings in the visual cortex of trained monkeys revealed that the responses of spatially distributed neurons to natural scenes are ordered in sequences. The rank order of these sequences is stimulus-specific and maintained even if the absolute timing of the responses is modified by manipulating stimulus parameters. The stimulus specificity of these sequences was highest when they were evoked by natural stimuli and deteriorated for stimulus versions in which certain statistical regularities were removed. This suggests that the response sequences result from a matching operation between sensory evidence and priors stored in the cortical network. Decoders trained on sequence order performed as well as decoders trained on rate vectors but the former could decode stimulus identity from considerably shorter response intervals than the latter. A simulated recurrent network reproduced similarly structured stimulus-specific response sequences, particularly once it was familiarized with the stimuli through non-supervised Hebbian learning. We propose that recurrent processing transforms signals from stationary visual scenes into sequential responses whose rank order is the result of a Bayesian matching operation. If this temporal code were used by the visual system it would allow for ultrafast processing of visual scenes.
Affiliation(s)
- Yang Yiling
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528, Frankfurt am Main, Germany
- International Max Planck Research School (IMPRS) for Neural Circuits, 60438, Frankfurt am Main, Germany
- Faculty of Biological Sciences, Goethe-University Frankfurt am Main, 60438, Frankfurt am Main, Germany
- Katharine Shapcott
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528, Frankfurt am Main, Germany
- Frankfurt Institute for Advanced Studies, 60438, Frankfurt am Main, Germany
- Alina Peter
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528, Frankfurt am Main, Germany
- International Max Planck Research School (IMPRS) for Neural Circuits, 60438, Frankfurt am Main, Germany
- Faculty of Biological Sciences, Goethe-University Frankfurt am Main, 60438, Frankfurt am Main, Germany
- Johanna Klon-Lipok
- Max Planck Institute for Brain Research, 60438, Frankfurt am Main, Germany
- Huang Xuhui
- Intelligent Science and Technology Academy, China Aerospace Science and Industry Corporation (CASIC), 100144, Beijing, China
- Institute of Automation, Chinese Academy of Sciences, 100190, Beijing, China
- Andreea Lazar
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528, Frankfurt am Main, Germany
- Wolf Singer
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528, Frankfurt am Main, Germany.
- Frankfurt Institute for Advanced Studies, 60438, Frankfurt am Main, Germany.
- Max Planck Institute for Brain Research, 60438, Frankfurt am Main, Germany.
22
Aceituno PV, Farinha MT, Loidl R, Grewe BF. Learning cortical hierarchies with temporal Hebbian updates. Front Comput Neurosci 2023; 17:1136010. [PMID: 37293353 PMCID: PMC10244748 DOI: 10.3389/fncom.2023.1136010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2023] [Accepted: 04/25/2023] [Indexed: 06/10/2023] Open
Abstract
A key driver of mammalian intelligence is the ability to represent incoming sensory information across multiple abstraction levels. For example, in the visual ventral stream, incoming signals are first represented as low-level edge filters and then transformed into high-level object representations. Similar hierarchical structures routinely emerge in artificial neural networks (ANNs) trained for object recognition tasks, suggesting that similar structures may underlie biological neural networks. However, the classical ANN training algorithm, backpropagation, is considered biologically implausible, and thus alternative biologically plausible training methods have been developed, such as Equilibrium Propagation, Deep Feedback Control, Supervised Predictive Coding, and Dendritic Error Backpropagation. Several of those models propose that local errors are calculated for each neuron by comparing apical and somatic activities. Nevertheless, from a neuroscience perspective, it is not clear how a neuron could compare compartmental signals. Here, we propose a solution to this problem by letting the apical feedback signal change the postsynaptic firing rate and combining this with a differential Hebbian update, a rate-based version of classical spike-timing-dependent plasticity (STDP). We prove that weight updates of this form minimize two alternative loss functions that we show to be equivalent to the error-based losses used in machine learning: the inference latency and the amount of top-down feedback necessary. Moreover, we show that the use of differential Hebbian updates works similarly well in other feedback-based deep learning frameworks such as Predictive Coding or Equilibrium Propagation. Finally, our work removes a key requirement of biologically plausible models for deep learning and proposes a learning mechanism that would explain how temporal Hebbian learning rules can implement supervised hierarchical learning.
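A differential Hebbian update, as described above, ties the weight change to the presynaptic rate times the temporal derivative of the postsynaptic rate, so that apical feedback which raises the postsynaptic rate potentiates coincident inputs while a falling rate depresses them. A discrete-time sketch (our illustration; the learning rate and the use of `np.gradient` are assumptions, not the paper's implementation):

```python
import numpy as np

def differential_hebbian_update(w, pre, post, dt=1.0, eta=0.01):
    """Rate-based differential Hebbian rule: dw/dt ~ eta * pre(t) * d(post)/dt,
    a rate analogue of pairwise STDP. `pre` and `post` are rate traces
    sampled every dt; the update integrates their product over the window."""
    dpost_dt = np.gradient(post, dt)      # temporal derivative of post rate
    return w + eta * np.sum(pre * dpost_dt) * dt
```

With an active presynaptic input, a rising postsynaptic rate yields potentiation and a falling one yields depression, which is the sign structure the paper exploits to turn top-down feedback into a supervised learning signal.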
Affiliation(s)
- Pau Vilimelis Aceituno
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- ETH AI Center, ETH Zurich, Zurich, Switzerland
- Reinhard Loidl
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Benjamin F. Grewe
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- ETH AI Center, ETH Zurich, Zurich, Switzerland
23
Schmidgall S, Hays J. Meta-SpikePropamine: learning to learn with synaptic plasticity in spiking neural networks. Front Neurosci 2023; 17:1183321. [PMID: 37250397 PMCID: PMC10213417 DOI: 10.3389/fnins.2023.1183321] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2023] [Accepted: 04/06/2023] [Indexed: 05/31/2023] Open
Abstract
We propose that in order to harness our understanding of neuroscience toward machine learning, we must first have powerful tools for training brain-like models of learning. Although substantial progress has been made toward understanding the dynamics of learning in the brain, neuroscience-derived models of learning have yet to demonstrate the same performance capabilities as methods in deep learning such as gradient descent. Inspired by the successes of machine learning using gradient descent, we introduce a bi-level optimization framework that seeks to both solve online learning tasks and improve the ability to learn online using models of plasticity from neuroscience. We demonstrate that models of three-factor learning with synaptic plasticity taken from the neuroscience literature can be trained in Spiking Neural Networks (SNNs) with gradient descent via a framework of learning-to-learn to address challenging online learning problems. This framework opens a new path toward developing neuroscience inspired online learning algorithms.
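Three-factor rules of the kind referenced above generally share one structure: a decaying eligibility trace driven by pre/post coincidence is converted into a weight change only when a third, modulatory signal arrives. A minimal sketch with illustrative constants (our own generic form, not the paper's trained plasticity model):

```python
def three_factor_step(w, e, pre, post, m, eta=0.1, tau_e=20.0, dt=1.0):
    """One Euler step of a generic three-factor rule. The pre/post
    coincidence is written into a decaying eligibility trace e; a weight
    change is expressed only when the third factor m (e.g. a
    neuromodulatory or error signal) is nonzero."""
    e = e + dt * (-e / tau_e + pre * post)   # eligibility trace dynamics
    w = w + eta * m * e * dt                 # modulator gates the update
    return w, e
```

The decay constant `tau_e` sets how long after a coincidence the modulator can still claim it, which is what lets such rules bridge the delay between activity and a reward or error signal.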
Affiliation(s)
- Samuel Schmidgall
- U.S. Naval Research Laboratory, Spacecraft Engineering Department, Washington, DC, United States
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, United States
- Joe Hays
- U.S. Naval Research Laboratory, Spacecraft Engineering Department, Washington, DC, United States
24
Wilmes KA, Clopath C. Dendrites help mitigate the plasticity-stability dilemma. Sci Rep 2023; 13:6543. [PMID: 37085642 PMCID: PMC10121616 DOI: 10.1038/s41598-023-32410-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2022] [Accepted: 03/27/2023] [Indexed: 04/23/2023] Open
Abstract
With Hebbian learning ('neurons that fire together wire together'), well-known problems arise. Hebbian plasticity can cause unstable network dynamics and overwrite stored memories. Because the known homeostatic plasticity mechanisms tend to be too slow to combat unstable dynamics, it has been proposed that plasticity must be highly gated and synaptic strengths limited. While solving the issue of stability, gating and limiting plasticity does not solve the stability-plasticity dilemma. We propose that dendrites enable both stable network dynamics and considerable synaptic changes, as they allow the gating of plasticity in a compartment-specific manner. We investigate how gating plasticity influences network stability in plastic balanced spiking networks of neurons with dendrites. We compare how different ways of gating plasticity, namely modulating excitability, learning rate, and inhibition, increase stability. We investigate how dendritic versus perisomatic gating allows for different amounts of weight change in stable networks. We suggest that the compartmentalisation of pyramidal cells enables dendritic synaptic changes while maintaining stability. We show that the coupling between dendrite and soma is critical for the plasticity-stability trade-off. Finally, we show that spatially restricted plasticity additionally improves stability.
Affiliation(s)
- Katharina A Wilmes
- Imperial College London, London, United Kingdom.
- University of Bern, Bern, Switzerland.
25
Research Progress of spiking neural network in image classification: a review. Appl Intell 2023. [DOI: 10.1007/s10489-023-04553-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/11/2023]
26
Dehghani-Habibabadi M, Pawelzik K. Synaptic self-organization of spatio-temporal pattern selectivity. PLoS Comput Biol 2023; 19:e1010876. [PMID: 36780564 PMCID: PMC9977062 DOI: 10.1371/journal.pcbi.1010876] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2022] [Revised: 03/01/2023] [Accepted: 01/17/2023] [Indexed: 02/15/2023] Open
Abstract
Spiking model neurons can be set up to respond selectively to specific spatio-temporal spike patterns by optimization of their input weights. It is unknown, however, whether existing synaptic plasticity mechanisms can achieve this temporal mode of neuronal coding and computation. Here it is shown that changes of synaptic efficacies which tend to balance excitatory and inhibitory synaptic inputs can make neurons sensitive to particular input spike patterns. Simulations demonstrate that a combination of Hebbian mechanisms, hetero-synaptic plasticity and synaptic scaling is sufficient for self-organizing sensitivity to spatio-temporal spike patterns that repeat in the input. In networks, inclusion of hetero-synaptic plasticity that depends on the pre-synaptic neurons leads to specialization and faithful representation of pattern sequences by a group of target neurons. Pattern detection is robust against a range of distortions and noise. The proposed combination of Hebbian mechanisms, hetero-synaptic plasticity and synaptic scaling is found to protect the memories for specific patterns from being overwritten by ongoing learning during extended periods when the patterns are not present. This suggests a novel explanation for the long-term robustness of memory traces despite ongoing activity with substantial synaptic plasticity. Taken together, our results promote the plausibility of precise temporal coding in the brain.
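The combination of mechanisms credited in this abstract can be caricatured in a single update (a loose sketch with assumed forms and constants, not the paper's model): a Hebbian term potentiates inputs co-active with the postsynaptic neuron, a hetero-synaptic term depresses the inactive inputs, and synaptic scaling renormalizes the total input strength.

```python
import numpy as np

def combined_plasticity_step(w, pre, post, eta=0.05, eta_h=0.01, w_total=1.0):
    """One update combining the three mechanisms named in the abstract:
    1) Hebbian term: potentiate inputs active together with the post neuron;
    2) hetero-synaptic term: depress the remaining (inactive) inputs;
    3) synaptic scaling: renormalize the summed weight toward w_total."""
    w = w + eta * post * pre               # Hebbian potentiation
    w = w - eta_h * post * (pre == 0)      # hetero-synaptic depression
    w = np.clip(w, 0.0, None)
    s = w.sum()
    if s > 0:
        w = w * (w_total / s)              # multiplicative synaptic scaling
    return w
```

Because the total stays fixed while co-active inputs gain at the expense of the rest, repeated presentation of a pattern sharpens selectivity without runaway growth, which is the stabilizing interplay the simulations rely on.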
Affiliation(s)
- Klaus Pawelzik
- Institute for Theoretical Physics, University of Bremen, Bremen, Germany

27
Liu W, Liu X. Pre-stimulus network responses affect information coding in neural variability quenching. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2023.02.003]
28
The effects of distractors on brightness perception based on a spiking network. Sci Rep 2023; 13:1517. [PMID: 36707550 PMCID: PMC9883501 DOI: 10.1038/s41598-023-28326-4]
Abstract
Visual perception can be modified by the surrounding context. In particular, experimental observations have demonstrated that visual perception and primary visual cortical responses can be modified by the properties of surrounding distractors; however, the underlying mechanism remains unclear. In this paper, to simulate primary visual cortical activities, we design a k-winner-take-all (k-WTA) spiking network whose responses are generated through probabilistic inference. In simulations, images with the same target and various surrounding distractors serve as stimuli. Distractors are designed with multiple varying properties, including luminance, size, and distance to the target; simulations for each varying property are performed with the other properties fixed. Each property can modify second-layer neural responses and interactions in the network. For the same target in the designed images, the modified network responses can simulate distinct brightness perception consistent with experimental observations. Our model provides a possible explanation of how surrounding distractors modify primary visual cortical responses to induce varied brightness perception of a given target.
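A k-winner-take-all stage like the one named in this abstract can be sketched as selecting the k most strongly driven units and silencing the rest. This toy function is only illustrative; it does not implement the paper's probabilistic-inference network.

```python
import numpy as np

def k_wta(inputs, k):
    """Return a boolean mask keeping only the k largest responses."""
    winners = np.argsort(inputs)[-k:]          # indices of the k strongest units
    mask = np.zeros_like(inputs, dtype=bool)
    mask[winners] = True
    return mask

# Toy "second-layer" responses to a target surrounded by distractors:
responses = np.array([0.2, 0.9, 0.4, 0.7, 0.1, 0.8])
active = k_wta(responses, k=3)
print(active)   # only the 3 most-driven units remain active
```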
29
Sun C, Liu X, Jiang Q, Ye X, Zhu X, Li RW. Emerging electrolyte-gated transistors for neuromorphic perception. Sci Technol Adv Mater 2023; 24:2162325. [PMID: 36684849 PMCID: PMC9848240 DOI: 10.1080/14686996.2022.2162325]
Abstract
With the rapid development of intelligent robotics, the Internet of Things, and smart sensor technologies, great enthusiasm has been devoted to developing next-generation intelligent systems that emulate the advanced perception functions of humans. Neuromorphic devices, capable of emulating the learning, memory, analysis, and recognition functions of biological neural systems, offer solutions for intelligently processing sensory information. As one of the most important classes of neuromorphic devices, electrolyte-gated transistors (EGTs) have shown great promise in implementing various vital neural functions and good compatibility with sensors. This review introduces the materials, operating principles, and performance of EGTs, followed by a discussion of recent progress in EGT-based synapse and neuron emulation. The integration of EGTs with sensors to faithfully emulate diverse human perception functions, such as tactile and visual perception, is then discussed. Finally, the challenges facing the further development of EGTs are outlined.
Affiliation(s)
- Cui Sun
- CAS Key Laboratory of Magnetic Materials and Devices, and Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Xuerong Liu
- CAS Key Laboratory of Magnetic Materials and Devices, and Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Qian Jiang
- CAS Key Laboratory of Magnetic Materials and Devices, and Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- College of Materials Sciences and Opto-Electronic Technology, University of Chinese Academy of Sciences, Beijing, China
- Xiaoyu Ye
- CAS Key Laboratory of Magnetic Materials and Devices, and Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- College of Materials Sciences and Opto-Electronic Technology, University of Chinese Academy of Sciences, Beijing, China
- Xiaojian Zhu
- CAS Key Laboratory of Magnetic Materials and Devices, and Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- College of Materials Sciences and Opto-Electronic Technology, University of Chinese Academy of Sciences, Beijing, China
- Run-Wei Li
- CAS Key Laboratory of Magnetic Materials and Devices, and Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- College of Materials Sciences and Opto-Electronic Technology, University of Chinese Academy of Sciences, Beijing, China

30
Kasai H. Unraveling the mysteries of dendritic spine dynamics: Five key principles shaping memory and cognition. Proc Jpn Acad Ser B Phys Biol Sci 2023; 99:254-305. [PMID: 37821392 PMCID: PMC10749395 DOI: 10.2183/pjab.99.018]
Abstract
Recent research extends our understanding of brain processes beyond just action potentials and chemical transmissions within neural circuits, emphasizing the mechanical forces generated by excitatory synapses on dendritic spines to modulate presynaptic function. From in vivo and in vitro studies, we outline five central principles of synaptic mechanics in brain function:
- P1: Stability - Underpinning the integral relationship between the structure and function of the spine synapses.
- P2: Extrinsic dynamics - Highlighting synapse-selective structural plasticity, which plays a crucial role in Hebbian associative learning, distinct from pathway-selective long-term potentiation (LTP) and depression (LTD).
- P3: Neuromodulation - Analyzing the role of G-protein-coupled receptors, particularly dopamine receptors, in time-sensitive modulation of associative learning frameworks such as Pavlovian classical conditioning and Thorndike's reinforcement learning (RL).
- P4: Instability - Addressing the intrinsic dynamics crucial to memory management during continual learning, spotlighting their role in "spine dysgenesis" associated with mental disorders.
- P5: Mechanics - Exploring how synaptic mechanics influence both sides of synapses to establish structural traces of short- and long-term memory, thereby aiding the integration of mental functions.
We also delve into the historical background and foresee impending challenges.
Affiliation(s)
- Haruo Kasai
- International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Bunkyo-ku, Tokyo, Japan
- Laboratory of Structural Physiology, Center for Disease Biology and Integrative Medicine, Faculty of Medicine, The University of Tokyo, Bunkyo-ku, Tokyo, Japan

31
Fanselow MS. Negative valence systems: sustained threat and the predatory imminence continuum. Emerg Top Life Sci 2022; 6:467-477. [PMID: 36286244 PMCID: PMC9788377 DOI: 10.1042/etls20220003]
Abstract
This review describes the relationship between the National Institute of Mental Health (U.S.A.) Research Domain Criteria (RDoC) Negative Valence System related to responses to threat and the Predatory Imminence Continuum model of antipredator defensive behavior. While the original RDoC constructs of Potential Threat (anxiety) and Acute Threat (fear) fit well with the pre-encounter and post-encounter defense modes of the predatory imminence model, the Sustained Threat construct does not. Early research on the bed nuclei of the stria terminalis (BST) suggested that this region was important when fear responding needed to be sustained for a prolonged duration. However, follow-up studies indicated that the BST becomes critical not because responses need to be sustained, but rather when the stimuli triggering fear are more difficult to learn about, particularly when aversive stimuli are difficult to predict accurately. Instead, it is argued that the BST and the hippocampus act to expand the range of conditions that can trigger post-encounter defense (Acute Threat). It is further suggested that sustained threat refers to situations where the predatory imminence continuum becomes distorted, causing defensive behavior to intrude into times when organisms should be engaging in other adaptive behaviors. Stress is seen as something that can cause a long-term disturbance of the continuum, and this disturbance is a state of sustained threat.
Affiliation(s)
- Michael S Fanselow
- Staglin Center for Brain and Behavioral Health, University of California, Los Angeles, California, U.S.A.
- Department of Psychology, University of California, Los Angeles, California, U.S.A.
- Department of Psychiatry and Biobehavioral Sciences, University of California, Los Angeles, California, U.S.A.

32
Dorkenwald S, Turner NL, Macrina T, Lee K, Lu R, Wu J, Bodor AL, Bleckert AA, Brittain D, Kemnitz N, Silversmith WM, Ih D, Zung J, Zlateski A, Tartavull I, Yu SC, Popovych S, Wong W, Castro M, Jordan CS, Wilson AM, Froudarakis E, Buchanan J, Takeno MM, Torres R, Mahalingam G, Collman F, Schneider-Mizell CM, Bumbarger DJ, Li Y, Becker L, Suckow S, Reimer J, Tolias AS, Macarico da Costa N, Reid RC, Seung HS. Binary and analog variation of synapses between cortical pyramidal neurons. eLife 2022; 11:e76120. [PMID: 36382887 PMCID: PMC9704804 DOI: 10.7554/elife.76120]
Abstract
Learning from experience depends at least in part on changes in neuronal connections. We present the largest map of connectivity to date between cortical neurons of a defined type (layer 2/3 [L2/3] pyramidal cells in mouse primary visual cortex), which was enabled by automated analysis of serial section electron microscopy images with improved handling of image defects (250 × 140 × 90 μm³ volume). We used the map to identify constraints on the learning algorithms employed by the cortex. Previous cortical studies modeled a continuum of synapse sizes by a log-normal distribution. A continuum is consistent with most neural network models of learning, in which synaptic strength is a continuously graded analog variable. Here, we show that synapse size, when restricted to synapses between L2/3 pyramidal cells, is well modeled by the sum of a binary variable and an analog variable drawn from a log-normal distribution. Two synapses sharing the same presynaptic and postsynaptic cells are known to be correlated in size. We show that the binary variables of the two synapses are highly correlated, while the analog variables are not. Binary variation could be the outcome of a Hebbian or other synaptic plasticity rule depending on activity signals that are relatively uniform across neuronal arbors, while analog variation may be dominated by other influences such as spontaneous dynamical fluctuations. We discuss the implications for the longstanding hypothesis that activity-dependent plasticity switches synapses between bistable states.
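The reported size model can be illustrated by sampling sizes as the sum of a binary state variable and a log-normal analog variable. All parameters below (`p_large`, `binary_step`, and the log-normal moments) are arbitrary placeholders, not the fitted values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_synapse_sizes(n, p_large=0.5, binary_step=0.3,
                         lognorm_mu=-1.0, lognorm_sigma=0.5):
    """Sample n synapse sizes as (binary state) + (log-normal analog part).

    The binary component models a bistable small/large synaptic state; the
    log-normal component models graded analog variation on top of it.
    """
    binary = rng.random(n) < p_large                      # bistable component
    analog = rng.lognormal(lognorm_mu, lognorm_sigma, n)  # graded component
    return binary * binary_step + analog

sizes = sample_synapse_sizes(10_000)
print(sizes.mean())
```

A histogram of `sizes` would show the bimodality that a single log-normal continuum cannot produce.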
Affiliation(s)
- Sven Dorkenwald
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Computer Science Department, Princeton University, Princeton, United States
- Nicholas L Turner
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Computer Science Department, Princeton University, Princeton, United States
- Thomas Macrina
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Computer Science Department, Princeton University, Princeton, United States
- Kisuk Lee
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Brain & Cognitive Sciences Department, Massachusetts Institute of Technology, Cambridge, United States
- Ran Lu
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Jingpeng Wu
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Agnes L Bodor
- Allen Institute for Brain Science, Seattle, United States
- Nico Kemnitz
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Dodam Ih
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Jonathan Zung
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Aleksandar Zlateski
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Ignacio Tartavull
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Szi-Chieh Yu
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Sergiy Popovych
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Computer Science Department, Princeton University, Princeton, United States
- William Wong
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Manuel Castro
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Chris S Jordan
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Alyssa M Wilson
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Emmanouil Froudarakis
- Department of Neuroscience, Baylor College of Medicine, Houston, United States
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, United States
- Marc M Takeno
- Allen Institute for Brain Science, Seattle, United States
- Russel Torres
- Allen Institute for Brain Science, Seattle, United States
- Yang Li
- Allen Institute for Brain Science, Seattle, United States
- Lynne Becker
- Allen Institute for Brain Science, Seattle, United States
- Shelby Suckow
- Allen Institute for Brain Science, Seattle, United States
- Jacob Reimer
- Department of Neuroscience, Baylor College of Medicine, Houston, United States
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, United States
- Andreas S Tolias
- Department of Neuroscience, Baylor College of Medicine, Houston, United States
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, United States
- Department of Electrical and Computer Engineering, Rice University, Houston, United States
- R Clay Reid
- Allen Institute for Brain Science, Seattle, United States
- H Sebastian Seung
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Computer Science Department, Princeton University, Princeton, United States

33
Paradoxical self-sustained dynamics emerge from orchestrated excitatory and inhibitory homeostatic plasticity rules. Proc Natl Acad Sci U S A 2022; 119:e2200621119. [PMID: 36251988 PMCID: PMC9618084 DOI: 10.1073/pnas.2200621119]
Abstract
Cortical networks have the remarkable ability to self-assemble into dynamic regimes in which excitatory positive feedback is balanced by recurrent inhibition. This inhibition-stabilized regime is increasingly viewed as the default dynamic regime of the cortex, but how it emerges in an unsupervised manner remains unknown. We prove that classic forms of homeostatic plasticity are unable to drive recurrent networks to an inhibition-stabilized regime due to the well-known paradoxical effect. We next derive a novel family of cross-homeostatic rules that lead to the unsupervised emergence of inhibition-stabilized networks. These rules shed new light on how the brain may reach its default dynamic state and provide a valuable tool to self-assemble artificial neural networks into ideal computational regimes.
Self-sustained neural activity maintained through local recurrent connections is of fundamental importance to cortical function. Converging theoretical and experimental evidence indicates that cortical circuits generating self-sustained dynamics operate in an inhibition-stabilized regime. Theoretical work has established that four sets of weights (WE←E, WE←I, WI←E, and WI←I) must obey specific relationships to produce inhibition-stabilized dynamics, but it is not known how the brain can appropriately set the values of all four weight classes in an unsupervised manner to be in the inhibition-stabilized regime. We prove that standard homeostatic plasticity rules are generally unable to generate inhibition-stabilized dynamics and that their instability is caused by a signature property of inhibition-stabilized networks: the paradoxical effect. In contrast, we show that a family of "cross-homeostatic" rules overcome the paradoxical effect and robustly lead to the emergence of stable dynamics. This work provides a model of how—beginning from a silent network—self-sustained inhibition-stabilized dynamics can emerge from learning rules governing all four synaptic weight classes in an orchestrated manner.
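For orientation, the "classic" homeostatic family that this abstract says fails can be sketched as each weight class nudging its postsynaptic population's firing rate toward a set-point. The set-points, learning rate, and sign conventions below are illustrative assumptions, not the paper's equations.

```python
def homeostatic_step(Wee, Wei, Wie, Wii, rE, rI, rhoE=5.0, rhoI=14.0, eta=1e-3):
    """One step of a classic homeostatic rule over the four weight classes
    (sketch with toy set-points): each weight is adjusted so that its target
    population's rate rE or rI moves toward its set-point rhoE or rhoI."""
    eE, eI = rhoE - rE, rhoI - rI
    Wee += eta * eE   # more recurrent excitation when E is below its set-point
    Wei -= eta * eE   # less inhibition onto E when E is below its set-point
    Wie += eta * eI   # more excitation onto I when I is below its set-point
    Wii -= eta * eI   # less recurrent inhibition when I is below its set-point
    return Wee, Wei, Wie, Wii

# E population too quiet (rE < set-point): E->E grows, I->E shrinks.
weights = homeostatic_step(1.0, 1.0, 1.0, 1.0, rE=4.0, rI=14.0)
print(weights)
```

The paper's point is that this family destabilizes inhibition-stabilized networks via the paradoxical effect, which motivates its cross-homeostatic alternative (not sketched here).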
34
Garg N, Balafrej I, Stewart TC, Portal JM, Bocquet M, Querlioz D, Drouin D, Rouat J, Beilliard Y, Alibart F. Voltage-dependent synaptic plasticity: Unsupervised probabilistic Hebbian plasticity rule based on neurons membrane potential. Front Neurosci 2022; 16:983950. [PMID: 36340782 PMCID: PMC9634260 DOI: 10.3389/fnins.2022.983950]
Abstract
This study proposes voltage-dependent synaptic plasticity (VDSP), a novel brain-inspired unsupervised local learning rule for the online implementation of Hebb's plasticity mechanism on neuromorphic hardware. The proposed VDSP learning rule updates the synaptic conductance only on the spike of the postsynaptic neuron, which halves the number of updates with respect to standard spike-timing-dependent plasticity (STDP). The update depends on the membrane potential of the presynaptic neuron, which is readily available as part of the neuron implementation and hence requires no additional memory for storage. Moreover, the update is regularized by the synaptic weight, preventing weights from exploding or vanishing under repeated stimulation. A rigorous mathematical analysis draws an equivalence between VDSP and STDP. To validate the system-level performance of VDSP, we train a single-layer spiking neural network (SNN) to recognize handwritten digits. We report 85.01 ± 0.76% (mean ± SD) accuracy for a network of 100 output neurons on the MNIST dataset. The performance improves when scaling up the network (89.93 ± 0.41% for 400 output neurons, 90.56 ± 0.27% for 500 neurons), which validates the applicability of the proposed learning rule to spatial pattern recognition tasks; future work will consider more complicated tasks. Interestingly, the learning rule adapts better than STDP to the frequency of the input signal and does not require hand-tuning of hyperparameters.
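A rough sketch of the update logic described above (triggered only by a postsynaptic spike, signed by the presynaptic membrane potential, soft-bounded by the weight) might look as follows. All constants, and the linear voltage mapping, are assumptions for illustration, not the published rule.

```python
def vdsp_update(w, v_pre, post_spiked, v_rest=-65.0, v_thresh=-50.0,
                lr=0.01, w_max=1.0):
    """Voltage-dependent plasticity sketch (illustrative constants):
    weights change only when the postsynaptic neuron spikes; the sign of the
    change follows the presynaptic membrane potential, and each term is
    scaled by the distance to a weight bound so weights neither explode nor
    vanish under repeated stimulation."""
    if not post_spiked:
        return w                                       # update only on a post spike
    drive = (v_pre - v_rest) / (v_thresh - v_rest)     # ~0 at rest, ~1 near threshold
    if drive > 0.5:
        return w + lr * drive * (w_max - w)            # depolarized pre -> potentiate
    return w - lr * (1.0 - drive) * w                  # near-rest pre -> depress

w = vdsp_update(0.5, v_pre=-52.0, post_spiked=True)    # depolarized presynapse
print(w)
```

Because the presynaptic voltage is already maintained by any neuron implementation, the rule needs no extra spike-time memory, which is the storage saving the abstract highlights.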
Affiliation(s)
- Nikhil Garg
- Institut Interdisciplinaire d’Innovation Technologique (3IT), Université de Sherbrooke, Sherbrooke, QC, Canada
- Laboratoire Nanotechnologies Nanosystèmes (LN2)—CNRS UMI-3463, Université de Sherbrooke, Sherbrooke, QC, Canada
- Institute of Electronics, Microelectronics and Nanotechnology (IEMN), Université de Lille, Villeneuve-d’Ascq, France
- Ismael Balafrej
- Institut Interdisciplinaire d’Innovation Technologique (3IT), Université de Sherbrooke, Sherbrooke, QC, Canada
- Laboratoire Nanotechnologies Nanosystèmes (LN2)—CNRS UMI-3463, Université de Sherbrooke, Sherbrooke, QC, Canada
- NECOTIS Research Lab, Department of Electrical and Computer Engineering, University of Sherbrooke, Sherbrooke, QC, Canada
- Terrence C. Stewart
- National Research Council Canada, University of Waterloo Collaboration Centre, Waterloo, ON, Canada
- Jean-Michel Portal
- Aix-Marseille Université, Université de Toulon, CNRS, IM2NP, Marseille, France
- Marc Bocquet
- Institute of Electronics, Microelectronics and Nanotechnology (IEMN), Université de Lille, Villeneuve-d’Ascq, France
- Damien Querlioz
- Université Paris-Saclay, CNRS, Centre de Nanosciences et de Nanotechnologies, Palaiseau, France
- Dominique Drouin
- Institut Interdisciplinaire d’Innovation Technologique (3IT), Université de Sherbrooke, Sherbrooke, QC, Canada
- Laboratoire Nanotechnologies Nanosystèmes (LN2)—CNRS UMI-3463, Université de Sherbrooke, Sherbrooke, QC, Canada
- Jean Rouat
- Institut Interdisciplinaire d’Innovation Technologique (3IT), Université de Sherbrooke, Sherbrooke, QC, Canada
- Laboratoire Nanotechnologies Nanosystèmes (LN2)—CNRS UMI-3463, Université de Sherbrooke, Sherbrooke, QC, Canada
- NECOTIS Research Lab, Department of Electrical and Computer Engineering, University of Sherbrooke, Sherbrooke, QC, Canada
- Yann Beilliard
- Institut Interdisciplinaire d’Innovation Technologique (3IT), Université de Sherbrooke, Sherbrooke, QC, Canada
- Laboratoire Nanotechnologies Nanosystèmes (LN2)—CNRS UMI-3463, Université de Sherbrooke, Sherbrooke, QC, Canada
- Fabien Alibart
- Institut Interdisciplinaire d’Innovation Technologique (3IT), Université de Sherbrooke, Sherbrooke, QC, Canada
- Laboratoire Nanotechnologies Nanosystèmes (LN2)—CNRS UMI-3463, Université de Sherbrooke, Sherbrooke, QC, Canada
- Institute of Electronics, Microelectronics and Nanotechnology (IEMN), Université de Lille, Villeneuve-d’Ascq, France

35
Ehsani M, Jost J. Self-organized criticality in a mesoscopic model of excitatory-inhibitory neuronal populations by short-term and long-term synaptic plasticity. Front Comput Neurosci 2022; 16:910735. [PMID: 36299476 PMCID: PMC9588946 DOI: 10.3389/fncom.2022.910735]
Abstract
The dynamics of an interconnected population of excitatory and inhibitory spiking neurons wandering around a Bogdanov-Takens (BT) bifurcation point can generate the observed scale-free avalanches at the population level and the highly variable spike patterns of individual neurons. These characteristics match experimental findings for spontaneous intrinsic activity in the brain. In this paper, we address the mechanisms that cause the system to reach and remain near this BT point. We propose an effective stochastic neural field model which captures the dynamics of the mean-field model. We show how the network tunes itself, through local long-term synaptic plasticity by STDP and short-term synaptic depression, to stay close to this bifurcation point. The mesoscopic model that we derive matches the directed percolation model at the absorbing-state phase transition.
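The short-term synaptic depression ingredient mentioned above can be sketched with a standard Tsodyks-Markram-style resource variable: resources recover toward 1 with a time constant and a fraction is consumed by each presynaptic spike. The constants are illustrative, and the STDP component of the paper's self-tuning mechanism is omitted.

```python
def std_step(x, spike, dt=1.0, tau_rec=200.0, U=0.3):
    """Euler step of a short-term-depression resource variable (sketch):
    x recovers toward 1 with time constant tau_rec (ms); each presynaptic
    spike consumes a fraction U of the remaining resources."""
    x += dt * (1.0 - x) / tau_rec   # slow recovery toward full resources
    if spike:
        x -= U * x                  # fast depletion on a spike
    return x

x = 1.0
trace = []
for t in range(5):
    x = std_step(x, spike=(t == 0))  # one spike at t=0, then recovery
    trace.append(x)
print(trace)
```

The fast depletion / slow recovery asymmetry is what lets repeated activity transiently weaken excitatory feedback in such models.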
Affiliation(s)
- Masud Ehsani
- Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany
- Jürgen Jost
- Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany
- Santa Fe Institute, Santa Fe, NM, United States

36
Bialas M, Mandziuk J. Spike-Timing-Dependent Plasticity With Activation-Dependent Scaling for Receptive Fields Development. IEEE Trans Neural Netw Learn Syst 2022; 33:5215-5228. [PMID: 33844634 DOI: 10.1109/tnnls.2021.3069683]
Abstract
Spike-timing-dependent plasticity (STDP) is one of the most popular and deeply biologically motivated forms of unsupervised Hebbian-type learning. In this article, we propose a variant of STDP extended by an additional activation-dependent scale factor. The resulting learning rule is an efficient algorithm, which is simple to implement and applicable to spiking neural networks (SNNs). It is demonstrated that the proposed plasticity mechanism combined with competitive learning can serve as an effective mechanism for the unsupervised development of receptive fields (RFs). Furthermore, the relationship between synaptic scaling and lateral inhibition is explored in the context of the successful development of RFs. Specifically, we demonstrate that maintaining a high level of synaptic scaling followed by its rapid increase is crucial for the development of neuronal mechanisms of selectivity. The strength of the proposed solution is assessed in classification tasks performed on the Modified National Institute of Standards and Technology (MNIST) data set, with accuracy of 94.65% (a single network) and 95.17% (a network committee), comparable to the state-of-the-art results of single-layer SNN architectures trained in an unsupervised manner. Furthermore, the training process leads to sparse data representation, and the developed RFs have the potential to serve as local feature detectors in multilayered spiking networks. We also prove theoretically that when applied to linear Poisson neurons, our rule conserves total synaptic strength, guaranteeing the convergence of the learning process.
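A pair-based STDP kernel with an extra multiplicative activation-dependent factor can be sketched as below. The specific scale factor proposed in the article differs; the `activation` argument here is only an illustrative stand-in, and the amplitudes and time constants are generic textbook values.

```python
import math

def stdp_dw(dt, activation, a_plus=0.01, a_minus=0.012,
            tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change scaled by an activation-dependent factor.

    dt = t_post - t_pre (ms). `activation` in [0, 1] multiplies the raw STDP
    kernel; the exact form of the scaling in the article is different.
    """
    if dt >= 0:   # pre before post -> potentiation branch
        dw = a_plus * math.exp(-dt / tau_plus)
    else:         # post before pre -> depression branch
        dw = -a_minus * math.exp(dt / tau_minus)
    return activation * dw

print(stdp_dw(dt=10.0, activation=0.8))   # positive: potentiation
print(stdp_dw(dt=-10.0, activation=0.8))  # negative: depression
```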
37
Dorman DB, Blackwell KT. Synaptic Plasticity Is Predicted by Spatiotemporal Firing Rate Patterns and Robust to In Vivo-like Variability. Biomolecules 2022; 12:1402. [PMID: 36291612 PMCID: PMC9599115 DOI: 10.3390/biom12101402]
Abstract
Synaptic plasticity, the experience-induced change in connections between neurons, underlies learning and memory in the brain. Most of our understanding of synaptic plasticity derives from in vitro experiments with precisely repeated stimulus patterns; however, neurons exhibit significant variability in vivo during repeated experiences. Further, the spatial pattern of synaptic inputs to the dendritic tree influences synaptic plasticity, yet is not considered in most synaptic plasticity rules. Here, we investigate how spatiotemporal synaptic input patterns produce plasticity under in vivo-like conditions using a data-driven computational model with a plasticity rule based on calcium dynamics. Using in vivo spike train recordings as inputs to different size clusters of spines, we show that plasticity is strongly robust to trial-to-trial variability of spike timing. In addition, we derive general synaptic plasticity rules describing how spatiotemporal patterns of synaptic inputs control the magnitude and direction of plasticity. Synapses that potentiate strongly have greater firing rates and calcium concentrations later in the trial, whereas synapses that depress strongly have higher firing rates early in the trial. Neighboring synaptic activity influences the direction and magnitude of synaptic plasticity, with small clusters of spines producing the greatest increase in synaptic strength. Together, our results reveal that calcium dynamics can unify diverse plasticity rules and show how spatiotemporal firing rate patterns control synaptic plasticity.
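Calcium-based plasticity rules of the kind referred to above are often formalized with two calcium thresholds: a high one for potentiation and an intermediate one for depression. This sketch uses illustrative thresholds and rates, not the paper's data-driven model.

```python
def calcium_plasticity(w, ca, theta_d=0.3, theta_p=0.6,
                       gamma_p=0.05, gamma_d=0.03, w_min=0.0, w_max=1.0):
    """Two-threshold calcium rule (sketch): calcium above theta_p potentiates,
    an intermediate level above theta_d depresses, and below both thresholds
    the weight is left unchanged. All constants are illustrative."""
    if ca >= theta_p:
        w += gamma_p * (w_max - w)   # high calcium: potentiate toward upper bound
    elif ca >= theta_d:
        w -= gamma_d * (w - w_min)   # intermediate calcium: depress toward lower bound
    return w

w_high = calcium_plasticity(0.5, ca=0.7)   # potentiation
w_mid = calcium_plasticity(0.5, ca=0.4)    # depression
w_low = calcium_plasticity(0.5, ca=0.1)    # no change
print(w_high, w_mid, w_low)
```

Because the calcium level integrates both firing rate and timing, one such rule can reproduce rate-dependent and timing-dependent plasticity results, which is the unification the abstract describes.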
Affiliation(s)
- Daniel B. Dorman
- Interdisciplinary Program in Neuroscience, George Mason University, Fairfax, VA 22030, USA
- Kim T. Blackwell
- Interdisciplinary Program in Neuroscience, George Mason University, Fairfax, VA 22030, USA
- Department of Bioengineering, Volgenau School of Engineering, George Mason University, Fairfax, VA 22030, USA

38
Miehl C, Onasch S, Festa D, Gjorgjieva J. Formation and computational implications of assemblies in neural circuits. J Physiol 2022. [PMID: 36068723 DOI: 10.1113/jp282750]
Abstract
In the brain, patterns of neural activity represent sensory information and store it in non-random synaptic connectivity. A prominent theoretical hypothesis states that assemblies, groups of neurons that are strongly connected to each other, are the key computational units underlying perception and memory formation. Compatible with these hypothesised assemblies, experiments have revealed groups of neurons that display synchronous activity, either spontaneously or upon stimulus presentation, and exhibit behavioural relevance. While it remains unclear how assemblies form in the brain, theoretical work has vastly contributed to the understanding of various interacting mechanisms in this process. Here, we review the recent theoretical literature on assembly formation by categorising the involved mechanisms into four components: synaptic plasticity, symmetry breaking, competition and stability. We highlight different approaches and assumptions behind assembly formation and discuss recent ideas of assemblies as the key computational unit in the brain.
Abstract figure legend: Assemblies are groups of strongly connected neurons formed by the interaction of multiple mechanisms and with vast computational implications. Four interacting components are thought to drive assembly formation: synaptic plasticity, symmetry breaking, competition and stability.
Affiliation(s)
- Christoph Miehl
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438, Frankfurt, Germany; School of Life Sciences, Technical University of Munich, 85354, Freising, Germany
- Sebastian Onasch
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438, Frankfurt, Germany; School of Life Sciences, Technical University of Munich, 85354, Freising, Germany
- Dylan Festa
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438, Frankfurt, Germany; School of Life Sciences, Technical University of Munich, 85354, Freising, Germany
- Julijana Gjorgjieva
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438, Frankfurt, Germany; School of Life Sciences, Technical University of Munich, 85354, Freising, Germany
39
Chrysanthidis N, Fiebig F, Lansner A, Herman P. Traces of semantization - from episodic to semantic memory in a spiking cortical network model. eNeuro 2022; 9:ENEURO.0062-22.2022. [PMID: 35803714 PMCID: PMC9347313 DOI: 10.1523/eneuro.0062-22.2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2022] [Revised: 05/05/2022] [Accepted: 05/28/2022] [Indexed: 11/21/2022] Open
Abstract
Episodic memory is a recollection of past personal experiences associated with particular times and places. This kind of memory is commonly subject to loss of contextual information or "semantization", which gradually decouples the encoded memory items from their associated contexts while transforming them into semantic or gist-like representations. Novel extensions to the classical Remember/Know behavioral paradigm attribute the loss of episodicity to multiple exposures of an item in different contexts. Despite recent advancements explaining semantization at a behavioral level, the underlying neural mechanisms remain poorly understood. In this study, we suggest and evaluate a novel hypothesis proposing that Bayesian-Hebbian synaptic plasticity mechanisms might cause semantization of episodic memory. We implement a cortical spiking neural network model with a Bayesian-Hebbian learning rule called Bayesian Confidence Propagation Neural Network (BCPNN), which captures the semantization phenomenon and offers a mechanistic explanation for it. Encoding items across multiple contexts leads to item-context decoupling akin to semantization. We compare BCPNN plasticity with the more commonly used spike-timing dependent plasticity (STDP) learning rule in the same episodic memory task. Unlike BCPNN, STDP does not explain the decontextualization process. We further examine how selective plasticity modulation of isolated salient events may enhance preferential retention and resistance to semantization. Our model reproduces important features of episodicity on behavioral timescales under various biological constraints whilst also offering a novel neural and synaptic explanation for semantization, thereby casting new light on the interplay between episodic and semantic memory processes. Significance Statement: Remembering single episodes is a fundamental attribute of cognition. Difficulty recollecting contextual information is a key sign of episodic memory loss or semantization. Behavioral studies demonstrate that semantization of episodic memory can occur rapidly, yet the neural mechanisms underlying this effect are insufficiently investigated. In line with recent behavioral findings, we show that multiple stimulus exposures in different contexts may advance item-context decoupling. We suggest a Bayesian-Hebbian synaptic plasticity hypothesis of memory semantization and further show that a transient modulation of plasticity during salient events may disrupt the decontextualization process by strengthening memory traces, thus enhancing preferential retention. The proposed cortical network-of-networks model thus bridges micro and mesoscale synaptic effects with network dynamics and behavior.
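The BCPNN rule at the centre of this study can be illustrated with a minimal batch sketch: each weight is set to the log odds of pre/post co-activation relative to chance. This is an illustrative simplification (the published model uses exponentially decaying activity traces and incremental updates; the data here are synthetic):

```python
import numpy as np

def bcpnn_weight(x, y, eps=1e-4):
    """Log-odds BCPNN-style weight: positive when x and y co-activate
    more often than expected by chance, near zero when independent."""
    p_x = x.mean() + eps
    p_y = y.mean() + eps
    p_xy = (x & y).mean() + eps * eps   # eps terms avoid log(0)
    return float(np.log(p_xy / (p_x * p_y)))

rng = np.random.default_rng(0)
T = 50_000
pre = rng.random(T) < 0.1        # binary presynaptic activity, ~10% on
post_ind = rng.random(T) < 0.1   # independent postsynaptic unit
post_cor = pre.copy()            # perfectly correlated postsynaptic unit

w_ind = bcpnn_weight(pre, post_ind)  # near zero: no association to encode
w_cor = bcpnn_weight(pre, post_cor)  # roughly log(1/0.1): strong coupling
```

Because the weight tracks statistical association rather than raw coincidence counts, repeated pairings of an item with many different contexts dilute each item-context association, which is the decoupling effect the abstract describes.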
Affiliation(s)
- Nikolaos Chrysanthidis
- Division of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, 10044 Stockholm, Sweden
- Florian Fiebig
- Division of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, 10044 Stockholm, Sweden
- Anders Lansner
- Division of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, 10044 Stockholm, Sweden
- Department of Mathematics, Stockholm University, 10691 Stockholm, Sweden
- Pawel Herman
- Division of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, 10044 Stockholm, Sweden
- Digital Futures, Stockholm, Sweden
- Swedish e-Science Research Centre, Stockholm, Sweden
40
Albesa-González A, Froc M, Williamson O, Rossum MCWV. Weight dependence in BCM leads to adjustable synaptic competition. J Comput Neurosci 2022; 50:431-444. [PMID: 35764852 PMCID: PMC9666303 DOI: 10.1007/s10827-022-00824-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2021] [Revised: 05/15/2022] [Accepted: 06/08/2022] [Indexed: 11/28/2022]
Abstract
Models of synaptic plasticity have been used to better understand neural development as well as learning and memory. One prominent classic model is the Bienenstock-Cooper-Munro (BCM) model that has been particularly successful in explaining plasticity of the visual cortex. Here, in an effort to include more biophysical detail in the BCM model, we incorporate 1) feedforward inhibition, and 2) the experimental observation that large synapses are relatively harder to potentiate than weak ones, while synaptic depression is proportional to the synaptic strength. These modifications change the outcome of unsupervised plasticity under the BCM model. The amount of feed-forward inhibition adds a parameter to BCM that turns out to determine the strength of competition. In the limit of strong inhibition the learning outcome is identical to standard BCM and the neuron becomes selective to one stimulus only (winner-take-all). For smaller values of inhibition, competition is weaker and the receptive fields are less selective. However, both BCM variants can yield realistic receptive fields.
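The weight dependence described above (potentiation harder for strong synapses, depression proportional to strength) can be sketched as a soft-bounded variant of the BCM update; the functional form and constants below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def bcm_weight_dependent_step(w, x, y, theta, eta=0.01, w_max=1.0):
    """One update of a weight-dependent BCM rule.

    Potentiation (y above threshold theta) is scaled by (w_max - w), so
    large weights are harder to potentiate; depression scales with w.
    """
    phi = y * (y - theta)              # classic BCM activation function
    if phi > 0:
        dw = eta * phi * x * (w_max - w)
    else:
        dw = eta * phi * x * w
    return float(np.clip(w + dw, 0.0, w_max))

def update_threshold(theta, y, tau=100.0):
    """Sliding BCM threshold tracks a running average of y**2."""
    return theta + (y * y - theta) / tau

w, theta = 0.5, 1.0
w_pot = bcm_weight_dependent_step(w, x=1.0, y=2.0, theta=theta)  # y > theta
w_dep = bcm_weight_dependent_step(w, x=1.0, y=0.5, theta=theta)  # y < theta
```

The (w_max - w) factor is one standard way to encode "large synapses are relatively harder to potentiate"; it also keeps weights bounded without a hard clip doing all the work.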
Affiliation(s)
- Albert Albesa-González
- School of Psychology and School of Mathematical Sciences, University of Nottingham, Nottingham, NG7 2RD, UK
- Maxime Froc
- School of Psychology and School of Mathematical Sciences, University of Nottingham, Nottingham, NG7 2RD, UK
- Oliver Williamson
- School of Psychology and School of Mathematical Sciences, University of Nottingham, Nottingham, NG7 2RD, UK
- Mark C W van Rossum
- School of Psychology and School of Mathematical Sciences, University of Nottingham, Nottingham, NG7 2RD, UK
41
Mizusaki BEP, Li SSY, Costa RP, Sjöström PJ. Pre- and postsynaptically expressed spike-timing-dependent plasticity contribute differentially to neuronal learning. PLoS Comput Biol 2022; 18:e1009409. [PMID: 35700188 PMCID: PMC9236267 DOI: 10.1371/journal.pcbi.1009409] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2021] [Revised: 06/27/2022] [Accepted: 05/11/2022] [Indexed: 11/18/2022] Open
Abstract
A plethora of experimental studies have shown that long-term synaptic plasticity can be expressed pre- or postsynaptically depending on a range of factors such as developmental stage, synapse type, and activity patterns. The functional consequences of this diversity are not clear, although it is understood that whereas postsynaptic expression of plasticity predominantly affects synaptic response amplitude, presynaptic expression alters both synaptic response amplitude and short-term dynamics. In most models of neuronal learning, long-term synaptic plasticity is implemented as changes in connective weights. The consideration of long-term plasticity as a fixed change in amplitude corresponds more closely to post- than to presynaptic expression, which means theoretical outcomes based on this choice of implementation may have a postsynaptic bias. To explore the functional implications of the diversity of expression of long-term synaptic plasticity, we adapted a model of long-term plasticity, more specifically spike-timing-dependent plasticity (STDP), such that it was expressed either independently pre- or postsynaptically, or in a mixture of both ways. We compared pair-based standard STDP models and a biologically tuned triplet STDP model, and investigated the outcomes in a minimal setting, using two different learning schemes: in the first, inputs were triggered at different latencies, and in the second a subset of inputs were temporally correlated. We found that presynaptic changes adjusted the speed of learning, while postsynaptic expression was more efficient at regulating spike timing and frequency. When combining both expression loci, postsynaptic changes amplified the response range, while presynaptic plasticity allowed control over postsynaptic firing rates, potentially providing a form of activity homeostasis. 
Our findings highlight how the seemingly innocuous choice of implementing synaptic plasticity by a single weight modification may unwittingly introduce a postsynaptic bias in modelling outcomes. We conclude that pre- and postsynaptically expressed plasticity are not interchangeable, but enable complementary functions. Differences between functional properties of pre- or postsynaptically expressed long-term plasticity have not yet been explored in much detail. In this paper, we used minimalist models of STDP with different expression loci, in search of fundamental functional consequences. Biologically, presynaptic expression acts mostly on neurotransmitter release, thereby altering short-term synaptic dynamics, whereas postsynaptic expression affects mainly synaptic gain. We compared models where plasticity was expressed only presynaptically or postsynaptically, or in both ways. We found that postsynaptic plasticity had a greater impact on response times, while both pre- and postsynaptic plasticity were similarly capable of detecting correlated inputs. A model with biologically tuned expression of plasticity achieved the same outcome over a range of frequencies. Also, postsynaptic spiking frequency was not directly affected by presynaptic plasticity alone; however, in combination with a postsynaptic component, it helped restrain positive feedback, contributing to activity homeostasis. In conclusion, expression locus may determine affinity for distinct coding schemes while also contributing to keeping activity within bounds. Our findings highlight the importance of carefully implementing expression of plasticity in biological modelling, since the locus of expression may affect functional outcomes in simulations.
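The pre/post distinction can be made concrete with a toy release model: a postsynaptic locus scales quantal gain q (amplitude only), while a presynaptic locus changes release probability p, which also alters short-term depression. A hypothetical sketch with a Tsodyks-Markram-style resource variable (not the authors' exact model; constants are illustrative):

```python
import math

def epsp_train(p, q, n=5, isi=20.0, tau_rec=200.0):
    """Amplitudes of n successive EPSPs at a fixed inter-spike interval.

    p: release probability (presynaptic expression locus)
    q: quantal gain (postsynaptic expression locus)
    A resource fraction R depletes by factor (1 - p) at each spike and
    recovers exponentially with time constant tau_rec between spikes.
    """
    R, amps = 1.0, []
    for _ in range(n):
        amps.append(q * p * R)
        R = R * (1.0 - p)                               # depletion by release
        R = 1.0 - (1.0 - R) * math.exp(-isi / tau_rec)  # partial recovery
    return amps

low_p = epsp_train(p=0.2, q=1.0)
high_p = epsp_train(p=0.6, q=1.0)  # larger first EPSP, stronger depression
high_q = epsp_train(p=0.2, q=3.0)  # all EPSPs scaled, dynamics unchanged
```

Raising p changes both the amplitude and the paired-pulse ratio, whereas raising q rescales the whole train without touching its short-term dynamics, which is the asymmetry the abstract attributes to the two expression loci.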
Affiliation(s)
- Beatriz Eymi Pimentel Mizusaki
- Centre for Research in Neuroscience, Brain Repair and Integrative Neuroscience Programme, Departments of Medicine, Neurology and Neurosurgery, The Research Institute of the McGill University Health Centre, Montreal General Hospital, Montreal, Quebec, Canada
- Instituto de Física, Universidade Federal do Rio Grande do Sul, Porto Alegre, Rio Grande do Sul, Brazil
- Computational Neuroscience Unit, Department of Computer Science, SCEEM, Faculty of Engineering, University of Bristol, Bristol, United Kingdom
- Sally Si Ying Li
- Centre for Research in Neuroscience, Brain Repair and Integrative Neuroscience Programme, Departments of Medicine, Neurology and Neurosurgery, The Research Institute of the McGill University Health Centre, Montreal General Hospital, Montreal, Quebec, Canada
- Rui Ponte Costa
- Computational Neuroscience Unit, Department of Computer Science, SCEEM, Faculty of Engineering, University of Bristol, Bristol, United Kingdom
- Department of Physiology, University of Bern, Bern, Switzerland
- Centre for Neural Circuits and Behaviour, Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Per Jesper Sjöström
- Centre for Research in Neuroscience, Brain Repair and Integrative Neuroscience Programme, Departments of Medicine, Neurology and Neurosurgery, The Research Institute of the McGill University Health Centre, Montreal General Hospital, Montreal, Quebec, Canada
42
Anwar H, Caby S, Dura-Bernal S, D’Onofrio D, Hasegan D, Deible M, Grunblatt S, Chadderdon GL, Kerr CC, Lakatos P, Lytton WW, Hazan H, Neymotin SA. Training a spiking neuronal network model of visual-motor cortex to play a virtual racket-ball game using reinforcement learning. PLoS One 2022; 17:e0265808. [PMID: 35544518 PMCID: PMC9094569 DOI: 10.1371/journal.pone.0265808] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2021] [Accepted: 03/08/2022] [Indexed: 11/18/2022] Open
Abstract
Recent models of spiking neuronal networks have been trained to perform behaviors in static environments using a variety of learning rules, with varying degrees of biological realism. Most of these models have not been tested in dynamic visual environments where models must make predictions on future states and adjust their behavior accordingly. The models using these learning rules are often treated as black boxes, with little analysis on circuit architectures and learning mechanisms supporting optimal performance. Here we developed visual/motor spiking neuronal network models and trained them to play a virtual racket-ball game using several reinforcement learning algorithms inspired by the dopaminergic reward system. We systematically investigated how different architectures and circuit-motifs (feed-forward, recurrent, feedback) contributed to learning and performance. We also developed a new biologically-inspired learning rule that significantly enhanced performance, while reducing training time. Our models included visual areas encoding game inputs and relaying the information to motor areas, which used this information to learn to move the racket to hit the ball. Neurons in the early visual area relayed information encoding object location and motion direction across the network. Neuronal association areas encoded spatial relationships between objects in the visual scene. Motor populations received inputs from visual and association areas representing the dorsal pathway. Two populations of motor neurons generated commands to move the racket up or down. Model-generated actions updated the environment and triggered reward or punishment signals that adjusted synaptic weights so that the models could learn which actions led to reward. Here we demonstrate that our biologically-plausible learning rules were effective in training spiking neuronal network models to solve problems in dynamic environments. 
We used our models to dissect the circuit architectures and learning rules most effective for learning. Our model shows that learning mechanisms involving different neural circuits produce similar performance in sensory-motor tasks. In biological networks, all learning mechanisms may complement one another, accelerating the learning capabilities of animals. This also highlights the resilience and redundancy of biological systems.
Affiliation(s)
- Haroon Anwar
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Simon Caby
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Salvador Dura-Bernal
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Dept. Physiology & Pharmacology, State University of New York Downstate, Brooklyn, New York, United States of America
- David D’Onofrio
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Daniel Hasegan
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Matt Deible
- University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Sara Grunblatt
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- George L. Chadderdon
- Dept. Physiology & Pharmacology, State University of New York Downstate, Brooklyn, New York, United States of America
- Cliff C. Kerr
- Dept. Physics, University of Sydney, Sydney, Australia
- Institute for Disease Modeling, Global Health Division, Bill & Melinda Gates Foundation, Seattle, Washington, United States of America
- Peter Lakatos
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Dept. Psychiatry, NYU Grossman School of Medicine, New York, New York, United States of America
- William W. Lytton
- Dept. Physiology & Pharmacology, State University of New York Downstate, Brooklyn, New York, United States of America
- Dept. Neurology, Kings County Hospital Center, Brooklyn, New York, United States of America
- Hananel Hazan
- Dept. of Biology, Tufts University, Medford, Massachusetts, United States of America
- Samuel A. Neymotin
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Dept. Psychiatry, NYU Grossman School of Medicine, New York, New York, United States of America
43
Gu J, Lim S. Unsupervised learning for robust working memory. PLoS Comput Biol 2022; 18:e1009083. [PMID: 35500033 PMCID: PMC9098088 DOI: 10.1371/journal.pcbi.1009083] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2021] [Revised: 05/12/2022] [Accepted: 03/16/2022] [Indexed: 11/18/2022] Open
Abstract
Working memory is a core component of critical cognitive functions such as planning and decision-making. Persistent activity that lasts long after the stimulus offset has been considered a neural substrate for working memory. Attractor dynamics based on network interactions can successfully reproduce such persistent activity. However, it requires a fine-tuning of network connectivity, in particular, to form continuous attractors which were suggested for encoding continuous signals in working memory. Here, we investigate whether a specific form of synaptic plasticity rules can mitigate such tuning problems in two representative working memory models, namely, rate-coded and location-coded persistent activity. We consider two prominent types of plasticity rules, differential plasticity correcting the rapid activity changes and homeostatic plasticity regularizing the long-term average of activity, both of which have been proposed to fine-tune the weights in an unsupervised manner. Consistent with the findings of previous works, differential plasticity alone was enough to recover a graded-level persistent activity after perturbations in the connectivity. For the location-coded memory, differential plasticity could also recover persistent activity. However, its pattern can be irregular for different stimulus locations under slow learning speed or large perturbation in the connectivity. On the other hand, homeostatic plasticity shows a robust recovery of smooth spatial patterns under particular types of synaptic perturbations, such as perturbations in incoming synapses onto the entire or local populations. However, homeostatic plasticity was not effective against perturbations in outgoing synapses from local populations. Instead, combining it with differential plasticity recovers location-coded persistent activity for a broader range of perturbations, suggesting compensation between two plasticity rules.
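As a rough illustration of the two rule families compared here, the sketch below shows a multiplicative homeostatic rule driving a toy linear unit to a target rate, and a differential-style update that opposes fast changes in postsynaptic activity. The functional forms and constants are illustrative assumptions, not the paper's equations:

```python
def differential_update(w, pre, dpost_dt, eta=0.05):
    """Differential-style rule: the weight change opposes the rate of
    change of postsynaptic activity, damping fast drifts (illustrative)."""
    return w - eta * pre * dpost_dt

def homeostatic_update(w, rate_avg, rate_target=5.0, eta=0.01):
    """Homeostatic rule: multiplicative scaling that regulates the
    long-term average firing rate toward a target."""
    return w * (1.0 + eta * (rate_target - rate_avg))

# Homeostatic rule drives a toy linear unit (rate = 10 * w) to the target.
w = 0.2                      # perturbed initial weight
for _ in range(500):
    w = homeostatic_update(w, rate_avg=10.0 * w)

# Differential rule weakens a weight while postsynaptic activity is rising.
w_diff = differential_update(1.0, pre=1.0, dpost_dt=3.0)
```

The contrast mirrors the abstract: the differential form reacts to fast activity changes, while the homeostatic form only constrains the long-term average, which is why the two can compensate for different kinds of synaptic perturbation.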
Affiliation(s)
- Jintao Gu
- Neural Science, New York University Shanghai, Shanghai, China
- Sukbin Lim
- Neural Science, New York University Shanghai, Shanghai, China
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China
44
Sarwat SG, Kersting B, Moraitis T, Jonnalagadda VP, Sebastian A. Phase-change memtransistive synapses for mixed-plasticity neural computations. Nat Nanotechnol 2022; 17:507-513. [PMID: 35347271 DOI: 10.1038/s41565-022-01095-3] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/23/2021] [Accepted: 01/12/2022] [Indexed: 06/14/2023]
Abstract
In the mammalian nervous system, various synaptic plasticity rules act, either individually or synergistically, over wide-ranging timescales to enable learning and memory formation. Hence, in neuromorphic computing platforms, there is a significant need for artificial synapses that can faithfully express such multi-timescale plasticity mechanisms. Although some plasticity rules have been emulated with elaborate complementary metal oxide semiconductor and memristive circuitry, device-level hardware realizations of long-term and short-term plasticity with tunable dynamics are lacking. Here we introduce a phase-change memtransistive synapse that leverages both the non-volatility of the phase configurations and the volatility of field-effect modulation for implementing tunable plasticities. We show that these mixed-plasticity synapses can enable plasticity rules such as short-term spike-timing-dependent plasticity that helps with the modelling of dynamic environments. Further, we demonstrate the efficacy of the memtransistive synapses in realizing accelerators for Hopfield neural networks for solving combinatorial optimization problems.
45
Vignoud G, Robert P. Spontaneous dynamics of synaptic weights in stochastic models with pair-based spike-timing-dependent plasticity. Phys Rev E 2022; 105:054405. [PMID: 35706237 DOI: 10.1103/physreve.105.054405] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2021] [Accepted: 03/31/2022] [Indexed: 06/15/2023]
Abstract
We investigate spike-timing-dependent plasticity (STDP) in the case of a synapse connecting two neuronal cells. We develop a theoretical analysis of several STDP rules using Markovian theory. In this context there are two different timescales, fast neuronal activity and slower synaptic weight updates. Exploiting this timescale separation, we derive the long-time limits of a single synaptic weight subject to STDP. We show that the pairing model of presynaptic and postsynaptic spikes controls the synaptic weight dynamics for small external input on an excitatory synapse. This result implies in particular that mean-field analysis of plasticity may miss some important properties of STDP. Anti-Hebbian STDP favors the emergence of a stable synaptic weight. In the case of an inhibitory synapse the pairing schemes matter less, and we observe convergence of the synaptic weight to a nonnull value only for Hebbian STDP. We extensively study different asymptotic regimes for STDP rules, raising interesting questions for future work on adaptive neuronal networks and, more generally, on adaptive systems.
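The pair-based rules analysed in this work share the standard exponential STDP kernel; a minimal sketch with illustrative parameter values:

```python
import math

def pair_stdp(dt, a_plus=0.01, a_minus=0.012, tau_plus=17.0, tau_minus=34.0):
    """Weight change for one pre/post spike pair.

    dt = t_post - t_pre: positive (pre leads post) gives Hebbian
    potentiation, negative gives depression. Flipping the signs of
    a_plus and a_minus yields the anti-Hebbian variant discussed above.
    """
    if dt >= 0:
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)

dw_causal = pair_stdp(10.0)    # pre fires 10 ms before post: potentiation
dw_acausal = pair_stdp(-10.0)  # post fires before pre: depression
```

Which spike pairs are fed into this kernel (all-to-all, nearest-neighbour, etc.) is exactly the "pairing scheme" whose influence on the long-time weight dynamics the paper analyses.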
Affiliation(s)
- Gaëtan Vignoud
- INRIA Paris, 2 rue Simone Iff, 75589 Paris Cedex 12, France
- Center for Interdisciplinary Research in Biology (CIRB), Collège de France (CNRS UMR 7241, INSERM U1050), 11 Place Marcelin Berthelot, 75005 Paris, France
46
Suen JY, Navlakha S. A feedback control principle common to several biological and engineered systems. J R Soc Interface 2022; 19:20210711. [PMID: 35232277 PMCID: PMC8889180 DOI: 10.1098/rsif.2021.0711] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2021] [Accepted: 02/02/2022] [Indexed: 11/12/2022] Open
Abstract
Feedback control is used by many distributed systems to optimize behaviour. Traditional feedback control algorithms spend significant resources to constantly sense and stabilize a continuous control variable of interest, such as vehicle speed for implementing cruise control, or body temperature for maintaining homeostasis. By contrast, discrete-event feedback (e.g. a server acknowledging when data are successfully transmitted, or a brief antennal interaction when an ant returns to the nest after successful foraging) can reduce costs associated with monitoring a continuous variable; however, optimizing behaviour in this setting requires alternative strategies. Here, we studied parallels between discrete-event feedback control strategies in biological and engineered systems. We found that two common engineering rules, additive-increase multiplicative-decrease (additive increase upon positive feedback, multiplicative decrease upon negative feedback) and multiplicative-increase multiplicative-decrease, are used by diverse biological systems, including for regulating foraging by harvester ant colonies, for maintaining cell-size homeostasis, and for synaptic learning and adaptation in neural circuits. These rules support several goals of these systems, including optimizing efficiency (i.e. using all available resources); splitting resources fairly among cooperating agents, or conversely, acquiring resources quickly among competing agents; and minimizing the latency of responses, especially when conditions change. We hypothesize that theoretical frameworks from distributed computing may offer new ways to analyse adaptation behaviour of biological systems, and in return, biological strategies may inspire new algorithms for discrete-event feedback control in engineering.
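The additive-increase multiplicative-decrease rule discussed above is compactly expressed in code; the toy loop below (constants are illustrative) shows the textbook property that two agents sharing a fixed capacity converge toward equal shares, because additive increase preserves their difference while multiplicative decrease shrinks it:

```python
def aimd_step(x, positive, alpha=1.0, beta=0.5):
    """Additive increase on positive feedback, multiplicative decrease otherwise."""
    return x + alpha if positive else x * beta

# Two agents share capacity 100 and start very unequal. Both grow until the
# total overshoots, then both back off multiplicatively (TCP-style control).
a, b, capacity = 10.0, 60.0, 100.0
for _ in range(200):
    ok = (a + b) <= capacity     # discrete-event feedback: under capacity?
    a = aimd_step(a, ok)
    b = aimd_step(b, ok)
```

After a few hundred events the initially 50-unit gap between the agents has essentially vanished, which is the fairness property the abstract attributes to this rule.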
Affiliation(s)
- Jonathan Y. Suen
- Cold Spring Harbor Laboratory, Simons Center for Quantitative Biology, Cold Spring Harbor, NY, USA
- Saket Navlakha
- Cold Spring Harbor Laboratory, Simons Center for Quantitative Biology, Cold Spring Harbor, NY, USA
47
Chakraborty B, Mukhopadhyay S. Characterization of Generalizability of Spike Timing Dependent Plasticity Trained Spiking Neural Networks. Front Neurosci 2021; 15:695357. [PMID: 34776837 PMCID: PMC8589121 DOI: 10.3389/fnins.2021.695357] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2021] [Accepted: 09/29/2021] [Indexed: 11/30/2022] Open
Abstract
Spiking Neural Networks (SNNs) can be trained with Spike Timing Dependent Plasticity (STDP), a neuro-inspired unsupervised learning method used in various machine learning applications. This paper studies the generalizability properties of the STDP learning processes using the Hausdorff dimension of the trajectories of the learning algorithm. The paper analyzes the effects of STDP learning models and associated hyper-parameters on the generalizability properties of an SNN. The analysis is then used to develop a Bayesian optimization approach that tunes the hyper-parameters of an STDP model to improve the generalizability properties of an SNN.
Affiliation(s)
- Biswadeep Chakraborty
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States
48
49
Nishi Y, Nomura K, Marukame T, Mizushima K. Stochastic binary synapses having sigmoidal cumulative distribution functions for unsupervised learning with spike timing-dependent plasticity. Sci Rep 2021; 11:18282. [PMID: 34521895 PMCID: PMC8440757 DOI: 10.1038/s41598-021-97583-y] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2021] [Accepted: 08/23/2021] [Indexed: 11/17/2022] Open
Abstract
Spike timing-dependent plasticity (STDP), which is widely studied as a fundamental synaptic update rule for neuromorphic hardware, requires precise control of continuous weights. From the viewpoint of hardware implementation, a simplified update rule is desirable. Although simplified STDP with stochastic binary synapses was proposed previously, we find that it leads to degradation of memory maintenance during learning, which is unfavourable for unsupervised online learning. In this work, we propose a stochastic binary synaptic model where the cumulative probability of the weight change evolves in a sigmoidal fashion with potentiation or depression trials, which can be implemented using a pair of switching devices consisting of serially connected multiple binary memristors. As a benchmark test we perform simulations of unsupervised learning of MNIST images with a two-layer network and show that simplified STDP in combination with this model can outperform conventional rules with continuous weights not only in memory maintenance but also in recognition accuracy. Our method achieves 97.3% in recognition accuracy, which is higher than that reported with standard STDP in the same framework. We also show that the high performance of our learning rule is robust against device-to-device variability of the memristor's probabilistic behaviour.
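The proposed synapse can be mimicked in software: the weight is binary, but it switches only after every serially connected binary sub-device has switched, so the cumulative switching probability rises sigmoidally with the number of potentiation trials. A hypothetical sketch (the device count and switching probability are illustrative, not the paper's device parameters):

```python
import random

class SerialStochasticSynapse:
    """Binary synapse built from n serially connected binary memristors.

    Each potentiation trial switches the next unswitched sub-device with
    probability p; the synaptic weight is 1 only once every sub-device
    has switched, so the cumulative switching probability over trials is
    sigmoidal rather than geometric.
    """

    def __init__(self, n_devices=4, p_switch=0.5, rng=None):
        self.n = n_devices
        self.p = p_switch
        self.switched = 0
        self.rng = rng or random.Random(0)   # seeded for reproducibility

    def potentiate(self):
        if self.switched < self.n and self.rng.random() < self.p:
            self.switched += 1

    def depress(self):
        if self.switched > 0 and self.rng.random() < self.p:
            self.switched -= 1

    @property
    def weight(self):
        return 1 if self.switched == self.n else 0

syn = SerialStochasticSynapse()
history = []
for _ in range(50):
    syn.potentiate()
    history.append(syn.weight)
```

A single spurious pairing cannot flip the weight (at least n_devices successful sub-switches are needed), which is the memory-maintenance advantage over a single stochastic binary device.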
Affiliation(s)
- Yoshifumi Nishi
- Frontier Research Laboratory, Corporate R&D Center, Toshiba Corporation, 1, Komukai-Toshiba-Cho, Saiwai-ku, Kawasaki, 212-8582, Japan
- Kumiko Nomura
- Frontier Research Laboratory, Corporate R&D Center, Toshiba Corporation, 1, Komukai-Toshiba-Cho, Saiwai-ku, Kawasaki, 212-8582, Japan
- Takao Marukame
- Frontier Research Laboratory, Corporate R&D Center, Toshiba Corporation, 1, Komukai-Toshiba-Cho, Saiwai-ku, Kawasaki, 212-8582, Japan
- Koichi Mizushima
- Frontier Research Laboratory, Corporate R&D Center, Toshiba Corporation, 1, Komukai-Toshiba-Cho, Saiwai-ku, Kawasaki, 212-8582, Japan
50
Nikitin O, Lukyanova O, Kunin A. Constrained plasticity reserve as a natural way to control frequency and weights in spiking neural networks. Neural Netw 2021; 143:783-797. [PMID: 34488014 DOI: 10.1016/j.neunet.2021.08.016] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2021] [Revised: 06/14/2021] [Accepted: 08/12/2021] [Indexed: 11/16/2022]
Abstract
Biological neurons have an adaptive nature and perform complex computations involving the filtering of redundant information. However, most common neural cell models, including biologically plausible ones such as Hodgkin-Huxley or Izhikevich, do not possess predictive dynamics at the single-cell level. Moreover, modern rules of synaptic plasticity or interconnection weight adaptation also do not account for the ability of neurons to adapt to ever-changing input signal intensity. While natural neuronal synaptic growth is precisely controlled and restricted by protein supply and recycling, weight correction rules such as the widely used STDP are effectively unlimited in change rate and scale. The present article introduces a new mechanism coupling neuronal firing-rate homeostasis and STDP-driven weight change, bounded by an abstract protein reserve that is controlled by an intracellular optimization algorithm. We show how these cellular dynamics help neurons filter out intense noise signals and keep a stable firing rate. We also show that such filtering does not affect the ability of neurons to recognize correlated inputs in unsupervised mode. Such an approach might be used in the machine learning domain to improve the robustness of AI systems.
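The idea of a constrained plasticity reserve can be sketched as follows; this is a deliberate simplification (names and constants are illustrative, not the paper's intracellular optimization algorithm):

```python
def reserve_limited_update(w, dw, reserve, replenish=0.02, reserve_max=1.0):
    """Apply an STDP-proposed change dw, limited by a finite resource.

    The applied step cannot exceed the current reserve; the reserve is
    consumed in proportion to |step| and slowly replenished, so bursts
    of intense (noisy) input cannot drive runaway weight growth.
    """
    step = max(-reserve, min(reserve, dw))
    reserve = min(reserve_max, reserve - abs(step) + replenish)
    return w + step, reserve

w, reserve = 0.5, 0.1
total_requested = 0.0
for _ in range(100):
    dw = 0.05                      # relentless potentiation pressure
    total_requested += dw
    w, reserve = reserve_limited_update(w, dw, reserve)
```

Under sustained pressure the weight growth rate settles to the replenishment rate rather than the requested STDP rate, which is the filtering effect described above.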
Affiliation(s)
- Oleg Nikitin
- Computing Center of the Far Eastern Branch of the Russian Academy of Sciences, 680000, Khabarovsk, Russia
- Olga Lukyanova
- Computing Center of the Far Eastern Branch of the Russian Academy of Sciences, 680000, Khabarovsk, Russia
- Alex Kunin
- Computing Center of the Far Eastern Branch of the Russian Academy of Sciences, 680000, Khabarovsk, Russia