1. Makovkin SY, Gordleeva SY, Kastalskiy IA. Toward a Biologically Plausible SNN-Based Associative Memory with Context-Dependent Hebbian Connectivity. Int J Neural Syst 2025:2550027. PMID: 40253681. DOI: 10.1142/S0129065725500273.
Abstract
In this paper, we propose a spiking neural network model with Hebbian connectivity for implementing energy-efficient associative memory, whose activity is determined by input stimuli. The model consists of three interacting layers of Hodgkin-Huxley-Mainen spiking neurons with excitatory and inhibitory synaptic connections. Information patterns are stored in memory using a symmetric Hebbian matrix and can be retrieved in response to a specific stimulus pattern. Binary images are encoded using in-phase and anti-phase oscillations relative to a global clock signal. Utilizing the phase-locking effect allows for cluster synchronization of neurons (both on the input and output layers). Interneurons in the intermediate layer filter signal propagation pathways depending on the context of the input layer, effectively engaging only a portion of the synaptic connections within the Hebbian matrix for recognition. The stability of the oscillation phase is investigated for both in-phase and anti-phase synchronization modes when recognizing direct and inverse images. This context-dependent effect opens promising avenues for the development of analog hardware circuits for energy-efficient neurocomputing applications, potentially leading to breakthroughs in artificial intelligence and cognitive computing.
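The storage scheme described in this abstract, binary patterns written into a symmetric Hebbian matrix and retrieved from a cue, follows the classical Hopfield prescription. Below is a minimal sketch of that underlying rule as a rate-free sign-threshold network; it is not the paper's Hodgkin-Huxley-Mainen spiking implementation or its phase-based encoding, and the pattern sizes are illustrative:

```python
import numpy as np

def hebbian_store(patterns):
    # Symmetric Hebbian matrix: normalized sum of outer products, zero diagonal
    P = np.array(patterns, dtype=float)          # each pattern is a +/-1 vector
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def retrieve(W, cue, steps=20):
    # Iterate the sign-threshold dynamics until (hopefully) a stored attractor
    s = np.array(cue, dtype=float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0                          # break ties deterministically
    return s

rng = np.random.default_rng(0)
pats = rng.choice([-1.0, 1.0], size=(3, 64))     # 3 binary patterns, 64 units
W = hebbian_store(pats)

noisy = pats[0].copy()
noisy[:8] *= -1                                  # corrupt 8 of 64 bits
recalled = retrieve(W, noisy)
print(float(np.mean(recalled * pats[0])))        # overlap with the stored pattern
```

Note that the inverted pattern -p is also a fixed point of this dynamics, which mirrors the direct/inverse image distinction in the abstract.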
Affiliation(s)
- S Yu Makovkin
  - Department of Applied Mathematics, Institute of Information Technology, Mathematics and Mechanics, Lobachevsky State University of Nizhny Novgorod, 23 Gagarin Avenue, Nizhny Novgorod 603022, Russia
- S Yu Gordleeva
  - Neuromorphic Computing Center, Neimark University, 6 Nartova Street, Nizhny Novgorod 603081, Russia
  - Baltic Center for Neurotechnology and Artificial Intelligence, Immanuel Kant Baltic Federal University, 14 A. Nevskogo Street, Kaliningrad 236041, Russia
  - Scientific and Educational Mathematical Center "Mathematics of Future Technologies", Lobachevsky State University of Nizhny Novgorod, 23 Gagarin Avenue, Nizhny Novgorod 603022, Russia
- I A Kastalskiy
  - Department of Neurotechnology, Institute of Biology and Biomedicine, Lobachevsky State University of Nizhny Novgorod, 23 Gagarin Avenue, Nizhny Novgorod 603022, Russia
  - Laboratory of Neurobiomorphic Technologies, Moscow Institute of Physics and Technology, 9 Institutskiy Lane, Dolgoprudny 141701, Moscow Region, Russia
2. Lobov SA, Zharinov AI, Makarov VA, Kazantsev VB. Spatial Memory in a Spiking Neural Network with Robot Embodiment. Sensors 2021; 21(8):2678. PMID: 33920246. PMCID: PMC8070389. DOI: 10.3390/s21082678.
Abstract
Cognitive maps and spatial memory are fundamental paradigms of brain functioning. Here, we present a spiking neural network (SNN) capable of generating an internal representation of the external environment and implementing spatial memory. The SNN initially has a non-specific architecture, which is then shaped by Hebbian-type synaptic plasticity. The network receives stimuli at specific loci, while memory retrieval operates as a functional SNN response in the form of population bursts. The SNN function is explored through its embodiment in a robot moving in an arena with safe and dangerous zones. We propose a measure of global network memory based on the synaptic vector field approach to validate results and calculate information characteristics, including learning curves. We show that after training, the SNN can effectively control the robot's cognitive behavior, allowing it to avoid dangerous regions in the arena. However, the learning is not perfect: the robot eventually visits dangerous areas. Such behavior, also observed in animals, enables relearning in time-evolving environments. If a dangerous zone moves to another place, the SNN remaps positive and negative areas, allowing it to escape the catastrophic interference phenomenon known in some AI architectures. Thus, the robot adapts to a changing world.
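The "Hebbian-type synaptic plasticity" that shapes the network can be sketched with a standard pair-based STDP window: a synapse is potentiated when the presynaptic spike precedes the postsynaptic one, and depressed otherwise. The parameter values below are illustrative assumptions, not the paper's:

```python
import numpy as np

def stdp_dw(dt, a_plus=0.05, a_minus=0.055, tau=20.0):
    # dt = t_post - t_pre in ms: pre-before-post (dt > 0) potentiates,
    # post-before-pre (dt <= 0) depresses, both decaying exponentially
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

# Hypothetical spike times for one pre/post pair
pre, post = 10.0, 15.0
print(float(stdp_dw(post - pre)))   # positive: pre fired 5 ms before post
print(float(stdp_dw(pre - post)))   # negative: order reversed
```

Making depression slightly stronger than potentiation (a_minus > a_plus) is a common stability choice; it keeps uncorrelated spike pairs from inflating weights without bound.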
Affiliation(s)
- Sergey A. Lobov (corresponding author)
  - Neurotechnology Department, Lobachevsky State University of Nizhny Novgorod, 23 Gagarin Ave., 603950 Nizhny Novgorod, Russia
  - Neuroscience and Cognitive Technology Laboratory, Center for Technologies in Robotics and Mechatronics Components, Innopolis University, 1 Universitetskaya Str., 420500 Innopolis, Russia
  - Center for Neurotechnology and Machine Learning, Immanuel Kant Baltic Federal University, 14 Nevsky Str., 236016 Kaliningrad, Russia
- Alexey I. Zharinov
  - Neurotechnology Department, Lobachevsky State University of Nizhny Novgorod, 23 Gagarin Ave., 603950 Nizhny Novgorod, Russia
- Valeri A. Makarov
  - Neurotechnology Department, Lobachevsky State University of Nizhny Novgorod, 23 Gagarin Ave., 603950 Nizhny Novgorod, Russia
  - Instituto de Matemática Interdisciplinar, Facultad de Ciencias Matemáticas, Universidad Complutense de Madrid, 28040 Madrid, Spain
- Victor B. Kazantsev
  - Neurotechnology Department, Lobachevsky State University of Nizhny Novgorod, 23 Gagarin Ave., 603950 Nizhny Novgorod, Russia
  - Neuroscience and Cognitive Technology Laboratory, Center for Technologies in Robotics and Mechatronics Components, Innopolis University, 1 Universitetskaya Str., 420500 Innopolis, Russia
  - Center for Neurotechnology and Machine Learning, Immanuel Kant Baltic Federal University, 14 Nevsky Str., 236016 Kaliningrad, Russia
  - Lab of Neurocybernetics, Russian State Scientific Center for Robotics and Technical Cybernetics, 21 Tikhoretsky Ave., 194064 St. Petersburg, Russia
3. Gordleeva SY, Tsybina YA, Krivonosov MI, Ivanchenko MV, Zaikin AA, Kazantsev VB, Gorban AN. Modeling Working Memory in a Spiking Neuron Network Accompanied by Astrocytes. Front Cell Neurosci 2021; 15:631485. PMID: 33867939. PMCID: PMC8044545. DOI: 10.3389/fncel.2021.631485.
Abstract
We propose a novel biologically plausible computational model of working memory (WM) implemented by a spiking neuron network (SNN) interacting with a network of astrocytes. The SNN is modeled by synaptically coupled Izhikevich neurons with a non-specific connection topology. Astrocytes generating calcium signals are connected by local gap-junction diffusive couplings and interact with neurons via chemicals diffused in the extracellular space. Calcium elevations occur in response to the increased concentration of the neurotransmitter released by spiking neurons when a group of them fire coherently. In turn, gliotransmitters are released by activated astrocytes, modulating the strength of the synaptic connections in the corresponding neuronal group. Input information is encoded as two-dimensional patterns of short applied current pulses stimulating neurons. The output is taken from the frequencies of transient discharges of the corresponding neurons. We show how a set of information patterns with significantly overlapping areas can be uploaded into the neuron-astrocyte network and stored for several seconds. Information retrieval is organized by applying a cue pattern representing one pattern from the memory set, distorted by noise. We found that successful retrieval, with the correlation between the recalled and ideal patterns exceeding 90%, is possible in the multi-item WM task. Having analyzed the dynamical mechanism of WM formation, we discovered that astrocytes operating on a time scale of tens of seconds can successfully store traces of neuronal activations corresponding to information patterns. In the retrieval stage, the astrocytic network selectively modulates synaptic connections in the SNN, leading to successful recall. The information and dynamical characteristics of the proposed WM model agree with classical concepts and other WM models.
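The neurons in this model are Izhikevich units, whose dynamics are the well-known two-variable system v' = 0.04v^2 + 5v + 140 - u + I, u' = a(bv - u), with a reset at the spike peak. A short forward-Euler sketch of a single neuron follows, using the regular-spiking parameters from Izhikevich's 2003 paper; the synaptic and astrocytic coupling of the full model is omitted:

```python
def izhikevich(I, T=200.0, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    # Single Izhikevich neuron under constant input current I (forward Euler).
    # Default a, b, c, d give the regular-spiking regime.
    v, u = c, b * c                  # start at the reset/rest state
    spikes = []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                # spike peak: record time and reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

print(len(izhikevich(10.0)) > 0)     # constant drive produces tonic spiking
print(len(izhikevich(0.0)))          # no input: the neuron stays at rest
```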
Affiliation(s)
- Susanna Yu Gordleeva
  - Scientific and Educational Mathematical Center "Mathematics of Future Technology," Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
  - Neuroscience and Cognitive Technology Laboratory, Center for Technologies in Robotics and Mechatronics Components, Innopolis University, Innopolis, Russia
- Yuliya A Tsybina
  - Scientific and Educational Mathematical Center "Mathematics of Future Technology," Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Mikhail I Krivonosov
  - Scientific and Educational Mathematical Center "Mathematics of Future Technology," Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Mikhail V Ivanchenko
  - Scientific and Educational Mathematical Center "Mathematics of Future Technology," Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Alexey A Zaikin
  - Scientific and Educational Mathematical Center "Mathematics of Future Technology," Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
  - Center for Analysis of Complex Systems, Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
  - Institute for Women's Health and Department of Mathematics, University College London, London, United Kingdom
- Victor B Kazantsev
  - Scientific and Educational Mathematical Center "Mathematics of Future Technology," Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
  - Neuroscience and Cognitive Technology Laboratory, Center for Technologies in Robotics and Mechatronics Components, Innopolis University, Innopolis, Russia
  - Neuroscience Research Institute, Samara State Medical University, Samara, Russia
- Alexander N Gorban
  - Scientific and Educational Mathematical Center "Mathematics of Future Technology," Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
  - Department of Mathematics, University of Leicester, Leicester, United Kingdom
4. Davis GP, Katz GE, Gentili RJ, Reggia JA. Compositional memory in attractor neural networks with one-step learning. Neural Netw 2021; 138:78-97. PMID: 33631609. DOI: 10.1016/j.neunet.2021.01.031.
Abstract
Compositionality refers to the ability of an intelligent system to construct models out of reusable parts. This is critical for the productivity and generalization of human reasoning, and is considered a necessary ingredient for human-level artificial intelligence. While traditional symbolic methods have proven effective for modeling compositionality, artificial neural networks struggle to learn systematic rules for encoding generalizable structured models. We suggest that this is due in part to short-term memory that is based on persistent maintenance of activity patterns without fast weight changes. We present a recurrent neural network that encodes structured representations as systems of contextually-gated dynamical attractors called attractor graphs. This network implements a functionally compositional working memory that is manipulated using top-down gating and fast local learning. We evaluate this approach with empirical experiments on storage and retrieval of graph-based data structures, as well as an automated hierarchical planning task. Our results demonstrate that compositional structures can be stored in and retrieved from neural working memory without persistent maintenance of multiple activity patterns. Further, memory capacity is improved by the use of a fast store-erase learning rule that permits controlled erasure and mutation of previously learned associations. We conclude that the combination of top-down gating and fast associative learning provides recurrent neural networks with a robust functional mechanism for compositional working memory.
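The "fast store-erase learning rule" can be illustrated with a one-step hetero-associative Hebbian memory: storing an association adds an outer product to the weight matrix in a single step, and erasing subtracts the same term, leaving other associations intact. This is a sketch of the general idea under my own naming and normalization, not the paper's exact gated rule:

```python
import numpy as np

class OneStepMemory:
    # Hetero-associative memory with one-step store and erase updates
    # on bipolar (+/-1) key and value vectors.
    def __init__(self, n):
        self.n = n
        self.W = np.zeros((n, n))

    def store(self, key, value):
        self.W += np.outer(value, key) / self.n   # one-step write

    def erase(self, key, value):
        self.W -= np.outer(value, key) / self.n   # controlled one-step erasure

    def recall(self, key):
        return np.sign(self.W @ key)

rng = np.random.default_rng(1)
mem = OneStepMemory(128)
k1, v1 = rng.choice([-1.0, 1.0], 128), rng.choice([-1.0, 1.0], 128)
k2, v2 = rng.choice([-1.0, 1.0], 128), rng.choice([-1.0, 1.0], 128)

mem.store(k1, v1)
mem.store(k2, v2)
print(np.array_equal(mem.recall(k1), v1))   # True: association recovered

mem.erase(k1, v1)                           # remove one association only
print(np.array_equal(mem.recall(k2), v2))   # True: the other survives erasure
```

Mutation of an association, as the abstract describes, then reduces to an erase of the old value followed by a store of the new one for the same key.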
Affiliation(s)
- Gregory P Davis
  - Department of Computer Science, University of Maryland, College Park, MD, USA
- Garrett E Katz
  - Department of Electrical Engineering and Computer Science, Syracuse University, Syracuse, NY, USA
- Rodolphe J Gentili
  - Department of Kinesiology, University of Maryland, College Park, MD, USA
- James A Reggia
  - Department of Computer Science, University of Maryland, College Park, MD, USA
6. Scarpetta S, de Candia A. Neural avalanches at the critical point between replay and non-replay of spatiotemporal patterns. PLoS One 2013; 8:e64162. PMID: 23840301. PMCID: PMC3688722. DOI: 10.1371/journal.pone.0064162.
Abstract
We model spontaneous cortical activity with a network of coupled spiking units in which multiple spatiotemporal patterns are stored as dynamical attractors. We introduce an order parameter which measures the overlap (similarity) between the activity of the network and the stored patterns. We find that, depending on the excitability of the network, different working regimes are possible. For high excitability, the dynamical attractors are stable, and collective activity that replays one of the stored patterns emerges spontaneously, while for low excitability no replay is induced. Between these two regimes there is a critical region in which the dynamical attractors are unstable and intermittent short replays are induced by noise. At the critical spiking threshold, the order parameter goes from zero to one, and its fluctuations are maximized, as expected for a phase transition (and as observed in recent experimental results in the brain). Notably, in this critical region the avalanche size and duration distributions follow power laws. The critical exponents are consistent with a scaling relationship observed recently in neural avalanche measurements. In conclusion, our simple model suggests that avalanche power laws in cortical spontaneous activity may be the effect of a network at the critical point between the replay and non-replay of spatiotemporal patterns.
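The order parameter described here is the standard overlap between a stored bipolar pattern and the current network activity, ranging from 1 (perfect replay) through 0 (no replay) to -1 (anti-phase replay). A minimal sketch:

```python
import numpy as np

def overlap(pattern, activity):
    # Order parameter: mean componentwise product of a stored +/-1 pattern
    # and the network's +/-1 activity vector
    p = np.asarray(pattern, dtype=float)
    a = np.asarray(activity, dtype=float)
    return float(np.mean(p * a))

xi = np.array([1, -1, 1, 1, -1, -1, 1, -1], dtype=float)
print(overlap(xi, xi))    # 1.0: full replay of the stored pattern
print(overlap(xi, -xi))   # -1.0: anti-phase replay
```

In the paper's setting this scalar is tracked over time, and its fluctuations peak at the critical excitability, signaling the phase transition between the replay and non-replay regimes.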
Affiliation(s)
- Silvia Scarpetta
  - Dipartimento di Fisica E. R. Caianiello, Università di Salerno, Fisciano (SA), Italy