1. Wang D, Yan X, Yang Y, Stathis D, Hemani A, Lansner A, Xu J, Zheng LR, Zou Z. Scalable Multi-FPGA HPC Architecture for Associative Memory System. IEEE Trans Biomed Circuits Syst 2025; 19:454-468. PMID: 39163180; DOI: 10.1109/tbcas.2024.3446660
Abstract
Associative memory is a cornerstone of cognitive intelligence within the human brain. The Bayesian confidence propagation neural network (BCPNN), a cortex-inspired model with high biological plausibility, has proven effective in emulating high-level cognitive functions like associative memory. However, the current approach using GPUs to simulate BCPNN-based associative memory tasks encounters challenges in latency and power efficiency as the model size scales. This work proposes a scalable multi-FPGA high performance computing (HPC) architecture designed for the associative memory system. The architecture integrates a set of hypercolumn unit (HCU) computing cores for intra-board online learning and inference, along with a spike-based synchronization scheme for inter-board communication among multiple FPGAs. Several design strategies, including population-based model mapping, packet-based spike synchronization, and cluster-based timing optimization, are presented to facilitate the multi-FPGA implementation. The architecture is implemented and validated on two Xilinx Alveo U50 FPGA cards, achieving a maximum model size of 20010 and a peak working frequency of 220 MHz for the associative memory system. Both the memory-bounded spatial scalability and the compute-bounded temporal scalability of the architecture are evaluated and optimized, achieving a maximum scale-latency ratio (SLR) of 268.82 for the two-FPGA implementation. Compared to a two-GPU counterpart, the two-FPGA approach demonstrates a maximum latency reduction of 51.72× and a power reduction exceeding 5.28× under the same network configuration. Compared with state-of-the-art works, the two-FPGA implementation exhibits a high pattern storage capacity for the associative memory task.
2. Lu Y, Wu S. Learning sequence attractors in recurrent networks with hidden neurons. Neural Netw 2024; 178:106466. PMID: 38968778; DOI: 10.1016/j.neunet.2024.106466
Abstract
The brain is specialized for processing temporal sequence information, yet it remains largely unclear how it learns to store and retrieve sequence memories. Here, we study how recurrent networks of binary neurons learn sequence attractors to store predefined pattern sequences and retrieve them robustly. We show that to store arbitrary pattern sequences, it is necessary for the network to include hidden neurons, even though their role in displaying sequence memories is indirect. We develop a local learning algorithm to learn sequence attractors in networks with hidden neurons. The algorithm is proven to converge and lead to sequence attractors. We demonstrate that the network model can store and retrieve sequences robustly on synthetic and real-world datasets. We hope that this study provides new insights into sequence memory and temporal information processing in the brain.
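As a toy illustration of the sequence-attractor idea (this is not the authors' algorithm, which additionally trains hidden neurons; all parameters below are assumptions), a local, perceptron-style rule can store pattern-to-pattern transitions in a binary recurrent network:

import numpy as np

# Toy sketch: store a pattern sequence as transitions xi[mu] -> xi[mu+1]
# using a unit-local, perceptron-style update. Illustrative only.
rng = np.random.default_rng(0)
N, P = 100, 5
xi = rng.choice([-1.0, 1.0], size=(P, N))   # predefined pattern sequence
W = np.zeros((N, N))

for _ in range(50):                         # a few training sweeps
    for mu in range(P - 1):
        pre, target = xi[mu], xi[mu + 1]
        err = target - np.sign(W @ pre)     # error visible locally at each unit
        W += np.outer(err, pre) / N         # local additive update

x = xi[0].copy()                            # recall from the first pattern
for mu in range(1, P):
    x = np.sign(W @ x)
    print(mu, np.mean(x == xi[mu]))         # overlap with the stored pattern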
Affiliation(s)
- Yao Lu
- School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Beijing Key Laboratory of Behavior and Mental Health, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Peking University, China
- Si Wu
- School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Beijing Key Laboratory of Behavior and Mental Health, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Peking University, China.
3. Nicola W, Newton TR, Clopath C. The impact of spike timing precision and spike emission reliability on decoding accuracy. Sci Rep 2024; 14:10536. PMID: 38719897; PMCID: PMC11078995; DOI: 10.1038/s41598-024-58524-7
Abstract
Precisely timed and reliably emitted spikes are hypothesized to serve multiple functions, including improving the accuracy and reproducibility of encoding stimuli, memories, or behaviours across trials. When these spikes occur as a repeating sequence, they can be used to encode and decode a potential time series. Here, we show both analytically and in simulations that the error incurred in approximating a time series with precisely timed and reliably emitted spikes decreases linearly with the number of neurons or spikes used in the decoding. This was verified numerically with synthetically generated patterns of spikes. Further, we found that if spikes were imprecise in their timing, or unreliable in their emission, the error incurred in decoding with these spikes would be sub-linear. However, if the spike precision or spike reliability increased with network size, the error incurred in decoding a time series with sequences of spikes would maintain a linear decrease with network size. The spike precision had to increase linearly with network size, while the probability of spike failure had to decrease with the square-root of the network size. Finally, we identified a candidate circuit to test this scaling relationship: the repeating sequences of spikes with sub-millisecond precision in area HVC (proper name) of the zebra finch. This scaling relationship can be tested using both neural data and song-spectrogram-based recordings while taking advantage of the natural fluctuation in HVC network size due to neurogenesis.
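Read as scaling relations (a compact restatement of the abstract, taking "decreases linearly" to mean error proportional to 1/N; exact prefactors are derived in the paper):

\begin{align*}
\varepsilon(N) &\propto \frac{1}{N} && \text{decoding error with precise, reliable spikes,}\\
\sigma_{\mathrm{jitter}}(N) &\propto \frac{1}{N} && \text{timing jitter required to preserve the linear decrease,}\\
p_{\mathrm{fail}}(N) &\propto \frac{1}{\sqrt{N}} && \text{spike-failure probability required to preserve it.}
\end{align*}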
Affiliation(s)
- Wilten Nicola
- University of Calgary, Calgary, Canada.
- Department of Cell Biology and Anatomy, Calgary, Canada.
- Hotchkiss Brain Institute, Calgary, Canada.
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, UK
4. Chrysanthidis N, Fiebig F, Lansner A, Herman P. Traces of semantization - from episodic to semantic memory in a spiking cortical network model. eNeuro 2022; 9:ENEURO.0062-22.2022. PMID: 35803714; PMCID: PMC9347313; DOI: 10.1523/eneuro.0062-22.2022
Abstract
Episodic memory is a recollection of past personal experiences associated with particular times and places. This kind of memory is commonly subject to loss of contextual information or "semantization", which gradually decouples the encoded memory items from their associated contexts while transforming them into semantic or gist-like representations. Novel extensions to the classical Remember/Know behavioral paradigm attribute the loss of episodicity to multiple exposures of an item in different contexts. Despite recent advancements explaining semantization at a behavioral level, the underlying neural mechanisms remain poorly understood. In this study, we suggest and evaluate a novel hypothesis proposing that Bayesian-Hebbian synaptic plasticity mechanisms might cause semantization of episodic memory. We implement a cortical spiking neural network model with a Bayesian-Hebbian learning rule called Bayesian Confidence Propagation Neural Network (BCPNN), which captures the semantization phenomenon and offers a mechanistic explanation for it. Encoding items across multiple contexts leads to item-context decoupling akin to semantization. We compare BCPNN plasticity with the more commonly used spike-timing dependent plasticity (STDP) learning rule in the same episodic memory task. Unlike BCPNN, STDP does not explain the decontextualization process. We further examine how selective plasticity modulation of isolated salient events may enhance preferential retention and resistance to semantization. Our model reproduces important features of episodicity on behavioral timescales under various biological constraints whilst also offering a novel neural and synaptic explanation for semantization, thereby casting new light on the interplay between episodic and semantic memory processes.

Significance Statement: Remembering single episodes is a fundamental attribute of cognition. Difficulty recollecting contextual information is a key sign of episodic memory loss or semantization. Behavioral studies demonstrate that semantization of episodic memory can occur rapidly, yet the neural mechanisms underlying this effect are insufficiently investigated. In line with recent behavioral findings, we show that multiple stimulus exposures in different contexts may advance item-context decoupling. We suggest a Bayesian-Hebbian synaptic plasticity hypothesis of memory semantization and further show that a transient modulation of plasticity during salient events may disrupt the decontextualization process by strengthening memory traces, and thus, enhancing preferential retention. The proposed cortical network-of-networks model thus bridges micro- and mesoscale synaptic effects with network dynamics and behavior.
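For reference, the BCPNN rule named above is commonly written in terms of exponentially filtered activity estimates (a standard form of the rule from the BCPNN literature; the paper's spiking implementation uses additional trace stages):

\[
\tau_p \frac{dp_i}{dt} = s_i - p_i, \qquad
\tau_p \frac{dp_{ij}}{dt} = s_i s_j - p_{ij}, \qquad
w_{ij} = \log\frac{p_{ij}}{p_i\,p_j}, \qquad
\beta_j = \log p_j,
\]

where the $p$ variables estimate pre-, post-, and joint activation probabilities, $w_{ij}$ is the synaptic weight, and $\beta_j$ is a unit bias.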
Affiliation(s)
- Nikolaos Chrysanthidis
- Division of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, 10044 Stockholm, Sweden
- Florian Fiebig
- Division of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, 10044 Stockholm, Sweden
- Anders Lansner
- Division of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, 10044 Stockholm, Sweden
- Department of Mathematics, Stockholm University, 10691 Stockholm, Sweden
- Pawel Herman
- Division of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, 10044 Stockholm, Sweden
- Digital Futures, Stockholm, Sweden
- Swedish e-Science Research Centre, Stockholm, Sweden
5. Bouhadjar Y, Wouters DJ, Diesmann M, Tetzlaff T. Sequence learning, prediction, and replay in networks of spiking neurons. PLoS Comput Biol 2022; 18:e1010233. PMID: 35727857; PMCID: PMC9273101; DOI: 10.1371/journal.pcbi.1010233
Abstract
Sequence learning, prediction and replay have been proposed to constitute the universal computations performed by the neocortex. The Hierarchical Temporal Memory (HTM) algorithm realizes these forms of computation. It learns sequences in an unsupervised and continuous manner using local learning rules, permits a context specific prediction of future sequence elements, and generates mismatch signals in case the predictions are not met. While the HTM algorithm accounts for a number of biological features such as topographic receptive fields, nonlinear dendritic processing, and sparse connectivity, it is based on abstract discrete-time neuron and synapse dynamics, as well as on plasticity mechanisms that can only partly be related to known biological mechanisms. Here, we devise a continuous-time implementation of the temporal-memory (TM) component of the HTM algorithm, which is based on a recurrent network of spiking neurons with biophysically interpretable variables and parameters. The model learns high-order sequences by means of a structural Hebbian synaptic plasticity mechanism supplemented with a rate-based homeostatic control. In combination with nonlinear dendritic input integration and local inhibitory feedback, this type of plasticity leads to the dynamic self-organization of narrow sequence-specific subnetworks. These subnetworks provide the substrate for a faithful propagation of sparse, synchronous activity, and, thereby, for a robust, context specific prediction of future sequence elements as well as for the autonomous replay of previously learned sequences. By strengthening the link to biology, our implementation facilitates the evaluation of the TM hypothesis based on experimentally accessible quantities. The continuous-time implementation of the TM algorithm permits, in particular, an investigation of the role of sequence timing for sequence learning, prediction and replay. We demonstrate this aspect by studying the effect of the sequence speed on the sequence learning performance and on the speed of autonomous sequence replay.

Essentially all data processed by mammals and many other living organisms is sequential. This holds true for all types of sensory input data as well as motor output activity. Being able to form memories of such sequential data, to predict future sequence elements, and to replay learned sequences is a necessary prerequisite for survival. It has been hypothesized that sequence learning, prediction and replay constitute the fundamental computations performed by the neocortex. The Hierarchical Temporal Memory (HTM) constitutes an abstract, powerful algorithm implementing this form of computation and has been proposed to serve as a model of neocortical processing. In this study, we are reformulating this algorithm in terms of known biological ingredients and mechanisms to foster the verifiability of the HTM hypothesis based on electrophysiological and behavioral data. The proposed model learns continuously in an unsupervised manner by biologically plausible, local plasticity mechanisms, and successfully predicts and replays complex sequences. Apart from establishing contact to biology, the study sheds light on the mechanisms determining at what speed we can process sequences and provides an explanation of fast sequence replay observed in the hippocampus and in the neocortex.
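The core prediction step of the temporal-memory computation described above can be sketched as follows (a discrete-time caricature with hypothetical names and thresholds; the paper's contribution is precisely a continuous-time spiking version of this idea):

# A cell becomes predictive when any of its dendritic segments receives
# enough input from currently active cells (nonlinear dendritic integration).
SEG_THRESHOLD = 8   # coincident active inputs needed per segment (assumed)

def predictive_cells(active, segments):
    """active: set of active cell ids; segments: dict mapping a cell id to a
    list of sets of presynaptic cell ids (one set per dendritic segment)."""
    predicted = set()
    for cell, segs in segments.items():
        if any(len(seg & active) >= SEG_THRESHOLD for seg in segs):
            predicted.add(cell)
    return predicted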
Affiliation(s)
- Younes Bouhadjar
- Institute of Neuroscience and Medicine (INM-6), & Institute for Advanced Simulation (IAS-6), & JARA BRAIN Institute Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Peter Grünberg Institute (PGI-7,10), Jülich Research Centre and JARA, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
- Dirk J. Wouters
- Institute of Electronic Materials (IWE 2) & JARA-FIT, RWTH Aachen University, Aachen, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6), & Institute for Advanced Simulation (IAS-6), & JARA BRAIN Institute Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, & Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany
- Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6), & Institute for Advanced Simulation (IAS-6), & JARA BRAIN Institute Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
6. Wong EC. Distributed Phase Oscillatory Excitation Efficiently Produces Attractors Using Spike-Timing-Dependent Plasticity. Neural Comput 2021; 34:415-436. PMID: 34915556; DOI: 10.1162/neco_a_01466
Abstract
The brain is thought to represent information in the form of activity in distributed groups of neurons known as attractors. We show here that in a randomly connected network of simulated spiking neurons, periodic stimulation of neurons with distributed phase offsets, along with standard spike-timing-dependent plasticity (STDP), efficiently creates distributed attractors. These attractors may have a consistent ordered firing pattern or become irregular, depending on the conditions. We also show that when two such attractors are stimulated in sequence, the same STDP mechanism can create a directed association between them, forming the basis of an associative network. We find that for an STDP time constant of 20 ms, the dependence of the efficiency of attractor creation on the driving frequency has a broad peak centered around 8 Hz. Upon restimulation, the attractors self-oscillate, but with an oscillation frequency that is higher than the driving frequency, ranging from 10 to 100 Hz.
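A minimal pair-based STDP sketch of the mechanism described above (only the 20 ms time constant and the 8 Hz drive are taken from the abstract; amplitudes and network details are assumptions):

import numpy as np

TAU_STDP = 20e-3   # STDP time constant (s), as stated in the abstract
A_PLUS   = 0.01    # potentiation amplitude (assumed)
A_MINUS  = 0.012   # depression amplitude (assumed)

def stdp_dw(delta_t):
    """Weight change for a spike pair, delta_t = t_post - t_pre (s)."""
    if delta_t >= 0:   # pre before post: potentiation
        return A_PLUS * np.exp(-delta_t / TAU_STDP)
    return -A_MINUS * np.exp(delta_t / TAU_STDP)   # post before pre: depression

# Periodic drive at 8 Hz with distributed phase offsets: neuron i spikes at
# phase[i] of every cycle, so consistently ordered pairs are potentiated.
f_drive = 8.0
phases = np.linspace(0.0, 1.0, 50, endpoint=False)
spike_times = phases / f_drive
print(stdp_dw(spike_times[4] - spike_times[3]))   # neighbouring pair -> LTP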
Affiliation(s)
- Eric C Wong
- Departments of Radiology and Psychiatry, University of California, San Diego, La Jolla, CA 92093, U.S.A.
7. Frölich S, Marković D, Kiebel SJ. Neuronal Sequence Models for Bayesian Online Inference. Front Artif Intell 2021; 4:530937. PMID: 34095815; PMCID: PMC8176225; DOI: 10.3389/frai.2021.530937
Abstract
Various imaging and electrophysiological studies in a number of different species and brain regions have revealed that neuronal dynamics associated with diverse behavioral patterns and cognitive tasks take on a sequence-like structure, even when encoding stationary concepts. These neuronal sequences are characterized by robust and reproducible spatiotemporal activation patterns. This suggests that the role of neuronal sequences may be much more fundamental for brain function than is commonly believed. Furthermore, the idea that the brain is not simply a passive observer but an active predictor of its sensory input is supported by an enormous amount of evidence in fields as diverse as human ethology and physiology, besides neuroscience. Hence, a central aspect of this review is to illustrate how neuronal sequences can be understood as critical for probabilistic predictive information processing, and what dynamical principles can be used as generators of neuronal sequences. Moreover, since different lines of evidence from neuroscience and computational modeling suggest that the brain is organized in a functional hierarchy of time scales, we will also review how models based on sequence-generating principles can be embedded in such a hierarchy, to form a generative model for recognition and prediction of sensory input. We briefly introduce the Bayesian brain hypothesis as a prominent mathematical description of how online, i.e., fast, recognition and predictions may be computed by the brain. Finally, we briefly discuss some recent advances in machine learning, where spatiotemporally structured methods (akin to neuronal sequences) and hierarchical networks have independently been developed for a wide range of tasks. We conclude that the investigation of specific dynamical and structural principles of sequential brain activity not only helps us understand how the brain processes information and generates predictions, but also informs us about neuroscientific principles potentially useful for designing more efficient artificial neuronal networks for machine learning tasks.
Affiliation(s)
- Sascha Frölich
- Department of Psychology, Technische Universität Dresden, Dresden, Germany
8. Cone I, Shouval HZ. Learning precise spatiotemporal sequences via biophysically realistic learning rules in a modular, spiking network. eLife 2021; 10:e63751. PMID: 33734085; PMCID: PMC7972481; DOI: 10.7554/elife.63751
Abstract
Multiple brain regions are able to learn and express temporal sequences, and this functionality is an essential component of learning and memory. We propose a substrate for such representations via a network model that learns and recalls discrete sequences of variable order and duration. The model consists of a network of spiking neurons placed in a modular microcolumn based architecture. Learning is performed via a biophysically realistic learning rule that depends on synaptic 'eligibility traces'. Before training, the network contains no memory of any particular sequence. After training, presentation of only the first element in that sequence is sufficient for the network to recall an entire learned representation of the sequence. An extended version of the model also demonstrates the ability to successfully learn and recall non-Markovian sequences. This model provides a possible framework for biologically plausible sequence learning and memory, in agreement with recent experimental results.
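The eligibility-trace idea referred to above can be sketched as a generic two-factor rule (time constants and rates are assumptions; the paper's trace dynamics are more detailed):

import numpy as np

dt, tau_e, eta = 1e-3, 0.5, 1e-2   # step (s), trace decay (s), learning rate

def update(e, w, pre, post, learn_signal):
    # Hebbian coincidences are first stored in a decaying synaptic trace...
    e = e + dt * (-e / tau_e) + np.outer(post, pre)
    # ...and converted into weight changes only when a delayed learning
    # signal (e.g., reward or a supervisory factor) arrives.
    w = w + eta * learn_signal * e
    return e, w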
Affiliation(s)
- Ian Cone
- Neurobiology and Anatomy, University of Texas Medical School at Houston, Houston, TX, United States
- Applied Physics, Rice University, Houston, TX, United States
- Harel Z Shouval
- Neurobiology and Anatomy, University of Texas Medical School at Houston, Houston, TX, United States
9. Characteristics of sequential activity in networks with temporally asymmetric Hebbian learning. Proc Natl Acad Sci U S A 2020; 117:29948-29958. PMID: 33177232; PMCID: PMC7703604; DOI: 10.1073/pnas.1918674117
Abstract
Sequential activity is a prominent feature of many neural systems, in multiple behavioral contexts. Here, we investigate how Hebbian rules lead to storage and recall of random sequences of inputs in both rate and spiking recurrent networks. In the case of the simplest (bilinear) rule, we characterize extensively the regions in parameter space that allow sequence retrieval and compute analytically the storage capacity of the network. We show that nonlinearities in the learning rule can lead to sparse sequences and find that sequences maintain robust decoding but display highly labile dynamics to continuous changes in the connectivity matrix, similar to recent observations in hippocampus and parietal cortex.

Sequential activity has been observed in multiple neuronal circuits across species, neural structures, and behaviors. It has been hypothesized that sequences could arise from learning processes. However, it is still unclear whether biologically plausible synaptic plasticity rules can organize neuronal activity to form sequences whose statistics match experimental observations. Here, we investigate temporally asymmetric Hebbian rules in sparsely connected recurrent rate networks and develop a theory of the transient sequential activity observed after learning. These rules transform a sequence of random input patterns into synaptic weight updates. After learning, recalled sequential activity is reflected in the transient correlation of network activity with each of the stored input patterns. Using mean-field theory, we derive a low-dimensional description of the network dynamics and compute the storage capacity of these networks. Multiple temporal characteristics of the recalled sequential activity are consistent with experimental observations. We find that the degree of sparseness of the recalled sequences can be controlled by nonlinearities in the learning rule. Furthermore, sequences maintain robust decoding, but display highly labile dynamics, when synaptic connectivity is continuously modified due to noise or storage of other patterns, similar to recent observations in hippocampus and parietal cortex. Finally, we demonstrate that our results also hold in recurrent networks of spiking neurons with separate excitatory and inhibitory populations.
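In the bilinear case referred to above, the temporally asymmetric Hebbian rule stores each pattern against its successor, schematically (sparse-connectivity factors and normalization as in the paper):

\[
W_{ij} \;=\; \frac{A}{N} \sum_{\mu=1}^{P-1} f\!\left(\xi_i^{\mu+1}\right) g\!\left(\xi_j^{\mu}\right),
\qquad f(x) = g(x) = x \ \text{(bilinear case)},
\]

so that presenting pattern $\xi^{\mu}$ drives the network toward $\xi^{\mu+1}$; nonlinear choices of $f$ and $g$ produce the sparser recalled sequences described above.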
10. Yang Y, Stathis D, Jordão R, Hemani A, Lansner A. Optimizing BCPNN Learning Rule for Memory Access. Front Neurosci 2020; 14:878. PMID: 32982673; PMCID: PMC7487417; DOI: 10.3389/fnins.2020.00878
Abstract
Simulation of large-scale biologically plausible spiking neural networks, e.g., the Bayesian Confidence Propagation Neural Network (BCPNN), usually requires high-performance supercomputers with dedicated accelerators, such as GPUs, FPGAs, or even Application-Specific Integrated Circuits (ASICs). Almost all of these computers are based on the von Neumann architecture that separates storage and computation. In all these solutions, memory access is the dominant cost even for highly customized computation and memory architectures, such as ASICs. In this paper, we propose an optimization technique that can make the BCPNN simulation memory-access friendly by avoiding a dual-access pattern. The BCPNN synaptic traces and weights are organized as matrices accessed both row-wise and column-wise. Accessing data stored in DRAM with a dual-access pattern is extremely expensive. A post-synaptic history buffer and an approximation function are thus introduced to eliminate the troublesome column update. An error analysis combining theory and experiments suggests that the probability of introducing intolerable errors through this optimization can be bounded to a very small number, making it almost negligible. Derivation and validation of such a bound is the core contribution of this paper. Experiments on a GPU platform show that, compared to the previously reported baseline simulation strategy, the proposed optimization technique reduces the storage requirement by 33%, the global memory access demand by more than 27%, and the DRAM access rate by more than 5%; the latency of updating synaptic traces decreases by roughly 50%. Compared with a similar optimization technique reported in the literature, our method shows considerably better results. Although the BCPNN is used as the targeted neural network model, the proposed optimization method can be applied to other artificial neural network models based on a Hebbian learning rule.
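The key trick described above, replacing the column-wise update triggered by post-synaptic spikes with a buffered, row-wise-only reconstruction, can be sketched as follows (hypothetical names and a simplified exponential trace; the paper's equations and approximation function are more involved):

import numpy as np

N = 1024
P = np.full((N, N), 1e-3)            # synaptic probability traces (row-major)
last_update = np.zeros(N)            # per-row time of last update
post_history = []                    # buffer of (time, post id) spikes

def on_pre_spike(i, t, decay):
    """Touch only row i (a row-wise DRAM access). The column updates that
    post-synaptic spikes would normally require are reconstructed here from
    the post-synaptic history buffer instead."""
    P[i, :] *= np.exp(-decay * (t - last_update[i]))   # lazy decay
    for t_post, j in post_history:                     # replay buffered spikes
        if t_post > last_update[i]:
            P[i, j] += np.exp(-decay * (t - t_post))
    last_update[i] = t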
Affiliation(s)
- Yu Yang
- Division of Electronics and Embedded Systems, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Dimitrios Stathis
- Division of Electronics and Embedded Systems, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Rodolfo Jordão
- Division of Electronics and Embedded Systems, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Ahmed Hemani
- Division of Electronics and Embedded Systems, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Anders Lansner
- Division of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Department of Mathematics, Stockholm University, Stockholm, Sweden
11. Liang Q, Zeng Y, Xu B. Temporal-Sequential Learning With a Brain-Inspired Spiking Neural Network and Its Application to Musical Memory. Front Comput Neurosci 2020; 14:51. PMID: 32714173; PMCID: PMC7343962; DOI: 10.3389/fncom.2020.00051
Abstract
Sequence learning is a fundamental cognitive function of the brain. However, the ways in which sequential information is represented and memorized are not dealt with satisfactorily by existing models. To overcome this deficiency, this paper introduces a spiking neural network based on psychological and neurobiological findings at multiple scales. Compared with existing methods, our model has four novel features: (1) It contains several collaborative subnetworks similar to those in brain regions with different cognitive functions. The individual building blocks of the simulated areas are neural functional minicolumns composed of biologically plausible neurons. Both excitatory and inhibitory connections between neurons are modulated dynamically using a spike-timing-dependent plasticity learning rule. (2) Inspired by the mechanisms of the brain's cortical-striatal loop, a dependent timing module is constructed to encode temporal information, which is essential in sequence learning but has not been processed well by traditional algorithms. (3) Goal-based and episodic retrievals can be achieved at different time scales. (4) Musical memory is used as an application to validate the model. Experiments show that the model can store a huge amount of data on melodies and recall them with high accuracy. In addition, it can remember the entirety of a melody given only an episode or the melody played at different paces.
Affiliation(s)
- Qian Liang
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Yi Zeng
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- Bo Xu
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
12. An Indexing Theory for Working Memory Based on Fast Hebbian Plasticity. eNeuro 2020; 7:ENEURO.0374-19.2020. PMID: 32127347; PMCID: PMC7189483; DOI: 10.1523/eneuro.0374-19.2020
Abstract
Working memory (WM) is a key component of human memory and cognition. Computational models have been used to study the underlying neural mechanisms, but neglected the important role of short-term memory (STM) and long-term memory (LTM) interactions for WM. Here, we investigate these using a novel multiarea spiking neural network model of prefrontal cortex (PFC) and two parietotemporal cortical areas based on macaque data. We propose a WM indexing theory that explains how PFC could associate, maintain, and update multimodal LTM representations. Our simulations demonstrate how simultaneous, brief multimodal memory cues could build a temporary joint memory representation as an “index” in PFC by means of fast Hebbian synaptic plasticity. This index can then reactivate spontaneously and thereby also the associated LTM representations. Cueing one LTM item rapidly pattern completes the associated uncued item via PFC. The PFC–STM network updates flexibly as new stimuli arrive, thereby gradually overwriting older representations.
13. Hey, look over there: Distraction effects on rapid sequence recall. PLoS One 2020; 15:e0223743. PMID: 32275703; PMCID: PMC7147745; DOI: 10.1371/journal.pone.0223743
Abstract
In the course of everyday life, the brain must store and recall a huge variety of representations of stimuli which are presented in an ordered or sequential way. The processes by which the ordering of these various things is stored and recalled are moderately well understood. We use here a computational model of a cortex-like recurrent neural network adapted by a multitude of plasticity mechanisms. We first demonstrate the learning of a sequence. Then, we examine the influence of different types of distractors on the network dynamics during the recall of the encoded sequential information. We broadly identify two distinct effect categories for distractors, develop a basic understanding of why this is so, and predict which distractors will fall into each category.
14. Maes A, Barahona M, Clopath C. Learning spatiotemporal signals using a recurrent spiking network that discretizes time. PLoS Comput Biol 2020; 16:e1007606. PMID: 31961853; PMCID: PMC7028299; DOI: 10.1371/journal.pcbi.1007606
Abstract
Learning to produce spatiotemporal sequences is a common task that the brain has to solve. The same neurons may be used to produce different sequential behaviours. The way the brain learns and encodes such tasks remains unknown, as current computational models do not typically use realistic biologically-plausible learning. Here, we propose a model where a recurrent network of excitatory and inhibitory spiking neurons drives a read-out layer: the dynamics of the driver recurrent network is trained to encode time, which is then mapped through the read-out neurons to encode another dimension, such as space or a phase. Different spatiotemporal patterns can be learned and encoded through the synaptic weights to the read-out neurons, which follow common Hebbian learning rules. We demonstrate that the model is able to learn spatiotemporal dynamics on time scales that are behaviourally relevant and we show that the learned sequences are robustly replayed during a regime of spontaneous activity.
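The division of labor described above, a recurrent "clock" that discretizes time plus plastic read-out synapses, can be sketched as follows (the recurrent network is idealized as a sequence of active groups, and a simple Hebbian/delta-style rule is used on the read-out weights; all values are assumptions):

import numpy as np

T, n_clock, n_out, eta = 100, 50, 5, 0.05
clock = np.eye(n_clock)[np.arange(T) % n_clock]       # one active group per step
target = np.random.default_rng(1).random((T, n_out))  # desired read-out signal

W_out = np.zeros((n_out, n_clock))
for t in range(T):                                    # only read-out weights learn
    r_out = W_out @ clock[t]
    W_out += eta * np.outer(target[t] - r_out, clock[t])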
Affiliation(s)
- Amadeus Maes
- Department of Bioengineering, Imperial College London, London, United Kingdom
- Mauricio Barahona
- Department of Mathematics, Imperial College London, London, United Kingdom
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, United Kingdom
15. Joshi VV, Patel ND, Rehan MA, Kuppa A. Mysterious Mechanisms of Memory Formation: Are the Answers Hidden in Synapses? Cureus 2019; 11:e5795. PMID: 31728242; PMCID: PMC6827877; DOI: 10.7759/cureus.5795
Abstract
After decades of research on memory formation and retention, we are still searching for the definitive concept and process behind neuroplasticity. This review article will address the relationship between synapses, memory formation, and memory retention, and their genetic correlations. In the last six decades, there have been enormous improvements in the neurochemistry domain, especially in the area of neural plasticity. In the central nervous system, the complexity of the synapses between neurons allows communication among them. It is believed that each time certain types of sensory signals pass through sequences of synapses, these synapses can transmit the same signals more efficiently the next time. The concept of the Hebb synapse has provided revolutionary thinking about the nature of the neural mechanisms of learning and memory formation. Long-term potentiation and long-term depression, the crucial components of Hebbian plasticity, refine the local circuitry for memory formation and for behavioral change and stabilization in the mammalian central nervous system. In this review, we discuss the role of glutamatergic synapses, engram cells, cytokines, neuropeptides, neurosteroids, and other aspects of the synaptic basis of memory. Lastly, we cover the etiology of neurodegenerative disorders due to synaptic dysfunction. To enhance pharmacological interventions for neurodegenerative diseases, we need more research in this direction. With the help of technology and a better understanding of disease etiology, not only can we identify the missing pieces of synaptic function, but we might also cure or even prevent serious neurodegenerative diseases like Alzheimer's disease (AD).
Affiliation(s)
- Viraj V Joshi
- Neuropsychiatry, California Institute of Behavioral Neurosciences & Psychology, Fairfield, USA
- Nishita D Patel
- Research, California Institute of Behavioral Neurosciences & Psychology, Fairfield, USA
- Muhammad Awais Rehan
- Miscellaneous, California Institute of Behavioral Neurosciences & Psychology, Fairfield, USA
- Annapurna Kuppa
- Internal Medicine and Gastroenterology, University of Michigan, Ann Arbor, USA
16. Introducing double bouquet cells into a modular cortical associative memory model. J Comput Neurosci 2019; 47:223-230. PMID: 31502234; PMCID: PMC6879442; DOI: 10.1007/s10827-019-00729-1
Abstract
We present an electrophysiological model of double bouquet cells and integrate them into an established cortical columnar microcircuit model that has previously been used as a spiking attractor model for memory. Learning in that model relies on a Hebbian-Bayesian learning rule to condition recurrent connectivity between pyramidal cells. We here demonstrate that the inclusion of a biophysically plausible double bouquet cell model can solve earlier concerns about learning rules that simultaneously learn excitation and inhibition and might thus violate Dale’s principle. We show that learning ability and resulting effective connectivity between functional columns of previous network models is preserved when pyramidal synapses onto double bouquet cells are plastic under the same Hebbian-Bayesian learning rule. The proposed architecture draws on experimental evidence on double bouquet cells and effectively solves the problem of duplexed learning of inhibition and excitation by replacing recurrent inhibition between pyramidal cells in functional columns of different stimulus selectivity with a plastic disynaptic pathway. We thus show that the resulting change to the microcircuit architecture improves the model’s biological plausibility without otherwise impacting the model’s spiking activity, basic operation, and learning abilities.
17. Martinez RH, Lansner A, Herman P. Probabilistic associative learning suffices for learning the temporal structure of multiple sequences. PLoS One 2019; 14:e0220161. PMID: 31369571; PMCID: PMC6675053; DOI: 10.1371/journal.pone.0220161
Abstract
From memorizing a musical tune to navigating a well known route, many of our underlying behaviors have a strong temporal component. While the mechanisms behind the sequential nature of the underlying brain activity are likely multifarious and multi-scale, in this work we attempt to characterize to what degree some of these properties can be explained as a consequence of simple associative learning. To this end, we employ a parsimonious firing-rate attractor network equipped with the Hebbian-like Bayesian Confidence Propagation Neural Network (BCPNN) learning rule, which relies on synaptic traces with asymmetric temporal characteristics. The proposed network model is able to encode and reproduce temporal aspects of the input, and offers internal control of the recall dynamics by gain modulation. We provide an analytical characterisation of the relationship between the structure of the weight matrix, the dynamical network parameters, and the temporal aspects of sequence recall. We also present a computational study of the performance of the system under the effects of noise for an extensive region of the parameter space. Finally, we show how the inclusion of modularity in our network structure facilitates the learning and recall of multiple overlapping sequences even in a noisy regime.
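The asymmetric temporal characteristics mentioned above enter through pre- and post-synaptic traces filtered with different time constants before the probability estimates are formed (a schematic; the full equations are in the paper):

\[
\tau_z^{\mathrm{pre}} \frac{dz_j}{dt} = s_j - z_j, \qquad
\tau_z^{\mathrm{post}} \frac{dz_i}{dt} = s_i - z_i, \qquad
\tau_z^{\mathrm{pre}} \neq \tau_z^{\mathrm{post}},
\]

with the weight $w_{ij} = \log\!\big(p_{ij}/(p_i p_j)\big)$ computed from low-pass-filtered products of these traces; the mismatch between the two time constants is what makes $w_{ij}$ sensitive to the temporal order of activation and hence able to encode sequences.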
Affiliation(s)
- Ramon H. Martinez
- Computational Brain Science Lab, KTH Royal Institute of Technology, Stockholm, Sweden
- Anders Lansner
- Computational Brain Science Lab, KTH Royal Institute of Technology, Stockholm, Sweden
- Mathematics Department, Stockholm University, Stockholm, Sweden
- Pawel Herman
- Computational Brain Science Lab, KTH Royal Institute of Technology, Stockholm, Sweden
18. Herpich J, Tetzlaff C. Principles underlying the input-dependent formation and organization of memories. Netw Neurosci 2019; 3:606-634. PMID: 31157312; PMCID: PMC6542621; DOI: 10.1162/netn_a_00086
Abstract
The neuronal system exhibits the remarkable ability to dynamically store and organize incoming information into a web of memory representations (items), which is essential for the generation of complex behaviors. Central to memory function is that such memory items must be (1) discriminated from each other, (2) associated with each other, or (3) brought into a sequential order. However, how these three basic mechanisms are robustly implemented in an input-dependent manner by the underlying complex neuronal and synaptic dynamics is still unknown. Here, we develop a mathematical framework that directly links different synaptic mechanisms, which determine the neuronal and synaptic dynamics of the network, to the emergence of these basic memory operations. Combining correlation-based synaptic plasticity and homeostatic synaptic scaling, we demonstrate that these mechanisms enable the reliable formation of sequences and associations between two memory items, but still lack the capability for discrimination. We show that this shortcoming can be removed by additionally considering inhibitory synaptic plasticity. Thus, the framework presented here provides a new, functionally motivated link between different known synaptic mechanisms, leading to the self-organization of fundamental memory mechanisms.
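The combination of correlation-based plasticity and homeostatic synaptic scaling used here takes, in earlier work from the same group, a schematic form along these lines (notation assumed; the paper states the exact equations):

\[
\frac{dw_{ij}}{dt} \;=\; \mu\, F_i F_j \;+\; \kappa \left(F_T - F_i\right) w_{ij}^{2},
\]

where $F_i, F_j$ are post- and pre-synaptic firing rates, $F_T$ is a homeostatic target rate, and $\mu, \kappa$ set the relative speed of the two processes; the first term grows synapses between co-active items, while the second scales synapses to stabilize overall activity.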
Affiliation(s)
- Juliane Herpich
- Department of Computational Neuroscience, Third Institute of Physics - Biophysics, Georg-August-University, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Georg-August-University, Göttingen, Germany
- Christian Tetzlaff
- Department of Computational Neuroscience, Third Institute of Physics - Biophysics, Georg-August-University, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Georg-August-University, Göttingen, Germany
19. Bridging structure and function: A model of sequence learning and prediction in primary visual cortex. PLoS Comput Biol 2018; 14:e1006187. PMID: 29870532; PMCID: PMC6003695; DOI: 10.1371/journal.pcbi.1006187
Abstract
Recent experiments have demonstrated that visual cortex engages in spatio-temporal sequence learning and prediction. The cellular basis of this learning remains unclear, however. Here we present a spiking neural network model that explains a recent study on sequence learning in the primary visual cortex of rats. The model posits that the sequence learning and prediction abilities of cortical circuits result from the interaction of spike-timing dependent plasticity (STDP) and homeostatic plasticity mechanisms. It also reproduces changes in stimulus-evoked multi-unit activity during learning. Furthermore, it makes precise predictions regarding how training shapes network connectivity to establish its prediction ability. Finally, it predicts that the adapted connectivity gives rise to systematic changes in spontaneous network activity. Taken together, our model establishes a new conceptual bridge between the structure and function of cortical circuits in the context of sequence learning and prediction.

A central goal of Neuroscience is to understand the relationship between the structure and function of brain networks. Of particular interest are the circuits of the neocortex, the seat of our highest cognitive abilities. Here we provide a new link between the structure and function of neocortical circuits in the context of sequence learning. We study a spiking neural network model that self-organizes its connectivity and activity via a combination of different plasticity mechanisms known to operate in cortical circuits. We use this model to explain various findings from a recent experimental study on sequence learning and prediction in rat visual cortex. Our model reproduces the changes in activity patterns as the animal learns the sequential pattern of visual stimulation. In addition, the model predicts what stimulation-induced structural changes underlie this sequence learning ability. Finally, the model also predicts how the adapted network structure influences spontaneous network activity when there is no visual stimulation. Hence, our model provides new insights about the relationship between structure and function of cortical circuits.
20. Bhalla US. Dendrites, deep learning, and sequences in the hippocampus. Hippocampus 2017; 29:239-251. PMID: 29024221; DOI: 10.1002/hipo.22806
Abstract
The hippocampus places us both in time and space. It does so over remarkably large spans: milliseconds to years, and centimeters to kilometers. This works for sensory representations, for memory, and for behavioral context. How does it fit in such wide ranges of time and space scales, and keep order among the many dimensions of stimulus context? A key organizing principle for a wide sweep of scales and stimulus dimensions is that of order in time, or sequences. Sequences of neuronal activity are ubiquitous in sensory processing, in motor control, in planning actions, and in memory. Against this strong evidence for the phenomenon, there are currently more models than definite experiments about how the brain generates ordered activity. The flip side of sequence generation is discrimination. Discrimination of sequences has been extensively studied at the behavioral, systems, and modeling level, but again physiological mechanisms are fewer. It is against this backdrop that I discuss two recent developments in neural sequence computation that at face value share little beyond the label "neural": dendritic sequence discrimination, and deep learning. One derives from channel physiology and molecular signaling, the other from applied neural network theory, apparently extreme ends of the spectrum of neural circuit detail. I suggest that each of these topics has deep lessons about the possible mechanisms, scales, and capabilities of hippocampal sequence computation.
Affiliation(s)
- Upinder S Bhalla
- Neurobiology, National Centre for Biological Sciences, Tata Institute of Fundamental Research, Bellary Road, Bangalore 560065, Karnataka, India
21. Wang Q, Rothkopf CA, Triesch J. A model of human motor sequence learning explains facilitation and interference effects based on spike-timing dependent plasticity. PLoS Comput Biol 2017; 13:e1005632. PMID: 28767646; PMCID: PMC5555713; DOI: 10.1371/journal.pcbi.1005632
Abstract
The ability to learn sequential behaviors is a fundamental property of our brains. Yet a long stream of studies, including recent experiments investigating motor sequence learning in adult human subjects, has produced a number of puzzling and seemingly contradictory results. In particular, when subjects have to learn multiple action sequences, learning is sometimes impaired by proactive and retroactive interference effects. In other situations, however, learning is accelerated as reflected in facilitation and transfer effects. At present it is unclear what the underlying neural mechanisms are that give rise to these diverse findings. Here we show that a recently developed recurrent neural network model readily reproduces this diverse set of findings. The self-organizing recurrent neural network (SORN) model is a network of recurrently connected threshold units that combines a simplified form of spike-timing dependent plasticity (STDP) with homeostatic plasticity mechanisms ensuring network stability, namely intrinsic plasticity (IP) and synaptic normalization (SN). When trained on sequence learning tasks modeled after recent experiments, we find that it reproduces the full range of interference, facilitation, and transfer effects. We show how these effects are rooted in the network's changing internal representation of the different sequences across learning and how they depend on an interaction of training schedule and task similarity. Furthermore, since learning in the model is based on fundamental neuronal plasticity mechanisms, the model reveals how these plasticity mechanisms are ultimately responsible for the network's sequence learning abilities. In particular, we find that all three plasticity mechanisms are essential for the network to learn effective internal models of the different training sequences. This ability to form effective internal models is also the basis for the observed interference and facilitation effects. This suggests that STDP, IP, and SN may be the driving forces behind our ability to learn complex action sequences.

From dialing a phone number to driving home after work, much of human behavior is inherently sequential. But how do we learn such sequential behaviors, and what neural plasticity mechanisms support this learning? Recent experiments on sequence learning in human adults have produced a range of confusing findings, especially when subjects have to learn multiple sequences at the same time. For example, the success of training can strongly depend on subjects' training schedules, i.e., whether they practice one task until they are proficient before switching to the next or whether they interleave training of the different tasks. Here we show that a self-organizing neural network model readily explains many findings on human sequence learning. The model is formulated as a recurrent network of simplified spiking neurons and incorporates multiple biologically plausible plasticity mechanisms of neurons and synapses. Therefore, it offers a theoretical bridge between basic mechanisms of synaptic and neuronal plasticity and the behavior of human subjects in sequence learning tasks.
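A minimal SORN-style update step combining the three plasticity mechanisms named above, STDP, intrinsic plasticity (IP), and synaptic normalization (SN), for binary threshold units (parameter values are assumptions, not those of the paper):

import numpy as np

rng = np.random.default_rng(0)
N, eta_stdp, eta_ip, h_target = 200, 1e-3, 1e-3, 0.1
W = rng.random((N, N)) * (rng.random((N, N)) < 0.05)   # sparse recurrent weights
T = rng.random(N) * 0.5                                 # unit thresholds
x = (rng.random(N) < h_target).astype(float)            # binary activity

def sorn_step(x, W, T):
    x_new = ((W @ x - T) > 0).astype(float)
    # STDP (discrete time): strengthen pre(t) -> post(t+1), weaken the reverse
    W = np.clip(W + eta_stdp * (np.outer(x_new, x) - np.outer(x, x_new)), 0, None)
    W = W / (W.sum(axis=1, keepdims=True) + 1e-12)      # SN: fixed total input
    T = T + eta_ip * (x_new - h_target)                 # IP: push rate to target
    return x_new, W, T

for _ in range(100):
    x, W, T = sorn_step(x, W, T)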
Affiliation(s)
- Quan Wang
- Frankfurt Institute for Advanced Studies, Ruth-Moufang Str. 1, 60438 Frankfurt, Germany
- Constantin A. Rothkopf
- Frankfurt Institute for Advanced Studies, Ruth-Moufang Str. 1, 60438 Frankfurt, Germany
- Centre for Cognitive Science & Institute of Psychology, Technical University Darmstadt, Darmstadt, Germany
- Jochen Triesch
- Frankfurt Institute for Advanced Studies, Ruth-Moufang Str. 1, 60438 Frankfurt, Germany
22. A Spiking Working Memory Model Based on Hebbian Short-Term Potentiation. J Neurosci 2017; 37:83-96. PMID: 28053032; PMCID: PMC5214637; DOI: 10.1523/jneurosci.1989-16.2016
Abstract
A dominant theory of working memory (WM), referred to as the persistent activity hypothesis, holds that recurrently connected neural networks, presumably located in the prefrontal cortex, encode and maintain memory items through sustained elevated activity. Reexamination of experimental data has shown that prefrontal cortex activity in single units during delay periods is much more variable than predicted by such a theory and associated computational models. Alternative models of WM maintenance based on synaptic plasticity, such as short-term nonassociative (non-Hebbian) synaptic facilitation, have been suggested but cannot account for encoding of novel associations. Here we test the hypothesis that a recently identified fast-expressing form of Hebbian synaptic plasticity (associative short-term potentiation) is a possible mechanism for WM encoding and maintenance. Our simulations using a spiking neural network model of cortex reproduce a range of cognitive memory effects in the classical multi-item WM task of encoding and immediate free recall of word lists. Memory reactivation in the model occurs in discrete oscillatory bursts rather than as sustained activity. We relate dynamic network activity as well as key synaptic characteristics to electrophysiological measurements. Our findings support the hypothesis that fast Hebbian short-term potentiation is a key WM mechanism.

Significance Statement: Working memory (WM) is a key component of cognition. Hypotheses about the neural mechanism behind WM are currently under revision. Reflecting recent findings of fast Hebbian synaptic plasticity in cortex, we test whether a cortical spiking neural network model with such a mechanism can learn a multi-item WM task (word list learning). We show that our model can reproduce human cognitive phenomena and achieve comparable memory performance in both free and cued recall while being simultaneously compatible with experimental data on structure, connectivity, and neurophysiology of the underlying cortical tissue. These findings are directly relevant to the ongoing paradigm shift in the WM field.
23. Murray JM, Escola GS. Learning multiple variable-speed sequences in striatum via cortical tutoring. eLife 2017; 6:e26084. PMID: 28481200; PMCID: PMC5446244; DOI: 10.7554/elife.26084
Abstract
Sparse, sequential patterns of neural activity have been observed in numerous brain areas during timekeeping and motor sequence tasks. Inspired by such observations, we construct a model of the striatum, an all-inhibitory circuit where sequential activity patterns are prominent, addressing the following key challenges: (i) obtaining control over temporal rescaling of the sequence speed, with the ability to generalize to new speeds; (ii) facilitating flexible expression of distinct sequences via selective activation, concatenation, and recycling of specific subsequences; and (iii) enabling the biologically plausible learning of sequences, consistent with the decoupling of learning and execution suggested by lesion studies showing that cortical circuits are necessary for learning, but that subcortical circuits are sufficient to drive learned behaviors. The same mechanisms that we describe can also be applied to circuits with both excitatory and inhibitory populations, and hence may underlie general features of sequential neural activity pattern generation in the brain.
Affiliation(s)
- James M Murray
- Center for Theoretical Neuroscience, Columbia University, New York, United States
- G Sean Escola
- Center for Theoretical Neuroscience, Columbia University, New York, United States
24. Knight JC, Furber SB. Synapse-Centric Mapping of Cortical Models to the SpiNNaker Neuromorphic Architecture. Front Neurosci 2016; 10:420. PMID: 27683540; PMCID: PMC5022244; DOI: 10.3389/fnins.2016.00420
Abstract
While the adult human brain has approximately 8.8 × 10^10 neurons, this number is dwarfed by its 1 × 10^15 synapses. From the point of view of neuromorphic engineering and neural simulation in general, this makes the simulation of these synapses a particularly complex problem. SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Current solutions for simulating spiking neural networks on SpiNNaker are heavily inspired by work on distributed high-performance computing. However, while SpiNNaker shares many characteristics with such distributed systems, its component nodes have much more limited resources and, as the system lacks global synchronization, the computation performed on each node must complete within a fixed time step. We first analyze the performance of the current SpiNNaker neural simulation software and identify several problems that occur when it is used to simulate networks of the type often used to model the cortex which contain large numbers of sparsely connected synapses. We then present a new, more flexible approach for mapping the simulation of such networks to SpiNNaker which solves many of these problems. Finally we analyze the performance of our new approach using both benchmarks, designed to represent cortical connectivity, and larger, functional cortical models. In a benchmark network where neurons receive input from 8000 STDP synapses, our new approach allows 4× more neurons to be simulated on each SpiNNaker core than has been previously possible. We also demonstrate that the largest plastic neural network previously simulated on neuromorphic hardware can be run in real time using our new approach: double the speed that was previously achieved. Additionally this network contains two types of plastic synapse which previously had to be trained separately but, using our new approach, can be trained simultaneously.
Affiliation(s)
- James C Knight
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, UK
- Steve B Furber
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, UK