51
Zhang T, Cheng X, Jia S, Poo MM, Zeng Y, Xu B. Self-backpropagation of synaptic modifications elevates the efficiency of spiking and artificial neural networks. Sci Adv 2021; 7:eabh0146. PMID: 34669481; PMCID: PMC8528419; DOI: 10.1126/sciadv.abh0146.
Abstract
Many synaptic plasticity rules found in natural circuits have not been incorporated into artificial neural networks (ANNs). We showed that incorporating a nonlocal feature of synaptic plasticity found in natural neural networks, whereby synaptic modification at the output synapses of a neuron backpropagates to its input synapses made by upstream neurons, markedly reduced the computational cost without affecting the accuracy of spiking neural networks (SNNs) and ANNs in supervised learning for three benchmark tasks. For SNNs, synaptic modification at output neurons generated by spike timing–dependent plasticity was allowed to self-propagate to limited upstream synapses. For ANNs, synaptic weight modifications produced by the conventional backpropagation algorithm at output neurons self-backpropagated to limited upstream synapses. Such self-propagating plasticity may produce coordinated synaptic modifications across neuronal layers that reduce computational cost.
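As a rough illustration of the self-propagating plasticity described above, the sketch below (a simplified toy, not the authors' published rule) computes a delta-rule update at the output synapses and copies a scaled version of each hidden neuron's output-synapse modification onto its input synapses; the scaling factor `sbp_scale` and the gating by the presynaptic input are illustrative assumptions.

```python
# Toy sketch of self-backpropagating plasticity (SBP): a weight change computed
# at a neuron's output synapses is partially copied back to its input synapses,
# instead of running full backprop. Scaling and gating are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 20, 10, 2
W_in = rng.normal(0, 0.1, (n_hid, n_in))    # input -> hidden weights
W_out = rng.normal(0, 0.1, (n_out, n_hid))  # hidden -> output weights
lr, sbp_scale = 0.1, 0.5

def forward(x):
    h = np.tanh(W_in @ x)
    y = W_out @ h
    return h, y

for _ in range(100):
    x = rng.normal(size=n_in)
    target = np.array([x[:10].sum(), x[10:].sum()])   # toy regression target
    h, y = forward(x)

    # Local delta-rule update at the output synapses only.
    err = target - y
    dW_out = lr * np.outer(err, h)
    W_out += dW_out

    # SBP step: each hidden neuron inherits a scalar "modification" equal to the
    # mean change of its outgoing synapses and applies a scaled copy of it to
    # its incoming synapses, gated by the presynaptic input.
    mod = dW_out.mean(axis=0)                 # one value per hidden neuron
    W_in += sbp_scale * np.outer(mod, x)
```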
Affiliation(s)
- Tielin Zhang
- Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Xiang Cheng
- Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Shuncheng Jia
- Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Mu-ming Poo
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Institute of Neuroscience, State Key Laboratory of Neuroscience, Chinese Academy of Sciences, Shanghai 200031, China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Shanghai Center for Brain Science and Brain-Inspired Intelligence Technology, Shanghai 201210, China
- Yi Zeng
- Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Bo Xu
- Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Corresponding author.
52
Perez-Nieves N, Leung VCH, Dragotti PL, Goodman DFM. Neural heterogeneity promotes robust learning. Nat Commun 2021; 12:5791. PMID: 34608134; PMCID: PMC8490404; DOI: 10.1038/s41467-021-26022-3.
Abstract
The brain is a hugely diverse, heterogeneous structure. Whether or not heterogeneity at the neural level plays a functional role remains unclear, and it has been relatively little explored in models, which are often highly homogeneous. We compared the performance of spiking neural networks trained to carry out tasks of real-world difficulty, with varying degrees of heterogeneity, and found that heterogeneity substantially improved task performance. Learning with heterogeneity was more stable and robust, particularly for tasks with a rich temporal structure. In addition, the distributions of neuronal parameters in the trained networks are similar to those observed experimentally. We suggest that the heterogeneity observed in the brain may be more than just a byproduct of noisy processes and may instead serve an active and important role in allowing animals to learn in changing environments.
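A minimal sketch of what neural-level heterogeneity means in such models: each LIF neuron gets its own membrane time constant instead of a shared one. In the study these constants are trained; here they are simply drawn from a gamma distribution, an assumption made only for illustration.

```python
# Sketch: a LIF layer with per-neuron membrane time constants (heterogeneous)
# versus one shared value (homogeneous). Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, dt, t_steps = 100, 1e-3, 300
tau = rng.gamma(shape=3.0, scale=10e-3, size=n)   # heterogeeneous taus (s)... see note
tau = rng.gamma(shape=3.0, scale=10e-3, size=n)   # heterogeneous taus (s)
# tau = np.full(n, 20e-3)                         # homogeneous alternative
v = np.zeros(n)
v_th, spikes = 1.0, []

inp = (rng.random((t_steps, n)) < 0.05).astype(float)  # Poisson-like drive
for t in range(t_steps):
    v += dt / tau * (-v) + 0.3 * inp[t]   # leaky integration, per-neuron decay
    fired = v >= v_th
    spikes.append(fired.copy())
    v[fired] = 0.0                        # reset after a spike

print("mean firing rate per neuron (Hz):",
      np.mean(np.stack(spikes).mean(axis=0) / dt).round(1))
```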
Affiliation(s)
- Nicolas Perez-Nieves
- Department of Electrical and Electronic Engineering, Imperial College London, London, SW7 2AZ, UK.
- Vincent C H Leung
- Department of Electrical and Electronic Engineering, Imperial College London, London, SW7 2AZ, UK
- Pier Luigi Dragotti
- Department of Electrical and Electronic Engineering, Imperial College London, London, SW7 2AZ, UK
- Dan F M Goodman
- Department of Electrical and Electronic Engineering, Imperial College London, London, SW7 2AZ, UK.
53
Transfer-RLS method and transfer-FORCE learning for simple and fast training of reservoir computing models. Neural Netw 2021; 143:550-563. PMID: 34304003; DOI: 10.1016/j.neunet.2021.06.031.
Abstract
Reservoir computing is a machine learning framework derived from a special type of recurrent neural network. Following recent advances in physical reservoir computing, some reservoir computing devices are regarded as promising energy-efficient machine learning hardware for real-time information processing. To realize efficient online learning with low-power reservoir computing devices, it is beneficial to develop fast-converging learning methods with simpler operations. This study proposes a training method that lies between the recursive least squares (RLS) method and the least mean squares (LMS) method, the standard online learning methods for reservoir computing models. The RLS method converges fast but requires updates of a huge matrix called the gain matrix, whereas the LMS method does not use a gain matrix but converges very slowly. The proposed method, called the transfer-RLS method, avoids updates of the gain matrix in the main-training phase by updating it in advance (i.e., in a pre-training phase). As a result, the transfer-RLS method can work with simpler operations than the original RLS method without sacrificing much convergence speed. We show numerically and analytically that the transfer-RLS method converges much faster than the LMS method. Furthermore, we show that a modified version of the transfer-RLS method (called transfer-FORCE learning) can be applied to first-order reduced and controlled error (FORCE) learning for a reservoir computing model with a closed loop, which is challenging to train.
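A schematic of the training scheme described above, under illustrative assumptions (regularization, random stand-in reservoir states, and a surrogate pre-training signal): standard RLS adapts both the readout weights and the gain matrix during a pre-training phase, after which the gain matrix is frozen and only the cheap weight update is applied in the main phase.

```python
# Sketch of the transfer-RLS idea: pre-train the RLS gain matrix P, then freeze
# it so that the main-training phase needs no matrix updates.
import numpy as np

rng = np.random.default_rng(2)
n = 50                       # reservoir (feature) dimension
w = np.zeros(n)              # readout weights
P = np.eye(n)                # gain matrix, roughly an inverse correlation matrix

def rls_step(w, P, r, target, update_P=True):
    err = target - w @ r
    k = P @ r / (1.0 + r @ P @ r)    # gain vector
    w = w + err * k
    if update_P:
        P = P - np.outer(k, r @ P)   # standard RLS gain-matrix update
    return w, P

# Pre-training phase: adapt both w and P on a surrogate task/signal.
for _ in range(500):
    r = np.tanh(rng.normal(size=n))
    w, P = rls_step(w, P, r, target=np.sin(r.sum()), update_P=True)

# Main-training phase (transfer-RLS): P is frozen, only w is updated, which
# costs one matrix-vector product per step instead of a full matrix update.
for _ in range(500):
    r = np.tanh(rng.normal(size=n))
    w, _ = rls_step(w, P, r, target=np.sin(r.sum()), update_P=False)
```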
54
Kim CM, Chow CC. Training Spiking Neural Networks in the Strong Coupling Regime. Neural Comput 2021; 33:1199-1233. PMID: 34496392; DOI: 10.1162/neco_a_01379.
Abstract
Recurrent neural networks trained to perform complex tasks can provide insight into the dynamic mechanisms that underlie computations performed by cortical circuits. However, due to the large number of unconstrained synaptic connections, the recurrent connectivity that emerges from network training may not be biologically plausible. Therefore, it remains unknown if and how biological neural circuits implement the dynamic mechanisms proposed by the models. To narrow this gap, we developed a training scheme that, in addition to achieving learning goals, respects the structural and dynamic properties of a standard cortical circuit model: strongly coupled excitatory-inhibitory spiking neural networks. By preserving the strong mean excitatory and inhibitory coupling of the initial networks, we found that most of the trained synapses obeyed Dale's law without additional constraints, and that the trained networks exhibited large trial-to-trial spiking variability and operated in an inhibition-stabilized regime. We derived analytical estimates of how training and network parameters constrained the changes in mean synaptic strength during training. Our results demonstrate that training recurrent neural networks subject to strong coupling constraints can result in connectivity structure and a dynamic regime relevant to cortical circuits.
Affiliation(s)
- Christopher M Kim
- Laboratory of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases/National Institutes of Health, Bethesda, MD 20814, U.S.A.
- Carson C Chow
- Laboratory of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases/National Institutes of Health, Bethesda, MD 20814, U.S.A.
55
Singanamalla SKR, Lin CT. Spiking Neural Network for Augmenting Electroencephalographic Data for Brain Computer Interfaces. Front Neurosci 2021; 15:651762. PMID: 33867928; PMCID: PMC8047134; DOI: 10.3389/fnins.2021.651762.
Abstract
With the advent of advanced machine learning methods, the performance of brain–computer interfaces (BCIs) has improved unprecedentedly. However, electroencephalography (EEG), a commonly used brain imaging method for BCIs, involves a tedious experimental setup and frequent data loss due to artifacts, and the bulk trial recordings needed to take advantage of deep learning classifiers are time consuming. Some studies have tried to address this issue by generating artificial EEG signals. However, some of these methods are limited in retaining the prominent features or biomarkers of the signal, and other deep learning-based generative methods require a huge number of samples for training, with most of these models handling data augmentation of only one category or class of data in any training session. Therefore, there is a need for a generative model that can generate multi-class synthetic EEG samples from as few available trials as possible while retaining the biomarkers of the signal. Since the EEG signal represents an accumulation of action potentials from neuronal populations beneath the scalp surface, and since a spiking neural network (SNN), a biologically closer artificial neural network, communicates via spiking behavior, we propose an SNN-based approach using surrogate-gradient descent learning to reconstruct and generate multi-class artificial EEG signals from just a few original samples. The network was employed for augmenting motor imagery (MI) and steady-state visually evoked potential (SSVEP) data. The artificial data were further validated through classification and correlation metrics to assess their resemblance to the original data, and they in turn enhanced the MI classification performance.
Affiliation(s)
- Sai Kalyan Ranga Singanamalla
- Computational Intelligence and Brain Computer Interface Lab, School of Computer Science, University of Technology Sydney, Sydney, NSW, Australia
- Chin-Teng Lin
- Computational Intelligence and Brain Computer Interface Lab, School of Computer Science, University of Technology Sydney, Sydney, NSW, Australia
- Centre for Artificial Intelligence, University of Technology Sydney, Sydney, NSW, Australia
56
57
Cone I, Shouval HZ. Learning precise spatiotemporal sequences via biophysically realistic learning rules in a modular, spiking network. eLife 2021; 10:e63751. PMID: 33734085; PMCID: PMC7972481; DOI: 10.7554/elife.63751.
Abstract
Multiple brain regions are able to learn and express temporal sequences, and this functionality is an essential component of learning and memory. We propose a substrate for such representations via a network model that learns and recalls discrete sequences of variable order and duration. The model consists of a network of spiking neurons placed in a modular microcolumn based architecture. Learning is performed via a biophysically realistic learning rule that depends on synaptic 'eligibility traces'. Before training, the network contains no memory of any particular sequence. After training, presentation of only the first element in that sequence is sufficient for the network to recall an entire learned representation of the sequence. An extended version of the model also demonstrates the ability to successfully learn and recall non-Markovian sequences. This model provides a possible framework for biologically plausible sequence learning and memory, in agreement with recent experimental results.
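The "eligibility trace" idea invoked above can be sketched in a few lines; the time constants, spike statistics, and the form of the learning signal below are illustrative assumptions rather than the paper's specific rule. A pre/post coincidence charges a slowly decaying per-synapse trace, and a later learning signal converts whatever trace remains into an actual weight change.

```python
# Sketch of plasticity gated by a synaptic eligibility trace.
import numpy as np

rng = np.random.default_rng(3)
T, dt = 1000, 1e-3
tau_e = 0.5                     # eligibility trace time constant (s)
w, e = 0.0, 0.0

pre = rng.random(T) < 0.02      # presynaptic spike train
post = rng.random(T) < 0.02     # postsynaptic spike train
reward = np.zeros(T)
reward[700] = 1.0               # delayed learning signal

lr = 0.1
for t in range(T):
    e += -dt / tau_e * e + float(pre[t] and post[t])  # decay + coincidence
    w += lr * reward[t] * e                           # update gated by the signal

print("final weight:", round(w, 4))
```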
Affiliation(s)
- Ian Cone
- Neurobiology and Anatomy, University of Texas Medical School at Houston, Houston, TX, United States
- Applied Physics, Rice University, Houston, TX, United States
- Harel Z Shouval
- Neurobiology and Anatomy, University of Texas Medical School at Houston, Houston, TX, United States
58
Maes A, Barahona M, Clopath C. Learning compositional sequences with multiple time scales through a hierarchical network of spiking neurons. PLoS Comput Biol 2021; 17:e1008866. PMID: 33764970; PMCID: PMC8023498; DOI: 10.1371/journal.pcbi.1008866.
Abstract
Sequential behaviour is often compositional and organised across multiple time scales: individual elements that develop on short time scales (motifs) are combined to form longer functional sequences (syntax). Such organisation leads to a natural hierarchy that can be used advantageously for learning, since the motifs and the syntax can be acquired independently. Despite mounting experimental evidence for hierarchical structures in neuroscience, models for temporal learning based on neuronal networks have mostly focused on serial methods. Here, we introduce a network model of spiking neurons with a hierarchical organisation aimed at sequence learning on multiple time scales. Using biophysically motivated neuron dynamics and local plasticity rules, the model can learn motifs and syntax independently. Furthermore, the model can relearn sequences efficiently and store multiple sequences. Compared to serial learning, the hierarchical model displays faster learning, more flexible relearning, increased capacity, and higher robustness to perturbations. The hierarchical model redistributes the variability: it achieves high motif fidelity at the cost of higher variability in the between-motif timings.
Affiliation(s)
- Amadeus Maes
- Bioengineering Department, Imperial College London, London, United Kingdom
- Mauricio Barahona
- Mathematics Department, Imperial College London, London, United Kingdom
- Claudia Clopath
- Bioengineering Department, Imperial College London, London, United Kingdom
59
Zenke F, Bohté SM, Clopath C, Comşa IM, Göltz J, Maass W, Masquelier T, Naud R, Neftci EO, Petrovici MA, Scherr F, Goodman DFM. Visualizing a joint future of neuroscience and neuromorphic engineering. Neuron 2021; 109:571-575. PMID: 33600754; DOI: 10.1016/j.neuron.2021.01.009.
Abstract
Recent research resolves the challenging problem of building biophysically plausible spiking neural models that are also capable of complex information processing. This advance creates new opportunities in neuroscience and neuromorphic engineering, which we discussed at an online focus meeting.
Affiliation(s)
- Friedemann Zenke
- Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland.
- Sander M Bohté
- CWI, Amsterdam, the Netherlands; Swammerdam Institute for Life Sciences (SILS), University of Amsterdam, Amsterdam, the Netherlands; AI Department, Rijksuniversiteit Groningen, Groningen, the Netherlands
- Claudia Clopath
- Bioengineering Department, Imperial College London, London, UK
- Julian Göltz
- Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany; Department of Physiology, University of Bern, Bern, Switzerland
- Wolfgang Maass
- Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Richard Naud
- Brain and Mind Research Institute of the University of Ottawa, Department of Cellular Molecular Medicine, University of Ottawa, Ottawa, Canada
- Emre O Neftci
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, USA; Department of Computer Science, University of California, Irvine, Irvine, CA, USA
- Mihai A Petrovici
- Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany; Department of Physiology, University of Bern, Bern, Switzerland
- Franz Scherr
- Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Dan F M Goodman
- Department of Electrical and Electronic Engineering, Imperial College London, London, UK
60
Muratore P, Capone C, Paolucci PS. Target spike patterns enable efficient and biologically plausible learning for complex temporal tasks. PLoS One 2021; 16:e0247014. PMID: 33592040; PMCID: PMC7886200; DOI: 10.1371/journal.pone.0247014.
Abstract
Recurrent spiking neural networks (RSNN) in the brain learn to perform a wide range of perceptual, cognitive and motor tasks very efficiently in terms of energy consumption, and their training requires very few examples. This motivates the search for biologically inspired learning rules for RSNNs, aiming to improve our understanding of brain computation and the efficiency of artificial intelligence. Several spiking models and learning rules have been proposed, but it remains a challenge to design RSNNs whose learning relies on biologically plausible mechanisms and that are capable of solving complex temporal tasks. In this paper, we derive a learning rule, local to the synapse, from a simple mathematical principle, the maximization of the likelihood for the network to solve a specific task. We propose a novel target-based learning scheme in which the learning rule derived from likelihood maximization is used to mimic a specific spatio-temporal spike pattern that encodes the solution to complex temporal tasks. This method makes the learning extremely rapid and precise, outperforming state-of-the-art algorithms for RSNNs. Whereas error-based approaches (e.g., e-prop) optimize the internal sequence of spikes trial after trial in order to progressively minimize the MSE, we assume that a signal randomly projected from an external origin (e.g., from other brain areas) directly defines the target sequence. This facilitates the learning procedure since the network is trained from the beginning to reproduce the desired internal sequence. We propose two versions of our learning rule: spike-dependent and voltage-dependent. We find that the latter provides remarkable benefits in terms of learning speed and robustness to noise. We demonstrate the capacity of our model to tackle several problems, such as learning multidimensional trajectories and solving the classical temporal XOR benchmark. Finally, we show that an online approximation of the gradient ascent, in addition to guaranteeing complete locality in time and space, allows learning after very few presentations of the target output. Our model can be applied to different types of biological neurons. The analytically derived plasticity learning rule is specific to each neuron model and can produce a theoretical prediction for experimental validation.
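A compact sketch of a target-based, local, likelihood-style update of the kind described above; the escape-rate nonlinearity, the trace time constant, and the random target spike train below are illustrative assumptions, not the paper's specific model. The neuron's firing probability is pushed toward a pre-specified target at every time step using only presynaptic traces and a local error.

```python
# Target-based local learning: fit a neuron's spike probability to a fixed
# target spike train (likelihood-gradient update under a sigmoid escape rate).
import numpy as np

rng = np.random.default_rng(4)
T, n_pre = 500, 30
w = np.zeros(n_pre)
pre = (rng.random((T, n_pre)) < 0.05).astype(float)   # presynaptic spikes
target = (rng.random(T) < 0.05).astype(float)         # target output spikes

tau, dt, lr = 20e-3, 1e-3, 0.05
trace = np.zeros(n_pre)
for t in range(T):
    trace += -dt / tau * trace + pre[t]        # filtered presynaptic input
    v = w @ trace                              # membrane potential (no reset)
    p = 1.0 / (1.0 + np.exp(-(v - 1.0)))       # escape-rate spike probability
    w += lr * (target[t] - p) * trace          # local likelihood-gradient step
```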
Affiliation(s)
- Paolo Muratore
- SISSA—International School for Advanced Studies, Trieste, Italy
61
She X, Dash S, Kim D, Mukhopadhyay S. A Heterogeneous Spiking Neural Network for Unsupervised Learning of Spatiotemporal Patterns. Front Neurosci 2021; 14:615756. PMID: 33519366; PMCID: PMC7841292; DOI: 10.3389/fnins.2020.615756.
Abstract
This paper introduces a heterogeneous spiking neural network (H-SNN) as a novel, feedforward SNN structure capable of learning complex spatiotemporal patterns with spike-timing-dependent plasticity (STDP) based unsupervised training. Within H-SNN, hierarchical spatial and temporal patterns are constructed with convolution connections and memory pathways containing spiking neurons with different dynamics. We demonstrate analytically the formation of long and short term memory in H-SNN and the distinct response functions of memory pathways. In simulation, the network is tested on visual input of moving objects to simultaneously predict object class and motion dynamics. Results show that H-SNN achieves prediction accuracy at a level similar to or higher than that of supervised deep neural networks (DNNs). Compared to SNNs trained with back-propagation, H-SNN effectively utilizes STDP to learn spatiotemporal patterns that generalize better to unknown motion and/or object classes encountered during inference. In addition, the improved performance is achieved with 6x fewer parameters than complex DNNs, showing that H-SNN is an efficient approach for applications with constrained computation resources.
Affiliation(s)
- Xueyuan She
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States
62
Márton CD, Schultz SR, Averbeck BB. Learning to select actions shapes recurrent dynamics in the corticostriatal system. Neural Netw 2020; 132:375-393. PMID: 32992244; PMCID: PMC7685243; DOI: 10.1016/j.neunet.2020.09.008.
Abstract
Learning to select appropriate actions based on their values is fundamental to adaptive behavior. This form of learning is supported by fronto-striatal systems. The dorsal-lateral prefrontal cortex (dlPFC) and the dorsal striatum (dSTR), which are strongly interconnected, are key nodes in this circuitry. Substantial experimental evidence, including neurophysiological recordings, has shown that neurons in these structures represent key aspects of learning. The computational mechanisms that shape the neurophysiological responses, however, are not clear. To examine this, we developed a recurrent neural network (RNN) model of the dlPFC-dSTR circuit and trained it on an oculomotor sequence learning task. We compared the activity generated by the model to activity recorded from monkey dlPFC and dSTR in the same task. This network consisted of a striatal component which encoded action values, and a prefrontal component which selected appropriate actions. After training, this system was able to autonomously represent and update action values and select actions, thus being able to closely approximate the representational structure in corticostriatal recordings. We found that learning to select the correct actions drove action-sequence representations further apart in activity space, both in the model and in the neural data. The model revealed that learning proceeds by increasing the distance between sequence-specific representations. This makes it more likely that the model will select the appropriate action sequence as learning develops. Our model thus supports the hypothesis that learning in networks drives the neural representations of actions further apart, increasing the probability that the network generates correct actions as learning proceeds. Altogether, this study advances our understanding of how neural circuit dynamics are involved in neural computation, revealing how dynamics in the corticostriatal system support task learning.
Affiliation(s)
- Christian D Márton
- Centre for Neurotechnology & Department of Bioengineering, Imperial College London, London, SW7 2AZ, UK; Laboratory of Neuropsychology, Section on Learning and Decision Making, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA.
- Simon R Schultz
- Centre for Neurotechnology & Department of Bioengineering, Imperial College London, London, SW7 2AZ, UK
- Bruno B Averbeck
- Laboratory of Neuropsychology, Section on Learning and Decision Making, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
63
Ingrosso A. Optimal learning with excitatory and inhibitory synapses. PLoS Comput Biol 2020; 16:e1008536. PMID: 33370266; PMCID: PMC7793294; DOI: 10.1371/journal.pcbi.1008536.
Abstract
Characterizing the relation between weight structure and input/output statistics is fundamental for understanding the computational capabilities of neural circuits. In this work, I study the problem of storing associations between analog signals in the presence of correlations, using methods from statistical mechanics. I characterize the typical learning performance in terms of the power spectrum of random input and output processes. I show that optimal synaptic weight configurations reach a capacity of 0.5 for any fraction of excitatory to inhibitory weights and have a peculiar synaptic distribution with a finite fraction of silent synapses. I further provide a link between typical learning performance and principal components analysis in single cases. These results may shed light on the synaptic profile of brain circuits, such as cerebellar structures, that are thought to engage in processing time-dependent signals and performing on-line prediction.
Affiliation(s)
- Alessandro Ingrosso
- Zuckerman Mind, Brain, Behavior Institute, Columbia University, New York, New York, United States of America
64
Rullán Buxó CE, Pillow JW. Poisson balanced spiking networks. PLoS Comput Biol 2020; 16:e1008261. PMID: 33216741; PMCID: PMC7717583; DOI: 10.1371/journal.pcbi.1008261.
Abstract
An important problem in computational neuroscience is to understand how networks of spiking neurons can carry out various computations underlying behavior. Balanced spiking networks (BSNs) provide a powerful framework for implementing arbitrary linear dynamical systems in networks of integrate-and-fire neurons. However, the classic BSN model requires near-instantaneous transmission of spikes between neurons, which is biologically implausible. Introducing realistic synaptic delays leads to a pathological regime known as "ping-ponging", in which different populations spike maximally in alternating time bins, causing network output to overshoot the target solution. Here we document this phenomenon and provide a novel solution: we show that a network can have realistic synaptic delays while maintaining accuracy and stability if neurons are endowed with conditionally Poisson firing. Formally, we propose two alternate formulations of Poisson balanced spiking networks: (1) a "local" framework, which replaces the hard integrate-and-fire spiking rule within each neuron by a "soft" threshold function, such that firing probability grows as a smooth nonlinear function of membrane potential; and (2) a "population" framework, which reformulates the BSN objective function in terms of expected spike counts over the entire population. We show that both approaches offer improved robustness, allowing for accurate implementation of network dynamics with realistic synaptic delays between neurons. Both Poisson frameworks preserve the coding accuracy and robustness to neuron loss of the original model and, moreover, produce positive correlations between similarly tuned neurons, a feature of real neural populations that is not found in the deterministic BSN. This work unifies balanced spiking networks with Poisson generalized linear models and suggests several promising avenues for future research.
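The "local" soft-threshold formulation mentioned above can be illustrated as follows; the exponential escape rate and its parameters are assumptions made only for illustration. The deterministic threshold crossing is replaced by a Poisson spike probability that grows smoothly with the membrane potential.

```python
# Hard integrate-and-fire rule versus a soft, conditionally Poisson rule.
import numpy as np

rng = np.random.default_rng(5)
dt, v_th, beta = 1e-3, 1.0, 10.0

def hard_spike(v):
    # spike if and only if the membrane potential reaches threshold
    return (v >= v_th).astype(float)

def soft_spike(v):
    # spike probability grows smoothly with V (exponential escape rate)
    rate = np.exp(beta * (v - v_th))
    return (rng.random(v.shape) < 1 - np.exp(-rate * dt)).astype(float)

v = rng.normal(0.9, 0.1, size=1000)             # example membrane potentials
print("hard rule spikes:", hard_spike(v).sum(),
      "| soft (Poisson) rule spikes:", soft_spike(v).sum())
```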
Affiliation(s)
- Jonathan W. Pillow
- Princeton Neuroscience Institute, Princeton University, Princeton, New Jersey, USA
65
Stöckel A, Eliasmith C. Passive Nonlinear Dendritic Interactions as a Computational Resource in Spiking Neural Networks. Neural Comput 2020; 33:96-128. PMID: 33080158; DOI: 10.1162/neco_a_01338.
Abstract
Nonlinear interactions in the dendritic tree play a key role in neural computation. Nevertheless, modeling frameworks aimed at the construction of large-scale, functional spiking neural networks, such as the Neural Engineering Framework, tend to assume a linear superposition of postsynaptic currents. In this letter, we present a series of extensions to the Neural Engineering Framework that facilitate the construction of networks incorporating Dale's principle and nonlinear conductance-based synapses. We apply these extensions to a two-compartment LIF neuron that can be seen as a simple model of passive dendritic computation. We show that it is possible to incorporate neuron models with input-dependent nonlinearities into the Neural Engineering Framework without compromising high-level function and that nonlinear postsynaptic currents can be systematically exploited to compute a wide variety of multivariate, band-limited functions, including the Euclidean norm, controlled shunting, and nonnegative multiplication. By avoiding an additional source of spike noise, the function approximation accuracy of a single layer of two-compartment LIF neurons is on a par with or even surpasses that of two-layer spiking neural networks up to a certain target function bandwidth.
Affiliation(s)
- Andreas Stöckel
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada
- Chris Eliasmith
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada
66
Akbarzadeh-Sherbaf K, Safari S, Vahabie AH. A digital hardware implementation of spiking neural networks with binary FORCE training. Neurocomputing 2020. DOI: 10.1016/j.neucom.2020.05.044.
67
Yang GR, Wang XJ. Artificial Neural Networks for Neuroscientists: A Primer. Neuron 2020; 107:1048-1070. PMID: 32970997; PMCID: PMC11576090; DOI: 10.1016/j.neuron.2020.09.005.
Abstract
Artificial neural networks (ANNs) are essential tools in machine learning that have drawn increasing attention in neuroscience. Besides offering powerful techniques for data analysis, ANNs provide a new approach for neuroscientists to build models for complex behaviors, heterogeneous neural activity, and circuit connectivity, as well as to explore optimization in neural systems, in ways that traditional models are not designed for. In this pedagogical Primer, we introduce ANNs and demonstrate how they have been fruitfully deployed to study neuroscientific questions. We first discuss basic concepts and methods of ANNs. Then, with a focus on bringing this mathematical framework closer to neurobiology, we detail how to customize the analysis, structure, and learning of ANNs to better address a wide range of challenges in brain research. To help readers garner hands-on experience, this Primer is accompanied with tutorial-style code in PyTorch and Jupyter Notebook, covering major topics.
Affiliation(s)
- Guangyu Robert Yang
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA.
- Xiao-Jing Wang
- Center for Neural Science, New York University, New York, NY, USA.
68
Vincent-Lamarre P, Calderini M, Thivierge JP. Learning Long Temporal Sequences in Spiking Networks by Multiplexing Neural Oscillations. Front Comput Neurosci 2020; 14:78. PMID: 33013342; PMCID: PMC7505196; DOI: 10.3389/fncom.2020.00078.
Abstract
Many cognitive and behavioral tasks-such as interval timing, spatial navigation, motor control, and speech-require the execution of precisely-timed sequences of neural activation that cannot be fully explained by a succession of external stimuli. We show how repeatable and reliable patterns of spatiotemporal activity can be generated in chaotic and noisy spiking recurrent neural networks. We propose a general solution for networks to autonomously produce rich patterns of activity by providing a multi-periodic oscillatory signal as input. We show that the model accurately learns a variety of tasks, including speech generation, motor control, and spatial navigation. Further, the model performs temporal rescaling of natural spoken words and exhibits sequential neural activity commonly found in experimental data involving temporal processing. In the context of spatial navigation, the model learns and replays compressed sequences of place cells and captures features of neural activity such as the emergence of ripples and theta phase precession. Together, our findings suggest that combining oscillatory neuronal inputs with different frequencies provides a key mechanism to generate precisely timed sequences of activity in recurrent circuits of the brain.
69
Baker C, Zhu V, Rosenbaum R. Nonlinear stimulus representations in neural circuits with approximate excitatory-inhibitory balance. PLoS Comput Biol 2020; 16:e1008192. PMID: 32946433; PMCID: PMC7526938; DOI: 10.1371/journal.pcbi.1008192.
Abstract
Balanced excitation and inhibition is widely observed in cortex. How does this balance shape neural computations and stimulus representations? This question is often studied using computational models of neuronal networks in a dynamically balanced state. But balanced network models predict a linear relationship between stimuli and population responses. So how do cortical circuits implement nonlinear representations and computations? We show that every balanced network architecture admits stimuli that break the balanced state and these breaks in balance push the network into a "semi-balanced state" characterized by excess inhibition to some neurons, but an absence of excess excitation. The semi-balanced state produces nonlinear stimulus representations and nonlinear computations, is unavoidable in networks driven by multiple stimuli, is consistent with cortical recordings, and has a direct mathematical relationship to artificial neural networks.
Affiliation(s)
- Cody Baker
- Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, IN, USA
- Vicky Zhu
- Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, IN, USA
- Robert Rosenbaum
- Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, IN, USA
- Interdisciplinary Center for Network Science and Applications, University of Notre Dame, Notre Dame, IN, USA
70
Bachmann C, Tetzlaff T, Duarte R, Morrison A. Firing rate homeostasis counteracts changes in stability of recurrent neural networks caused by synapse loss in Alzheimer's disease. PLoS Comput Biol 2020; 16:e1007790. PMID: 32841234; PMCID: PMC7505475; DOI: 10.1371/journal.pcbi.1007790.
Abstract
The impairment of cognitive function in Alzheimer's disease is clearly correlated to synapse loss. However, the mechanisms underlying this correlation are only poorly understood. Here, we investigate how the loss of excitatory synapses in sparsely connected random networks of spiking excitatory and inhibitory neurons alters their dynamical characteristics. Beyond the effects on the activity statistics, we find that the loss of excitatory synapses on excitatory neurons reduces the network's sensitivity to small perturbations. This decrease in sensitivity can be considered as an indication of a reduction of computational capacity. A full recovery of the network's dynamical characteristics and sensitivity can be achieved by firing rate homeostasis, here implemented by an up-scaling of the remaining excitatory-excitatory synapses. Mean-field analysis reveals that the stability of the linearised network dynamics is, in good approximation, uniquely determined by the firing rate, and thereby explains why firing rate homeostasis preserves not only the firing rate but also the network's sensitivity to small perturbations.
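The homeostatic compensation described above can be sketched as a simple rescaling; the network size, connection probability, and deletion fraction below are arbitrary illustrative values. After random deletion of E-E synapses, the surviving weights are scaled up so that the mean excitatory input is restored.

```python
# Firing rate homeostasis via up-scaling of surviving E-E synapses.
import numpy as np

rng = np.random.default_rng(6)
n_e = 200
W_ee = (rng.random((n_e, n_e)) < 0.1) * 0.5     # sparse E-E weight matrix
input_before = W_ee.sum(axis=1).mean()

loss = 0.3                                       # delete 30% of E-E synapses
mask = rng.random(W_ee.shape) > loss
W_ee_lesioned = W_ee * mask

# Homeostatic up-scaling of the remaining synapses restores total E input.
scale = input_before / W_ee_lesioned.sum(axis=1).mean()
W_ee_restored = W_ee_lesioned * scale

print("mean E input  intact:", round(input_before, 3),
      "| lesioned:", round(W_ee_lesioned.sum(axis=1).mean(), 3),
      "| rescaled:", round(W_ee_restored.sum(axis=1).mean(), 3))
```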
Affiliation(s)
- Claudia Bachmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Renato Duarte
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr-University Bochum, Bochum, Germany
71
A solution to the learning dilemma for recurrent networks of spiking neurons. Nat Commun 2020; 11:3625. PMID: 32681001; PMCID: PMC7367848; DOI: 10.1038/s41467-020-17236-y.
Abstract
Recurrently connected networks of spiking neurons underlie the astounding information processing capabilities of the brain. Yet in spite of extensive research, how they can learn through synaptic plasticity to carry out complex network computations remains unclear. We argue that two pieces of this puzzle were provided by experimental data from neuroscience. A mathematical result tells us how these pieces need to be combined to enable biologically plausible online network learning through gradient descent, in particular deep reinforcement learning. This learning method, called e-prop, approaches the performance of backpropagation through time (BPTT), the best-known method for training recurrent neural networks in machine learning. In addition, it suggests a method for powerful on-chip learning in energy-efficient spike-based hardware for artificial intelligence.
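A very reduced sketch of the e-prop factorization: each weight update is the product of a forward-computed eligibility trace and an online, broadcast learning signal, with no backpropagation through time. The simplified leaky-rate dynamics (instead of spiking neurons), the random feedback matrix B, the trace dynamics, and the learning rates below are illustrative assumptions.

```python
# e-prop-style update: (broadcast learning signal) x (eligibility trace).
import numpy as np

rng = np.random.default_rng(7)
T, n_in, n_rec, n_out = 200, 10, 40, 2
W_in = rng.normal(0, 0.3, (n_rec, n_in))
W_out = rng.normal(0, 0.3, (n_out, n_rec))
B = rng.normal(0, 0.3, (n_rec, n_out))      # fixed random feedback weights

alpha, lr = 0.9, 1e-3
h = np.zeros(n_rec)
elig = np.zeros((n_rec, n_in))              # one eligibility trace per synapse

x = rng.normal(size=(T, n_in))
y_target = rng.normal(size=(T, n_out))
for t in range(T):
    h = alpha * h + np.tanh(W_in @ x[t])    # leaky recurrent state (simplified)
    y = W_out @ h
    err = y - y_target[t]

    # Eligibility trace: filtered outer product of postsynaptic sensitivity
    # (here the tanh derivative) and the presynaptic input.
    post = 1 - np.tanh(W_in @ x[t]) ** 2
    elig = alpha * elig + np.outer(post, x[t])

    # Learning signal: output error broadcast to recurrent neurons through B.
    L = B @ err
    W_in -= lr * L[:, None] * elig          # update = signal x trace
    W_out -= lr * np.outer(err, h)
```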
72
Sorbaro M, Liu Q, Bortone M, Sheik S. Optimizing the Energy Consumption of Spiking Neural Networks for Neuromorphic Applications. Front Neurosci 2020; 14:662. PMID: 32694978; PMCID: PMC7339957; DOI: 10.3389/fnins.2020.00662.
Abstract
In the last few years, spiking neural networks (SNNs) have been demonstrated to perform on par with regular convolutional neural networks. Several works have proposed methods to convert a pre-trained CNN to a Spiking CNN without a significant sacrifice of performance. We demonstrate first that quantization-aware training of CNNs leads to better accuracy in SNNs. One of the benefits of converting CNNs to spiking CNNs is to leverage the sparse computation of SNNs and consequently perform equivalent computation at a lower energy consumption. Here we propose an optimization strategy to train efficient spiking networks with lower energy consumption, while maintaining similar accuracy levels. We demonstrate results on the MNIST-DVS and CIFAR-10 datasets.
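One way to read the optimization strategy summarized above is as an extra penalty term in the training loss. The sketch below shows how such a term would score two networks with equal task loss but different activity levels; the penalty weight, fan-out counts, and the synaptic-operation proxy are illustrative assumptions, not the paper's exact formulation.

```python
# Energy-aware loss: task loss plus a penalty on estimated synaptic operations.
import numpy as np

def energy_proxy(spike_counts, fan_out):
    """Estimated synaptic operations: spikes per neuron times its fan-out."""
    return float(np.dot(spike_counts, fan_out))

def total_loss(task_loss, spike_counts, fan_out, penalty=1e-4):
    return task_loss + penalty * energy_proxy(spike_counts, fan_out)

# Two candidate networks with the same task loss but different activity.
fan_out = np.array([100, 100, 10])
print(total_loss(0.20, np.array([500, 300, 50]), fan_out))   # busy network
print(total_loss(0.20, np.array([120,  80, 20]), fan_out))   # sparse network
```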
Affiliation(s)
- Martino Sorbaro
- SynSense (formerly aiCTX), Zurich, Switzerland
- Institute of Neuroinformatics, University of Zürich and ETH Zürich, Zurich, Switzerland
- Qian Liu
- SynSense (formerly aiCTX), Zurich, Switzerland
73
Systematic Integration of Structural and Functional Data into Multi-scale Models of Mouse Primary Visual Cortex. Neuron 2020; 106:388-403.e18. DOI: 10.1016/j.neuron.2020.01.040.
74
Hong C, Wei X, Wang J, Deng B, Yu H, Che Y. Training Spiking Neural Networks for Cognitive Tasks: A Versatile Framework Compatible With Various Temporal Codes. IEEE Trans Neural Netw Learn Syst 2020; 31:1285-1296. PMID: 31247574; DOI: 10.1109/tnnls.2019.2919662.
Abstract
Recent studies have demonstrated the effectiveness of supervised learning in spiking neural networks (SNNs). A trainable SNN provides a valuable tool not only for engineering applications but also for theoretical neuroscience studies. Here, we propose a modified SpikeProp learning algorithm, which ensures better learning stability for SNNs and provides more diverse network structures and coding schemes. Specifically, we designed a spike gradient threshold rule to solve the well-known gradient exploding problem in SNN training. In addition, regulation rules on firing rates and connection weights are proposed to control the network activity during training. Based on these rules, biologically realistic features such as lateral connections, complex synaptic dynamics, and sparse activities are included in the network to facilitate neural computation. We demonstrate the versatility of this framework by implementing three well-known temporal codes for different types of cognitive tasks, namely, handwritten digit recognition, spatial coordinate transformation, and motor sequence generation. Several important features observed in experimental studies, such as selective activity, excitatory-inhibitory balance, and weak pairwise correlation, emerged in the trained model. This agreement between experimental and computational results further confirmed the importance of these features in neural function. This work provides a new framework, in which various neural behaviors can be modeled and the underlying computational mechanisms can be studied.
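Two of the ingredients named above, the spike gradient threshold rule and the firing-rate regulation, can be sketched as follows. The clip level, target rate, and regulation coefficient are illustrative assumptions rather than the paper's values, and the clipping shown here is just one plausible reading of a "gradient threshold rule".

```python
# (1) Cap gradient magnitudes to avoid exploding gradients in SpikeProp-style
# training; (2) add a term that pushes each neuron toward a target firing rate.
import numpy as np

def threshold_gradient(grad, g_max=1.0):
    """Clip each gradient entry to the range [-g_max, g_max]."""
    return np.clip(grad, -g_max, g_max)

def rate_regulation(firing_rates, target_rate=10.0, coeff=1e-3):
    """Gradient of a quadratic penalty keeping rates near the target (Hz)."""
    return coeff * (firing_rates - target_rate)

grad = np.array([0.2, -35.0, 4.7])            # raw (possibly exploding) gradient
rates = np.array([2.0, 55.0, 11.0])           # measured firing rates (Hz)
print(threshold_gradient(grad))               # clipped values: 0.2, -1.0, 1.0
print(rate_regulation(rates))                 # extra term added to the update
```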
75
Rapp H, Nawrot MP, Stern M. Numerical Cognition Based on Precise Counting with a Single Spiking Neuron. iScience 2020; 23:100852. PMID: 32058964; PMCID: PMC7005464; DOI: 10.1016/j.isci.2020.100852.
Abstract
Insects are able to solve basic numerical cognition tasks. We show that estimation of numerosity can be realized and learned by a single spiking neuron with an appropriate synaptic plasticity rule. This model can be efficiently trained to detect arbitrary spatiotemporal spike patterns on a noisy and dynamic background with high precision and low variance. When put to the test in a task that requires counting of visual concepts in a static image, it required considerably fewer training epochs than a convolutional neural network to achieve equal performance. When mimicking a behavioral task in free-flying bees that requires numerical cognition, the model reaches a similar success rate in making correct decisions. We propose that using action potentials to represent basic numerical concepts with a single spiking neuron is beneficial for organisms with small brains and limited neuronal resources.
Affiliation(s)
- Hannes Rapp
- Computational Systems Neuroscience, Institute of Zoology, University of Cologne, Zülpicher Straße 47b, 50923 Cologne, Germany.
- Martin Paul Nawrot
- Computational Systems Neuroscience, Institute of Zoology, University of Cologne, Zülpicher Straße 47b, 50923 Cologne, Germany
- Merav Stern
- Department of Applied Mathematics, University of Washington, Lewis Hall 201, Box 353925, Seattle, WA 98195-3925, USA
76
Maes A, Barahona M, Clopath C. Learning spatiotemporal signals using a recurrent spiking network that discretizes time. PLoS Comput Biol 2020; 16:e1007606. PMID: 31961853; PMCID: PMC7028299; DOI: 10.1371/journal.pcbi.1007606.
Abstract
Learning to produce spatiotemporal sequences is a common task that the brain has to solve. The same neurons may be used to produce different sequential behaviours. The way the brain learns and encodes such tasks remains unknown as current computational models do not typically use realistic biologically-plausible learning. Here, we propose a model where a spiking recurrent network of excitatory and inhibitory spiking neurons drives a read-out layer: the dynamics of the driver recurrent network is trained to encode time which is then mapped through the read-out neurons to encode another dimension, such as space or a phase. Different spatiotemporal patterns can be learned and encoded through the synaptic weights to the read-out neurons that follow common Hebbian learning rules. We demonstrate that the model is able to learn spatiotemporal dynamics on time scales that are behaviourally relevant and we show that the learned sequences are robustly replayed during a regime of spontaneous activity.
Affiliation(s)
- Amadeus Maes
- Department of Bioengineering, Imperial College London, London, United Kingdom
- Mauricio Barahona
- Department of Mathematics, Imperial College London, London, United Kingdom
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, United Kingdom
77
Bauer FC, Muir DR, Indiveri G. Real-Time Ultra-Low Power ECG Anomaly Detection Using an Event-Driven Neuromorphic Processor. IEEE Trans Biomed Circuits Syst 2019; 13:1575-1582. PMID: 31715572; DOI: 10.1109/tbcas.2019.2953001.
Abstract
Accurate detection of pathological conditions in human subjects can be achieved through off-line analysis of recorded biological signals such as electrocardiograms (ECGs). However, human diagnosis is time-consuming and expensive, as it requires the time of medical professionals. This is especially inefficient when indicative patterns in the biological signals are infrequent. Moreover, patients with suspected pathologies are often monitored for extended periods, requiring the storage and examination of large amounts of non-pathological data, and entailing a difficult visual search task for diagnosing professionals. In this work we propose a compact and sub-mW low power neural processing system that can be used to perform on-line and real-time preliminary diagnosis of pathological conditions, to raise warnings for the existence of possible pathological conditions, or to trigger an off-line data recording system for further analysis by a medical professional. We apply the system to real-time classification of ECG data for distinguishing between healthy heartbeats and pathological rhythms. Multi-channel analog ECG traces are encoded as asynchronous streams of binary events and processed using a spiking recurrent neural network operated in a reservoir computing paradigm. An event-driven neuron output layer is then trained to recognize one of several pathologies. Finally, the filtered activity of this output layer is used to generate a binary trigger signal indicating the presence or absence of a pathological pattern. We validate the approach proposed using a Dynamic Neuromorphic Asynchronous Processor (DYNAP) chip, implemented using a standard 180 nm CMOS VLSI process, and present experimental results measured from the chip.
78
Manz P, Goedeke S, Memmesheimer RM. Dynamics and computation in mixed networks containing neurons that accelerate towards spiking. Phys Rev E 2019; 100:042404. PMID: 31770941; DOI: 10.1103/physreve.100.042404.
Abstract
Networks in the brain consist of different types of neurons. Here we investigate the influence of neuron diversity on the dynamics, phase space structure, and computational capabilities of spiking neural networks. We find that already a single neuron of a different type can qualitatively change the network dynamics and that mixed networks may combine the computational capabilities of networks with a single neuron type. We study inhibitory networks of concave leaky (LIF) and convex "antileaky" (XIF) integrate-and-fire neurons that generalize irregularly spiking nonchaotic LIF neuron networks. Endowed with simple conductance-based synapses for XIF neurons, our networks can generate a balanced state of irregular asynchronous spiking as well. We determine the voltage probability distributions and self-consistent firing rates assuming Poisson input with finite-size spike impacts. Further, we compute the full spectrum of Lyapunov exponents (LEs) and the covariant Lyapunov vectors (CLVs) specifying the corresponding perturbation directions. We find that there is approximately one positive LE for each XIF neuron. This indicates in particular that a single XIF neuron renders the network dynamics chaotic. A simple mean-field approach, which can be justified by properties of the CLVs, explains the finding. As an application, we propose a spike-based computing scheme where our networks serve as computational reservoirs and their different stability properties yield different computational capabilities.
Affiliation(s)
- Paul Manz
- Neural Network Dynamics and Computation, Institute for Genetics, University of Bonn, 53115 Bonn, Germany
- Sven Goedeke
- Neural Network Dynamics and Computation, Institute for Genetics, University of Bonn, 53115 Bonn, Germany
- Raoul-Martin Memmesheimer
- Neural Network Dynamics and Computation, Institute for Genetics, University of Bonn, 53115 Bonn, Germany
79
Kim R, Li Y, Sejnowski TJ. Simple framework for constructing functional spiking recurrent neural networks. Proc Natl Acad Sci U S A 2019; 116:22811-22820. PMID: 31636215; DOI: 10.1101/579706.
Abstract
Cortical microcircuits exhibit complex recurrent architectures that possess dynamically rich properties. The neurons that make up these microcircuits communicate mainly via discrete spikes, and it is not clear how spikes give rise to dynamics that can be used to perform computationally challenging tasks. In contrast, continuous models of rate-coding neurons can be trained to perform complex tasks. Here, we present a simple framework to construct biologically realistic spiking recurrent neural networks (RNNs) capable of learning a wide range of tasks. Our framework involves training a continuous-variable rate RNN with important biophysical constraints and transferring the learned dynamics and constraints to a spiking RNN in a one-to-one manner. The proposed framework introduces only 1 additional parameter to establish the equivalence between rate and spiking RNN models. We also study other model parameters related to the rate and spiking networks to optimize the one-to-one mapping. By establishing a close relationship between rate and spiking models, we demonstrate that spiking RNNs could be constructed to achieve similar performance as their counterpart continuous rate networks.
Affiliation(s)
- Robert Kim
- Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA 92037;
- Neurosciences Graduate Program, University of California San Diego, La Jolla, CA 92093
- Medical Scientist Training Program, University of California San Diego, La Jolla, CA 92093
| | - Yinghao Li
- Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA 92037
| | - Terrence J Sejnowski
- Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA 92037;
- Institute for Neural Computation, University of California San Diego, La Jolla, CA 92093
- Division of Biological Sciences, University of California San Diego, La Jolla, CA 92093
| |
Collapse
|
80
|
Kim R, Li Y, Sejnowski TJ. Simple framework for constructing functional spiking recurrent neural networks. Proc Natl Acad Sci U S A 2019; 116:22811-22820. [PMID: 31636215 PMCID: PMC6842655 DOI: 10.1073/pnas.1905926116] [Citation(s) in RCA: 31] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Cortical microcircuits exhibit complex recurrent architectures that possess dynamically rich properties. The neurons that make up these microcircuits communicate mainly via discrete spikes, and it is not clear how spikes give rise to dynamics that can be used to perform computationally challenging tasks. In contrast, continuous models of rate-coding neurons can be trained to perform complex tasks. Here, we present a simple framework to construct biologically realistic spiking recurrent neural networks (RNNs) capable of learning a wide range of tasks. Our framework involves training a continuous-variable rate RNN with important biophysical constraints and transferring the learned dynamics and constraints to a spiking RNN in a one-to-one manner. The proposed framework introduces only 1 additional parameter to establish the equivalence between rate and spiking RNN models. We also study other model parameters related to the rate and spiking networks to optimize the one-to-one mapping. By establishing a close relationship between rate and spiking models, we demonstrate that spiking RNNs could be constructed to achieve performance similar to that of their continuous rate counterparts.
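The transfer step at the heart of this framework, trained rate-RNN weights carried into a spiking network through a single scaling constant, can be sketched roughly as follows. The "trained" weights below are random stand-ins, the units are normalized, and the scaling value is arbitrary; the actual framework obtains the weights by training the continuous rate RNN with biophysical constraints first.

```python
import numpy as np

rng = np.random.default_rng(1)
N, dt, steps = 200, 1e-4, 10000            # neurons, time step (s), 1 s of simulation
tau_m, tau_s = 10e-3, 20e-3                # membrane and synaptic time constants (s)
W_rate = rng.normal(0, 1.0 / np.sqrt(N), (N, N))   # stand-in for trained rate-RNN weights
w_out = rng.normal(0, 1.0 / np.sqrt(N), N)         # stand-in for the trained readout
lam = 0.1                                  # the single rate-to-spike scaling parameter
W_spk = lam * W_rate                       # transferred recurrent weights

v = rng.random(N)                          # normalized membrane potential, threshold = 1
r = np.zeros(N)                            # synaptically filtered spike trains
I_ext = rng.uniform(0.9, 1.2, N)           # heterogeneous background drive
outputs = np.empty(steps)

for k in range(steps):
    v += dt / tau_m * (-v + I_ext + W_spk @ r)
    fired = v >= 1.0
    v[fired] = 0.0                         # reset after a spike
    r += dt * (-r / tau_s) + fired.astype(float)   # jump by 1 on a spike, then decay
    outputs[k] = w_out @ r                 # same linear readout as the rate network

print("rough mean population rate (Hz):", r.mean() / tau_s)
```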
Collapse
Affiliation(s)
- Robert Kim
- Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA 92037;
- Neurosciences Graduate Program, University of California San Diego, La Jolla, CA 92093
- Medical Scientist Training Program, University of California San Diego, La Jolla, CA 92093
| | - Yinghao Li
- Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA 92037
| | - Terrence J Sejnowski
- Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA 92037;
- Institute for Neural Computation, University of California San Diego, La Jolla, CA 92093
- Division of Biological Sciences, University of California San Diego, La Jolla, CA 92093
| |
Collapse
|
81
|
Maslennikov OV, Nekorkin VI. Collective dynamics of rate neurons for supervised learning in a reservoir computing system. CHAOS (WOODBURY, N.Y.) 2019; 29:103126. [PMID: 31675797 DOI: 10.1063/1.5119895] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/14/2019] [Accepted: 09/26/2019] [Indexed: 06/10/2023]
Abstract
In this paper, we study the collective dynamics of a network of rate neurons that constitutes the central element of a reservoir computing system. The main objective of the paper is to identify the dynamic behaviors inside the reservoir that underlie the performance of basic machine learning tasks, such as generating patterns with specified characteristics. We build a reservoir computing system that includes a reservoir (a network of interacting rate neurons) and an output element that generates a target signal. We study the individual activities of the interacting rate neurons while the task is being performed, and analyze the impact of the dynamic parameter (a time constant) on the quality of implementation.
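A stripped-down version of such a system, a rate-neuron reservoir with a trained linear readout, fits in a few lines. The gain, network size and the frequency-doubling task below are arbitrary illustrative choices, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, dt, tau = 300, 4000, 1.0, 10.0               # reservoir size, steps, time constant
W = rng.normal(0, 1.2 / np.sqrt(N), (N, N))        # recurrent weights near the chaotic edge
w_in = rng.normal(0, 1.0, N)                       # input weights

t = np.arange(T) * dt
u = np.sin(2 * np.pi * t / 200.0)                  # input signal
target = np.sin(4 * np.pi * t / 200.0)             # task: frequency-doubled output

x = np.zeros(N)
states = np.empty((T, N))
for k in range(T):
    x += dt / tau * (-x + np.tanh(W @ x + w_in * u[k]))
    states[k] = x

ntr = T // 2                                       # ridge-regress the readout on half the data
A, reg = states[:ntr], 1e-4
w_out = np.linalg.solve(A.T @ A + reg * np.eye(N), A.T @ target[:ntr])
pred = states[ntr:] @ w_out
print("test MSE:", np.mean((pred - target[ntr:]) ** 2))
```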
Collapse
Affiliation(s)
- Oleg V Maslennikov
- Institute of Applied Physics of the Russian Academy of Sciences, 46 Ulyanov Street, 603950 Nizhny Novgorod, Russia
| | - Vladimir I Nekorkin
- Institute of Applied Physics of the Russian Academy of Sciences, 46 Ulyanov Street, 603950 Nizhny Novgorod, Russia
| |
Collapse
|
82
|
|
83
|
Ponghiran W, Srinivasan G, Roy K. Reinforcement Learning With Low-Complexity Liquid State Machines. Front Neurosci 2019; 13:883. [PMID: 31507361 PMCID: PMC6718696 DOI: 10.3389/fnins.2019.00883] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2019] [Accepted: 08/07/2019] [Indexed: 11/13/2022] Open
Abstract
We propose reinforcement learning on simple networks consisting of random connections of spiking neurons (both recurrent and feed-forward) that can learn complex tasks with very few trainable parameters. Such sparse and randomly interconnected recurrent spiking networks exhibit highly non-linear dynamics that transform the inputs into rich high-dimensional representations based on the current and past context. The random input representations can be efficiently interpreted by an output (or readout) layer with trainable parameters. Systematic initialization of the random connections and training of the readout layer using Q-learning algorithm enable such small random spiking networks to learn optimally and achieve the same learning efficiency as humans on complex reinforcement learning (RL) tasks like Atari games. In fact, the sparse recurrent connections cause these networks to retain fading memory of past inputs, thereby enabling them to perform temporal integration across successive RL time-steps and learn with partial state inputs. The spike-based approach using small random recurrent networks provides a computationally efficient alternative to state-of-the-art deep reinforcement learning networks with several layers of trainable parameters.
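The division of labor described in this abstract, a fixed random reservoir providing features and a readout trained with Q-learning, reduces to a linear function-approximation TD update on the readout weights. The sketch below replaces the spiking reservoir and the Atari environment with random stand-ins, so only the update rule itself is meaningful.

```python
import numpy as np

rng = np.random.default_rng(3)
n_states, n_features, n_actions = 20, 100, 4
Phi = rng.random((n_states, n_features))           # stand-in for reservoir responses per state
W = np.zeros((n_actions, n_features))              # trainable readout: Q(s, a) = W[a] @ Phi[s]
eta, gamma, eps = 0.05, 0.95, 0.1                  # learning rate, discount, exploration

def q_values(s):
    return W @ Phi[s]

s = 0
for step in range(5000):
    # epsilon-greedy action selection on the readout's Q-values
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(q_values(s)))
    s_next = int(rng.integers(n_states))           # toy random transition (no real environment)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    td_error = reward + gamma * np.max(q_values(s_next)) - q_values(s)[a]
    W[a] += eta * td_error * Phi[s]                # only the readout weights are updated
    s = s_next
```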
Collapse
Affiliation(s)
| | | | - Kaushik Roy
- Department of ECE, Purdue University, West Lafayette, IN, United States
| |
Collapse
|
84
|
Training dynamically balanced excitatory-inhibitory networks. PLoS One 2019; 14:e0220547. [PMID: 31393909 PMCID: PMC6687153 DOI: 10.1371/journal.pone.0220547] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2019] [Accepted: 07/19/2019] [Indexed: 12/02/2022] Open
Abstract
The construction of biologically plausible models of neural circuits is crucial for understanding the computational properties of the nervous system. Constructing functional networks composed of separate excitatory and inhibitory neurons obeying Dale’s law presents a number of challenges. We show how a target-based approach, when combined with a fast online constrained optimization technique, is capable of building functional models of rate and spiking recurrent neural networks in which excitation and inhibition are balanced. Balanced networks can be trained to produce complicated temporal patterns and to solve input-output tasks while retaining biologically desirable features such as Dale’s law and response variability.
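One concrete ingredient of such constrained training, keeping excitatory and inhibitory identities fixed while the weights change, can be sketched as a sign projection applied after every optimization step. The gradient below is a random stand-in; the paper's fast online constrained optimizer is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
N, frac_exc = 100, 0.8
is_exc = rng.random(N) < frac_exc                  # excitatory / inhibitory identity per neuron

def project_dale(W, is_exc):
    """Clip outgoing weights so excitatory columns stay >= 0 and inhibitory ones <= 0."""
    W = W.copy()
    W[:, is_exc] = np.maximum(W[:, is_exc], 0.0)
    W[:, ~is_exc] = np.minimum(W[:, ~is_exc], 0.0)
    return W

W = project_dale(rng.normal(0, 1.0 / np.sqrt(N), (N, N)), is_exc)
for _ in range(100):
    grad = rng.normal(0, 0.01, (N, N))             # stand-in for a task gradient
    W = project_dale(W - 0.1 * grad, is_exc)       # gradient step followed by sign projection
assert (W[:, is_exc] >= 0).all() and (W[:, ~is_exc] <= 0).all()
```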
Collapse
|
85
|
Florescu D, Coca D. Learning with Precise Spike Times: A New Decoding Algorithm for Liquid State Machines. Neural Comput 2019; 31:1825-1852. [PMID: 31335291 DOI: 10.1162/neco_a_01218] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
There is extensive evidence that biological neural networks encode information in the precise timing of the spikes generated and transmitted by neurons, which offers several advantages over rate-based codes. Here we adopt a vector space formulation of spike train sequences and introduce a new liquid state machine (LSM) network architecture and a new forward orthogonal regression algorithm to learn an input-output signal mapping or to decode the brain activity. The proposed algorithm uses precise spike timing to select the presynaptic neurons relevant to each learning task. We show that using precise spike timing to train the LSM and selecting the readout presynaptic neurons leads to a significant increase in performance on binary classification tasks, in decoding neural activity from multielectrode array recordings, as well as in a speech recognition task, compared with what is achieved using the standard architecture and training methods.
Collapse
Affiliation(s)
- Dorian Florescu
- Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield, S1 3JD, U.K.
| | - Daniel Coca
- Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield, S1 3JD, U.K.
| |
Collapse
|
86
|
Nicola W, Clopath C. A diversity of interneurons and Hebbian plasticity facilitate rapid compressible learning in the hippocampus. Nat Neurosci 2019; 22:1168-1181. [PMID: 31235906 DOI: 10.1038/s41593-019-0415-2] [Citation(s) in RCA: 40] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2018] [Accepted: 04/23/2019] [Indexed: 11/09/2022]
Abstract
The hippocampus is able to rapidly learn incoming information, even if that information is only observed once. Furthermore, this information can be replayed in a compressed format in either forward or reverse modes during sharp wave-ripples (SPW-Rs). We leveraged state-of-the-art techniques in training recurrent spiking networks to demonstrate how primarily interneuron networks can achieve the following: (1) generate internal theta sequences to bind externally elicited spikes in the presence of inhibition from the medial septum; (2) compress learned spike sequences in the form of a SPW-R when septal inhibition is removed; (3) generate and refine high-frequency assemblies during SPW-R-mediated compression; and (4) regulate the inter-SPW interval timing between SPW-Rs in ripple clusters. From the fast timescale of neurons to the slow timescale of behaviors, interneuron networks serve as the scaffolding for one-shot learning by replaying, reversing, refining, and regulating spike sequences.
Collapse
Affiliation(s)
- Wilten Nicola
- Department of Bioengineering, Imperial College London, London, UK
| | - Claudia Clopath
- Department of Bioengineering, Imperial College London, London, UK.
| |
Collapse
|
87
|
Capone C, Pastorelli E, Golosio B, Paolucci PS. Sleep-like slow oscillations improve visual classification through synaptic homeostasis and memory association in a thalamo-cortical model. Sci Rep 2019; 9:8990. [PMID: 31222151 PMCID: PMC6586839 DOI: 10.1038/s41598-019-45525-0] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2019] [Accepted: 06/03/2019] [Indexed: 01/19/2023] Open
Abstract
The occurrence of sleep passed through the evolutionary sieve and is widespread in animal species. Sleep is known to be beneficial to cognitive and mnemonic tasks, while chronic sleep deprivation is detrimental. Despite the importance of the phenomenon, a complete understanding of its functions and underlying mechanisms is still lacking. In this paper, we show interesting effects of deep-sleep-like slow oscillation activity on a simplified thalamo-cortical model that is trained to encode, retrieve and classify images of handwritten digits. During slow oscillations, spike-timing-dependent plasticity (STDP) produces a differential homeostatic process. It is characterized by both a specific unsupervised enhancement of connections among groups of neurons associated to instances of the same class (digit) and a simultaneous down-regulation of stronger synapses created by the training. This hierarchical organization of post-sleep internal representations favours higher performance in retrieval and classification tasks. The mechanism is based on the interaction between top-down cortico-thalamic predictions and bottom-up thalamo-cortical projections during deep-sleep-like slow oscillations. Indeed, when learned patterns are replayed during sleep, cortico-thalamo-cortical connections favour the activation of other neurons coding for similar thalamic inputs, promoting their association. Such a mechanism hints at possible applications to artificial learning systems.
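The plasticity rule invoked here is standard pair-based STDP; a minimal exponential-window version looks like this, independent of the thalamo-cortical architecture in which the paper embeds it. Amplitudes and time constants are illustrative.

```python
import numpy as np

A_plus, A_minus = 0.010, 0.012       # potentiation / depression amplitudes (illustrative)
tau_plus, tau_minus = 20.0, 20.0     # STDP time constants (ms)

def stdp_dw(delta_t):
    """Weight change for spike-time differences delta_t = t_post - t_pre (ms)."""
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(delta_t >= 0,
                    A_plus * np.exp(-delta_t / tau_plus),    # pre before post: potentiation
                    -A_minus * np.exp(delta_t / tau_minus))  # post before pre: depression

print(stdp_dw([-30.0, -5.0, 5.0, 30.0]))
```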
Collapse
Affiliation(s)
| | - Elena Pastorelli
- INFN Sezione di Roma, Rome, Italy; PhD Program in Behavioural Neuroscience, "Sapienza" University of Rome, Rome, Italy
| | - Bruno Golosio
- Dipartimento di Fisica, Università di Cagliari, Cagliari, Italy; INFN Sezione di Cagliari, Cagliari, Italy
| | | |
Collapse
|
88
|
Muscinelli SP, Gerstner W, Schwalger T. How single neuron properties shape chaotic dynamics and signal transmission in random neural networks. PLoS Comput Biol 2019; 15:e1007122. [PMID: 31181063 PMCID: PMC6586367 DOI: 10.1371/journal.pcbi.1007122] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2019] [Revised: 06/20/2019] [Accepted: 05/22/2019] [Indexed: 02/07/2023] Open
Abstract
While most models of randomly connected neural networks assume single-neuron models with simple dynamics, neurons in the brain exhibit complex intrinsic dynamics over multiple timescales. We analyze how the dynamical properties of single neurons and recurrent connections interact to shape the effective dynamics in large randomly connected networks. A novel dynamical mean-field theory for strongly connected networks of multi-dimensional rate neurons shows that the power spectrum of the network activity in the chaotic phase emerges from a nonlinear sharpening of the frequency response function of single neurons. For the case of two-dimensional rate neurons with strong adaptation, we find that the network exhibits a state of "resonant chaos", characterized by robust, narrow-band stochastic oscillations. The coherence of stochastic oscillations is maximal at the onset of chaos and their correlation time scales with the adaptation timescale of single units. Surprisingly, the resonance frequency can be predicted from the properties of isolated neurons, even in the presence of heterogeneity in the adaptation parameters. In the presence of these internally-generated chaotic fluctuations, the transmission of weak, low-frequency signals is strongly enhanced by adaptation, whereas signal transmission is not influenced by adaptation in the non-chaotic regime. Our theoretical framework can be applied to other mechanisms at the level of single neurons, such as synaptic filtering, refractoriness or spike synchronization. These results advance our understanding of the interaction between the dynamics of single units and recurrent connectivity, which is a fundamental step toward the description of biologically realistic neural networks.
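A toy version of the setting analyzed here, a random network of two-dimensional rate units (activity plus a slow adaptation variable), can be simulated directly and its power spectrum read off. All parameter values are illustrative, and no claim is made about reproducing the "resonant chaos" regime quantitatively.

```python
import numpy as np

rng = np.random.default_rng(5)
N, g, beta = 300, 3.0, 1.0                  # network size, coupling gain, adaptation strength
tau, tau_a, dt, T = 1.0, 10.0, 0.05, 20000  # fast/slow time constants, step, number of steps
J = rng.normal(0, g / np.sqrt(N), (N, N))
x = rng.normal(0, 1, N)
a = np.zeros(N)
trace = np.empty(T)

for k in range(T):
    phi = np.tanh(x)
    x += dt / tau * (-x + J @ phi - a)      # rate dynamics with adaptation feedback
    a += dt / tau_a * (-a + beta * phi)     # slow adaptation variable
    trace[k] = x[0]

sig = trace[T // 2:] - trace[T // 2:].mean()           # discard the transient
spectrum = np.abs(np.fft.rfft(sig)) ** 2
freqs = np.fft.rfftfreq(sig.size, d=dt)
print("peak frequency of unit 0:", freqs[1:][np.argmax(spectrum[1:])])
```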
Collapse
Affiliation(s)
- Samuel P. Muscinelli
- School of Computer and Communication Sciences and School of Life Sciences, École polytechnique fédérale de Lausanne, Station 15, CH-1015 Lausanne EPFL, Switzerland
| | - Wulfram Gerstner
- School of Computer and Communication Sciences and School of Life Sciences, École polytechnique fédérale de Lausanne, Station 15, CH-1015 Lausanne EPFL, Switzerland
| | - Tilo Schwalger
- Bernstein Center for Computational Neuroscience, 10115 Berlin, Germany
- Institut für Mathematik, Technische Universität Berlin, 10623 Berlin, Germany
| |
Collapse
|
89
|
Spatiotemporal discrimination in attractor networks with short-term synaptic plasticity. J Comput Neurosci 2019; 46:279-297. [PMID: 31134433 PMCID: PMC6571095 DOI: 10.1007/s10827-019-00717-5] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2018] [Revised: 03/04/2019] [Accepted: 04/02/2019] [Indexed: 12/28/2022]
Abstract
We demonstrate that a randomly connected attractor network with dynamic synapses can discriminate between similar sequences containing multiple stimuli, suggesting that such networks provide a general basis for neural computations in the brain. The network contains units representing assemblies of pools of neurons, with preferentially strong recurrent excitatory connections rendering each unit bi-stable. Weak interactions between units lead to a multiplicity of attractor states, within which information can persist beyond stimulus offset. When a new stimulus arrives, the prior state of the network impacts the encoding of the incoming information, with short-term synaptic depression ensuring an itinerancy between sets of active units. We assess the ability of such a network to encode the identity of sequences of stimuli, so as to provide a template for sequence recall, or decisions based on accumulation of evidence. Across a range of parameters, such networks produce the primacy (better final encoding of the earliest stimuli) and recency (better final encoding of the latest stimuli) observed in human recall data and can retain the information needed to make a binary choice based on the total number of presentations of a specific stimulus. Similarities and differences in the final states of the network produced by different sequences lead to predictions of specific errors that could arise when an animal or human subject generalizes from training data, when the training data comprises a subset of the entire stimulus repertoire. We suggest that such networks can provide the general-purpose computational engines needed for us to solve many cognitive tasks.
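The itinerancy mechanism named in this abstract rests on short-term synaptic depression. A single depressing synapse in the Tsodyks-Markram style, driven by a Poisson spike train, already shows the effect; parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
dt, T = 1.0, 2000.0                  # ms
tau_rec, U = 300.0, 0.4              # resource recovery time constant, release fraction
rate_hz = 40.0                       # presynaptic Poisson rate
x = 1.0                              # available synaptic resources
efficacies = []

for _ in range(int(T / dt)):
    x += dt * (1.0 - x) / tau_rec            # resources recover toward 1
    if rng.random() < rate_hz * dt * 1e-3:   # presynaptic spike
        efficacies.append(U * x)             # effective strength of this spike
        x -= U * x                           # depression: part of the resources is consumed

print("first vs last efficacy:", efficacies[0], efficacies[-1])
```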
Collapse
|
90
|
Wärnberg E, Kumar A. Perturbing low dimensional activity manifolds in spiking neuronal networks. PLoS Comput Biol 2019; 15:e1007074. [PMID: 31150376 PMCID: PMC6586365 DOI: 10.1371/journal.pcbi.1007074] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2017] [Revised: 06/20/2019] [Accepted: 05/07/2019] [Indexed: 11/19/2022] Open
Abstract
Several recent studies have shown that neural activity in vivo tends to be constrained to a low-dimensional manifold. Such activity does not arise in simulated neural networks with homogeneous connectivity, and it has been suggested that it is indicative of some other connectivity pattern in neuronal networks. In particular, this connectivity pattern appears to constrain learning so that only neural activity patterns falling within the intrinsic manifold can be learned and elicited. Here, we use three different models of spiking neural networks (echo-state networks, the Neural Engineering Framework and Efficient Coding) to demonstrate how the intrinsic manifold can be made a direct consequence of the circuit connectivity. Using this relationship between the circuit connectivity and the intrinsic manifold, we show that learning of patterns outside the intrinsic manifold corresponds to much larger changes in synaptic weights than learning of patterns within the intrinsic manifold. Assuming that larger changes to synaptic weights require more extensive learning, this observation explains why learning is easier when it does not require the neural activity to leave its intrinsic manifold.
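The central construction, connectivity whose structure confines activity to a low-dimensional subspace, can be illustrated with a rank-2 recurrent matrix that embeds an unstable rotation: after mean subtraction, essentially all activity variance lies in two principal components. This is an illustration of the idea with arbitrary parameters, not any of the three models used in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
N, T, dt, tau = 400, 5000, 0.05, 1.0
m1 = rng.normal(0, 1, N); m1 /= np.linalg.norm(m1)
m2 = rng.normal(0, 1, N); m2 -= (m2 @ m1) * m1; m2 /= np.linalg.norm(m2)
# Rank-2 connectivity: expansion plus rotation restricted to span{m1, m2}
W = 1.5 * (np.outer(m1, m1) + np.outer(m2, m2)) + 1.0 * (np.outer(m2, m1) - np.outer(m1, m2))

x = rng.normal(0, 0.1, N)
X = np.empty((T, N))
for k in range(T):
    x += dt / tau * (-x + W @ np.tanh(x)) + 0.01 * np.sqrt(dt) * rng.normal(0, 1, N)
    X[k] = x

Xc = X[T // 2:] - X[T // 2:].mean(0)                 # discard transient, center the activity
eigvals = np.linalg.eigvalsh(Xc.T @ Xc / Xc.shape[0])[::-1]
print("variance captured by 2 PCs:", eigvals[:2].sum() / eigvals.sum())
```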
Collapse
Affiliation(s)
- Emil Wärnberg
- Dept. of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Dept. of Neuroscience, Karolinska Institutet, Stockholm, Sweden
| | - Arvind Kumar
- Dept. of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
| |
Collapse
|
91
|
Glaser JI, Benjamin AS, Farhoodi R, Kording KP. The roles of supervised machine learning in systems neuroscience. Prog Neurobiol 2019; 175:126-137. [PMID: 30738835 PMCID: PMC8454059 DOI: 10.1016/j.pneurobio.2019.01.008] [Citation(s) in RCA: 67] [Impact Index Per Article: 11.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2018] [Revised: 01/23/2019] [Accepted: 01/28/2019] [Indexed: 01/18/2023]
Abstract
Over the last several years, the use of machine learning (ML) in neuroscience has been rapidly increasing. Here, we review ML's contributions, both realized and potential, across several areas of systems neuroscience. We describe four primary roles of ML within neuroscience: (1) creating solutions to engineering problems, (2) identifying predictive variables, (3) setting benchmarks for simple models of the brain, and (4) serving itself as a model for the brain. The breadth and ease of its applicability suggests that machine learning should be in the toolbox of most systems neuroscientists.
Collapse
Affiliation(s)
- Joshua I Glaser
- Department of Bioengineering, University of Pennsylvania, United States.
| | - Ari S Benjamin
- Department of Bioengineering, University of Pennsylvania, United States.
| | - Roozbeh Farhoodi
- Department of Bioengineering, University of Pennsylvania, United States.
| | - Konrad P Kording
- Department of Bioengineering, University of Pennsylvania, United States; Department of Neuroscience, University of Pennsylvania, United States; Canadian Institute for Advanced Research, Canada.
| |
Collapse
|
92
|
Deep neural network models of sensory systems: windows onto the role of task constraints. Curr Opin Neurobiol 2019; 55:121-132. [DOI: 10.1016/j.conb.2019.02.003] [Citation(s) in RCA: 50] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2018] [Revised: 01/13/2019] [Accepted: 02/07/2019] [Indexed: 01/05/2023]
|
93
|
Beiran M, Ostojic S. Contrasting the effects of adaptation and synaptic filtering on the timescales of dynamics in recurrent networks. PLoS Comput Biol 2019; 15:e1006893. [PMID: 30897092 PMCID: PMC6445477 DOI: 10.1371/journal.pcbi.1006893] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2018] [Revised: 04/02/2019] [Accepted: 02/19/2019] [Indexed: 11/19/2022] Open
Abstract
Neural activity in awake behaving animals exhibits a vast range of timescales that can be several fold larger than the membrane time constant of individual neurons. Two types of mechanisms have been proposed to explain this conundrum. One possibility is that large timescales are generated by a network mechanism based on positive feedback, but this hypothesis requires fine-tuning of the strength or structure of the synaptic connections. A second possibility is that large timescales in the neural dynamics are inherited from large timescales of underlying biophysical processes, two prominent candidates being intrinsic adaptive ionic currents and synaptic transmission. How the timescales of adaptation or synaptic transmission influence the timescale of the network dynamics has however not been fully explored. To address this question, here we analyze large networks of randomly connected excitatory and inhibitory units with additional degrees of freedom that correspond to adaptation or synaptic filtering. We determine the fixed points of the systems, their stability to perturbations and the corresponding dynamical timescales. Furthermore, we apply dynamical mean field theory to study the temporal statistics of the activity in the fluctuating regime, and examine how the adaptation and synaptic timescales transfer from individual units to the whole population. Our overarching finding is that synaptic filtering and adaptation in single neurons have very different effects at the network level. Unexpectedly, the macroscopic network dynamics do not inherit the large timescale present in adaptive currents. In contrast, the timescales of network activity increase proportionally to the time constant of the synaptic filter. Altogether, our study demonstrates that the timescales of different biophysical processes have different effects on the network level, so that the slow processes within individual neurons do not necessarily induce slow activity in large recurrent neural networks.
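A crude numerical counterpart of the comparison made in this abstract is to simulate two random rate networks, one with a slow adaptation variable and one with a slow synaptic filter, and compare the autocorrelation time of their fluctuating activity. The single-unit autocorrelation used here is only a rough timescale probe, and the parameters are illustrative; the paper's dynamical mean-field analysis is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(8)
N, g, beta = 300, 4.0, 1.0
tau, tau_slow, dt, T = 1.0, 10.0, 0.05, 20000
J = rng.normal(0, g / np.sqrt(N), (N, N))

def simulate(mode):
    x = rng.normal(0, 1, N)
    a = np.zeros(N)                  # adaptation variable
    s = np.tanh(x)                   # synaptically filtered activity
    trace = np.empty(T)
    for k in range(T):
        phi = np.tanh(x)
        if mode == "adaptation":
            a += dt / tau_slow * (-a + beta * phi)
            x += dt / tau * (-x + J @ phi - a)
        else:                        # slow synaptic filtering of the recurrent input
            s += dt / tau_slow * (-s + phi)
            x += dt / tau * (-x + J @ s)
        trace[k] = x[0]
    return trace[T // 2:]

def ac_time(sig):
    sig = sig - sig.mean()
    ac = np.correlate(sig, sig, "full")[sig.size - 1:]
    ac = ac / ac[0]
    return dt * np.argmax(ac < np.exp(-1))      # first crossing of 1/e

for mode in ("adaptation", "synaptic filtering"):
    print(mode, "-> activity timescale ~", ac_time(simulate(mode)))
```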
Collapse
Affiliation(s)
- Manuel Beiran
- Group for Neural Theory, Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
| | - Srdjan Ostojic
- Group for Neural Theory, Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
| |
Collapse
|
94
|
Pang R, Fairhall AL. Fast and flexible sequence induction in spiking neural networks via rapid excitability changes. eLife 2019; 8:44324. [PMID: 31081753 PMCID: PMC6538377 DOI: 10.7554/elife.44324] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2018] [Accepted: 05/11/2019] [Indexed: 12/14/2022] Open
Abstract
Cognitive flexibility likely depends on modulation of the dynamics underlying how biological neural networks process information. While dynamics can be reshaped by gradually modifying connectivity, less is known about mechanisms operating on faster timescales. A compelling entry point to this problem is the observation that exploratory behaviors can rapidly cause selective hippocampal sequences to 'replay' during rest. Using a spiking network model, we asked whether simplified replay could arise from three biological components: fixed recurrent connectivity; stochastic 'gating' inputs; and rapid gating input scaling via long-term potentiation of intrinsic excitability (LTP-IE). Indeed, these enabled both forward and reverse replay of recent sensorimotor-evoked sequences, despite unchanged recurrent weights. LTP-IE 'tags' specific neurons with increased spiking probability under gating input, and ordering is reconstructed from recurrent connectivity. We further show how LTP-IE can implement temporary stimulus-response mappings. This elucidates a novel combination of mechanisms that might play a role in rapid cognitive flexibility.
Collapse
Affiliation(s)
- Rich Pang
- Neuroscience Graduate Program, University of Washington, Seattle, United States; Department of Physiology and Biophysics, University of Washington, Seattle, United States; Computational Neuroscience Center, University of Washington, Seattle, United States
| | - Adrienne L Fairhall
- Department of Physiology and Biophysics, University of Washington, Seattle, United States; Computational Neuroscience Center, University of Washington, Seattle, United States
| |
Collapse
|
95
|
Demin V, Nekhaev D. Recurrent Spiking Neural Network Learning Based on a Competitive Maximization of Neuronal Activity. Front Neuroinform 2018; 12:79. [PMID: 30498439 PMCID: PMC6250118 DOI: 10.3389/fninf.2018.00079] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2018] [Accepted: 10/18/2018] [Indexed: 12/21/2022] Open
Abstract
Spiking neural networks (SNNs) are believed to be highly computationally and energy efficient for specific neurochip hardware real-time solutions. However, learning algorithms for complex SNNs with recurrent connections that are comparable in efficiency with back-propagation techniques and capable of unsupervised training are still lacking. Here we suppose that each neuron in a biological neural network tends to maximize its activity in competition with other neurons, and put this principle at the basis of a new SNN learning algorithm. On this basis, a spiking network with learned feed-forward, reciprocal and intralayer inhibitory connections is applied to digit recognition on the MNIST database. We demonstrate that this SNN can be trained without a teacher, after a short supervised initialization of the weights by the same algorithm. We also show that neurons are grouped into families of hierarchical structures corresponding to different digit classes and their associations. This property is expected to be useful for reducing the number of layers in deep neural networks and for modeling the formation of various functional structures in a biological nervous system. A comparison of the learning properties of the suggested algorithm with those of the Sparse Distributed Representation approach shows similarity in coding but also some advantages of the former. The basic principle of the proposed algorithm is believed to be practically applicable to the construction of much more complicated and diverse task-solving SNNs. We refer to this new approach as "Family-Engaged Execution and Learning of Induced Neuron Groups," or FEELING.
Collapse
Affiliation(s)
- Vyacheslav Demin
- National Research Center "Kurchatov Institute", Moscow, Russia; Moscow Institute of Physics and Technology, Dolgoprudny, Russia
| | - Dmitry Nekhaev
- National Research Center "Kurchatov Institute", Moscow, Russia
| |
Collapse
|
96
|
Nandakumar S, Kulkarni SR, Babu AV, Rajendran B. Building Brain-Inspired Computing Systems: Examining the Role of Nanoscale Devices. IEEE NANOTECHNOLOGY MAGAZINE 2018. [DOI: 10.1109/mnano.2018.2845078] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
97
|
Kim CM, Chow CC. Learning recurrent dynamics in spiking networks. eLife 2018; 7:37124. [PMID: 30234488 PMCID: PMC6195349 DOI: 10.7554/elife.37124] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2018] [Accepted: 09/14/2018] [Indexed: 01/27/2023] Open
Abstract
Spiking activity of neurons engaged in learning and performing a task shows complex spatiotemporal dynamics. While the output of recurrent network models can learn to perform various tasks, the possible range of recurrent dynamics that emerge after learning remains unknown. Here we show that modifying the recurrent connectivity with a recursive least squares algorithm provides sufficient flexibility for synaptic and spiking rate dynamics of spiking networks to produce a wide range of spatiotemporal activity. We apply the training method to learn arbitrary firing patterns, stabilize irregular spiking activity in a network of excitatory and inhibitory neurons respecting Dale's law, and reproduce the heterogeneous spiking rate patterns of cortical neurons engaged in motor planning and movement. We identify sufficient conditions for successful learning, characterize two types of learning errors, and assess the network capacity. Our findings show that synaptically-coupled recurrent spiking networks possess a vast computational capability that can support the diverse activity patterns in the brain.
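The recursive least squares (RLS) update referred to here is the same workhorse used in FORCE-style training. A readout-level sketch on a rate network is given below; the paper applies RLS to the recurrent weights of spiking networks, which this simplified version does not do, and all parameter choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)
N, dt, tau, g, alpha = 300, 0.1, 1.0, 1.5, 1.0
J = rng.normal(0, g / np.sqrt(N), (N, N))    # fixed chaotic recurrent weights
w_fb = rng.uniform(-1, 1, N)                 # feedback of the readout into the network
w = np.zeros(N)                              # trainable readout weights
P = np.eye(N) / alpha                        # running inverse correlation matrix for RLS
x = rng.normal(0, 0.5, N)
r = np.tanh(x)
z = 0.0

t = np.arange(0, 600.0, dt)
target = np.sin(2 * np.pi * t / 50.0)        # pattern the output should reproduce
t_train = 500.0
zs = np.empty(t.size)

for k, f in enumerate(target):
    x += dt / tau * (-x + J @ r + w_fb * z)  # network driven by its own readout
    r = np.tanh(x)
    z = w @ r
    zs[k] = z
    if k % 2 == 0 and t[k] < t_train:        # RLS update during the training window
        Pr = P @ r
        c = 1.0 / (1.0 + r @ Pr)
        P -= c * np.outer(Pr, Pr)
        w -= c * (z - f) * Pr                # error-driven readout correction

test = t >= t_train
print("post-training MSE:", np.mean((zs[test] - target[test]) ** 2))
```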
Collapse
Affiliation(s)
- Christopher M Kim
- Laboratory of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, United States
| | - Carson C Chow
- Laboratory of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, United States
| |
Collapse
|
98
|
Nicola W, Clopath C. Supervised learning in spiking neural networks with FORCE training. Nat Commun 2017; 8:2208. [PMID: 29263361 PMCID: PMC5738356 DOI: 10.1038/s41467-017-01827-3] [Citation(s) in RCA: 99] [Impact Index Per Article: 12.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2016] [Accepted: 10/19/2017] [Indexed: 12/31/2022] Open
Abstract
Populations of neurons display an extraordinary diversity in the behaviors they affect and display. Machine learning techniques have recently emerged that allow us to create networks of model neurons that display behaviors of similar complexity. Here we demonstrate the direct applicability of one such technique, the FORCE method, to spiking neural networks. We train these networks to mimic dynamical systems, classify inputs, and store discrete sequences that correspond to the notes of a song. Finally, we use FORCE training to create two biologically motivated model circuits. One is inspired by the zebra finch and successfully reproduces songbird singing. The second network is motivated by the hippocampus and is trained to store and replay a movie scene. FORCE trained networks reproduce behaviors comparable in complexity to their inspired circuits and yield information not easily obtainable with other techniques, such as behavioral responses to pharmacological manipulations and spike timing statistics.
Collapse
Affiliation(s)
- Wilten Nicola
- Department of Bioengineering, Imperial College London, Royal School of Mines, London, SW7 2AZ, UK
| | - Claudia Clopath
- Department of Bioengineering, Imperial College London, Royal School of Mines, London, SW7 2AZ, UK.
| |
Collapse
|
99
|
Gilra A, Gerstner W. Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network. eLife 2017; 6:28295. [PMID: 29173280 PMCID: PMC5730383 DOI: 10.7554/elife.28295] [Citation(s) in RCA: 38] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2017] [Accepted: 11/22/2017] [Indexed: 12/21/2022] Open
Abstract
The brain needs to predict how the body reacts to motor commands, but how a network of spiking neurons can learn non-linear body dynamics using local, online and stable learning rules is unclear. Here, we present a supervised learning scheme for the feedforward and recurrent connections in a network of heterogeneous spiking neurons. The error in the output is fed back through fixed random connections with a negative gain, causing the network to follow the desired dynamics. The rule for Feedback-based Online Local Learning Of Weights (FOLLOW) is local in the sense that weight changes depend on the presynaptic activity and the error signal projected onto the postsynaptic neuron. We provide examples of learning linear, non-linear and chaotic dynamics, as well as the dynamics of a two-link arm. Under reasonable approximations, we show, using the Lyapunov method, that FOLLOW learning is uniformly stable, with the error going to zero asymptotically.
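The weight update at the heart of the FOLLOW scheme, output error projected onto each postsynaptic neuron through fixed random feedback weights and multiplied by filtered presynaptic activity, can be written down in isolation. The names, shapes and stand-in values below are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(10)
N, d, eta = 200, 2, 1e-3
E_fb = rng.normal(0, 1, (N, d))       # fixed random error-feedback (encoding) weights

def follow_update(W, presyn_filtered, error):
    """Local update: dW[i, j] = eta * (E_fb @ error)[i] * presyn_filtered[j]."""
    local_error = E_fb @ error        # error as seen by each postsynaptic neuron
    return W + eta * np.outer(local_error, presyn_filtered)

W = np.zeros((N, N))
r = rng.random(N)                     # filtered presynaptic activity (stand-in)
err = np.array([0.3, -0.1])           # low-dimensional output error (stand-in)
W = follow_update(W, r, err)
```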
Collapse
Affiliation(s)
- Aditya Gilra
- Brain-Mind Institute, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; School of Computer and Communication Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Wulfram Gerstner
- Brain-Mind Institute, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; School of Computer and Communication Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| |
Collapse
|