1
Yang S, Wang J, Deng B, Azghadi MR, Linares-Barranco B. Neuromorphic Context-Dependent Learning Framework With Fault-Tolerant Spike Routing. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:7126-7140. PMID: 34115596. DOI: 10.1109/tnnls.2021.3084250.
Abstract
Neuromorphic computing is a promising technology that realizes computation based on event-based spiking neural networks (SNNs). However, fault-tolerant on-chip learning remains a challenge in neuromorphic systems. This study presents the first scalable neuromorphic fault-tolerant context-dependent learning (FCL) hardware framework. We show how this system can learn associations between stimulation and response in two context-dependent learning tasks from experimental neuroscience, despite possible faults in the hardware nodes. Furthermore, we demonstrate how our novel fault-tolerant neuromorphic spike routing scheme can successfully route around multiple faulty nodes and can improve the maximum throughput of the neuromorphic network by 0.9%-16.1% compared with previous studies. By utilizing the real-time computational capabilities and multiple-fault tolerance of the proposed system, the neuronal mechanisms underlying the spiking activities of neuromorphic networks can be readily explored. In addition, the proposed system can be applied in real-time learning and decision-making applications, brain-machine integration, and the investigation of brain cognition during learning.
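The paper's routing scheme itself is not reproduced here; as a rough illustration of the underlying idea of steering spike packets around faulty nodes on a 2D mesh network-on-chip, the Python sketch below finds a shortest fault-free path with a breadth-first search. The mesh size, fault set, and function name are hypothetical.

```python
from collections import deque

def route_around_faults(src, dst, faulty, width, height):
    """Breadth-first search for a shortest fault-free path on a 2D mesh.
    Generic illustration only; not the paper's routing algorithm."""
    if src in faulty or dst in faulty:
        return None
    frontier = deque([src])
    came_from = {src: None}
    while frontier:
        x, y = frontier.popleft()
        if (x, y) == dst:
            path = [(x, y)]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]          # path from src to dst
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nbr = (nx, ny)
            if (0 <= nx < width and 0 <= ny < height
                    and nbr not in faulty and nbr not in came_from):
                came_from[nbr] = (x, y)
                frontier.append(nbr)
    return None  # destination unreachable with the given faults

print(route_around_faults((0, 0), (3, 3), {(1, 1), (2, 2)}, 4, 4))
```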
2
Wang J, Peng Z, Zhan Y, Li Y, Yu G, Chong KS, Wang C. A High-Accuracy and Energy-Efficient CORDIC Based Izhikevich Neuron With Error Suppression and Compensation. IEEE Transactions on Biomedical Circuits and Systems 2022; 16:807-821. PMID: 35834464. DOI: 10.1109/tbcas.2022.3191004.
Abstract
Bio-inspired neuron models are the key building blocks of brain-like neural networks for brain-science exploration and neuromorphic engineering applications. Efficient hardware design of bio-inspired neuron models is one of the main challenges in implementing brain-like neural networks, since model accuracy, energy consumption, and hardware cost must be balanced. This paper proposes a high-accuracy and energy-efficient Fast-Convergence COordinate Rotation DIgital Computer (FC-CORDIC) based Izhikevich neuron design. To ensure model accuracy, an error propagation model of the Izhikevich neuron is presented for systematic error analysis and effective error reduction. A Parameter-Tuning Error Compensation (PTEC) method and a Bitwidth-Extension Error Suppression (BEES) method are proposed to effectively reduce the error of the Izhikevich neuron design. In addition, by utilizing FC-CORDIC instead of conventional CORDIC for the square calculation in the Izhikevich model, redundant CORDIC iterations are removed; therefore, both the accumulated error and the required computation are effectively reduced, which significantly improves accuracy and energy efficiency. An optimized fixed-point design of FC-CORDIC is also proposed to save hardware overhead while preserving accuracy. FPGA implementation results show that the proposed Izhikevich neuron design achieves high accuracy and energy efficiency with acceptable hardware overhead compared with state-of-the-art designs.
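For readers unfamiliar with the underlying neuron model, the sketch below integrates the standard Izhikevich equations with the Euler method in plain floating point; the paper's contribution is to compute the quadratic term with a fixed-point FC-CORDIC unit, which is not reproduced here. The regular-spiking parameter values are the common defaults, and the function name and step size are illustrative.

```python
def izhikevich(I, t_max=1000.0, dt=0.25, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Euler-integrated Izhikevich neuron (regular-spiking parameters).
    dv/dt = 0.04*v^2 + 5*v + 140 - u + I,  du/dt = a*(b*v - u),
    with reset v <- c, u <- u + d when v crosses 30 mV."""
    v, u = c, b * c
    spike_times = []
    for k in range(int(t_max / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                 # spike: record time and reset
            spike_times.append(k * dt)
            v, u = c, u + d
    return spike_times

print(len(izhikevich(I=10.0)), "spikes in 1 s of simulated time")
```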
3
Online subspace learning and imputation by Tensor-Ring decomposition. Neural Netw 2022; 153:314-324. PMID: 35772252. DOI: 10.1016/j.neunet.2022.05.023.
Abstract
This paper considers the completion of partially observed high-order streaming data, which is cast as an online low-rank tensor completion problem. Although online low-rank tensor completion has drawn much attention in recent years, most existing methods are based on traditional decompositions such as CP and Tucker. Inspired by the advantages of Tensor Ring decomposition over traditional decompositions in expressing high-order data and estimating missing values, this paper proposes two online subspace learning and imputation methods based on Tensor Ring decomposition. Specifically, we first propose an online Tensor Ring subspace learning and imputation model by formulating an exponentially weighted least squares problem with Frobenius norm regularization of the TR-cores. Then, two commonly used optimization algorithms, i.e., alternating recursive least squares and stochastic gradient algorithms, are developed to solve the proposed model. Numerical experiments show that the proposed methods exploit the time-varying subspace more effectively than conventional Tensor Ring completion methods. Moreover, the proposed methods obtain better results than state-of-the-art online methods in streaming data completion under varying missing ratios and noise levels.
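As a reminder of the notation, a Tensor Ring (TR) decomposition represents each entry of an N-way tensor as the trace of a product of core slices. The NumPy sketch below evaluates a single entry from a set of TR-cores; the core shapes and ranks are toy values, and the paper's online learning and imputation algorithms are not reproduced.

```python
import numpy as np

def tr_entry(cores, index):
    """Reconstruct one entry of a tensor from its TR cores.
    Each core G_k has shape (r_k, n_k, r_{k+1}) with r_{N+1} = r_1, and
    X[i_1, ..., i_N] = trace(G_1[:, i_1, :] @ ... @ G_N[:, i_N, :])."""
    prod = np.eye(cores[0].shape[0])
    for core, i in zip(cores, index):
        prod = prod @ core[:, i, :]
    return np.trace(prod)

# Toy example: a 4 x 5 x 6 tensor with TR ranks (2, 3, 2)
rng = np.random.default_rng(0)
cores = [rng.standard_normal(s) for s in [(2, 4, 3), (3, 5, 2), (2, 6, 2)]]
print(tr_entry(cores, (1, 2, 3)))
```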
4
Gandolfi D, Puglisi FM, Boiani GM, Pagnoni G, Friston KJ, D'Angelo EU, Mapelli J. Emergence of associative learning in a neuromorphic inference network. J Neural Eng 2022; 19. PMID: 35508120. DOI: 10.1088/1741-2552/ac6ca7.
Abstract
OBJECTIVE In the theoretical framework of predictive coding and active inference, the brain can be viewed as instantiating a rich generative model of the world that predicts incoming sensory data while continuously updating its parameters via minimization of prediction errors. While this theory has been successfully applied to cognitive processes - by modelling the activity of functional neural networks at a mesoscopic scale - the validity of the approach when modelling neurons as an ensemble of inferring agents, in a biologically plausible architecture, remained to be explored. APPROACH We modelled a simplified cerebellar circuit with individual neurons acting as Bayesian agents to simulate the classical delayed eyeblink conditioning protocol. Neurons and synapses adjusted their activity to minimize their prediction error, which was used as the network cost function. This cerebellar network was then implemented in hardware by replicating digital neuronal elements via a low-power microcontroller. MAIN RESULTS Persistent changes of synaptic strength - mirroring neurophysiological observations - emerged via local (neurocentric) prediction error minimization, leading to the expression of associative learning. The same paradigm was effectively emulated in low-power hardware, showing remarkably efficient performance compared to conventional neuromorphic architectures. SIGNIFICANCE These findings show that: i) an ensemble of free-energy-minimizing neurons - organized in a biologically plausible architecture - can recapitulate functional self-organization observed in nature, such as associative plasticity, and ii) a neuromorphic network of inference units can learn unsupervised tasks without embedding predefined learning rules in the circuit, thus providing a potential avenue to a novel form of brain-inspired artificial intelligence.
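The cerebellar model and its free-energy formulation are beyond a short snippet, but the flavor of local prediction-error minimization can be conveyed with a delta-rule sketch of associative (eyeblink-style) conditioning, shown below. The learning rate, initial weight, and variable names are hypothetical and do not correspond to the paper's implementation.

```python
# Delta-rule sketch of prediction-error-driven associative conditioning.
# Generic illustration of "learning by minimizing prediction error",
# not the paper's cerebellar free-energy model.
w = 0.0        # association strength between tone (CS) and air puff (US)
lr = 0.05      # learning rate (hypothetical)
for trial in range(100):
    cs = 1.0                  # conditioned stimulus present
    us = 1.0                  # unconditioned stimulus follows it
    prediction = w * cs       # what the agent expects
    error = us - prediction   # prediction error to be minimized
    w += lr * error * cs      # local, error-driven weight update
    if trial % 25 == 0:
        print(f"trial {trial:3d}  prediction error = {error:.3f}")
print(f"learned association strength: {w:.3f}")
```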
Affiliation(s)
- Daniela Gandolfi
- Department of Biomedical, Metabolic and Neural Sciences, University of Modena and Reggio Emilia, Via Campi 287, 41121 Modena, Italy
- Francesco Maria Puglisi
- DIEF, Università degli Studi di Modena e Reggio Emilia, Via P. Vivarelli 10/1, 41121 Modena, Italy
- Giulia Maria Boiani
- Department of Biomedical, Metabolic and Neural Sciences, University of Modena and Reggio Emilia, Via Campi 287, 41121 Modena, Italy
- Giuseppe Pagnoni
- Department of Biomedical, Metabolic and Neural Sciences, University of Modena and Reggio Emilia, Via Campi 287, 41121 Modena, Italy
- Karl J Friston
- Institute of Neurology, University College London, 23 Queen Square, London WC1N 3BG, United Kingdom
- Egidio Ugo D'Angelo
- Department of Brain and Behavioral Sciences, University of Pavia, Via Forlanini 6, 27100 Pavia, Italy
- Jonathan Mapelli
- Department of Biomedical, Metabolic and Neural Sciences, University of Modena and Reggio Emilia, Via Campi 287, 41125 Modena, Italy
5
Deng B, Fan Y, Wang J, Yang S. Reconstruction of a Fully Paralleled Auditory Spiking Neural Network and FPGA Implementation. IEEE Transactions on Biomedical Circuits and Systems 2021; 15:1320-1331. PMID: 34699367. DOI: 10.1109/tbcas.2021.3122549.
Abstract
This paper presents a field-programmable gate array (FPGA) implementation of an auditory system, which is biologically inspired and has the advantages of robustness and anti-noise ability. We propose an FPGA implementation of an eleven-channel hierarchical spiking neural network (SNN) model, which has a sparsely connected architecture with low power consumption. Following the mechanism of the auditory pathway in the human brain, spike trains generated by the cochlea are analyzed in the hierarchical SNN, and a specific word can be identified by a Bayesian classifier. A modified leaky integrate-and-fire (LIF) model is used to realize the hierarchical SNN, which achieves both high efficiency and low hardware consumption. The hierarchical SNN implemented on FPGA enables the auditory system to operate at high speed and to be interfaced with external machines and sensors. A set of speech samples from different speakers, mixed with noise, is used as input to test the performance of our system, and the experimental results show that the system can classify words in a biologically plausible way in the presence of noise. The method is flexible, and the system can be scaled as desired. These results confirm that the proposed biologically plausible auditory system provides a better method for on-chip speech recognition. Compared to the state of the art, our auditory system achieves a higher speed, with a maximum frequency of 65.03 MHz, and a lower energy consumption of 276.83 μJ per operation. It can be applied in the fields of brain-computer interfaces and intelligent robots.
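The modified LIF model used in the paper is not specified in the abstract; the sketch below shows only the basic leaky integrate-and-fire dynamics under Euler integration, with illustrative parameter values, to indicate the kind of per-channel neuron update the hierarchical SNN performs.

```python
def lif_spike_times(input_current, dt=1.0, tau_m=20.0, v_rest=0.0,
                    v_reset=0.0, v_thresh=1.0, r_m=1.0):
    """Basic leaky integrate-and-fire neuron under Euler integration.
    The paper's modified LIF model and its fixed-point FPGA mapping are
    not reproduced; parameter values here are illustrative."""
    v = v_rest
    spikes = []
    for t, i_t in enumerate(input_current):
        v += dt * (-(v - v_rest) + r_m * i_t) / tau_m   # leak + input drive
        if v >= v_thresh:                               # threshold crossing
            spikes.append(t)
            v = v_reset                                 # reset after spike
    return spikes

print(lif_spike_times([1.5] * 100))   # constant drive above threshold
```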
6
Akbarzadeh-Sherbaf K, Safari S, Vahabie AH. A digital hardware implementation of spiking neural networks with binary FORCE training. Neurocomputing 2020. DOI: 10.1016/j.neucom.2020.05.044.
7
Scalable Implementation of Hippocampal Network on Digital Neuromorphic System towards Brain-Inspired Intelligence. Applied Sciences (Basel) 2020. DOI: 10.3390/app10082857.
Abstract
In this paper, a scalable digital hippocampal spiking neural network (HSNN) is proposed to simulate the mammalian cognitive system and to reproduce the neuromodulatory dynamics that play a critical role in cognitive processes of the brain, such as memory and learning. Real-time computation of a large-scale spiking neural network is achieved through a scalable on-chip network and a parallel topology. Comparison with recent neuron-implementation studies shows that implementing the hippocampal neuron model with the coordinate rotation (CORDIC) algorithm significantly reduces hardware resource cost. In addition, judicious use of on-chip network technology further improves system performance and significantly improves network scalability on a single field-programmable gate array chip. Neuromodulation dynamics are incorporated into the proposed system, allowing it to replicate more biologically relevant dynamics. Analysis from both the biological and hardware-integration perspectives shows that the proposed system can reproduce the biological characteristics of the hippocampal network and may be applied to brain-inspired intelligence. This study is expected to inform future research on the digital neuromorphic design of spiking neural networks and on the dynamics of the hippocampal network.
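The coordinate rotation (CORDIC) approach mentioned above evaluates trigonometric and other elementary functions with only shift-and-add style operations, which is what makes it attractive for multiplier-free FPGA neuron implementations. The Python sketch below shows the textbook rotation-mode CORDIC computing sine and cosine; it is a generic illustration, not the paper's hippocampal neuron datapath, and the iteration count is arbitrary.

```python
import math

def cordic_sin_cos(theta, iterations=16):
    """Rotation-mode CORDIC: returns (cos(theta), sin(theta)) for theta in
    [-pi/2, pi/2]. Textbook version in floating point; a hardware design
    would use fixed-point shifts instead of multiplications by 2**-i."""
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    gain = 1.0
    for i in range(iterations):
        gain *= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0.0 else -1.0          # rotation direction
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x / gain, y / gain                  # undo the CORDIC gain

print(cordic_sin_cos(math.pi / 6))  # expect roughly (0.866, 0.500)
```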
8
Gunasekaran H, Spigler G, Mazzoni A, Cataldo E, Oddo CM. Convergence of regular spiking and intrinsically bursting Izhikevich neuron models as a function of discretization time with Euler method. Neurocomputing 2019. DOI: 10.1016/j.neucom.2019.03.021.
9
Jokar E, Abolfathi H, Ahmadi A. A Novel Nonlinear Function Evaluation Approach for Efficient FPGA Mapping of Neuron and Synaptic Plasticity Models. IEEE Transactions on Biomedical Circuits and Systems 2019; 13:454-469. PMID: 30802873. DOI: 10.1109/tbcas.2019.2900943.
Abstract
Efficient hardware realization of spiking neural networks is of great significance in a wide variety of applications, such as high-speed modeling and simulation of large-scale neural systems. Exploiting the key features of FPGAs, this paper presents a novel nonlinear function evaluation approach, based on an effective uniform piecewise linear segmentation method, to efficiently approximate the nonlinear terms of neuron and synaptic plasticity models targeting low-cost digital implementation. The proposed approach takes advantage of a high-speed and extremely simple segment address encoder unit regardless of the number of segments, and therefore is capable of accurately approximating a given nonlinear function with a large number of straight lines. In addition, this approach can be efficiently mapped into FPGAs with minimal hardware cost. To investigate the application of the proposed nonlinear function evaluation approach in low-cost neuromorphic circuit design, it is applied to four case studies: the Izhikevich and FitzHugh-Nagumo neuron models as 2-dimensional case studies, the Hindmarsh-Rose neuron model as a relatively complex 3-dimensional model containing two nonlinear terms, and a calcium-based synaptic plasticity model capable of producing various STDP curves. Simulation and FPGA synthesis results demonstrate that the hardware proposed for each case study is capable of producing various responses remarkably similar to the original model and significantly outperforms the previously published counterparts in terms of resource utilization and maximum clock frequency.
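As a rough illustration of uniform piecewise-linear function evaluation (not the paper's segment address encoder), the sketch below precomputes a slope and intercept per segment and selects the segment by simple arithmetic; in hardware the segment index would be taken directly from the upper bits of the fixed-point input. The target function, interval, and segment count are arbitrary choices.

```python
import numpy as np

def build_pwl_table(f, lo, hi, n_segments):
    """Precompute slope/intercept pairs for a uniform piecewise-linear
    approximation of f on [lo, hi]. Generic sketch; the paper's encoder
    and fixed-point details are not reproduced."""
    edges = np.linspace(lo, hi, n_segments + 1)
    slopes = (f(edges[1:]) - f(edges[:-1])) / (edges[1:] - edges[:-1])
    intercepts = f(edges[:-1]) - slopes * edges[:-1]
    return edges, slopes, intercepts

def pwl_eval(x, edges, slopes, intercepts):
    # In hardware the segment index comes from the upper input bits;
    # here plain arithmetic plays that role.
    seg = int((x - edges[0]) / (edges[1] - edges[0]))
    seg = min(max(seg, 0), len(slopes) - 1)
    return slopes[seg] * x + intercepts[seg]

edges, a, b = build_pwl_table(np.tanh, -4.0, 4.0, 64)
print(pwl_eval(1.0, edges, a, b), float(np.tanh(1.0)))
```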
10
Akbarzadeh-Sherbaf K, Abdoli B, Safari S, Vahabie AH. A Scalable FPGA Architecture for Randomly Connected Networks of Hodgkin-Huxley Neurons. Front Neurosci 2018; 12:698. PMID: 30356803. PMCID: PMC6190648. DOI: 10.3389/fnins.2018.00698.
Abstract
Human intelligence relies on the vast number of neurons and their interconnections that form a parallel computing engine. If we aim to design a brain-like machine, we will have no choice but to employ many spiking neurons, each with a large number of synapses. Such a neuronal network is not only compute-intensive but also memory-intensive. The performance and configurability of modern FPGAs make them suitable hardware solutions to deal with these challenges. This paper presents a scalable architecture to simulate a randomly connected network of Hodgkin-Huxley neurons. To demonstrate that our architecture eliminates the need to use a high-end device, we employ the XC7A200T, a member of the mid-range Xilinx Artix®-7 family, as our target device. A set of techniques is proposed to reduce the memory usage and computational requirements. Here we introduce a multi-core architecture in which each core can update the states of a group of neurons stored in its corresponding memory bank. The proposed system uses a novel method to generate the connectivity vectors on the fly instead of storing them in a huge memory. This technique is based on a cyclic permutation of a single prestored connectivity vector per core. Moreover, to further reduce both the resource usage and the computational latency, a novel approximate two-level counter is introduced to count the number of spikes arriving at a synapse in the sparse network. The first level is a low-cost saturating counter implemented in FPGA lookup tables that reduces the number of inputs to the second-level exact adder tree, and therefore results in a much lower hardware cost for the counter circuit. These techniques, along with pipelining, make it possible to have a high-performance, scalable architecture that can be configured for either real-time simulation of up to 5120 neurons or large-scale simulation of up to 65536 neurons with reasonable execution time on a cost-optimized FPGA.
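The on-the-fly connectivity generation described above can be illustrated with a few lines of NumPy: a single random connectivity vector is stored, and the row for each target neuron is obtained by cyclically rotating it. The network size, connection probability, and seed below are hypothetical, and the sketch omits the per-core memory banking and the approximate spike counter.

```python
import numpy as np

# Sketch of cyclic-permutation connectivity: instead of storing a full
# N x N adjacency matrix, keep one prestored random row and derive the
# row for neuron j by rotating it. Illustrative parameters only.
rng = np.random.default_rng(42)
n_neurons = 8
p_connect = 0.25
base_vector = rng.random(n_neurons) < p_connect   # single prestored row

def connectivity_row(j):
    """Row j of the implicit adjacency matrix via cyclic rotation."""
    return np.roll(base_vector, j)

for j in range(n_neurons):
    print(j, connectivity_row(j).astype(int))
```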
Affiliation(s)
- Kaveh Akbarzadeh-Sherbaf
- High Performance Embedded Architecture Lab., School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
- Behrooz Abdoli
- High Performance Embedded Architecture Lab., School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
- Saeed Safari
- High Performance Embedded Architecture Lab., School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
- Abdol-Hossein Vahabie
- School of Cognitive Sciences, Institute for Research in Fundamental Sciences, Tehran, Iran