1. Karamimanesh M, Abiri E, Shahsavari M, Hassanli K, van Schaik A, Eshraghian J. Spiking neural networks on FPGA: A survey of methodologies and recent advancements. Neural Netw 2025;186:107256. PMID: 39965527. DOI: 10.1016/j.neunet.2025.107256.
Abstract
By mimicking how the biological brain processes information, spiking neural networks (SNNs) consume significantly less power than conventional systems. These networks have therefore attracted growing attention and extensive research in recent years, with various structures proposed to achieve low power consumption, high speed, and improved recognition ability. However, researchers are still in the early stages of developing more efficient networks that more closely resemble the biological brain. This development requires suitable hardware, and the field-programmable gate array (FPGA) is a strong candidate compared with existing hardware such as the central processing unit (CPU) and graphics processing unit (GPU). With brain-like parallel processing, lower latency and power consumption, and higher throughput, FPGAs are well suited to supporting the development of spiking neural networks. This review aims to ease researchers' path in advancing the field by collecting and examining recent work and the challenges that hinder the implementation of these networks on FPGA.
Affiliations
- Mehrzad Karamimanesh: Department of Electrical Engineering, Shiraz University of Technology, Shiraz, Iran.
- Ebrahim Abiri: Department of Electrical Engineering, Shiraz University of Technology, Shiraz, Iran.
- Mahyar Shahsavari: AI Department, Donders Institute for Brain Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands.
- Kourosh Hassanli: Department of Electrical Engineering, Shiraz University of Technology, Shiraz, Iran.
- André van Schaik: The MARCS Institute, International Centre for Neuromorphic Systems, Western Sydney University, Australia.
- Jason Eshraghian: Department of Electrical Engineering, University of California Santa Cruz, Santa Cruz, CA, USA.
2. Magnani C, Moore LE. Power spectral analysis of voltage-gated channels in neurons. Front Neuroinform 2025;18:1472499. PMID: 39882027. PMCID: PMC11774927. DOI: 10.3389/fninf.2024.1472499.
Abstract
This article develops a fundamental insight into the behavior of neuronal membranes, focusing on their responses to stimuli measured with power spectra in the frequency domain. It explores the use of linear and nonlinear (quadratic sinusoidal analysis) approaches to characterize neuronal function. It further delves into the random theory of internal noise of biological neurons and the use of stochastic Markov models to investigate these fluctuations. The text also discusses the origin of conductance noise and compares different power spectra for interpreting this noise. Importantly, it introduces a novel sequential chemical state model, named p2, which is more general than the Hodgkin-Huxley formulation, so that the probability for an ion channel to be open does not imply exponentiation. In particular, it is demonstrated that the p2 (without exponentiation) and n4 (with exponentiation) models can produce similar neuronal responses. A striking relationship is also shown between fluctuation and quadratic power spectra, suggesting that voltage-dependent random mechanisms can have a significant impact on deterministic nonlinear responses, themselves known to have a crucial role in the generation of action potentials in biological neural networks.
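The exponentiation issue raised here can be made concrete with a short simulation: the sketch below integrates the standard Hodgkin-Huxley n-gate kinetics at a clamped voltage and compares the classical open probability n(t)⁴ with a single first-order open probability p(t) that is not raised to a power. The rate functions are the textbook HH expressions; this is only an illustration of the contrast discussed in the abstract, not a reimplementation of the authors' p2 model.

```python
import numpy as np

def alpha_n(v):
    # Hodgkin-Huxley potassium activation rate (ms^-1), v in mV
    return 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))

def beta_n(v):
    return 0.125 * np.exp(-(v + 65.0) / 80.0)

def simulate(v=-30.0, t_max=50.0, dt=0.01):
    """Integrate dn/dt = alpha*(1-n) - beta*n at a clamped voltage."""
    steps = int(t_max / dt)
    n = 0.3177         # approximate resting value of the n gate at -65 mV
    p = 0.3177 ** 4    # start the non-exponentiated gate at the same open probability
    a, b = alpha_n(v), beta_n(v)
    trace_n4, trace_p = [], []
    for _ in range(steps):
        n += dt * (a * (1.0 - n) - b * n)
        p += dt * (a * (1.0 - p) - b * p)   # same first-order kinetics, no power
        trace_n4.append(n ** 4)             # HH open probability: four independent gates
        trace_p.append(p)                   # open probability taken directly
    return np.array(trace_n4), np.array(trace_p)

if __name__ == "__main__":
    n4, p = simulate()
    print("steady state  n^4 = %.3f,  p = %.3f" % (n4[-1], p[-1]))
```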
3. Urbizagastegui P, van Schaik A, Wang R. Memory-efficient neurons and synapses for spike-timing-dependent-plasticity in large-scale spiking networks. Front Neurosci 2024;18:1450640. PMID: 39308944. PMCID: PMC11412959. DOI: 10.3389/fnins.2024.1450640.
Abstract
This paper addresses the challenges posed by frequent memory access during simulations of large-scale spiking neural networks involving synaptic plasticity. We focus on the memory accesses performed during a common synaptic plasticity rule, since this can be a significant factor limiting the efficiency of the simulations. We propose neuron models that are represented by only three state variables, which are engineered to enforce the appropriate neuronal dynamics. Additionally, memory retrieval is executed solely by fetching postsynaptic variables, promoting contiguous memory storage and leveraging burst-mode operations to reduce the overhead associated with each access. Different plasticity rules could be implemented despite the adopted simplifications, each leading to a distinct synaptic weight distribution (i.e., unimodal or bimodal). Moreover, our method requires fewer memory accesses on average than a naive approach. We argue that the described strategy can speed up memory transactions and reduce latencies while maintaining a small memory footprint.
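The memory-layout idea, a small fixed set of per-neuron state variables kept in contiguous arrays so that a plasticity update only has to fetch postsynaptic data, can be sketched as below. The three variables chosen here (membrane potential, an adaptation term, and a postsynaptic plasticity trace), the dense weight matrix, and the update constants are illustrative assumptions, not the exact state variables or rule of the paper.

```python
import numpy as np

N = 1024  # number of neurons

# Structure-of-arrays layout: each state variable is one contiguous block,
# so a burst read of, e.g., the plasticity trace touches consecutive addresses.
v      = np.zeros(N, dtype=np.float32)   # membrane potential
w_adpt = np.zeros(N, dtype=np.float32)   # adaptation variable
trace  = np.zeros(N, dtype=np.float32)   # postsynaptic plasticity trace

weights = np.random.rand(N, N).astype(np.float32) * 0.01  # dense only for illustration

def step(spikes_in, dt=1.0, tau_v=20.0, tau_w=100.0, tau_tr=20.0, v_th=1.0):
    """One update step; plasticity reads only postsynaptic variables."""
    global v, w_adpt, trace
    i_syn = weights.T @ spikes_in                 # synaptic input current
    v += dt * (-(v / tau_v) - w_adpt + i_syn)
    w_adpt += dt * (-w_adpt / tau_w)
    trace  *= np.exp(-dt / tau_tr)
    spikes_out = v >= v_th
    v[spikes_out] = 0.0
    w_adpt[spikes_out] += 0.1
    trace[spikes_out] += 1.0
    # Plasticity on presynaptic spikes: fetch only the contiguous postsynaptic trace block.
    pre = np.where(spikes_in > 0)[0]
    if pre.size:
        weights[pre, :] += 0.001 * trace          # contiguous row-wise access
    return spikes_out.astype(np.float32)

if __name__ == "__main__":
    out = step(np.random.binomial(1, 0.05, N).astype(np.float32))
    print("spikes this step:", int(out.sum()))
```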
Affiliations
- Pablo Urbizagastegui: International Centre for Neuromorphic Systems, The MARCS Institute for Brain, Behavior, and Development, Western Sydney University, Kingswood, NSW, Australia
4. Yang S, Wang H, Pang Y, Azghadi MR, Linares-Barranco B. NADOL: Neuromorphic Architecture for Spike-Driven Online Learning by Dendrites. IEEE Trans Biomed Circuits Syst 2024;18:186-199. PMID: 37725735. DOI: 10.1109/tbcas.2023.3316968.
Abstract
Biologically plausible learning with neuronal dendrites is a promising way to improve spike-driven learning by introducing dendritic processing as an additional hyperparameter. Neuromorphic computing is an effective and essential route toward spike-based machine intelligence and neural learning systems; however, online learning capability for neuromorphic models remains an open challenge. This study presents NADOL, a neuromorphic architecture with dendritic online learning, as an efficient methodology for brain-inspired intelligence on embedded hardware. Through distributed processing with spiking neural networks, NADOL reduces power consumption and improves learning efficiency and convergence speed. A detailed analysis demonstrates the effects of different conditions on learning capability, including the number of neurons in the hidden layer, dendritic segregation parameters, feedback connections, and connection sparseness at various levels of amplification. A piecewise linear approximation approach is used to cut the computational resource cost. The experimental results demonstrate a remarkable learning capability that surpasses other solutions, with NADOL exhibiting superior performance over a GPU platform in dendritic learning. The work is applicable across diverse domains, including the Internet of Things, robotic control, and brain-machine interfaces, and marks a step toward bridging artificial intelligence and neuroscience through an innovative neuromorphic paradigm.
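Piecewise linear (PWL) approximation replaces a costly nonlinearity with a small table of segment slopes and intercepts, which maps naturally onto adders and multiplexers in hardware. The sketch below fits an exponential decay, a typical nonlinearity in neuron and synapse dynamics, with a handful of uniform segments; the target function and the segment count are illustrative assumptions, not the configuration used in NADOL.

```python
import numpy as np

def build_pwl(f, x_min, x_max, n_seg):
    """Precompute slopes/intercepts of a piecewise linear fit on uniform segments."""
    xs = np.linspace(x_min, x_max, n_seg + 1)
    ys = f(xs)
    slopes = (ys[1:] - ys[:-1]) / (xs[1:] - xs[:-1])
    intercepts = ys[:-1] - slopes * xs[:-1]
    return xs, slopes, intercepts

def pwl_eval(x, xs, slopes, intercepts):
    """Evaluate the PWL approximation: one compare, one multiply, one add."""
    idx = np.clip(np.searchsorted(xs, x, side="right") - 1, 0, len(slopes) - 1)
    return slopes[idx] * x + intercepts[idx]

if __name__ == "__main__":
    f = lambda x: np.exp(-x)              # e.g. a synaptic/membrane decay kernel
    xs, m, c = build_pwl(f, 0.0, 5.0, 8)  # 8 segments over [0, 5]
    test = np.linspace(0.0, 5.0, 1000)
    err = np.max(np.abs(pwl_eval(test, xs, m, c) - f(test)))
    print("max abs error with 8 segments: %.4f" % err)
```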
5. Yang S, Wang J, Deng B, Azghadi MR, Linares-Barranco B. Neuromorphic Context-Dependent Learning Framework With Fault-Tolerant Spike Routing. IEEE Trans Neural Netw Learn Syst 2022;33:7126-7140. PMID: 34115596. DOI: 10.1109/tnnls.2021.3084250.
Abstract
Neuromorphic computing is a promising technology that realizes computation based on event-based spiking neural networks (SNNs). However, fault-tolerant on-chip learning remains a challenge in neuromorphic systems. This study presents the first scalable neuromorphic fault-tolerant context-dependent learning (FCL) hardware framework. We show how this system can learn associations between stimulation and response in two context-dependent learning tasks from experimental neuroscience, despite possible faults in the hardware nodes. Furthermore, we demonstrate how our novel fault-tolerant neuromorphic spike routing scheme can successfully route around multiple faulty nodes and can enhance the maximum throughput of the neuromorphic network by 0.9%-16.1% compared with previous studies. By utilizing the real-time computational capabilities and multiple-fault-tolerant property of the proposed system, the neuronal mechanisms underlying the spiking activities of neuromorphic networks can be readily explored. In addition, the proposed system can be applied in real-time learning and decision-making applications, brain-machine integration, and the investigation of brain cognition during learning.
6. Yang S, Wang J, Zhang N, Deng B, Pang Y, Azghadi MR. CerebelluMorphic: Large-Scale Neuromorphic Model and Architecture for Supervised Motor Learning. IEEE Trans Neural Netw Learn Syst 2022;33:4398-4412. PMID: 33621181. DOI: 10.1109/tnnls.2021.3057070.
Abstract
The cerebellum plays a vital role in motor learning and control with supervised learning capability, while neuromorphic engineering devises diverse approaches to high-performance computation inspired by biological neural systems. This article presents a large-scale cerebellar network model for supervised learning, as well as a cerebellum-inspired neuromorphic architecture to map the cerebellar anatomical structure into the large-scale model. Our multinucleus model and its underpinning architecture contain approximately 3.5 million neurons, upscaling state-of-the-art neuromorphic designs by over 34 times. Moreover, the proposed model and architecture incorporate 3411k granule cells, a 284-fold increase over a previous study that included only 12k cells. This larger scale yields more biologically plausible cerebellar divergence/convergence ratios and therefore better mimics biology. To verify the functionality of our proposed model and demonstrate its strong biomimicry, a reconfigurable neuromorphic system is used, on which our developed architecture is realized to replicate cerebellar dynamics during the optokinetic response. In addition, our neuromorphic architecture is used to analyze the dynamical synchronization within the Purkinje cells, revealing the effects of mossy fiber firing rates on the resonance dynamics of Purkinje cells. Our experiments show that real-time operation can be achieved, with a system throughput up to 4.70 times higher than that of previous works at high synaptic event rates. These results suggest that the proposed work provides both a theoretical basis and a neuromorphic engineering perspective for brain-inspired computing and the further exploration of cerebellar learning.
7. Yu H, Meng Z, Li H, Liu C, Wang J. Intensity-Varied Closed-Loop Noise Stimulation for Oscillation Suppression in the Parkinsonian State. IEEE Trans Cybern 2022;52:9861-9870. PMID: 34398769. DOI: 10.1109/tcyb.2021.3079100.
Abstract
This work explores the effectiveness of intensity-varied closed-loop noise stimulation for oscillation suppression in the Parkinsonian state. Deep brain stimulation (DBS) is the standard therapy for Parkinson's disease (PD), but its effects need to be improved. Noise stimulation has shown compelling results in alleviating the PD state. However, in an open-loop control scheme the noise stimulation parameters cannot adjust themselves to the amplitude of the synchronized neuronal activity in real time. Thus, an intensity-varied closed-loop noise stimulation strategy based on the delayed-feedback control algorithm is proposed. The strategy is tested on a computational model of the basal ganglia (BG) that captures the intrinsic properties of BG neurons and their interactions with thalamic neurons. Simulation results show that the noise stimulation suppresses the pathological beta (12-35 Hz) oscillations without introducing new rhythms in other bands, in contrast with traditional high-frequency DBS. The intensity-varied closed-loop noise stimulation removes the pathological beta oscillations and improves thalamic reliability more effectively than open-loop noise stimulation, especially across different PD states, and it enlarges the parameter space of the delayed-feedback control algorithm owing to the randomness of the noise signals. We also provide a theoretical analysis of the effective parameter domain of the delayed-feedback control algorithm by simplifying the BG model to an oscillator model. This exploration may guide a new approach to treating PD by optimizing the noise-induced improvement of BG dysfunction.
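The core of the closed-loop scheme, a stimulation intensity that tracks a delayed estimate of the pathological oscillation amplitude, can be summarized in a few lines. The sketch below band-pass filters a surrogate recording around the beta band, delays its envelope, and scales a noise waveform by that delayed envelope; the surrogate signal, filter settings, gain, and delay are placeholders rather than the model or parameters used in the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 1000.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 5.0, 1.0 / fs)
lfp = np.sin(2 * np.pi * 20 * t) * (1 + 0.5 * np.sin(2 * np.pi * 0.5 * t)) \
      + 0.2 * np.random.randn(t.size)         # surrogate beta-band activity

# 1) Estimate the beta-band (12-35 Hz) envelope of the recorded activity.
sos = butter(4, [12, 35], btype="bandpass", fs=fs, output="sos")
beta = sosfiltfilt(sos, lfp)
envelope = np.abs(hilbert(beta))

# 2) Delayed feedback: intensity follows the envelope observed tau seconds ago.
tau = 0.050                                   # feedback delay (s), placeholder
delay = int(tau * fs)
delayed_env = np.concatenate([np.zeros(delay), envelope[:-delay]])

# 3) Intensity-varied noise stimulation: white noise scaled by the delayed envelope.
gain = 1.5                                    # feedback gain, placeholder
stimulus = gain * delayed_env * np.random.randn(t.size)

print("mean stimulation intensity: %.3f" % np.mean(np.abs(stimulus)))
```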
8. Yuan C, Li X. Fitting of TC model according to key parameters affecting Parkinson's state based on improved particle swarm optimization algorithm. Sci Rep 2022;12:13938. PMID: 35977977. PMCID: PMC9385711. DOI: 10.1038/s41598-022-18267-9.
Abstract
Biophysical models contain a large number of parameters, yet the spiking characteristics of neurons depend on only a few key ones. For thalamic neurons, relay reliability is an important characteristic that affects the Parkinsonian state. This paper proposes a method to fit the key parameters of the model from the spiking characteristics of neurons and improves the traditional particle swarm optimization algorithm: a nonlinear concave function and a logistic chaotic mapping are combined to adjust the inertia weight of the particles, preventing them from falling into a local optimum during the search or converging prematurely. Three parameters of the thalamic cell model that play an important role in the Parkinsonian state are selected and fitted by the improved particle swarm optimization algorithm. Reconstructing the neuron model with the fitted parameters predicts the spiking trajectories well, which verifies the effectiveness of the fitting method. Comparison with other particle swarm optimization algorithms shows that the proposed algorithm better avoids local optima and converges quickly to the optimal values.
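The modified inertia-weight idea can be illustrated with a compact PSO loop in which the weight decays along a nonlinear concave schedule and is perturbed by a logistic chaotic map. How the concave function and the chaotic term are combined here (a square-root schedule scaled by the map output) is a plausible reading for illustration only; the paper's exact formula may differ, and the sphere cost function stands in for the model-fitting error.

```python
import numpy as np

def improved_pso(cost, dim, n_particles=30, iters=200, w_max=0.9, w_min=0.4,
                 c1=2.0, c2=2.0, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(cost, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()
    z = 0.37                                   # logistic-map state (avoid fixed points)
    for k in range(iters):
        z = 4.0 * z * (1.0 - z)                # logistic chaotic map
        frac = k / (iters - 1)
        # Nonlinear concave decay of the inertia weight, lightly perturbed by the chaotic term.
        w = w_min + (w_max - w_min) * np.sqrt(1.0 - frac) * (0.9 + 0.1 * z)
        r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(cost, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

if __name__ == "__main__":
    sphere = lambda p: float(np.sum(p ** 2))   # stand-in for the model-fitting error
    best, best_val = improved_pso(sphere, dim=3)
    print("best value: %.2e" % best_val)
```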
Affiliations
- Chunhua Yuan: School of Automation and Electrical Engineering, Shenyang Ligong University, Shenyang 110159, China
- Xiangyu Li: School of Automation and Electrical Engineering, Shenyang Ligong University, Shenyang 110159, China
9. Adaptive neural output feedback control of automobile PEM fuel cell air-supply system with prescribed performance. Appl Intell 2022. DOI: 10.1007/s10489-022-03765-0.
10. Guo W, Yantir HE, Fouda ME, Eltawil AM, Salama KN. Toward the Optimal Design and FPGA Implementation of Spiking Neural Networks. IEEE Trans Neural Netw Learn Syst 2022;33:3988-4002. PMID: 33571097. DOI: 10.1109/tnnls.2021.3055421.
Abstract
The performance of a biologically plausible spiking neural network (SNN) largely depends on the model parameters and neural dynamics. This article proposes a parameter optimization scheme for improving the performance of a biologically plausible SNN, together with a parallel online-learning neuromorphic platform on a field-programmable gate array (FPGA) for its digital implementation based on two numerical methods, namely the Euler and third-order Runge-Kutta (RK3) methods. The optimization scheme explores the impact of biological time constants on information transmission in the SNN and improves its convergence rate on digit recognition through a suitable choice of the time constants. The parallel digital implementation leads to a significant speedup over software simulation on a general-purpose CPU: with the Euler method it achieves around a 180× (20×) training (inference) speedup over a PyTorch-based SNN simulation on CPU. Moreover, compared with previous work, our parallel implementation shows more than a 300× (240×) improvement in speed and a 180× (250×) reduction in energy consumption for training (inference). In addition, owing to its higher-order accuracy, the RK3 method achieves a 2× training speedup over the Euler method, which makes it suitable for online training in real-time applications.
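The Euler-versus-RK3 trade-off can be shown on a generic leaky integrate-and-fire membrane equation, which is not the paper's exact neuron model but makes the accuracy difference easy to check against a closed-form solution. The RK3 tableau used is Kutta's classical third-order scheme; all constants are illustrative.

```python
import numpy as np

TAU, V_REST, R = 20.0, 0.0, 1.0       # membrane constants (ms, a.u.), illustrative

def dvdt(v, i_in):
    # Leaky integrate-and-fire subthreshold dynamics: tau * dv/dt = -(v - v_rest) + R*I
    return (-(v - V_REST) + R * i_in) / TAU

def euler_step(v, i_in, dt):
    return v + dt * dvdt(v, i_in)

def rk3_step(v, i_in, dt):
    # Kutta's third-order scheme (input held constant over the step)
    k1 = dvdt(v, i_in)
    k2 = dvdt(v + 0.5 * dt * k1, i_in)
    k3 = dvdt(v - dt * k1 + 2.0 * dt * k2, i_in)
    return v + dt * (k1 + 4.0 * k2 + k3) / 6.0

if __name__ == "__main__":
    dt, t_max, i_in = 1.0, 100.0, 1.5
    exact = V_REST + R * i_in * (1.0 - np.exp(-t_max / TAU))   # closed-form step response
    for name, step in (("Euler", euler_step), ("RK3", rk3_step)):
        v = V_REST
        for _ in range(int(t_max / dt)):
            v = step(v, i_in, dt)
        print("%s  v(%g ms) = %.6f   |error| = %.2e" % (name, t_max, v, abs(v - exact)))
```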
11. Wang H, He Z, Wang T, He J, Zhou X, Wang Y, Liu L, Wu N, Tian M, Shi C. TripleBrain: A Compact Neuromorphic Hardware Core With Fast On-Chip Self-Organizing and Reinforcement Spike-Timing Dependent Plasticity. IEEE Trans Biomed Circuits Syst 2022;16:636-650. PMID: 35802542. DOI: 10.1109/tbcas.2022.3189240.
Abstract
The human brain cortex is a rich source of inspiration for constructing efficient artificial cognitive systems. In this paper, we investigate incorporating multiple brain-inspired computing paradigms for a compact, fast and high-accuracy neuromorphic hardware implementation. We propose the TripleBrain hardware core, which tightly combines three common brain-inspired factors: spike-based processing and plasticity, the self-organizing map (SOM) mechanism, and the reinforcement learning scheme, to improve object recognition accuracy and processing throughput while keeping resource costs low. The proposed hardware core is fully event-driven to mitigate unnecessary operations, and it supports various on-chip learning rules (including the proposed SOM-STDP & R-STDP rule and the R-SOM-STDP rule, regarded as two variants of our TripleBrain learning rule) with different accuracy-latency tradeoffs to satisfy user requirements. An FPGA prototype of the neuromorphic core was implemented and thoroughly tested. It achieved high-speed learning (1349 frames/s) and inference (2698 frames/s), and obtained comparably high recognition accuracies of 95.10%, 80.89%, 100%, 94.94%, 82.32%, 100% and 97.93% on the MNIST, ETH-80, ORL-10, Yale-10, N-MNIST, Poker-DVS and Posture-DVS datasets, respectively, while consuming only 4146 (7.59%) slices, 32 (3.56%) DSPs and 131 (24.04%) Block RAMs on a Xilinx Zynq-7045 FPGA chip. Our neuromorphic core is therefore attractive for real-time, resource-limited edge intelligent systems.
12. Yang S, Wang J, Hao X, Li H, Wei X, Deng B, Loparo KA. BiCoSS: Toward Large-Scale Cognition Brain With Multigranular Neuromorphic Architecture. IEEE Trans Neural Netw Learn Syst 2022;33:2801-2815. PMID: 33428574. DOI: 10.1109/tnnls.2020.3045492.
Abstract
The further exploration of the neural mechanisms underlying the biological activities of the human brain depends on the development of large-scale spiking neural networks (SNNs) of different categories at different levels, as well as the corresponding computing platforms. Neuromorphic engineering provides approaches to high-performance, biologically plausible computational paradigms inspired by neural systems. In this article, we present a biologically inspired cognitive supercomputing system (BiCoSS) that integrates multiple granules (GRs) of SNNs to realize a hybrid, compatible neuromorphic platform. A scalable hierarchical heterogeneous multicore architecture is presented, and a synergistic routing scheme for hybrid neural information is proposed. The BiCoSS system can accommodate different levels of GRs and of biological plausibility of SNN models in an efficient and scalable manner. Over four million neurons can be realized on BiCoSS, with a power efficiency 2.8k times larger than the GPU platform, and the average latency of BiCoSS is 3.62 and 2.49 times higher than conventional architectures of digital neuromorphic systems. For verification, BiCoSS is used to replicate various biological cognitive activities, including motor learning, action selection, context-dependent learning, and movement disorders. Comprehensively considering programmability, biological plausibility, learning capability, computational power, and scalability, BiCoSS is shown to outperform alternative state-of-the-art works for large-scale SNNs, while its real-time computational capability enables a wide range of potential applications.
13. Online subspace learning and imputation by Tensor-Ring decomposition. Neural Netw 2022;153:314-324. PMID: 35772252. DOI: 10.1016/j.neunet.2022.05.023.
Abstract
This paper considers the completion of partially observed high-order streaming data, cast as an online low-rank tensor completion problem. Although online low-rank tensor completion has drawn much attention in recent years, most existing methods are designed around traditional decompositions such as CP and Tucker. Motivated by the advantages of Tensor Ring decomposition over these traditional decompositions in expressing high-order data and estimating missing values, this paper proposes two online subspace learning and imputation methods based on Tensor Ring decomposition. Specifically, we first propose an online Tensor Ring subspace learning and imputation model by formulating an exponentially weighted least-squares problem with Frobenius-norm regularization of the TR cores. Two commonly used optimization algorithms, alternating recursive least squares and stochastic gradient algorithms, are then developed to solve the proposed model. Numerical experiments show that the proposed methods exploit the time-varying subspace more effectively than conventional Tensor Ring completion methods. Moreover, they obtain better results than state-of-the-art online methods in streaming data completion under varying missing ratios and noise.
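A hedged sketch of the kind of objective described, an exponentially weighted least-squares fit of the observed entries with Frobenius-norm regularization of the TR cores, is given below. The notation (forgetting factor λ, observation mask P_{Ω_t}, cores G^(1),...,G^(d), and TR(·) for the tensor reconstructed from the cores) is generic and not taken verbatim from the paper.

```latex
\min_{\mathcal{G}^{(1)},\dots,\mathcal{G}^{(d)}}\;
\sum_{t=1}^{T} \lambda^{\,T-t}\,
\bigl\| \mathcal{P}_{\Omega_t}\!\bigl( \mathcal{Y}_t
  - \mathrm{TR}\bigl(\mathcal{G}^{(1)},\dots,\mathcal{G}^{(d)}\bigr) \bigr) \bigr\|_F^2
\;+\; \mu \sum_{k=1}^{d} \bigl\| \mathcal{G}^{(k)} \bigr\|_F^2 ,
\qquad 0 < \lambda \le 1 .
```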
14. Constructing novel datasets for intent detection and NER in a Korean healthcare advice system: guidelines and empirical results. Appl Intell 2022. DOI: 10.1007/s10489-022-03400-y.
15. Yang S, Gao T, Wang J, Deng B, Azghadi MR, Lei T, Linares-Barranco B. SAM: A Unified Self-Adaptive Multicompartmental Spiking Neuron Model for Learning With Working Memory. Front Neurosci 2022;16:850945. PMID: 35527819. PMCID: PMC9074872. DOI: 10.3389/fnins.2022.850945.
Abstract
Working memory is a fundamental feature of biological brains for perception, cognition, and learning. In addition, learning with working memory, which has been shown in conventional artificial intelligence systems through recurrent neural networks, is instrumental to advanced cognitive intelligence. However, it is hard to endow a simple neuron model with working memory, and to understand the biological mechanisms that have resulted in such a powerful ability at the neuronal level. This article presents a novel self-adaptive multicompartment spiking neuron model, referred to as SAM, for spike-based learning with working memory. SAM integrates four major biological principles including sparse coding, dendritic non-linearity, intrinsic self-adaptive dynamics, and spike-driven learning. We first describe SAM's design and explore the impacts of critical parameters on its biological dynamics. We then use SAM to build spiking networks to accomplish several different tasks including supervised learning of the MNIST dataset using sequential spatiotemporal encoding, noisy spike pattern classification, sparse coding during pattern classification, spatiotemporal feature detection, meta-learning with working memory applied to a navigation task and the MNIST classification task, and working memory for spatiotemporal learning. Our experimental results highlight the energy efficiency and robustness of SAM in this wide range of challenging tasks. The effects of SAM model variations on its working memory are also explored, hoping to offer insight into the biological mechanisms underlying working memory in the brain. The SAM model is the first attempt to integrate the capabilities of spike-driven learning and working memory in a unified single neuron with multiple timescale dynamics. The competitive performance of SAM could potentially contribute to the development of efficient adaptive neuromorphic computing systems for various applications from robotics to edge computing.
Affiliations
- Shuangming Yang: School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Tian Gao: School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Jiang Wang: School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Bin Deng: School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Tao Lei: School of Electronic Information and Artificial Intelligence, Shaanxi University of Science and Technology, Xi'an, China

16. Yang S, Tan J, Chen B. Robust Spike-Based Continual Meta-Learning Improved by Restricted Minimum Error Entropy Criterion. Entropy 2022;24:e24040455. PMID: 35455118. PMCID: PMC9031894. DOI: 10.3390/e24040455.
Abstract
The spiking neural network (SNN) is regarded as a promising candidate for addressing the great challenges of current machine learning techniques, including the high energy consumption of deep neural networks. However, there is still a large gap between the online meta-learning performance of SNNs and that of artificial neural networks. Importantly, existing spike-based online meta-learning models do not target robust learning grounded in spatio-temporal dynamics and well-established machine learning theory. In this invited article, we propose a novel spike-based framework with minimum error entropy, called MeMEE, which uses entropy theory to establish a gradient-based online meta-learning scheme in a recurrent SNN architecture. We examine its performance on various types of tasks, including autonomous navigation and a working memory test. The experimental results show that the proposed MeMEE model can effectively improve the accuracy and robustness of spike-based meta-learning. More importantly, MeMEE emphasizes applying modern information-theoretic learning to state-of-the-art spike-based learning algorithms. We therefore provide new perspectives on further integrating advanced information theory into machine learning to improve the learning performance of SNNs, which could be of great merit to applied developments with spike-based neuromorphic systems.
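The minimum error entropy (MEE) criterion is commonly implemented by maximizing the quadratic information potential of the prediction errors, estimated with a Gaussian kernel over all error pairs. The sketch below computes that standard estimator and the corresponding loss; how MeMEE restricts the criterion and couples it to the spike-based meta-learning loop is not reproduced here, and the kernel width is an illustrative choice.

```python
import numpy as np

def information_potential(errors, sigma=1.0):
    """Quadratic Renyi information potential V(e) = (1/N^2) sum_ij G_sigma(e_i - e_j)."""
    e = np.asarray(errors, dtype=float).reshape(-1, 1)
    diff = e - e.T                                    # all pairwise error differences
    kernel = np.exp(-diff ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)
    return kernel.mean()

def mee_loss(errors, sigma=1.0):
    """Minimizing error entropy ~ maximizing the information potential."""
    return -information_potential(errors, sigma)

if __name__ == "__main__":
    tight = np.random.randn(200) * 0.1                # concentrated errors
    loose = np.random.randn(200) * 2.0                # spread-out errors
    print("loss (tight errors): %.4f" % mee_loss(tight))
    print("loss (loose errors): %.4f" % mee_loss(loose))  # higher (worse) loss
```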
Affiliations
- Shuangming Yang: School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
- Jiangtong Tan: School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
- Badong Chen: Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an 710049, China

17. Walking motion real-time detection method based on walking stick, IoT, COPOD and improved LightGBM. Appl Intell 2022. DOI: 10.1007/s10489-022-03264-2.
18. Channel pruning guided by global channel relation. Appl Intell 2022. DOI: 10.1007/s10489-022-03198-9.
19. Chen L, Ren J, Chen P, Mao X, Zhao Q. Limited text speech synthesis with electroglottograph based on Bi-LSTM and modified Tacotron-2. Appl Intell 2022. DOI: 10.1007/s10489-021-03075-x.
Abstract
This paper proposes a framework that applies only the electroglottograph (EGG) signal for speech synthesis in a limited-content-category scenario. EGG is a physiological signal that reflects the movement of the vocal cords. Because EGG is acquired differently from speech signals, we exploit it for speech synthesis in two scenarios: (1) synthesizing speech in high-noise environments where clean speech signals are unavailable, and (2) enabling people who cannot speak but retain vocal cord vibration to speak again. Our study consists of two stages, EGG to text and text to speech. The first is a Bi-LSTM-based text content recognition model that converts each EGG signal sample into the corresponding text within a limited set of content classes; it achieves 91.12% accuracy on the validation set in a 20-class content recognition experiment. The second stage synthesizes speech from the recognized text and the EGG signal. Based on a modified Tacotron-2, our model attains a Mel cepstral distortion (MCD) of 5.877 and a mean opinion score (MOS) of 3.87, which is comparable with state-of-the-art performance and achieves an improvement of 0.42 and a relatively smaller model size than the original Tacotron-2. To carry the speaker characteristics contained in EGG into the final synthesized speech, we put forward a fine-grained fundamental frequency modification method, which adjusts the fundamental frequency according to the EGG signal and achieves a lower MCD of 5.781 and a higher MOS of 3.94 than without the modification.
21. Self-supervised representation learning for detection of ACL tear injury in knee MR videos. Pattern Recognit Lett 2022. DOI: 10.1016/j.patrec.2022.01.008.
22. Enhancing cooperation by cognition differences and consistent representation in multi-agent reinforcement learning. Appl Intell 2022. DOI: 10.1007/s10489-021-02873-7.
23. Huang Y, Yu Z, Guo J, Xiang Y, Yu Z, Xian Y. Abstractive document summarization via multi-template decoding. Appl Intell 2022. DOI: 10.1007/s10489-021-02607-9.
24. Liu J, Wang J, Yu W, Wang Z, Zhong G, He F. Semi-supervised deep learning recognition method for the new classes of faults in wind turbine system. Appl Intell 2022. DOI: 10.1007/s10489-021-03024-8.
26. Xu C, Liu Q. An inertial neural network approach for robust time-of-arrival localization considering clock asynchronization. Neural Netw 2021;146:98-106. PMID: 34852299. DOI: 10.1016/j.neunet.2021.11.012.
Abstract
This paper presents an inertial neural network to solve the source localization optimization problem with an l1-norm objective function based on the time-of-arrival (TOA) localization technique. The convergence and stability of the inertial neural network are analyzed by the Lyapunov function method. An inertial neural network iterative approach is further used to find a better solution among the solutions obtained with different inertial parameters. Furthermore, clock asynchronization is considered in the TOA l1-norm model for more general real applications, and the corresponding inertial neural network iterative approach is addressed. Both numerical simulations and real data are considered in the experiments. In the simulations, the noise contains uncorrelated zero-mean Gaussian noise and uniformly distributed outliers; in the real experiments, the data are obtained with ultra-wideband (UWB) hardware modules. With or without clock asynchronization, the results show that the proposed approach can always find a more accurate source position than several existing algorithms, which implies that it is more effective than the compared ones.
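The optimization problem behind such an approach, an l1-norm fit of the TOA range equations with an unknown clock-offset term, can be sketched as below. The anchor positions a_i, source position x, propagation speed c, measured arrival times t_i, and offset δ follow common TOA notation rather than the paper's exact formulation.

```latex
\min_{\mathbf{x},\,\delta}\;
\sum_{i=1}^{M} \Bigl|\, \bigl\| \mathbf{x} - \mathbf{a}_i \bigr\|_2 \;-\; c\,\bigl(t_i - \delta\bigr) \Bigr| .
```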
Affiliations
- Chentao Xu: School of Cyber Science and Engineering, Frontiers Science Center for Mobile Information Communication and Security, Southeast University, Nanjing 210096, China; Purple Mountain Laboratories, Nanjing 211111, China.
- Qingshan Liu: School of Mathematics, Frontiers Science Center for Mobile Information Communication and Security, Southeast University, Nanjing 210096, China; Purple Mountain Laboratories, Nanjing 211111, China.
27. Tian X, Qiu L, Zhang J. User behavior prediction via heterogeneous information in social networks. Inf Sci (N Y) 2021. DOI: 10.1016/j.ins.2021.10.018.
28. Deng B, Fan Y, Wang J, Yang S. Reconstruction of a Fully Paralleled Auditory Spiking Neural Network and FPGA Implementation. IEEE Trans Biomed Circuits Syst 2021;15:1320-1331. PMID: 34699367. DOI: 10.1109/tbcas.2021.3122549.
Abstract
This paper presents a field-programmable gate array (FPGA) implementation of an auditory system that is biologically inspired and offers robustness and noise tolerance. We propose an FPGA implementation of an eleven-channel hierarchical spiking neural network (SNN) model with a sparsely connected architecture and low power consumption. Following the mechanism of the auditory pathway in the human brain, spike trains generated by the cochlea are analyzed in the hierarchical SNN, and the specific word is identified by a Bayesian classifier. A modified leaky integrate-and-fire (LIF) model is used to realize the hierarchical SNN, achieving both high efficiency and low hardware consumption. The hierarchical SNN implemented on FPGA allows the auditory system to operate at high speed and to be interfaced with external machines and sensors. Speech from different speakers mixed with noise is used as input to test the performance of our system, and the experimental results show that the system can classify words in a biologically plausible way in the presence of noise. The method is flexible, and the system can be scaled as desired. These results confirm that the proposed biologically plausible auditory system provides a better method for on-chip speech recognition. Compared with the state of the art, our auditory system achieves a higher speed, with a maximum frequency of 65.03 MHz, and a lower energy consumption of 276.83 μJ per operation. It can be applied in brain-computer interfaces and intelligent robots.
29. Imitation and mirror systems in robots through Deep Modality Blending Networks. Neural Netw 2021;146:22-35. PMID: 34839090. DOI: 10.1016/j.neunet.2021.11.004.
Abstract
Learning to interact with the environment not only empowers the agent with manipulation capability but also generates information to facilitate building of action understanding and imitation capabilities. This seems to be a strategy adopted by biological systems, in particular primates, as evidenced by the existence of mirror neurons that seem to be involved in multi-modal action understanding. How to benefit from the interaction experience of the robots to enable understanding actions and goals of other agents is still a challenging question. In this study, we propose a novel method, deep modality blending networks (DMBN), that creates a common latent space from multi-modal experience of a robot by blending multi-modal signals with a stochastic weighting mechanism. We show for the first time that deep learning, when combined with a novel modality blending scheme, can facilitate action recognition and produce structures to sustain anatomical and effect-based imitation capabilities. Our proposed system, which is based on conditional neural processes, can be conditioned on any desired sensory/motor value at any time step, and can generate a complete multi-modal trajectory consistent with the desired conditioning in one-shot by querying the network for all the sampled time points in parallel avoiding the accumulation of prediction errors. Based on simulation experiments with an arm-gripper robot and an RGB camera, we showed that DMBN could make accurate predictions about any missing modality (camera or joint angles) given the available ones outperforming recent multimodal variational autoencoder models in terms of long-horizon high-dimensional trajectory predictions. We further showed that given desired images from different perspectives, i.e. images generated by the observation of other robots placed on different sides of the table, our system could generate image and joint angle sequences that correspond to either anatomical or effect-based imitation behavior. To achieve this mirror-like behavior, our system does not perform a pixel-based template matching but rather benefits from and relies on the common latent space constructed by using both joint and image modalities, as shown by additional experiments. Moreover, we showed that mirror learning (in our system) does not only depend on visual experience and cannot be achieved without proprioceptive experience. Our experiments showed that out of ten training scenarios with different initial configurations, the proposed DMBN model could achieve mirror learning in all of the cases where the model that only uses visual information failed in half of them. Overall, the proposed DMBN architecture not only serves as a computational model for sustaining mirror neuron-like capabilities, but also stands as a powerful machine learning architecture for high-dimensional multi-modal temporal data with robust retrieval capabilities operating with partial information in one or multiple modalities.
30. Datta S, Boulgouris NV. Recognition of grammatical class of imagined words from EEG signals using convolutional neural network. Neurocomputing 2021. DOI: 10.1016/j.neucom.2021.08.035.
31. Leveraging label hierarchy using transfer and multi-task learning: A case study on patent classification. Neurocomputing 2021. DOI: 10.1016/j.neucom.2021.07.057.
32. Hadri A, Laghrib A, Oummi H. An optimal variable exponent model for Magnetic Resonance Images denoising. Pattern Recognit Lett 2021. DOI: 10.1016/j.patrec.2021.08.031.
33. Time delay system identification using controlled recurrent neural network and discrete Bayesian optimization. Appl Intell 2021. DOI: 10.1007/s10489-021-02823-3.
34. Defect classification on limited labeled samples with multiscale feature fusion and semi-supervised learning. Appl Intell 2021. DOI: 10.1007/s10489-021-02917-y.
35. Chen J, Wang L, Duan S. A mixed-kernel, variable-dimension memristive CNN for electronic nose recognition. Neurocomputing 2021. DOI: 10.1016/j.neucom.2021.07.009.
36. Gu L, Pang C, Zheng Y, Lyu C, Lyu L. Context-aware pyramid attention network for crowd counting. Appl Intell 2021. DOI: 10.1007/s10489-021-02639-1.
37. Thermal-based early breast cancer detection using inception V3, inception V4 and modified inception MV4. Neural Comput Appl 2021;34:333-348. PMID: 34393379. PMCID: PMC8349135. DOI: 10.1007/s00521-021-06372-1.
Abstract
Breast cancer is one of the most significant causes of death for women around the world. Breast thermography supported by deep convolutional neural networks is expected to contribute significantly to early detection and to facilitate treatment at an early stage. The goal of this study is to investigate the behavior of different recent deep learning methods for identifying breast disorders. To evaluate our proposal, we built classifiers based on deep convolutional neural networks modelling inception V3, inception V4, and a modified version of the latter called inception MV4. MV4 was introduced to maintain the computational cost across all layers by making the resultant number of features and the number of pixel positions equal. The DMR database was used with these deep learning models to classify thermal images of healthy and sick patients. Epochs ranging from 3 to 30 were used in conjunction with learning rates of 1 × 10⁻³, 1 × 10⁻⁴ and 1 × 10⁻⁵, a mini-batch size of 10, and different optimization methods. The training results showed that inception V4 and MV4 with color images, a learning rate of 1 × 10⁻⁴, and the SGDM optimization method reached very high accuracy, verified through several experimental repetitions. With grayscale images, inception V3 outperforms V4 and MV4 by a considerable accuracy margin for any optimization method. In fact, the inception V3 (grayscale) performance is almost comparable to the inception V4 and MV4 (color) performance, but only after 20-30 epochs. Inception MV4 achieved a 7% faster classification response time than V4. The MV4 model was also found to reduce the energy consumed and to improve the fluidity of arithmetic operations on the graphics processor. The results also indicate that increasing the number of layers may not necessarily be useful in improving performance.
38. Du N, Zhao X, Chen Z, Choubey B, Di Ventra M, Skorupa I, Bürger D, Schmidt H. Synaptic Plasticity in Memristive Artificial Synapses and Their Robustness Against Noisy Inputs. Front Neurosci 2021;15:660894. PMID: 34335153. PMCID: PMC8316997. DOI: 10.3389/fnins.2021.660894.
Abstract
Emerging brain-inspired neuromorphic computing paradigms require devices that can emulate the complete functionality of biological synapses upon different neuronal activities in order to process big data flows in an efficient and cognitive manner while being robust against any noisy input. The memristive device has been proposed as a promising candidate for emulating artificial synapses due to its complex multilevel and dynamical plastic behaviors. In this work, we exploit ultrastable analog BiFeO3 (BFO)-based memristive devices for experimentally demonstrating that BFO artificial synapses support various long-term plastic functions, i.e., spike timing-dependent plasticity (STDP), cycle number-dependent plasticity (CNDP), and spiking rate-dependent plasticity (SRDP). The study on the impact of electrical stimuli in terms of pulse width and amplitude on STDP behaviors shows that their learning windows possess a wide range of timescale configurability, which can be a function of the applied waveform. Moreover, beyond SRDP, a systematic and comparative study on generalized frequency-dependent plasticity (FDP) is carried out, which reveals for the first time that modulating the ratio between pulse width and pulse interval time within one spike cycle can result in both synaptic potentiation and depression effects at the same firing frequency. The impact of intrinsic neuronal noise on the STDP function of a single BFO artificial synapse can be neglected, because thermal noise is two orders of magnitude smaller than the writing voltage and because the cycle-to-cycle variation of the current-voltage characteristics of a single BFO artificial synapse is small. However, extrinsic voltage fluctuations, e.g., in neural networks, cause a noisy input into the artificial synapses of the neural network. Here, the impact of extrinsic neuronal noise on the STDP function of a single BFO artificial synapse is analyzed in order to understand the robustness of plastic behavior in memristive artificial synapses against extrinsic noisy input.
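The timing-dependent behavior reported here is usually summarized by an STDP learning window in which the weight (or conductance) change decays exponentially with the pre-post spike interval. The sketch below implements that textbook double-exponential window; the amplitudes and time constants are placeholders, since in the BFO devices they depend on pulse width and amplitude as the abstract describes.

```python
import numpy as np

def stdp_window(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change as a function of t_post - t_pre (ms).

    dt > 0: pre before post -> potentiation; dt < 0: post before pre -> depression.
    """
    dt_ms = np.asarray(dt_ms, dtype=float)
    return np.where(dt_ms >= 0.0,
                    a_plus * np.exp(-dt_ms / tau_plus),
                    -a_minus * np.exp(dt_ms / tau_minus))

if __name__ == "__main__":
    for dt in (-40, -10, -1, 1, 10, 40):
        print("dt = %+4d ms  ->  dw = %+.5f" % (dt, float(stdp_window(dt))))
```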
Affiliations
- Nan Du: Department Nano Device Technology, Fraunhofer Institute for Electronic Nano Systems, Chemnitz, Germany; Faculty of Electrical Engineering and Information Technology, Chemnitz University of Technology, Chemnitz, Germany; Department of Quantum Detection, Leibniz Institute of Photonic Technology, Jena, Germany; Institute for Solid State Physics, Friedrich Schiller University Jena, Jena, Germany
- Xianyue Zhao: Department Nano Device Technology, Fraunhofer Institute for Electronic Nano Systems, Chemnitz, Germany; Faculty of Electrical Engineering and Information Technology, Chemnitz University of Technology, Chemnitz, Germany
- Ziang Chen: Department Nano Device Technology, Fraunhofer Institute for Electronic Nano Systems, Chemnitz, Germany; Faculty of Electrical Engineering and Information Technology, Chemnitz University of Technology, Chemnitz, Germany
- Bhaskar Choubey: Analogue Circuits and Image Sensors, Universität Siegen, Siegen, Germany; Fraunhofer Institute of Microelectronics Circuits & Systems, ATTRACT Group Microelectronic Intelligence, Duisburg, Germany
- Ilona Skorupa: Institute of Ion Beam Physics and Materials Research, Helmholtz-Zentrum Dresden-Rossendorf, Dresden, Germany
- Danilo Bürger: Department Nano Device Technology, Fraunhofer Institute for Electronic Nano Systems, Chemnitz, Germany; Faculty of Electrical Engineering and Information Technology, Chemnitz University of Technology, Chemnitz, Germany
- Heidemarie Schmidt: Department Nano Device Technology, Fraunhofer Institute for Electronic Nano Systems, Chemnitz, Germany; Faculty of Electrical Engineering and Information Technology, Chemnitz University of Technology, Chemnitz, Germany; Department of Quantum Detection, Leibniz Institute of Photonic Technology, Jena, Germany; Institute for Solid State Physics, Friedrich Schiller University Jena, Jena, Germany
39. A neuromimetic realization of hippocampal CA1 for theta wave generation. Neural Netw 2021;142:548-563. PMID: 34340189. DOI: 10.1016/j.neunet.2021.07.002.
Abstract
Recent advances in neural engineering have allowed the development of neuroprostheses that facilitate functionality in people with neurological problems. In this research, a real-time neuromorphic system is proposed to artificially reproduce the theta wave and the firing patterns of different neuronal populations in the CA1, a sub-region of the hippocampus. The hippocampal theta oscillations (4-12 Hz) are an important electrophysiological rhythm that contributes to various cognitive functions, including navigation, memory, and novelty detection. The proposed CA1 neuromimetic circuit includes 100 linearized Pinsky-Rinzel neurons and 668 excitatory and inhibitory synapses on a field-programmable gate array (FPGA). The implemented spiking neural network of the CA1 includes the main neuronal populations for theta rhythm generation: excitatory pyramidal cells, PV+ basket cells, and Oriens Lacunosum-Moleculare (OLM) cells, which are inhibitory interneurons. Moreover, the main inputs to the CA1 region, from the entorhinal cortex via the perforant pathway, from the CA3 via the Schaffer collaterals, and from the medial septum via the fimbria-fornix, are also implemented on the FPGA using a bursting leaky integrate-and-fire (LIF) neuron model. The results of the hardware realization show that the proposed CA1 neuromimetic circuit successfully reconstructs the theta oscillations and functionally illustrates the phase relations between the firing responses of the different neuronal populations. The impact of eliminating the medial septum input on the firing patterns of the CA1 neuronal populations and on the characteristics of the theta wave is also evaluated. This neuromorphic system can be considered a potential platform that opens opportunities for neuroprosthetic applications in future work.
40. Hazan A, Ezra Tsur E. Neuromorphic Analog Implementation of Neural Engineering Framework-Inspired Spiking Neuron for High-Dimensional Representation. Front Neurosci 2021;15:627221. PMID: 33692670. PMCID: PMC7937893. DOI: 10.3389/fnins.2021.627221.
Abstract
Brain-inspired hardware designs realize neural principles in electronics to provide high-performing, energy-efficient frameworks for artificial intelligence. The Neural Engineering Framework (NEF) brings forth a theoretical framework for representing high-dimensional mathematical constructs with spiking neurons in order to implement functional large-scale neural networks. Here, we present OZ, a programmable analog implementation of NEF-inspired spiking neurons. OZ neurons can be dynamically programmed to feature varying high-dimensional response curves with positive and negative encoders for a neuromorphic distributed representation of normalized input data. Our hardware design demonstrates full correspondence with NEF across firing rates, encoding vectors, and intercepts. OZ neurons can be independently configured in real time to allow efficient spanning of a representation space, thus using fewer neurons and therefore less power for neuromorphic data representation.
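In the NEF, a neuron's response to a represented value x is set by a gain, an encoder, a bias current, and a static nonlinearity, typically J = α⟨e, x⟩ + J_bias followed by a LIF rate function G[J]. The sketch below computes such tuning curves for positive and negative encoders with a chosen intercept and maximum rate; the specific gains, time constants, and rates are illustrative and not those of the OZ hardware.

```python
import numpy as np

def lif_rate(j, tau_rc=0.02, tau_ref=0.002):
    """Steady-state LIF firing rate for a normalized drive j (threshold at j = 1)."""
    j = np.asarray(j, dtype=float)
    rate = np.zeros_like(j)
    above = j > 1.0
    rate[above] = 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / (j[above] - 1.0)))
    return rate

def tuning_curve(x, encoder, intercept, max_rate=100.0, tau_rc=0.02, tau_ref=0.002):
    """NEF-style response: J = alpha * <e, x> + J_bias, rate = G[J]."""
    # Solve for gain and bias so the neuron crosses threshold exactly at the
    # intercept and fires at `max_rate` when <e, x> = 1.
    j_max = 1.0 / (1.0 - np.exp((tau_ref - 1.0 / max_rate) / tau_rc))
    alpha = (j_max - 1.0) / (1.0 - intercept)
    j_bias = 1.0 - alpha * intercept
    return lif_rate(alpha * encoder * x + j_bias, tau_rc, tau_ref)

if __name__ == "__main__":
    x = np.linspace(-1, 1, 5)
    print("positive encoder:", np.round(tuning_curve(x, +1.0, -0.3), 1))
    print("negative encoder:", np.round(tuning_curve(x, -1.0, -0.3), 1))
```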
Affiliations
- Avi Hazan: Neuro-Biomorphic Engineering Lab, Department of Mathematics and Computer Science, The Open University of Israel, Ra'anana, Israel
- Elishai Ezra Tsur: Neuro-Biomorphic Engineering Lab, Department of Mathematics and Computer Science, The Open University of Israel, Ra'anana, Israel

41. Yang S, Gao T, Wang J, Deng B, Lansdell B, Linares-Barranco B. Efficient Spike-Driven Learning With Dendritic Event-Based Processing. Front Neurosci 2021;15:601109. PMID: 33679295. PMCID: PMC7933681. DOI: 10.3389/fnins.2021.601109.
Abstract
A critical challenge in neuromorphic computing is to present computationally efficient algorithms of learning. When implementing gradient-based learning, error information must be routed through the network, such that each neuron knows its contribution to the output and thus how to adjust its weight. This is known as the credit assignment problem. Exactly implementing a solution like backpropagation involves weight sharing, which requires additional bandwidth and computations in a neuromorphic system. Instead, models of learning from neuroscience can provide inspiration for how to communicate error information efficiently, without weight sharing. Here we present a novel dendritic event-based processing (DEP) algorithm, using a two-compartment leaky integrate-and-fire neuron with partially segregated dendrites that effectively solves the credit assignment problem. To optimize the proposed algorithm, a dynamic fixed-point representation method and a piecewise linear approximation approach are presented, and the synaptic events are binarized during learning. The presented optimization makes the proposed DEP algorithm very suitable for implementation in digital or mixed-signal neuromorphic hardware. The experimental results show that spiking representations can learn rapidly, achieving high performance with the proposed DEP algorithm. We find that the learning capability is affected by the degree of dendritic segregation and the form of the synaptic feedback connections. This study provides a bridge between biological learning and neuromorphic learning, and is meaningful for real-time applications in the field of artificial intelligence.
Collapse
Affiliation(s)
- Shuangming Yang
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
| | - Tian Gao
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
| | - Jiang Wang
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
| | - Bin Deng
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
| | - Benjamin Lansdell
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, United States
| | | |
Collapse
|
42
|
Chen M, Zu L, Wang H, Su F. FPGA-Based Real-Time Simulation Platform for Large-Scale STN-GPe Network. IEEE Trans Neural Syst Rehabil Eng 2020; 28:2537-2547. [PMID: 32991283 DOI: 10.1109/tnsre.2020.3027546] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
The real-time simulation of large-scale subthalamic nucleus (STN)-external globus pallidus (GPe) network models is of great significance for the mechanism analysis and performance improvement of deep brain stimulation (DBS) for Parkinsonian states. This paper implements the real-time simulation of a large-scale STN-GPe network containing 512 single-compartment Hodgkin-Huxley-type neurons on an Altera Stratix IV field-programmable gate array (FPGA) platform. At the single-neuron level, resource-optimization schemes such as multiplier substitution, fixed-point operation, nonlinear function approximation, and function recombination are adopted, which form the foundation of the large-scale network realization. At the network level, the simulation scale is expanded using a module-reuse method at the cost of simulation time. The correlation coefficient between the neuron firing waveform on the FPGA platform and the MATLAB software simulation waveform is 0.9756. For the same physiological time, the FPGA platform runs 75 times faster than simulation on an Intel Core i7-8700K 3.70 GHz CPU with 32 GB RAM. In addition, the established platform is used to analyze the effects of temporal-pattern DBS on network firing activity. The proposed large-scale STN-GPe network meets the requirement of real-time simulation, which is helpful in designing closed-loop DBS improvement strategies.
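To make the resource-optimization idea concrete, the sketch below shows the kind of fixed-point, piecewise-linear approximation commonly used to replace expensive nonlinearities (for example, gating functions) on an FPGA. It is a generic software model of the technique, not the authors' implementation; the Q-format, segment count, and example function are assumptions.

```python
import numpy as np

FRAC_BITS = 12           # assumed Q-format fractional bits for illustration
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    return np.round(np.asarray(x) * SCALE).astype(np.int64)

def build_pwl_table(fn, x_min, x_max, n_segments):
    """Precompute segment start points, slopes, and offsets of a PWL fit."""
    xs = np.linspace(x_min, x_max, n_segments + 1)
    ys = fn(xs)
    slopes = (ys[1:] - ys[:-1]) / (xs[1:] - xs[:-1])
    return to_fixed(xs[:-1]), to_fixed(slopes), to_fixed(ys[:-1])

def pwl_eval_fixed(xq, seg_x0, seg_slope, seg_y0):
    """Evaluate the PWL approximation using integer arithmetic only."""
    idx = np.clip(np.searchsorted(seg_x0, xq, side="right") - 1,
                  0, len(seg_x0) - 1)
    dx = xq - seg_x0[idx]
    return seg_y0[idx] + ((seg_slope[idx] * dx) >> FRAC_BITS)

# Example: approximate a sigmoid-like steady-state gating curve.
x0, m, y0 = build_pwl_table(lambda v: 1.0 / (1.0 + np.exp(-v)), -8.0, 8.0, 16)
approx = pwl_eval_fixed(to_fixed([-2.0, 0.0, 2.0]), x0, m, y0) / SCALE
print(approx)   # close to [0.119, 0.5, 0.881]
```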
Collapse
|
43
|
Kwon D, Lim S, Bae JH, Lee ST, Kim H, Seo YT, Oh S, Kim J, Yeom K, Park BG, Lee JH. On-Chip Training Spiking Neural Networks Using Approximated Backpropagation With Analog Synaptic Devices. Front Neurosci 2020; 14:423. [PMID: 32733180 PMCID: PMC7358558 DOI: 10.3389/fnins.2020.00423] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2020] [Accepted: 04/07/2020] [Indexed: 12/02/2022] Open
Abstract
Hardware-based spiking neural networks (SNNs) inspired by the biological nervous system are regarded as an innovative computing system with very low power consumption and massively parallel operation. To train SNNs with supervision, we propose an efficient on-chip training scheme that approximates the backpropagation algorithm and is suitable for hardware implementation. We show that the accuracy of the proposed scheme for SNNs is close to that of conventional artificial neural networks (ANNs) by using the stochastic characteristics of neurons. In the hardware configuration, gated Schottky diodes (GSDs) are used as synaptic devices, which have a saturated current with respect to the input voltage. We design the SNN system using the proposed on-chip training scheme with the GSDs, which can update their conductance in parallel to speed up the overall system. The performance of the on-chip training SNN system is validated through MNIST classification as a function of network size and total number of time steps. The SNN systems achieve an accuracy of 97.83% with 1 hidden layer and 98.44% with 4 hidden layers in fully connected networks. We then evaluate the effect of the non-linearity and asymmetry of the conductance response for long-term potentiation (LTP) and long-term depression (LTD) on the performance of the on-chip training SNN system. In addition, the impact of device variations on performance is evaluated.
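The non-linearity and asymmetry evaluated here are often captured with a phenomenological conductance-update model in which the step size decays as the weight approaches its bound. The sketch below is one such generic model; the parameter values are illustrative and do not represent measured GSD characteristics or the authors' device data.

```python
import numpy as np

def update_conductance(g, pulse_sign, g_min=1e-9, g_max=1e-7,
                       alpha_p=4.0e-9, alpha_d=6.0e-9,
                       beta_p=3.0, beta_d=3.0):
    """Phenomenological nonlinear/asymmetric update for an analog synapse.

    pulse_sign = +1 applies an LTP pulse, -1 an LTD pulse. The step size
    shrinks exponentially as the conductance approaches the corresponding
    bound, which models the non-linearity evaluated in the paper.
    """
    x = (g - g_min) / (g_max - g_min)               # normalized conductance
    if pulse_sign > 0:
        g = g + alpha_p * np.exp(-beta_p * x)       # LTP: saturates near g_max
    else:
        g = g - alpha_d * np.exp(-beta_d * (1 - x)) # LTD: saturates near g_min
    return float(np.clip(g, g_min, g_max))
```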
Collapse
Affiliation(s)
- Dongseok Kwon
- Department of Electrical and Computer Engineering, Inter-University Semiconductor Research Center, Seoul National University, Seoul, South Korea
| | - Suhwan Lim
- Department of Electrical and Computer Engineering, Inter-University Semiconductor Research Center, Seoul National University, Seoul, South Korea
| | - Jong-Ho Bae
- Department of Electrical and Computer Engineering, Inter-University Semiconductor Research Center, Seoul National University, Seoul, South Korea
| | - Sung-Tae Lee
- Department of Electrical and Computer Engineering, Inter-University Semiconductor Research Center, Seoul National University, Seoul, South Korea
| | - Hyeongsu Kim
- Department of Electrical and Computer Engineering, Inter-University Semiconductor Research Center, Seoul National University, Seoul, South Korea
| | - Young-Tak Seo
- Department of Electrical and Computer Engineering, Inter-University Semiconductor Research Center, Seoul National University, Seoul, South Korea
| | - Seongbin Oh
- Department of Electrical and Computer Engineering, Inter-University Semiconductor Research Center, Seoul National University, Seoul, South Korea
| | - Jangsaeng Kim
- Department of Electrical and Computer Engineering, Inter-University Semiconductor Research Center, Seoul National University, Seoul, South Korea
| | - Kyuho Yeom
- Department of Electrical and Computer Engineering, Inter-University Semiconductor Research Center, Seoul National University, Seoul, South Korea
| | - Byung-Gook Park
- Department of Electrical and Computer Engineering, Inter-University Semiconductor Research Center, Seoul National University, Seoul, South Korea
| | - Jong-Ho Lee
- Department of Electrical and Computer Engineering, Inter-University Semiconductor Research Center, Seoul National University, Seoul, South Korea
| |
Collapse
|
44
|
Abstract
This study presents a computational model to reproduce the biological dynamics of "listening to music." A biologically plausible model of periodicity pitch detection is proposed and simulated. Periodicity pitch is computed across a range of the auditory spectrum and is detected from subsets of activated auditory nerve fibers (ANFs). These activate connected model octopus cells, which trigger model neurons that detect onsets and offsets; model interval-tuned neurons are then innervated at the corresponding interval times; and finally, a set of common interval-detecting neurons indicates pitch. Octopus cells spike rhythmically with the pitch periodicity of the sound. Batteries of interval-tuned neurons measure, stopwatch-like, the inter-spike intervals of the octopus cells by coding interval durations as first-spike latencies (FSLs). The FSL-triggered spikes coincide synchronously, through a monolayer spiking neural network, at the corresponding receiver pitch neurons.
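As a didactic stand-in for the interval-measurement idea, the sketch below bins octopus-cell-like inter-spike intervals onto a bank of "interval-tuned" detectors and reads out the dominant period as the pitch. It is a deliberate simplification of the FSL/coincidence mechanism described above, using made-up spike data.

```python
import numpy as np

def estimate_pitch_from_spikes(spike_times_s, interval_bin_edges_s):
    """Toy estimate of periodicity pitch from octopus-cell-like spike times.

    Inter-spike intervals are binned onto a bank of interval detectors;
    the most strongly activated bin gives the period, and its reciprocal
    the pitch. A simplification of the paper's FSL/coincidence scheme.
    """
    isis = np.diff(np.sort(spike_times_s))
    hits = np.histogram(isis, bins=interval_bin_edges_s)[0]
    best = np.argmax(hits)
    period = 0.5 * (interval_bin_edges_s[best] + interval_bin_edges_s[best + 1])
    return 1.0 / period

# Example: spikes locked to a 200 Hz periodicity (5 ms period), with jitter.
rng = np.random.default_rng(1)
spikes = np.arange(0, 0.1, 0.005) + rng.normal(0, 1e-4, 20)
bins = np.linspace(0.001, 0.02, 96)              # candidate periods 1-20 ms
print(estimate_pitch_from_spikes(spikes, bins))  # ~200 Hz
```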
Collapse
Affiliation(s)
- Frank Klefenz
- Fraunhofer Institute for Digital Media Technology IDMT, Ilmenau, Germany
| | - Tamas Harczos
- Fraunhofer Institute for Digital Media Technology IDMT, Ilmenau, Germany
- Auditory Neuroscience and Optogenetics Laboratory, German Primate Center, Göttingen, Germany
- audifon GmbH & Co. KG, Kölleda, Germany
| |
Collapse
|
45
|
Scalable Implementation of Hippocampal Network on Digital Neuromorphic System towards Brain-Inspired Intelligence. Appl Sci (Basel) 2020. [DOI: 10.3390/app10082857] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
Abstract
In this paper, a scalable digital hippocampal spiking neural network (HSNN) is proposed to simulate the mammalian cognitive system and to reproduce the neuromodulatory dynamics that play a critical role in cognitive processes of the brain, such as memory and learning. Real-time computation of the large-scale spiking neural network is achieved through a scalable network-on-chip and a parallel topology. Compared with recent work on neuron implementations, realizing the hippocampal neuron model with a coordinate rotation numerical calculation algorithm significantly reduces the hardware resource cost. In addition, appropriate use of network-on-chip technology further improves system performance and markedly increases network scalability on a single field-programmable gate array chip. Neuromodulation dynamics are included in the proposed system, allowing it to replicate more of the relevant biological dynamics. Analysis from both biological theory and hardware-integration perspectives shows that the proposed system can reproduce the biological characteristics of the hippocampal network and may be applied to brain-inspired intelligence. This work is expected to benefit future research on the digital neuromorphic design of spiking neural networks and on the dynamics of the hippocampal network.
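The "coordinate rotation" method mentioned above presumably refers to the CORDIC family of shift-and-add algorithms; the generic rotation-mode sketch below shows why such evaluations avoid hardware multipliers. It illustrates the algorithm class only, not the paper's neuron datapath, and the iteration count is an assumption.

```python
import math

def cordic_sin_cos(angle_rad, iterations=16):
    """Rotation-mode CORDIC: sin/cos via shift-and-add style updates.

    Valid for |angle| <= ~1.74 rad without range reduction; on hardware
    the 2**-i factors become bit shifts, so no multipliers are needed.
    """
    # Precomputed arctangent table and gain-compensation factor.
    atans = [math.atan(2.0 ** -i) for i in range(iterations)]
    k = 1.0
    for i in range(iterations):
        k *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))

    x, y, z = 1.0, 0.0, angle_rad
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atans[i]
    return y * k, x * k            # (sin, cos)

print(cordic_sin_cos(0.7))         # close to (math.sin(0.7), math.cos(0.7))
```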
Collapse
|
46
|
Zhang G, Li B, Wu J, Wang R, Lan Y, Sun L, Lei S, Li H, Chen Y. A low-cost and high-speed hardware implementation of spiking neural network. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.11.045] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
47
|
Frenkel C, Lefebvre M, Legat JD, Bol D. A 0.086-mm² 12.7-pJ/SOP 64k-Synapse 256-Neuron Online-Learning Digital Spiking Neuromorphic Processor in 28-nm CMOS. IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS 2019; 13:145-158. [PMID: 30418919 DOI: 10.1109/tbcas.2018.2880425] [Citation(s) in RCA: 34] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Shifting computing architectures from von Neumann to event-based spiking neural networks (SNNs) uncovers new opportunities for low-power processing of sensory data in applications such as vision or sensorimotor control. Exploring roads toward cognitive SNNs requires the design of compact, low-power and versatile experimentation platforms with the key requirement of online learning in order to adapt and learn new features in uncontrolled environments. However, embedding online learning in SNNs is currently hindered by high incurred complexity and area overheads. In this paper, we present ODIN, a 0.086-mm² 64k-synapse 256-neuron online-learning digital spiking neuromorphic processor in 28-nm FDSOI CMOS achieving a minimum energy per synaptic operation (SOP) of 12.7 pJ. It leverages an efficient implementation of the spike-driven synaptic plasticity (SDSP) learning rule for high-density embedded online learning with only 0.68 μm² per 4-bit synapse. Neurons can be independently configured as a standard leaky integrate-and-fire model or as a custom phenomenological model that emulates the 20 Izhikevich behaviors found in biological spiking neurons. Using a single presentation of 6k 16 × 16 MNIST training images to a single-layer fully-connected 10-neuron network with on-chip SDSP-based learning, ODIN achieves a classification accuracy of 84.5%, while consuming only 15 nJ/inference at 0.55 V using rank order coding. ODIN thus enables further developments toward cognitive neuromorphic devices for low-power, adaptive and low-cost processing.
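For context, SDSP-style updates of the kind ODIN embeds are typically gated by the postsynaptic membrane potential and a calcium-like trace at each presynaptic spike. The sketch below follows that Brader-Fusi-style formulation with illustrative thresholds and a 4-bit integer weight range; it is not ODIN's exact rule or parameterization.

```python
def sdsp_update(w, v_post, ca_post, w_min=0, w_max=15,
                theta_v=0.8, theta_up=(1.0, 3.0), theta_down=(0.5, 1.5),
                dw_up=1, dw_down=1):
    """Spike-driven synaptic plasticity (SDSP) update, applied on a pre-spike.

    The postsynaptic membrane potential v_post selects the direction of the
    change, while the calcium trace ca_post (a low-pass of postsynaptic
    spikes) gates whether any change happens at all. Thresholds and the
    4-bit weight range are illustrative, not ODIN's published values.
    """
    if v_post > theta_v and theta_up[0] <= ca_post <= theta_up[1]:
        w = min(w + dw_up, w_max)        # potentiation
    elif v_post <= theta_v and theta_down[0] <= ca_post <= theta_down[1]:
        w = max(w - dw_down, w_min)      # depression
    return w

# Example: a pre-spike arrives while the postsynaptic neuron is depolarized.
print(sdsp_update(w=7, v_post=0.9, ca_post=2.0))   # -> 8 (potentiated)
```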
Collapse
|