1
Yang X, Wang J, Huang X, Wang Y, Xiao X. Forced Oscillation Detection via a Hybrid Network of a Spiking Recurrent Neural Network and LSTM. Sensors (Basel) 2025; 25:2607. [PMID: 40285296] [PMCID: PMC12031014] [DOI: 10.3390/s25082607]
Abstract
The detection of forced oscillations, especially distinguishing them from natural oscillations, has emerged as a major concern in power system stability monitoring. Deep learning (DL) holds significant potential for detecting forced oscillations accurately. However, existing artificial neural networks (ANNs) face challenges when deployed on edge devices for timely detection because of their complex computations and high power consumption. This paper proposes a novel hybrid network that integrates a spiking recurrent neural network (SRNN) with long short-term memory (LSTM). The SRNN achieves computational and energy efficiency, while the integration with LSTM helps capture temporal dependencies in time-series oscillation data effectively. The proposed hybrid network is trained using the backpropagation-through-time (BPTT) optimization algorithm, with adjustments made to handle the discontinuous gradient in the SRNN. We evaluate the proposed model on both simulated and real-world oscillation datasets. Overall, the experimental results demonstrate that the model achieves higher accuracy and superior performance in distinguishing forced oscillations from natural oscillations, even in the presence of strong noise, compared with pure LSTM and other SRNN-related models.
Affiliation(s)
- Xianyong Xiao
- College of Electrical Engineering, Sichuan University, Chengdu 610065, China; (X.Y.); (J.W.); (X.H.); (Y.W.)
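As an aside to entry 1, the surrogate-gradient adjustment mentioned in the abstract (replacing the non-differentiable spike step during BPTT) can be sketched as follows. This is a minimal, hypothetical PyTorch example; the layer sizes, the rectangular surrogate window, and the soft-reset LIF dynamics are illustrative assumptions, not the authors' implementation.

```python
# Minimal, hypothetical sketch (not the authors' code): an LIF-based spiking
# recurrent layer with a surrogate gradient for BPTT, followed by an LSTM
# readout. Sizes, leak factor, and the rectangular surrogate are assumptions.
import torch
import torch.nn as nn

class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, rectangular surrogate gradient in
    the backward pass (gradient flows only near the firing threshold)."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out * (v.abs() < 0.5).float()

class SRNNLSTM(nn.Module):
    def __init__(self, n_in, n_hidden, n_lstm, n_out, beta=0.9):
        super().__init__()
        self.fc_in = nn.Linear(n_in, n_hidden)
        self.fc_rec = nn.Linear(n_hidden, n_hidden, bias=False)
        self.lstm = nn.LSTM(n_hidden, n_lstm, batch_first=True)
        self.readout = nn.Linear(n_lstm, n_out)
        self.beta = beta                            # membrane leak factor

    def forward(self, x):                           # x: (batch, time, n_in)
        batch, steps, _ = x.shape
        v = x.new_zeros(batch, self.fc_rec.in_features)   # membrane potential
        s = torch.zeros_like(v)                     # spikes from previous step
        spikes = []
        for t in range(steps):
            v = self.beta * v + self.fc_in(x[:, t]) + self.fc_rec(s)
            s = SpikeFn.apply(v - 1.0)              # fire when v crosses 1.0
            v = v - s                               # soft reset after a spike
            spikes.append(s)
        h, _ = self.lstm(torch.stack(spikes, dim=1))
        return self.readout(h[:, -1])               # classify from last state
```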
2
Tang F, Zhang J, Zhang C, Liu L. Brain-Inspired Architecture for Spiking Neural Networks. Biomimetics (Basel) 2024; 9:646. [PMID: 39451852] [PMCID: PMC11506793] [DOI: 10.3390/biomimetics9100646]
Abstract
Spiking neural networks (SNNs), which use action potentials (spikes) to represent and transmit information, are more biologically plausible than traditional artificial neural networks. However, most existing SNNs require a separate preprocessing step to convert real-valued input into spikes before it is fed to the network. This detached spike-coding step may cause information loss and degrade performance, whereas the biological nervous system performs no such separate preprocessing. Moreover, the nervous system does not rely on a single pathway to respond to and process external stimuli; it allows multiple circuits to perceive the same stimulus. Inspired by these aspects of the biological neural system, we propose a self-adaptive encoding spiking neural network with a parallel architecture. The proposed network integrates the input-encoding process into the SNN architecture via convolutional operations, so the network can accept real-valued input and automatically transform it into spikes for further processing. In addition, the network contains two identical parallel branches, inspired by the biological nervous system, which processes information both serially and in parallel. Experimental results on multiple image classification tasks show that the proposed network obtains competitive performance, demonstrating the effectiveness of the architecture.
Affiliation(s)
- Fengzhen Tang
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Nanta Street 114, Shenyang 110016, China; (J.Z.); (C.Z.); (L.L.)
- Junhuai Zhang
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Nanta Street 114, Shenyang 110016, China; (J.Z.); (C.Z.); (L.L.)
- School of Computer Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
- Chi Zhang
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Nanta Street 114, Shenyang 110016, China; (J.Z.); (C.Z.); (L.L.)
- School of Computer Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
- Lianqing Liu
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Nanta Street 114, Shenyang 110016, China; (J.Z.); (C.Z.); (L.L.)
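A minimal sketch of the idea in entry 2, learning the spike encoding inside the network instead of as a separate preprocessing step: a convolution produces an analog input current that drives leaky integrate-and-fire units over a few timesteps. The channel counts, leak factor, and threshold below are assumed example values, not the paper's configuration.

```python
# Minimal sketch, assuming a learned "direct" encoding: a convolution turns the
# real-valued image into an input current that drives LIF units for a few
# timesteps, so spikes are generated inside the network rather than by a
# separate preprocessing step. Channel counts and constants are made up.
import torch
import torch.nn as nn

class ConvSpikeEncoder(nn.Module):
    def __init__(self, in_ch=1, out_ch=16, steps=8, beta=0.9, thresh=1.0):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.steps, self.beta, self.thresh = steps, beta, thresh

    def forward(self, x):                   # x: (batch, in_ch, H, W), analog
        current = self.conv(x)              # same input current every step
        v = torch.zeros_like(current)
        spike_train = []
        for _ in range(self.steps):
            v = self.beta * v + current
            s = (v > self.thresh).float()
            v = v - s * self.thresh         # reset by subtraction
            spike_train.append(s)
        return torch.stack(spike_train)     # (steps, batch, out_ch, H, W)
```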
3
Liu G, Deng W, Xie X, Huang L, Tang H. Human-Level Control Through Directly Trained Deep Spiking Q-Networks. IEEE Trans Cybern 2023; 53:7187-7198. [PMID: 36063509] [DOI: 10.1109/tcyb.2022.3198259]
Abstract
As third-generation neural networks, spiking neural networks (SNNs) have great potential on neuromorphic hardware because of their high energy efficiency. However, deep spiking reinforcement learning (DSRL), that is, reinforcement learning (RL) based on SNNs, is still in its preliminary stage owing to the binary output and the nondifferentiable property of the spiking function. To address these issues, we propose a deep spiking Q-network (DSQN) in this article. Specifically, we propose a directly trained DSRL architecture based on leaky integrate-and-fire (LIF) neurons and the deep Q-network (DQN). We then adapt a direct spiking learning algorithm for the DSQN and theoretically demonstrate the advantages of using LIF neurons in the DSQN. Comprehensive experiments on 17 top-performing Atari games compare our method with the state-of-the-art conversion method. The results demonstrate the superiority of our method in terms of performance, stability, generalization, and energy efficiency. To the best of our knowledge, our work is the first to achieve state-of-the-art performance on multiple Atari games with a directly trained SNN.
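For entry 3, a hedged sketch of a directly trained spiking Q-network: LIF hidden units are unrolled for a few timesteps and Q-values are read out by averaging the output layer's input current over the window. The readout scheme, layer sizes, and timestep count are illustrative assumptions; training would additionally need a surrogate gradient such as the one sketched under entry 1.

```python
# Hedged sketch of a directly trained spiking Q-network: LIF hidden units
# unrolled over a short window, Q-values read out by averaging the output
# layer's input current. Layer sizes, timestep count, and the readout scheme
# are illustrative assumptions; training would also need a surrogate gradient.
import torch
import torch.nn as nn

class SpikingQNet(nn.Module):
    def __init__(self, n_obs, n_hidden, n_actions, steps=4, beta=0.8):
        super().__init__()
        self.fc1 = nn.Linear(n_obs, n_hidden)
        self.fc2 = nn.Linear(n_hidden, n_actions)
        self.steps, self.beta = steps, beta

    def forward(self, obs):                        # obs: (batch, n_obs)
        v = obs.new_zeros(obs.size(0), self.fc1.out_features)
        q = obs.new_zeros(obs.size(0), self.fc2.out_features)
        for _ in range(self.steps):
            v = self.beta * v + self.fc1(obs)      # constant-current coding
            s = (v > 1.0).float()                  # spike at threshold 1.0
            v = v * (1.0 - s)                      # hard reset on spike
            q = q + self.fc2(s)                    # accumulate Q estimates
        return q / self.steps                      # averaged Q-values

# q_values = SpikingQNet(4, 64, 2)(torch.randn(1, 4)); action = q_values.argmax(1)
```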
4
Yu C, Gu Z, Li D, Wang G, Wang A, Li E. STSC-SNN: Spatio-Temporal Synaptic Connection with temporal convolution and attention for spiking neural networks. Front Neurosci 2022; 16:1079357. [PMID: 36620452] [PMCID: PMC9817103] [DOI: 10.3389/fnins.2022.1079357]
Abstract
Spiking neural networks (SNNs), one of the algorithmic models in neuromorphic computing, have gained a great deal of research attention owing to their temporal information processing capability, low power consumption, and high biological plausibility. Their potential to efficiently extract spatio-temporal features makes them suitable for processing event streams. However, existing synaptic structures in SNNs are mostly fully connected layers or spatial 2D convolutions, neither of which extracts temporal dependencies adequately. In this work, we take inspiration from biological synapses and propose a Spatio-Temporal Synaptic Connection SNN (STSC-SNN) model that enhances the spatio-temporal receptive fields of synaptic connections, thereby establishing temporal dependencies across layers. Specifically, we incorporate temporal convolution and attention mechanisms to implement synaptic filtering and gating functions. We show that endowing synaptic models with temporal dependencies can improve the performance of SNNs on classification tasks. In addition, we investigate how varied spatio-temporal receptive fields affect performance and reevaluate the temporal modules in SNNs. Our approach is tested on neuromorphic datasets, including DVS128 Gesture (gesture recognition), N-MNIST and CIFAR10-DVS (image classification), and SHD (speech digit recognition). The results show that the proposed model surpasses state-of-the-art accuracy on nearly all of these datasets.
Affiliation(s)
- Chengting Yu
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Zhejiang University - University of Illinois at Urbana-Champaign Institute, Zhejiang University, Haining, China
- Zheming Gu
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Da Li
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Gaoang Wang
- Zhejiang University - University of Illinois at Urbana-Champaign Institute, Zhejiang University, Haining, China
- Aili Wang
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Zhejiang University - University of Illinois at Urbana-Champaign Institute, Zhejiang University, Haining, China
- Erping Li
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Zhejiang University - University of Illinois at Urbana-Champaign Institute, Zhejiang University, Haining, China
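A small sketch related to entry 4: a depthwise 1-D convolution along the time axis acting as a learnable synaptic filter, combined with a simple sigmoid gate. This only illustrates temporal filtering and gating on spike trains; the module names and kernel size are invented, and the actual STSC-SNN design may differ.

```python
# Illustrative sketch only: a depthwise 1-D convolution along the time axis as
# a learnable synaptic filter, plus a sigmoid gate standing in for an
# attention-like gating function. Module names and kernel size are invented.
import torch
import torch.nn as nn

class TemporalSynapse(nn.Module):
    def __init__(self, channels, kernel=5):
        super().__init__()
        # groups=channels -> one temporal filter per channel (depthwise).
        self.filt = nn.Conv1d(channels, channels, kernel,
                              padding=kernel // 2, groups=channels)
        self.gate = nn.Conv1d(channels, channels, 1)   # 1x1 gating projection

    def forward(self, spikes):              # spikes: (batch, channels, time)
        filtered = self.filt(spikes)        # temporal filtering (synaptic trace)
        return filtered * torch.sigmoid(self.gate(spikes))

# out = TemporalSynapse(32)(torch.bernoulli(torch.rand(8, 32, 100)))
```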
5
Mo L, Wang G, Long E, Zhuo M. ALSA: Associative Learning Based Supervised Learning Algorithm for SNN. Front Neurosci 2022; 16:838832. [PMID: 35431777] [PMCID: PMC9008323] [DOI: 10.3389/fnins.2022.838832]
Abstract
The spiking neural network (SNN) is considered the brain-like model that best conforms to the biological mechanisms of the brain. Because spikes are non-differentiable, training methods for SNNs remain incomplete. This paper proposes ALSA, a supervised learning method for SNNs based on associative learning. The method rests on the associative learning mechanism, and its realization resembles the conditioned-reflex process in animals, giving it strong physiological plausibility. It uses improved spike-timing-dependent plasticity (STDP) rules, combined with a teacher layer that induces spikes in neurons, to strengthen synaptic connections between input spike patterns and the specified output neurons and to weaken connections between unrelated patterns and unrelated output neurons. Using ALSA, this paper also completes supervised classification tasks on the IRIS and MNIST datasets, achieving 95.7% and 91.58% recognition accuracy, respectively, which shows that ALSA is a feasible supervised learning method for SNNs. The contribution of this paper is a biologically plausible supervised learning method for SNNs, based on STDP learning rules and the associative learning mechanism that is widely observed in animal training.
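For entry 5, the classic pairwise STDP rule that ALSA-style methods build on can be written in a few lines; the learning rates and time constant below are arbitrary example values, not taken from the paper.

```python
# Sketch of the classic pairwise STDP rule that ALSA-style methods build on;
# the learning rates and time constant are arbitrary example values.
import numpy as np

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:                              # pre before post -> potentiation
        return a_plus * np.exp(-dt / tau)
    return -a_minus * np.exp(dt / tau)       # post before pre -> depression

print(stdp_dw(10.0, 15.0))   # pre then post: ~ +0.0078 (strengthen)
print(stdp_dw(15.0, 10.0))   # post then pre: ~ -0.0093 (weaken)
```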
6
7
Zuo L, Chen Y, Zhang L, Chen C. A spiking neural network with probability information transmission. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.01.109]
8
Yang Z, Xie X, Zhan Q, Liu G, Cai Q, Zheng X. A neural-network-based framework for cigarette laser code identification. Neural Comput Appl 2020. [DOI: 10.1007/s00521-019-04647-2]
9
Xie X, Liu G, Cai Q, Sun G, Zhang M, Qu H. An end-to-end functional spiking model for sequential feature learning. Knowl Based Syst 2020. [DOI: 10.1016/j.knosys.2020.105643]
10
Wang X, Lin X, Dang X. Supervised learning in spiking neural networks: A review of algorithms and evaluations. Neural Netw 2020; 125:258-280. [PMID: 32146356] [DOI: 10.1016/j.neunet.2020.02.011]
Abstract
As a new brain-inspired computational model of the artificial neural network, a spiking neural network encodes and processes neural information through precisely timed spike trains. Spiking neural networks are composed of biologically plausible spiking neurons, which have become suitable tools for processing complex temporal or spatiotemporal information. However, because of their intricately discontinuous and implicit nonlinear mechanisms, the formulation of efficient supervised learning algorithms for spiking neural networks is difficult, and has become an important problem in this research field. This article presents a comprehensive review of supervised learning algorithms for spiking neural networks and evaluates them qualitatively and quantitatively. First, a comparison between spiking neural networks and traditional artificial neural networks is provided. The general framework and some related theories of supervised learning for spiking neural networks are then introduced. Furthermore, the state-of-the-art supervised learning algorithms in recent years are reviewed from the perspectives of applicability to spiking neural network architecture and the inherent mechanisms of supervised learning algorithms. A performance comparison of spike train learning of some representative algorithms is also made. In addition, we provide five qualitative performance evaluation criteria for supervised learning algorithms for spiking neural networks and further present a new taxonomy for supervised learning algorithms depending on these five performance evaluation criteria. Finally, some future research directions in this research field are outlined.
Affiliation(s)
- Xiangwen Wang
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, People's Republic of China
- Xianghong Lin
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, People's Republic of China.
- Xiaochao Dang
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, People's Republic of China
11
12
Jeyasothy A, Sundaram S, Sundararajan N. SEFRON: A New Spiking Neuron Model With Time-Varying Synaptic Efficacy Function for Pattern Classification. IEEE Trans Neural Netw Learn Syst 2019; 30:1231-1240. [PMID: 30273156] [DOI: 10.1109/tnnls.2018.2868874]
Abstract
This paper presents a new time-varying long-term Synaptic Efficacy Function-based leaky-integrate-and-fire neuRON model, referred to as SEFRON, and its supervised learning rule for pattern classification problems. The time-varying synaptic efficacy function is represented by a sum of amplitude-modulated Gaussian distribution functions located at different times. For a given pattern, SEFRON's learning rule determines the changes in the weight amplitudes at selected presynaptic spike times by minimizing a new error function reflecting the differences between the desired and actual postsynaptic firing times. Similar to the gamma-aminobutyric acid-switch phenomenon observed in a biological neuron, which switches between excitatory and inhibitory postsynaptic potentials based on physiological needs, the time-varying synapse model proposed in this paper allows the synaptic efficacy (weight) to switch sign in a continuous manner. The computational power and functioning of SEFRON are first illustrated using a binary pattern classification problem. Detailed performance comparisons of a single SEFRON classifier with other spiking neural networks (SNNs) are also presented using four benchmark data sets from the UCI machine learning repository. The results clearly indicate that a single SEFRON provides generalization performance similar to that of other SNNs with multiple layers and multiple neurons.
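For entry 12, the time-varying synaptic efficacy described in the abstract, a sum of amplitude-modulated Gaussians whose amplitudes may be negative so the weight can change sign continuously, can be sketched as follows; the centres, width, and amplitudes are made-up example values.

```python
# Sketch of a time-varying synaptic efficacy built as a sum of
# amplitude-modulated Gaussians; because the amplitudes may be negative, the
# weight can change sign continuously over time. All values are made up.
import numpy as np

def efficacy(t, centers, amplitudes, sigma=1.0):
    """w(t) = sum_k a_k * exp(-(t - t_k)^2 / (2 * sigma^2))."""
    t = np.atleast_1d(t).astype(float)[:, None]          # shape (T, 1)
    g = np.exp(-(t - centers[None, :]) ** 2 / (2.0 * sigma ** 2))
    return g @ amplitudes                                # shape (T,)

centers = np.array([1.0, 3.0, 5.0])      # example presynaptic spike times (ms)
amplitudes = np.array([0.8, -0.5, 0.3])  # learned amplitudes (sign can flip)
print(efficacy(np.linspace(0.0, 6.0, 7), centers, amplitudes))
```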
13
Nazari S, Faez K. Spiking pattern recognition using informative signal of image and unsupervised biologically plausible learning. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2018.10.066]
14
15
Zheng Y, Li S, Yan R, Tang H, Tan KC. Sparse Temporal Encoding of Visual Features for Robust Object Recognition by Spiking Neurons. IEEE Trans Neural Netw Learn Syst 2018; 29:5823-5833. [PMID: 29994102] [DOI: 10.1109/tnnls.2018.2812811]
Abstract
Robust object recognition in spiking neural systems remains a challenge in neuromorphic computing, as it requires solving both the effective encoding of sensory information and its integration with downstream learning neurons. We target this problem by developing a spiking neural system consisting of sparse temporal encoding and a temporal classifier. We propose a sparse temporal encoding algorithm that exploits both spatial and temporal information derived from a spike-timing-dependent-plasticity-based HMAX feature extraction process. The resulting temporal feature representation is therefore better suited to integration with a temporal classifier based on spiking neurons than with a nontemporal classifier. The algorithm has been validated on two benchmark data sets, and the results show that the temporal feature encoding and learning-based method achieves high recognition accuracy. The proposed model provides an efficient approach to feature representation and recognition in a consistent temporal learning framework that is easily adapted to neuromorphic implementations.
16
Dong M, Huang X, Xu B. Unsupervised speech recognition through spike-timing-dependent plasticity in a convolutional spiking neural network. PLoS One 2018; 13:e0204596. [PMID: 30496179] [PMCID: PMC6264808] [DOI: 10.1371/journal.pone.0204596]
Abstract
Speech recognition (SR) has been improved significantly by artificial neural networks (ANNs), but ANNs suffer from biological implausibility and excessive power consumption because of the nonlocal transfer of real-valued errors and weights. Spiking neural networks (SNNs) have the potential to overcome these drawbacks thanks to their efficient spike communication and their natural ability to exploit the kinds of synaptic plasticity rules found in the brain for weight modification. However, existing SNN models for SR either performed poorly or were trained in biologically implausible ways. In this paper, we present a biologically inspired convolutional SNN model for SR. The network adopts the time-to-first-spike coding scheme for fast and efficient information processing. A biological learning rule, spike-timing-dependent plasticity (STDP), is used to adjust the synaptic weights of convolutional neurons to form receptive fields in an unsupervised way. In the convolutional structure, a local weight-sharing strategy is introduced, which can lead to better feature extraction of speech signals than global weight sharing. We first evaluated the SNN model with a linear support vector machine (SVM) on the TIDIGITS dataset, where it reached 97.5% accuracy, comparable to the best ANN results. Deeper analysis of the network outputs showed that the output data are not only more linearly separable but also lower-dimensional and sparse. To further confirm the validity of our model, we trained it on a more difficult recognition task based on the TIMIT dataset, where it achieved a high accuracy of 93.8%. Moreover, a linear spike-based classifier, the tempotron, can also achieve accuracies very close to those of the SVM on both tasks. These results demonstrate that an STDP-based convolutional SNN model equipped with local weight sharing and temporal coding can solve the SR task accurately and efficiently.
Affiliation(s)
- Meng Dong
- School of Automation, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Xuhui Huang
- Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Bo Xu
- Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Beijing, China
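For entry 16, the time-to-first-spike coding scheme mentioned in the abstract can be illustrated with a tiny function that maps stronger inputs to earlier spike times; the normalization and encoding window below are assumptions, not the paper's exact scheme.

```python
# Sketch of time-to-first-spike coding: stronger inputs fire earlier within a
# fixed encoding window; the normalization and window length are assumptions.
import numpy as np

def time_to_first_spike(x, t_max=10.0):
    """Map intensities in [0, 1] to spike times in [0, t_max]; a zero input
    never spikes (time = inf)."""
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    times = (1.0 - x) * t_max
    times[x == 0.0] = np.inf
    return times

print(time_to_first_spike([1.0, 0.5, 0.1, 0.0]))   # [ 0.  5.  9. inf]
```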
17
Xu X, Jin X, Yan R, Fang Q, Lu W. Visual Pattern Recognition Using Enhanced Visual Features and PSD-Based Learning Rule. IEEE Trans Cogn Dev Syst 2018. [DOI: 10.1109/tcds.2017.2769166]
18
Kulkarni SR, Rajendran B. Spiking neural networks for handwritten digit recognition-Supervised learning and network optimization. Neural Netw 2018; 103:118-127. [PMID: 29674234] [DOI: 10.1016/j.neunet.2018.03.019]
Abstract
We demonstrate supervised learning in spiking neural networks (SNNs) for handwritten digit recognition using the spike-triggered Normalized Approximate Descent (NormAD) algorithm. Our network, whose neurons operate at sparse biological spike rates below 300 Hz, achieves a classification accuracy of 98.17% on the MNIST test database with four times fewer parameters than the state of the art. We present several insights from extensive numerical experiments on optimizing the learning parameters and network configuration to improve accuracy. We also describe a number of strategies for optimizing the SNN for implementation in memory- and energy-constrained hardware, including approximations in computing the neuronal dynamics and reduced precision in storing the synaptic weights. Experiments reveal that even with 3-bit synaptic weights, the classification accuracy of the designed SNN degrades by no more than 1% compared with the floating-point baseline. Further, the proposed SNN, which is trained on precise spike-timing information, outperforms an equivalent non-spiking artificial neural network (ANN) trained using backpropagation, especially at low bit precision. Our study thus shows the potential for realizing efficient neuromorphic systems that use spike-based information encoding and learning for real-world applications.
Affiliation(s)
- Shruti R Kulkarni
- Department of Electrical and Computer Engineering, New Jersey Institute of Technology, NJ, 07102, USA
- Bipin Rajendran
- Department of Electrical and Computer Engineering, New Jersey Institute of Technology, NJ, 07102, USA.
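For entry 18, the reduced-precision experiment (3-bit synaptic weights) can be illustrated with a generic symmetric uniform quantizer; this mapping is an assumption for illustration and is not necessarily the quantization used in the paper.

```python
# Sketch of the reduced-precision idea: round synaptic weights onto a small
# symmetric uniform grid (here 7 levels for 3 bits). This generic quantizer is
# an assumption, not necessarily the mapping used in the paper.
import numpy as np

def quantize(weights, bits=3):
    w = np.asarray(weights, dtype=float)
    w_max = np.max(np.abs(w))
    if w_max == 0.0:
        return w
    levels = 2 ** (bits - 1) - 1           # 3 bits -> integer levels -3..3
    return np.round(w / w_max * levels) / levels * w_max

w = np.random.randn(5)
print(w)
print(quantize(w, bits=3))   # same scale, only a few distinct magnitudes
```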
19
Xie X, Qu H, Liu G, Zhang M. Efficient training of supervised spiking neural networks via the normalized perceptron based learning rule. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2017.01.086]