1. Zhang R, Leng L, Che K, Zhang H, Cheng J, Guo Q, Liao J, Cheng R. Accurate and Efficient Event-Based Semantic Segmentation Using Adaptive Spiking Encoder-Decoder Network. IEEE Trans Neural Netw Learn Syst 2025; 36:9326-9340. PMID: 39178071. DOI: 10.1109/tnnls.2024.3437415.
Abstract
Spiking neural networks (SNNs), known for their low-power, event-driven computation and intrinsic temporal dynamics, are emerging as promising solutions for processing dynamic, asynchronous signals from event-based sensors. Despite their potential, SNNs face challenges in training and architectural design, resulting in limited performance on challenging event-based dense prediction tasks compared with artificial neural networks (ANNs). In this work, we develop an efficient spiking encoder-decoder network (SpikingEDN) for large-scale event-based semantic segmentation (EbSS) tasks. To enhance learning efficiency from dynamic event streams, we harness an adaptive threshold, which improves network accuracy, sparsity, and robustness in streaming inference. Moreover, we develop a dual-path spiking spatially adaptive modulation (SSAM) module, specifically tailored to enhance the representation of sparse events and multimodal inputs, thereby considerably improving network performance. Our SpikingEDN attains a mean intersection over union (MIoU) of 72.57% on the DDD17 dataset and 58.32% on the larger DSEC-Semantic dataset, showing results competitive with state-of-the-art ANNs while requiring substantially fewer computational resources. Our results shed light on the untapped potential of SNNs in event-based vision applications. The source code is publicly available at https://github.com/EMI-Group/spikingedn.
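As an illustration of the adaptive threshold mentioned in this abstract: in its common form, the firing threshold of a leaky integrate-and-fire (LIF) neuron rises after each spike and decays back toward a baseline, which sparsifies firing under sustained input. The sketch below shows that generic mechanism in NumPy; the variable names and constants are illustrative and are not taken from the SpikingEDN code.

```python
import numpy as np

def adaptive_lif_step(v, theta, x, tau_v=10.0, tau_theta=50.0,
                      theta0=1.0, beta=0.2):
    """One step of a leaky integrate-and-fire neuron with an adaptive threshold.

    v     : membrane potentials (array)
    theta : per-neuron adaptive thresholds
    x     : input current at this time step
    """
    v = v + (x - v) / tau_v                  # leaky integration
    spike = (v >= theta).astype(np.float32)  # fire where the potential crosses the threshold
    v = v * (1.0 - spike)                    # hard reset after a spike
    theta = theta + (theta0 - theta) / tau_theta + beta * spike  # rises on spikes, decays to baseline
    return v, theta, spike

# toy run over a short input stream
rng = np.random.default_rng(0)
v, theta = np.zeros(4), np.ones(4)
for t in range(20):
    v, theta, s = adaptive_lif_step(v, theta, rng.random(4))
```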
2. Wang B, Zhang X, Wang S, Lin N, Li Y, Yu Y, Zhang Y, Yang J, Wu X, He Y, Wang S, Wan T, Chen R, Li G, Deng Y, Qi X, Wang Z, Shang D. Topology optimization of random memristors for input-aware dynamic SNN. Sci Adv 2025; 11:eads5340. PMID: 40238875. PMCID: PMC12002125. DOI: 10.1126/sciadv.ads5340.
Abstract
Machine learning has advanced unprecedentedly, as exemplified by GPT-4 and SORA. However, such models cannot match human brains in efficiency and adaptability, owing to differences in signal representation, optimization, runtime reconfigurability, and hardware architecture. To address these challenges, we introduce pruning optimization for input-aware dynamic memristive spiking neural networks (PRIME). PRIME uses spiking neurons to emulate the brain's spiking mechanisms and optimizes the topology of random memristive SNNs in a manner inspired by structural plasticity, effectively mitigating memristor programming stochasticity. It also uses an input-aware early-stop policy to reduce latency and leverages memristive in-memory computing to mitigate the von Neumann bottleneck. Validated on a 40-nm, 256-K memristor-based macro, PRIME achieves classification accuracy and inception score comparable to software baselines, with energy efficiency improvements of 37.8× and 62.5×. In addition, it reduces computational loads by 77% and 12.5% with minimal performance degradation and demonstrates robustness to stochastic memristor noise. PRIME paves the way for brain-inspired neuromorphic computing.
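The input-aware early-stop policy described above is, in spirit, a rule that halts time-step iteration once the running prediction is confident enough, so easy inputs consume fewer steps and less energy. Below is a minimal, hypothetical sketch of such a policy; the confidence measure, threshold, and step function are assumptions for illustration, not details of the PRIME macro.

```python
import numpy as np

def run_with_early_stop(step_fn, x, t_max=16, conf_thresh=0.9):
    """Accumulate per-class output spikes over time and stop early once the
    softmax over the running counts exceeds a confidence threshold."""
    counts = None
    for t in range(t_max):
        out_spikes = step_fn(x, t)                  # one SNN time step -> per-class spikes
        counts = out_spikes if counts is None else counts + out_spikes
        p = np.exp(counts - counts.max())
        p /= p.sum()
        if p.max() >= conf_thresh:                  # confident enough: stop iterating
            return int(p.argmax()), t + 1
    return int(p.argmax()), t_max

# toy step function: class 2 spikes most often
demo = lambda x, t: (np.random.default_rng(t).random(5) < [0.1, 0.1, 0.8, 0.1, 0.1]).astype(float)
label, steps_used = run_with_early_stop(demo, x=None)
```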
Affiliation(s)
- Bo Wang
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
  - ACCESS – AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
- Xinyuan Zhang
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
  - ACCESS – AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
- Shaocong Wang
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
  - ACCESS – AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
- Ning Lin
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
  - ACCESS – AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
- Yi Li
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
  - State Key Lab of Fabrication Technologies for Integrated Circuits, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, China
  - Laboratory of Microelectronic Devices & Integrated Technology, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, China
- Yifei Yu
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
  - ACCESS – AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
- Yue Zhang
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
  - ACCESS – AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
- Jichang Yang
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
  - ACCESS – AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
- Xiaoshan Wu
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
  - ACCESS – AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
- Yangu He
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
  - ACCESS – AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
- Songqi Wang
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
  - ACCESS – AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
- Tao Wan
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
- Rui Chen
  - State Key Lab of Fabrication Technologies for Integrated Circuits, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, China
  - University of Chinese Academy of Sciences, Beijing 100049, China
- Guoqi Li
  - University of Chinese Academy of Sciences, Beijing 100049, China
  - Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Yue Deng
  - School of Artificial Intelligence, Beihang University, Beijing 100191, China
  - School of Astronautics, Beihang University, Beijing 100191, China
- Xiaojuan Qi
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
- Zhongrui Wang
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
  - ACCESS – AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
- Dashan Shang
  - State Key Lab of Fabrication Technologies for Integrated Circuits, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, China
  - Laboratory of Microelectronic Devices & Integrated Technology, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, China
  - University of Chinese Academy of Sciences, Beijing 100049, China
3. Zhu RJ, Zhang M, Zhao Q, Deng H, Duan Y, Deng LJ. TCJA-SNN: Temporal-Channel Joint Attention for Spiking Neural Networks. IEEE Trans Neural Netw Learn Syst 2025; 36:5112-5125. PMID: 38598397. DOI: 10.1109/tnnls.2024.3377717.
Abstract
Spiking neural networks (SNNs) are attracting widespread interest due to their biological plausibility, energy efficiency, and powerful spatiotemporal information representation ability. Given the critical role of attention mechanisms in enhancing neural network performance, the integration of SNNs with attention mechanisms has tremendous potential to deliver energy-efficient and high-performance computing paradigms. In this article, we present a novel temporal-channel joint attention mechanism for SNNs, referred to as TCJA-SNN. The proposed TCJA-SNN framework can effectively assess the significance of spike sequences along both spatial and temporal dimensions. More specifically, our essential technical contributions are as follows: 1) we employ a squeeze operation to compress the spike stream into an average matrix and then leverage two local attention mechanisms based on efficient 1-D convolutions to facilitate comprehensive feature extraction at the temporal and channel levels independently; and 2) we introduce the cross-convolutional fusion (CCF) layer as a novel approach to model the interdependencies between the temporal and channel scopes. This layer breaks the independence of the two dimensions and enables interaction between their features. Experimental results demonstrate that the proposed TCJA-SNN outperforms the state-of-the-art (SOTA) on all standard static and neuromorphic datasets, including Fashion-MNIST, CIFAR10, CIFAR100, CIFAR10-DVS, N-Caltech 101, and DVS128 Gesture. Furthermore, we effectively apply the TCJA-SNN framework to image generation tasks by leveraging a variational autoencoder. To the best of our knowledge, this study is the first instance in which an SNN attention mechanism has been employed for high-level classification and low-level generation tasks. Our implementation code is available at https://github.com/ridgerchu/TCJA.
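The squeeze-plus-1-D-convolution attention described in this abstract can be pictured as: average the spike tensor over its spatial dimensions to obtain a T×C matrix, run one 1-D convolution along the temporal axis and one along the channel axis, and fuse the two maps into attention weights. The NumPy sketch below is a schematic reading of that idea under stated assumptions, not the authors' implementation; in particular, the elementwise-product fusion stands in for the paper's CCF layer.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tc_joint_attention(spikes, k_t, k_c):
    """Schematic temporal-channel attention for a spike tensor of shape (T, C, H, W).

    spikes : binary/real spike tensor
    k_t    : 1-D kernel applied along the temporal axis
    k_c    : 1-D kernel applied along the channel axis
    """
    T, C, H, W = spikes.shape
    m = spikes.mean(axis=(2, 3))                                        # squeeze: (T, C) average matrix
    a_t = np.stack([np.convolve(m[:, c], k_t, mode='same') for c in range(C)], axis=1)
    a_c = np.stack([np.convolve(m[t, :], k_c, mode='same') for t in range(T)], axis=0)
    attn = sigmoid(a_t * a_c)                                           # fuse the two maps into (T, C) weights
    return spikes * attn[:, :, None, None]                              # reweight the spike tensor

x = (np.random.default_rng(0).random((4, 8, 16, 16)) < 0.2).astype(np.float32)
y = tc_joint_attention(x, k_t=np.array([0.25, 0.5, 0.25]), k_c=np.array([0.25, 0.5, 0.25]))
```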
4. Nazari S, Amiri M. An accurate and fast learning approach in the biologically spiking neural network. Sci Rep 2025; 15:6585. PMID: 39994277. PMCID: PMC11850897. DOI: 10.1038/s41598-025-90113-0.
Abstract
Computations adapted from the interactions of neurons in the nervous system have the potential to be a strong foundation for building computers with cognitive functions, including decision-making, generalization, and real-time learning. In this context, a proposed intelligent machine is built on nervous-system mechanisms. Accordingly, the output and middle layers of the machine are made up of populations of pyramidal neurons and interneurons, AMPA/GABA receptors, and excitatory and inhibitory neurotransmitters, while the input layer is derived from a retinal model. A machine with a structure grounded in biological evidence also needs to learn based on biological evidence. To this end, the PSAC (Power-STDP Actor-Critic) learning algorithm was developed as a new learning mechanism based on unsupervised and reinforcement learning procedures. Four datasets, MNIST, EMNIST, CIFAR10, and CIFAR100, were used to confirm the performance of the proposed learning algorithm against deep and spiking networks, and accuracies of 97.7%, 97.95% (digits) and 93.73% (letters), 93.6%, and 75%, respectively, were obtained, which represents an improvement over previous spiking networks. The suggested learning strategy not only outperforms earlier spike-based learning techniques in terms of accuracy but also exhibits a faster rate of convergence during the training phase.
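One common way to combine unsupervised spike-timing plasticity with a reinforcement signal, as an actor-critic style rule of the kind this abstract alludes to, is reward-modulated STDP: an eligibility trace accumulates pair-based STDP terms, and a scalar reward gates the actual weight change. The sketch below illustrates that generic scheme; it is not the PSAC rule itself, and all constants and trace forms are illustrative assumptions.

```python
import numpy as np

def rstdp_update(w, pre_trace, post_trace, pre_spk, post_spk, reward,
                 lr=1e-3, tau=20.0):
    """Generic reward-modulated STDP step (pre-before-post potentiation,
    post-before-pre depression, gated by a scalar reward)."""
    pre_trace  = pre_trace  * np.exp(-1.0 / tau) + pre_spk    # low-pass filtered spike trains
    post_trace = post_trace * np.exp(-1.0 / tau) + post_spk
    elig = np.outer(post_spk, pre_trace) - np.outer(post_trace, pre_spk)
    w = np.clip(w + lr * reward * elig, 0.0, 1.0)             # reward gates the plasticity
    return w, pre_trace, post_trace

rng = np.random.default_rng(1)
w = rng.random((5, 10)) * 0.1                                 # post x pre weight matrix
pre_tr, post_tr = np.zeros(10), np.zeros(5)
for t in range(50):
    pre = (rng.random(10) < 0.1).astype(float)
    post = (rng.random(5) < 0.1).astype(float)
    w, pre_tr, post_tr = rstdp_update(w, pre_tr, post_tr, pre, post, reward=1.0)
```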
Affiliation(s)
- Soheila Nazari
  - Faculty of Electrical Engineering, Shahid Beheshti University, Tehran, Iran
- Masoud Amiri
  - Department of Biomedical Engineering, School of Medicine, Kermanshah University of Medical Sciences, Kermanshah, Iran
5. Wang T, Tian M, Wang H, Zhong Z, He J, Tang F, Zhou X, Lin Y, Yu SM, Liu L, Shi C. MorphBungee: A 65-nm 7.2-mm² 27-µJ/Image Digital Edge Neuromorphic Chip With on-Chip 802-Frame/s Multi-Layer Spiking Neural Network Learning. IEEE Trans Biomed Circuits Syst 2025; 19:209-225. PMID: 38861446. DOI: 10.1109/tbcas.2024.3412908.
Abstract
This paper presents a digital edge neuromorphic spiking neural network (SNN) processor chip for a variety of edge intelligent cognitive applications. The processor allows high-speed, high-accuracy, and fully on-chip spike-timing-based multi-layer SNN learning. It is characterized by a hierarchical multi-core architecture, an event-driven processing paradigm, a meta-crossbar for efficient spike communication, and hybrid, reconfigurable parallelism. A prototype chip occupying an active silicon area of 7.2 mm² was fabricated using a 65-nm 1P9M CMOS process. When running a 256-256-256-256-200 four-layer fully-connected SNN on downscaled 16 × 16 MNIST images, it typically achieved a throughput of 802 and 2270 frames/s for on-chip learning and inference, respectively, with a relatively low power dissipation of around 61 mW at a 100 MHz clock rate under a 1.0 V core power supply. Our on-chip learning results in comparably high visual recognition accuracies of 96.06%, 83.38%, 84.53%, 99.22%, and 100% on the MNIST, Fashion-MNIST, ETH-80, Yale-10, and ORL-10 datasets, respectively. In addition, we have successfully applied our neuromorphic chip to high-resolution satellite cloud image segmentation and to non-visual tasks including olfactory classification and textual news categorization. These results indicate that our neuromorphic chip is suitable for various intelligent edge systems with restricted cost, energy, and latency budgets that require in-situ self-adaptive learning capability.
6. Zhang J, Zhang M, Wang Y, Liu Q, Yin B, Li H, Yang X. Spiking Neural Networks with Adaptive Membrane Time Constant for Event-Based Tracking. IEEE Trans Image Process 2025; PP:1009-1021. PMID: 40031251. DOI: 10.1109/tip.2025.3533213.
Abstract
The brain-inspired Spiking Neural Networks (SNNs) work in an event-driven manner and have an implicit recurrence in the neuronal membrane potential that memorizes information over time, which makes them inherently suitable for handling temporal, event-based streams. Despite this temporal nature and recent methodological advances, such methods have predominantly been assessed on event-based classification tasks. In this paper, we explore the utility of SNNs for event-based tracking tasks. Specifically, we propose a brain-inspired adaptive Leaky Integrate-and-Fire neuron (BA-LIF) that adaptively adjusts its membrane time constant according to the inputs, thereby accelerating the leakage of meaningless noise features while reducing the decay of valuable information. SNNs composed of our proposed BA-LIF neurons can achieve high performance without careful, time-consuming trial-and-error initialization of the membrane time constant. The adaptive capability of our network is further improved by introducing an extra temporal feature aggregator (TFA) that assigns attention weights over the temporal dimension. Extensive experiments on various event-based tracking datasets validate the effectiveness of our proposed method. We further validate the generalization capability of our method by applying it to other event-classification tasks.
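The BA-LIF idea of an input-dependent membrane time constant can be sketched as a leak factor computed from the current input, e.g. through a sigmoid, so that strong (likely informative) inputs are retained while weak (likely noisy) activity decays quickly. The mapping below is an illustrative assumption rather than the exact parameterization in the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ba_lif_step(v, x, a=2.0, b=0.0, v_th=1.0):
    """LIF step whose decay factor is computed from the input itself.

    The leak 'decay' in (0, 1) grows with input magnitude, so strong inputs are
    retained longer while weak, noisy activity leaks away quickly.
    """
    decay = sigmoid(a * np.abs(x) + b)        # input-aware membrane time constant
    v = decay * v + x                         # leaky integration with adaptive leak
    spike = (v >= v_th).astype(np.float32)
    v = v * (1.0 - spike)                     # reset after firing
    return v, spike

v = np.zeros(3)
for t in range(10):
    v, s = ba_lif_step(v, np.array([0.05, 0.3, 0.9]))
```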
7. Zou C, Cui X, Feng S, Chen G, Zhong Y, Dai Z, Wang Y. An all integer-based spiking neural network with dynamic threshold adaptation. Front Neurosci 2024; 18:1449020. PMID: 39741532. PMCID: PMC11685137. DOI: 10.3389/fnins.2024.1449020.
Abstract
Spiking Neural Networks (SNNs) are typically regarded as the third generation of neural networks due to their inherent event-driven computing capabilities and remarkable energy efficiency. However, training an SNN that possesses fast inference speed and accuracy comparable to modern artificial neural networks (ANNs) remains a considerable challenge. In this article, a sophisticated SNN modeling algorithm incorporating a novel dynamic threshold adaptation mechanism is proposed. It aims to eliminate the spiking synchronization error that commonly occurs in many traditional ANN2SNN conversion works. Additionally, all variables in the proposed SNNs, including the membrane potential, threshold, and synaptic weights, are quantized to integers, making them highly compatible with hardware implementation. Experimental results indicate that the proposed spiking LeNet and VGG-Net achieve accuracies exceeding 99.45% and 93.15% on the MNIST and CIFAR-10 datasets, respectively, with only 4 and 8 time steps required to simulate one sample. Due to this all-integer quantization, the required computational operations are significantly reduced, potentially providing a substantial energy-efficiency advantage for numerous edge computing applications.
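An all-integer spiking layer can be pictured as integer accumulation of quantized weights into an integer membrane potential, with firing against an integer threshold and a subtract-by-threshold reset, which is a standard trick in converted SNNs for reducing residual-charge error. The sketch below is a schematic of that general pattern with integer types throughout; the paper's specific dynamic threshold adaptation rule is not reproduced here, and a fixed threshold is used as a placeholder assumption.

```python
import numpy as np

def int_if_step(v, w, in_spk, theta):
    """One integer-only integrate-and-fire step.

    v      : int32 membrane potentials (per output neuron)
    w      : integer-quantized weights, shape (n_out, n_in)
    in_spk : binary input spikes for this time step
    theta  : int32 firing thresholds
    """
    v = v + w @ in_spk.astype(np.int32)   # integer accumulation only
    spike = (v >= theta).astype(np.int32)
    v = v - spike * theta                 # subtract threshold instead of a hard reset
    return v, spike

rng = np.random.default_rng(0)
w = rng.integers(-8, 8, size=(4, 16), dtype=np.int32)
v = np.zeros(4, dtype=np.int32)
theta = np.full(4, 32, dtype=np.int32)    # could be adapted per layer; fixed here for illustration
for t in range(8):
    v, s = int_if_step(v, w, (rng.random(16) < 0.3), theta)
```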
Affiliation(s)
- Chenglong Zou
  - Peking University Chongqing Research Institute of Big Data, Chongqing, China
  - School of Mathematical Science, Peking University, Beijing, China
- Xiaoxin Cui
  - School of Integrated Circuits, Peking University, Beijing, China
- Shuo Feng
  - School of Integrated Circuits, Peking University, Beijing, China
- Guang Chen
  - School of Integrated Circuits, Peking University, Beijing, China
- Yi Zhong
  - School of Integrated Circuits, Peking University, Beijing, China
- Zhenhui Dai
  - School of Integrated Circuits, Peking University, Beijing, China
- Yuan Wang
  - School of Integrated Circuits, Peking University, Beijing, China
8. Goupy G, Tirilly P, Bilasco IM. Paired competing neurons improving STDP supervised local learning in Spiking Neural Networks. Front Neurosci 2024; 18:1401690. PMID: 39119458. PMCID: PMC11307446. DOI: 10.3389/fnins.2024.1401690.
Abstract
Direct training of Spiking Neural Networks (SNNs) on neuromorphic hardware has the potential to significantly reduce the energy consumption of artificial neural network training. SNNs trained with Spike Timing-Dependent Plasticity (STDP) benefit from gradient-free and unsupervised local learning, which can be easily implemented on ultra-low-power neuromorphic hardware. However, classification tasks cannot be performed solely with unsupervised STDP. In this paper, we propose Stabilized Supervised STDP (S2-STDP), a supervised STDP learning rule to train the classification layer of an SNN equipped with unsupervised STDP for feature extraction. S2-STDP integrates error-modulated weight updates that align neuron spikes with desired timestamps derived from the average firing time within the layer. Then, we introduce a training architecture called Paired Competing Neurons (PCN) to further enhance the learning capabilities of our classification layer trained with S2-STDP. PCN associates each class with paired neurons and encourages neuron specialization toward target or non-target samples through intra-class competition. We evaluate our methods on image recognition datasets, including MNIST, Fashion-MNIST, and CIFAR-10. Results show that our methods outperform state-of-the-art supervised STDP learning rules, for comparable architectures and numbers of neurons. Further analysis demonstrates that the use of PCN enhances the performance of S2-STDP, regardless of the hyperparameter set and without introducing any additional hyperparameters.
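The error-modulated weight update described above pushes each neuron's spike time toward a desired timestamp derived from the layer's average firing time (earlier for target-class neurons, later for the others). A minimal sketch of that kind of update follows; the trace-based form, sign convention, and constants are illustrative assumptions rather than the exact S2-STDP rule, and the paired-competing-neuron training architecture is not shown.

```python
import numpy as np

def s2stdp_like_update(w, pre_traces, fire_times, is_target, lr=0.01, gap=2.0):
    """Error-modulated, timing-based weight update for a classification layer.

    pre_traces : presynaptic activity traces sampled at each neuron's spike time,
                 shape (n_neurons, n_inputs)
    fire_times : actual first-spike time of each output neuron
    is_target  : 1 for neurons of the true class, 0 otherwise
    Desired timestamps sit around the layer's mean firing time: slightly earlier
    for target neurons, slightly later for the others.
    """
    t_mean = fire_times.mean()
    desired = t_mean - gap * is_target + gap * (1 - is_target)
    err = fire_times - desired                 # positive error -> the neuron fired too late
    # firing earlier requires stronger weights, so potentiate in proportion to the error
    w = w + lr * err[:, None] * pre_traces
    return w

rng = np.random.default_rng(0)
w = rng.random((3, 20)) * 0.5
w = s2stdp_like_update(w, rng.random((3, 20)), np.array([12.0, 9.0, 15.0]),
                       np.array([0.0, 1.0, 0.0]))
```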
9. Yuan M, Zhang C, Wang Z, Liu H, Pan G, Tang H. Trainable Spiking-YOLO for low-latency and high-performance object detection. Neural Netw 2024; 172:106092. PMID: 38211460. DOI: 10.1016/j.neunet.2023.106092.
Abstract
Spiking neural networks (SNNs) are considered an attractive option for edge-side applications due to their sparse, asynchronous, and event-driven characteristics. However, applying SNNs to object detection tasks faces challenges in achieving both good detection accuracy and high detection speed. To overcome these challenges, we propose an end-to-end Trainable Spiking-YOLO (Tr-Spiking-YOLO) for low-latency and high-performance object detection. We evaluate our model not only on the frame-based PASCAL VOC dataset but also on the event-based GEN1 Automotive Detection dataset, and investigate the impact of different decoding methods on detection performance. The experimental results show that our model achieves competitive or better performance in terms of accuracy, latency, and energy consumption compared with similar artificial neural network (ANN) and conversion-based SNN object detection models. Furthermore, when deployed on an edge device, our model achieves a processing speed of approximately 14 to 39 FPS while maintaining a desirable mean Average Precision (mAP), making it capable of real-time detection on resource-constrained platforms.
Affiliation(s)
- Mengwen Yuan
  - Research Institute of Intelligent Computing, Zhejiang Lab, Hangzhou 311100, China
- Chengjun Zhang
  - Research Institute of Intelligent Computing, Zhejiang Lab, Hangzhou 311100, China
- Ziming Wang
  - College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
- Huixiang Liu
  - Research Institute of Intelligent Computing, Zhejiang Lab, Hangzhou 311100, China
- Gang Pan
  - College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
  - The State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou 310027, China
  - MOE Frontier Science Center for Brain Science and Brain-Machine Integration, Zhejiang University, Hangzhou 310027, China
- Huajin Tang
  - College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
  - The State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou 310027, China
  - MOE Frontier Science Center for Brain Science and Brain-Machine Integration, Zhejiang University, Hangzhou 310027, China
10. Yoon R, Oh S, Cho S, Min KS. Memristor-CMOS Hybrid Circuits Implementing Event-Driven Neural Networks for Dynamic Vision Sensor Camera. Micromachines 2024; 15:426. PMID: 38675238. PMCID: PMC11052483. DOI: 10.3390/mi15040426.
Abstract
For processing streaming events from a Dynamic Vision Sensor (DVS) camera, two types of neural networks can be considered. The first is spiking neural networks, whose simple spike-based computation suits low power consumption, but the discontinuity of spikes can make training complicated in terms of hardware. The other is digital Complementary Metal Oxide Semiconductor (CMOS)-based neural networks, which can be trained directly using the normal backpropagation algorithm. However, their hardware and energy overhead can be significantly large, because all streaming events must be accumulated and converted into histogram data, which requires a large amount of memory such as SRAM. In this paper, to combine spike-based operation with the normal backpropagation algorithm, memristor-CMOS hybrid circuits are proposed for implementing event-driven neural networks in hardware. The proposed hybrid circuits are composed of input neurons, synaptic crossbars, hidden/output neurons, and a neural-network controller. First, the input neurons perform preprocessing of the DVS camera's events: the events are converted to histogram data using very simple memristor-based latches in the input neurons. After preprocessing, the converted histogram data are delivered to an ANN implemented using synaptic memristor crossbars. The memristor crossbars perform low-power Multiply-Accumulate (MAC) calculations according to the memristor's current-voltage relationship. The hidden and output neurons convert the crossbars' column currents to output voltages according to the Rectified Linear Unit (ReLU) activation function. The neural-network controller adjusts the MAC calculation frequency according to the workload of the event computation, and can disable the MAC calculation clock automatically to minimize unnecessary power consumption. The proposed hybrid circuits have been verified by circuit simulation on several event-based datasets such as POKER-DVS and MNIST-DVS. The simulation results indicate that the performance of the proposed network degrades by as little as 0.5% while saving as much as 79% in power consumption for POKER-DVS. For the MNIST-DVS dataset, the recognition rate of the proposed scheme is lower by 0.75% compared with the conventional one. Despite this small loss, power consumption can be reduced by as much as 75% with the proposed scheme.
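The preprocessing-plus-crossbar pipeline described above amounts to: accumulate DVS events into a per-pixel histogram, then compute a network layer as a matrix-vector multiplication, which is what a memristor crossbar does in the analog domain (each column current is the sum of conductance × voltage). The sketch below models that pipeline behaviorally in NumPy; the conductance values, event format, and ReLU output neurons are idealized assumptions, not the paper's circuits.

```python
import numpy as np

def events_to_histogram(events, height, width):
    """Accumulate (x, y, polarity) DVS events into a flattened, signed per-pixel histogram."""
    hist = np.zeros((height, width), dtype=np.float32)
    for x, y, p in events:
        hist[y, x] += 1.0 if p else -1.0        # signed count per pixel
    return hist.ravel()

def crossbar_mac(g, v_in):
    """Behavioral crossbar model: each column current is the dot product of
    input voltages and that column's conductances (Ohm's law + Kirchhoff's current law)."""
    return v_in @ g                              # (n_rows,) @ (n_rows, n_cols) -> column currents

events = [(3, 2, 1), (3, 2, 1), (7, 5, 0)]
v_in = events_to_histogram(events, height=8, width=8)
g = np.random.default_rng(0).random((64, 16)) * 1e-4   # idealized conductance matrix
out = np.maximum(crossbar_mac(g, v_in), 0.0)            # ReLU output neurons
```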
Affiliation(s)
- Kyeong-Sik Min
  - School of Electrical Engineering, Kookmin University, Seoul 02707, Republic of Korea
11. Sakemi Y, Yamamoto K, Hosomi T, Aihara K. Sparse-firing regularization methods for spiking neural networks with time-to-first-spike coding. Sci Rep 2023; 13:22897. PMID: 38129555. PMCID: PMC10739753. DOI: 10.1038/s41598-023-50201-5.
Abstract
The training of multilayer spiking neural networks (SNNs) using the error backpropagation algorithm has made significant progress in recent years. Among the various training schemes, the error backpropagation method that directly uses the firing times of neurons has attracted considerable attention because it can realize ideal temporal coding. This method uses time-to-first-spike (TTFS) coding, in which each neuron fires at most once, and this restriction on the number of firings enables information to be processed at a very low firing frequency, which increases the energy efficiency of information processing in SNNs. However, TTFS coding only imposes an upper limit on the number of firings, and the information-processing capability of SNNs at even lower firing frequencies has not been fully investigated. In this paper, we propose two spike-timing-based sparse-firing (SSR) regularization methods to further reduce the firing frequency of TTFS-coded SNNs. Both methods are characterized by requiring only information about the firing timing and associated weights. The effects of these regularization methods were investigated on the MNIST, Fashion-MNIST, and CIFAR-10 datasets using multilayer perceptron and convolutional neural network structures.
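A sparse-firing regularizer of the kind described above can be written as an extra loss term built only from firing times (for example, rewarding later, and therefore fewer, hidden-layer spikes within the simulation window), added to the task loss with a small coefficient. The term below is a simplified illustration under that reading; it is not either of the two SSR variants from the paper, and the encoding of non-firing neurons is an assumption.

```python
import numpy as np

def sparse_firing_penalty(fire_times, t_max):
    """Penalty that decreases as hidden neurons fire later (or not at all).

    fire_times : first-spike times of hidden neurons; neurons that never fire
                 are encoded as t_max and contribute zero penalty.
    """
    return np.mean(t_max - fire_times) / t_max

def total_loss(task_loss, fire_times, t_max=100.0, lam=0.01):
    # task loss plus the timing-only sparsity term, weighted by a small coefficient
    return task_loss + lam * sparse_firing_penalty(fire_times, t_max)

loss = total_loss(task_loss=0.42, fire_times=np.array([15.0, 60.0, 100.0, 100.0]))
```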
Affiliation(s)
- Yusuke Sakemi
  - Research Center for Mathematical Engineering, Chiba Institute of Technology, Narashino, Japan
  - International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo, Tokyo, Japan
- Kazuyuki Aihara
  - Research Center for Mathematical Engineering, Chiba Institute of Technology, Narashino, Japan
  - International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo, Tokyo, Japan