1
Lin N, Wang S, Li Y, Wang B, Shi S, He Y, Zhang W, Yu Y, Zhang Y, Zhang X, Wong K, Wang S, Chen X, Jiang H, Zhang X, Lin P, Xu X, Qi X, Wang Z, Shang D, Liu Q, Liu M. Resistive memory-based zero-shot liquid state machine for multimodal event data learning. Nat Comput Sci 2025; 5:37-47. PMID: 39789264. DOI: 10.1038/s43588-024-00751-z.
Abstract
The human brain is a complex spiking neural network (SNN) capable of learning multimodal signals in a zero-shot manner by generalizing existing knowledge. Remarkably, it maintains minimal power consumption through event-based signal propagation. However, replicating the human brain in neuromorphic hardware presents both hardware and software challenges. Hardware limitations, such as the slowdown of Moore's law and the von Neumann bottleneck, hinder the efficiency of digital computers, while SNNs are notoriously complex to train in software. To this end, we propose a hardware-software co-design on a 40 nm 256 kB in-memory computing macro that physically integrates a fixed, random liquid state machine SNN encoder with trainable artificial neural network projections. We showcase zero-shot learning of multimodal events on the N-MNIST and N-TIDIGITS datasets, including visual and audio data association, as well as neural and visual data alignment for brain-machine interfaces. Our co-design achieves classification accuracy comparable to fully optimized software models, with a 152.83- and 393.07-fold reduction in training costs compared with state-of-the-art spiking recurrent neural network-based contrastive learning and prototypical networks, respectively, and a 23.34- and 160-fold improvement in energy efficiency compared with cutting-edge digital hardware. These proof-of-principle prototypes demonstrate zero-shot multimodal event learning for emerging efficient and compact neuromorphic hardware.
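To make the co-design concrete, the following is a minimal NumPy sketch of the general idea in the abstract: a fixed, random spiking reservoir (realized in the paper with a resistive in-memory computing macro) encodes event streams, and only a small projection on top is trained. All layer sizes, constants, and the simple LIF dynamics below are illustrative assumptions, not the authors' implementation.

```python
# Sketch: fixed random LSM encoder + trainable projection (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_RES, N_EMB = 64, 256, 32             # hypothetical layer sizes

W_in = rng.normal(0, 0.5, (N_RES, N_IN))     # fixed random input weights
W_res = rng.normal(0, 0.1, (N_RES, N_RES))   # fixed random recurrent weights

def lsm_encode(spikes, tau=0.9, v_th=1.0):
    """Run a leaky integrate-and-fire reservoir over a (T, N_IN) spike raster
    and return time-averaged reservoir spike counts as the feature vector."""
    v = np.zeros(N_RES)
    s = np.zeros(N_RES)
    counts = np.zeros(N_RES)
    for x_t in spikes:
        v = tau * v + W_in @ x_t + W_res @ s  # leaky integration
        s = (v >= v_th).astype(float)         # fire
        v = np.where(s > 0, 0.0, v)           # reset fired neurons
        counts += s
    return counts / len(spikes)

# Only this projection would be trained (e.g. with a contrastive loss) to
# align visual and audio embeddings; the reservoir above stays fixed.
W_proj = rng.normal(0, 0.1, (N_EMB, N_RES))
embedding = W_proj @ lsm_encode(rng.random((100, N_IN)) < 0.05)
```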
Affiliation(s)
Ning Lin
- Department of Electrical and Electronic Engineering, University of Hong Kong, Hong Kong, China
- Key Lab of Fabrication Technologies for Integrated Circuits and Key Laboratory of Microelectronic Devices and Integrated Technology, Institute of Microelectronics of the Chinese Academy of Sciences, Beijing, China
- The School of Microelectronics, Southern University of Science and Technology, Shenzhen, China
- ACCESS - AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
Shaocong Wang
- Department of Electrical and Electronic Engineering, University of Hong Kong, Hong Kong, China
- Key Lab of Fabrication Technologies for Integrated Circuits and Key Laboratory of Microelectronic Devices and Integrated Technology, Institute of Microelectronics of the Chinese Academy of Sciences, Beijing, China
- ACCESS - AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
Yi Li
- Department of Electrical and Electronic Engineering, University of Hong Kong, Hong Kong, China
- Key Lab of Fabrication Technologies for Integrated Circuits and Key Laboratory of Microelectronic Devices and Integrated Technology, Institute of Microelectronics of the Chinese Academy of Sciences, Beijing, China
Bo Wang
- Department of Electrical and Electronic Engineering, University of Hong Kong, Hong Kong, China
- ACCESS - AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
Shuhui Shi
- Department of Electrical and Electronic Engineering, University of Hong Kong, Hong Kong, China
- ACCESS - AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
Yangu He
- Department of Electrical and Electronic Engineering, University of Hong Kong, Hong Kong, China
- ACCESS - AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
Woyu Zhang
- Key Lab of Fabrication Technologies for Integrated Circuits and Key Laboratory of Microelectronic Devices and Integrated Technology, Institute of Microelectronics of the Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
Yifei Yu
- Department of Electrical and Electronic Engineering, University of Hong Kong, Hong Kong, China
- ACCESS - AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
Yue Zhang
- Department of Electrical and Electronic Engineering, University of Hong Kong, Hong Kong, China
- ACCESS - AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
Xinyuan Zhang
- Department of Electrical and Electronic Engineering, University of Hong Kong, Hong Kong, China
- ACCESS - AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
Kwunhang Wong
- Department of Electrical and Electronic Engineering, University of Hong Kong, Hong Kong, China
- ACCESS - AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
Songqi Wang
- Department of Electrical and Electronic Engineering, University of Hong Kong, Hong Kong, China
- ACCESS - AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
Xiaoming Chen
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
Hao Jiang
- Frontier Institute of Chip and System, Fudan University, Shanghai, China
Xumeng Zhang
- Frontier Institute of Chip and System, Fudan University, Shanghai, China
Peng Lin
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
Xiaoxin Xu
- Key Lab of Fabrication Technologies for Integrated Circuits and Key Laboratory of Microelectronic Devices and Integrated Technology, Institute of Microelectronics of the Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
Xiaojuan Qi
- Department of Electrical and Electronic Engineering, University of Hong Kong, Hong Kong, China
Zhongrui Wang
- The School of Microelectronics, Southern University of Science and Technology, Shenzhen, China
- ACCESS - AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
Dashan Shang
- Key Lab of Fabrication Technologies for Integrated Circuits and Key Laboratory of Microelectronic Devices and Integrated Technology, Institute of Microelectronics of the Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
Qi Liu
- Frontier Institute of Chip and System, Fudan University, Shanghai, China
Ming Liu
- Key Lab of Fabrication Technologies for Integrated Circuits and Key Laboratory of Microelectronic Devices and Integrated Technology, Institute of Microelectronics of the Chinese Academy of Sciences, Beijing, China
- Frontier Institute of Chip and System, Fudan University, Shanghai, China
2
Zhou C, Zhang H, Yu L, Ye Y, Zhou Z, Huang L, Ma Z, Fan X, Zhou H, Tian Y. Direct training high-performance deep spiking neural networks: a review of theories and methods. Front Neurosci 2024; 18:1383844. PMID: 39145295. PMCID: PMC11322636. DOI: 10.3389/fnins.2024.1383844.
Abstract
Spiking neural networks (SNNs) offer a promising energy-efficient alternative to artificial neural networks (ANNs) by virtue of their high biological plausibility, rich spatial-temporal dynamics, and event-driven computation. Direct training algorithms based on the surrogate gradient method provide sufficient flexibility to design novel SNN architectures and explore the spatial-temporal dynamics of SNNs. According to previous studies, model performance is highly dependent on model size. Recently, directly trained deep SNNs have achieved great progress on both neuromorphic datasets and large-scale static datasets. Notably, transformer-based SNNs show performance comparable to their ANN counterparts. In this paper, we provide a new perspective for summarizing the theories and methods for training high-performance deep SNNs in a systematic and comprehensive way, covering theoretical fundamentals, spiking neuron models, advanced SNN models and residual architectures, software frameworks and neuromorphic hardware, applications, and future trends.
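As a pointer to the surrogate gradient method this review surveys, here is a minimal PyTorch sketch: the forward pass emits hard spikes, while the backward pass substitutes a smooth derivative so that backpropagation through time can flow through the spiking nonlinearity. The rectangular surrogate and its width are one common choice from the literature, not a prescription from this paper.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v_minus_th):
        ctx.save_for_backward(v_minus_th)
        return (v_minus_th >= 0).float()           # hard Heaviside spike

    @staticmethod
    def backward(ctx, grad_out):
        (v_minus_th,) = ctx.saved_tensors
        width = 0.5                                # surrogate half-width
        # Rectangular surrogate: gradient passes only near the threshold.
        surrogate = (v_minus_th.abs() < width).float() / (2 * width)
        return grad_out * surrogate

spike_fn = SurrogateSpike.apply

def lif_step(x, v, tau=2.0, v_th=1.0):
    """One LIF step inside a BPTT loop: integrate, spike, soft reset."""
    v = v + (x - v) / tau
    s = spike_fn(v - v_th)
    return s, v - s * v_th
```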
Affiliation(s)
Han Zhang
- Peng Cheng Laboratory, Shenzhen, China
- Faculty of Computing, Harbin Institute of Technology, Harbin, China
Liutao Yu
- Peng Cheng Laboratory, Shenzhen, China
Yumin Ye
- Peng Cheng Laboratory, Shenzhen, China
Zhaokun Zhou
- Peng Cheng Laboratory, Shenzhen, China
- School of Electronic and Computer Engineering, Shenzhen Graduate School, Peking University, Shenzhen, China
Liwei Huang
- Peng Cheng Laboratory, Shenzhen, China
- National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, China
Xiaopeng Fan
- Peng Cheng Laboratory, Shenzhen, China
- Faculty of Computing, Harbin Institute of Technology, Harbin, China
Yonghong Tian
- Peng Cheng Laboratory, Shenzhen, China
- School of Electronic and Computer Engineering, Shenzhen Graduate School, Peking University, Shenzhen, China
- National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, China
3
Pan W, Zhao F, Zeng Y, Han B. Adaptive structure evolution and biologically plausible synaptic plasticity for recurrent spiking neural networks. Sci Rep 2023; 13:16924. PMID: 37805632. PMCID: PMC10560283. DOI: 10.1038/s41598-023-43488-x.
Abstract
The architecture design and multi-scale learning principles of the human brain, which evolved over hundreds of millions of years, are crucial to realizing human-like intelligence. The spiking neural network-based liquid state machine (LSM) is a suitable architecture for studying brain-inspired intelligence because of its brain-inspired structure and its potential for integrating multiple biological principles. Existing research on LSMs has focused on particular aspects, such as high-dimensional encoding or optimization of the liquid layer, network architecture search, and application to hardware devices, and still draws too little inspiration from the learning and structural evolution mechanisms of the brain. Considering these limitations, this paper presents a novel LSM learning model that integrates adaptive structural evolution and multi-scale biological learning rules. For structural evolution, an adaptive evolvable LSM model is developed to optimize the neural architecture design of the liquid layer with the separation property. For brain-inspired learning of the LSM, we propose a dopamine-modulated Bienenstock-Cooper-Munro (DA-BCM) method that incorporates global long-term dopamine regulation and local trace-based BCM synaptic plasticity. Comparative experimental results on different decision-making tasks show that introducing structural evolution of the liquid layer, together with DA-BCM regulation of the liquid and readout layers, improves the decision-making ability of the LSM and allows it to adapt flexibly to rule reversal. This work explores how evolution can help design more appropriate network architectures and how multi-scale neuroplasticity principles can coordinate to enable the optimization and learning of LSMs for relatively complex decision-making tasks.
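For orientation, the following is a hedged NumPy sketch of a dopamine-modulated BCM update in the spirit of the DA-BCM rule described above; the trace definitions, the multiplicative gating by dopamine, and all constants are assumptions for illustration, not the authors' exact equations.

```python
import numpy as np

def da_bcm_update(w, x_trace, y_trace, theta, dopamine,
                  lr=1e-3, tau_theta=100.0):
    """w: (n_post, n_pre) weights; x_trace: (n_pre,) presynaptic traces;
    y_trace: (n_post,) postsynaptic traces; theta: (n_post,) sliding
    thresholds; dopamine: scalar global reward/punishment signal."""
    # Local BCM term: potentiate when postsynaptic activity exceeds its
    # sliding threshold, depress when below, scaled by presynaptic activity.
    local = np.outer(y_trace * (y_trace - theta), x_trace)
    # The global dopamine signal gates the sign and magnitude of the change.
    w = w + lr * dopamine * local
    # The threshold tracks a running average of squared postsynaptic activity.
    theta = theta + (y_trace ** 2 - theta) / tau_theta
    return w, theta

# Example shapes: 8 postsynaptic, 16 presynaptic neurons.
rng = np.random.default_rng(0)
w, theta = rng.random((8, 16)) * 0.1, np.full(8, 0.5)
w, theta = da_bcm_update(w, rng.random(16), rng.random(8), theta, dopamine=1.0)
```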
Affiliation(s)
Wenxuan Pan
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
Feifei Zhao
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, China
Yi Zeng
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing, China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
Bing Han
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
4
Chakraborty B, Mukhopadhyay S. Heterogeneous recurrent spiking neural network for spatio-temporal classification. Front Neurosci 2023; 17:994517. PMID: 36793542. PMCID: PMC9922697. DOI: 10.3389/fnins.2023.994517.
Abstract
Spiking neural networks are often touted as brain-inspired learning models for the third wave of artificial intelligence. Although recent SNNs trained with supervised backpropagation show classification accuracy comparable to deep networks, the performance of unsupervised-learning-based SNNs remains much lower. This paper presents a heterogeneous recurrent spiking neural network (HRSNN) with unsupervised learning for spatio-temporal classification of video activity recognition tasks on RGB (KTH, UCF11, UCF101) and event-based datasets (DVS128 Gesture). With the novel unsupervised HRSNN model, we observed an accuracy of 94.32% on the KTH dataset, 79.58% and 77.53% on the UCF11 and UCF101 datasets, respectively, and 96.54% on the event-based DVS128 Gesture dataset. The key novelty of the HRSNN is that its recurrent layer consists of heterogeneous neurons with varying firing/relaxation dynamics, trained via heterogeneous spike-timing-dependent plasticity (STDP) with varying learning dynamics for each synapse. We show that this combination of heterogeneity in architecture and learning method outperforms current homogeneous spiking neural networks. We further show that HRSNN can achieve performance similar to state-of-the-art backpropagation-trained supervised SNNs, but with less computation (fewer neurons and sparse connectivity) and less training data.
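As background for the heterogeneous STDP mentioned above, here is a minimal trace-based STDP sketch in NumPy in which every synapse draws its own learning rate and every neuron its own trace time constant; the exponential-trace form and parameter ranges are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pre, n_post = 100, 50
w = rng.random((n_post, n_pre)) * 0.1
# Heterogeneity: per-synapse learning rates, per-neuron time constants.
eta = rng.uniform(1e-4, 1e-2, (n_post, n_pre))
tau_pre = rng.uniform(10.0, 40.0, n_pre)
tau_post = rng.uniform(10.0, 40.0, n_post)

x_pre = np.zeros(n_pre)    # presynaptic eligibility traces
x_post = np.zeros(n_post)  # postsynaptic eligibility traces

def stdp_step(s_pre, s_post):
    """One timestep: decay traces, add spikes, apply pair-based updates."""
    global w, x_pre, x_post
    x_pre = x_pre * np.exp(-1.0 / tau_pre) + s_pre
    x_post = x_post * np.exp(-1.0 / tau_post) + s_post
    # Pre-before-post potentiates; post-before-pre depresses.
    w += eta * (np.outer(s_post, x_pre) - np.outer(x_post, s_pre))
    np.clip(w, 0.0, 1.0, out=w)

# One random timestep of pre/post spikes:
stdp_step((rng.random(n_pre) < 0.05).astype(float),
          (rng.random(n_post) < 0.05).astype(float))
```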
5
Wu Z, Zhang H, Lin Y, Li G, Wang M, Tang Y. LIAF-Net: Leaky Integrate and Analog Fire Network for Lightweight and Efficient Spatiotemporal Information Processing. IEEE Trans Neural Netw Learn Syst 2022; 33:6249-6262. PMID: 33979292. DOI: 10.1109/tnnls.2021.3073016.
Abstract
Spiking neural networks (SNNs) based on the leaky integrate-and-fire (LIF) model have been applied to energy-efficient temporal and spatiotemporal processing tasks. Thanks to their bioplausible neuronal dynamics and simplicity, LIF-SNNs benefit from event-driven processing but usually suffer from reduced performance, likely because neurons in a LIF-SNN transmit information only via binary spikes. To address this issue, in this work we propose a leaky integrate and analog fire (LIAF) neuron model, in which analog values are transmitted among neurons, and build a deep network termed LIAF-Net on it for efficient spatiotemporal processing. In the temporal domain, LIAF follows traditional LIF dynamics to retain its temporal processing capability. In the spatial domain, LIAF integrates spatial information through convolutional or fully connected integration. As a spatiotemporal layer, LIAF can also be used jointly with traditional artificial neural network (ANN) layers. In addition, the resulting network can be trained directly with backpropagation through time (BPTT), which avoids the performance loss caused by ANN-to-SNN conversion. Experimental results indicate that LIAF-Net achieves performance comparable to the gated recurrent unit (GRU) and long short-term memory (LSTM) on bAbI question answering (QA) tasks, and state-of-the-art performance on spatiotemporal dynamic vision sensor (DVS) datasets, including MNIST-DVS, CIFAR10-DVS, and DVS128 Gesture, with far fewer synaptic weights and much lower computational overhead than traditional networks built with LSTM, GRU, convolutional LSTM (ConvLSTM), or 3D convolution (Conv3D). Compared with a traditional LIF-SNN, LIAF-Net also shows dramatic accuracy gains in all these experiments. In conclusion, LIAF-Net provides a framework that combines the advantages of both ANNs and SNNs for lightweight and efficient spatiotemporal information processing.
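To illustrate the LIAF idea, here is a compact PyTorch sketch of one LIAF cell: it keeps LIF-style leaky membrane dynamics in time but transmits an analog activation of the membrane instead of a binary spike. The choice of ReLU, the soft reset, and all constants are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class LIAFCell(nn.Module):
    def __init__(self, n_in, n_out, tau=2.0, v_th=1.0):
        super().__init__()
        self.fc = nn.Linear(n_in, n_out)  # spatial integration (could be conv)
        self.tau, self.v_th = tau, v_th

    def forward(self, x, v):
        v = v + (self.fc(x) - v) / self.tau           # leaky integration, as in LIF
        out = torch.relu(v - self.v_th)               # analog "fire": graded output
        v = v - (v >= self.v_th).float() * self.v_th  # LIF-style soft reset
        return out, v

# Unrolled over time, the cell trains directly with BPTT like an RNN layer.
cell = LIAFCell(32, 64)
v = torch.zeros(1, 64)
for x_t in torch.randn(10, 1, 32):   # (T, batch, features)
    out, v = cell(x_t, v)
```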
6
Deckers L, Tsang IJ, Van Leekwijck W, Latré S. Extended liquid state machines for speech recognition. Front Neurosci 2022; 16:1023470. PMID: 36389242. PMCID: PMC9651956. DOI: 10.3389/fnins.2022.1023470.
Abstract
A liquid state machine (LSM) is a biologically plausible model of a cortical microcircuit. It consists of a random, sparse reservoir of recurrently connected spiking neurons with fixed synapses and a trainable readout layer. The LSM exhibits low training complexity and enables backpropagation-free learning in a powerful yet simple computing paradigm. In this work, the liquid state machine is enhanced with a set of bio-inspired extensions to create the extended liquid state machine (ELSM), which is evaluated on a set of speech datasets. First, we ensure excitatory/inhibitory (E/I) balance to enable the LSM to operate in the edge-of-chaos regime. Second, spike-frequency adaptation (SFA) is introduced to improve the memory capabilities of the LSM. Last, neuronal heterogeneity, by means of differentiated time constants, is introduced to extract a richer dynamical response. By including E/I balance, SFA, and neuronal heterogeneity, we show that the ELSM consistently improves upon the LSM while retaining the benefits of the straightforward LSM structure and training procedure. The proposed extensions yield up to a 5.2% increase in accuracy while decreasing the number of spikes in the ELSM by up to 20.2% on benchmark speech datasets. On some benchmarks, the ELSM can even attain performance similar to the current state of the art in spiking neural networks. Furthermore, we illustrate that the ELSM input-liquid and recurrent synaptic weights can be reduced to 4-bit resolution without significant loss in classification performance. We thus show that the ELSM is a powerful, biologically plausible, and hardware-friendly spiking neural network model that can attain near-state-of-the-art accuracy on speech recognition benchmarks for spiking neural networks.
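To make two of these extensions concrete, here is a NumPy sketch of an adaptive LIF population combining spike-frequency adaptation (a threshold that grows with each spike) with per-neuron heterogeneous time constants; the parameter ranges are illustrative assumptions, not the ELSM's tuned values.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 128
tau_v = rng.uniform(10.0, 50.0, N)   # heterogeneous membrane time constants
tau_a = rng.uniform(50.0, 300.0, N)  # heterogeneous adaptation time constants
beta = 0.2                           # threshold increment per spike

v = np.zeros(N)
a = np.zeros(N)                      # adaptation variable

def adaptive_lif_step(i_in, v_th0=1.0):
    """One step of an adaptive LIF population: the effective threshold is
    v_th0 + a, and each spike bumps a, gradually lowering the firing rate."""
    global v, a
    v = v * np.exp(-1.0 / tau_v) + i_in
    s = (v >= v_th0 + a).astype(float)
    v = np.where(s > 0, 0.0, v)                # reset fired neurons
    a = a * np.exp(-1.0 / tau_a) + beta * s    # decay and bump adaptation
    return s

spikes = adaptive_lif_step(rng.random(N) * 0.3)
```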
Affiliation(s)
Lucas Deckers
- imec IDLab, Department of Computer Science, University of Antwerp, Antwerp, Belgium
7
Yang S, Linares-Barranco B, Chen B. Heterogeneous Ensemble-Based Spike-Driven Few-Shot Online Learning. Front Neurosci 2022; 16:850932. PMID: 35615277. PMCID: PMC9124799. DOI: 10.3389/fnins.2022.850932.
Abstract
Spiking neural networks (SNNs) are regarded as a promising candidate to deal with the major challenges of current machine learning techniques, including the high energy consumption induced by deep neural networks. However, a great gap remains between the few-shot learning performance of SNNs and that of artificial neural networks. Importantly, existing spike-based few-shot learning models do not target robust learning grounded in spatiotemporal dynamics and modern machine learning theory. In this paper, we propose a novel spike-based framework grounded in entropy theory, namely heterogeneous ensemble-based spike-driven few-shot online learning (HESFOL). The proposed HESFOL model uses entropy theory to establish a gradient-based few-shot learning scheme in a recurrent SNN architecture. We examine the performance of the HESFOL model on few-shot classification tasks using spiking patterns and the Omniglot dataset, as well as on a few-shot motor control task with an end-effector. Experimental results show that the proposed HESFOL scheme can effectively improve the accuracy and robustness of spike-driven few-shot learning. More importantly, the HESFOL model emphasizes the application of modern entropy-based machine learning methods in state-of-the-art spike-driven learning algorithms. Our study therefore provides new perspectives for integrating advanced entropy theory into machine learning to improve the learning performance of SNNs, which could greatly benefit applied development of spike-based neuromorphic systems.
Affiliation(s)
Shuangming Yang
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
Badong Chen
- Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China
8
Schuman CD, Kulkarni SR, Parsa M, Mitchell JP, Date P, Kay B. Opportunities for neuromorphic computing algorithms and applications. Nat Comput Sci 2022; 2:10-19. PMID: 38177712. DOI: 10.1038/s43588-021-00184-y.
Abstract
Neuromorphic computing technologies will be important for the future of computing, but much of the work in neuromorphic computing has focused on hardware development. Here, we review recent results in neuromorphic computing algorithms and applications. We highlight characteristics of neuromorphic computing technologies that make them attractive for the future of computing and we discuss opportunities for future development of algorithms and applications on these systems.
Affiliation(s)
Catherine D Schuman
- Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN, USA
- Department of Electrical Engineering and Computer Science, University of Tennessee, Knoxville, TN, USA
Shruti R Kulkarni
- Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN, USA
Maryam Parsa
- Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN, USA
- Department of Electrical and Computer Engineering, George Mason University, Fairfax, VA, USA
J Parker Mitchell
- Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN, USA
Prasanna Date
- Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN, USA
Bill Kay
- Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN, USA
9
Dai Y, Yamamoto H, Sakuraba M, Sato S. Computational Efficiency of a Modular Reservoir Network for Image Recognition. Front Comput Neurosci 2021; 15:594337. PMID: 33613220. PMCID: PMC7892762. DOI: 10.3389/fncom.2021.594337.
Abstract
The liquid state machine (LSM) is a type of recurrent spiking network with a strong relationship to neurophysiology that has achieved great success in time-series processing. However, the computational cost of simulations and the complex, time-dependent dynamics limit the size and functionality of LSMs. This paper presents a large-scale bio-inspired LSM with modular topology. We integrate findings on the visual cortex showing that specifically designed input synapses can fit the activation of the real cortex and perform the Hough transform, a feature extraction algorithm used in digital image processing, without additional cost. We experimentally verify that such a combination significantly improves network functionality. Network performance is evaluated on the MNIST dataset, with the image data encoded into spike trains by Poisson coding. We show that the proposed structure not only significantly reduces computational complexity but also achieves higher performance than previously reported networks of similar size. We also show that the proposed structure is more robust against system damage than small-world and random structures. We believe the proposed computationally efficient method can contribute greatly to future applications of reservoir computing.
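As a side note on the Poisson coding mentioned above, here is a small NumPy sketch: each pixel's intensity sets its firing probability per timestep, turning a static image into a spike raster for the reservoir. The window length T and the rate scaling are assumptions, not the paper's settings.

```python
import numpy as np

def poisson_encode(image, T=100, max_rate=0.5, rng=None):
    """image: float array in [0, 1] (e.g. a 28x28 MNIST digit).
    Returns a (T, n_pixels) binary spike raster."""
    rng = np.random.default_rng() if rng is None else rng
    p = np.clip(image.ravel() * max_rate, 0.0, 1.0)  # per-step spike probability
    return (rng.random((T, p.size)) < p).astype(np.uint8)

spikes = poisson_encode(np.random.rand(28, 28))
print(spikes.shape, spikes.mean())  # (100, 784), mean ~ average firing probability
```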
Affiliation(s)
Yifan Dai
- Research Institute of Electrical Communication, Tohoku University, Sendai, Japan
Hideaki Yamamoto
- Research Institute of Electrical Communication, Tohoku University, Sendai, Japan
Masao Sakuraba
- Research Institute of Electrical Communication, Tohoku University, Sendai, Japan
Shigeo Sato
- Research Institute of Electrical Communication, Tohoku University, Sendai, Japan