1. Cheong WH, Kim G, Lee Y, Kim EY, Jeon JB, Kim DH, Kim KM. A Neuransistor with Excitatory and Inhibitory Neuronal Behaviors for Liquid State Machine. Adv Mater 2025:e2419122. PMID: 40195785; DOI: 10.1002/adma.202419122.
Abstract
A liquid state machine (LSM) is a spiking neural network model inspired by biological neural network dynamics and designed to process time-varying inputs. In an LSM, maintaining a proper excitatory/inhibitory (E/I) balance among neurons is essential for ensuring network stability and generating the rich temporal dynamics needed for accurate data processing. In this study, a "neuransistor" is proposed that implements E/I neurons in a single device, enabling a hardware implementation of the LSM. The device features a three-terminal transistor structure embodying a TiO2-x/Al2O3 bilayer, which provides a two-dimensional electron gas (2DEG) channel at their interface. The device demonstrates hybrid excitatory and inhibitory dynamics depending on the polarity of the applied gate bias, originating from charge trapping/detrapping between the 2DEG and the TiO2-x layer. Additionally, the three-terminal configuration enables masking by selection of terminal biases, realizing reservoir behavior with superior reliability and durability. Its use in an LSM reservoir is demonstrated on time-series prediction with the Hénon dataset and on a chaotic equation solver for the Lorenz attractor. This benchmarking indicates that the LSM achieves better performance and efficiency than a conventional echo state network, underscoring its potential for advanced applications in reservoir computing.
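As a software-level illustration of the benchmarking described above (the device physics is not modeled), the sketch below runs one-step-ahead Hénon prediction with a generic random tanh reservoir standing in for the neuransistor array and a ridge-regression readout; the reservoir size, spectral radius, washout, and regularization are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a Henon one-step-prediction benchmark with a generic
# echo-state reservoir. All hyper-parameters are illustrative.
import numpy as np

def henon(n, a=1.4, b=0.3):
    x, y = 0.1, 0.1
    out = np.empty(n)
    for i in range(n):
        x, y = 1 - a * x * x + y, b * x
        out[i] = x
    return out

rng = np.random.default_rng(0)
u = henon(3000)
N = 100                                        # reservoir size (assumed)
W_in = rng.uniform(-0.5, 0.5, N)
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()  # spectral radius below 1

x = np.zeros(N)
states = np.empty((len(u), N))
for t, ut in enumerate(u):
    x = np.tanh(W_in * ut + W @ x)             # reservoir state update
    states[t] = x

# Ridge-regression readout: predict u[t+1] from the state at time t.
X, Y = states[200:-1], u[201:]                 # drop washout
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ Y)
pred = X @ W_out
print("NRMSE:", np.sqrt(np.mean((pred - Y) ** 2)) / np.std(Y))
```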
Affiliation(s)
- Woon Hyung Cheong
- Applied Science Research Institute, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, 34141, Republic of Korea
- Geunyoung Kim
- Applied Science Research Institute, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, 34141, Republic of Korea
- Younghyun Lee
- Department of Materials Science and Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, 34141, Republic of Korea
- Eun Young Kim
- Department of Semiconductor Technology, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, 34141, Republic of Korea
- Jae Bum Jeon
- Department of Materials Science and Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, 34141, Republic of Korea
- Do Hoon Kim
- Department of Materials Science and Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, 34141, Republic of Korea
- Kyung Min Kim
- Department of Materials Science and Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, 34141, Republic of Korea
- Department of Semiconductor Technology, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, 34141, Republic of Korea
2. Zhang W, Geng H, Li P. Composing recurrent spiking neural networks using locally-recurrent motifs and risk-mitigating architectural optimization. Front Neurosci 2024;18:1412559. PMID: 38966757; PMCID: PMC11222634; DOI: 10.3389/fnins.2024.1412559.
Abstract
In neural circuits, recurrent connectivity plays a crucial role in network function and stability. However, existing recurrent spiking neural networks (RSNNs) are often constructed from random connections without optimization. While RSNNs can produce rich dynamics that are critical for memory formation and learning, systematic architectural optimization of RSNNs remains an open challenge. We aim to enable systematic design of large RSNNs via a new scalable architecture and automated architectural optimization. We compose RSNNs from a layer architecture called the Sparsely-Connected Recurrent Motif Layer (SC-ML), which consists of multiple small recurrent motifs wired together by sparse lateral connections. The small motif size and sparse inter-motif connectivity make the architecture scalable to large networks. We further propose a method called Hybrid Risk-Mitigating Architectural Search (HRMAS) to systematically optimize the topology of the recurrent motifs and the SC-ML layer architecture. HRMAS is an alternating two-step optimization process that mitigates the risk of network instability and performance degradation caused by architectural change through a novel biologically inspired "self-repairing" mechanism based on intrinsic plasticity. The intrinsic plasticity is introduced in the second step of each HRMAS iteration and acts as unsupervised fast self-adaptation to the structural and synaptic weight modifications introduced by the first step during the RSNN architectural "evolution." We demonstrate that the proposed automatic architecture optimization yields significant performance gains over existing manually designed RSNNs: 96.44% on TI46-Alpha, 94.66% on N-TIDIGITS, 90.28% on DVS-Gesture, and 98.72% on N-MNIST. To the best of the authors' knowledge, this is the first work to perform systematic architecture optimization on RSNNs.
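The SC-ML wiring pattern can be pictured with a simple connectivity mask: dense recurrent blocks (motifs) on the diagonal, sparse lateral connections elsewhere. The sketch below illustrates that structure only; the motif count, motif size, and lateral sparsity are invented for the example and are not the paper's values.

```python
# Illustrative construction of an SC-ML-style block-sparse mask: dense
# recurrent weights inside each small motif, sparse lateral connections
# between motifs. Sizes and sparsity are assumptions.
import numpy as np

def sc_ml_mask(n_motifs=8, motif_size=16, lateral_p=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = n_motifs * motif_size
    mask = rng.random((n, n)) < lateral_p        # sparse lateral wiring
    for m in range(n_motifs):                    # dense recurrent motifs
        s = slice(m * motif_size, (m + 1) * motif_size)
        mask[s, s] = True
    np.fill_diagonal(mask, False)                # no self-connections
    return mask

mask = sc_ml_mask()
print("connection density:", mask.mean())       # far below a dense RSNN
```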
Affiliation(s)
- Peng Li
- Department of Electrical and Computer Engineering, University of California, Santa Barbara, Santa Barbara, CA, United States
3. Ahsan R, Wu Z, Jalal SA, Kapadia R. Ultralow Power Electronic Analog of a Biological FitzHugh-Nagumo Neuron. ACS Omega 2024;9:18062-18071. PMID: 38680341; PMCID: PMC11044232; DOI: 10.1021/acsomega.3c09936.
Abstract
Here, we introduce an electronic circuit that mimics the functionality of a biological spiking neuron following the FitzHugh-Nagumo (FN) model. The circuit consists of a tunnel diode exhibiting negative differential resistance (NDR) and an active inductive element implemented with a single MOSFET. The FN neuron converts a DC voltage excitation into voltage spikes analogous to biological action potentials. Through detailed simulation and modeling, we predict an energy cost of 2 aJ/cycle for these FN neurons. Such an FN neuron is CMOS compatible and enables ultralow-power oscillatory and spiking neural network hardware. We demonstrate that FN neurons can be used for oscillator-based computing in a coupled-oscillator network forming an oscillator Ising machine (OIM) that solves computationally hard NP-complete max-cut problems while remaining robust to process variations.
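For reference, the FitzHugh-Nagumo dynamics that the circuit emulates can be integrated in a few lines. The sketch uses the textbook parameter values (a = 0.7, b = 0.8, tau = 12.5), not the ones fitted to the 2 aJ/cycle device.

```python
# The FitzHugh-Nagumo equations, integrated with forward Euler. A constant
# input current I plays the role of the DC excitation in the abstract.
import numpy as np

def fn_step(v, w, I, dt=0.01, a=0.7, b=0.8, tau=12.5):
    dv = v - v**3 / 3 - w + I        # fast voltage variable (NDR branch)
    dw = (v + a - b * w) / tau       # slow recovery variable (inductive arm)
    return v + dt * dv, w + dt * dw

v, w = -1.0, 1.0
trace = []
for t in range(100000):
    v, w = fn_step(v, w, I=0.5)      # DC excitation -> tonic spiking
    trace.append(v)
print("min/max membrane excursion:", min(trace), max(trace))
```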
Affiliation(s)
- Ragib Ahsan
- Department of Electrical and Computer Engineering, University of Southern California, Los Angeles 90089-0001, United States
- Zezhi Wu
- Department of Electrical and Computer Engineering, University of Southern California, Los Angeles 90089-0001, United States
- Seyedeh Atiyeh Abbasi Jalal
- Department of Electrical and Computer Engineering, University of Southern California, Los Angeles 90089-0001, United States
- Rehan Kapadia
- Department of Electrical and Computer Engineering, University of Southern California, Los Angeles 90089-0001, United States
4. Pan W, Zhao F, Han B, Dong Y, Zeng Y. Emergence of brain-inspired small-world spiking neural network through neuroevolution. iScience 2024;27:108845. PMID: 38327781; PMCID: PMC10847652; DOI: 10.1016/j.isci.2024.108845.
Abstract
Studies suggest that the brain's high efficiency and low energy consumption may be closely related to its small-world topology and critical dynamics. However, existing efforts on performance-oriented structural evolution of spiking neural networks (SNNs) are time-consuming and ignore the core structural properties of the brain. Here, we introduce a multi-objective Evolutionary Liquid State Machine (ELSM), which blends the small-world coefficient and criticality into the evolutionary objective to guide the emergence of brain-inspired, efficient structures. Experiments show consistent, competitive performance: ELSM achieves 97.23% on NMNIST and outperforms LSM baselines on MNIST and Fashion-MNIST with accuracies of 98.12% and 88.81%, respectively. Further analysis shows its versatility and the spontaneous evolution of topological features such as hub nodes, short paths, long-tailed degree distributions, and numerous communities. This study evolves recurrent spiking neural networks into brain-inspired, energy-efficient structures, showcasing versatility across multiple tasks and potential for adaptive general artificial intelligence.
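A small-world score for a candidate reservoir graph can be computed directly; the sketch below uses networkx's sigma coefficient on a toy Watts-Strogatz graph. How ELSM actually combines this with criticality in its multi-objective fitness is not reproduced here.

```python
# Scoring a graph by small-worldness with the sigma coefficient,
# sigma = (C/C_rand) / (L/L_rand); values > 1 suggest small-world
# structure. The graph here is a toy stand-in for an evolved reservoir.
import networkx as nx

G = nx.watts_strogatz_graph(n=100, k=6, p=0.1, seed=0)
sigma = nx.sigma(G, niter=5, nrand=5, seed=0)  # few iterations for speed
print("small-world sigma:", sigma)
```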
Affiliation(s)
- Wenxuan Pan
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 101408, China
- Feifei Zhao
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Bing Han
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 101408, China
- Yiting Dong
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Yi Zeng
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 101408, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
5. Pan W, Zhao F, Zeng Y, Han B. Adaptive structure evolution and biologically plausible synaptic plasticity for recurrent spiking neural networks. Sci Rep 2023;13:16924. PMID: 37805632; PMCID: PMC10560283; DOI: 10.1038/s41598-023-43488-x.
Abstract
The architecture design and multi-scale learning principles of the human brain, which evolved over hundreds of millions of years, are crucial to realizing human-like intelligence. The spiking-neural-network-based Liquid State Machine (LSM) is a suitable architecture for studying brain-inspired intelligence because of its brain-inspired structure and its potential for integrating multiple biological principles. Existing research on LSMs focuses on particular perspectives, such as high-dimensional encoding, optimization of the liquid layer, network architecture search, and application to hardware devices, and still draws little in-depth inspiration from the brain's learning and structural evolution mechanisms. Considering these limitations, this paper presents a novel LSM learning model that integrates adaptive structural evolution with multi-scale biological learning rules. For structural evolution, an adaptive evolvable LSM model is developed to optimize the neural architecture of the liquid layer with respect to its separation property. For brain-inspired learning, we propose a dopamine-modulated Bienenstock-Cooper-Munro (DA-BCM) method that incorporates global long-term dopamine regulation and local trace-based BCM synaptic plasticity. Comparative experiments on different decision-making tasks show that introducing structural evolution of the liquid layer, together with DA-BCM regulation of the liquid and readout layers, improves the decision-making ability of the LSM and enables flexible adaptation to rule reversal. This work explores how evolution can help design more appropriate network architectures and how multi-scale neuroplasticity principles can coordinate the optimization and learning of LSMs for relatively complex decision-making tasks.
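A minimal reading of the DA-BCM rule, assuming a standard sliding-threshold BCM term multiplicatively gated by a global dopamine signal; the class name, constants, and weight clipping below are illustrative, not the paper's implementation.

```python
# Sketch of a dopamine-modulated BCM update: local BCM plasticity with a
# sliding threshold, gated by a global reward-driven dopamine factor.
import numpy as np

class DABCMSynapse:
    def __init__(self, w=0.5, eta=1e-3, tau_theta=100.0):
        self.w, self.eta, self.tau_theta = w, eta, tau_theta
        self.theta = 0.0                  # sliding modification threshold

    def update(self, pre, post, dopamine):
        # theta tracks a running average of squared postsynaptic activity
        self.theta += (post**2 - self.theta) / self.tau_theta
        # BCM term: potentiate above threshold, depress below it,
        # scaled by the global dopamine (reward) signal
        self.w += self.eta * dopamine * pre * post * (post - self.theta)
        self.w = float(np.clip(self.w, 0.0, 1.0))

syn = DABCMSynapse()
for _ in range(1000):
    syn.update(pre=1.0, post=0.8, dopamine=+1.0)   # rewarded trials
print("weight after rewarded trials:", syn.w)
```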
Affiliation(s)
- Wenxuan Pan
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Feifei Zhao
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Yi Zeng
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing, China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- Bing Han
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
6. Boriskov P, Velichko A, Shilovsky N, Belyaev M. Bifurcation and Entropy Analysis of a Chaotic Spike Oscillator Circuit Based on the S-Switch. Entropy 2022;24:1693. PMID: 36421548; PMCID: PMC9689857; DOI: 10.3390/e24111693.
Abstract
This paper presents a model and an experimental study of a chaotic spike oscillator based on a leaky integrate-and-fire (LIF) neuron containing a switching element with an S-type current-voltage characteristic (S-switch). The oscillator generates S-switch spikes in the form of chaotic pulse-position modulation, driven by feedback through the rate-coding instability of the LIF neuron. The oscillator model, with a piecewise function for the S-switch, has resistive feedback through a second-order filter. The oscillator circuit is built from four operational amplifiers and two field-effect transistors (MOSFETs), which form an S-switch based on a Schmitt trigger, an active RC filter, and a matching amplifier. We investigate the bifurcation diagrams of the model and the circuit and calculate the entropy of the oscillations. For the analog circuit, the "regular oscillation-chaos" transition is analysed in a series of tests initiated by a step voltage in the matching amplifier. Entropy values are used to estimate the average time for the transition of oscillations to chaos and the degree of signal correlation of the transition mode across tests. The results can be applied in various reservoir computing applications, for example, in choosing and configuring reservoir circuits for the LogNNet network.
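A toy discrete-time version of the LIF-plus-S-switch mechanism conveys the idea: the capacitor integrates input current, and a hysteretic switch (latching on above V_on, releasing below V_off) dumps the charge to produce pulses. Component values here are arbitrary, chosen only so the sketch runs; the second-order feedback filter that drives the circuit chaotic is omitted.

```python
# Hysteretic S-switch on a leaky integrator: charge up to V_on, latch the
# low-resistance branch, discharge to V_off, unlatch, repeat.
import numpy as np

def s_switch_lif(I_in, dt=1e-6, C=1e-7, R_leak=1e5,
                 V_on=1.0, V_off=0.2, R_on=1e2):
    v, on, spikes = 0.0, False, []
    for t in range(len(I_in)):
        i_leak = v / R_leak
        i_switch = v / R_on if on else 0.0   # S-type branch when latched
        v += dt * (I_in[t] - i_leak - i_switch) / C
        if not on and v >= V_on:             # hysteresis: latch on
            on = True
            spikes.append(t * dt)
        elif on and v <= V_off:              # unlatch on the way down
            on = False
    return spikes

spikes = s_switch_lif(np.full(200000, 2e-5))
print("spike count:", len(spikes))
```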
7. Deckers L, Tsang IJ, Van Leekwijck W, Latré S. Extended liquid state machines for speech recognition. Front Neurosci 2022;16:1023470. PMID: 36389242; PMCID: PMC9651956; DOI: 10.3389/fnins.2022.1023470.
Abstract
A liquid state machine (LSM) is a biologically plausible model of a cortical microcircuit. It consists of a random, sparse reservoir of recurrently connected spiking neurons with fixed synapses and a trainable readout layer. The LSM exhibits low training complexity and enables backpropagation-free learning in a powerful yet simple computing paradigm. In this work, the liquid state machine is enhanced with a set of bio-inspired extensions to create the extended liquid state machine (ELSM), which is evaluated on a set of speech datasets. First, we ensure excitatory/inhibitory (E/I) balance to let the LSM operate in the edge-of-chaos regime. Second, spike-frequency adaptation (SFA) is introduced to improve memory capabilities. Last, neuronal heterogeneity, in the form of differentiated time constants, is introduced to extract a richer dynamical LSM response. By including E/I balance, SFA, and neuronal heterogeneity, we show that the ELSM consistently improves upon the LSM while retaining the benefits of the straightforward LSM structure and training procedure. The proposed extensions led to up to a 5.2% increase in accuracy while decreasing the number of spikes in the ELSM by up to 20.2% on benchmark speech datasets. On some benchmarks, the ELSM even attains performance similar to the current state of the art in spiking neural networks. Furthermore, we show that the ELSM input-liquid and recurrent synaptic weights can be reduced to 4-bit resolution without significant loss in classification performance. The ELSM is thus a powerful, biologically plausible, and hardware-friendly spiking neural network model that can attain near state-of-the-art accuracy on speech recognition benchmarks for spiking neural networks.
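Of the three extensions, spike-frequency adaptation is the easiest to show in isolation: each spike increments an adaptation variable that decays slowly and raises the effective threshold, stretching successive inter-spike intervals. The sketch below uses illustrative constants, not the ELSM's.

```python
# LIF neuron with spike-frequency adaptation: an adaptation variable `a`
# grows with each spike, decays with tau_a, and raises the threshold.
import numpy as np

def lif_sfa(I, dt=1e-3, tau_m=0.02, tau_a=0.2, v_th=1.0, beta=0.3):
    v, a, spikes = 0.0, 0.0, []
    for t in range(len(I)):
        v += dt / tau_m * (I[t] - v)          # leaky membrane integration
        a -= dt / tau_a * a                   # adaptation decays slowly
        if v >= v_th + beta * a:              # adaptive threshold
            spikes.append(t)
            v = 0.0                           # reset membrane
            a += 1.0                          # accumulate adaptation
    return spikes

spikes = lif_sfa(np.full(2000, 2.0))
isis = np.diff(spikes)
print("first ISI:", isis[0], "last ISI:", isis[-1])  # ISIs lengthen
```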
Affiliation(s)
- Lucas Deckers
- imec IDLab, Department of Computer Science, University of Antwerp, Antwerp, Belgium
8. George AM, Dey S, Banerjee D, Mukherjee A, Suri M. Online Time-Series Forecasting using Spiking Reservoir. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.10.067.
9. Yegenoglu A, Subramoney A, Hater T, Jimenez-Romero C, Klijn W, Pérez Martín A, van der Vlag M, Herty M, Morrison A, Diaz-Pier S. Exploring Parameter and Hyper-Parameter Spaces of Neuroscience Models on High Performance Computers With Learning to Learn. Front Comput Neurosci 2022;16:885207. PMID: 35720775; PMCID: PMC9199579; DOI: 10.3389/fncom.2022.885207.
Abstract
Neuroscience models commonly have a high number of degrees of freedom, and only specific regions within the parameter space produce dynamics of interest. Tools and strategies that efficiently find these regions are therefore highly important for advancing brain research. Exploring high-dimensional parameter spaces through numerical simulation has been a frequently used technique in recent years across many areas of computational neuroscience, and high performance computing (HPC) now provides a powerful infrastructure to speed up such explorations and increase our general understanding of model behavior in reasonable time. Learning to learn (L2L) is a well-known concept in machine learning (ML) and a specific method for acquiring constraints to improve learning performance. The concept decomposes into a two-loop optimization process in which the optimization target can be any program: an artificial neural network, a spiking network, a single-cell model, or a whole-brain simulation. In this work, we present L2L as an easy-to-use and flexible framework for parameter and hyper-parameter space exploration of neuroscience models on HPC infrastructure. Our implementation of the L2L concept is written in Python; this open-source software allows several instances of an optimization target to be executed with different parameters in an embarrassingly parallel fashion on HPC. L2L provides a set of built-in optimizer algorithms that enable adaptive and efficient exploration of parameter spaces. Unlike other optimization toolboxes, L2L provides maximum flexibility in how the optimization target is executed. In this paper, we show a variety of neuroscience models being optimized within the L2L framework on different types of tasks, ranging from reproducing empirical data to learning how to solve a problem in a dynamic environment. We particularly focus on simulations with models ranging from the single cell to the whole brain, using a variety of simulation engines such as NEST, Arbor, TVB, OpenAI Gym, and NetLogo.
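The two-loop structure is easy to sketch: an outer optimizer proposes parameters, inner optimizee instances are evaluated independently (serially here; embarrassingly parallel on HPC), and their fitness drives the next generation. The quadratic toy task and the simple evolutionary outer loop below are stand-ins, not the L2L framework's API.

```python
# Two-loop optimization in miniature: outer evolutionary search over a
# hyper-parameter, inner loop runs the optimizee and reports fitness.
import numpy as np

rng = np.random.default_rng(0)

def optimizee_fitness(lr, steps=50):
    """Inner loop: gradient descent on f(x) = x^2 with the proposed lr."""
    x = 5.0
    for _ in range(steps):
        x -= lr * 2 * x
    return -abs(x)                      # higher is better

pop = rng.uniform(0.0, 1.0, size=16)    # outer loop: candidate lrs
for gen in range(20):
    fit = np.array([optimizee_fitness(lr) for lr in pop])
    elite = pop[np.argsort(fit)[-4:]]   # keep the best hyper-parameters
    pop = np.concatenate([elite,
                          elite.mean() + 0.1 * rng.standard_normal(12)])
    np.clip(pop, 0.0, 1.0, out=pop)
print("best learning rate found:", elite[-1])
```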
Affiliation(s)
- Alper Yegenoglu
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Department of Mathematics, Institute of Geometry and Applied Mathematics, RWTH Aachen University, Aachen, Germany
- Anand Subramoney
- Institute of Neural Computation, Ruhr University Bochum, Bochum, Germany
- Thorsten Hater
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Cristian Jimenez-Romero
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Wouter Klijn
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Aarón Pérez Martín
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Michiel van der Vlag
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Michael Herty
- Department of Mathematics, Institute of Geometry and Applied Mathematics, RWTH Aachen University, Aachen, Germany
- Abigail Morrison
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Computer Science 3-Software Engineering, RWTH Aachen University, Aachen, Germany
- Sandra Diaz-Pier
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
10. Yang S, Linares-Barranco B, Chen B. Heterogeneous Ensemble-Based Spike-Driven Few-Shot Online Learning. Front Neurosci 2022;16:850932. PMID: 35615277; PMCID: PMC9124799; DOI: 10.3389/fnins.2022.850932.
Abstract
Spiking neural networks (SNNs) are regarded as a promising candidate for addressing major challenges of current machine learning techniques, including the high energy consumption of deep neural networks. However, a large gap remains between the few-shot learning performance of SNNs and that of artificial neural networks. Importantly, existing spike-based few-shot learning models do not target robust learning grounded in spatiotemporal dynamics and sound machine learning theory. In this paper, we propose a novel spike-based framework built on entropy theory: heterogeneous ensemble-based spike-driven few-shot online learning (HESFOL). The HESFOL model uses entropy theory to establish a gradient-based few-shot learning scheme in a recurrent SNN architecture. We examine its performance on few-shot classification tasks using spiking patterns and the Omniglot dataset, as well as on a few-shot motor control task with an end-effector. Experimental results show that the proposed HESFOL scheme effectively improves the accuracy and robustness of spike-driven few-shot learning. More importantly, HESFOL highlights the application of modern entropy-based machine learning methods in state-of-the-art spike-driven learning algorithms. Our study thus provides new perspectives on integrating advanced entropy theory into machine learning to improve the learning performance of SNNs, which could be of great merit to applied developments with spike-based neuromorphic systems.
Affiliation(s)
- Shuangming Yang
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Badong Chen
- Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University, Xi’an, China
11. Schuman CD, Kulkarni SR, Parsa M, Mitchell JP, Date P, Kay B. Opportunities for neuromorphic computing algorithms and applications. Nat Comput Sci 2022;2:10-19. PMID: 38177712; DOI: 10.1038/s43588-021-00184-y.
Abstract
Neuromorphic computing technologies will be important for the future of computing, but much of the work in neuromorphic computing has focused on hardware development. Here, we review recent results in neuromorphic computing algorithms and applications. We highlight characteristics of neuromorphic computing technologies that make them attractive for the future of computing and we discuss opportunities for future development of algorithms and applications on these systems.
Affiliation(s)
- Catherine D Schuman
- Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN, USA
- Department of Electrical Engineering and Computer Science, University of Tennessee, Knoxville, TN, USA
- Shruti R Kulkarni
- Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN, USA
- Maryam Parsa
- Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN, USA
- Department of Electrical and Computer Engineering, George Mason University, Fairfax, VA, USA
- J Parker Mitchell
- Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN, USA
- Prasanna Date
- Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN, USA
- Bill Kay
- Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN, USA
12. Tian S, Qu L, Wang L, Hu K, Li N, Xu W. A neural architecture search based framework for liquid state machine design. Neurocomputing 2021. DOI: 10.1016/j.neucom.2021.02.076.
13. Neural Network for Low-Memory IoT Devices and MNIST Image Recognition Using Kernels Based on Logistic Map. Electronics 2020. DOI: 10.3390/electronics9091432.
Abstract
This study presents a neural network that uses filters based on the logistic map (LogNNet). LogNNet has a feedforward structure but possesses the properties of reservoir neural networks. The input weight matrix, generated by recurrent iteration of the logistic map, forms kernels that transform the input space into a higher-dimensional feature space. The most effective recognition of handwritten digits from MNIST-10 occurs when the logistic map behaves chaotically, and classification accuracy correlates with the value of the Lyapunov exponent. An advantage of implementing LogNNet on IoT devices is the significant memory savings: the presented architecture uses a weight array with a total memory footprint of 1-29 kB and achieves classification accuracies of 80.3-96.3%. Memory is saved because the processor sequentially recomputes the required weight coefficients during network operation from the analytical equation of the logistic map. At the same time, LogNNet has a simple algorithm and performance comparable to the best resource-efficient algorithms currently available. The proposed network can be used in artificial intelligence implementations on constrained devices with limited memory, which are integral building blocks of ambient intelligence in modern IoT environments. From a research perspective, LogNNet can contribute to understanding the fundamental influence of chaos on the behavior of reservoir-type neural networks.
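The core trick is compact enough to show directly: the input weights are never stored but regenerated on the fly by iterating the logistic map x ← r·x·(1−x), so a constrained device keeps only r, x0, and the trained readout. The mapping of map values to weights below is a plausible assumption, not the paper's exact recipe.

```python
# Regenerating an input weight matrix from the logistic map instead of
# storing it; r is set in the chaotic regime, matching the abstract's
# observation that chaotic behavior gives the best recognition.
import numpy as np

def logistic_weights(rows, cols, r=3.9, x0=0.1):
    x, W = x0, np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            x = r * x * (1 - x)          # recompute instead of storing
            W[i, j] = 2 * x - 1          # map (0, 1) values to [-1, 1]
    return W

W = logistic_weights(25, 784)            # 784 = flattened 28x28 MNIST digit
image = np.random.default_rng(0).random(784)   # stand-in for a digit
features = np.tanh(W @ image)            # higher-dimensional feature vector
print(features.shape)
```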
14. Wijesinghe P, Liyanagedera C, Roy K. Biologically Plausible Class Discrimination Based Recurrent Neural Network Training for Motor Pattern Generation. Front Neurosci 2020;14:772. PMID: 33013282; PMCID: PMC7461996; DOI: 10.3389/fnins.2020.00772.
Abstract
The biological brain stores a massive amount of information. Inspired by features of biological memory, we propose an algorithm to efficiently store different classes of spatio-temporal information in a Recurrent Neural Network (RNN). A given spatio-temporal input triggers a neuron firing pattern, known as an attractor, which conveys information about the class to which the input belongs. These attractors are the basic elements of memory in our RNN, and preparing a good set of attractors is key to efficiently storing temporal information. We achieve this by enhancing the "separation" and "approximation" properties associated with the attractors during RNN training. We further elaborate how these attractors can trigger an action via the readout of the RNN, similar to sensory-motor action processing in the cerebellar cortex. Using this separation- and approximation-based learning, we show how voice commands from different speakers trigger hand-drawn impressions of the spoken words; the method also recognizes the speaker's gender. Evaluated on the TI-46 speech corpus, the method achieves 98.6% classification accuracy on the TI-46 digit corpus.
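One plausible form of the "separation" measure being enhanced, assuming it is the ratio of inter-class distance between mean reservoir states to intra-class spread (random vectors stand in for real attractor states; the paper's exact definition may differ):

```python
# Separation score: mean distance between class centroids divided by the
# mean spread of states around their own centroid. Toy data only.
import numpy as np

def separation(states, labels):
    classes = np.unique(labels)
    means = np.array([states[labels == c].mean(axis=0) for c in classes])
    inter = np.mean([np.linalg.norm(a - b)
                     for i, a in enumerate(means)
                     for b in means[i + 1:]])          # between-class
    intra = np.mean([np.linalg.norm(s - means[l])
                     for s, l in zip(states, labels)])  # within-class
    return inter / (intra + 1e-12)

rng = np.random.default_rng(0)
offsets = rng.normal(0, 3, (3, 50))                 # one offset per class
labels = np.repeat(np.arange(3), 20)                # 3 toy classes
states = offsets[labels] + rng.normal(0, 1, (60, 50))
print("separation score:", separation(states, labels))
```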