1
Das H, Schuman C, Chakraborty NN, Rose GS. Enhanced read resolution in reconfigurable memristive synapses for Spiking Neural Networks. Sci Rep 2024; 14:8897. PMID: 38632304; PMCID: PMC11024114; DOI: 10.1038/s41598-024-58947-2.
Abstract
The synapse is a key circuit element in any memristor-based neuromorphic computing system. A memristor is a two-terminal analog memory device. Memristive synapses suffer from various challenges, including high voltage requirements, SET or RESET failures, and READ margin issues that can degrade the distinguishability of stored weights. Enhancing READ resolution is therefore important for improving the reliability of memristive synapses. Typically, the READ resolution is very small for a memristive synapse with 4-bit data precision. This work presents a step-by-step analysis to enhance the READ current resolution, i.e., the READ current difference between two resistance levels, for a current-controlled memristor-based synapse. An empirical model is used to characterize the HfO2-based memristive device. The 1st- and 2nd-stage devices of the proposed synapse design can be scaled to enhance the READ current margin by up to ∼4.3× and ∼21%, respectively. Moreover, READ current resolution can be enhanced with run-time adaptation techniques such as READ voltage scaling and body biasing, which improve the READ current resolution by about 46% and 15%, respectively. TENNLab's neuromorphic computing framework is leveraged to evaluate the effect of READ current resolution on classification, control, and reservoir computing applications. Higher READ current resolution yields better accuracy than lower resolution, even under different levels of read noise.
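The intuition behind READ voltage scaling can be illustrated with a toy ohmic model. This is not the paper's empirical HfO2 device model; the resistance values and READ voltages below are hypothetical, chosen only to show why scaling the READ voltage widens the current margin between adjacent resistance states:

```python
# Illustrative sketch (not the paper's model): READ current margin
# between two adjacent resistance states of a multi-level memristor,
# and its dependence on the READ voltage. All values are hypothetical.

def read_current(v_read, resistance):
    """Ohmic approximation of the READ current for one state."""
    return v_read / resistance

def read_margin(v_read, r_low, r_high):
    """Current difference between two adjacent resistance levels."""
    return read_current(v_read, r_low) - read_current(v_read, r_high)

# Two adjacent levels of a hypothetical 4-bit (16-level) device.
R1, R2 = 10e3, 12e3                    # ohms

margin_lo = read_margin(0.1, R1, R2)   # 100 mV READ pulse
margin_hi = read_margin(0.2, R1, R2)   # scaled-up 200 mV READ pulse

# In this linear model, doubling the READ voltage doubles the absolute
# current margin, which is the intuition behind READ voltage scaling.
print(f"margin @0.1 V: {margin_lo * 1e6:.2f} uA")
print(f"margin @0.2 V: {margin_hi * 1e6:.2f} uA")
```

A real device is nonlinear, so the actual gain from voltage scaling (the paper's ∼46%) differs from this ideal 2× picture; the sketch only conveys the direction of the effect.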
Affiliation(s)
- Hritom Das
- Department of Electrical Engineering and Computer Science, University of Tennessee, Knoxville, TN 37996, USA
- Catherine Schuman
- Department of Electrical Engineering and Computer Science, University of Tennessee, Knoxville, TN 37996, USA
- Nishith N Chakraborty
- Department of Electrical Engineering and Computer Science, University of Tennessee, Knoxville, TN 37996, USA
- Garrett S Rose
- Department of Electrical Engineering and Computer Science, University of Tennessee, Knoxville, TN 37996, USA
2
Asghar MS, Arslan S, Al-Hamid AA, Kim H. A Compact and Low-Power SoC Design for Spiking Neural Network Based on Current Multiplier Charge Injector Synapse. Sensors (Basel) 2023; 23:6275. PMID: 37514571; PMCID: PMC10383375; DOI: 10.3390/s23146275.
Abstract
This paper presents a compact analog system-on-chip (SoC) implementation of a spiking neural network (SNN) for low-power Internet of Things (IoT) applications. A low-power SNN SoC requires optimization not only of the SNN model but also of the architecture and circuit designs. In this work, the SNN is built from analog neuron and synapse circuits designed to minimize both chip area and power consumption. The proposed synapse circuit is based on a current multiplier charge injector (CMCI) circuit, which significantly reduces power consumption and chip area compared with previous work while remaining scalable to higher resolutions. The proposed neuron circuit employs an asynchronous structure, which makes it highly sensitive to input synaptic currents and enables higher energy efficiency. To compare the area and power consumption of the proposed SoC, a digital SoC for the same SNN model was implemented on an FPGA. The proposed SNN chip, trained on the MNIST dataset, achieves a classification accuracy of 96.56%. The chip was fabricated in a 65 nm CMOS process; it occupies 0.96 mm2 and consumes an average power of 530 μW, 200 times lower than its digital counterpart.
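The behavior of a current-driven spiking neuron like the one described above can be approximated in software with a leaky integrate-and-fire model. This is only a behavioral sketch, not the chip's asynchronous analog circuit; the time constant, threshold, and input current below are illustrative assumptions:

```python
# Behavioral sketch of a leaky integrate-and-fire (LIF) neuron driven
# by synaptic current. Not the chip's circuit; parameters are
# illustrative, not taken from the paper.

def lif_run(currents, tau=20.0, v_th=1.0, dt=1.0):
    """Integrate input currents step by step; emit a spike (True) when
    the membrane potential crosses v_th, then reset it to zero."""
    v, spikes = 0.0, []
    for i in currents:
        v += dt * (-v / tau + i)   # leaky integration of input current
        if v >= v_th:
            spikes.append(True)
            v = 0.0                # reset after the spike
        else:
            spikes.append(False)
    return spikes

# A constant input current charges the membrane until the neuron fires.
out = lif_run([0.2] * 10)
```

The chip's asynchronous neuron reacts continuously to input charge rather than on a clock; the discrete-time loop here is just the usual software stand-in for that dynamic.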
Affiliation(s)
- Malik Summair Asghar
- Department of Electronics, College of Electrical and Computer Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
- Department of Electrical and Computer Engineering, COMSATS University Islamabad, Abbottabad Campus, University Road, Tobe Camp, Abbottabad 22044, Pakistan
- Saad Arslan
- TSY Design (Pvt.) Ltd., Islamabad 44000, Pakistan
- Ali A. Al-Hamid
- Department of Electronics, College of Electrical and Computer Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
- HyungWon Kim
- Department of Electronics, College of Electrical and Computer Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
3
Ou W, Xiao S, Zhu C, Han W, Zhang Q. An overview of brain-like computing: Architecture, applications, and future trends. Front Neurorobot 2022; 16:1041108. PMID: 36506817; PMCID: PMC9730831; DOI: 10.3389/fnbot.2022.1041108.
Abstract
As Moore's law approaches its end, scientists are looking to brain-like computing for a way forward, yet we still know very little about how the brain works. At the current stage of research, brain-like models are structured to mimic the brain in order to reproduce some of its functions, and the theories and models are then refined iteratively. This article summarizes the important progress and current status of brain-like computing; surveys the generally accepted and feasible brain-like computing models; introduces, analyzes, and compares the more mature brain-like computing chips; outlines the attempts and challenges of brain-like computing applications at this stage; and looks ahead to the future development of the field. It is hoped that this summary will help researchers and practitioners quickly grasp the research progress in brain-like computing and acquire the relevant application methods and knowledge.
Affiliation(s)
- Wei Ou
- The School of Cyberspace Security, Hainan University, Hainan, China
- Henan Key Laboratory of Network Cryptography Technology, Zhengzhou, China
- Shitao Xiao
- The School of Computer Science and Technology, Hainan, China
- Chengyu Zhu
- The School of Cyberspace Security, Hainan University, Hainan, China
- Wenbao Han
- The School of Cyberspace Security, Hainan University, Hainan, China
- Qionglu Zhang
- State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China
4
Gao S, Xiang SY, Song ZW, Han YN, Zhang YN, Hao Y. Motion detection and direction recognition in a photonic spiking neural network consisting of VCSELs-SA. Opt Express 2022; 30:31701-31713. PMID: 36242247; DOI: 10.1364/oe.465653.
Abstract
Motion detection and direction recognition are two fundamental visual functions among the many cognitive functions performed by the human visual system. The retina and visual cortex are the core components of the visual nervous system: the retina transmits electrical signals, converted from light, to the visual cortex of the brain. We propose a photonic spiking neural network (SNN) based on vertical-cavity surface-emitting lasers with an embedded saturable absorber (VCSELs-SA) exhibiting temporal integration effects, and demonstrate that motion detection and direction recognition tasks can be solved by mimicking the visual nervous system. Simulation results reveal that the proposed photonic SNN, with a modified supervised algorithm combining the tempotron and the STDP rule, can correctly detect motion and recognize direction angles, and is robust to time jitter and current differences between VCSEL-SAs. The proposed approach provides a low-power photonic neuromorphic system for real-time information processing, offering theoretical support for future large-scale applications of hardware photonic SNNs.
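The STDP component of the learning rule mentioned above has a standard pair-based form: a presynaptic spike that precedes a postsynaptic spike potentiates the weight, while the reverse order depresses it, with exponentially decaying influence in the spike-time difference. The amplitudes and time constant below are generic illustrative choices, not the paper's values:

```python
# Minimal sketch of the pair-based STDP rule: potentiation when the
# presynaptic spike precedes the postsynaptic spike, depression
# otherwise. Amplitudes and time constant are illustrative.
import math

def stdp_dw(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:                              # pre before post -> LTP
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)     # post before pre -> LTD

ltp = stdp_dw(t_pre=0.0, t_post=5.0)   # positive: potentiation
ltd = stdp_dw(t_pre=5.0, t_post=0.0)   # negative: depression
```

The paper combines this unsupervised window with a tempotron-style supervised error signal; the sketch covers only the STDP half of that hybrid rule.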
5
Optimal Architecture of Floating-Point Arithmetic for Neural Network Training Processors. Sensors 2022; 22:1230. PMID: 35161975; PMCID: PMC8840430; DOI: 10.3390/s22031230.
Abstract
The convergence of artificial intelligence (AI) and the Internet of Things is one of the critical technologies of the recent fourth industrial revolution. The AIoT (Artificial Intelligence Internet of Things) is expected to be a solution that aids rapid and secure data processing. While the success of AIoT demands low-power neural network processors, most recent research has focused on accelerator designs for inference only. The growing interest in self-supervised and semi-supervised learning now calls for processors that offload the training process in addition to inference. Training with high accuracy goals requires floating-point operators, but higher-precision floating-point arithmetic architectures tend to consume a large area and much energy, so an energy-efficient, compact accelerator is required. The proposed architecture supports training in 32-bit, 24-bit, 16-bit, and mixed precisions to find the optimal floating-point format for low-power, small edge devices. The proposed accelerator engines have been verified on FPGA for both inference and training on the MNIST image dataset. The combination of a 24-bit custom FP format with 16-bit Brain FP achieves an accuracy of more than 93%. ASIC implementation of this optimized mixed-precision accelerator in TSMC 65 nm reveals an active area of 1.036 × 1.036 mm2 and an energy consumption of 4.445 µJ per training of one image. Compared with the 32-bit architecture, the size and energy are reduced by factors of 4.7 and 3.91, respectively. A CNN structure using floating-point numbers with an optimized data path will therefore contribute significantly to the AIoT field, which requires small area, low energy, and high accuracy.
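Reduced-precision formats like the ones explored above can be emulated in software by truncating mantissa bits of an IEEE-754 single. The sketch below assumes formats that keep the 8-bit exponent and shrink only the mantissa (7 bits for a bfloat16-like format, 15 bits for a hypothetical 24-bit format); the paper's custom format may allocate bits differently:

```python
# Emulate a reduced-precision float format by zeroing the low-order
# mantissa bits of an IEEE-754 binary32 value. The bit widths chosen
# below are assumptions for illustration, not the paper's format.
import struct

def truncate_float32(x, kept_mantissa_bits):
    """Clear the low-order mantissa bits of a float32 value."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    drop = 23 - kept_mantissa_bits     # binary32 has 23 mantissa bits
    bits &= ~((1 << drop) - 1)         # zero out the dropped bits
    return struct.unpack("<f", struct.pack("<I", bits))[0]

x = 1.2345678
bf16 = truncate_float32(x, 7)    # bfloat16-like: 1 sign + 8 exp + 7 mant
fp24 = truncate_float32(x, 15)   # 24-bit-like:   1 sign + 8 exp + 15 mant

# Fewer mantissa bits -> coarser representation -> larger rounding error.
print(f"bf16 error: {abs(x - bf16):.2e}, fp24 error: {abs(x - fp24):.2e}")
```

Keeping the full 8-bit exponent preserves float32's dynamic range while trading mantissa precision for area and energy, which is the design trade-off the mixed-precision search above explores in hardware.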
6
Doborjeh M, Doborjeh Z, Kasabov N, Barati M, Wang GY. Deep Learning of Explainable EEG Patterns as Dynamic Spatiotemporal Clusters and Rules in a Brain-Inspired Spiking Neural Network. Sensors 2021; 21:4900. PMID: 34300640; PMCID: PMC8309947; DOI: 10.3390/s21144900.
Abstract
The paper proposes a new method for deep learning and knowledge discovery in a brain-inspired Spiking Neural Network (SNN) architecture that enhances the model's explainability while learning from streaming spatiotemporal brain data (STBD) in an incremental, online mode of operation. This enables the extraction of spatiotemporal rules from SNN models that explain why a certain decision (output prediction) was made by the model. During the learning process, the SNN creates dynamic neural clusters, captured as polygons, which evolve in time and continuously change their size and shape. The dynamic patterns of the clusters are quantitatively analyzed to identify the important STBD features that correspond to the most activated brain regions. We studied the trend of dynamically created clusters and their spike-driven events that occur together in specific space and time. The research contributes: (1) enhanced interpretability of SNN learning behavior through dynamic neural clustering; (2) feature selection and enhanced classification accuracy; (3) spatiotemporal rules to support model explainability; and (4) a better understanding of the dynamics of STBD in terms of feature interaction. The clustering method was applied to a case study of electroencephalogram (EEG) data recorded from a healthy control group (n = 21) and opiate users (n = 18) while they performed a cognitive task. The SNN models of EEG demonstrated different trends of dynamic clusters across the groups. This motivated the selection of a group of marker EEG features and improved EEG classification accuracy to 92%, compared with all-feature classification. During learning of the EEG data, the areas of neurons in the SNN model that form adjacent clusters (corresponding to neighboring EEG channels) were detected as fuzzy boundaries that explain overlapping activity of brain regions for each group of subjects.
Affiliation(s)
- Maryam Doborjeh
- School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, Auckland 1010, New Zealand
- Zohreh Doborjeh
- Department of Audiology, Faculty of Medical and Health Sciences, School of Population Health, The University of Auckland, Auckland 1023, New Zealand
- Nikola Kasabov
- School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, Auckland 1010, New Zealand
- George Moore Chair of Data Analytics, School of Computing, Engineering and Intelligent Systems, Ulster University, Derry/Londonderry BT48 7JL, UK
- Molood Barati
- School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, Auckland 1010, New Zealand
- Grace Y. Wang
- Department of Psychology and Neuroscience, Auckland University of Technology, Auckland 0627, New Zealand