1. Brain-inspired chaotic spiking backpropagation. Natl Sci Rev 2024; 11:nwae037. PMID: 38707198; PMCID: PMC11067972; DOI: 10.1093/nsr/nwae037. Received 10/10/2023; revised 12/19/2023; accepted 01/17/2024.
Abstract
Spiking neural networks (SNNs) have superior energy efficiency due to their spiking signal transmission, which mimics biological nervous systems, but they are difficult to train effectively. Although surrogate gradient-based methods offer a workable solution, trained SNNs frequently fall into local minima because they are still primarily driven by gradient dynamics. Inspired by the chaotic dynamics in animal brain learning, we propose a chaotic spiking backpropagation (CSBP) method that introduces a loss function to generate brain-like chaotic dynamics and further exploits their ergodic and pseudo-random nature to make SNN learning effective and robust. From a computational viewpoint, we found that CSBP significantly outperforms current state-of-the-art methods on both neuromorphic data sets (e.g. DVS-CIFAR10 and DVS-Gesture) and large-scale static data sets (e.g. CIFAR100 and ImageNet) in terms of accuracy and robustness. From a theoretical viewpoint, we show that the learning process of CSBP is initially chaotic, then subject to various bifurcations, and eventually converges to gradient dynamics, consistent with observations of animal brain activity. Our work provides a superior core tool for direct SNN training and offers new insights into understanding the learning process of a biological brain.
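The annealing from chaotic search to plain gradient descent can be sketched with a toy scalar example (the double-well loss, the logistic-map-style chaotic term, and all constants below are illustrative assumptions, not the paper's actual CSBP formulation):

```python
def csbp_like_descent(grad, x0, steps=3000, lr=0.1, z0=2.0, beta=0.996):
    """Gradient descent plus an annealed chaotic self-feedback term.

    The chaotic strength z decays geometrically, so learning is
    initially chaotic (ergodic exploration of the loss landscape)
    and gradually converges to pure gradient dynamics.
    """
    x, z = x0, z0
    for _ in range(steps):
        chaos = z * x * (1.0 - x)          # logistic-map-like perturbation (assumed form)
        x = x - lr * grad(x) + lr * chaos
        z *= beta                          # anneal chaotic strength toward zero
    return x

# gradient of a double-well loss with minima near x = 0.25 and x = 0.75
loss_grad = lambda x: 4.0 * (x - 0.25) * (x - 0.75) * (2.0 * x - 1.0)
x_final = csbp_like_descent(loss_grad, x0=0.6)
```

Early on the chaotic term dominates and the iterate wanders; as z decays the update reduces to ordinary gradient descent, mirroring the chaos-to-bifurcation-to-convergence route the abstract describes.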
2. Spiking neural networks for nonlinear regression. R Soc Open Sci 2024; 11:231606. PMID: 38699557; PMCID: PMC11062414; DOI: 10.1098/rsos.231606. Received 10/25/2023; revised 01/25/2024; accepted 02/12/2024.
Abstract
Spiking neural networks (SNNs), often referred to as the third generation of neural networks, carry the potential for a massive reduction in memory and energy consumption over traditional, second-generation neural networks. Inspired by the undisputed efficiency of the human brain, they introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware. Energy efficiency plays a crucial role in many engineering applications, for instance in structural health monitoring. Machine learning in engineering contexts, especially in data-driven mechanics, focuses on regression. While regression with SNNs has already been discussed in a variety of publications, this contribution provides a novel formulation of SNN-based regression aimed at both accuracy and energy efficiency. In particular, a network topology for decoding binary spike trains to real numbers is introduced, using the membrane potential of spiking neurons. Several spiking neural architectures, ranging from simple spiking feed-forward to complex spiking long short-term memory neural networks, are derived. Since the proposed architectures do not contain any dense layers, they exploit the full potential of SNNs in terms of energy efficiency. At the same time, the accuracy of the proposed SNN architectures is demonstrated by numerical examples, namely different material models. Linear and nonlinear, as well as history-dependent, material models are examined. While this contribution focuses on mechanical examples, the interested reader may regress any custom function by adapting the published source code.
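The membrane-potential decoding idea can be illustrated in a few lines (a minimal NumPy sketch assuming a non-firing leaky readout neuron; the layer size, leak constant, and random inputs are illustrative, not the paper's architecture):

```python
import numpy as np

def membrane_readout(spikes, w, beta=0.9):
    """Decode binary spike trains into a real number by reading the
    membrane potential of a leaky integrator that never fires.

    spikes: (T, N) binary array of N presynaptic spike trains.
    w:      (N,) readout weights.
    Returns the membrane potential after the last time step.
    """
    u = 0.0
    for t in range(spikes.shape[0]):
        u = beta * u + spikes[t] @ w   # leak, then add weighted spikes
    return u

rng = np.random.default_rng(0)
spikes = (rng.random((50, 8)) < 0.3).astype(float)  # toy Bernoulli spike trains
w = rng.normal(size=8)
y = membrane_readout(spikes, w)   # a real-valued regression output
```

Because the readout never spikes, its membrane potential is a continuous quantity, which is what makes regression to arbitrary real targets possible without a dense output layer.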
3. Low-Power Perovskite Neuromorphic Synapse with Enhanced Photon Efficiency for Directional Motion Perception. ACS Appl Mater Interfaces 2024; 16:22303-22311. PMID: 38626428; DOI: 10.1021/acsami.4c04398.
Abstract
The advancement of artificial-intelligence vision systems heavily relies on the development of fast and accurate optical imaging detection, identification, and tracking. Constrained by restricted response speeds and low computational efficiency, traditional optoelectronic information devices struggle with real-time optical imaging tasks and with efficiently processing complex visual data. To address these limitations, this study introduces a novel photomemristor utilizing halide perovskite thin films. The fabrication process involves adjusting the iodide proportion to enhance the quality of the halide perovskite films and minimize the dark current. The photomemristor exhibits a high external quantum efficiency of over 85%, which leads to a low energy consumption of 0.6 nJ. The spike-timing-dependent plasticity characteristics of the device are leveraged to construct a spiking neural network that achieves a 99.1% accuracy rate in directional perception of moving objects. These notable results offer a promising hardware solution for efficient optoneuromorphic and edge computing applications.
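Spike-timing-dependent plasticity of the kind the device exhibits is commonly modeled with a pair-based exponential learning window; a sketch (the amplitudes and time constant are generic textbook values, not fitted to this device):

```python
import numpy as np

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP weight change for dt = t_post - t_pre (ms).

    Pre-before-post (dt > 0) potentiates; post-before-pre (dt < 0)
    depresses, each with an exponential decay of time constant tau.
    """
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

dts = np.array([-40.0, -10.0, 10.0, 40.0])
dws = stdp_dw(dts)   # depression for negative dt, potentiation for positive
```

The sign and magnitude of the weight change depend only on the relative spike timing, which is the property the network exploits to learn motion direction.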
4. Mott memristor based stochastic neurons for probabilistic computing. Nanotechnology 2024; 35:295201. PMID: 38593756; DOI: 10.1088/1361-6528/ad3c4b. Received 11/25/2023; accepted 04/09/2024.
Abstract
Many studies suggest that probabilistic spiking in biological neural systems is beneficial as it aids learning and provides Bayesian inference-like dynamics. If appropriately utilised, noise and stochasticity in nanoscale devices can benefit neuromorphic systems. In this paper, we build a stochastic leaky integrate and fire (LIF) neuron, utilising a Mott memristor's inherent stochastic switching dynamics. We demonstrate that the developed LIF neuron is capable of biological neural dynamics. We leverage these characteristics of the proposed LIF neuron by integrating it into a population-coded spiking neural network and a spiking restricted Boltzmann machine (sRBM), thereby showcasing its ability to implement probabilistic learning and inference. The sRBM achieves a software-comparable accuracy of 87.13%. Unlike CMOS-based probabilistic neurons, our design does not require any external noise sources. The designed neurons are highly energy efficient and ultra-compact, requiring only three components: a resistor, a capacitor and a memristor device.
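A stochastic LIF neuron of this kind can be sketched by jittering the firing threshold at each step, standing in for the memristor's random switching (the Gaussian threshold noise and all constants are modeling assumptions, not measured device behavior):

```python
import numpy as np

def stochastic_lif(i_in, thresh=1.0, leak=0.95, sigma=0.1, seed=0):
    """Leaky integrate-and-fire neuron with a noisy threshold.

    The per-step Gaussian jitter on the threshold stands in for the
    Mott memristor's stochastic switching; firing becomes probabilistic.
    """
    rng = np.random.default_rng(seed)
    v, spikes = 0.0, []
    for i_t in i_in:
        v = leak * v + i_t                             # leak + integrate
        if v >= thresh + sigma * rng.standard_normal():
            spikes.append(1)                           # fire ...
            v = 0.0                                    # ... and reset
        else:
            spikes.append(0)
    return np.array(spikes)

spikes = stochastic_lif(np.full(200, 0.08))   # constant drive
rate = spikes.mean()                          # stochastic firing rate
```

Under constant drive the spike times vary from trial to trial (with different seeds), which is exactly the built-in randomness a sampling-based network such as an sRBM needs, with no external noise source.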
5. Co-learning synaptic delays, weights and adaptation in spiking neural networks. Front Neurosci 2024; 18:1360300. PMID: 38680445; PMCID: PMC11055628; DOI: 10.3389/fnins.2024.1360300. Received 12/22/2023; accepted 03/20/2024.
Abstract
Spiking neural networks (SNNs) distinguish themselves from artificial neural networks (ANNs) by their inherent temporal processing and spike-based computations, enabling power-efficient implementation in neuromorphic hardware. In this study, we demonstrate that data processing with spiking neurons can be enhanced by co-learning the synaptic weights with two other biologically inspired neuronal features: (1) a set of parameters describing neuronal adaptation processes and (2) synaptic propagation delays. The former allows a spiking neuron to learn how to specifically react to incoming spikes based on its past. The trained adaptation parameters result in neuronal heterogeneity, which leads to a greater variety of available spike patterns and is also found in the brain. The latter enables the network to explicitly correlate spike trains that are temporally distant. Synaptic delays reflect the time an action potential requires to travel from one neuron to another. We show that each of the co-learned features separately leads to an improvement over the baseline SNN and that the combination of both leads to state-of-the-art SNN results on all speech recognition datasets investigated with a simple two-hidden-layer feed-forward network. Our SNN outperforms the benchmark ANN on the neuromorphic datasets (Spiking Heidelberg Digits and Spiking Speech Commands), even with fewer trainable parameters. On the 35-class Google Speech Commands dataset, our SNN also outperforms a GRU of similar size. Our study presents brain-inspired improvements in SNNs that enable them to excel over an equivalent ANN of similar size on tasks with rich temporal dynamics.
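The effect of per-synapse delays can be sketched by shifting each presynaptic spike train before the weighted sum (fixed integer delays here, purely for illustration; in the paper both the delays and the weights are learned jointly):

```python
import numpy as np

def delayed_input(spikes, weights, delays):
    """Weighted sum of spike trains, each shifted by its synaptic delay.

    spikes:  (T, N) binary spike trains.
    weights: (N,) synaptic weights.
    delays:  (N,) non-negative integer delays in time steps.
    Returns the (T,) input current to the postsynaptic neuron.
    """
    T, _ = spikes.shape
    out = np.zeros(T)
    for j, (w, d) in enumerate(zip(weights, delays)):
        d = int(d)
        out[d:] += w * spikes[:T - d, j]   # a spike at t arrives at t + d
    return out

spikes = np.zeros((8, 1))
spikes[2, 0] = 1.0                          # one spike at t = 2
out = delayed_input(spikes, np.array([2.0]), np.array([3]))
```

Shifting inputs in time lets the neuron align spike trains that would otherwise arrive too far apart to interact, which is the mechanism behind the reported gains on temporally rich tasks.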
6. Closing the loop: High-speed robotics with accelerated neuromorphic hardware. Front Neurosci 2024; 18:1360122. PMID: 38595976; PMCID: PMC11002072; DOI: 10.3389/fnins.2024.1360122. Received 12/22/2023; accepted 03/05/2024.
Abstract
The BrainScaleS-2 system is an established analog neuromorphic platform with versatile applications in the diverse fields of computational neuroscience and spike-based machine learning. In this work, we extend the system with a configurable real-time event interface that enables a tight coupling of its distinct analog network core to external sensors and actuators. The 1,000-fold acceleration of the emulated nerve cells allows us to target high-speed robotic applications that require precise timing on a microsecond scale. As a showcase, we present a closed-loop setup for commutating brushless DC motors: we utilize PyTorch to train a spiking neural network emulated on the analog substrate to control an electric motor from a sensory event stream. The presented system enables research in the area of event-driven controllers for high-speed robotics, including self-supervised and biologically inspired online learning for such applications.
7. Agreeing to Stop: Reliable Latency-Adaptive Decision Making via Ensembles of Spiking Neural Networks. Entropy (Basel) 2024; 26:126. PMID: 38392381; PMCID: PMC10888006; DOI: 10.3390/e26020126. Received 12/16/2023; revised 01/27/2024; accepted 01/30/2024.
Abstract
Spiking neural networks (SNNs) are recurrent models that can leverage sparsity in input time series to efficiently carry out tasks such as classification. Additional efficiency gains can be obtained if decisions are taken as early as possible as a function of the complexity of the input time series. The choice of when to stop inference must rely on an estimate of the current accuracy of the decision. Prior work demonstrated the use of conformal prediction (CP) as a principled way to quantify uncertainty and support adaptive-latency decisions in SNNs. In this paper, we propose to enhance the uncertainty quantification capabilities of SNNs by implementing ensemble models for the purpose of improving the reliability of stopping decisions. Intuitively, an ensemble of multiple models can decide when to stop more reliably by selecting times at which most models agree that the current accuracy level is sufficient. The proposed method relies on different forms of information pooling from ensemble models and offers theoretical reliability guarantees. We specifically show that variational inference-based ensembles with p-variable pooling significantly reduce the average latency of state-of-the-art methods while maintaining reliability guarantees.
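One simple, valid way to pool p-variables from arbitrarily dependent ensemble members is twice their median (a Vovk-Wang-style merge; the paper's exact pooling functions and thresholds may differ, so treat this as an illustrative sketch):

```python
import numpy as np

def should_stop(member_p_values, alpha=0.1):
    """Latency-adaptive stopping rule for an SNN ensemble.

    Each member supplies a conformal p-value for the null hypothesis
    'the current top label is wrong'. Twice the median of arbitrarily
    dependent p-values is again a valid p-value, so we stop (commit to
    a decision) once the pooled value drops below alpha.
    """
    pooled = min(1.0, 2.0 * float(np.median(member_p_values)))
    return pooled <= alpha

# most members already confident -> stop early
early = should_stop([0.01, 0.02, 0.03])
# members still uncertain -> keep integrating input time steps
late = should_stop([0.20, 0.30, 0.40])
```

Because the median requires most members to agree, a single overconfident model cannot trigger an early stop, which is the intuition behind the reliability gain.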
8. Training multi-layer spiking neural networks with plastic synaptic weights and delays. Front Neurosci 2024; 17:1253830. PMID: 38328553; PMCID: PMC10847234; DOI: 10.3389/fnins.2023.1253830. Received 07/06/2023; accepted 12/04/2023.
Abstract
Spiking neural networks are usually considered the third generation of neural networks; they hold the potential for ultra-low power consumption on corresponding hardware platforms and are very suitable for temporal information processing. However, how to efficiently train spiking neural networks remains an open question, and most existing learning methods only consider the plasticity of synaptic weights. In this paper, we propose a new supervised learning algorithm for multi-layer spiking neural networks based on the typical SpikeProp method. In the proposed method, both the synaptic weights and delays are treated as adjustable parameters to improve both the biological plausibility and the learning performance. In addition, the proposed method inherits the advantages of SpikeProp in making full use of the temporal information of spikes. Various experiments are conducted to verify the performance of the proposed method, and the results demonstrate that it achieves competitive learning performance compared with existing related works. Finally, the differences between the proposed method and existing mainstream multi-layer training algorithms are discussed.
9. A Novel Robotic Controller Using Neural Engineering Framework-Based Spiking Neural Networks. Sensors (Basel) 2024; 24:491. PMID: 38257584; PMCID: PMC10819625; DOI: 10.3390/s24020491. Received 12/26/2023; revised 01/11/2024; accepted 01/11/2024.
Abstract
This paper investigates spiking neural networks (SNN) for novel robotic controllers with the aim of improving accuracy in trajectory tracking. By emulating the operation of the human brain through the incorporation of temporal coding mechanisms, SNN offer greater adaptability and efficiency in information processing, providing significant advantages in the representation of temporal information in robotic arm control compared to conventional neural networks. Exploring specific implementations of SNN in robot control, this study analyzes neuron models and learning mechanisms inherent to SNN. Based on the principles of the Neural Engineering Framework (NEF), a novel spiking PID controller is designed and simulated for a 3-DoF robotic arm using Nengo and MATLAB R2022b. The controller demonstrated good accuracy and efficiency in following designated trajectories, showing minimal deviations, overshoots, or oscillations. A thorough quantitative assessment, utilizing performance metrics like root mean square error (RMSE) and the integral of the absolute value of the time-weighted error (ITAE), provides additional validation for the efficacy of the SNN-based controller. Competitive performance was observed, surpassing a fuzzy controller by 5% in terms of the ITAE index and a conventional PID controller by 6% in the ITAE index and 30% in RMSE performance. This work highlights the utility of NEF and SNN in developing effective robotic controllers, laying the groundwork for future research focused on SNN adaptability in dynamic environments and advanced robotic applications.
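The controller being approximated is an ordinary discrete PID; a non-spiking reference form may clarify what the spiking network computes (the gains and time step below are illustrative, and the NEF implementation represents each term with spiking populations rather than this direct arithmetic):

```python
def pid_step(err, state, kp=1.0, ki=0.1, kd=0.05, dt=0.01):
    """One discrete PID update: u = kp*e + ki*integral(e) + kd*de/dt.

    state carries (integral, previous error) between calls, which is
    the quantity spiking populations would hold in an NEF network.
    """
    integ, prev_err = state
    integ += err * dt                  # accumulate the integral term
    deriv = (err - prev_err) / dt      # finite-difference derivative
    u = kp * err + ki * integ + kd * deriv
    return u, (integ, err)

u, state = pid_step(1.0, (0.0, 0.0))   # unit error from rest
```

Calling `pid_step` once per control tick and feeding `state` back in reproduces the three classic terms whose tuning the RMSE/ITAE metrics evaluate.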
10. An FPGA implementation of Bayesian inference with spiking neural networks. Front Neurosci 2024; 17:1291051. PMID: 38249589; PMCID: PMC10796689; DOI: 10.3389/fnins.2023.1291051. Received 09/08/2023; accepted 12/06/2023.
Abstract
Spiking neural networks (SNNs), as brain-inspired neural network models based on spikes, have the advantage of processing information with low complexity and efficient energy consumption. There is a growing trend to design dedicated hardware accelerators for SNNs to overcome the limitations of running under the traditional von Neumann architecture. Probabilistic sampling is an effective modeling approach for implementing SNNs that simulate the brain to achieve Bayesian inference, but sampling consumes considerable time, so specific hardware implementations of SNN sampling models are in high demand to accelerate inference. Here, we design an FPGA-based hardware accelerator that speeds up the execution of SNN algorithms through parallelization. We use streaming pipelining and array partitioning to accelerate model operation with the least possible resource consumption, and combine the Python productivity for Zynq (PYNQ) framework to migrate the model to the FPGA while increasing the speed of model operations. We verify the functionality and performance of the hardware architecture on the Xilinx Zynq ZCU104. The experimental results show that the proposed hardware accelerator for the SNN sampling model can significantly improve computing speed while ensuring the accuracy of inference. In addition, Bayesian inference for spiking neural networks through the PYNQ framework can fully exploit the high performance and low power consumption of FPGAs in embedded applications. Taken together, our proposed FPGA implementation of Bayesian inference with SNNs has great potential for a wide range of applications; it can be ideal for implementing complex probabilistic model inference in embedded systems.
11. BrainPy, a flexible, integrative, efficient, and extensible framework for general-purpose brain dynamics programming. eLife 2023; 12:e86365. PMID: 38132087; PMCID: PMC10796146; DOI: 10.7554/elife.86365. Received 01/21/2023; accepted 12/20/2023.
Abstract
Elucidating the intricate neural mechanisms underlying brain functions requires integrative brain dynamics modeling. To facilitate this process, it is crucial to develop a general-purpose programming framework that allows users to freely define neural models across multiple scales, efficiently simulate, train, and analyze model dynamics, and conveniently incorporate new modeling approaches. In response to this need, we present BrainPy. BrainPy leverages the advanced just-in-time (JIT) compilation capabilities of JAX and XLA to provide a powerful infrastructure tailored for brain dynamics programming. It offers an integrated platform for building, simulating, training, and analyzing brain dynamics models. Models defined in BrainPy can be JIT compiled into binary instructions for various devices, including central processing units, graphics processing units, and tensor processing units, which ensures running performance comparable to native C or CUDA. Additionally, BrainPy features an extensible architecture that allows for easy expansion of new infrastructure, utilities, and machine-learning approaches. This flexibility enables researchers to incorporate cutting-edge techniques and adapt the framework to their specific needs.
12. Complex-Exponential-Based Bio-Inspired Neuron Model Implementation in FPGA Using Xilinx System Generator and Vivado Design Suite. Biomimetics (Basel) 2023; 8:621. PMID: 38132560; PMCID: PMC10741806; DOI: 10.3390/biomimetics8080621. Received 11/03/2023; revised 12/06/2023; accepted 12/15/2023.
Abstract
This research investigates the implementation of complex-exponential-based neurons in FPGA, which can pave the way for implementing bio-inspired spiking neural networks to compensate for the computational constraints of conventional artificial neural networks. The increasing use of extensive neural networks and the complexity of models in handling big data lead to higher power consumption and delays. Hence, finding solutions to reduce computational complexity is crucial for addressing power consumption challenges. The complex exponential form effectively encodes oscillating features like frequency, amplitude, and phase shift, streamlining the demanding calculations typical of conventional artificial neurons by leveraging the simple phase addition of complex exponential functions. The article implements a two-neuron and a multi-neuron version of such a neural model using the Xilinx System Generator and Vivado Design Suite, employing 8-bit, 16-bit, and 32-bit fixed-point data format representations. The study evaluates the accuracy of the proposed neuron model across different FPGA implementations while also providing a detailed analysis of operating frequency, power consumption, and resource usage for the hardware implementations. BRAM-based Vivado designs outperformed Simulink regarding speed, power, and resource efficiency. Specifically, the Vivado BRAM-based approach supported up to 128 neurons, showcasing optimal LUT and FF resource utilization. Such outcomes accommodate choosing the optimal design procedure for implementing spiking neural networks on FPGAs.
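The phase-addition trick is easy to see in software: advancing an oscillation by one time step is a single complex multiply, since multiplying unit-magnitude complex exponentials adds their phases (the frequency and step size below are arbitrary, and a real FPGA design would use fixed-point arithmetic rather than doubles):

```python
import numpy as np

def make_oscillator(freq_hz, dt=1e-3, amplitude=1.0, phase=0.0):
    """Return an oscillator state and its per-step phase increment."""
    state = amplitude * np.exp(1j * phase)
    step = np.exp(1j * 2.0 * np.pi * freq_hz * dt)  # fixed phase advance
    return state, step

state, step = make_oscillator(freq_hz=10.0)
trace = []
for _ in range(100):         # 100 steps of 1 ms = one full 10 Hz cycle
    trace.append(state.real)
    state *= step            # one multiply = phase addition
```

Replacing the transcendental evaluations of a conventional neuron update with one complex multiply per step is the source of the hardware savings the article reports.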
13. Monitoring time domain characteristics of Parkinson's disease using 3D memristive neuromorphic system. Front Comput Neurosci 2023; 17:1274575. PMID: 38162516; PMCID: PMC10754992; DOI: 10.3389/fncom.2023.1274575. Received 08/08/2023; accepted 11/06/2023.
Abstract
Introduction: Parkinson's disease (PD) is a neurodegenerative disorder affecting millions of patients. Closed-loop deep brain stimulation (CL-DBS) is a therapy that can alleviate the symptoms of PD. The CL-DBS system consists of an electrode sending electrical stimulation signals to a specific region of the brain and a battery-powered stimulator implanted in the chest. The electrical stimuli in CL-DBS systems need to be adjusted in real time in accordance with the state of PD symptoms, so fast and precise monitoring of PD symptoms is a critical function for CL-DBS systems. However, current CL-DBS techniques suffer from high computational demands for real-time PD symptom monitoring, which are not feasible for implanted and wearable medical devices. Methods: In this paper, we present an energy-efficient neuromorphic PD symptom detector using memristive three-dimensional integrated circuits (3D-ICs). Excessive oscillation at beta frequencies (13-35 Hz) in the subthalamic nucleus (STN) is used as a biomarker of PD symptoms. Results: Simulation results demonstrate that our neuromorphic PD detector, implemented with an 8-layer spiking long short-term memory network (S-LSTM), excels in recognizing PD symptoms, achieving a training accuracy of 99.74% and a validation accuracy of 99.52% for a 75%-25% data split. Furthermore, we evaluated the improvement of our neuromorphic CL-DBS detector using NeuroSIM. The chip area, latency, energy, and power consumption of our CL-DBS detector were reduced by 47.4%, 66.63%, 65.6%, and 67.5%, respectively, for monolithic 3D-ICs. Similarly, for heterogeneous 3D-ICs, employing memristive synapses to replace traditional static random access memory (SRAM) resulted in reductions of 44.8%, 64.75%, 65.28%, and 67.7% in chip area, latency, energy, and power usage. Discussion: This study introduces a novel approach for PD symptom evaluation by directly utilizing spiking signals from neural activities in the time domain. This method significantly reduces the time and energy required for signal conversion compared to traditional frequency-domain approaches. The study pioneers the use of neuromorphic computing and memristors in designing CL-DBS systems, surpassing SRAM-based designs in chip area, latency, and energy efficiency. Lastly, the proposed neuromorphic PD detector demonstrates high resilience to timing variations in brain neural signals, as confirmed by robustness analysis.
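The beta-band biomarker itself is straightforward to compute offline; a reference sketch of the quantity the spiking detector learns to recognize (FFT-based, with an assumed sampling rate; the paper's detector operates on spiking signals in the time domain instead of this frequency-domain route):

```python
import numpy as np

def beta_band_power(signal, fs, lo=13.0, hi=35.0):
    """Fraction of total spectral power in the beta band (13-35 Hz)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].sum() / psd.sum()

fs = 500.0                               # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
beta_sig = np.sin(2 * np.pi * 20.0 * t)  # 20 Hz tone: inside the band
slow_sig = np.sin(2 * np.pi * 5.0 * t)   # 5 Hz tone: outside the band
```

Skipping this spectral conversion and classifying the raw spiking activity directly is what saves the time and energy quantified above.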
14. Computation via Neuron-like Spiking in Percolating Networks of Nanoparticles. Nano Lett 2023; 23:10594-10599. PMID: 37955398; DOI: 10.1021/acs.nanolett.3c03551.
Abstract
The biological brain is a highly efficient computational system in which information processing is performed via electrical spikes. Neuromorphic computing systems that work on similar principles could support the development of the next generation of artificial intelligence and, in particular, enable low-power edge computing. Percolating networks of nanoparticles (PNNs) have previously been shown to exhibit critical spiking behavior, with promise for highly efficient natural computation. Here we employ a rate coding scheme to show that PNNs can perform Boolean operations and image classification. Near perfect accuracy is achieved in both tasks by manipulating the spiking activity using certain control voltages. We demonstrate that the key to successful computation is that nanoscale tunnel gaps within the percolating networks transform input data through a powerful modulus-like nonlinearity. These results provide a basis for implementation of further computational schemes that exploit the brain-like criticality of these networks.
15. Brain-Inspired Spatio-Temporal Associative Memories for Neuroimaging Data Classification: EEG and fMRI. Bioengineering (Basel) 2023; 10:1341. PMID: 38135932; PMCID: PMC10741022; DOI: 10.3390/bioengineering10121341. Received 08/18/2023; revised 10/16/2023; accepted 11/14/2023.
Abstract
Humans learn from many information sources to make decisions. Once this information is learned in the brain, spatio-temporal associations are made, connecting all these sources (variables) in space and time, represented as brain connectivity. In reality, to make a decision we usually have only part of the information, either as a limited number of variables, limited time to make the decision, or both. The brain functions as a spatio-temporal associative memory (STAM). Inspired by this ability of the human brain, a brain-inspired STAM was proposed earlier that utilized the NeuCube brain-inspired spiking neural network framework. Here we applied the STAM framework to neuroimaging data, in the cases of EEG and fMRI, resulting in STAM-EEG and STAM-fMRI. This paper showed that once a NeuCube STAM classification model was trained on complete spatio-temporal EEG or fMRI data, it could be recalled using only part of the time series, or/and only part of the variables used. We evaluated both temporal and spatial association and generalization accuracy accordingly. This was a pilot study that opens the field for the development of classification systems on other neuroimaging data, such as longitudinal MRI data, trained on complete data but recalled on partial data. Future research includes STAM that will work on data collected across different settings, in different labs and clinics, that may vary in terms of the variables and time of data collection, along with other parameters. The proposed STAM will be further investigated for early diagnosis and prognosis of brain conditions and for diagnostic/prognostic marker discovery.
16. Corrigendum: Understanding the effects of cortical gyrification in tACS: insights from experiments and computational models. Front Neurosci 2023; 17:1329826. PMID: 38046655; PMCID: PMC10691734; DOI: 10.3389/fnins.2023.1329826. Received 10/30/2023; accepted 10/31/2023.
Abstract
[This corrects the article DOI: 10.3389/fnins.2023.1223950.].
17. STCA-SNN: self-attention-based temporal-channel joint attention for spiking neural networks. Front Neurosci 2023; 17:1261543. PMID: 38027490; PMCID: PMC10667472; DOI: 10.3389/fnins.2023.1261543. Received 07/20/2023; accepted 10/23/2023.
Abstract
Spiking Neural Networks (SNNs) have shown great promise in processing spatio-temporal information compared to Artificial Neural Networks (ANNs). However, there remains a performance gap between SNNs and ANNs, which impedes the practical application of SNNs. With intrinsic event-triggered property and temporal dynamics, SNNs have the potential to effectively extract spatio-temporal features from event streams. To leverage the temporal potential of SNNs, we propose a self-attention-based temporal-channel joint attention SNN (STCA-SNN) with end-to-end training, which infers attention weights along both temporal and channel dimensions concurrently. It models global temporal and channel information correlations with self-attention, enabling the network to learn 'what' and 'when' to attend simultaneously. Our experimental results show that STCA-SNNs achieve better performance on N-MNIST (99.67%), CIFAR10-DVS (81.6%), and N-Caltech 101 (80.88%) compared with the state-of-the-art SNNs. Meanwhile, our ablation study demonstrates that STCA-SNNs improve the accuracy of event stream classification tasks.
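The joint attention idea can be sketched with plain NumPy self-attention applied twice to the same (time x channel) feature map, once per axis (the fusion by summation and the projection shapes are simplifying assumptions; the paper's exact architecture differs):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_channel_attention(x, wq_t, wk_t, wq_c, wk_c):
    """Attend over time and channels of a (T, C) feature map.

    Temporal attention builds a (T, T) weight matrix over time steps
    ('when' to attend); channel attention builds a (C, C) matrix
    ('what' to attend); the two attended maps are summed.
    """
    a_t = softmax((x @ wq_t) @ (x @ wk_t).T / np.sqrt(x.shape[1]))
    xc = x.T
    a_c = softmax((xc @ wq_c) @ (xc @ wk_c).T / np.sqrt(x.shape[0]))
    return a_t @ x + (a_c @ xc).T

rng = np.random.default_rng(1)
x = rng.random((6, 4))                        # T = 6 steps, C = 4 channels
wq_t, wk_t = rng.normal(size=(4, 3)), rng.normal(size=(4, 3))
wq_c, wk_c = rng.normal(size=(6, 3)), rng.normal(size=(6, 3))
out = temporal_channel_attention(x, wq_t, wk_t, wq_c, wk_c)
```

Computing both attention maps from the same input is what lets the network weight 'what' and 'when' concurrently rather than in sequence.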
18. Neurorobotic reinforcement learning for domains with parametrical uncertainty. Front Neurorobot 2023; 17:1239581. PMID: 37965072; PMCID: PMC10642204; DOI: 10.3389/fnbot.2023.1239581. Received 06/13/2023; accepted 09/26/2023.
Abstract
Neuromorphic hardware paired with brain-inspired learning strategies has enormous potential for robot control. Explicitly, these advantages include low energy consumption, low latency, and adaptability. Therefore, developing and improving learning strategies, algorithms, and neuromorphic hardware integration in simulation is key to moving the state of the art forward. In this study, we used the Neurorobotics Platform (NRP) simulation framework to implement spiking reinforcement learning control for a robotic arm. We implemented a force-torque feedback-based classic object insertion task ("peg-in-hole") and controlled the robot for the first time with neuromorphic hardware in the loop. We provide a solution for training the system in uncertain environmental domains by using randomized simulation parameters. This leads to policies that are robust to real-world parameter variations in the target domain, filling the sim-to-real gap. To the best of our knowledge, this is the first neuromorphic implementation of the peg-in-hole task in simulation with the neuromorphic Loihi chip in the loop, and with scripted accelerated interactive training in the Neurorobotics Platform, including randomized domains.
|
19
|
Embodied bidirectional simulation of a spiking cortico-basal ganglia-cerebellar-thalamic brain model and a mouse musculoskeletal body model distributed across computers including the supercomputer Fugaku. Front Neurorobot 2023; 17:1269848. [PMID: 37867618 PMCID: PMC10585105 DOI: 10.3389/fnbot.2023.1269848] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2023] [Accepted: 09/12/2023] [Indexed: 10/24/2023] Open
Abstract
Embodied simulation with a digital brain model and a realistic musculoskeletal body model provides a means to understand animal behavior and behavioral change. Such simulation can be too large and complex to conduct on a single computer, and so distributed simulation across multiple computers over the Internet is necessary. In this study, we report our joint effort on developing a spiking brain model and a mouse body model, connecting over the Internet, and conducting bidirectional simulation while synchronizing them. Specifically, the brain model consisted of multiple regions including secondary motor cortex, primary motor and somatosensory cortices, basal ganglia, cerebellum and thalamus, whereas the mouse body model, provided by the Neurorobotics Platform of the Human Brain Project, had a movable forelimb with three joints and six antagonistic muscles to act in a virtual environment. Those were simulated in a distributed manner across multiple computers including the supercomputer Fugaku, which is the flagship supercomputer in Japan, while communicating via Robot Operating System (ROS). To incorporate models written in C/C++ in the distributed simulation, we developed a C++ version of the rosbridge library from scratch, which has been released under an open source license. These results provide necessary tools for distributed embodied simulation, and demonstrate its possibility and usefulness toward understanding animal behavior and behavioral change.
|
20
|
First-spike coding promotes accurate and efficient spiking neural networks for discrete events with rich temporal structures. Front Neurosci 2023; 17:1266003. [PMID: 37849889 PMCID: PMC10577212 DOI: 10.3389/fnins.2023.1266003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2023] [Accepted: 09/11/2023] [Indexed: 10/19/2023] Open
Abstract
Spiking neural networks (SNNs) are well-suited to process asynchronous event-based data. Most of the existing SNNs use rate-coding schemes that focus on firing rate (FR), and so they generally ignore the spike timing in events. In contrast, methods based on temporal coding, particularly time-to-first-spike (TTFS) coding, can be accurate and efficient, but they are difficult to train. Currently, there is limited research on applying TTFS coding to real events, since traditional TTFS-based methods impose a one-spike constraint, which is not realistic for event-based data. In this study, we present a novel decision-making strategy based on first-spike (FS) coding that encodes FS timings of the output neurons to investigate the role of the first-spike timing in classifying real-world event sequences with complex temporal structures. To achieve FS coding, we propose a novel surrogate gradient learning method for discrete spike trains. In the forward pass, output spikes are encoded into discrete times to generate FS times. In the backpropagation, we develop an error assignment method that propagates error from FS times to spikes through a Gaussian window, and then supervised learning for spikes is implemented through a surrogate gradient approach. Additional strategies are introduced to facilitate the training of FS timings, such as adding empty sequences and employing different parameters for different layers. We make a comprehensive comparison between FS and FR coding in the experiments. Our results show that FS coding achieves comparable accuracy to FR coding while leading to superior energy efficiency and distinct neuronal dynamics on data sequences with very rich temporal structures. Additionally, a longer time delay in the first spike leads to higher accuracy, indicating that important information is encoded in the timing of the first spike.
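A minimal sketch of first-spike (FS) decoding and the Gaussian error window, with an assumed window form and σ rather than the paper's exact error-assignment rule:

```python
import math

def first_spike_times(spike_trains, t_max):
    """Return each output neuron's first-spike time (t_max if it stays silent)."""
    return [next((t for t, s in enumerate(train) if s), t_max)
            for train in spike_trains]

def gaussian_error_weight(t, t_fs, sigma=1.0):
    """Weight for assigning FS-time error back to a spike at step t.

    A Gaussian window centred on the first-spike time; the exact form
    and sigma are assumptions standing in for the paper's rule.
    """
    return math.exp(-((t - t_fs) ** 2) / (2 * sigma ** 2))

# Neuron 0 first fires at step 2; neuron 1 is silent over 5 steps.
trains = [[0, 0, 1, 0, 1], [0, 0, 0, 0, 0]]
ts = first_spike_times(trains, t_max=5)
```

Spikes near the first-spike time receive most of the error mass, letting the FS timing be trained through otherwise non-differentiable discrete spike trains.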
|
21
|
Gradient-based feature-attribution explainability methods for spiking neural networks. Front Neurosci 2023; 17:1153999. [PMID: 37829721 PMCID: PMC10565802 DOI: 10.3389/fnins.2023.1153999] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2023] [Accepted: 09/01/2023] [Indexed: 10/14/2023] Open
Abstract
Introduction Spiking neural networks (SNNs) are a model of computation that mimics the behavior of biological neurons. SNNs process event data (spikes) and operate more sparsely than artificial neural networks (ANNs), resulting in ultra-low latency and small power consumption. This paper aims to adapt and evaluate gradient-based explainability methods, originally developed for conventional ANNs, for SNNs. Methods The adapted methods aim to create input feature attribution maps for SNNs trained through backpropagation that process either event-based spiking data or real-valued data. The methods address the limitations of existing work on explainability methods for SNNs, such as poor scalability, restriction to convolutional layers, the need to train a separate model, and the provision of maps of activation values instead of true attribution scores. The adapted methods are evaluated on classification tasks for both real-valued and spiking data, and the accuracy of the proposed methods is confirmed through perturbation experiments at the pixel and spike levels. Results and discussion The results reveal that gradient-based SNN attribution methods successfully identify highly contributing pixels and spikes with significantly less computation time than model-agnostic methods. Additionally, we observe that the chosen coding technique has a noticeable effect on the input features that will be most significant. These findings demonstrate the potential of gradient-based explainability methods for SNNs in improving our understanding of how these networks process information and contribute to the development of more efficient and accurate SNNs.
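The gradient × input attribution idea can be illustrated with finite differences standing in for backpropagated gradients; the toy one-feature `logit` model is an assumption for demonstration only:

```python
def attribution_map(f, x, eps=1e-4):
    """Gradient x input attribution via central finite differences.

    f is any scalar-output model (here a toy surrogate for an SNN's
    class logit); in practice the gradient comes from backpropagation
    through the trained network rather than finite differences.
    """
    attrs = []
    for i in range(len(x)):
        hi = x[:]; hi[i] += eps
        lo = x[:]; lo[i] -= eps
        grad = (f(hi) - f(lo)) / (2 * eps)   # d f / d x_i
        attrs.append(grad * x[i])            # gradient times input
    return attrs

# Toy model: only feature 0 influences the logit, so only it gets attribution.
logit = lambda x: 3.0 * x[0] + 0.0 * x[1]
attrs = attribution_map(logit, [2.0, 5.0])
```

A perturbation check, as in the paper's evaluation, would then confirm that masking the high-attribution inputs changes the prediction most.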
|
22
|
Brain-inspired neural circuit evolution for spiking neural networks. Proc Natl Acad Sci U S A 2023; 120:e2218173120. [PMID: 37729206 PMCID: PMC10523604 DOI: 10.1073/pnas.2218173120] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2022] [Accepted: 07/27/2023] [Indexed: 09/22/2023] Open
Abstract
In biological neural systems, different neurons are capable of self-organizing to form different neural circuits for achieving a variety of cognitive functions. However, the current design paradigm of spiking neural networks is based on structures derived from deep learning. Such structures are dominated by feedforward connections without taking into account different types of neurons, which significantly prevent spiking neural networks from realizing their potential on complex tasks. It remains an open challenge to apply the rich dynamical properties of biological neural circuits to model the structure of current spiking neural networks. This paper provides a more biologically plausible evolutionary space by combining feedforward and feedback connections with excitatory and inhibitory neurons. We exploit the local spiking behavior of neurons to adaptively evolve neural circuits such as forward excitation, forward inhibition, feedback inhibition, and lateral inhibition by the local law of spike-timing-dependent plasticity and update the synaptic weights in combination with the global error signals. By using the evolved neural circuits, we construct spiking neural networks for image classification and reinforcement learning tasks. Using the brain-inspired Neural circuit Evolution strategy (NeuEvo) with rich neural circuit types, the evolved spiking neural network greatly enhances capability on perception and reinforcement learning tasks. NeuEvo achieves state-of-the-art performance on CIFAR10, DVS-CIFAR10, DVS-Gesture, and N-Caltech101 datasets and achieves advanced performance on ImageNet. Combined with on-policy and off-policy deep reinforcement learning algorithms, it achieves comparable performance with artificial neural networks. The evolved spiking neural circuits lay the foundation for the evolution of complex networks with functions.
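A sketch of the local rule plus global signal: the pairwise STDP kernel below is standard, while multiplying its direction by a global error signal is an assumed simplification of how NeuEvo combines local plasticity with global error:

```python
import math

def stdp_delta(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pairwise STDP kernel. dt = t_post - t_pre (ms).

    Causal pairs (pre before post) potentiate; anti-causal pairs depress.
    Amplitudes and tau are illustrative values.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

def evolve_weight(w, dt, global_err, lr=1.0):
    """Assumed combination rule: the local STDP direction is scaled by a
    global error signal before the synapse is updated."""
    return w + lr * global_err * stdp_delta(dt)
```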
|
23
|
Efficient human activity recognition with spatio-temporal spiking neural networks. Front Neurosci 2023; 17:1233037. [PMID: 37781248 PMCID: PMC10536255 DOI: 10.3389/fnins.2023.1233037] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2023] [Accepted: 07/25/2023] [Indexed: 10/03/2023] Open
Abstract
In this study, we explore Human Activity Recognition (HAR), a task that aims to predict individuals' daily activities utilizing time series data obtained from wearable sensors for health-related applications. Although recent research has predominantly employed end-to-end Artificial Neural Networks (ANNs) for feature extraction and classification in HAR, these approaches impose a substantial computational load on wearable devices and exhibit limitations in temporal feature extraction due to their activation functions. To address these challenges, we propose the application of Spiking Neural Networks (SNNs), an architecture inspired by the characteristics of biological neurons, to HAR tasks. SNNs accumulate input activation as presynaptic potential charges and generate a binary spike upon surpassing a predetermined threshold. This unique property facilitates spatio-temporal feature extraction and confers the advantage of low-power computation attributable to binary spikes. We conduct rigorous experiments on three distinct HAR datasets using SNNs, demonstrating that our approach attains competitive or superior performance relative to ANNs, while concurrently reducing energy consumption by up to 94%.
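The accumulate-and-fire behavior described above is the leaky integrate-and-fire (LIF) update; a minimal sketch with illustrative leak and threshold values:

```python
def lif_run(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron over an input sequence.

    Input activation accumulates as presynaptic potential charge; a
    binary spike is emitted when the potential crosses the threshold,
    after which the potential is hard-reset (one common reset choice).
    """
    v, spikes = 0.0, []
    for x in inputs:
        v = leak * v + x          # leaky accumulation of charge
        if v >= threshold:
            spikes.append(1)      # binary spike
            v = 0.0               # hard reset
        else:
            spikes.append(0)
    return spikes

spikes = lif_run([0.6, 0.6, 0.1, 0.9, 0.3])
```

The binary spike train is what gives the low-power advantage: downstream work happens only on the time steps that carry a 1.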
|
24
|
From Brain Models to Robotic Embodied Cognition: How Does Biological Plausibility Inform Neuromorphic Systems? Brain Sci 2023; 13:1316. [PMID: 37759917 PMCID: PMC10526461 DOI: 10.3390/brainsci13091316] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2023] [Revised: 09/05/2023] [Accepted: 09/07/2023] [Indexed: 09/29/2023] Open
Abstract
We examine the challenging "marriage" between computational efficiency and biological plausibility, a crucial node in the domain of spiking neural networks at the intersection of neuroscience, artificial intelligence, and robotics. Through a transdisciplinary review, we retrace the historical and most recent constraining influences that these parallel fields have exerted on descriptive analysis of the brain, construction of predictive brain models, and ultimately, the embodiment of neural networks in an enacted robotic agent. We study models of Spiking Neural Networks (SNN) as the central means enabling autonomous and intelligent behaviors in biological systems. We then provide a critical comparison of the available hardware and software to emulate SNNs for investigating biological entities and their application on artificial systems. Neuromorphics is identified as a promising tool to embody SNNs in real physical systems, and different neuromorphic chips are compared. The concepts required for describing SNNs are dissected and contextualized in the new no man's land between cognitive neuroscience and artificial intelligence. Although there are recent reviews on the application of neuromorphic computing in various modules of the guidance, navigation, and control of robotic systems, the focus of this paper is more on closing the cognition loop in SNN-embodied robotics. We argue that biologically viable spiking neuronal models used for electroencephalogram signals are excellent candidates for furthering our knowledge of the explainability of SNNs. We complete our survey by reviewing different robotic modules that can benefit from neuromorphic hardware, e.g., perception (with a focus on vision), localization, and cognition.
We conclude that the tradeoff between symbolic computational power and biological plausibility of hardware can be best addressed by neuromorphics, whose presence in neurorobotics provides an accountable empirical testbench for investigating synthetic and natural embodied cognition. We argue this is where both theoretical and empirical future work should converge in multidisciplinary efforts involving neuroscience, artificial intelligence, and robotics.
|
25
|
ALBSNN: ultra-low latency adaptive local binary spiking neural network with accuracy loss estimator. Front Neurosci 2023; 17:1225871. [PMID: 37771337 PMCID: PMC10525310 DOI: 10.3389/fnins.2023.1225871] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2023] [Accepted: 08/24/2023] [Indexed: 09/30/2023] Open
Abstract
Spiking neural network (SNN) is a brain-inspired model with more spatio-temporal information processing capacity and computational energy efficiency. However, with the increasing depth of SNNs, the memory problem caused by the weights of SNNs has gradually attracted attention. In this study, we propose an ultra-low latency adaptive local binary spiking neural network (ALBSNN) with accuracy loss estimators, which dynamically selects the network layers to be binarized to ensure a balance between quantization degree and classification accuracy by evaluating the error caused by the binarized weights during the network learning process. At the same time, to accelerate the training speed of the network, the global average pooling (GAP) layer is introduced to replace the fully connected layers by combining convolution and pooling. Finally, to further reduce the error caused by the binary weights, we propose binary weight optimization (BWO), which updates the overall weight by directly adjusting the binary weight. This method further reduces the loss when the network reaches its training bottleneck. The combination of the above methods balances the network's quantization and recognition ability, enabling the network to maintain recognition capability equivalent to the full-precision network while reducing storage space by more than 20%. As a result, SNNs can use a small number of time steps to obtain better recognition accuracy. In the extreme case of using only a single time step, we still achieve 93.39, 92.12, and 69.55% testing accuracy on three traditional static datasets, Fashion-MNIST, CIFAR-10, and CIFAR-100, respectively. At the same time, we evaluate our method on the neuromorphic N-MNIST, CIFAR10-DVS, and IBM DVS128 Gesture datasets and achieve advanced accuracy in SNNs with binary weights. Our network has greater advantages in terms of storage resources and training time.
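Layer-wise binarization with an accuracy-loss estimator can be sketched as scaling each weight to ±α (α = mean absolute weight) and measuring the quantization error per layer. The layer names and the L2 proxy are assumptions, not the paper's exact estimator:

```python
def binarize(weights):
    """Binarize a layer's weights to +/-alpha, alpha = mean absolute weight."""
    alpha = sum(abs(w) for w in weights) / len(weights)
    return [alpha if w >= 0 else -alpha for w in weights]

def quantization_error(weights):
    """Accuracy-loss proxy: L2 error introduced by binarizing this layer.

    An assumed stand-in for the paper's accuracy loss estimator; layers
    whose estimated loss exceeds a budget would be kept at full precision.
    """
    b = binarize(weights)
    return sum((w - q) ** 2 for w, q in zip(weights, b))

# Hypothetical layers: conv1 is already nearly binary, conv2 is not.
layers = {"conv1": [0.5, -0.5, 0.5], "conv2": [0.9, -0.1, 0.05]}
errs = {name: quantization_error(w) for name, w in layers.items()}
```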
|
26
|
Neuromorphic Sentiment Analysis Using Spiking Neural Networks. SENSORS (BASEL, SWITZERLAND) 2023; 23:7701. [PMID: 37765758 PMCID: PMC10536645 DOI: 10.3390/s23187701] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/27/2023] [Revised: 08/25/2023] [Accepted: 09/02/2023] [Indexed: 09/29/2023]
Abstract
Over the past decade, the artificial neural networks domain has seen a considerable embracement of deep neural networks among many applications. However, deep neural networks are typically computationally complex and consume high power, hindering their applicability for resource-constrained applications, such as self-driving vehicles, drones, and robotics. Spiking neural networks, often employed to bridge the gap between machine learning and neuroscience fields, are considered a promising solution for resource-constrained applications. Since deploying spiking neural networks on traditional von Neumann architectures requires significant processing time and high power, neuromorphic hardware is typically created to execute spiking neural networks. The objective of neuromorphic devices is to mimic the distinctive functionalities of the human brain in terms of energy efficiency, computational power, and robust learning. Furthermore, natural language processing, a machine learning technique, has been widely utilized to aid machines in comprehending human language. However, natural language processing techniques also cannot be deployed efficiently on traditional computing platforms. In this research work, we strive to enhance natural language processing abilities by harnessing and integrating SNN traits, as well as deploying the integrated solution on neuromorphic hardware, efficiently and effectively. To facilitate this endeavor, we propose a novel, unique, and efficient sentiment analysis model created using a large-scale SNN model on SpiNNaker neuromorphic hardware that responds to user inputs. SpiNNaker neuromorphic hardware can typically simulate large spiking neural networks in real time and consumes low power. We initially create an artificial neural networks model, and then train the model using an Internet Movie Database (IMDB) dataset.
Next, the pre-trained artificial neural networks model is converted into our proposed spiking neural networks model, called a spiking sentiment analysis (SSA) model. Our SSA model using SpiNNaker, called SSA-SpiNNaker, is created in such a way as to respond to user inputs with a positive or negative response. Our proposed SSA-SpiNNaker model achieves 100% accuracy and consumes only 3970 joules of energy, while processing around 10,000 words and predicting a positive/negative review. Our experimental results and analysis demonstrate that by leveraging the parallel and distributed capabilities of SpiNNaker, our proposed SSA-SpiNNaker model achieves better performance compared to artificial neural networks models. Our investigation into existing works revealed that no similar models exist in the published literature, demonstrating the uniqueness of our proposed model. Our proposed work would offer a synergy between SNNs and NLP within the neuromorphic computing domain, in order to address many challenges in this domain, including computational complexity and power consumption. Our proposed model would not only enhance the capabilities of sentiment analysis but also contribute to the advancement of brain-inspired computing. Our proposed model could be utilized in other resource-constrained and low-power applications, such as robotics, autonomous, and smart systems.
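The ANN-to-SNN conversion step can be illustrated with simple deterministic rate encoding, mapping a normalized activation to a spike train of matching firing rate. This is a generic conversion sketch, not the authors' SpiNNaker pipeline:

```python
def rate_encode(activation, t_steps):
    """Encode a normalized ReLU activation in [0, 1] as a spike train.

    Charge accumulates by `activation` each step and a spike is emitted
    whenever it crosses 1.0, so the firing rate over t_steps approximates
    the original activation (basic rate-based ANN-to-SNN conversion).
    """
    acc, train = 0.0, []
    for _ in range(t_steps):
        acc += activation
        if acc >= 1.0:
            train.append(1)
            acc -= 1.0
        else:
            train.append(0)
    return train

# An activation of 0.5 becomes a train firing on half the time steps.
train = rate_encode(0.5, 10)
```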
|
27
|
Artificial Neuronal Devices Based on Emerging Materials: Neuronal Dynamics and Applications. ADVANCED MATERIALS (DEERFIELD BEACH, FLA.) 2023; 35:e2205047. [PMID: 36609920 DOI: 10.1002/adma.202205047] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/03/2022] [Revised: 12/02/2022] [Indexed: 06/17/2023]
Abstract
Artificial neuronal devices are critical building blocks of neuromorphic computing systems and currently the subject of intense research motivated by application needs from new computing technology and more realistic brain emulation. Researchers have proposed a range of device concepts that can mimic neuronal dynamics and functions. Although the switching physics and device structures of these artificial neurons are largely different, their behaviors can be described by several neuron models in a more unified manner. In this paper, the reports of artificial neuronal devices based on emerging volatile switching materials are reviewed from the perspective of the demonstrated neuron models, with a focus on the neuronal functions implemented in these devices and the exploitation of these functions for computational and sensing applications. Furthermore, the neuroscience inspirations and engineering methods to enrich the neuronal dynamics that remain to be implemented in artificial neuronal devices and networks toward realizing the full functionalities of biological neurons are discussed.
|
28
|
Diffusive Memristors with Uniform and Tunable Relaxation Time for Spike Generation in Event-Based Pattern Recognition. ADVANCED MATERIALS (DEERFIELD BEACH, FLA.) 2023; 35:e2204778. [PMID: 36036786 DOI: 10.1002/adma.202204778] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/27/2022] [Revised: 08/05/2022] [Indexed: 06/15/2023]
Abstract
A diffusive memristor is a promising building block for brain-inspired computing hardware. However, the randomness in the device relaxation dynamics limits the wide-range adoption of diffusive memristors in large arrays. In this work, the device stack is engineered to achieve a much-improved uniformity in the relaxation time (standard deviation σ reduced from ≈12 to ≈0.32 ms). The memristor is further connected with a resistor or a capacitor, and the relaxation time is tuned between 1.13 µs and 1.25 ms, spanning three orders of magnitude. The hierarchy of time surfaces (HOTS) algorithm is implemented to utilize the tunable and uniform relaxation behavior for spike generation. An accuracy of 77.3% is achieved in recognizing moving objects in the neuromorphic MNIST (N-MNIST) dataset. The work paves the way for building emerging neuromorphic computing hardware systems with ultralow power consumption.
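Treating the memristor with its external resistor or capacitor as a first-order RC stage gives a back-of-the-envelope view of how the relaxation time can be tuned across three orders of magnitude. The RC model and the component values are illustrative simplifications, not the reported device physics:

```python
import math

def relaxation_time(r_ohms, c_farads):
    """First-order RC time constant, tau = R * C, used here as a knob on
    the device's relaxation time (a simplification for illustration)."""
    return r_ohms * c_farads

# Sweeping the series resistance three decades sweeps tau three decades.
fast = relaxation_time(1e3, 1e-9)   # 1 kOhm, 1 nF -> ~1 microsecond
slow = relaxation_time(1e6, 1e-9)   # 1 MOhm, 1 nF -> ~1 millisecond
orders = math.log10(slow / fast)
```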
|
29
|
Exploring spiking neural networks: a comprehensive analysis of mathematical models and applications. Front Comput Neurosci 2023; 17:1215824. [PMID: 37692462 PMCID: PMC10483570 DOI: 10.3389/fncom.2023.1215824] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2023] [Accepted: 08/07/2023] [Indexed: 09/12/2023] Open
Abstract
This article presents a comprehensive analysis of spiking neural networks (SNNs) and their mathematical models for simulating the behavior of neurons through the generation of spikes. The study explores various models, including LIF and NLIF, for constructing SNNs and investigates their potential applications in different domains. However, implementation poses several challenges, including identifying the most appropriate model for classification tasks that demand high accuracy and low performance loss. To address this issue, this research study compares the performance, behavior, and spike generation of multiple SNN models using consistent inputs and neurons. The findings provide valuable insights into the benefits and challenges of SNNs and their models, emphasizing the significance of comparing multiple models to identify the most effective one. Moreover, the study quantifies the number of spiking operations required by each model to process the same inputs and produce equivalent outputs, enabling a thorough assessment of computational efficiency. Additionally, the results reveal essential variations in biological plausibility and computational efficiency among the models, further emphasizing the importance of selecting the most suitable model for a given task. Overall, this study contributes to a deeper understanding of SNNs and offers practical guidelines for using their potential in real-world scenarios.
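Counting the spiking operations two neuron models need on identical inputs can be sketched as follows. The quadratic self-term in the NLIF step is one illustrative choice of nonlinearity, not a definition taken from the paper:

```python
def lif_step(v, x, leak=0.9, thr=1.0):
    """Linear leaky integrate-and-fire update; returns (new_v, spiked)."""
    v = leak * v + x
    return (0.0, 1) if v >= thr else (v, 0)

def nlif_step(v, x, leak=0.9, thr=1.0):
    """Non-linear LIF: adds a quadratic self-coupling term (illustrative)."""
    v = leak * v + x + 0.1 * v * v
    return (0.0, 1) if v >= thr else (v, 0)

def count_spikes(step_fn, inputs):
    """Number of spiking operations a model emits for a fixed input stream."""
    v, n = 0.0, 0
    for x in inputs:
        v, s = step_fn(v, x)
        n += s
    return n

inputs = [0.5] * 10
lif_ops = count_spikes(lif_step, inputs)
nlif_ops = count_spikes(nlif_step, inputs)
```

Running both models on the same stream and comparing spike counts is exactly the kind of like-for-like efficiency comparison the study performs at scale.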
|
30
|
Understanding the effects of cortical gyrification in tACS: insights from experiments and computational models. Front Neurosci 2023; 17:1223950. [PMID: 37655010 PMCID: PMC10467425 DOI: 10.3389/fnins.2023.1223950] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2023] [Accepted: 07/25/2023] [Indexed: 09/02/2023] Open
Abstract
The alpha rhythm is often associated with relaxed wakefulness or idling and is altered by various factors. Abnormalities in the alpha rhythm have been linked to several neurological and psychiatric disorders, including Alzheimer's disease. Transcranial alternating current stimulation (tACS) has been proposed as a potential tool to restore a disrupted alpha rhythm in the brain by stimulating at the individual alpha frequency (IAF), although some research has produced contradictory results. In this study, we applied an IAF-tACS protocol over parieto-occipital areas to a sample of healthy subjects and measured its effects over the power spectra. Additionally, we used computational models to get a deeper understanding of the results observed in the experiment. Both experimental and numerical results showed an increase in alpha power of 8.02% with respect to the sham condition in a widespread set of regions in the cortex, excluding some expected parietal regions. This result could be partially explained by taking into account the orientation of the electric field with respect to the columnar structures of the cortex, showing that the gyrification in parietal regions could generate effects in opposite directions (hyper-/depolarization) at the same time in specific brain regions. Additionally, we used a network model of spiking neuronal populations to explore the effects that these opposite polarities could have on neural activity, and we found that the best predictor of alpha power was the average of the normal components of the electric field. To sum up, our study sheds light on the mechanisms underlying tACS brain activity modulation, using both empirical and computational approaches. Non-invasive brain stimulation techniques hold promise for treating brain disorders, but further research is needed to fully understand and control their effects on brain dynamics and cognition. 
Our findings contribute to this growing body of research and provide a foundation for future studies aimed at optimizing the use of non-invasive brain stimulation in clinical settings.
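The predictor highlighted above, the average of the normal components of the electric field, reduces to a mean of dot products E·n over cortical patches. The two-patch example below shows how opposite-facing walls of a sulcus can cancel, which is the gyrification effect the study describes:

```python
def mean_normal_component(fields, normals):
    """Average E . n over cortical patches.

    fields and normals are lists of 3-vectors (one E-field sample and one
    unit surface normal per patch); a toy discretization for illustration.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return sum(dot(e, n) for e, n in zip(fields, normals)) / len(fields)

# Two patches on opposite walls of a sulcus see the same field but have
# oppositely oriented normals, so hyper- and depolarization cancel.
E = [(0.0, 0.0, 1.0), (0.0, 0.0, 1.0)]
n = [(0.0, 0.0, 1.0), (0.0, 0.0, -1.0)]
val = mean_normal_component(E, n)
```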
|
31
|
Autonomous driving controllers with neuromorphic spiking neural networks. Front Neurorobot 2023; 17:1234962. [PMID: 37636326 PMCID: PMC10451073 DOI: 10.3389/fnbot.2023.1234962] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2023] [Accepted: 07/25/2023] [Indexed: 08/29/2023] Open
Abstract
Autonomous driving is one of the hallmarks of artificial intelligence. Neuromorphic (brain-inspired) control is poised to significantly contribute to autonomous behavior by leveraging spiking neural network-based, energy-efficient computational frameworks. In this work, we have explored neuromorphic implementations of four prominent controllers for autonomous driving: pure-pursuit, Stanley, PID, and MPC, using a physics-aware simulation framework. We extensively evaluated these models with various intrinsic parameters and compared their performance with conventional CPU-based implementations. While being neural approximations, we show that neuromorphic models can perform competitively with their conventional counterparts. We provide guidelines for building neuromorphic architectures for control and describe the importance of their underlying tuning parameters and neuronal resources. Our results show that most models would converge to their optimal performances with merely 100-1,000 neurons. They also highlight the importance of hybrid conventional and neuromorphic designs, as was suggested here with the MPC controller. This study also highlights the limitations of neuromorphic implementations, particularly at higher (> 15 m/s) speeds, where they tend to degrade faster than in conventional designs.
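For reference, the conventional PID baseline that the spiking controllers approximate fits in a few lines; the gains and the integrator plant below are illustrative choices, not the paper's tuning:

```python
class PID:
    """Discrete PID controller: the conventional baseline that a spiking
    network would approximate neuron-by-neuron (illustrative gains)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, err):
        self.integral += err * self.dt                 # I term accumulates
        deriv = (err - self.prev_err) / self.dt        # D term differentiates
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Drive a simple integrator plant (y' = u) toward setpoint 1.0.
pid, y = PID(kp=0.8, ki=0.2, kd=0.05, dt=0.1), 0.0
for _ in range(200):
    y += 0.1 * pid.step(1.0 - y)
```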
|
32
|
Direct training high-performance spiking neural networks for object recognition and detection. Front Neurosci 2023; 17:1229951. [PMID: 37614339 PMCID: PMC10442545 DOI: 10.3389/fnins.2023.1229951] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2023] [Accepted: 07/19/2023] [Indexed: 08/25/2023] Open
Abstract
Introduction The spiking neural network (SNN) is a bionic model that is energy-efficient when implemented on neuromorphic hardware. The non-differentiability of the spiking signals and the complicated neural dynamics make direct training of high-performance SNNs a great challenge. There are numerous crucial issues to explore for the deployment of direct training SNNs, such as gradient vanishing and explosion, spiking signal decoding, and applications in upstream tasks. Methods To address gradient vanishing, we introduce a binary selection gate into the basic residual block and propose spiking gate (SG) ResNet to implement residual learning in SNNs. We propose two appropriate representations of the gate signal and verify that SG ResNet can overcome gradient vanishing or explosion by analyzing the gradient backpropagation. For the spiking signal decoding, a better decoding scheme than rate coding is achieved by our attention spike decoder (ASD), which dynamically assigns weights to spiking signals along the temporal, channel, and spatial dimensions. Results and discussion The SG ResNet and ASD modules are evaluated on multiple object recognition datasets, including the static ImageNet, CIFAR-100, CIFAR-10, and neuromorphic DVS-CIFAR10 datasets. Superior accuracy is demonstrated with as few as four simulation time steps, specifically 94.52% top-1 accuracy on CIFAR-10 and 75.64% top-1 accuracy on CIFAR-100. Spiking RetinaNet is proposed using SG ResNet as the backbone and the ASD module for information decoding as the first direct-training hybrid SNN-ANN detector for RGB images. Spiking RetinaNet with a SG ResNet34 backbone achieves an mAP of 0.296 on the object detection dataset MSCOCO.
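The advantage of weighted decoding over plain rate coding can be sketched on the temporal axis alone; note that the paper's ASD also weights channel and spatial dimensions, and its scores are learned rather than supplied as below:

```python
import math

def rate_decode(spike_train):
    """Rate coding: a flat average over all time steps."""
    return sum(spike_train) / len(spike_train)

def attention_decode(spike_train, scores):
    """Attention-style decoding: softmax-weighted average of the spikes.

    `scores` are per-step relevance values; here they are hand-picked to
    emphasize the informative middle steps (an illustrative assumption).
    """
    m = max(scores)
    w = [math.exp(s - m) for s in scores]
    z = sum(w)
    return sum(si * wi / z for si, wi in zip(spike_train, w))

train = [0, 1, 1, 0]
flat = rate_decode(train)
weighted = attention_decode(train, [0.0, 2.0, 2.0, 0.0])
```

When the scores align with the steps that actually carry spikes, the weighted readout is sharper than the flat rate average.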
|
33
|
EdgeMap: An Optimized Mapping Toolchain for Spiking Neural Network in Edge Computing. SENSORS (BASEL, SWITZERLAND) 2023; 23:6548. [PMID: 37514842 PMCID: PMC10383546 DOI: 10.3390/s23146548] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/28/2023] [Revised: 07/13/2023] [Accepted: 07/18/2023] [Indexed: 07/30/2023]
Abstract
Spiking neural networks (SNNs) have attracted considerable attention as third-generation artificial neural networks, known for their powerful, intelligent features and energy-efficiency advantages. These characteristics render them ideally suited for edge computing scenarios. Nevertheless, the current mapping schemes for deploying SNNs onto neuromorphic hardware face limitations such as extended execution times, low throughput, and insufficient consideration of energy consumption and connectivity, which undermine their suitability for edge computing applications. To address these challenges, we introduce EdgeMap, an optimized mapping toolchain specifically designed for deploying SNNs onto edge devices without compromising performance. EdgeMap consists of two main stages. The first stage involves partitioning the SNN graph into small neuron clusters based on the streaming graph partition algorithm, with the sizes of neuron clusters limited by the physical neuron cores. In the subsequent mapping stage, we adopt a multi-objective optimization algorithm specifically geared towards mitigating energy costs and communication costs for efficient deployment. EdgeMap, evaluated across four typical SNN applications, substantially outperforms other state-of-the-art mapping schemes. The performance improvements include a reduction in average latency by up to 19.8%, energy consumption by 57%, and communication cost by 58%. Moreover, EdgeMap shortens execution time by a factor of up to 1225.44× and increases throughput by up to 4.02×. These results highlight EdgeMap's efficiency and effectiveness, emphasizing its utility for deploying SNN applications in edge computing scenarios.
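The first EdgeMap stage, streaming partition of the SNN graph under a core-size cap, can be sketched with a greedy neighbour-affinity heuristic. This is an assumed heuristic for illustration; streaming partitioners differ in their scoring rules:

```python
def stream_partition(edges, n_neurons, core_size):
    """Greedy streaming partition of a neuron graph.

    Neurons arrive as a stream; each joins the non-full cluster that
    already holds most of its neighbours (keeping synapses local), or
    opens a new cluster when every existing one is full.
    """
    neigh = {v: set() for v in range(n_neurons)}
    for a, b in edges:
        neigh[a].add(b)
        neigh[b].add(a)
    clusters = []
    for v in range(n_neurons):
        best, best_score = None, -1
        for i, c in enumerate(clusters):
            if len(c) < core_size:                  # respect the physical core cap
                score = len(neigh[v] & c)           # already-placed neighbours
                if score > best_score:
                    best, best_score = i, score
        if best is None:
            clusters.append({v})
        else:
            clusters[best].add(v)
    return clusters

# A 6-neuron chain plus one isolated pair, on cores holding 3 neurons each.
clusters = stream_partition([(0, 1), (1, 2), (2, 3), (4, 5)], 6, core_size=3)
```

The second stage would then place these clusters on cores while trading off energy and communication cost with a multi-objective optimizer.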
Collapse
|
34
|
Bio-Inspired Design of Superconducting Spiking Neuron and Synapse. NANOMATERIALS (BASEL, SWITZERLAND) 2023; 13:2101. [PMID: 37513112 PMCID: PMC10383304 DOI: 10.3390/nano13142101] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/15/2023] [Revised: 07/11/2023] [Accepted: 07/17/2023] [Indexed: 07/30/2023]
Abstract
The imitative modelling of processes in the brain of living beings is an ambitious task. However, advances in the complexity of existing hardware brain models are limited by their low speed and high energy consumption. A superconducting circuit with Josephson junctions closely mimics the neuronal membrane with channels involved in the operation of the sodium-potassium pump. The dynamic processes in such a system are characterised by a duration of picoseconds and an energy level of attojoules. In this work, two superconducting models of a biological neuron are studied. New modes of their operation are identified, including the so-called bursting mode, which plays an important role in biological neural networks. The possibility of switching between different modes in situ is shown, providing the possibility of dynamic control of the system. A synaptic connection that mimics the short-term potentiation of a biological synapse is developed and demonstrated. Finally, the simplest two-neuron chain comprising the proposed bio-inspired components is simulated, and the prospects of superconducting hardware biosimilars are briefly discussed.
Collapse
|
35
|
Spiking CMOS-NVM mixed-signal neuromorphic ConvNet with circuit- and training-optimized temporal subsampling. Front Neurosci 2023; 17:1177592. [PMID: 37534034 PMCID: PMC10390782 DOI: 10.3389/fnins.2023.1177592] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2023] [Accepted: 06/26/2023] [Indexed: 08/04/2023] Open
Abstract
We increasingly rely on deep learning algorithms to process colossal amounts of unstructured visual data. Commonly, these deep learning algorithms are deployed as software models on digital hardware, predominantly in data centers. Intrinsic high energy consumption of Cloud-based deployment of deep neural networks (DNNs) inspired researchers to look for alternatives, resulting in a high interest in Spiking Neural Networks (SNNs) and dedicated mixed-signal neuromorphic hardware. As a result, there is an emerging challenge to transfer DNN architecture functionality to energy-efficient spiking non-volatile memory (NVM)-based hardware with minimal loss in the accuracy of visual data processing. Convolutional Neural Network (CNN) is the staple choice of DNN for visual data processing. However, the lack of analog-friendly spiking implementations and alternatives for some core CNN functions, such as MaxPool, hinders the conversion of CNNs into the spike domain, thus hampering neuromorphic hardware development. To address this gap, in this work, we propose MaxPool with temporal multiplexing for Spiking CNNs (SCNNs), which is amenable for implementation in mixed-signal circuits. We leverage the temporal dynamics of internal membrane potential of Integrate & Fire neurons to enable MaxPool decision-making in the spiking domain. The proposed MaxPool models are implemented and tested within the SCNN architecture using a modified version of the aihwkit framework, a PyTorch-based toolkit for modeling and simulating hardware-based neural networks. The proposed spiking MaxPool scheme can decide even before the complete spatiotemporal input is applied, thus selectively trading off latency with accuracy.
By allocating just 10% of the spatiotemporal input window for the pooling decision, the proposed spiking MaxPool achieves up to 61.74% accuracy on the CIFAR10 classification task after training with backpropagation, only about a 1% drop from the 62.78% accuracy of the full (100%) spatiotemporal window case; both use a 2-bit weight resolution to reflect foundry-integrated ReRAM limitations. In addition, we realize one of the proposed spiking MaxPool techniques in an NVM crossbar array along with periphery circuits designed in a 130 nm CMOS technology. The energy-efficiency estimates show competitive performance compared to recent neuromorphic chip designs.
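The early-decision idea described above can be sketched as a toy model: each input in a pooling window drives a simple integrate-and-fire accumulator, and the winner is read off after only a fraction of the time window. This is an illustrative sketch only (no leak, no reset, made-up spike rates), not the paper's mixed-signal circuit.

```python
import numpy as np

def spiking_maxpool(spike_trains, decision_fraction=0.1):
    """Toy MaxPool in the spike domain: each input in the pooling window
    drives an integrate-and-fire accumulator, and the winner is whichever
    membrane potential is highest after only a fraction of the
    spatiotemporal window has been observed."""
    n_inputs, n_timesteps = spike_trains.shape
    t_decide = max(1, int(decision_fraction * n_timesteps))
    v = spike_trains[:, :t_decide].sum(axis=1)  # membrane potentials so far
    return int(np.argmax(v))

# Deterministic trains: input 1 fires fastest and should win
# even with only a 10% decision window.
trains = np.zeros((4, 100), dtype=int)
trains[0, ::20] = 1   # slow
trains[1, ::2] = 1    # fast
trains[2, ::5] = 1    # medium
winner = spiking_maxpool(trains, decision_fraction=0.1)
```

Shrinking `decision_fraction` is exactly the latency/accuracy trade-off the abstract describes: an earlier decision risks picking a transiently faster input.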
Collapse
|
36
|
Adaptive STDP-based on-chip spike pattern detection. Front Neurosci 2023; 17:1203956. [PMID: 37521704 PMCID: PMC10374023 DOI: 10.3389/fnins.2023.1203956] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2023] [Accepted: 06/15/2023] [Indexed: 08/01/2023] Open
Abstract
A spiking neural network (SNN) is a bottom-up tool used to describe information processing in brain microcircuits. It is becoming a crucial neuromorphic computational model. Spike-timing-dependent plasticity (STDP) is an unsupervised brain-like learning rule implemented in many SNNs and neuromorphic chips. However, a significant performance gap exists between ideal model simulation and neuromorphic implementation. The performance of STDP learning in neuromorphic chips deteriorates because the resolution of synaptic efficacy in such chips is generally restricted to 6 bits or less, whereas simulations employ the entire 64-bit floating-point precision available on digital computers. Previously, we introduced a bio-inspired learning rule named adaptive STDP and demonstrated via numerical simulation that adaptive STDP (using only 4-bit fixed-point synaptic efficacy) performs similarly to STDP learning (using 64-bit floating-point precision) in a noisy spike pattern detection model. Herein, we present the experimental results demonstrating the performance of adaptive STDP learning. To the best of our knowledge, this is the first study that demonstrates unsupervised noisy spatiotemporal spike pattern detection to perform well and maintain the simulation performance on a mixed-signal CMOS neuromorphic chip with low-resolution synaptic efficacy. The chip was designed in Taiwan Semiconductor Manufacturing Company (TSMC) 250 nm CMOS technology node and comprises a soma circuit and 256 synapse circuits along with their learning circuitry.
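The resolution gap the abstract highlights can be illustrated with a generic pair-based STDP rule whose weight is stored at 4-bit fixed-point precision. This is a textbook rule with illustrative constants, not the adaptive STDP rule implemented on the chip.

```python
import numpy as np

LEVELS = 2**4 - 1  # 4-bit synaptic efficacy: 16 discrete levels in [0, 1]

def quantize(w):
    """Snap the weight to the nearest of the 16 representable levels."""
    return np.round(np.clip(w, 0.0, 1.0) * LEVELS) / LEVELS

def stdp_update(w, dt, a_plus=0.05, a_minus=0.06, tau=20.0):
    """Pair-based STDP with the result stored at 4-bit resolution.
    dt = t_post - t_pre (ms): positive -> potentiation, negative -> depression.
    Constants are illustrative, not the chip's parameters."""
    if dt >= 0:
        w = w + a_plus * np.exp(-dt / tau)
    else:
        w = w - a_minus * np.exp(dt / tau)
    return quantize(w)

w = 0.5
w = stdp_update(w, dt=5.0)    # pre shortly before post: potentiate
w = stdp_update(w, dt=-30.0)  # post before pre: depress
```

Note how the second (small) depression step is absorbed entirely by the quantization grid: at 4 bits, updates smaller than half a level vanish, which is why naive low-resolution STDP degrades and an adaptive scheme is needed.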
Collapse
|
37
|
Exploiting semantic information in a spiking neural SLAM system. Front Neurosci 2023; 17:1190515. [PMID: 37476829 PMCID: PMC10354246 DOI: 10.3389/fnins.2023.1190515] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2023] [Accepted: 06/16/2023] [Indexed: 07/22/2023] Open
Abstract
To navigate in new environments, an animal must be able to keep track of its position while simultaneously creating and updating an internal map of features in the environment, a problem formulated as simultaneous localization and mapping (SLAM) in the field of robotics. This requires integrating information from different domains, including self-motion cues and sensory and semantic information. Several specialized neuron classes have been identified in the mammalian brain as being involved in solving SLAM. While biology has inspired a whole class of SLAM algorithms, the use of semantic information has not been explored in such work. We present a novel, biologically plausible SLAM model called SSP-SLAM, a spiking neural network designed using tools for large-scale cognitive modeling. Our model uses a vector representation of continuous spatial maps, which can be encoded via spiking neural activity and bound with other features (continuous and discrete) to create compressed structures containing semantic information from multiple domains (e.g., spatial, temporal, visual, conceptual). We demonstrate that the dynamics of these representations can be implemented with a hybrid oscillatory-interference and continuous attractor network of head direction cells. The estimated self-position from this network is used to learn an associative memory between semantically encoded landmarks and their positions, i.e., an environment map, which is used for loop closure. Our experiments demonstrate that environment maps can be learned accurately and their use greatly improves self-position estimation. Furthermore, grid cells, place cells, and object vector cells emerge in this model. We also run our path integrator network on the NengoLoihi neuromorphic emulator to demonstrate feasibility for a full neuromorphic implementation for energy efficient SLAM.
Collapse
|
38
|
Implementation of Field-Programmable Gate Array Platform for Object Classification Tasks Using Spike-Based Backpropagated Deep Convolutional Spiking Neural Networks. MICROMACHINES 2023; 14:1353. [PMID: 37512665 PMCID: PMC10385231 DOI: 10.3390/mi14071353] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/12/2023] [Revised: 06/26/2023] [Accepted: 06/28/2023] [Indexed: 07/30/2023]
Abstract
This paper investigates the performance of deep convolutional spiking neural networks (DCSNNs) trained using spike-based backpropagation techniques. Specifically, the study examined temporal spike sequence learning via backpropagation (TSSL-BP) and surrogate gradient descent via backpropagation (SGD-BP) as effective techniques for training DCSNNs on the field programmable gate array (FPGA) platform for object classification tasks. The primary objective of this experimental study was twofold: (i) to determine the most effective backpropagation technique, TSSL-BP or SGD-BP, for deeper spiking neural networks (SNNs) with convolution filters across various datasets; and (ii) to assess the feasibility of deploying DCSNNs trained using backpropagation techniques on low-power FPGA for inference, considering potential configuration adjustments and power requirements. The aforementioned objectives will assist in informing researchers and companies in this field regarding the limitations and unique perspectives of deploying DCSNNs on low-power FPGA devices. The study contributions have three main aspects: (i) the design of a low-power FPGA board featuring a deployable DCSNN chip suitable for object classification tasks; (ii) the inference of TSSL-BP and SGD-BP models with novel network architectures on the FPGA board for object classification tasks; and (iii) a comparative evaluation of the selected spike-based backpropagation techniques and the object classification performance of DCSNNs across multiple metrics using both public (MNIST, CIFAR10, KITTI) and private (INHA_ADAS, INHA_KLP) datasets.
Collapse
|
39
|
VTSNN: a virtual temporal spiking neural network. Front Neurosci 2023; 17:1091097. [PMID: 37287800 PMCID: PMC10242054 DOI: 10.3389/fnins.2023.1091097] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2022] [Accepted: 04/28/2023] [Indexed: 06/09/2023] Open
Abstract
Spiking neural networks (SNNs) have recently demonstrated outstanding performance in a variety of high-level tasks, such as image classification. However, advancements in the field of low-level assignments, such as image reconstruction, are rare. This may be due to the lack of promising image encoding techniques and corresponding neuromorphic devices designed specifically for SNN-based low-level vision problems. This paper begins by proposing a simple yet effective undistorted weighted-encoding-decoding technique, which primarily consists of an Undistorted Weighted-Encoding (UWE) and an Undistorted Weighted-Decoding (UWD). The former aims to convert a gray image into spike sequences for effective SNN learning, while the latter converts spike sequences back into images. Then, we design a new SNN training strategy, known as Independent-Temporal Backpropagation (ITBP), to avoid complex loss propagation in spatial and temporal dimensions, and experiments show that ITBP is superior to Spatio-Temporal Backpropagation (STBP). Finally, a so-called Virtual Temporal SNN (VTSNN) is formulated by incorporating the above-mentioned approaches into a U-net network architecture, fully utilizing its potent multiscale representation capability. Experimental results on several commonly used datasets such as MNIST, F-MNIST, and CIFAR10 demonstrate that the proposed method delivers highly competitive noise-removal performance that is superior to existing work. Compared to an ANN with the same architecture, VTSNN is more likely to achieve superior results while consuming ~1/274 of the energy. Specifically, using the given encoding-decoding strategy, a simple neuromorphic circuit could be easily constructed to maximize this low-carbon strategy.
Collapse
|
40
|
Corrigendum: SCTN: event-based object tracking with energy-efficient deep convolutional spiking neural networks. Front Neurosci 2023; 17:1204334. [PMID: 37260839 PMCID: PMC10227581 DOI: 10.3389/fnins.2023.1204334] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2023] [Accepted: 04/24/2023] [Indexed: 06/02/2023] Open
Abstract
[This corrects the article DOI: 10.3389/fnins.2023.1123698.].
Collapse
|
41
|
Meta-SpikePropamine: learning to learn with synaptic plasticity in spiking neural networks. Front Neurosci 2023; 17:1183321. [PMID: 37250397 PMCID: PMC10213417 DOI: 10.3389/fnins.2023.1183321] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2023] [Accepted: 04/06/2023] [Indexed: 05/31/2023] Open
Abstract
We propose that in order to harness our understanding of neuroscience toward machine learning, we must first have powerful tools for training brain-like models of learning. Although substantial progress has been made toward understanding the dynamics of learning in the brain, neuroscience-derived models of learning have yet to demonstrate the same performance capabilities as methods in deep learning such as gradient descent. Inspired by the successes of machine learning using gradient descent, we introduce a bi-level optimization framework that seeks to both solve online learning tasks and improve the ability to learn online using models of plasticity from neuroscience. We demonstrate that models of three-factor learning with synaptic plasticity taken from the neuroscience literature can be trained in Spiking Neural Networks (SNNs) with gradient descent via a framework of learning-to-learn to address challenging online learning problems. This framework opens a new path toward developing neuroscience inspired online learning algorithms.
Collapse
|
42
|
Optical flow estimation from event-based cameras and spiking neural networks. Front Neurosci 2023; 17:1160034. [PMID: 37250425 PMCID: PMC10210135 DOI: 10.3389/fnins.2023.1160034] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2023] [Accepted: 04/13/2023] [Indexed: 05/31/2023] Open
Abstract
Event-based cameras are raising interest within the computer vision community. These sensors operate with asynchronous pixels, emitting events, or "spikes", when the luminance change at a given pixel since the last event surpasses a certain threshold. Thanks to their inherent qualities, such as their low power consumption, low latency, and high dynamic range, they seem particularly tailored to applications with challenging temporal constraints and safety requirements. Event-based sensors are an excellent fit for Spiking Neural Networks (SNNs), since the coupling of an asynchronous sensor with neuromorphic hardware can yield real-time systems with minimal power requirements. In this work, we seek to develop one such system, using both event sensor data from the DSEC dataset and spiking neural networks to estimate optical flow for driving scenarios. We propose a U-Net-like SNN which, after supervised training, is able to make dense optical flow estimations. To do so, we encourage both a minimal norm for the error vector and a minimal angle between ground-truth and predicted flow, training our model with back-propagation using a surrogate gradient. In addition, the use of 3D convolutions allows us to capture the dynamic nature of the data by increasing the temporal receptive fields. Upsampling after each decoding stage ensures that each decoder's output contributes to the final estimation. Thanks to separable convolutions, we have been able to develop a lightweight model (compared to competitors) that nonetheless yields reasonably accurate optical flow estimates.
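The two training objectives named above, a small error-vector norm plus a small angle between predicted and ground-truth flow, can be sketched as a combined loss. The weighting `lam` and the NumPy formulation are assumptions for illustration; the paper's exact loss and framework may differ.

```python
import numpy as np

def flow_loss(pred, gt, lam=0.5, eps=1e-8):
    """Combine endpoint error (norm of the error vector) with the angular
    error between predicted and ground-truth flow vectors.
    pred, gt: arrays of shape (..., 2) holding (u, v) flow components."""
    epe = np.linalg.norm(pred - gt, axis=-1)          # error-vector norm
    cos = (pred * gt).sum(-1) / (
        np.linalg.norm(pred, axis=-1) * np.linalg.norm(gt, axis=-1) + eps)
    angle = np.arccos(np.clip(cos, -1.0, 1.0))        # angular error (rad)
    return float((epe + lam * angle).mean())

perfect = flow_loss(np.array([[1.0, 0.0]]), np.array([[1.0, 0.0]]))
ortho = flow_loss(np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]]))
```

A perfect prediction drives both terms to (near) zero, while an orthogonal prediction is penalized by both the endpoint and angular terms.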
Collapse
|
43
|
Volume-transmitted GABA waves pace epileptiform rhythms in the hippocampal network. Curr Biol 2023; 33:1249-1264.e7. [PMID: 36921605 PMCID: PMC10615848 DOI: 10.1016/j.cub.2023.02.051] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2022] [Revised: 01/05/2023] [Accepted: 02/15/2023] [Indexed: 03/17/2023]
Abstract
Mechanisms that entrain and pace rhythmic epileptiform discharges remain debated. Traditionally, the quest to understand them has focused on interneuronal networks driven by synaptic GABAergic connections. However, synchronized interneuronal discharges could also trigger transient elevations of extracellular GABA across the tissue volume, thus raising the tonic conductance (Gtonic) of synaptic and extrasynaptic GABA receptors in multiple cells. Here, we monitor extracellular GABA in hippocampal slices using a patch-clamp GABA "sniffer" and a novel optical GABA sensor, showing that periodic epileptiform discharges are preceded by transient, region-wide waves of extracellular GABA. Neural network simulations that incorporate volume-transmitted GABA signals point to a cycle of GABA-driven network inhibition and disinhibition underpinning this relationship. We test and validate this hypothesis using simultaneous patch-clamp recordings from multiple neurons and selective optogenetic stimulation of fast-spiking interneurons. Critically, reducing GABA uptake to decelerate extracellular GABA fluctuations, without affecting synaptic GABAergic transmission or resting GABA levels, slows down rhythmic activity. Our findings thus unveil a key role of extrasynaptic, volume-transmitted GABA in pacing regenerative rhythmic activity in brain networks.
Collapse
|
44
|
Overview of Spiking Neural Network Learning Approaches and Their Computational Complexities. SENSORS (BASEL, SWITZERLAND) 2023; 23:3037. [PMID: 36991750 PMCID: PMC10053242 DOI: 10.3390/s23063037] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/07/2023] [Revised: 03/08/2023] [Accepted: 03/09/2023] [Indexed: 06/19/2023]
Abstract
Spiking neural networks (SNNs) are a topic of growing interest. They more closely resemble actual neural networks in the brain than their second-generation counterparts, artificial neural networks (ANNs). SNNs have the potential to be more energy efficient than ANNs on event-driven neuromorphic hardware. This can yield drastic maintenance cost reductions for neural network models, as the energy consumption would be much lower than for regular deep learning models hosted in the cloud today. However, such hardware is still not widely available. On standard computer architectures, consisting mainly of central processing units (CPUs) and graphics processing units (GPUs), ANNs have the upper hand in execution speed due to their simpler models of neurons and of the connections between them. In general, they also win in terms of learning algorithms, as SNNs do not reach the same levels of performance as their second-generation counterparts in typical machine learning benchmark tasks, such as classification. In this paper, we review existing learning algorithms for spiking neural networks, divide them into categories by type, and assess their computational complexity.
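The execution-speed point above comes down to state: unlike an ANN unit's single weighted sum, a spiking neuron carries a membrane potential that must be updated at every timestep. A minimal leaky integrate-and-fire (LIF) neuron, the workhorse model in most of the surveyed algorithms, makes this concrete (constants are illustrative):

```python
def lif_run(input_current, v_th=1.0, tau=20.0, dt=1.0, v_reset=0.0):
    """Minimal leaky integrate-and-fire neuron simulated with Euler steps.
    Each timestep leaks the membrane potential toward rest, integrates the
    input, and emits a spike (with reset) on threshold crossing.
    Returns the list of spike times (in timesteps)."""
    v, spikes = v_reset, []
    for t, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)   # leaky integration
        if v >= v_th:                 # threshold crossing -> spike
            spikes.append(t)
            v = v_reset               # reset after the spike
    return spikes

# A constant input current produces a regular spike train.
spike_times = lif_run([0.3] * 20)
```

The per-timestep loop (here 20 iterations for one neuron) is exactly the overhead that makes SNNs slower than ANNs on CPUs/GPUs while mapping naturally onto event-driven neuromorphic chips.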
Collapse
|
45
|
Molecular Toxicity Virtual Screening Applying a Quantized Computational SNN-Based Framework. Molecules 2023; 28:molecules28031342. [PMID: 36771009 PMCID: PMC9919191 DOI: 10.3390/molecules28031342] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2022] [Revised: 01/27/2023] [Accepted: 01/29/2023] [Indexed: 02/04/2023] Open
Abstract
Spiking neural networks are biologically inspired machine learning algorithms attracting researchers' attention for their applicability to energy-efficient hardware alternatives to traditional computers. In the current work, spiking neural networks have been tested in a quantitative structure-activity analysis targeting the toxicity of molecules. Multiple public-domain databases of compounds have been evaluated with spiking neural networks, achieving accuracies compatible with high-quality frameworks presented in the previous literature. The numerical experiments also included an analysis of hyperparameters and tested the spiking neural networks on molecular fingerprints of different lengths. Proposing alternatives to traditional software and hardware for time- and resource-consuming tasks, such as those found in chemoinformatics, may open the door to new research and improvements in the field.
Collapse
|
46
|
Temporal derivative computation in the dorsal raphe network revealed by an experimentally driven augmented integrate-and-fire modeling framework. eLife 2023; 12:72951. [PMID: 36655738 PMCID: PMC9977298 DOI: 10.7554/elife.72951] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2021] [Accepted: 12/19/2022] [Indexed: 01/20/2023] Open
Abstract
By means of an expansive innervation, the serotonin (5-HT) neurons of the dorsal raphe nucleus (DRN) are positioned to enact coordinated modulation of circuits distributed across the entire brain in order to adaptively regulate behavior. Yet the network computations that emerge from the excitability and connectivity features of the DRN are still poorly understood. To gain insight into these computations, we began by carrying out a detailed electrophysiological characterization of genetically identified mouse 5-HT and somatostatin (SOM) neurons. We next developed a single-neuron modeling framework that combines the realism of Hodgkin-Huxley models with the simplicity and predictive power of generalized integrate-and-fire models. We found that feedforward inhibition of 5-HT neurons by heterogeneous SOM neurons implemented divisive inhibition, while endocannabinoid-mediated modulation of excitatory drive to the DRN increased the gain of 5-HT output. Our most striking finding was that the output of the DRN encodes a mixture of the intensity and temporal derivative of its input, and that the temporal derivative component dominates this mixture precisely when the input is increasing rapidly. This network computation primarily emerged from prominent adaptation mechanisms found in 5-HT neurons, including a previously undescribed dynamic threshold. By applying a bottom-up neural network modeling approach, our results suggest that the DRN is particularly apt to encode input changes over short timescales, reflecting one of the salient emerging computations that dominate its output to regulate behavior.
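The dynamic threshold highlighted above can be sketched as an integrate-and-fire neuron whose threshold jumps after each spike and decays back toward baseline, a toy version of the adaptation mechanism that gives 5-HT neurons their derivative-like response. All constants here are illustrative assumptions, not fitted parameters from the study.

```python
def adaptive_if(inputs, v_th0=1.0, dth=0.5, tau_th=10.0):
    """Integrate-and-fire with a dynamic threshold: each spike raises the
    threshold by `dth`, and the threshold relaxes back toward its baseline
    `v_th0` with time constant `tau_th`, so firing adapts to sustained input.
    Returns spike times (in timesteps)."""
    v, th, spikes = 0.0, v_th0, []
    for t, i_in in enumerate(inputs):
        v += i_in                        # integrate the input
        th += (v_th0 - th) / tau_th      # threshold decays toward baseline
        if v >= th:
            spikes.append(t)
            v = 0.0
            th += dth                    # spike-triggered threshold jump
    return spikes

# Step input: the neuron fires readily at onset, then adapts as the
# threshold accumulates, stretching the inter-spike intervals.
step = [0.0] * 5 + [0.6] * 15
spikes = adaptive_if(step)
```

With a fixed threshold the same input would fire at a constant rate; the adapting threshold instead emphasizes the input's onset, which is the essence of the derivative encoding the abstract describes.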
Collapse
|
47
|
Supervised Learning Algorithm Based on Spike Train Inner Product for Deep Spiking Neural Networks. Brain Sci 2023; 13:brainsci13020168. [PMID: 36831711 PMCID: PMC9954578 DOI: 10.3390/brainsci13020168] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2022] [Revised: 01/15/2023] [Accepted: 01/16/2023] [Indexed: 01/20/2023] Open
Abstract
By mimicking the hierarchical structure of the human brain, deep spiking neural networks (DSNNs) can extract features from a lower level to a higher level gradually, and improve the performance for the processing of spatio-temporal information. Due to the complex hierarchical structure and implicit nonlinear mechanism, the formulation of spike train level supervised learning methods for DSNNs remains an important problem in this research area. Based on the definition of kernel function and spike train inner product (STIP) as well as the idea of error backpropagation (BP), this paper first proposes a deep supervised learning algorithm for DSNNs named BP-STIP. Furthermore, in order to alleviate the intrinsic weight transport problem of the BP mechanism, feedback alignment (FA) and broadcast alignment (BA) mechanisms are utilized to optimize the error feedback mode of BP-STIP, and two deep supervised learning algorithms named FA-STIP and BA-STIP are also proposed. In the experiments, the effectiveness of the three proposed DSNN algorithms is verified on the MNIST digital image benchmark dataset, and the influence of different kernel functions on the learning performance of DSNNs with different network scales is analyzed. Experimental results show that the FA-STIP and BA-STIP algorithms achieve 94.73% and 95.65% classification accuracy, respectively, exhibiting better learning performance and stability than the benchmark BP-STIP algorithm.
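A spike train inner product of the kind referenced above can be sketched directly: smooth both spike trains with a kernel and sum the pairwise kernel evaluations. The Laplacian (exponential) kernel used here is a common choice in the spike-train-metric literature, assumed for illustration; the paper's exact kernel may differ.

```python
import numpy as np

def stip(train_a, train_b, tau=10.0):
    """Spike train inner product under an exponential (Laplacian) kernel:
    <s_a, s_b> = sum_{i,j} k(t_i^a - t_j^b), with k(dt) = exp(-|dt| / tau).
    train_a, train_b: sequences of spike times (same time units as tau)."""
    ta = np.asarray(train_a, dtype=float)[:, None]
    tb = np.asarray(train_b, dtype=float)[None, :]
    return float(np.exp(-np.abs(ta - tb) / tau).sum())

# Identical trains maximize the inner product; a time-shifted copy scores lower,
# which is what lets an STIP-based loss measure spike-timing error.
a = [5.0, 15.0, 25.0]
same = stip(a, a)
shifted = stip(a, [10.0, 20.0, 30.0])
```

In a BP-STIP-style loss, the distance between desired and actual output trains can then be written purely in terms of such inner products, ||s_d - s_o||² = ⟨s_d, s_d⟩ - 2⟨s_d, s_o⟩ + ⟨s_o, s_o⟩, which is what makes the formulation differentiable with respect to spike timing.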
Collapse
|
48
|
The Influence of the Number of Spiking Neurons on Synaptic Plasticity. Biomimetics (Basel) 2023; 8:biomimetics8010028. [PMID: 36648814 PMCID: PMC9844446 DOI: 10.3390/biomimetics8010028] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Revised: 01/04/2023] [Accepted: 01/06/2023] [Indexed: 01/12/2023] Open
Abstract
The main advantages of spiking neural networks are their high biological plausibility and their fast response due to spiking behaviour. The response time decreases significantly in hardware implementations of SNNs because the neurons operate in parallel. Compared with traditional computational neural networks, SNNs use a lower number of neurons, which also reduces their cost. Another critical characteristic of SNNs is their ability to learn by event association, which is determined mainly by postsynaptic mechanisms such as long-term potentiation (LTP). However, in some conditions, presynaptic plasticity determined by post-tetanic potentiation (PTP) occurs due to the fast activation of presynaptic neurons. This violates the Hebbian learning rules that are specific to postsynaptic plasticity. Hebbian learning improves the SNN's ability to discriminate the neural paths trained by the temporal association of events, which is the key element of learning in the brain. This paper quantifies the efficiency of Hebbian learning as the ratio between the LTP and PTP effects on the synaptic weights. On the basis of this new idea, this work evaluates for the first time the influence of the number of neurons on the PTP/LTP ratio and consequently on the Hebbian learning efficiency. The evaluation was performed by simulating a neuron model that was successfully tested in control applications. The results show that the firing rate of the postsynaptic neuron (post) depends on the number of presynaptic neurons (pre), which increases the effect of LTP on synaptic potentiation. When post activates at a requested rate, the learning efficiency varies inversely with the number of pre neurons, reaching its maximum when fewer than two are used. In addition, Hebbian learning is more efficient at lower presynaptic firing rates that are divisors of the target frequency of post.
This study concluded that, when the electronic neurons model presynaptic plasticity in addition to LTP, the efficiency of Hebbian learning is higher when fewer neurons are used. This result strengthens the observations of our previous research, where an SNN with a reduced number of neurons could successfully learn to control the motion of robotic fingers.
Collapse
|
49
|
A MoS2 Hafnium Oxide Based Ferroelectric Encoder for Temporal-Efficient Spiking Neural Network. ADVANCED MATERIALS (DEERFIELD BEACH, FLA.) 2023; 35:e2204949. [PMID: 36366910 DOI: 10.1002/adma.202204949] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Revised: 07/26/2022] [Indexed: 06/16/2023]
Abstract
Spiking neural network (SNN), where the information is evaluated recurrently through spikes, has manifested significant promise to minimize the energy expenditure in data-intensive machine learning and artificial intelligence. Among these applications, artificial neural encoders are essential to convert external stimuli to a spiking format that can be subsequently fed to the neural network. Here, a molybdenum disulfide (MoS2) hafnium oxide-based ferroelectric encoder is demonstrated for temporal-efficient information processing in SNN. The fast domain switching attribute associated with the polycrystalline nature of hafnium oxide-based ferroelectric material is exploited for spike encoding, rendering it suitable for realizing biomimetic encoders. Accordingly, a high-performance ferroelectric encoder is achieved, featuring superior switching efficiency, negligible charge trapping effect, and robust ferroelectric response, which successfully enable a broad dynamic range. Furthermore, an SNN is simulated to verify the precision of the encoded information, in which an average inference accuracy of 95.14% can be achieved, using the Modified National Institute of Standards and Technology (MNIST) dataset for digit classification. Moreover, this ferroelectric encoder manifests prominent resilience against noise injection with an overall prediction accuracy of 94.73% under various Gaussian noise levels, showing practical promise to reduce the computational load for the neural network.
Collapse
|
50
|
SCTN: Event-based object tracking with energy-efficient deep convolutional spiking neural networks. Front Neurosci 2023; 17:1123698. [PMID: 36875665 PMCID: PMC9978206 DOI: 10.3389/fnins.2023.1123698] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2022] [Accepted: 01/30/2023] [Indexed: 02/18/2023] Open
Abstract
Event cameras are asynchronous and neuromorphically inspired visual sensors, which have shown great potential in object tracking because they can easily detect moving objects. Since event cameras output discrete events, they are inherently suitable to coordinate with Spiking Neural Networks (SNNs), which have a unique event-driven computation characteristic and energy-efficient computing. In this paper, we tackle the problem of event-based object tracking by a novel architecture with a discriminatively trained SNN, called the Spiking Convolutional Tracking Network (SCTN). Taking a segment of events as input, SCTN not only better exploits implicit associations among events rather than event-wise processing, but also fully utilizes precise temporal information and maintains the sparse representation in segments instead of frames. To make SCTN more suitable for object tracking, we propose a new loss function that introduces an exponential Intersection over Union (IoU) in the voltage domain. To the best of our knowledge, this is the first tracking network directly trained with SNN. In addition, we present a new event-based tracking dataset, dubbed DVSOT21. Experimental results on DVSOT21 demonstrate that our method achieves competitive performance with very low energy consumption compared to ANN-based trackers. With lower energy consumption, tracking on neuromorphic hardware will reveal its advantage.
Collapse
|