1
Siddique MAB, Zhang Y, An H. Monitoring time domain characteristics of Parkinson's disease using 3D memristive neuromorphic system. Front Comput Neurosci 2023; 17:1274575. PMID: 38162516; PMCID: PMC10754992; DOI: 10.3389/fncom.2023.1274575.
Abstract
Introduction: Parkinson's disease (PD) is a neurodegenerative disorder affecting millions of patients. Closed-loop deep brain stimulation (CL-DBS) is a therapy that can alleviate the symptoms of PD. A CL-DBS system consists of an electrode that delivers electrical stimulation to a specific region of the brain and a battery-powered stimulator implanted in the chest. The electrical stimuli must be adjusted in real time according to the state of the PD symptoms, so fast and precise symptom monitoring is a critical function of CL-DBS systems. However, current CL-DBS techniques impose high computational demands for real-time PD symptom monitoring, which are not feasible for implanted and wearable medical devices.

Methods: In this paper, we present an energy-efficient neuromorphic PD symptom detector using memristive three-dimensional integrated circuits (3D-ICs). Excessive oscillation at beta frequencies (13-35 Hz) in the subthalamic nucleus (STN) is used as a biomarker of PD symptoms.

Results: Simulation results demonstrate that our neuromorphic PD detector, implemented with an 8-layer spiking Long Short-Term Memory (S-LSTM) network, excels at recognizing PD symptoms, achieving a training accuracy of 99.74% and a validation accuracy of 99.52% for a 75%-25% data split. Furthermore, we evaluated the improvement of our neuromorphic CL-DBS detector using NeuroSIM. For monolithic 3D-ICs, the chip area, latency, energy, and power consumption of our CL-DBS detector were reduced by 47.4%, 66.63%, 65.6%, and 67.5%, respectively. Similarly, for heterogeneous 3D-ICs, replacing traditional static random access memory (SRAM) with memristive synapses reduced chip area, latency, energy, and power consumption by 44.8%, 64.75%, 65.28%, and 67.7%, respectively.

Discussion: This study introduces a novel approach for PD symptom evaluation that directly uses spiking signals from neural activities in the time domain. This method significantly reduces the time and energy required for signal conversion compared with traditional frequency-domain approaches. The study pioneers the use of neuromorphic computing and memristors in CL-DBS design, surpassing SRAM-based designs in chip area, latency, and energy efficiency. Finally, the proposed neuromorphic PD detector is highly resilient to timing variations in brain neural signals, as confirmed by a robustness analysis.
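As an illustration of the beta-band biomarker described above, the sketch below flags a synthetic STN-like trace whose 13-35 Hz power is excessive. This is a hypothetical frequency-domain check, not the paper's time-domain S-LSTM (which avoids exactly this kind of spectral conversion); the signals, sampling rate, and threshold are invented for illustration.

```python
import numpy as np

def beta_band_power(signal, fs, lo=13.0, hi=35.0):
    """Fraction of spectral power falling in the beta band (13-35 Hz)."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return spec[band].sum() / spec.sum()

fs = 1000  # sampling rate in Hz (assumed for this sketch)
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
healthy = rng.normal(0, 1, t.size)                         # broadband background
parkinsonian = healthy + 3.0 * np.sin(2 * np.pi * 20 * t)  # excessive 20 Hz oscillation

# The parkinsonian-like trace concentrates its power in the beta band.
print(beta_band_power(healthy, fs), beta_band_power(parkinsonian, fs))
```

A time-domain neuromorphic detector classifies the raw spike activity directly, saving the FFT-style conversion shown here.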
Affiliation(s)
- Md Abu Bakr Siddique: Department of Electrical and Computer Engineering, Michigan Technological University, Houghton, MI, United States
- Yan Zhang: Department of Biological Sciences, Michigan Technological University, Houghton, MI, United States
- Hongyu An: Department of Electrical and Computer Engineering, Michigan Technological University, Houghton, MI, United States
2
Bakhshi T, Zafar S. Hybrid Deep Learning Techniques for Securing Bioluminescent Interfaces in Internet of Bio Nano Things. Sensors (Basel) 2023; 23:8972. PMID: 37960671; PMCID: PMC10648166; DOI: 10.3390/s23218972.
Abstract
The Internet of bio-nano things (IoBNT) is an emerging paradigm that employs nanoscale (~1-100 nm) biological transceivers to collect in vivo signaling information from the human body and communicate it to healthcare providers over the Internet. Bio-nano things (BNTs) offer external actuation of in-body molecular communication (MC) for targeted drug delivery to otherwise inaccessible parts of human tissue. BNTs are interconnected using chemical diffusion channels, forming an in vivo bio-nano network that is connected to an external ex vivo environment such as the Internet through bio-cyber interfaces. Bio-luminescent bio-cyber interfacing (BBI) has proven promising for realizing IoBNT systems due to its non-obtrusive and low-cost implementation. BBI security, however, is a key concern during practical implementation, since Internet connectivity exposes the interfaces to external threat vectors, and accurate classification of anomalous BBI traffic patterns is required for mitigation. Parameter complexity and intricate correlations among BBI traffic characteristics, however, limit the use of existing machine-learning (ML) based anomaly detection methods, which typically require hand-crafted feature design. To this end, the present work investigates deep learning (DL) algorithms that allow dynamic and scalable feature engineering to discriminate between normal and anomalous BBI traffic. During extensive validation using singular and multi-dimensional models on the generated dataset, our hybrid convolutional and recurrent ensemble (CNN + LSTM) achieved an accuracy of approximately 93.51%, outperforming other deep and shallow structures. Furthermore, the hybrid DL network allowed automated extraction of normal as well as temporal features in the BBI data, eliminating manual selection and crafting of input features.
Finally, we recommend deployment primitives for the extracted optimal classifier in conventional intrusion detection systems as well as in evolving non-von Neumann architectures for real-time anomaly detection.
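The hybrid convolutional-plus-recurrent idea can be sketched framework-agnostically: a 1-D convolution extracts local patterns from a window of traffic features, and a recurrent layer aggregates them over time into a single anomaly score. This is an untrained, minimal skeleton with random weights and invented dimensions, not the authors' CNN + LSTM ensemble.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv1d(x, kernels):
    """Valid 1-D convolution with ReLU: (T, C_in) x (K, C_in, C_out) -> (T-K+1, C_out)."""
    K, C_in, C_out = kernels.shape
    T = x.shape[0]
    out = np.zeros((T - K + 1, C_out))
    for t in range(T - K + 1):
        # contract the kernel window over time and input channels
        out[t] = np.tensordot(x[t:t + K], kernels, axes=([0, 1], [0, 1]))
    return np.maximum(out, 0.0)

def simple_rnn(x, W_x, W_h):
    """Elman-style recurrence; returns the final hidden state."""
    h = np.zeros(W_h.shape[0])
    for x_t in x:
        h = np.tanh(x_t @ W_x + h @ W_h)
    return h

# Invented shapes: 64 time steps, 4 traffic features, 8 conv channels, 16 hidden units.
T, C_in, C_out, H, K = 64, 4, 8, 16, 5
traffic = rng.normal(size=(T, C_in))  # one window of BBI traffic features
feat = conv1d(traffic, rng.normal(scale=0.1, size=(K, C_in, C_out)))
h = simple_rnn(feat, rng.normal(scale=0.1, size=(C_out, H)),
               rng.normal(scale=0.1, size=(H, H)))
score = 1.0 / (1.0 + np.exp(-(h @ rng.normal(size=H))))  # anomaly probability
```

In a real system the weights would be learned end to end, which is what removes the need for hand-crafted features.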
Affiliation(s)
- Taimur Bakhshi: School of Built Environment, Engineering & Computing, Leeds Beckett University, Leeds LS1 3HE, UK
- Sidra Zafar: Department of Computer Science, Kinnaird College for Women, Lahore 54000, Pakistan
3
Vlasov D, Minnekhanov A, Rybka R, Davydov Y, Sboev A, Serenko A, Ilyasov A, Demin V. Memristor-based spiking neural network with online reinforcement learning. Neural Netw 2023; 166:512-523. PMID: 37579580; DOI: 10.1016/j.neunet.2023.07.031.
Abstract
Neural networks implemented in memristor-based hardware can provide fast and efficient in-memory computation, but traditional learning methods such as error back-propagation are hardly feasible in such hardware. Spiking neural networks (SNNs) are highly promising in this regard, as their weights can be changed locally in a self-organized manner, without the high-precision updates that would have to be calculated from information spanning almost the entire network. This problem is particularly relevant for solving control tasks with neural-network reinforcement learning methods, as those are highly sensitive to any source of stochasticity in model initialization, training, or decision-making. This paper presents an online reinforcement learning algorithm in which connection weights are updated after each environment state is processed during interaction-with-environment data generation. Another novel feature of the algorithm is that it is applied to SNNs with memristor-based STDP-like learning rules. The plasticity functions are obtained from real memristors based on poly-p-xylylene and a CoFeB-LiNbO3 nanocomposite, which were experimentally assembled and analyzed. The SNN is composed of leaky integrate-and-fire neurons. Environmental states are encoded by the timings of input spikes, and the control action is decoded from the first output spike. The proposed learning algorithm successfully solves the Cart-Pole benchmark task. This result could be the first step towards implementing a real-time agent learning procedure in a continuous-time environment that can be run on neuromorphic systems with memristive synapses.
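The STDP-like rule at the heart of such memristive learning maps the spike-time difference between pre- and postsynaptic neurons to a weight change. The sketch below uses an idealized exponential window with invented amplitudes and time constant; the paper instead uses plasticity curves measured on real poly-p-xylylene and CoFeB-LiNbO3 devices.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms).

    Idealized exponential STDP window: a post-spike shortly after a
    pre-spike potentiates the synapse, the reverse order depresses it.
    Real memristive devices have measured, device-specific curves.
    """
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

print(stdp_dw(5.0), stdp_dw(-5.0))
```

The locality of this rule (only the two neurons' spike times matter) is what makes it implementable directly in memristor crossbar hardware.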
Affiliation(s)
- Danila Vlasov: NRC "Kurchatov Institute", Akademika Kurchatova sq. 1, Moscow, Russian Federation
- Anton Minnekhanov: NRC "Kurchatov Institute", Akademika Kurchatova sq. 1, Moscow, Russian Federation
- Roman Rybka: NRC "Kurchatov Institute", Akademika Kurchatova sq. 1, Moscow, Russian Federation; Russian Technological University "MIREA", Vernadsky av. 78, Moscow, Russian Federation
- Yury Davydov: NRC "Kurchatov Institute", Akademika Kurchatova sq. 1, Moscow, Russian Federation
- Alexander Sboev: NRC "Kurchatov Institute", Akademika Kurchatova sq. 1, Moscow, Russian Federation; Russian Technological University "MIREA", Vernadsky av. 78, Moscow, Russian Federation; NRNU "MEPhI", Kashira Hwy 31, Moscow, Russian Federation
- Alexey Serenko: NRC "Kurchatov Institute", Akademika Kurchatova sq. 1, Moscow, Russian Federation
- Alexander Ilyasov: NRC "Kurchatov Institute", Akademika Kurchatova sq. 1, Moscow, Russian Federation; Faculty of Physics, Lomonosov Moscow State University, Leninskie gory 1, Moscow, Russian Federation
- Vyacheslav Demin: NRC "Kurchatov Institute", Akademika Kurchatova sq. 1, Moscow, Russian Federation
4
Ivanov D, Chezhegov A, Kiselev M, Grunin A, Larionov D. Neuromorphic artificial intelligence systems. Front Neurosci 2022; 16:959626. PMID: 36188479; PMCID: PMC9516108; DOI: 10.3389/fnins.2022.959626.
Abstract
Modern artificial intelligence (AI) systems, based on von Neumann architecture and classical neural networks, have a number of fundamental limitations in comparison with the mammalian brain. In this article we discuss these limitations and ways to mitigate them. Next, we present an overview of currently available neuromorphic AI projects in which these limitations are overcome by bringing some brain features into the functioning and organization of computing systems (TrueNorth, Loihi, Tianjic, SpiNNaker, BrainScaleS, NeuronFlow, DYNAP, Akida, Mythic). Also, we present the principle of classifying neuromorphic AI systems by the brain features they use: connectionism, parallelism, asynchrony, impulse nature of information transfer, on-device-learning, local learning, sparsity, analog, and in-memory computing. In addition to reviewing new architectural approaches used by neuromorphic devices based on existing silicon microelectronics technologies, we also discuss the prospects for using a new memristor element base. Examples of recent advances in the use of memristors in neuromorphic applications are also given.
Affiliation(s)
- Dmitry Ivanov (corresponding author): Cifrum, Moscow, Russia; Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University, Moscow, Russia
- Mikhail Kiselev: Cifrum, Moscow, Russia; Laboratory of Neuromorphic Computations, Department of Physics, Chuvash State University, Cheboksary, Russia
- Andrey Grunin: Faculty of Physics, Lomonosov Moscow State University, Moscow, Russia
5
Tiddia G, Golosio B, Albers J, Senk J, Simula F, Pronold J, Fanti V, Pastorelli E, Paolucci PS, van Albada SJ. Fast Simulation of a Multi-Area Spiking Network Model of Macaque Cortex on an MPI-GPU Cluster. Front Neuroinform 2022; 16:883333. PMID: 35859800; PMCID: PMC9289599; DOI: 10.3389/fninf.2022.883333.
Abstract
Spiking neural network models are increasingly establishing themselves as an effective tool for simulating the dynamics of neuronal populations and for understanding the relationship between these dynamics and brain function. Furthermore, the continuous development of parallel computing technologies and the growing availability of computational resources are leading to an era of large-scale simulations capable of describing regions of the brain of ever larger dimensions at increasing detail. Recently, the possibility to use MPI-based parallel codes on GPU-equipped clusters to run such complex simulations has emerged, opening up novel paths to further speed-ups. NEST GPU is a GPU library written in CUDA-C/C++ for large-scale simulations of spiking neural networks, which was recently extended with a novel algorithm for remote spike communication through MPI on a GPU cluster. In this work we evaluate its performance on the simulation of a multi-area model of macaque vision-related cortex, made up of about 4 million neurons and 24 billion synapses and representing 32 mm2 surface area of the macaque cortex. The outcome of the simulations is compared against that obtained using the well-known CPU-based spiking neural network simulator NEST on a high-performance computing cluster. The results show not only an optimal match with the NEST statistical measures of the neural activity in terms of three informative distributions, but also remarkable achievements in terms of simulation time per second of biological activity. Indeed, NEST GPU was able to simulate a second of biological time of the full-scale macaque cortex model in its metastable state 3.1× faster than NEST using 32 compute nodes equipped with an NVIDIA V100 GPU each. Using the same configuration, the ground state of the full-scale macaque cortex model was simulated 2.4× faster than NEST.
Affiliation(s)
- Gianmarco Tiddia: Department of Physics, University of Cagliari, Monserrato, Italy; Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Cagliari, Monserrato, Italy
- Bruno Golosio (corresponding author): Department of Physics, University of Cagliari, Monserrato, Italy; Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Cagliari, Monserrato, Italy
- Jasper Albers: Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany; RWTH Aachen University, Aachen, Germany
- Johanna Senk: Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Francesco Simula: Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy
- Jari Pronold: Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany; RWTH Aachen University, Aachen, Germany
- Viviana Fanti: Department of Physics, University of Cagliari, Monserrato, Italy; Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Cagliari, Monserrato, Italy
- Elena Pastorelli: Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy
- Sacha J. van Albada: Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany; Faculty of Mathematics and Natural Sciences, Institute of Zoology, University of Cologne, Cologne, Germany
6
Müller E, Arnold E, Breitwieser O, Czierlinski M, Emmel A, Kaiser J, Mauch C, Schmitt S, Spilger P, Stock R, Stradmann Y, Weis J, Baumbach A, Billaudelle S, Cramer B, Ebert F, Göltz J, Ilmberger J, Karasenko V, Kleider M, Leibfried A, Pehle C, Schemmel J. A Scalable Approach to Modeling on Accelerated Neuromorphic Hardware. Front Neurosci 2022; 16:884128. PMID: 35663548; PMCID: PMC9157770; DOI: 10.3389/fnins.2022.884128.
Abstract
Neuromorphic systems open up opportunities to enlarge the explorative space for computational research. However, it is often challenging to unite efficiency and usability. This work presents the software aspects of this endeavor for the BrainScaleS-2 system, a hybrid accelerated neuromorphic hardware architecture based on physical modeling. We introduce key aspects of the BrainScaleS-2 Operating System: experiment workflow, API layering, software design, and platform operation. We present use cases to discuss and derive requirements for the software and showcase the implementation. The focus lies on novel system and software features such as multi-compartmental neurons, fast re-configuration for hardware-in-the-loop training, applications for the embedded processors, the non-spiking operation mode, interactive platform access, and sustainable hardware/software co-development. Finally, we discuss further developments in terms of hardware scale-up, system usability, and efficiency.
Affiliation(s)
- Eric Müller, Elias Arnold, Oliver Breitwieser, Milena Czierlinski, Arne Emmel, Jakob Kaiser, Christian Mauch, Philipp Spilger, Raphael Stock, Yannik Stradmann, Johannes Weis, Benjamin Cramer, Falk Ebert, Joscha Ilmberger, Vitali Karasenko, Mitja Kleider, Aron Leibfried, Christian Pehle, Johannes Schemmel: Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Sebastian Schmitt: Third Institute of Physics, University of Göttingen, Göttingen, Germany
- Andreas Baumbach, Julian Göltz: Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany; Department of Physiology, University of Bern, Bern, Switzerland
7
Bio-Inspired Control System for Fingers Actuated by Multiple SMA Actuators. Biomimetics (Basel) 2022; 7:62. PMID: 35645189; PMCID: PMC9149821; DOI: 10.3390/biomimetics7020062.
Abstract
Spiking neural networks can control with high precision the rotation and force of single-joint robotic arms actuated by shape memory alloy (SMA) wires. Bio-inspired robotic arms such as anthropomorphic fingers include more joints that are actuated simultaneously. Starting from the hypothesis that the motor cortex groups the control of multiple muscles into neural synergies, this work presents for the first time an SNN structure that controls a series of finger motions by activating groups of neurons that drive the corresponding actuators in sequence. The initial motion starts when a command signal is received, while the subsequent ones are initiated based on the sensors' output. To increase the biological plausibility of the control system, the finger is flexed and extended by four SMA wires connected to the phalanges as the main tendons. The results show that the artificial finger controlled by the SNN smoothly performs several motions of the human index finger while the command signal is active. To evaluate the advantages of using an SNN, we compared the finger behaviour when the SMA actuators are driven by the SNN and by a microcontroller, respectively. In addition, we designed an electronic circuit that models the sensors' output in concordance with the SNN output.
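The sequencing behavior described above (start on a command signal, advance each subsequent motion on sensor feedback) can be mimicked in plain code. This is a hypothetical state-machine analogue for illustration only, not the paper's SNN; `finger_sequencer` and its threshold are invented names and values.

```python
def finger_sequencer(command, sensor_readings, threshold=0.8):
    """Hypothetical sequencing logic: the first actuator group activates on
    the command signal; each later group activates when the sensor reading
    for the current phase crosses the threshold."""
    if not command:
        return []          # no command signal: no motion starts
    phase = 0
    active = [phase]       # actuator groups activated so far, in order
    for reading in sensor_readings:
        if reading >= threshold:
            phase += 1     # sensor confirms the motion: trigger the next group
            active.append(phase)
    return active

print(finger_sequencer(True, [0.5, 0.9, 0.3, 0.85]))
```

In the paper this chaining is realized by neuron groups exciting one another, with sensor signals gating the transitions.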
8
Liu TY, Mahjoubfar A, Prusinski D, Stevens L. Neuromorphic computing for content-based image retrieval. PLoS One 2022; 17:e0264364. PMID: 35385477; PMCID: PMC8985975; DOI: 10.1371/journal.pone.0264364.
Abstract
Neuromorphic computing mimics the neural activity of the brain through emulating spiking neural networks. In numerous machine learning tasks, neuromorphic chips are expected to provide superior solutions in terms of cost and power efficiency. Here, we explore the application of Loihi, a neuromorphic computing chip developed by Intel, for the computer vision task of image retrieval. We evaluated the functionalities and the performance metrics that are critical in content-based visual search and recommender systems using deep-learning embeddings. Our results show that the neuromorphic solution is about 2.5 times more energy-efficient compared with an ARM Cortex-A72 CPU and 12.5 times more energy-efficient compared with NVIDIA T4 GPU for inference by a lightweight convolutional neural network when batch size is 1 while maintaining the same level of matching accuracy. The study validates the potential of neuromorphic computing in low-power image retrieval, as a complementary paradigm to the existing von Neumann architectures.
Affiliation(s)
- Te-Yuan Liu: Target Corporation, Sunnyvale, California, United States of America
- Ata Mahjoubfar (corresponding author): Target Corporation, Sunnyvale, California, United States of America
- Daniel Prusinski: Target Corporation, Sunnyvale, California, United States of America
- Luis Stevens: Target Corporation, Sunnyvale, California, United States of America
9
Vogginger B, Kreutz F, López-Randulfe J, Liu C, Dietrich R, Gonzalez HA, Scholz D, Reeb N, Auge D, Hille J, Arsalan M, Mirus F, Grassmann C, Knoll A, Mayr C. Automotive Radar Processing With Spiking Neural Networks: Concepts and Challenges. Front Neurosci 2022; 16:851774. PMID: 35431782; PMCID: PMC9012531; DOI: 10.3389/fnins.2022.851774.
Abstract
Frequency-modulated continuous wave radar sensors play an essential role in assisted and autonomous driving, as they are robust under all weather and light conditions. However, the rising number of transmitters and receivers needed for higher angular resolution increases the cost of digital signal processing. One promising approach to energy-efficient signal processing is the use of brain-inspired spiking neural networks (SNNs) implemented on neuromorphic hardware. In this article we perform a step-by-step analysis of automotive radar processing and argue how spiking neural networks could replace or complement the conventional processing. We provide SNN examples for two processing steps and evaluate their accuracy and computational efficiency. For radar target detection, an SNN with temporal coding is competitive with the conventional approach at a low compute overhead. Our SNN for target classification, in turn, achieves an accuracy close to a reference artificial neural network while requiring 200 times fewer operations. Finally, we discuss the specific requirements and challenges of SNN-based radar processing on neuromorphic hardware. This study demonstrates the general applicability of SNNs to automotive radar processing and sustains the prospect of energy-efficient realizations in automated vehicles.
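Temporal coding of the kind used for the radar target-detection SNN can be illustrated with time-to-first-spike encoding, where a stronger input produces an earlier spike and the decision can be read from the earliest spike. The function below is a generic sketch with invented normalization and time scale, not the authors' implementation.

```python
import numpy as np

def ttfs_encode(values, t_max=100.0):
    """Time-to-first-spike encoding: larger input -> earlier spike time.

    Inputs are min-max normalized to [0, 1]; a value of 1 spikes at t=0,
    a value of 0 spikes at t=t_max. Scale and normalization are assumptions
    of this sketch.
    """
    v = np.asarray(values, dtype=float)
    v = (v - v.min()) / (v.max() - v.min() + 1e-12)
    return t_max * (1.0 - v)

# Range-Doppler cell magnitudes; the strongest cell spikes first.
cells = np.array([0.2, 0.9, 0.1, 0.5])
times = ttfs_encode(cells)
print(times)
```

Reading out only the first spike is what keeps the compute overhead of such a detector low.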
Affiliation(s)
- Bernhard Vogginger (corresponding author): Chair of Highly-Parallel VLSI-Systems and Neuro-Microelectronics, Faculty of Electrical and Computer Engineering, Institute of Principles of Electrical and Electronic Engineering, Technische Universität Dresden, Dresden, Germany
- Felix Kreutz: Chair of Highly-Parallel VLSI-Systems and Neuro-Microelectronics, Faculty of Electrical and Computer Engineering, Institute of Principles of Electrical and Electronic Engineering, Technische Universität Dresden, Dresden, Germany; Infineon Technologies Dresden GmbH & Co. KG, Dresden, Germany
- Chen Liu: Chair of Highly-Parallel VLSI-Systems and Neuro-Microelectronics, Faculty of Electrical and Computer Engineering, Institute of Principles of Electrical and Electronic Engineering, Technische Universität Dresden, Dresden, Germany
- Robin Dietrich: Department of Informatics, Technical University of Munich, Munich, Germany
- Hector A. Gonzalez: Chair of Highly-Parallel VLSI-Systems and Neuro-Microelectronics, Faculty of Electrical and Computer Engineering, Institute of Principles of Electrical and Electronic Engineering, Technische Universität Dresden, Dresden, Germany
- Daniel Scholz: Chair of Highly-Parallel VLSI-Systems and Neuro-Microelectronics, Faculty of Electrical and Computer Engineering, Institute of Principles of Electrical and Electronic Engineering, Technische Universität Dresden, Dresden, Germany; Infineon Technologies Dresden GmbH & Co. KG, Dresden, Germany
- Nico Reeb: Department of Informatics, Technical University of Munich, Munich, Germany
- Daniel Auge: Department of Informatics, Technical University of Munich, Munich, Germany; Infineon Technologies AG, Munich, Germany
- Julian Hille: Department of Informatics, Technical University of Munich, Munich, Germany; Infineon Technologies AG, Munich, Germany
- Florian Mirus: BMW Group, Research, New Technologies, Garching, Germany
- Alois Knoll: Department of Informatics, Technical University of Munich, Munich, Germany
- Christian Mayr: Chair of Highly-Parallel VLSI-Systems and Neuro-Microelectronics, Faculty of Electrical and Computer Engineering, Institute of Principles of Electrical and Electronic Engineering, Technische Universität Dresden, Dresden, Germany; Centre for Tactile Internet (CeTI) with Human-in-the-Loop, Cluster of Excellence, Technische Universität Dresden, Dresden, Germany
10
Pehle C, Billaudelle S, Cramer B, Kaiser J, Schreiber K, Stradmann Y, Weis J, Leibfried A, Müller E, Schemmel J. The BrainScaleS-2 Accelerated Neuromorphic System With Hybrid Plasticity. Front Neurosci 2022; 16:795876. PMID: 35281488; PMCID: PMC8907969; DOI: 10.3389/fnins.2022.795876.
Abstract
Since the beginning of information processing by electronic components, the nervous system has served as a metaphor for the organization of computational primitives. Brain-inspired computing today encompasses a class of approaches ranging from novel nano-devices for computation to large-scale neuromorphic architectures such as TrueNorth, SpiNNaker, BrainScaleS, Tianjic, and Loihi. While implementation details differ, spiking neural networks (sometimes referred to as the third generation of neural networks) are the common abstraction used to model computation with such systems. Here we describe the second generation of the BrainScaleS neuromorphic architecture, emphasizing applications enabled by this architecture. It combines a custom analog accelerator core, supporting the accelerated physical emulation of bio-inspired spiking neural network primitives, with a tightly coupled digital processor and a digital event-routing network.
Affiliation(s)
- Johannes Schemmel: Electronic Visions, Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
11
Sharma P, Raj B, Gill SS. Spintronics Based Non-Volatile MRAM for Intelligent Systems. Int J Semant Web Inf 2022. DOI: 10.4018/ijswis.310056.
Abstract
This paper presents spintronic magnetoresistive RAM (MRAM) and shows how it can replace both SRAM and DRAM while providing high speed at a small chip size. Moreover, MRAM is a non-volatile memory that offers a substantial advance in data storage. The different types of MRAM are described together with the techniques used for writing, noting which type is most widely used and why. The basic working principle and the functions performed by MRAM are discussed. Artificial intelligence (AI) is reviewed with its pros and cons for intelligent systems. Neuromorphic computing is also explained, along with its important role in intelligent systems and the reasons it matters. Finally, the paper presents how spintronic-based devices, especially memories, can be used in intelligent systems and neuromorphic computing; nanoscale spintronic MRAM plays a key role in these applications.
12
Dasbach S, Tetzlaff T, Diesmann M, Senk J. Dynamical Characteristics of Recurrent Neuronal Networks Are Robust Against Low Synaptic Weight Resolution. Front Neurosci 2021; 15:757790. PMID: 35002599; PMCID: PMC8740282; DOI: 10.3389/fnins.2021.757790.
Abstract
The representation of the natural-density, heterogeneous connectivity of neuronal network models at relevant spatial scales remains a challenge for Computational Neuroscience and Neuromorphic Computing. In particular, the memory demands imposed by the vast number of synapses in brain-scale network simulations constitute a major obstacle. Limiting the number resolution of synaptic weights appears to be a natural strategy to reduce memory and compute load. In this study, we investigate the effects of a limited synaptic-weight resolution on the dynamics of recurrent spiking neuronal networks resembling local cortical circuits and develop strategies for minimizing deviations from the dynamics of networks with high-resolution synaptic weights. We mimic the effect of a limited synaptic weight resolution by replacing normally distributed synaptic weights with weights drawn from a discrete distribution, and compare the resulting statistics characterizing firing rates, spike-train irregularity, and correlation coefficients with the reference solution. We show that a naive discretization of synaptic weights generally leads to a distortion of the spike-train statistics. If the weights are discretized such that the mean and the variance of the total synaptic input currents are preserved, the firing statistics remain unaffected for the types of networks considered in this study. For networks with sufficiently heterogeneous in-degrees, the firing statistics can be preserved even if all synaptic weights are replaced by the mean of the weight distribution. We conclude that even for simple networks with non-plastic neurons and synapses, a discretization of synaptic weights can lead to substantial deviations in the firing statistics unless the discretization is performed with care and guided by a rigorous validation process. 
For the network model used in this study, the synaptic weights can be replaced by low-resolution weights without affecting its macroscopic dynamical characteristics, thereby saving substantial amounts of memory.
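The moment-preserving discretization described in this abstract can be sketched in a few lines (a minimal illustration under assumed parameters — the three-level grid, the affine rescaling step, and the sample size are not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def discretize_preserving_moments(w, n_levels=3):
    """Map weights to n_levels equidistant values, then rescale the
    discretized values so their mean and variance match the originals.
    For a fixed in-degree, this also preserves the mean and variance
    of the total synaptic input current."""
    levels = np.linspace(w.min(), w.max(), n_levels)
    # naive nearest-level rounding (this alone distorts the moments)
    idx = np.abs(w[:, None] - levels[None, :]).argmin(axis=1)
    wd = levels[idx]
    # affine rescaling restores the first two moments while keeping
    # the weights on a discrete (n_levels-valued) grid
    wd = (wd - wd.mean()) / wd.std() * w.std() + w.mean()
    return wd

w = rng.normal(0.5, 0.1, size=10_000)   # high-resolution reference weights
wd = discretize_preserving_moments(w, n_levels=3)
print(np.unique(wd).size, wd.mean() - w.mean(), wd.std() - w.std())
```

The key point, matching the abstract, is that naive rounding alone shifts the moments of the summed input; the rescaling step is what keeps the firing statistics close to the high-resolution reference.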
Affiliation(s)
- Stefan Dasbach
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany
- Johanna Senk
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
13
Büchel J, Zendrikov D, Solinas S, Indiveri G, Muir DR. Supervised training of spiking neural networks for robust deployment on mixed-signal neuromorphic processors. Sci Rep 2021; 11:23376. PMID: 34862429; PMCID: PMC8642544; DOI: 10.1038/s41598-021-02779-x.
Abstract
Mixed-signal analog/digital circuits emulate spiking neurons and synapses with extremely high energy efficiency, an approach known as "neuromorphic engineering". However, analog circuits are sensitive to process-induced variation among transistors in a chip ("device mismatch"). For neuromorphic implementation of Spiking Neural Networks (SNNs), mismatch causes parameter variation between identically-configured neurons and synapses. Each chip exhibits a different distribution of neural parameters, causing deployed networks to respond differently between chips. Current solutions to mitigate mismatch based on per-chip calibration or on-chip learning entail increased design complexity, area and cost, making deployment of neuromorphic devices expensive and difficult. Here we present a supervised learning approach that produces SNNs with high robustness to mismatch and other common sources of noise. Our method trains SNNs to perform temporal classification tasks by mimicking a pre-trained dynamical system, using a local learning rule from non-linear control theory. We demonstrate our method on two tasks requiring temporal memory, and measure the robustness of our approach to several forms of noise and mismatch. We show that our approach is more robust than common alternatives for training SNNs. Our method provides robust deployment of pre-trained networks on mixed-signal neuromorphic hardware, without requiring per-device training or calibration.
Affiliation(s)
- Julian Büchel
- SynSense, Thurgauerstrasse 40, 8050, Zurich, Switzerland
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstrasse 190, 8057, Zurich, Switzerland
- Dmitrii Zendrikov
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstrasse 190, 8057, Zurich, Switzerland
- Sergio Solinas
- Department of Biomedical Science, University of Sassari, Piazza Università, 21, 07100, Sassari, Sardegna, Italy
- Giacomo Indiveri
- SynSense, Thurgauerstrasse 40, 8050, Zurich, Switzerland
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstrasse 190, 8057, Zurich, Switzerland
- Dylan R Muir
- SynSense, Thurgauerstrasse 40, 8050, Zurich, Switzerland
14
Chevtchenko SF, Ludermir TB. Combining STDP and binary networks for reinforcement learning from images and sparse rewards. Neural Netw 2021; 144:496-506. PMID: 34601362; DOI: 10.1016/j.neunet.2021.09.010.
Abstract
Spiking neural networks (SNNs) aim to replicate the energy efficiency, learning speed and temporal processing of biological brains. However, the accuracy and learning speed of such networks are still behind reinforcement learning (RL) models based on traditional neural models. This work combines a pre-trained binary convolutional neural network with an SNN trained online through reward-modulated STDP in order to leverage the advantages of both models. The spiking network is an extension of its previous version, with improvements in architecture and dynamics to address a more challenging task. We focus on an extensive experimental evaluation of the proposed model against optimized state-of-the-art baselines, namely proximal policy optimization (PPO) and deep Q network (DQN). The models are compared on a grid-world environment with high-dimensional observations, consisting of RGB images with up to 256 × 256 pixels. The experimental results show that the proposed architecture can be a competitive alternative to deep reinforcement learning (DRL) in the evaluated environment and provide a foundation for more complex future applications of spiking networks.
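The reward-modulated STDP component can be illustrated with a minimal eligibility-trace sketch (the sizes, constants, and simplified coincidence rule below are illustrative assumptions; the paper's actual rule operates on spike timings inside an SNN fed by a binary CNN front end):

```python
import numpy as np

rng = np.random.default_rng(1)

n_pre, n_post = 20, 5                     # hypothetical layer sizes
w = rng.uniform(0.0, 1.0, size=(n_pre, n_post))
elig = np.zeros_like(w)                   # eligibility trace per synapse
tau_e, lr = 0.9, 0.05                     # trace decay and learning rate (assumed)

def step(pre_spikes, post_spikes, reward):
    """One step of reward-modulated STDP: coincident pre/post activity
    accumulates in an eligibility trace, and the (possibly delayed)
    reward signal converts the trace into a weight change."""
    global w, elig
    hebb = np.outer(pre_spikes, post_spikes)   # simplified coincidence term
    elig = tau_e * elig + hebb
    w += lr * reward * elig
    np.clip(w, 0.0, 1.0, out=w)                # keep weights bounded

# usage: positive reward strengthens recently coactive synapses
pre = (rng.random(n_pre) < 0.2).astype(float)
post = (rng.random(n_post) < 0.2).astype(float)
step(pre, post, reward=1.0)
```

The eligibility trace is what lets the rule bridge the gap between spikes and sparse, delayed rewards, which is the setting this paper targets.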
Affiliation(s)
- Sérgio F Chevtchenko
- Centro de Informática - CIn, Universidade Federal de Pernambuco, Av. Jornalista Aníbal Fernandes, s/n, Cidade Universitária, 50.740-560, Brazil.
- Teresa B Ludermir
- Centro de Informática - CIn, Universidade Federal de Pernambuco, Av. Jornalista Aníbal Fernandes, s/n, Cidade Universitária, 50.740-560, Brazil.
15

16
Wunderlich TC, Pehle C. Event-based backpropagation can compute exact gradients for spiking neural networks. Sci Rep 2021; 11:12829. PMID: 34145314; PMCID: PMC8213775; DOI: 10.1038/s41598-021-91786-z.
Abstract
Spiking neural networks combine analog computation with event-based communication using discrete spikes. While the impressive advances of deep learning are enabled by training non-spiking artificial neural networks using the backpropagation algorithm, applying this algorithm to spiking networks was previously hindered by the existence of discrete spike events and discontinuities. For the first time, this work derives the backpropagation algorithm for a continuous-time spiking neural network and a general loss function by applying the adjoint method together with the proper partial derivative jumps, allowing for backpropagation through discrete spike events without approximations. This algorithm, EventProp, backpropagates errors at spike times in order to compute the exact gradient in an event-based, temporally and spatially sparse fashion. We use gradients computed via EventProp to train networks on the Yin-Yang and MNIST datasets using either a spike time or voltage based loss function and report competitive performance. Our work supports the rigorous study of gradient-based learning algorithms in spiking neural networks and provides insights toward their implementation in novel brain-inspired hardware.
Affiliation(s)
- Timo C Wunderlich
- Kirchhoff-Institute for Physics, Heidelberg University, 69120, Heidelberg, Germany.
- Berlin Institute of Health, Charité-Universitätsmedizin, 10117, Berlin, Germany.
- Christian Pehle
- Kirchhoff-Institute for Physics, Heidelberg University, 69120, Heidelberg, Germany.
17
Boussard A, Fessel A, Oettmeier C, Briard L, Döbereiner HG, Dussutour A. Adaptive behaviour and learning in slime moulds: the role of oscillations. Philos Trans R Soc Lond B Biol Sci 2021; 376:20190757. PMID: 33487112; PMCID: PMC7935053; DOI: 10.1098/rstb.2019.0757.
Abstract
The slime mould Physarum polycephalum, an aneural organism, uses information from previous experiences to adjust its behaviour, but the mechanisms by which this is accomplished remain unknown. This article examines the possible role of oscillations in learning and memory in slime moulds. Slime moulds share surprising similarities with the network of synaptic connections in animal brains. First, their topology derives from a network of interconnected, vein-like tubes in which signalling molecules are transported. Second, network motility, which generates slime mould behaviour, is driven by distinct oscillations that organize into spatio-temporal wave patterns. Likewise, neural activity in the brain is organized in a variety of oscillations characterized by different frequencies. Interestingly, the oscillating networks of slime moulds are not precursors of nervous systems but, rather, an alternative architecture. Here, we argue that comparable information-processing operations can be realized on different architectures sharing similar oscillatory properties. After describing learning abilities and oscillatory activities of P. polycephalum, we explore the relation between network oscillations and learning, and evaluate the organism's global architecture with respect to information-processing potential. We hypothesize that, as in the brain, modulation of spontaneous oscillations may sustain learning in slime mould. This article is part of the theme issue 'Basal cognition: conceptual tools and the view from the single cell'.
Affiliation(s)
- Aurèle Boussard
- Research Centre on Animal Cognition (CRCA), Centre for Integrative Biology (CBI), Toulouse University, CNRS, UPS, Toulouse 31062, France
- Adrian Fessel
- Institut für Biophysik, Universität Bremen, Otto-Hahn-Allee 1, 28359 Bremen, Germany
- Christina Oettmeier
- Institut für Biophysik, Universität Bremen, Otto-Hahn-Allee 1, 28359 Bremen, Germany
- Léa Briard
- Research Centre on Animal Cognition (CRCA), Centre for Integrative Biology (CBI), Toulouse University, CNRS, UPS, Toulouse 31062, France
- Audrey Dussutour
- Research Centre on Animal Cognition (CRCA), Centre for Integrative Biology (CBI), Toulouse University, CNRS, UPS, Toulouse 31062, France
18
Hulea M, Ghassemlooy Z, Rajbhandari S, Younus OI, Barleanu A. Optical Axons for Electro-Optical Neural Networks. Sensors (Basel) 2020; 20:E6119. PMID: 33121207; PMCID: PMC7663001; DOI: 10.3390/s20216119.
Abstract
Recently, neuromorphic sensors, which convert analogue signals to spiking frequencies, have been reported for neurorobotics. In bio-inspired systems these sensors are connected to the main neural unit to perform post-processing of the sensor data. The performance of spiking neural networks has been improved using optical synapses, which offer parallel communication between distant neural areas but are sensitive to intensity variations of the optical signal. For systems with several neuromorphic sensors connected optically to the main unit, the use of optical synapses is not an advantage. To address this, in this paper we propose and experimentally verify optical axons with synapses activated optically using digital signals. The synaptic weights are encoded by the energy of the stimuli, which are then optically transmitted independently. We show that optical intensity fluctuations and link misalignment delay the activation of the synapses. For the proposed optical axon, we demonstrated line-of-sight transmission over a maximum link length of 190 cm with a delay of 8 μs. Furthermore, we show the axon delay as a function of the illuminance using a fitted model for which the root mean square (RMS) similarity is 0.95.
Affiliation(s)
- Mircea Hulea
- Faculty of Automatic Control and Computer Engineering at Gheorghe Asachi Technical University of Iasi, 700050 Iasi, Romania
- Zabih Ghassemlooy
- Optical Communications Research Group, Faculty of Engineering and Environment at Northumbria University, Newcastle upon Tyne NE7 7XA, UK
- Othman Isam Younus
- Optical Communications Research Group, Faculty of Engineering and Environment at Northumbria University, Newcastle upon Tyne NE7 7XA, UK
- Alexandru Barleanu
- Faculty of Automatic Control and Computer Engineering at Gheorghe Asachi Technical University of Iasi, 700050 Iasi, Romania
19
Structural plasticity on an accelerated analog neuromorphic hardware system. Neural Netw 2020; 133:11-20. PMID: 33091719; DOI: 10.1016/j.neunet.2020.09.024.
Abstract
In computational neuroscience, as well as in machine learning, neuromorphic devices promise an accelerated and scalable alternative to neural network simulations. Their neural connectivity and synaptic capacity depend on specific design choices but are always intrinsically limited. Here, we present a strategy to achieve structural plasticity that optimizes resource allocation under these constraints by constantly rewiring the pre- and postsynaptic partners while keeping the neuronal fan-in constant and the connectome sparse. In particular, we implemented this algorithm on the analog neuromorphic system BrainScaleS-2, where it was executed on a custom embedded digital processor located on chip, accompanying the mixed-signal substrate of spiking neurons and synapse circuits. We evaluated our implementation in a simple supervised learning scenario, showing its ability to optimize the network topology with respect to the nature of its training data, as well as its overall computational efficiency.
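The constant-fan-in rewiring loop can be sketched schematically (sizes, the weakest-synapse pruning criterion, and the uniform regrowth below are illustrative assumptions; on BrainScaleS-2 this runs on the embedded processor against analog synapse circuits):

```python
import numpy as np

rng = np.random.default_rng(2)

n_pre, n_post, fan_in = 50, 10, 8   # assumed sizes

# Sparse connectome: each postsynaptic neuron keeps exactly fan_in
# presynaptic partners, stored as an index table plus weights.
pre_ids = np.stack([rng.choice(n_pre, size=fan_in, replace=False)
                    for _ in range(n_post)])
weights = rng.uniform(0.0, 1.0, size=(n_post, fan_in))

def rewire_step():
    """One structural-plasticity update: prune each neuron's weakest
    synapse and regrow a connection to a randomly chosen, currently
    unconnected presynaptic partner. The fan-in stays constant and
    the connectome stays sparse throughout."""
    for j in range(n_post):
        weakest = weights[j].argmin()
        candidates = np.setdiff1d(np.arange(n_pre), pre_ids[j])
        pre_ids[j, weakest] = rng.choice(candidates)
        weights[j, weakest] = rng.uniform(0.0, 1.0)  # fresh tentative weight

for _ in range(100):
    rewire_step()
```

The point of the fixed fan-in is that the synapse memory footprint never grows: rewiring reallocates a fixed hardware resource rather than adding connections.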
20
Cramer B, Stöckel D, Kreft M, Wibral M, Schemmel J, Meier K, Priesemann V. Control of criticality and computation in spiking neuromorphic networks with plasticity. Nat Commun 2020; 11:2853. PMID: 32503982; PMCID: PMC7275091; DOI: 10.1038/s41467-020-16548-3.
Abstract
The critical state is assumed to be optimal for any computation in recurrent neural networks, because criticality maximizes a number of abstract computational properties. We challenge this assumption by evaluating the performance of a spiking recurrent neural network on a set of tasks of varying complexity at, and away from, critical network dynamics. To that end, we developed a plastic spiking network on a neuromorphic chip. We show that the distance to criticality can be easily adapted by changing the input strength, and then demonstrate a clear relation between criticality, task performance and information-theoretic fingerprint. Whereas the information-theoretic measures all show that network capacity is maximal at criticality, only the complex tasks profit from criticality; simple tasks suffer. We thereby challenge the general assumption that criticality would be beneficial for any task, and provide instead an understanding of how the collective network state should be tuned to task requirements.
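The "distance to criticality" tuned in this study can be quantified with the standard branching-parameter estimator: regress network activity A(t+1) on A(t); a slope near 1 indicates critical dynamics, below 1 subcritical. (The toy driven branching process and all constants below are illustrative, not the paper's analysis pipeline.)

```python
import numpy as np

rng = np.random.default_rng(3)

def branching_ratio(activity):
    """Estimate the branching parameter m as the regression slope of
    A(t+1) on A(t): slope = Cov[A(t), A(t+1)] / Var[A(t)]."""
    a_t, a_next = activity[:-1], activity[1:]
    return np.cov(a_t, a_next)[0, 1] / np.var(a_t)

# toy branching process with true m = 0.9 and Poisson drive h = 2
m_true, h, T = 0.9, 2.0, 50_000
a = np.zeros(T)
for t in range(1, T):
    a[t] = rng.poisson(m_true * a[t - 1] + h)

m_hat = branching_ratio(a)
print(round(m_hat, 2))  # close to 0.9 for this subcritical toy process
```

In the paper's setting, the analogous knob is the input strength: stronger external drive pulls the recurrent network further below criticality.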
Affiliation(s)
- Benjamin Cramer
- Kirchhoff-Institute for Physics, Heidelberg University, Im Neuenheimer Feld 227, 69120, Heidelberg, Germany
- David Stöckel
- Kirchhoff-Institute for Physics, Heidelberg University, Im Neuenheimer Feld 227, 69120, Heidelberg, Germany
- Markus Kreft
- Kirchhoff-Institute for Physics, Heidelberg University, Im Neuenheimer Feld 227, 69120, Heidelberg, Germany
- Michael Wibral
- Campus Institute for Dynamics of Biological Networks, Georg-August University, Hermann-Rein-Straße 3, 37075, Göttingen, Germany
- Johannes Schemmel
- Kirchhoff-Institute for Physics, Heidelberg University, Im Neuenheimer Feld 227, 69120, Heidelberg, Germany
- Karlheinz Meier
- Kirchhoff-Institute for Physics, Heidelberg University, Im Neuenheimer Feld 227, 69120, Heidelberg, Germany
- Viola Priesemann
- Max-Planck-Institute for Dynamics and Self-Organization, Am Faßberg 17, 37077, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Georg-August University, Am Faßberg 17, 37077, Göttingen, Germany
- Department of Physics, Georg-August University, Friedrich-Hund-Platz 1, 37077, Göttingen, Germany
21
Cremonesi F, Schürmann F. Understanding Computational Costs of Cellular-Level Brain Tissue Simulations Through Analytical Performance Models. Neuroinformatics 2020; 18:407-428. PMID: 32056104; PMCID: PMC7338826; DOI: 10.1007/s12021-019-09451-w.
Abstract
Computational modeling and simulation have become essential tools in the quest to better understand the brain's makeup and to decipher the causal interrelations of its components. The breadth of biochemical and biophysical processes and structures in the brain has led to the development of a large variety of model abstractions and specialized tools, often requiring high performance computing resources for their timely execution. What has been missing so far is an in-depth analysis of the complexity of the computational kernels, hindering a systematic approach to identifying bottlenecks of algorithms and hardware. If whole brain models are to be achieved on emerging computer generations, models and simulation engines will have to be carefully co-designed for the intrinsic hardware tradeoffs. For the first time, we present a systematic exploration based on analytic performance modeling. We base our analysis on three in silico models, chosen as representative examples of the most widely employed modeling abstractions: current-based point neurons, conductance-based point neurons and conductance-based detailed neurons. We identify that the synaptic modeling formalism, i.e. current- or conductance-based representation, and not the level of morphological detail, is the most significant factor in determining the properties of memory bandwidth saturation and shared-memory scaling of in silico models. Even though general purpose computing has, until now, largely been able to deliver high performance, we find that for all types of abstractions, network latency and memory bandwidth will become severe bottlenecks as the number of neurons to be simulated grows. By adapting and extending a performance modeling approach, we deliver a first characterization of the performance landscape of brain tissue simulations, allowing us to pinpoint current bottlenecks for state-of-the-art in silico models, and make projections for future hardware and software requirements.
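The bandwidth-versus-compute reasoning behind such analytic performance models can be sketched roofline-style (the machine parameters and operation counts below are illustrative placeholders, not the paper's measured values):

```python
def time_estimate(flops, bytes_moved, peak_flops=1e12, peak_bw=100e9):
    """First-order analytic performance model: runtime is bounded by
    the slower of compute throughput and memory traffic. Which bound
    dominates depends on the kernel's arithmetic intensity."""
    t_compute = flops / peak_flops
    t_memory = bytes_moved / peak_bw
    bound = "memory" if t_memory > t_compute else "compute"
    return max(t_compute, t_memory), bound

# Illustrative synaptic-event kernel: only a few FLOPs per weight
# fetched from memory, so arithmetic intensity is low.
t, bound = time_estimate(flops=2e9, bytes_moved=8e9)
print(f"{t:.3f} s, {bound}-bound")  # 0.080 s, memory-bound
```

This is exactly the kind of reasoning that lets the authors conclude that memory bandwidth, not compute, saturates first for low-intensity synaptic kernels.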
Affiliation(s)
- Francesco Cremonesi
- Blue Brain Project, Brain Mind Institute, École polytechnique fédérale de Lausanne (EPFL), Campus Biotech, 1202, Geneva, Switzerland
- Felix Schürmann
- Blue Brain Project, Brain Mind Institute, École polytechnique fédérale de Lausanne (EPFL), Campus Biotech, 1202, Geneva, Switzerland.
22
Fil J, Chu D. Minimal Spiking Neuron for Solving Multilabel Classification Tasks. Neural Comput 2020; 32:1408-1429. PMID: 32433898; DOI: 10.1162/neco_a_01290.
Abstract
The multispike tempotron (MST) is a powerful single spiking neuron model that can solve complex supervised classification tasks. It is also internally complex, computationally expensive to evaluate, and unsuitable for neuromorphic hardware. Here we aim to understand whether it is possible to simplify the MST model while retaining its ability to learn and process information. To this end, we introduce a family of generalized neuron models (GNMs) that are a special case of the spike response model and much simpler and cheaper to simulate than the MST. We find that over a wide range of parameters, the GNM can learn at least as well as the MST does. We identify the temporal autocorrelation of the membrane potential as the most important ingredient of the GNM that enables it to classify multiple spatiotemporal patterns. We also interpret the GNM as a chemical system, thus conceptually bridging computation by neural networks with molecular information processing. We conclude the letter by proposing alternative training approaches for the GNM, including error trace learning and error backpropagation.
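The "temporal autocorrelation of the membrane potential" identified here as the key ingredient amounts to leaky integration of the input (an illustrative discrete-time spike-response sketch; the decay factor and weight are assumed constants, not the GNM's actual parameters):

```python
import numpy as np

def membrane_trace(spike_train, tau=0.9, w=1.0):
    """Discrete-time leaky integration of an input spike train. The
    decay factor tau carries past inputs forward in time, giving the
    membrane potential the temporal autocorrelation needed to
    distinguish spatiotemporal input patterns."""
    v, trace = 0.0, []
    for s in spike_train:
        v = tau * v + w * s   # memory of past spikes decays geometrically
        trace.append(v)
    return np.array(trace)

v = membrane_trace([1, 0, 0, 1, 0])
# v decays between the two input spikes and sums their residuals:
# [1.0, 0.9, 0.81, 1.729, 1.5561] up to float rounding
```

Without the decay term (tau = 0) the potential would depend only on the current input, and temporally structured patterns would become indistinguishable.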
Affiliation(s)
- Jakub Fil
- School of Computing, University of Kent, Canterbury CT2 7NF, U.K.
- Dominique Chu
- School of Computing, University of Kent, Canterbury CT2 7NF, U.K.
23
Wu C, Zhang Y, Zhou X, Li D, Park JH, An H, Sung S, Lin J, Guo T, Li F, Kim TW. Binary Electronic Synapses for Integrating Digital and Neuromorphic Computation in a Single Physical Platform. ACS Appl Mater Interfaces 2020; 12:17130-17138. PMID: 32174099; DOI: 10.1021/acsami.0c02145.
Abstract
As a promising advanced computation technology, the integration of digital computation with neuromorphic computation into a single physical platform combines the advantage of a precise, deterministic, fast data process with that of a flexible, parallel, fault-tolerant data process. Even though two-terminal memristive devices have separately been demonstrated as leading electronic elements for digital computation and for neuromorphic computation, it is difficult to stably maintain both sudden and gradual state changes in a single device because the operating mechanisms are entirely different. In this work, we developed a digital-analog compatible memristive device, namely a binary electronic synapse, by realizing controllable cation drift in a memristive layer. The devices feature nonvolatile binary memory as well as artificial neuromorphic plasticity with high operation endurance. Owing to the strong nonlinearity in the switching dynamics, binary switching, neuromorphic plasticity, two-dimensional information storage, and trainable memory can all be implemented in a single device.
Affiliation(s)
- Chaoxing Wu
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China
- Yongai Zhang
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China
- Xiongtu Zhou
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China
- Dianlun Li
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China
- Jae Hyeon Park
- Department of Electronic and Computer Engineering, Hanyang University, Seoul 133-791, Korea
- Haoqun An
- Department of Electronic and Computer Engineering, Hanyang University, Seoul 133-791, Korea
- Sihyun Sung
- Department of Electronic and Computer Engineering, Hanyang University, Seoul 133-791, Korea
- Jintang Lin
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China
- Tailiang Guo
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China
- Fushan Li
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China
- Tae Whan Kim
- Department of Electronic and Computer Engineering, Hanyang University, Seoul 133-791, Korea
24
Kungl AF, Schmitt S, Klähn J, Müller P, Baumbach A, Dold D, Kugele A, Müller E, Koke C, Kleider M, Mauch C, Breitwieser O, Leng L, Gürtler N, Güttler M, Husmann D, Husmann K, Hartel A, Karasenko V, Grübl A, Schemmel J, Meier K, Petrovici MA. Accelerated Physical Emulation of Bayesian Inference in Spiking Neural Networks. Front Neurosci 2019; 13:1201. PMID: 31798400; PMCID: PMC6868054; DOI: 10.3389/fnins.2019.01201.
Abstract
The massively parallel nature of biological information processing plays an important role in its superiority over human-engineered computing devices. In particular, it may hold the key to overcoming the von Neumann bottleneck that limits contemporary computer architectures. Physical-model neuromorphic devices seek to replicate not only this inherent parallelism, but also aspects of its microscopic dynamics in analog circuits emulating neurons and synapses. However, these machines require network models that are not only adept at solving particular tasks, but that can also cope with the inherent imperfections of analog substrates. We present a spiking network model that performs Bayesian inference through sampling on the BrainScaleS neuromorphic platform, where we use it for generative and discriminative computations on visual data. By illustrating its functionality on this platform, we implicitly demonstrate its robustness to various substrate-specific distortive effects, as well as its accelerated capability for computation. These results showcase the advantages of brain-inspired physical computation and provide important building blocks for large-scale neuromorphic applications.
Affiliation(s)
- Akos F Kungl
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Sebastian Schmitt
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Johann Klähn
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Paul Müller
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Andreas Baumbach
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Dominik Dold
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Alexander Kugele
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Eric Müller
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Christoph Koke
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Mitja Kleider
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Christian Mauch
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Oliver Breitwieser
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Luziwei Leng
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Nico Gürtler
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Maurice Güttler
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Dan Husmann
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Kai Husmann
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Andreas Hartel
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Vitali Karasenko
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Andreas Grübl
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Johannes Schemmel
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Karlheinz Meier
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Mihai A Petrovici
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Department of Physiology, University of Bern, Bern, Switzerland
25
Bohnstingl T, Scherr F, Pehle C, Meier K, Maass W. Neuromorphic Hardware Learns to Learn. Front Neurosci 2019; 13:483. PMID: 31178681; PMCID: PMC6536858; DOI: 10.3389/fnins.2019.00483.
Abstract
Hyperparameters and learning algorithms for neuromorphic hardware are usually chosen by hand to suit a particular task. In contrast, networks of neurons in the brain were optimized through extensive evolutionary and developmental processes to work well on a range of computing and learning tasks. Occasionally this process has been emulated through genetic algorithms, but these themselves require hand-designed details and tend to provide a limited range of improvements. We instead employ other powerful gradient-free optimization tools, such as cross-entropy methods and evolutionary strategies, in order to port the function of biological optimization processes to neuromorphic hardware. As an example, we show that these optimization algorithms enable neuromorphic agents to learn very efficiently from rewards. In particular, meta-plasticity, i.e., the optimization of the learning rule they use, substantially enhances the reward-based learning capability of the hardware. In addition, we demonstrate for the first time the benefits of Learning-to-Learn on such hardware, in particular the capability to extract abstract knowledge from prior learning experiences that speeds up the learning of new but related tasks. Learning-to-Learn is especially suited for accelerated neuromorphic hardware, since the acceleration makes it feasible to carry out the required very large number of network computations.
Affiliation(s)
- Thomas Bohnstingl
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Franz Scherr
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Christian Pehle
- Kirchhoff-Institute for Physics, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
- Karlheinz Meier
- Kirchhoff-Institute for Physics, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
- Wolfgang Maass
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria