1. Perera S, Xu Y, van Schaik A, Wang R. Low-latency hierarchical routing of reconfigurable neuromorphic systems. Front Neurosci 2025; 19:1493623. PMID: 39967805; PMCID: PMC11832709; DOI: 10.3389/fnins.2025.1493623.
Abstract
Reconfigurable hardware accelerators for spiking neural network (SNN) simulation on field-programmable gate arrays (FPGAs) are a promising and attractive research direction because their massive parallelism yields high execution speed. Large-scale SNN simulations, however, require many FPGAs, and inter-FPGA communication bottlenecks cause congestion, data loss, and latency inefficiencies. In this work, we employ a hierarchical tree-based interconnection architecture for multi-FPGA systems. This architecture is scalable, as new branches can be added to the tree while maintaining a constant local bandwidth. The tree-based approach contrasts with a linear Network on Chip (NoC), where congestion can arise from the large number of connections. We propose a routing architecture that introduces an arbiter mechanism employing stochastic arbitration weighted by the fill levels of First In, First Out (FIFO) buffers. This mechanism effectively reduces the bottleneck caused by FIFO congestion, improving overall latency. We present latency measurements collected for performance analysis, comparing the design using our proposed stochastic routing scheme to a traditional round-robin architecture. The results demonstrate that the stochastic arbiters achieve lower worst-case latency and better overall performance than the round-robin arbiters.
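As a rough illustration of the fill-level-weighted stochastic arbitration this abstract describes, the selection step might look like the following Python sketch (the queue representation and weighting scheme are assumptions for illustration, not the paper's hardware implementation):

```python
import random

def stochastic_arbiter(fifo_levels, rng=random):
    """Pick which FIFO to serve next, with selection probability
    proportional to each FIFO's current fill level, so fuller
    queues are drained more often and congestion is relieved."""
    total = sum(fifo_levels)
    if total == 0:
        return None  # no pending data to route
    indices = range(len(fifo_levels))
    return rng.choices(indices, weights=fifo_levels, k=1)[0]
```

By contrast, a round-robin arbiter cycles through the queues in a fixed order regardless of occupancy, which is the baseline the paper compares against.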
Affiliation(s)
- Samalika Perera
- International Centre for Neuromorphic Systems, The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Kingswood, NSW, Australia
2. Urbizagastegui P, van Schaik A, Wang R. Memory-efficient neurons and synapses for spike-timing-dependent-plasticity in large-scale spiking networks. Front Neurosci 2024; 18:1450640. PMID: 39308944; PMCID: PMC11412959; DOI: 10.3389/fnins.2024.1450640.
Abstract
This paper addresses the challenges posed by frequent memory access during simulations of large-scale spiking neural networks with synaptic plasticity. We focus on the memory accesses required by a common synaptic plasticity rule, since these can be a significant factor limiting simulation efficiency. We propose neuron models represented by only three state variables, engineered to enforce the appropriate neuronal dynamics. Additionally, memory retrieval fetches only postsynaptic variables, promoting contiguous memory storage and leveraging burst-mode operations to reduce the overhead of each access. Different plasticity rules could be implemented despite the adopted simplifications, each leading to a distinct synaptic weight distribution (i.e., unimodal or bimodal). Moreover, our method requires fewer memory accesses on average than a naive approach. We argue that this strategy can speed up memory transactions and reduce latencies while maintaining a small memory footprint.
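The postsynaptic-only, contiguous access pattern the authors describe can be sketched as a structure-of-arrays layout (the variable names and the particular three state variables below are illustrative assumptions, not the paper's exact model):

```python
N = 1024  # hypothetical network size

# One contiguous array per state variable (structure-of-arrays),
# so a plasticity update streams through memory sequentially.
v = [-65.0] * N      # membrane potential (illustrative)
trace = [0.0] * N    # postsynaptic trace used by the plasticity rule
i_syn = [0.0] * N    # accumulated synaptic input

def deliver_spikes(post_ids, weights):
    """Deliver presynaptic spikes by fetching and updating only
    postsynaptic variables; accesses stay contiguous (and thus
    burst-friendly) when post_ids arrive sorted."""
    for j, w in zip(post_ids, weights):
        i_syn[j] += w
```

The point of the layout is that a synaptic event never has to chase presynaptic state scattered elsewhere in memory.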
Affiliation(s)
- Pablo Urbizagastegui
- International Centre for Neuromorphic Systems, The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Kingswood, NSW, Australia
3. Hou KM, Diao X, Shi H, Ding H, Zhou H, de Vaulx C. Trends and Challenges in AIoT/IIoT/IoT Implementation. Sensors (Basel) 2023; 23:5074. PMID: 37299800; PMCID: PMC10255551; DOI: 10.3390/s23115074.
Abstract
In the coming years, metaverse, digital-twin, and autonomous-vehicle applications will lead many complex applications hitherto inaccessible, such as health and life sciences, smart home, smart agriculture, smart city, smart car and logistics, Industry 4.0, entertainment (video games), and social media, thanks to recent tremendous developments in process modeling, supercomputing, cloud data analytics (deep learning, etc.), communication networks, and AIoT/IIoT/IoT technologies. AIoT/IIoT/IoT is a crucial research field because it provides the essential data that fuel metaverse, digital-twin, real-time Industry 4.0, and autonomous-vehicle applications. However, the science of AIoT is inherently multidisciplinary, which makes its evolution and impact difficult for readers to follow. Our main contribution in this article is to analyze and highlight the trends and challenges of the AIoT technology ecosystem, including core hardware (MCUs, MEMS/NEMS sensors, and wireless access media), core software (operating systems and communication protocol stacks), and middleware (deep learning on a microcontroller: TinyML). Two low-power AI technologies are emerging, TinyML and neuromorphic computing; we present one AIoT/IIoT/IoT device implementation using TinyML, dedicated to strawberry disease detection, as a case study. Despite the very rapid progress of AIoT/IIoT/IoT technologies, several challenges remain to be overcome, such as safety, security, latency, interoperability, and reliability of sensor data, which are essential characteristics for meeting the requirements of metaverse, digital-twin, autonomous-vehicle, and Industry 4.0 applications.
Affiliation(s)
- Kun Mean Hou
- Université Clermont-Auvergne, CNRS, Mines de Saint-Étienne, Clermont-Auvergne-INP, LIMOS, F-63000 Clermont-Ferrand, France
- Hongling Shi
- College of Electronics and Information Engineering, South Central Minzu University (SCMZU), Wuhan 430070, China
- Hao Ding
- College of Electronics and Information Engineering, South Central Minzu University (SCMZU), Wuhan 430070, China
- Christophe de Vaulx
- Université Clermont-Auvergne, CNRS, Mines de Saint-Étienne, Clermont-Auvergne-INP, LIMOS, F-63000 Clermont-Ferrand, France
4. Kauth K, Stadtmann T, Sobhani V, Gemmeke T. neuroAIx-Framework: design of future neuroscience simulation systems exhibiting execution of the cortical microcircuit model 20× faster than biological real-time. Front Comput Neurosci 2023; 17:1144143. PMID: 37152299; PMCID: PMC10156974; DOI: 10.3389/fncom.2023.1144143.
Abstract
Introduction: Research in the field of computational neuroscience relies on highly capable simulation platforms. With real-time capabilities surpassed for established models like the cortical microcircuit, it is time to conceive next-generation systems: neuroscience simulators providing significant acceleration, even for larger networks with natural density, biologically plausible multi-compartment models, and the modeling of long-term and structural plasticity. Methods: Stressing the need for agility to adapt to new concepts or findings in the domain of neuroscience, we have developed the neuroAIx-Framework, consisting of an empirical modeling tool, a virtual prototype, and a cluster of FPGA boards. This framework is designed to support and accelerate the continuous development of such platforms driven by new insights in neuroscience. Results: Based on design space explorations using this framework, we devised and realized an FPGA cluster consisting of 35 NetFPGA SUME boards. Discussion: This system functions as an evaluation platform for our framework. At the same time, it resulted in a fully deterministic neuroscience simulation system surpassing the state of the art in both performance and energy efficiency. It is capable of simulating the microcircuit with 20× acceleration compared to biological real-time and achieves an energy efficiency of 48 nJ per synaptic event.
5. Yang S, Wang J, Deng B, Azghadi MR, Linares-Barranco B. Neuromorphic Context-Dependent Learning Framework With Fault-Tolerant Spike Routing. IEEE Trans Neural Netw Learn Syst 2022; 33:7126-7140. PMID: 34115596; DOI: 10.1109/tnnls.2021.3084250.
Abstract
Neuromorphic computing is a promising technology that realizes computation based on event-based spiking neural networks (SNNs). However, fault-tolerant on-chip learning remains a challenge in neuromorphic systems. This study presents the first scalable neuromorphic fault-tolerant context-dependent learning (FCL) hardware framework. We show how this system can learn associations between stimulation and response in two context-dependent learning tasks from experimental neuroscience, despite possible faults in the hardware nodes. Furthermore, we demonstrate how our novel fault-tolerant neuromorphic spike routing scheme can successfully route around multiple faulty nodes and can enhance the maximum throughput of the neuromorphic network by 0.9%-16.1% in comparison with previous studies. By utilizing the real-time computational capabilities and multiple-fault-tolerant property of the proposed system, the neuronal mechanisms underlying the spiking activities of neuromorphic networks can be readily explored. In addition, the proposed system can be applied in real-time learning and decision-making applications, brain-machine integration, and the investigation of brain cognition during learning.
6. Müller E, Schmitt S, Mauch C, Billaudelle S, Grübl A, Güttler M, Husmann D, Ilmberger J, Jeltsch S, Kaiser J, Klähn J, Kleider M, Koke C, Montes J, Müller P, Partzsch J, Passenberg F, Schmidt H, Vogginger B, Weidner J, Mayr C, Schemmel J. The operating system of the neuromorphic BrainScaleS-1 system. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.05.081.
7. Trensch G, Morrison A. A System-on-Chip Based Hybrid Neuromorphic Compute Node Architecture for Reproducible Hyper-Real-Time Simulations of Spiking Neural Networks. Front Neuroinform 2022; 16:884033. PMID: 35846779; PMCID: PMC9277345; DOI: 10.3389/fninf.2022.884033.
Abstract
Despite the great strides neuroscience has made in recent decades, the underlying principles of brain function remain largely unknown. Advancing the field strongly depends on the ability to study large-scale neural networks and perform complex simulations. In this context, simulations in hyper-real-time are of high interest, as they would enable both comprehensive parameter scans and the study of slow processes, such as learning and long-term memory. Not even the fastest supercomputer available today is able to meet the challenge of accurate and reproducible simulation with hyper-real acceleration. The development of novel neuromorphic computer architectures holds out promise, but the high costs and long development cycles for application-specific hardware solutions make it difficult to keep pace with the rapid developments in neuroscience. However, advances in System-on-Chip (SoC) device technology and tools are now providing interesting new design possibilities for application-specific implementations. Here, we present a novel hybrid software-hardware architecture approach for a neuromorphic compute node intended to work in a multi-node cluster configuration. The node design builds on the Xilinx Zynq-7000 SoC device architecture that combines a powerful field-programmable gate array (FPGA) and a dual-core ARM Cortex-A9 processor extension on a single chip. Our proposed architecture makes use of both and takes advantage of their tight coupling. We show that available SoC device technology can be used to build smaller neuromorphic computing clusters that enable hyper-real-time simulation of networks consisting of tens of thousands of neurons, and are thus capable of meeting the high demands for modeling and simulation in neuroscience.
Affiliation(s)
- Guido Trensch
- Simulation and Data Laboratory Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, Jülich Research Centre, Jülich, Germany
- Department of Computer Science 3—Software Engineering, RWTH Aachen University, Aachen, Germany
- Abigail Morrison
- Simulation and Data Laboratory Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, Jülich Research Centre, Jülich, Germany
- Department of Computer Science 3—Software Engineering, RWTH Aachen University, Aachen, Germany
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA-Institute Brain Structure-Function Relationship (JBI-1/INM-10), Research Centre Jülich, Jülich, Germany
8. Zhang G, Zhang X, Rong H, Paul P, Zhu M, Neri F, Ong YS. A Layered Spiking Neural System for Classification Problems. Int J Neural Syst 2022; 32:2250023. PMID: 35416762; DOI: 10.1142/s012906572250023x.
Abstract
Biological brains have a natural capacity for resolving certain classification tasks. Studies on biologically plausible spiking neurons, architectures, and mechanisms of artificial neural systems that closely match biological observations while giving high classification performance are gaining momentum. Spiking neural P systems (SN P systems) are a class of membrane computing models and third-generation neural networks that are based on the behavior of biological neural cells and have been used in various engineering applications. Furthermore, SN P systems are characterized by a highly flexible structure that enables the design of a machine learning algorithm by mimicking the structure and behavior of biological cells without the over-simplification present in neural networks. Based on this aspect, this paper proposes a novel type of SN P system, namely, the layered SN P system (LSN P system), to solve classification problems by supervised learning. The proposed LSN P system consists of a multi-layer network containing multiple weighted fuzzy SN P systems with adaptive weight adjustment rules. The proposed system employs specific ascending dimension techniques and a selection method of output neurons for classification problems. Experimental results obtained using benchmark datasets from the UCI machine learning repository and the MNIST dataset demonstrate the feasibility and effectiveness of the proposed LSN P system. More importantly, the LSN P system is the first SN P system demonstrated to perform well enough to address real-world classification problems.
Affiliation(s)
- Gexiang Zhang
- School of Control Engineering, Chengdu University of Information Technology, Chengdu 610225, P. R. China
- Xihai Zhang
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, P. R. China
- Haina Rong
- School of Electrical Engineering, Southwest Jiaotong University, Chengdu 610031, P. R. China
- Prithwineel Paul
- School of Control Engineering, Chengdu University of Information Technology, Chengdu 610225, P. R. China
- Ming Zhu
- School of Control Engineering, Chengdu University of Information Technology, Chengdu 610225, P. R. China
- Ferrante Neri
- NICE Group, Department of Computer Science, University of Surrey, UK
- Yew-Soon Ong
- School of Computer Science and Engineering, Nanyang Technological University, Singapore
9. Vanattou-Saïfoudine N, Han C, Krause R, Vasilaki E, von der Behrens W, Indiveri G. A robust model of Stimulus-Specific Adaptation validated on neuromorphic hardware. Sci Rep 2021; 11:17904. PMID: 34504155; PMCID: PMC8429557; DOI: 10.1038/s41598-021-97217-3.
Abstract
Stimulus-Specific Adaptation (SSA) to repetitive stimulation is a phenomenon that has been observed across many different species and in several brain sensory areas. It has been proposed as a computational mechanism responsible for separating behaviorally relevant information from the continuous stream of sensory information. Although SSA can be induced and measured reliably in a wide variety of conditions, the network details and intracellular mechanisms giving rise to SSA remain unclear. Recent computational studies proposed that SSA could be associated with a fast and synchronous neuronal firing phenomenon called Population Spikes (PS). Here, we test this hypothesis using a mean-field rate model and corroborate it using neuromorphic hardware. As the neuromorphic circuits used in this study operate in real-time with biologically realistic time constants, they can reproduce the same dynamics observed in biological systems, together with the exploration of different connectivity schemes, with complete control of the system parameter settings. In addition, the hardware permits iterating experiments over many trials and for extended periods without losing track of the networks and individual neural processes being studied. Following this "neuromorphic engineering" approach, we therefore study the PS hypothesis in biophysically inspired recurrent networks of spiking neurons and evaluate the role of different linear and non-linear dynamic computational primitives such as spike-frequency adaptation or short-term depression (STD). We compare both the theoretical mean-field model of SSA and PS to previously obtained experimental results in the area of novelty detection and observe its behavior on its physical neuromorphic equivalent. We show how the proposed approach can be extended to other computational neuroscience modelling efforts for understanding high-level phenomena in mechanistic models.
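A toy firing-rate version of the spike-frequency-adaptation primitive discussed above, in which a sustained stimulus depresses the response over time (all parameter values and the specific dynamics are illustrative assumptions, not the paper's mean-field model):

```python
def simulate_adaptation(inputs, dt=1e-3, tau_a=0.2, g_a=0.5):
    """Rate neuron with a slow adaptation variable `a` that tracks
    the output rate and feeds back subtractively, so the response
    to a sustained or repeated stimulus gradually shrinks."""
    rates, a = [], 0.0
    for drive in inputs:
        r = max(drive - g_a * a, 0.0)  # rectified, adapted rate
        a += dt / tau_a * (r - a)      # slow adaptation dynamics
        rates.append(r)
    return rates
```

Driving such a unit with a constant stimulus yields a decaying rate, the basic ingredient that, combined with short-term depression in a recurrent network, can give rise to stimulus-specific adaptation.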
Affiliation(s)
- Natacha Vanattou-Saïfoudine
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Department of Computer Science, University of Sheffield, Sheffield, UK
- Chao Han
- Department of Computer Science, University of Sheffield, Sheffield, UK
- Renate Krause
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Eleni Vasilaki
- Department of Computer Science, University of Sheffield, Sheffield, UK
- Giacomo Indiveri
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
10. A Dynamic Reconfigurable Architecture for Hybrid Spiking and Convolutional FPGA-Based Neural Network Designs. J Low Power Electron Appl 2021. DOI: 10.3390/jlpea11030032.
Abstract
This work presents a dynamically reconfigurable architecture for Neural Network (NN) accelerators implemented in Field-Programmable Gate Arrays (FPGAs) that can be applied in a variety of application scenarios. Although the concept of Dynamic Partial Reconfiguration (DPR) is increasingly used in NN accelerators, throughput is usually lower than in purely static designs. This work presents a dynamically reconfigurable, energy-efficient accelerator architecture that does not sacrifice throughput performance. The proposed accelerator comprises reconfigurable processing engines and dynamically utilizes the device resources according to model parameters. Using the proposed architecture with DPR, different NN types and architectures can be realized on the same FPGA. Moreover, the proposed architecture maximizes throughput performance with design optimizations while considering the available resources on the hardware platform. We evaluate our design with different NN architectures for two different tasks. The first task is image classification on two distinct datasets, which requires switching between Convolutional Neural Network (CNN) architectures having different layer structures. The second task requires switching between NN architectures, namely a CNN architecture with high accuracy and throughput and a hybrid architecture that combines convolutional layers and an optimized Spiking Neural Network (SNN) architecture. We demonstrate throughput results obtained by quickly reprogramming only a tiny part of the FPGA hardware using DPR. Experimental results show that the implemented designs achieve a 7× faster frame rate than current FPGA accelerators while being extremely flexible and using comparable resources.
11. Krishna A, Mittal D, Virupaksha SG, Nair AR, Narayanan R, Thakur CS. Biomimetic FPGA-based spatial navigation model with grid cells and place cells. Neural Netw 2021; 139:45-63. PMID: 33677378; DOI: 10.1016/j.neunet.2021.01.028.
Abstract
The mammalian spatial navigation system is characterized by an initial divergence of internal representations, with disparate classes of neurons responding to distinct features including location, speed, borders, and head direction; an ensuing convergence finally enables navigation and path integration. Here, we report the algorithmic and hardware implementation of biomimetic neural structures encompassing a feed-forward, trimodular, multi-layer architecture representing grid-cell, place-cell, and decoding modules for navigation. The grid-cell module comprised neurons that fired in a grid-like pattern and was built of distinct layers that constituted the dorsoventral span of the medial entorhinal cortex. Each layer was built as an independent continuous attractor network with a distinct grid-field spatial scale. The place-cell module comprised neurons that fired at one or a few spatial locations, organized into different clusters based on convergent modular inputs from different grid-cell layers, replicating the gradient in place-field size along the hippocampal dorsoventral axis. The decoding module, a two-layer neural network that constitutes the convergence of the divergent representations in the preceding modules, received inputs from the place-cell module and provided specific coordinates of the navigating object. After vital design optimizations involving all modules, we implemented the trimodular structure on a Zynq UltraScale+ field-programmable gate array silicon chip and demonstrated its capacity to precisely estimate the navigational trajectory with minimal overall resource consumption, involving a mere 2.92% look-up table utilization. Our implementation of a biomimetic, digital spatial navigation system is stable, reliable, reconfigurable, and real-time, with an execution time of about 32 s for 100k input samples (in contrast to 40 minutes on an Intel Core i7-7700 CPU with 8 cores clocking at 3.60 GHz), and can thus be deployed for autonomous robotic navigation without requiring additional sensors.
Affiliation(s)
- Adithya Krishna
- NeuRonICS Lab, Department of Electronic Systems Engineering, Indian Institute of Science, Bangalore 560012, India
- Divyansh Mittal
- Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore 560012, India
- Siri Garudanagiri Virupaksha
- NeuRonICS Lab, Department of Electronic Systems Engineering, Indian Institute of Science, Bangalore 560012, India
- Abhishek Ramdas Nair
- NeuRonICS Lab, Department of Electronic Systems Engineering, Indian Institute of Science, Bangalore 560012, India
- Rishikesh Narayanan
- Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore 560012, India
- Chetan Singh Thakur
- NeuRonICS Lab, Department of Electronic Systems Engineering, Indian Institute of Science, Bangalore 560012, India
12. Golosio B, Tiddia G, De Luca C, Pastorelli E, Simula F, Paolucci PS. Fast Simulations of Highly-Connected Spiking Cortical Models Using GPUs. Front Comput Neurosci 2021; 15:627620. PMID: 33679358; PMCID: PMC7925400; DOI: 10.3389/fncom.2021.627620.
Abstract
Over the past decade there has been growing interest in the development of parallel hardware systems for simulating large-scale networks of spiking neurons. Compared to other highly parallel systems, GPU-accelerated solutions have the advantage of relatively low cost and great versatility, thanks also to the possibility of using the CUDA-C/C++ programming languages. NeuronGPU is a GPU library for large-scale simulations of spiking neural network models, written in the C++ and CUDA-C++ programming languages and based on a novel spike-delivery algorithm. This library includes simple LIF (leaky-integrate-and-fire) neuron models as well as several multisynapse AdEx (adaptive-exponential-integrate-and-fire) neuron models with current- or conductance-based synapses, different types of spike generators, and tools for recording spikes, state variables, and parameters; it also supports user-definable models. The numerical solution of the differential equations of the AdEx dynamics is performed through a parallel implementation, written in CUDA-C++, of the fifth-order Runge-Kutta method with adaptive step-size control. In this work we evaluate the performance of this library on the simulation of a cortical microcircuit model, based on LIF neurons and current-based synapses, and on balanced networks of excitatory and inhibitory neurons, using AdEx or Izhikevich neuron models and conductance- or current-based synapses. On these models, we show that the proposed library achieves state-of-the-art performance in terms of simulation time per second of biological activity. In particular, using a single NVIDIA GeForce RTX 2080 Ti GPU board, the full-scale cortical-microcircuit model, which includes about 77,000 neurons and 3 × 10^8 connections, can be simulated at a speed very close to real time, while the simulation time of a balanced network of 1,000,000 AdEx neurons with 1,000 connections per neuron was about 70 s per second of biological activity.
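For readers unfamiliar with the neuron model underlying the cortical-microcircuit benchmark, a generic forward-Euler step of a current-based LIF neuron looks roughly like this (a textbook formulation with illustrative parameters; NeuronGPU's actual integration scheme and defaults may differ):

```python
def lif_step(v, i_syn, dt=1e-4, tau_m=10e-3, v_rest=-65e-3,
             v_th=-50e-3, v_reset=-65e-3, r_m=1e8):
    """One Euler step of a leaky integrate-and-fire neuron with a
    current-based synaptic input. Returns (new_voltage, spiked)."""
    v = v + dt / tau_m * (-(v - v_rest) + r_m * i_syn)
    if v >= v_th:
        return v_reset, True   # emit spike and reset
    return v, False
```

The per-neuron update is cheap and embarrassingly parallel; the hard part at scale, which the library's spike-delivery algorithm addresses, is routing spikes across the hundreds of millions of connections.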
Affiliation(s)
- Bruno Golosio
- Department of Physics, University of Cagliari, Cagliari, Italy; Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Cagliari, Cagliari, Italy
- Gianmarco Tiddia
- Department of Physics, University of Cagliari, Cagliari, Italy; Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Cagliari, Cagliari, Italy
- Chiara De Luca
- Ph.D. Program in Behavioral Neuroscience, "Sapienza" University of Rome, Rome, Italy; Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy
- Elena Pastorelli
- Ph.D. Program in Behavioral Neuroscience, "Sapienza" University of Rome, Rome, Italy; Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy
- Francesco Simula
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy
13. Parvizi-Fard A, Salimi-Nezhad N, Amiri M, Falotico E, Laschi C. Sharpness recognition based on synergy between bio-inspired nociceptors and tactile mechanoreceptors. Sci Rep 2021; 11:2109. PMID: 33483529; PMCID: PMC7822817; DOI: 10.1038/s41598-021-81199-3.
Abstract
Touch and pain sensations are complementary aspects of daily life that convey crucial information about the environment while also providing protection to our body. Technological advancements in prosthesis design and control mechanisms help amputees regain lost function, but often provide no meaningful tactile feedback or perception. In the present study, we propose a bio-inspired tactile system with a population of 23 digital afferents: 12 RA-I, 6 SA-I, and 5 nociceptors. Indeed, the functional concept of the nociceptor is implemented on an FPGA for the first time. One of the main features of biological tactile afferents is that their distal axon branches in the skin, creating complex receptive fields. Given these physiological observations, the bio-inspired afferents are randomly connected to several neighboring mechanoreceptors with different weights to form their own receptive fields. To test the performance of the proposed neuromorphic chip in sharpness detection, a robotic system with three degrees of freedom, equipped with the tactile sensor, indents the 3D-printed objects. Spike responses of the biomimetic afferents are then collected for analysis by rate and temporal coding algorithms. In this way, the impact of the innervation mechanism and the collaboration of afferents and nociceptors on sharpness recognition are investigated. Our findings suggest that the synergy between sensory afferents and nociceptors conveys more information about tactile stimuli, which in turn makes the proposed neuromorphic system robust against damage to the taxels or afferents. Moreover, we show that the spiking activity of the biomimetic nociceptors is amplified as sharpness increases, which can serve as a feedback mechanism for prosthesis protection. This neuromorphic approach advances the development of prostheses that include sensory feedback and distinguish innocuous (non-painful) from noxious (painful) stimuli.
Affiliation(s)
- Adel Parvizi-Fard
- Medical Biology Research Center, Institute of Health Technology, Kermanshah University of Medical Sciences, Kermanshah, Iran
- Nima Salimi-Nezhad
- Medical Biology Research Center, Institute of Health Technology, Kermanshah University of Medical Sciences, Kermanshah, Iran
- Mahmood Amiri
- Medical Technology Research Center, Institute of Health Technology, Kermanshah University of Medical Sciences, Parastar Ave., Kermanshah, Iran
- Egidio Falotico
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pontedera, Italy
- Department of Excellence in Robotics and AI, Scuola Superiore Sant’Anna, Pisa, Italy
- Cecilia Laschi
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pontedera, Italy
- Department of Excellence in Robotics and AI, Scuola Superiore Sant’Anna, Pisa, Italy
- Department of Mechanical Engineering, National University of Singapore, Singapore, Singapore
14
Dutta S, Schafer C, Gomez J, Ni K, Joshi S, Datta S. Supervised Learning in All FeFET-Based Spiking Neural Network: Opportunities and Challenges. Front Neurosci 2020; 14:634. [PMID: 32670012 PMCID: PMC7327100 DOI: 10.3389/fnins.2020.00634] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2020] [Accepted: 05/22/2020] [Indexed: 11/13/2022] Open
Abstract
The two possible pathways toward artificial intelligence (AI), (i) neuroscience-oriented neuromorphic computing [such as spiking neural networks (SNNs)] and (ii) computer-science-driven machine learning (such as deep learning), differ widely in their fundamental formalism and coding schemes (Pei et al., 2019). Deviating from the traditional deep learning approach of relying on neuronal models with static nonlinearities, SNNs attempt to capture brain-like features such as computation using spikes, which holds the promise of improving the energy efficiency of computing platforms. To achieve much higher areal and energy efficiency than today's hardware implementations of SNNs, we need to go beyond the traditional route of CMOS-based digital or mixed-signal neuronal circuits and the segregation of computation and memory under the von Neumann architecture. Recently, ferroelectric field-effect transistors (FeFETs) have been explored as a promising alternative for building neuromorphic hardware, exploiting their non-volatile nature and rich polarization switching dynamics. In this work, we propose an all-FeFET-based SNN hardware that allows low-power spike-based information processing and co-localized memory and computing (a.k.a. in-memory computing). We experimentally demonstrate the essential neuronal and synaptic dynamics in a 28 nm high-K metal gate FeFET technology. Furthermore, drawing inspiration from the traditional machine learning approach of optimizing a cost function to adjust the synaptic weights, we implement a surrogate gradient (SG) learning algorithm on our SNN platform that allows us to perform supervised learning on the MNIST dataset. As such, we provide a pathway toward building energy-efficient neuromorphic hardware that can support traditional machine learning algorithms. Finally, we undertake synergistic device-algorithm co-design by accounting for the impacts of device-level variation (stochasticity) and the limited bit precision of on-chip synaptic weights (available analog states) on classification accuracy.
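The surrogate gradient (SG) trick this abstract refers to replaces the ill-defined derivative of the spike threshold with a smooth stand-in during backpropagation. A minimal NumPy sketch of the idea; the fast-sigmoid surrogate, the layer sizes, and all constants here are illustrative choices, not the paper's exact formulation:

```python
import numpy as np

def surrogate_grad(u, v_th=1.0, beta=5.0):
    """Fast-sigmoid surrogate derivative, used in place of the
    zero-almost-everywhere derivative of the Heaviside spike function."""
    return 1.0 / (beta * np.abs(u - v_th) + 1.0) ** 2

def lif_step(v, x, w, tau=0.9, v_th=1.0):
    """One Euler step of a leaky integrate-and-fire layer."""
    u = tau * v + x @ w            # membrane potential after leak + input
    s = (u >= v_th).astype(float)  # spike where u crosses threshold
    return u * (1.0 - s), s, u     # hard reset; keep u for the gradient pass

rng = np.random.default_rng(0)
w = rng.normal(scale=0.5, size=(4, 2))   # 4 inputs -> 2 spiking units
v = np.zeros(2)
x = rng.random(4)

v, s, u = lif_step(v, x, w)
# In training, the loss gradient flows through surrogate_grad(u)
# rather than the true (discontinuous) spike derivative:
g = np.outer(x, surrogate_grad(u))
```

On hardware such as the FeFET platform described above, the same surrogate would be evaluated off-chip or approximated, with only the resulting weight updates written to the devices.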
Affiliation(s)
- Sourav Dutta
- Department of Electrical Engineering, College of Engineering, University of Notre Dame, Notre Dame, IN, United States
- Clemens Schafer
- Department of Computer Science and Engineering, College of Engineering, University of Notre Dame, Notre Dame, IN, United States
- Jorge Gomez
- Department of Electrical Engineering, College of Engineering, University of Notre Dame, Notre Dame, IN, United States
- Kai Ni
- Department of Microsystems Engineering, Rochester Institute of Technology, Rochester, NY, United States
- Siddharth Joshi
- Department of Computer Science and Engineering, College of Engineering, University of Notre Dame, Notre Dame, IN, United States
- Suman Datta
- Department of Electrical Engineering, College of Engineering, University of Notre Dame, Notre Dame, IN, United States
15
Salimi-Nezhad N, Ilbeigi E, Amiri M, Falotico E, Laschi C. A Digital Hardware System for Spiking Network of Tactile Afferents. Front Neurosci 2020; 13:1330. [PMID: 32009869 PMCID: PMC6971225 DOI: 10.3389/fnins.2019.01330] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2019] [Accepted: 11/26/2019] [Indexed: 11/13/2022] Open
Abstract
In the present research, we explore the possibility of utilizing a hardware-based neuromorphic approach to develop a tactile sensory system at the level of first-order afferents, which are slowly adapting type 1 (SA-I) and fast adapting type 1 (FA-I) afferents. Four spiking models are used to mimic the neural signals of both SA-I and FA-I primary afferents. Next, a digital circuit is designed for each spiking model of both afferents to be implemented on a field-programmable gate array (FPGA). The four digital circuits are then compared from a resource-utilization point of view to find the minimum-cost circuit for creating a population of digital afferents. In this way, the firing responses of both SA-I and FA-I afferents are physically measured in hardware. Finally, a population of 243 afferents consisting of 90 SA-I and 153 FA-I digital neuromorphic circuits is implemented on the FPGA. The FPGA also receives nine inputs from the force sensors through an interfacing board, so the data of multiple inputs are processed simultaneously by the spiking network of tactile afferents. Benefiting from the parallel processing capabilities of the FPGA, the proposed architecture offers a low-cost neuromorphic structure for tactile information processing. Applying machine learning algorithms to the artificial spiking patterns collected from the FPGA, we successfully classified three different objects based on the firing-rate paradigm. Consequently, the proposed neuromorphic system provides an opportunity for the development of new tactile processing components for robotic and prosthetic applications.
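The firing-rate classification step described above can be illustrated with a toy nearest-centroid classifier on synthetic spike rasters. Only the 243-afferent population size comes from the abstract; the Bernoulli spike rates, the two-object setup, and the centroid rule are invented here for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def firing_rate(spikes, dt=1e-3):
    """Mean firing rate (Hz) of each afferent from a binary spike raster
    of shape (afferents, time_bins), with bin width dt seconds."""
    return spikes.mean(axis=1) / dt

# Hypothetical rasters for two objects: Bernoulli spike trains whose
# per-bin spike probability depends on the object being indented.
obj_a = rng.random((243, 1000)) < 0.02   # low-rate response
obj_b = rng.random((243, 1000)) < 0.08   # high-rate response

centroids = {"A": firing_rate(obj_a), "B": firing_rate(obj_b)}

def classify(spikes):
    """Nearest-centroid decision in firing-rate space."""
    r = firing_rate(spikes)
    return min(centroids, key=lambda k: np.linalg.norm(r - centroids[k]))
```

In the actual system, the rasters would come from the FPGA afferent circuits rather than a random generator, and richer classifiers (or temporal-coding features) could replace the centroid rule.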
Affiliation(s)
- Nima Salimi-Nezhad
- Medical Biology Research Center, Kermanshah University of Medical Sciences, Kermanshah, Iran
- Erfan Ilbeigi
- Medical Biology Research Center, Kermanshah University of Medical Sciences, Kermanshah, Iran
- Mahmood Amiri
- Medical Technology Research Center, Kermanshah University of Medical Sciences, Kermanshah, Iran
- Egidio Falotico
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pontedera, Italy
- Cecilia Laschi
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pontedera, Italy
16
Yan Y, Kappel D, Neumarker F, Partzsch J, Vogginger B, Hoppner S, Furber S, Maass W, Legenstein R, Mayr C. Efficient Reward-Based Structural Plasticity on a SpiNNaker 2 Prototype. IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS 2019; 13:579-591. [PMID: 30932847 DOI: 10.1109/tbcas.2019.2906401] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Advances in neuroscience uncover the mechanisms employed by the brain to efficiently solve complex learning tasks with very limited resources. However, this efficiency is often lost when one tries to port these findings to a silicon substrate, since brain-inspired algorithms often make extensive use of complex functions, such as random number generators, that are expensive to compute on standard general-purpose hardware. The prototype chip of the second-generation SpiNNaker system is designed to overcome this problem. Low-power advanced RISC machine (ARM) processors equipped with a random number generator and an exponential function accelerator enable the efficient execution of brain-inspired algorithms. We implement the recently introduced reward-based synaptic sampling model that employs structural plasticity to learn a function or task. Numerical simulation of the model requires updating the synapse variables in each time step, including an explorative random term. To the best of our knowledge, this is the most complex synapse model implemented so far on the SpiNNaker system. By making efficient use of the hardware accelerators and of numerical optimizations, the computation time of one plasticity update is reduced by a factor of 2. This, combined with fitting the model into the local static random-access memory (SRAM), leads to a 62% energy reduction compared to the case without accelerators and with the use of external dynamic random-access memory (DRAM). The model implementation is integrated into the SpiNNaker software framework, allowing scalability onto larger systems. The hardware-software system presented in this paper paves the way for power-efficient mobile and biomedical applications with biologically plausible brain-inspired algorithms.
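The per-time-step synapse update with an explorative random term has the general shape of a Langevin-style sampling step: drift along a gradient plus injected noise. A toy sketch of that shape; the Gaussian-prior gradient, step size, and temperature here are illustrative stand-ins, not the reward-based synaptic sampling model's actual equations:

```python
import numpy as np

rng = np.random.default_rng(2)

def sampling_update(theta, grad_logp, dt=1e-3, temperature=0.1):
    """One Euler step of a Langevin-style synaptic-sampling update:
    deterministic drift along the log-posterior gradient, plus an
    explorative noise term scaled as sqrt(2*T*dt)."""
    noise = rng.normal(size=theta.shape)
    return theta + dt * grad_logp(theta) + np.sqrt(2 * temperature * dt) * noise

# Toy log-posterior gradient: a Gaussian prior pulling parameters to zero.
grad_logp = lambda th: -th

theta = rng.normal(size=8)
for _ in range(1000):
    theta = sampling_update(theta, grad_logp)
```

The random draw and the exponential terms inside the drift are exactly the operations that the SpiNNaker 2 prototype accelerates in hardware, which is why this update benefits so much from the on-chip RNG and exponential unit.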
17
Jokar E, Abolfathi H, Ahmadi A. A Novel Nonlinear Function Evaluation Approach for Efficient FPGA Mapping of Neuron and Synaptic Plasticity Models. IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS 2019; 13:454-469. [PMID: 30802873 DOI: 10.1109/tbcas.2019.2900943] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Efficient hardware realization of spiking neural networks is of great significance in a wide variety of applications, such as high-speed modeling and simulation of large-scale neural systems. Exploiting the key features of FPGAs, this paper presents a novel nonlinear function evaluation approach, based on an effective uniform piecewise linear segmentation method, to efficiently approximate the nonlinear terms of neuron and synaptic plasticity models targeting low-cost digital implementation. The proposed approach takes advantage of a high-speed and extremely simple segment address encoder unit regardless of the number of segments, and therefore is capable of accurately approximating a given nonlinear function with a large number of straight lines. In addition, this approach can be efficiently mapped into FPGAs with minimal hardware cost. To investigate the application of the proposed nonlinear function evaluation approach in low-cost neuromorphic circuit design, it is applied to four case studies: the Izhikevich and FitzHugh-Nagumo neuron models as 2-dimensional case studies, the Hindmarsh-Rose neuron model as a relatively complex 3-dimensional model containing two nonlinear terms, and a calcium-based synaptic plasticity model capable of producing various STDP curves. Simulation and FPGA synthesis results demonstrate that the hardware proposed for each case study is capable of producing various responses remarkably similar to the original model and significantly outperforms the previously published counterparts in terms of resource utilization and maximum clock frequency.
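The key property exploited above is that with *uniform* segmentation the segment index is just the scaled integer part of the input (in fixed point, the top bits), so the segment address encoder is trivial no matter how many segments are used. A software sketch of that scheme; tanh, the [-4, 4) range, and 64 segments are illustrative choices, not the paper's case studies:

```python
import numpy as np

def build_pwl(f, a, b, n_segments):
    """Precompute slope/intercept tables for a uniform piecewise-linear
    approximation of f on [a, b) with n_segments straight lines."""
    xs = np.linspace(a, b, n_segments + 1)
    ys = f(xs)
    slopes = (ys[1:] - ys[:-1]) / (xs[1:] - xs[:-1])
    intercepts = ys[:-1] - slopes * xs[:-1]
    return slopes, intercepts

def pwl_eval(x, a, b, slopes, intercepts):
    """Evaluate the approximation. The segment index is the scaled integer
    part of x, mirroring the hardware address encoder (top bits of the
    fixed-point input)."""
    n = len(slopes)
    idx = np.clip(((x - a) / (b - a) * n).astype(int), 0, n - 1)
    return slopes[idx] * x + intercepts[idx]

slopes, intercepts = build_pwl(np.tanh, -4.0, 4.0, 64)
x = np.linspace(-3.9, 3.9, 100)
err = np.max(np.abs(pwl_eval(x, -4.0, 4.0, slopes, intercepts) - np.tanh(x)))
```

Because the table lookup replaces the address-encoding logic, increasing the segment count improves accuracy at the cost of memory only, not of extra combinational logic, which is the crux of the hardware efficiency claimed in the paper.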
18
Thakur CS, Molin JL, Cauwenberghs G, Indiveri G, Kumar K, Qiao N, Schemmel J, Wang R, Chicca E, Olson Hasler J, Seo JS, Yu S, Cao Y, van Schaik A, Etienne-Cummings R. Large-Scale Neuromorphic Spiking Array Processors: A Quest to Mimic the Brain. Front Neurosci 2018; 12:891. [PMID: 30559644 PMCID: PMC6287454 DOI: 10.3389/fnins.2018.00891] [Citation(s) in RCA: 76] [Impact Index Per Article: 10.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2018] [Accepted: 11/14/2018] [Indexed: 11/16/2022] Open
Abstract
Neuromorphic engineering (NE) encompasses a diverse range of approaches to information processing that are inspired by neurobiological systems, and this feature distinguishes neuromorphic systems from conventional computing systems. The brain has evolved over billions of years to solve difficult engineering problems by using efficient, parallel, low-power computation. The goal of NE is to design systems capable of brain-like computation. Numerous large-scale neuromorphic projects have emerged recently. This interdisciplinary field was listed among the top 10 technology breakthroughs of 2014 by the MIT Technology Review and among the top 10 emerging technologies of 2015 by the World Economic Forum. NE has two goals: first, a scientific goal to understand the computational properties of biological neural systems by using models implemented in integrated circuits (ICs); second, an engineering goal to exploit the known properties of biological systems to design and implement efficient devices for engineering applications. Building hardware neural emulators can be extremely useful for simulating large-scale neural models to explain how intelligent behavior arises in the brain. The principal advantages of neuromorphic emulators are that they are highly energy efficient, parallel and distributed, and require a small silicon area. Thus, compared to conventional CPUs, these neuromorphic emulators are beneficial in many engineering applications, such as porting deep learning algorithms for various recognition tasks. In this review article, we describe some of the most significant neuromorphic spiking emulators, compare the different architectures and approaches they use, illustrate their advantages and drawbacks, and highlight the capabilities that each can deliver to neural modelers. This article focuses on large-scale emulators and is a continuation of a previous review of various neural and synapse circuits (Indiveri et al., 2011). We also explore applications where these emulators have been used and discuss some of their promising future applications.
Affiliation(s)
- Chetan Singh Thakur
- Department of Electronic Systems Engineering, Indian Institute of Science, Bangalore, India
- Jamal Lottier Molin
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, United States
- Gert Cauwenberghs
- Department of Bioengineering and Institute for Neural Computation, University of California, San Diego, La Jolla, CA, United States
- Giacomo Indiveri
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Kundan Kumar
- Department of Electronic Systems Engineering, Indian Institute of Science, Bangalore, India
- Ning Qiao
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Johannes Schemmel
- Kirchhoff Institute for Physics, University of Heidelberg, Heidelberg, Germany
- Runchun Wang
- The MARCS Institute, Western Sydney University, Kingswood, NSW, Australia
- Elisabetta Chicca
- Cognitive Interaction Technology – Center of Excellence, Bielefeld University, Bielefeld, Germany
- Jennifer Olson Hasler
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States
- Jae-sun Seo
- School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, AZ, United States
- Shimeng Yu
- School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, AZ, United States
- Yu Cao
- School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, AZ, United States
- André van Schaik
- The MARCS Institute, Western Sydney University, Kingswood, NSW, Australia
- Ralph Etienne-Cummings
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, United States
19
Mizrahi A, Marsh T, Hoskins B, Stiles MD. Scalable Method to Find the Shortest Path in a Graph with Circuits of Memristors. PHYSICAL REVIEW APPLIED 2018; 10:10.1103/physrevapplied.10.064035. [PMID: 39450158 PMCID: PMC11500059 DOI: 10.1103/physrevapplied.10.064035] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/26/2024]
Abstract
Finding the shortest path in a graph has applications in a wide range of optimization problems. However, algorithmic methods scale with the size of the graph in terms of time and energy. We propose a method to solve the shortest-path problem using circuits of nanodevices called memristors and validate it on graphs of different sizes and topologies. It is both valid for an experimentally derived memristor model and robust to device variability. The time and energy of the computation scale with the length of the shortest path rather than with the size of the graph, making this method particularly attractive for solving large graphs with small path lengths.
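The scaling property above, computation time proportional to path length rather than graph size, can be mimicked in software with a discrete wavefront expansion: like current spreading through the memristor network, the front advances one edge per time step, so the step count at which the destination is reached equals the shortest path length. This is only a digital analogue of the analog device dynamics; the graph below and the unit edge weights are illustrative:

```python
def wavefront_steps(adj, src, dst):
    """Expand a wavefront from src one edge per time step and return the
    number of steps until dst is reached (the shortest path length in
    edges), or None if dst is unreachable."""
    frontier, seen, t = {src}, {src}, 0
    while frontier:
        if dst in frontier:
            return t
        nxt = {v for u in frontier for v in adj[u]} - seen  # one-step growth
        seen |= nxt
        frontier, t = nxt, t + 1
    return None

# Toy undirected graph: 0-1, 0-2, 1-3, 2-3, 3-4.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
```

In the memristor circuit, the "wavefront" is realized physically: devices along the shortest path switch first under the applied voltage, so the answer emerges in a time set by the path length, not by how many nodes the graph contains.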
Affiliation(s)
- Alice Mizrahi
- National Institute of Standards and Technology, Gaithersburg, Maryland, USA
- Maryland NanoCenter, University of Maryland, College Park, Maryland, USA
- Thomas Marsh
- National Institute of Standards and Technology, Gaithersburg, Maryland, USA
- Brian Hoskins
- National Institute of Standards and Technology, Gaithersburg, Maryland, USA
- M. D. Stiles
- National Institute of Standards and Technology, Gaithersburg, Maryland, USA
20
Blundell I, Brette R, Cleland TA, Close TG, Coca D, Davison AP, Diaz-Pier S, Fernandez Musoles C, Gleeson P, Goodman DFM, Hines M, Hopkins MW, Kumbhar P, Lester DR, Marin B, Morrison A, Müller E, Nowotny T, Peyser A, Plotnikov D, Richmond P, Rowley A, Rumpe B, Stimberg M, Stokes AB, Tomkins A, Trensch G, Woodman M, Eppler JM. Code Generation in Computational Neuroscience: A Review of Tools and Techniques. Front Neuroinform 2018; 12:68. [PMID: 30455637 PMCID: PMC6230720 DOI: 10.3389/fninf.2018.00068] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2018] [Accepted: 09/12/2018] [Indexed: 01/18/2023] Open
Abstract
Advances in experimental techniques and computational power, allowing researchers to gather anatomical and electrophysiological data at unprecedented levels of detail, have fostered the development of increasingly complex models in computational neuroscience. Large-scale, biophysically detailed cell models pose a particular set of computational challenges, and this has led to the development of a number of domain-specific simulators. At the other end of the scale of detail, the ever-growing variety of point-neuron models raises the implementation barrier even for those based on the relatively simple integrate-and-fire neuron model. Independently of model complexity, all modeling methods crucially depend on an efficient and accurate transformation of mathematical model descriptions into efficiently executable code. Neuroscientists usually publish model descriptions in terms of the mathematical equations underlying them. However, actually simulating them requires that they be translated into code. This can cause problems because errors may be introduced if the process is carried out by hand, and code written by neuroscientists may not be very computationally efficient. Furthermore, the translated code might be generated for different hardware platforms or operating system variants, or even written in different languages, and thus cannot easily be combined or even compared. Two main approaches to addressing these issues have been followed. The first is to limit users to a fixed set of optimized models, which limits flexibility. The second is to allow model definitions in a high-level interpreted language, although this may limit performance. Recently, a third approach has become increasingly popular: using code generation to automatically translate high-level descriptions into efficient low-level code, combining the best of the previous approaches. This approach also greatly enriches efforts to standardize simulator-independent model description languages. In the past few years, a number of code generation pipelines have been developed in the computational neuroscience community, which differ considerably in aim, scope, and functionality. This article provides an overview of existing pipelines currently used within the community and contrasts their capabilities and the technologies and concepts behind them.
Affiliation(s)
- Inga Blundell
- Forschungszentrum Jülich, Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute I, Jülich, Germany
- Romain Brette
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- Thomas A. Cleland
- Department of Psychology, Cornell University, Ithaca, NY, United States
- Thomas G. Close
- Monash Biomedical Imaging, Monash University, Melbourne, VIC, Australia
- Daniel Coca
- Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield, United Kingdom
- Andrew P. Davison
- Unité de Neurosciences, Information et Complexité, CNRS FRE 3693, Gif sur Yvette, France
- Sandra Diaz-Pier
- Forschungszentrum Jülich, Simulation Lab Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich, Germany
- Carlos Fernandez Musoles
- Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield, United Kingdom
- Padraig Gleeson
- Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
- Dan F. M. Goodman
- Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom
- Michael Hines
- Department of Neurobiology, School of Medicine, Yale University, New Haven, CT, United States
- Michael W. Hopkins
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, United Kingdom
- Pramod Kumbhar
- Blue Brain Project, EPFL, Campus Biotech, Geneva, Switzerland
- David R. Lester
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, United Kingdom
- Bóris Marin
- Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
- Centro de Matemática, Computação e Cognição, Universidade Federal do ABC, São Bernardo do Campo, Brazil
- Abigail Morrison
- Forschungszentrum Jülich, Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute I, Jülich, Germany
- Forschungszentrum Jülich, Simulation Lab Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich, Germany
- Faculty of Psychology, Institute of Cognitive Neuroscience, Ruhr-University Bochum, Bochum, Germany
- Eric Müller
- Kirchhoff-Institute for Physics, Universität Heidelberg, Heidelberg, Germany
- Thomas Nowotny
- Centre for Computational Neuroscience and Robotics, School of Engineering and Informatics, University of Sussex, Brighton, United Kingdom
- Alexander Peyser
- Forschungszentrum Jülich, Simulation Lab Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich, Germany
- Dimitri Plotnikov
- Forschungszentrum Jülich, Simulation Lab Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich, Germany
- RWTH Aachen University, Software Engineering, Jülich Aachen Research Alliance, Aachen, Germany
- Paul Richmond
- Department of Computer Science, University of Sheffield, Sheffield, United Kingdom
- Andrew Rowley
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, United Kingdom
- Bernhard Rumpe
- RWTH Aachen University, Software Engineering, Jülich Aachen Research Alliance, Aachen, Germany
- Marcel Stimberg
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- Alan B. Stokes
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, United Kingdom
- Adam Tomkins
- Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield, United Kingdom
- Guido Trensch
- Forschungszentrum Jülich, Simulation Lab Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich, Germany
- Marmaduke Woodman
- Institut de Neurosciences des Systèmes, Aix Marseille Université, Marseille, France
- Jochen Martin Eppler
- Forschungszentrum Jülich, Simulation Lab Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich, Germany