1. Yaghini Bonabi S, Asgharian H, Safari S, Nili Ahmadabadi M. FPGA implementation of a biological neural network based on the Hodgkin-Huxley neuron model. Front Neurosci 2014; 8:379. PMID: 25484854; PMCID: PMC4240168; DOI: 10.3389/fnins.2014.00379.
Abstract
A set of techniques for the efficient implementation of a Hodgkin-Huxley-based (H-H) neural network model on an FPGA (Field-Programmable Gate Array) is presented. The central implementation challenge is the complexity of the H-H model, which limits both the network size and the execution speed. However, the fundamentals of the original model cannot be compromised when the effect of synaptic specifications on network behavior is the subject of study. To address this, we used computational techniques such as the CORDIC (Coordinate Rotation Digital Computer) algorithm and step-by-step integration in the implementation of the arithmetic circuits. In addition, we employed techniques such as resource sharing to preserve the details of the model and to increase the network size, while keeping the network execution speed close to real time and maintaining high precision. An implementation of a two-mini-column network with 120/30 excitatory/inhibitory neurons is provided to demonstrate the method in practice. These implementation techniques make it possible to construct large FPGA-based network models for investigating the effect of different neurophysiological mechanisms, such as voltage-gated channels and synaptic activity, on network behavior within an acceptable execution time. Together with the inherent properties of FPGAs, such as parallelism and reconfigurability, our approach also makes the FPGA-based system a suitable candidate for studying the neural control of cognitive robots and systems.
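As a reader's aside, the "step-by-step integration" the abstract refers to can be illustrated with a plain forward-Euler update of the standard H-H equations. This is a software sketch, not the authors' fixed-point FPGA implementation; in hardware the exponentials in the rate functions are where CORDIC would come in, while `math.exp` stands in here. The constants are the textbook squid-axon values, not parameters from the paper.

```python
import math

# Standard Hodgkin-Huxley rate functions (V in mV). In an FPGA design the
# exponentials would be computed with CORDIC; math.exp stands in here.
def a_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * math.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * math.exp(-(V + 65.0) / 80.0)

def hh_step(V, m, h, n, I_ext, dt=0.01):
    """One forward-Euler ("step-by-step integration") update.
    Units: mV, ms, uA/cm^2; membrane capacitance C_m = 1 uF/cm^2."""
    I_Na = 120.0 * m**3 * h * (V - 50.0)   # g_Na = 120 mS/cm^2, E_Na = 50 mV
    I_K  = 36.0 * n**4 * (V + 77.0)        # g_K  = 36 mS/cm^2, E_K = -77 mV
    I_L  = 0.3 * (V + 54.387)              # g_L  = 0.3 mS/cm^2, E_L = -54.387 mV
    V += dt * (I_ext - I_Na - I_K - I_L)
    m += dt * (a_m(V) * (1.0 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1.0 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1.0 - n) - b_n(V) * n)
    return V, m, h, n
```

Driving this loop with a constant suprathreshold current (e.g. 10 uA/cm^2) produces repetitive spiking, which is the behavior the FPGA circuits must reproduce at each integration step.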
Affiliation(s)
- Safa Yaghini Bonabi
- Cognitive Robotic Lab., School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
- Hassan Asgharian
- Research Center of Information Technology, Department of Computer Engineering, Iran University of Science and Technology, Tehran, Iran
- Saeed Safari
- High Performance Embedded Computing Lab., School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
- Majid Nili Ahmadabadi
- Cognitive Robotic Lab., School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran; School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
2. Petrovici MA, Vogginger B, Müller P, Breitwieser O, Lundqvist M, Muller L, Ehrlich M, Destexhe A, Lansner A, Schüffny R, Schemmel J, Meier K. Characterization and compensation of network-level anomalies in mixed-signal neuromorphic modeling platforms. PLoS One 2014; 9:e108590. PMID: 25303102; PMCID: PMC4193761; DOI: 10.1371/journal.pone.0108590.
Abstract
Advancing the size and complexity of neural network models leads to an ever-increasing demand for computational resources for their simulation. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices, specifically limited hardware resources, limited parameter configurability, and parameter variations due to fixed-pattern noise and trial-to-trial variability. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require additional hardware configurability beyond that required to emulate the benchmark networks in the first place. We hereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks.
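To make the fixed-pattern-noise compensation idea concrete, here is a generic, hypothetical calibration sketch (not the ESS or BrainScaleS calibration routines): each hardware neuron is modeled as realizing a programmed parameter through an unknown per-neuron linear distortion, which a two-point measurement can fit and invert.

```python
import random

# Toy model of fixed-pattern noise: each "hardware neuron" maps a target
# parameter p to an effective value gain*p + offset, with gain and offset
# drawn once per neuron (fabrication mismatch). Calibration measures the
# mapping and pre-distorts the programmed value. All numbers are illustrative.
random.seed(0)

class NoisyNeuron:
    def __init__(self):
        self.gain = random.gauss(1.0, 0.1)
        self.offset = random.gauss(0.0, 0.05)
    def effective(self, p):
        """What the hardware actually realizes for a programmed value p."""
        return self.gain * p + self.offset

def calibrate(neuron):
    """Probe the neuron at two points, fit its linear distortion, and
    return a function that pre-distorts targets to compensate."""
    y0, y1 = neuron.effective(0.0), neuron.effective(1.0)
    gain, offset = y1 - y0, y0
    return lambda target: (target - offset) / gain

neurons = [NoisyNeuron() for _ in range(100)]
raw = [n.effective(0.5) for n in neurons]                 # uncompensated
cal = [n.effective(calibrate(n)(0.5)) for n in neurons]   # compensated
```

The same pre-distortion logic generalizes to any monotone parameter mapping; trial-to-trial variability, by contrast, cannot be calibrated away and must be absorbed by the network-level compensation strategies the paper discusses.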
Affiliation(s)
- Mihai A. Petrovici
- Ruprecht-Karls-Universität Heidelberg, Kirchhoff Institute for Physics, Heidelberg, Germany
- Bernhard Vogginger
- Technische Universität Dresden, Institute of Circuits and Systems, Dresden, Germany
- Paul Müller
- Ruprecht-Karls-Universität Heidelberg, Kirchhoff Institute for Physics, Heidelberg, Germany
- Oliver Breitwieser
- Ruprecht-Karls-Universität Heidelberg, Kirchhoff Institute for Physics, Heidelberg, Germany
- Mikael Lundqvist
- Department of Computational Biology, School of Computer Science and Communication, Stockholm University and Royal Institute of Technology, Stockholm, Sweden
- Lyle Muller
- CNRS, Unité de Neuroscience, Information et Complexité, Gif-sur-Yvette, France
- Matthias Ehrlich
- Technische Universität Dresden, Institute of Circuits and Systems, Dresden, Germany
- Alain Destexhe
- CNRS, Unité de Neuroscience, Information et Complexité, Gif-sur-Yvette, France
- Anders Lansner
- Department of Computational Biology, School of Computer Science and Communication, Stockholm University and Royal Institute of Technology, Stockholm, Sweden
- René Schüffny
- Technische Universität Dresden, Institute of Circuits and Systems, Dresden, Germany
- Johannes Schemmel
- Ruprecht-Karls-Universität Heidelberg, Kirchhoff Institute for Physics, Heidelberg, Germany
- Karlheinz Meier
- Ruprecht-Karls-Universität Heidelberg, Kirchhoff Institute for Physics, Heidelberg, Germany
3. Vrtaric D, Ceperic V, Baric A. Area-efficient differential Gaussian circuit for dedicated hardware implementations of Gaussian function based machine learning algorithms. Neurocomputing 2013. DOI: 10.1016/j.neucom.2013.02.022.
4. Saïghi S, Bornat Y, Tomas J, Le Masson G, Renaud S. A library of analog operators based on the Hodgkin-Huxley formalism for the design of tunable, real-time, silicon neurons. IEEE Trans Biomed Circuits Syst 2011; 5:3-19. PMID: 23850974; DOI: 10.1109/tbcas.2010.2078816.
Abstract
In this paper, we present a library of analog operators used for the analog real-time computation of the Hodgkin-Huxley formalism. These operators make it possible to design a silicon (Si) neuron that is dynamically tunable and that reproduces different kinds of neurons. To characterize this Si neuron, we used a method that is novel in neuromorphic engineering but well known in electrophysiology: the "voltage-clamp" technique. We also compare the features of an application-specific integrated circuit built with this library with results obtained from software simulations. We then present the complex behavior of neural membrane voltages and the potential applications of this Si neuron.
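The voltage-clamp idea can be sketched in a few lines: clamp the membrane at a series of voltages, let the gating variable settle to its steady state, and read out the resulting ionic current, from which the activation curve and maximal conductance can be recovered. The Boltzmann activation curve and conductance parameters below are illustrative placeholders, not values from the paper.

```python
import math

# Voltage-clamp characterization sketch for one (potassium-like) conductance.
# Under a long clamp the gating variable settles to its steady state n_inf(V),
# so the recorded current directly exposes the activation curve.
def n_inf(V):
    """Steady-state activation, Boltzmann form (illustrative parameters)."""
    return 1.0 / (1.0 + math.exp(-(V + 53.0) / 15.0))

def clamp_current(V_hold, g_max=36.0, E_rev=-77.0):
    """Steady-state current at holding potential V_hold (mV), in uA/cm^2
    for g in mS/cm^2. A fourth-power gate mimics the K+ channel."""
    n = n_inf(V_hold)
    return g_max * n**4 * (V_hold - E_rev)

# Sweep the holding potential to trace the current-voltage relation.
iv_curve = [(V, clamp_current(V)) for V in range(-80, 41, 10)]
```

The current vanishes exactly at the reversal potential and grows with depolarization, which is the signature the voltage-clamp protocol exploits when fitting a silicon neuron against its biological template.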
5.

6. Maguire L. Does Soft Computing Classify Research in Spiking Neural Networks? Int J Comput Intell Syst 2010. DOI: 10.1080/18756891.2010.9727688.
7. Lázaro J, Arias J, Astarloa A, Bidarte U, Zuloaga A. Hardware architecture for a general regression neural network coprocessor. Neurocomputing 2007. DOI: 10.1016/j.neucom.2007.01.012.
8. Vogelstein RJ, Mallik U, Culurciello E, Cauwenberghs G, Etienne-Cummings R. A multichip neuromorphic system for spike-based visual information processing. Neural Comput 2007; 19:2281-300. PMID: 17650061; DOI: 10.1162/neco.2007.19.9.2281.
Abstract
We present a multichip, mixed-signal VLSI system for spike-based vision processing. The system consists of an 80 × 60-pixel neuromorphic retina and a 4800-neuron silicon cortex with 4,194,304 synapses. Its functionality is illustrated with experimental data on multiple components of an attention-based hierarchical model of cortical object recognition, including feature coding, salience detection, and foveation. This model exploits arbitrary and reconfigurable connectivity between cells in the multichip architecture, achieved by asynchronously routing neural spike events within and between chips according to a memory-based look-up table. Synaptic parameters, including conductance and reversal potential, are also stored in memory and are used to dynamically configure synapse circuits within the silicon neurons.
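The memory-based look-up table described in the abstract can be sketched as a plain dictionary from presynaptic address to (target, conductance, reversal potential) triples; the addresses and parameter values here are invented for illustration, not taken from the chip.

```python
# Address-event routing sketch: each presynaptic address maps to a list of
# (target_address, conductance, reversal_potential) entries, so connectivity
# is fully reconfigurable by rewriting memory rather than rewiring silicon.
routing_table = {
    0: [(10, 0.5, 0.0), (11, 0.3, 0.0)],   # excitatory fan-out to two targets
    1: [(10, 0.8, -70.0)],                 # inhibitory (E_rev below rest)
}

def route(spike_address):
    """Deliver one address-event: look up the table and return the synaptic
    events to apply at the postsynaptic neurons (empty if unmapped)."""
    return [{"target": t, "g": g, "E_rev": e}
            for t, g, e in routing_table.get(spike_address, [])]

events = route(0)  # two postsynaptic events, for neuron addresses 10 and 11
```

Because the table lives in ordinary memory, arbitrary fan-out and per-synapse parameters cost only table entries, which is what makes the connectivity "arbitrary and reconfigurable".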
Affiliation(s)
- R Jacob Vogelstein
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA.
9. Vogelstein RJ, Mallik U, Vogelstein JT, Cauwenberghs G. Dynamically reconfigurable silicon array of spiking neurons with conductance-based synapses. IEEE Trans Neural Netw 2007; 18:253-65. PMID: 17278476; DOI: 10.1109/tnn.2006.883007.
Abstract
A mixed-signal very large scale integration (VLSI) chip for large scale emulation of spiking neural networks is presented. The chip contains 2400 silicon neurons with fully programmable and reconfigurable synaptic connectivity. Each neuron implements a discrete-time model of a single-compartment cell. The model allows for analog membrane dynamics and an arbitrary number of synaptic connections, each with tunable conductance and reversal potential. The array of silicon neurons functions as an address-event (AE) transceiver, with incoming and outgoing spikes communicated over an asynchronous event-driven digital bus. Address encoding and conflict resolution of spiking events are implemented via a randomized arbitration scheme that ensures balanced servicing of event requests across the array. Routing of events is implemented externally using dynamically programmable random-access memory that stores a postsynaptic address, the conductance, and the reversal potential of each synaptic connection. Here, we describe the silicon neuron circuits, present experimental data characterizing the 3 mm x 3 mm chip fabricated in 0.5-microm complementary metal-oxide-semiconductor (CMOS) technology, and demonstrate its utility by configuring the hardware to emulate a model of attractor dynamics and waves of neural activity during sleep in rat hippocampus.
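The key property of conductance-based synapses, that a single synapse circuit is excitatory or inhibitory depending only on its stored reversal potential via I = g(E_rev - V), can be sketched as a discrete-time membrane update in the spirit of the model described above. The constants are illustrative, not the chip's.

```python
# Discrete-time single-compartment update with conductance-based synapses.
# Each synaptic event contributes g * (E_rev - V): a high reversal potential
# pulls the membrane up (excitation), a low one pulls it down (inhibition),
# using the identical update rule. All constants are illustrative.
def update(V, events, V_leak=-65.0, g_leak=0.05):
    """One time step: leak toward rest plus one term per synaptic event.
    events is a list of (conductance, reversal_potential) pairs."""
    dV = g_leak * (V_leak - V)
    for g, E_rev in events:
        dV += g * (E_rev - V)
    return V + dV

V = -65.0
V = update(V, [(0.2, 0.0)])        # excitatory event: V moves toward 0 mV
V = update(V, [(0.2, -80.0)])      # inhibitory event: V is pushed back down
```

Storing (g, E_rev) per connection in external memory, as the chip does, is what lets one physical synapse circuit be time-multiplexed across an arbitrary number of logical synapses.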
Affiliation(s)
- R Jacob Vogelstein
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD 21205, USA.
10. Zou Q, Bornat Y, Saïghi S, Tomas J, Renaud S, Destexhe A. Analog-digital simulations of full conductance-based networks of spiking neurons with spike timing dependent plasticity. Network (Bristol, England) 2006; 17:211-33. PMID: 17162612; DOI: 10.1080/09548980600711124.
Abstract
We introduce and test a system for simulating networks of conductance-based neuron models using analog circuits. At the single-cell level, we use custom-designed analog application-specific integrated circuits (ASICs) that simulate two types of spiking neurons based on Hodgkin-Huxley-like dynamics: "regular spiking" excitatory neurons with spike-frequency adaptation, and "fast spiking" inhibitory neurons. Synaptic interactions are mediated by conductance-based synaptic currents described by kinetic models. Connectivity and plasticity rules are implemented digitally through a real-time interface between a computer and a PCI board containing the ASICs. We show a prototype system of a few neurons interconnected with synapses undergoing spike-timing-dependent plasticity (STDP), and compare this system with numerical simulations. We use this system to evaluate the effect of parameter dispersion on the behavior of small circuits of neurons. We show that, although the exact spike timings are not precisely emulated by the ASIC neurons, the behavior of small networks with STDP matches that of numerical simulations. This mixed analog-digital architecture thus provides a valuable tool for real-time simulations of networks of neurons with STDP, and should be useful for real-time applications such as hybrid systems interfacing network models with biological neurons.
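The pair-based STDP rule that such a digital plasticity loop typically evaluates can be sketched as follows; the amplitudes and time constant are illustrative, and this is not the specific rule implemented on the PCI board.

```python
import math

# Pair-based STDP sketch: the weight change depends on the time difference
# between pre- and postsynaptic spikes. Pre-before-post potentiates, post-
# before-pre depresses, and the magnitude decays exponentially with |dt|.
# A_plus, A_minus, and tau are illustrative, not the paper's values.
def stdp_dw(t_pre, t_post, A_plus=0.01, A_minus=0.012, tau=20.0):
    """Weight change for one spike pair; times in ms."""
    dt = t_post - t_pre
    if dt > 0:                                  # causal pairing: LTP
        return A_plus * math.exp(-dt / tau)
    elif dt < 0:                                # anti-causal pairing: LTD
        return -A_minus * math.exp(dt / tau)
    return 0.0                                  # simultaneous: no change

ltp = stdp_dw(10.0, 15.0)   # dt = +5 ms -> positive weight change
ltd = stdp_dw(15.0, 10.0)   # dt = -5 ms -> negative weight change
```

Running such a rule in software against spike times streamed from analog neurons is exactly the division of labor the mixed analog-digital architecture exploits: fast membrane dynamics in silicon, flexible plasticity in the digital loop.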
Affiliation(s)
- Quan Zou
- Integrative and Computational Neuroscience Unit, CNRS, Gif-sur-Yvette, France