1. Galloni AR, Yuan Y, Zhu M, Yu H, Bisht RS, Wu CTM, Grienberger C, Ramanathan S, Milstein AD. Neuromorphic one-shot learning utilizing a phase-transition material. Proc Natl Acad Sci U S A 2024; 121:e2318362121. PMID: 38630718; PMCID: PMC11047090; DOI: 10.1073/pnas.2318362121.
Abstract
Design of hardware based on biological principles of neuronal computation and plasticity in the brain is a leading approach to realizing energy- and sample-efficient AI and learning machines. An important factor in selection of the hardware building blocks is the identification of candidate materials with physical properties suitable to emulate the large dynamic ranges and varied timescales of neuronal signaling. Previous work has shown that the all-or-none spiking behavior of neurons can be mimicked by threshold switches utilizing material phase transitions. Here, we demonstrate that devices based on a prototypical metal-insulator-transition material, vanadium dioxide (VO2), can be dynamically controlled to access a continuum of intermediate resistance states. Furthermore, the timescale of their intrinsic relaxation can be configured to match a range of biologically relevant timescales from milliseconds to seconds. We exploit these device properties to emulate three aspects of neuronal analog computation: fast (~1 ms) spiking in a neuronal soma compartment, slow (~100 ms) spiking in a dendritic compartment, and ultraslow (~1 s) biochemical signaling involved in temporal credit assignment for a recently discovered biological mechanism of one-shot learning. Simulations show that an artificial neural network using properties of VO2 devices to control an agent navigating a spatial environment can learn an efficient path to a reward in up to fourfold fewer trials than standard methods. The phase relaxations described in our study may be engineered in a variety of materials and can be controlled by thermal, electrical, or optical stimuli, suggesting further opportunities to emulate biological learning in neuromorphic hardware.
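As a back-of-the-envelope illustration of the tunable relaxation the abstract describes (not the authors' device model), a first-order exponential decay shows how a single state variable can cover the three quoted timescales; the time constants below are assumptions taken only from the abstract's order-of-magnitude figures (~1 ms, ~100 ms, ~1 s).

```python
import math

# Hypothetical sketch: a device state variable relaxing exponentially
# toward baseline with a configurable time constant. The abstract's three
# regimes correspond to tau ~ 1 ms (somatic spiking), ~100 ms (dendritic
# spiking), and ~1 s (slow credit-assignment signaling).
def relax(x0, tau_s, t_s):
    """State remaining after t_s seconds of decay with time constant tau_s."""
    return x0 * math.exp(-t_s / tau_s)

# After 5 ms, a fast (1 ms) state has almost fully decayed,
# while an ultraslow (1 s) state is nearly unchanged.
fast_state = relax(1.0, tau_s=1e-3, t_s=5e-3)
slow_state = relax(1.0, tau_s=1.0, t_s=5e-3)
```

The point of the sketch is only that one decay law spans all three regimes; which timescale applies is set entirely by the configured `tau_s`.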
Affiliation(s)
- Alessandro R. Galloni
- Department of Neuroscience and Cell Biology, Robert Wood Johnson Medical School, Rutgers, The State University of New Jersey, Piscataway, NJ 08854
- Center for Advanced Biotechnology and Medicine, Rutgers, The State University of New Jersey, Piscataway, NJ 08854
- Yifan Yuan
- Department of Electrical and Computer Engineering, Rutgers, The State University of New Jersey, Piscataway, NJ 08854
- Minning Zhu
- Department of Electrical and Computer Engineering, Rutgers, The State University of New Jersey, Piscataway, NJ 08854
- Haoming Yu
- School of Materials Engineering, Purdue University, West Lafayette, IN 47907
- Ravindra S. Bisht
- Department of Electrical and Computer Engineering, Rutgers, The State University of New Jersey, Piscataway, NJ 08854
- Chung-Tse Michael Wu
- Department of Electrical and Computer Engineering, Rutgers, The State University of New Jersey, Piscataway, NJ 08854
- Christine Grienberger
- Department of Neuroscience, Brandeis University, Waltham, MA 02453
- Department of Biology and Volen National Center for Complex Systems, Brandeis University, Waltham, MA 02453
- Shriram Ramanathan
- Department of Electrical and Computer Engineering, Rutgers, The State University of New Jersey, Piscataway, NJ 08854
- Aaron D. Milstein
- Department of Neuroscience and Cell Biology, Robert Wood Johnson Medical School, Rutgers, The State University of New Jersey, Piscataway, NJ 08854
- Center for Advanced Biotechnology and Medicine, Rutgers, The State University of New Jersey, Piscataway, NJ 08854
2. Zhou PJ, Zuo Y, Qiao GC, Zhang CM, Zhang Z, Meng LW, Yu Q, Liu Y, Hu SG. Achieving High Core Neuron Density in a Neuromorphic Chip Through Trade-off Among Area, Power Consumption, and Data Access Bandwidth. IEEE Trans Biomed Circuits Syst 2023; 17:1319-1330. PMID: 37405896; DOI: 10.1109/tbcas.2023.3292469.
Abstract
As a crucial component of neuromorphic chips, on-chip memory usually occupies most of the on-chip resources and limits the improvement of neuron density. The alternative of using off-chip memory may introduce additional power consumption or even a bottleneck for off-chip data access. This article proposes an on- and off-chip co-design approach and a figure of merit (FOM) to achieve a trade-off between chip area, power consumption, and data access bandwidth. By evaluating the FOM of each design scheme, the scheme with the highest FOM (1.085× better than the baseline) is adopted to design a neuromorphic chip. Deep multiplexing and weight-sharing technologies are used to reduce on-chip resource overhead and data access pressure. A hybrid memory design method is proposed to optimize on- and off-chip memory distribution, which reduces on-chip storage pressure and total power consumption by 92.88% and 27.86%, respectively, while avoiding an explosion of off-chip access bandwidth. The co-designed neuromorphic chip, with ten cores fabricated in a standard 55 nm CMOS technology, has an area of 4.4 mm² and a core neuron density of 4.92 K/mm², an improvement of 3.39–30.56× over previous works. After deploying a fully connected and a convolution-based spiking neural network (SNN) for ECG signal recognition, the neuromorphic chip achieves 92% and 95% accuracy, respectively. This work provides a new path for developing high-density and large-scale neuromorphic chips.
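The abstract does not reproduce the paper's actual FOM formula, but the selection procedure it describes (score every design scheme, adopt the one with the highest FOM) can be sketched under an assumed cost model; the scheme names, metric values, and the inverse-cost-product scoring below are all hypothetical.

```python
# Hypothetical FOM-based design-space selection. The real FOM in the paper
# is not reproduced here; this sketch merely assumes that smaller area,
# power, and off-chip bandwidth are all better, weighted equally.
schemes = {
    "baseline":  {"area_mm2": 5.0, "power_mw": 40.0, "bw_gbps": 2.0},
    "co-design": {"area_mm2": 4.4, "power_mw": 30.0, "bw_gbps": 2.2},
}

def fom(metrics):
    # Inverse of a cost product: penalizes area, power, and bandwidth.
    return 1.0 / (metrics["area_mm2"] * metrics["power_mw"] * metrics["bw_gbps"])

# Adopt the scheme with the highest figure of merit.
best = max(schemes, key=lambda name: fom(schemes[name]))
```

Any monotone scoring function would fit this selection loop; the paper's contribution lies in what the FOM measures, not in the argmax itself.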
3. Park J, Ha S, Yu T, Neftci E, Cauwenberghs G. A 22-pJ/spike 73-Mspikes/s 130k-compartment neural array transceiver with conductance-based synaptic and membrane dynamics. Front Neurosci 2023; 17:1198306. PMID: 37700751; PMCID: PMC10493285; DOI: 10.3389/fnins.2023.1198306.
Abstract
Neuromorphic cognitive computing offers a bio-inspired means to approach the natural intelligence of biological neural systems in silicon integrated circuits. Typically, such circuits either reproduce biophysical neuronal dynamics in great detail, as tools for computational neuroscience, or abstract away the biology by simplifying the functional forms of neural computation in large-scale systems for machine intelligence with high integration density and energy efficiency. Here we report a hybrid approach that offers biophysical realism in the emulation of multi-compartment neuronal network dynamics at very large scale with high implementation efficiency, yet with high flexibility in configuring the functional form and the network topology. The integrate-and-fire array transceiver (IFAT) chip emulates the continuous-time analog membrane dynamics of 65 k two-compartment neurons with conductance-based synapses. Fired action potentials are registered as address-event encoded output spikes, while the four types of synapses coupling to each neuron are activated by address-event decoded input spikes for fully reconfigurable synaptic connectivity, facilitating virtual wiring implemented by routing address-event spikes externally through a synaptic routing table. The peak conductance strength of synapse activation specified by the address-event input spans three decades of dynamic range, digitally controlled by pulse-width and amplitude modulation (PWAM) of the drive voltage activating the log-domain linear synapse circuit. Two nested levels of micro-pipelining in the IFAT architecture improve both the throughput and the efficiency of synaptic input, resulting in a measured sustained peak throughput of 73 Mspikes/s and an overall chip-level energy efficiency of 22 pJ/spike. Non-uniformity in digitally encoded synapse strength due to analog mismatch is mitigated through single-point digital offset calibration. Combined with the flexibly layered and recurrent synaptic connectivity provided by hierarchical address-event routing of registered spike events through external memory, the IFAT lends itself to efficient large-scale emulation of general biophysical spiking neural networks, as well as rate-based mapping of rectified linear unit (ReLU) neural activations.
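The "virtual wiring" the abstract describes, in which connectivity lives in an external routing table rather than in physical wires, can be sketched in miniature; the table contents and field layout below are hypothetical, chosen only to illustrate the fan-out expansion of one address-event.

```python
# Minimal sketch of address-event routing (AER) through an external
# synaptic routing table. Each fired neuron emits its address; the table
# expands that address into the postsynaptic input events it fans out to.
# Field layout (target address, synapse type, weight code) is hypothetical.
routing_table = {
    0: [(2, "exc", 7), (3, "inh", 3)],
    1: [(3, "exc", 5)],
}

def route(spike_address):
    """Expand one output address-event into its list of synaptic input events."""
    return routing_table.get(spike_address, [])

# A spike from neuron 0 becomes two synaptic input events (one excitatory,
# one inhibitory); rewiring the network means rewriting the table, not the chip.
events = route(0)
```

This is the reason the connectivity is "fully reconfigurable": topology changes are memory writes to the external table, leaving the silicon untouched.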
Affiliation(s)
- Jongkil Park
- Center for Neuromorphic Engineering, Korea Institute of Science and Technology (KIST), Seoul, Republic of Korea
- Institute for Neural Computation, University of California, San Diego, La Jolla, CA, United States
- Department of Electrical and Computer Engineering, Jacobs School of Engineering, University of California, San Diego, La Jolla, CA, United States
- Sohmyung Ha
- Institute for Neural Computation, University of California, San Diego, La Jolla, CA, United States
- Department of Bioengineering, Jacobs School of Engineering, University of California, San Diego, La Jolla, CA, United States
- Division of Engineering, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Theodore Yu
- Institute for Neural Computation, University of California, San Diego, La Jolla, CA, United States
- Department of Electrical and Computer Engineering, Jacobs School of Engineering, University of California, San Diego, La Jolla, CA, United States
- Emre Neftci
- Peter Grünberg Institute, Forschungszentrum Jülich, RWTH, Aachen, Germany
- Gert Cauwenberghs
- Institute for Neural Computation, University of California, San Diego, La Jolla, CA, United States
- Department of Bioengineering, Jacobs School of Engineering, University of California, San Diego, La Jolla, CA, United States
4. Gautam A, Kohno T. A Conductance-Based Silicon Synapse Circuit. Biomimetics (Basel) 2022; 7:246. PMID: 36546946; PMCID: PMC9775663; DOI: 10.3390/biomimetics7040246.
Abstract
Neuron, synapse, and learning circuits inspired by the brain comprise the key components of a neuromorphic chip. In this study, we present a conductance-based analog silicon synapse circuit suitable for the implementation of reduced or multi-compartment neuron models. Compartmental models are more bio-realistic; they are implemented in neuromorphic chips that aim to mimic the electrical activities of neuronal networks in the brain and incorporate biomimetic soma and synapse circuits. Most contemporary low-power analog synapse circuits implement bio-inspired "current-based" synaptic models suited to the implementation of single-compartment point neuron models. They emulate the exponential decay profile of the synaptic current but ignore the effect of the postsynaptic membrane potential on the synaptic current. This dependence is necessary to emulate shunting inhibition, which is thought to play important roles in information processing in the brain. The proposed circuit uses an oscillator-based resistor-type element at its output stage to incorporate this effect, and the circuit is used to demonstrate the shunting inhibition phenomenon. Next, to demonstrate that the oscillatory nature of the induced synaptic current has no unforeseen effects, the synapse circuit is employed in a spatiotemporal spike pattern detection task. The task employs the adaptive spike-timing-dependent plasticity (STDP) learning rule, a bio-inspired learning rule introduced in a previous study. The mixed-signal chip is designed in a Taiwan Semiconductor Manufacturing Company (TSMC) 250 nm complementary metal oxide semiconductor technology node. It comprises a biomimetic soma circuit and 256 synapse circuits, along with their learning circuitries.
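The voltage dependence this abstract highlights is the standard conductance-based synapse, I_syn = g_syn · (V − E_syn); a current-based model simply drops the (V − E_syn) factor. The sketch below uses generic textbook values (not the paper's circuit parameters) to show why a synapse with reversal potential near rest injects almost no current yet still attenuates excitation, which is shunting inhibition.

```python
# Standard conductance-based synaptic current (textbook form, not the
# paper's circuit). All parameter values are generic illustrative numbers.
def syn_current(g_syn, v_m, e_syn):
    """I_syn = g_syn * (V - E_syn); sign convention: positive = outward."""
    return g_syn * (v_m - e_syn)

V_REST = -65e-3   # resting potential (V)
G_SHUNT = 5e-9    # shunting conductance (S)
G_LEAK = 10e-9    # leak conductance (S)
I_EXC = 0.1e-9    # steady excitatory input current (A)

# With E_syn at rest, the open shunt injects no current at rest...
i_at_rest = syn_current(G_SHUNT, V_REST, e_syn=V_REST)

# ...but it lowers input resistance, so the same excitatory drive
# produces a smaller steady-state depolarization.
dv_no_shunt = I_EXC / G_LEAK               # 10 mV
dv_with_shunt = I_EXC / (G_LEAK + G_SHUNT) # ~6.7 mV
```

A current-based synapse cannot reproduce this divisive effect, which is why the circuit's output stage must model the (V − E_syn) dependence explicitly.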
Affiliation(s)
- Ashish Gautam
- Institute of Industrial Science, The University of Tokyo, Tokyo 153-8505, Japan
5. Gautam A, Kohno T. An Adaptive STDP Learning Rule for Neuromorphic Systems. Front Neurosci 2021; 15:741116. PMID: 34630026; PMCID: PMC8498208; DOI: 10.3389/fnins.2021.741116.
Abstract
The promise of neuromorphic computing to develop ultra-low-power intelligent devices lies in its ability to localize information processing and memory storage in synaptic circuits, much like the synapses in the brain. Spiking neural networks modeled using high-resolution synapses and armed with local unsupervised learning rules like spike-timing-dependent plasticity (STDP) have shown promising results in tasks such as pattern detection and image classification. However, designing and implementing a conventional, multibit STDP circuit becomes complex both in terms of the circuitry and the required silicon area. In this work, we introduce a modified, hardware-friendly STDP learning rule (named adaptive STDP) implemented using just 4-bit synapses. We demonstrate the capability of this learning rule in a pattern recognition task, in which a neuron learns to recognize a specific spike pattern embedded within noisy inhomogeneous Poisson spikes. Our results demonstrate that the performance of the proposed learning rule (94% using just 4-bit synapses) is similar to that of conventional STDP learning (96% using 64-bit floating-point precision). The models used in this study are idealized for a CMOS neuromorphic circuit with analog soma and synapse circuits and mixed-signal learning circuits. The learning circuit stores the synaptic weight in a 4-bit digital memory that is updated asynchronously. In circuit simulation with the Taiwan Semiconductor Manufacturing Company (TSMC) 250 nm CMOS process design kit (PDK), the static power consumption of a single synapse and the energy per spike (to generate a synaptic current of amplitude 15 pA and time constant 3 ms) are less than 2 pW and 200 fJ, respectively. The static power consumption of the learning circuit is less than 135 pW, and the energy to process a pair of pre- and postsynaptic spikes corresponding to a single learning step is less than 235 pJ. A single 4-bit synapse (capable of being configured as excitatory, inhibitory, or shunting inhibitory), along with its learning circuitry and digital memory, occupies around 17,250 μm² of silicon area.
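The core idea of pairing STDP with a coarse digital weight memory can be sketched as follows. This is a generic pair-based STDP rule with the weight rounded to 16 levels, not the authors' adaptive rule; the time constants and learning-rate amplitude are hypothetical.

```python
import math

# Generic pair-based STDP with a 4-bit (16-level) synaptic weight.
# Time constants and amplitude are illustrative assumptions, not values
# from the paper; the adaptive rule itself is not reproduced here.
TAU_PLUS = 20.0   # ms, potentiation window
TAU_MINUS = 20.0  # ms, depression window
AMPLITUDE = 2.0   # peak weight change, in weight levels
W_MAX = 15        # 4-bit ceiling

def stdp_update(w, dt_ms):
    """New 4-bit weight after one pre/post spike pair.

    dt_ms = t_post - t_pre: positive -> potentiation (LTP),
    negative -> depression (LTD). The continuous increment is rounded
    to the nearest integer level, emulating coarse digital weight memory.
    """
    if dt_ms > 0:
        dw = AMPLITUDE * math.exp(-dt_ms / TAU_PLUS)
    else:
        dw = -AMPLITUDE * math.exp(dt_ms / TAU_MINUS)
    return max(0, min(W_MAX, round(w + dw)))

w = 8
w = stdp_update(w, 5.0)    # post shortly after pre: weight increases
w = stdp_update(w, -5.0)   # post before pre: weight decreases
```

Rounding to 16 levels is exactly what makes naive STDP fragile at low precision (small updates vanish); the paper's adaptive modification addresses this regime.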
Affiliation(s)
- Ashish Gautam
- Department of Electrical Engineering and Information Systems, Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Takashi Kohno
- Institute of Industrial Science, The University of Tokyo, Tokyo, Japan
6. Malik SA, Mir AH. Discrete Multiplierless Implementation of Fractional Order Hindmarsh–Rose Model. IEEE Trans Emerg Top Comput Intell 2021. DOI: 10.1109/tetci.2020.2979462.
7. Primavera BA, Shainline JM. Considerations for Neuromorphic Supercomputing in Semiconducting and Superconducting Optoelectronic Hardware. Front Neurosci 2021; 15:732368. PMID: 34552465; PMCID: PMC8450355; DOI: 10.3389/fnins.2021.732368.
Abstract
Any large-scale spiking neuromorphic system striving for complexity at the level of the human brain and beyond will need to be co-optimized for communication and computation. Such reasoning leads to the proposal for optoelectronic neuromorphic platforms that leverage the complementary properties of optics and electronics. Starting from the conjecture that future large-scale neuromorphic systems will utilize integrated photonics and fiber optics for communication in conjunction with analog electronics for computation, we consider two possible paths toward achieving this vision. The first is a semiconductor platform based on analog CMOS circuits and waveguide-integrated photodiodes. The second is a superconducting approach that utilizes Josephson junctions and waveguide-integrated superconducting single-photon detectors. We discuss available devices, assess scaling potential, and provide a list of key metrics and demonstrations for each platform. Both platforms hold potential, but their development will diverge in important respects. Semiconductor systems benefit from a robust fabrication ecosystem and can build on extensive progress made in purely electronic neuromorphic computing but will require III-V light source integration with electronics at an unprecedented scale, further advances in ultra-low capacitance photodiodes, and success from emerging memory technologies. Superconducting systems place near theoretically minimum burdens on light sources (a tremendous boon to one of the most speculative aspects of either platform) and provide new opportunities for integrated, high-endurance synaptic memory. However, superconducting optoelectronic systems will also contend with interfacing low-voltage electronic circuits to semiconductor light sources, the serial biasing of superconducting devices on an unprecedented scale, a less mature fabrication ecosystem, and cryogenic infrastructure.
Affiliation(s)
- Bryce A. Primavera
- National Institute of Standards and Technology, Boulder, CO, United States
- Department of Physics, University of Colorado Boulder, Boulder, CO, United States
8. Yang S, Deng B, Wang J, Li H, Lu M, Che Y, Wei X, Loparo KA. Scalable Digital Neuromorphic Architecture for Large-Scale Biophysically Meaningful Neural Network With Multi-Compartment Neurons. IEEE Trans Neural Netw Learn Syst 2020; 31:148-162. PMID: 30892250; DOI: 10.1109/tnnls.2019.2899936.
Abstract
Multi-compartment emulation is an essential step to enhance the biological realism of neuromorphic systems and to further understand the computational power of neurons. In this paper, we present a hardware-efficient, scalable, and real-time computing strategy for the implementation of large-scale biologically meaningful neural networks with one million multi-compartment neurons (CMNs). The hardware platform uses four Altera Stratix III field-programmable gate arrays, and both the cellular and the network levels are considered, providing an efficient implementation of a large-scale spiking neural network with biophysically plausible dynamics. At the cellular level, a cost-efficient multi-CMN model is presented that reproduces detailed neuronal dynamics with representative neuronal morphology. A set of efficient neuromorphic techniques for single-CMN implementation is presented, eliminating the hardware cost of memory and multiplier resources and enhancing computational speed by 56.59% in comparison with the classical digital implementation method. At the network level, a scalable network-on-chip (NoC) architecture is proposed with a novel routing algorithm to enhance NoC performance, including throughput and computational latency, leading to higher computational efficiency and capability in comparison with state-of-the-art projects. The experimental results demonstrate that the proposed work provides an efficient model and architecture for large-scale biologically meaningful networks, while the hardware synthesis results demonstrate low area utilization and high computational speed that support the scalability of the approach.
9. Keren H, Partzsch J, Marom S, Mayr CG. A Biohybrid Setup for Coupling Biological and Neuromorphic Neural Networks. Front Neurosci 2019; 13:432. PMID: 31133779; PMCID: PMC6517490; DOI: 10.3389/fnins.2019.00432.
Abstract
Developing technologies for coupling neural activity and artificial neural components is key for advancing neural interfaces and neuroprosthetics. We present a biohybrid experimental setting in which the activity of a biological neural network is coupled to a biomimetic hardware network. The hardware network (denoted NeuroSoC) exhibits complex dynamics with a multiplicity of timescales, emulating 2880 neurons and 12.7 M synapses on a VLSI chip. This network is coupled to a neural network in vitro, where the activities of both the biological and the hardware networks can be recorded, processed, and integrated bidirectionally in real time. This experimental setup enables an adjustable and well-monitored coupling while providing access to key functional features of neural networks. We demonstrate the feasibility of functionally coupling the two networks and of implementing control circuits to modify the biohybrid activity. Overall, we provide an experimental model for neuromorphic-neural interfaces, with the aim of advancing the capability to interface with neural activity, and with its irregularities in pathology.
Affiliation(s)
- Hanna Keren
- Department of Physiology, Biophysics and Systems Biology, Ruth and Bruce Rappaport Faculty of Medicine, Technion - Israel Institute of Technology, Haifa, Israel
- Network Biology Research Laboratory, Faculty of Electrical Engineering, Technion - Israel Institute of Technology, Haifa, Israel
- Institute of Circuits and Systems, Faculty of Electrical and Computer Engineering, School of Engineering Sciences, Dresden University of Technology, Dresden, Germany
- Johannes Partzsch
- Institute of Circuits and Systems, Faculty of Electrical and Computer Engineering, School of Engineering Sciences, Dresden University of Technology, Dresden, Germany
- Shimon Marom
- Department of Physiology, Biophysics and Systems Biology, Ruth and Bruce Rappaport Faculty of Medicine, Technion - Israel Institute of Technology, Haifa, Israel
- Network Biology Research Laboratory, Faculty of Electrical Engineering, Technion - Israel Institute of Technology, Haifa, Israel
- Christian G Mayr
- Institute of Circuits and Systems, Faculty of Electrical and Computer Engineering, School of Engineering Sciences, Dresden University of Technology, Dresden, Germany
10. Thakur CS, Molin JL, Cauwenberghs G, Indiveri G, Kumar K, Qiao N, Schemmel J, Wang R, Chicca E, Olson Hasler J, Seo JS, Yu S, Cao Y, van Schaik A, Etienne-Cummings R. Large-Scale Neuromorphic Spiking Array Processors: A Quest to Mimic the Brain. Front Neurosci 2018; 12:891. PMID: 30559644; PMCID: PMC6287454; DOI: 10.3389/fnins.2018.00891.
Abstract
Neuromorphic engineering (NE) encompasses a diverse range of approaches to information processing that are inspired by neurobiological systems, and this feature distinguishes neuromorphic systems from conventional computing systems. The brain has evolved over billions of years to solve difficult engineering problems by using efficient, parallel, low-power computation. The goal of NE is to design systems capable of brain-like computation. Numerous large-scale neuromorphic projects have emerged recently. This interdisciplinary field was listed among the top 10 technology breakthroughs of 2014 by the MIT Technology Review and among the top 10 emerging technologies of 2015 by the World Economic Forum. NE has two goals: first, a scientific goal to understand the computational properties of biological neural systems by using models implemented in integrated circuits (ICs); second, an engineering goal to exploit the known properties of biological systems to design and implement efficient devices for engineering applications. Building hardware neural emulators can be extremely useful for simulating large-scale neural models to explain how intelligent behavior arises in the brain. The principal advantages of neuromorphic emulators are that they are highly energy efficient, parallel and distributed, and require a small silicon area. Thus, compared to conventional CPUs, these neuromorphic emulators are beneficial in many engineering applications, such as the porting of deep learning algorithms for various recognition tasks. In this review article, we describe some of the most significant neuromorphic spiking emulators, compare the different architectures and approaches used by them, illustrate their advantages and drawbacks, and highlight the capabilities that each can deliver to neural modelers. This article focuses on the discussion of large-scale emulators and is a continuation of a previous review of various neural and synapse circuits (Indiveri et al., 2011). We also explore applications where these emulators have been used and discuss some of their promising future applications.
Affiliation(s)
- Chetan Singh Thakur
- Department of Electronic Systems Engineering, Indian Institute of Science, Bangalore, India
- Jamal Lottier Molin
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, United States
- Gert Cauwenberghs
- Department of Bioengineering and Institute for Neural Computation, University of California, San Diego, La Jolla, CA, United States
- Giacomo Indiveri
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Kundan Kumar
- Department of Electronic Systems Engineering, Indian Institute of Science, Bangalore, India
- Ning Qiao
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Johannes Schemmel
- Kirchhoff Institute for Physics, University of Heidelberg, Heidelberg, Germany
- Runchun Wang
- The MARCS Institute, Western Sydney University, Kingswood, NSW, Australia
- Elisabetta Chicca
- Cognitive Interaction Technology – Center of Excellence, Bielefeld University, Bielefeld, Germany
- Jennifer Olson Hasler
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States
- Jae-sun Seo
- School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, AZ, United States
- Shimeng Yu
- School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, AZ, United States
- Yu Cao
- School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, AZ, United States
- André van Schaik
- The MARCS Institute, Western Sydney University, Kingswood, NSW, Australia
- Ralph Etienne-Cummings
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, United States
11. Aamir SA, Müller P, Kiene G, Kriener L, Stradmann Y, Grübl A, Schemmel J, Meier K. A Mixed-Signal Structured AdEx Neuron for Accelerated Neuromorphic Cores. IEEE Trans Biomed Circuits Syst 2018; 12:1027-1037. PMID: 30047897; DOI: 10.1109/tbcas.2018.2848203.
Abstract
Here, we describe a multi-compartment neuron circuit based on the adaptive exponential integrate-and-fire (AdEx) model, developed for the second-generation BrainScaleS hardware. Based on an existing modular leaky integrate-and-fire (LIF) architecture designed in 65 nm CMOS, the circuit features exponential spike generation, neuronal adaptation, and intercompartmental connections, as well as a conductance-based reset. The design reproduces a diverse set of firing patterns observed in cortical pyramidal neurons. Further, it enables the emulation of sodium and calcium spikes, as well as N-methyl-D-aspartate plateau potentials known from apical and thin dendrites. We characterize the AdEx circuit extensions and exemplify how the interplay between passive and nonlinear active signal processing enhances the computational capabilities of single (but structured) on-chip neurons.
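For reference, the standard AdEx dynamics that such circuits emulate consist of a membrane equation with an exponential spike-generation term plus an adaptation current. The sketch below uses common textbook parameter values for a regular-spiking neuron, not the chip's calibration, and a simple Euler integrator.

```python
import math

# Standard AdEx (adaptive exponential integrate-and-fire) model, Euler
# integration. Parameters are common textbook values for regular spiking,
# not the BrainScaleS circuit's calibration.
C, G_L, E_L = 281e-12, 30e-9, -70.6e-3     # capacitance (F), leak (S), rest (V)
V_T, DELTA_T = -50.4e-3, 2e-3              # threshold and slope factor (V)
A, TAU_W, B = 4e-9, 144e-3, 80.5e-12       # adaptation coupling, tau, jump
V_RESET, V_SPIKE = -70.6e-3, 20e-3         # reset value and spike cutoff (V)

def step(v, w, i_ext, dt=1e-4):
    """One Euler step; returns (v, w, spiked)."""
    dv = (-G_L * (v - E_L)
          + G_L * DELTA_T * math.exp((v - V_T) / DELTA_T)  # spike initiation
          - w + i_ext) / C
    dw = (A * (v - E_L) - w) / TAU_W                       # adaptation current
    v, w = v + dt * dv, w + dt * dw
    if v >= V_SPIKE:                # spike: reset membrane, bump adaptation
        return V_RESET, w + B, True
    return v, w, False

# A suprathreshold step current makes the model fire repetitively,
# with the inter-spike interval lengthening as w accumulates.
v, w, spikes = E_L, 0.0, 0
for _ in range(5000):               # 500 ms of simulated time
    v, w, fired = step(v, w, i_ext=1e-9)
    spikes += fired
```

The "conductance-based reset" and multi-compartment coupling in the abstract are circuit extensions beyond this basic model.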
12. Wang R, van Schaik A. Breaking Liebig's Law: An Advanced Multipurpose Neuromorphic Engine. Front Neurosci 2018; 12:593. PMID: 30210278; PMCID: PMC6123369; DOI: 10.3389/fnins.2018.00593.
Abstract
We present a massively parallel, scalable, multi-purpose neuromorphic engine. All existing neuromorphic hardware systems suffer from Liebig's law (that the performance of a system is limited by the component in shortest supply), as they have fixed numbers of dedicated neurons and synapses for specific types of plasticity. For any application, it is always the availability of one of these components that limits the size of the model, leaving the others unused. To overcome this problem, our engine adopts a novel architecture: an array of identical components, each of which can be configured as a leaky integrate-and-fire (LIF) neuron, a learning synapse, or an axon with trainable delay. Spike-timing-dependent plasticity (STDP) and spike-timing-dependent delay plasticity (STDDP) are the two supported learning rules. All parameters are stored in SRAMs, so that runtime reconfiguration is supported. As a proof of concept, we have implemented a prototype system with 16 neural engines, each of which consists of 32768 (32k) components, yielding half a million components, on an entry-level FPGA (Altera Cyclone V). We verified the prototype system with measurement results. To demonstrate that our neuromorphic engine is a high-performance and scalable digital design, we implemented it in TSMC 28 nm HPC technology. Place-and-route results using Cadence Innovus at a clock frequency of 2.5 GHz show that this engine achieves an excellent area efficiency of 1.68 μm² per component: 256k (2¹⁸) components in a silicon area of 650 μm × 680 μm (∼0.44 mm², with 98.7% utilization of the silicon area). The power consumption of this engine is 37 mW, yielding a power efficiency of 0.92 pJ per synaptic operation (SOP).
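The two power figures quoted above are mutually consistent: 37 mW at 0.92 pJ per synaptic operation implies a sustained rate of roughly 40 billion synaptic operations per second, as the arithmetic below checks.

```python
# Consistency check on the reported numbers: power / energy-per-operation
# gives the implied synaptic-operation throughput.
power_w = 37e-3            # 37 mW
energy_per_sop_j = 0.92e-12  # 0.92 pJ/SOP

sops_per_second = power_w / energy_per_sop_j  # ~4.0e10 SOP/s
```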
Affiliation(s)
- Runchun Wang
- The MARCS Institute, Western Sydney University, Sydney, NSW, Australia
- André van Schaik
- The MARCS Institute, Western Sydney University, Sydney, NSW, Australia
13. Wang RM, Thakur CS, van Schaik A. An FPGA-Based Massively Parallel Neuromorphic Cortex Simulator. Front Neurosci 2018; 12:213. PMID: 29692702; PMCID: PMC5902707; DOI: 10.3389/fnins.2018.00213.
Abstract
This paper presents a massively parallel and scalable neuromorphic cortex simulator designed for simulating large and structurally connected spiking neural networks, such as complex models of various areas of the cortex. The main novelty of this work is the abstraction of a neuromorphic architecture into clusters represented by minicolumns and hypercolumns, analogous to the fundamental structural units observed in neurobiology. Without this approach, simulating large-scale fully connected networks would require prohibitively large memory to store look-up tables for point-to-point connections. Instead, we use a novel architecture, based on the structural connectivity of the neocortex, such that all the required parameters and connections can be stored in on-chip memory. The cortex simulator can easily be reconfigured to simulate different neural networks without any change in hardware structure, simply by reprogramming the memory. A hierarchical communication scheme allows one neuron to have a fan-out of up to 200k neurons. As a proof of concept, an implementation on one Altera Stratix V FPGA was able to simulate 20 million to 2.6 billion leaky-integrate-and-fire (LIF) neurons in real time. We verified the system by emulating a simplified auditory cortex (with 100 million neurons). The cortex simulator achieved a low power dissipation of 1.62 μW per neuron. With the advent of commercially available FPGA boards, our system offers an accessible and scalable tool for the design, real-time simulation, and analysis of large-scale spiking neural networks.
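The memory argument above can be made concrete with a back-of-the-envelope comparison: per-synapse look-up tables scale with total fan-out, while cluster-level connectivity rules scale only with the number of minicolumn/hypercolumn clusters. All numbers and the parameters-per-rule count below are illustrative assumptions, not figures from the paper.

```python
def table_entries(n_neurons, fan_out):
    # Point-to-point tables: one entry per (source, target) pair.
    return n_neurons * fan_out

def rule_entries(n_hypercolumns, n_minicolumns_per_hc, params_per_rule=4):
    # Cluster-based scheme: one parameterized rule per minicolumn cluster.
    return n_hypercolumns * n_minicolumns_per_hc * params_per_rule

full = table_entries(100_000_000, 1000)   # 1e11 entries: prohibitive off-chip
compact = rule_entries(10_000, 100)       # 4e6 entries: fits in on-chip memory
```

Under these assumed numbers the rule-based scheme needs about five orders of magnitude fewer stored entries, which is the kind of reduction that makes on-chip storage feasible.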
Affiliation(s)
- Runchun M Wang
- The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia
- Chetan S Thakur
- Department of Electronic Systems Engineering, Indian Institute of Science, Bangalore, India
- André van Schaik
- The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia
14
Park J, Yu T, Joshi S, Maier C, Cauwenberghs G. Hierarchical Address Event Routing for Reconfigurable Large-Scale Neuromorphic Systems. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2017; 28:2408-2422. [PMID: 27483491 DOI: 10.1109/tnnls.2016.2572164] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
We present a hierarchical address-event routing (HiAER) architecture for scalable communication of neural and synaptic spike events between neuromorphic processors, implemented with five Xilinx Spartan-6 field-programmable gate arrays and four custom analog neuromorphic integrated circuits serving 262k neurons and 262M synapses. The architecture extends the single-bus address-event representation protocol to a hierarchy of multiple nested buses, routing events across increasing scales of spatial distance. The HiAER protocol provides individually programmable axonal delay in addition to strength for each synapse, lending itself toward biologically plausible neural network architectures, and scales across a range of hierarchies suitable for multichip and multiboard systems in reconfigurable large-scale neuromorphic systems. We show approximately linear scaling of net global synaptic event throughput with the number of routing nodes in the network, at 3.6 × 10⁷ synaptic events per second per 16k-neuron node in the hierarchy.
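The reported scaling can be stated as a one-line model: net throughput grows approximately linearly with the number of 16k-neuron routing nodes, at 3.6 × 10⁷ synaptic events per second per node (the per-node figure is from the abstract; treating the scaling as exactly linear is a simplifying assumption).

```python
def net_throughput(n_nodes, events_per_node=3.6e7):
    """Approximate net synaptic event throughput (events/s) for a HiAER-style
    hierarchy, assuming the abstract's linear scaling holds exactly."""
    return n_nodes * events_per_node

# 262k neurons / 16k neurons per node ~ 16 nodes:
system_throughput = net_throughput(16)
```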
Affiliation(s)
- Jongkil Park
- Department of Electrical and Computer Engineering, Jacobs School of Engineering, Institute of Neural Computation, University of California at San Diego, La Jolla, CA, USA
- Siddharth Joshi
- Department of Electrical and Computer Engineering, Jacobs School of Engineering, Institute of Neural Computation, University of California at San Diego, La Jolla, CA, USA
- Christoph Maier
- Institute of Neural Computation, University of California at San Diego, La Jolla, CA, USA
- Gert Cauwenberghs
- Department of Bioengineering, Jacobs School of Engineering, Institute of Neural Computation, University of California at San Diego, La Jolla, CA, USA
15
A real-time FPGA implementation of a biologically inspired central pattern generator network. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2017.03.028] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
16
Abstract
Conventional hardware platforms consume huge amounts of energy for cognitive learning due to the data movement between the processor and the off-chip memory. Brain-inspired device technologies using analogue weight storage allow cognitive tasks to be completed more efficiently. Here we present an analogue non-volatile resistive memory (an electronic synapse) with foundry-friendly materials. The device shows bidirectional, continuous weight-modulation behaviour. Grey-scale face classification is experimentally demonstrated using an integrated 1024-cell array with parallel online training. The energy consumption within the analogue synapses for each iteration is 1,000× (20×) lower compared to an implementation using an Intel Xeon Phi processor with off-chip memory (with hypothetical on-chip digital resistive random access memory). The accuracy on test sets is close to the result using a central processing unit. These experimental results consolidate the feasibility of the analogue synaptic array and pave the way toward building an energy-efficient and large-scale neuromorphic system. Using chips that mimic the human brain to perform cognitive tasks, namely neuromorphic computing, calls for low-power and high-efficiency hardware. Here, Yao et al. show on-chip analogue weight storage by integrating non-volatile resistive memory into a CMOS platform and test it in facial recognition.
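The bidirectional continuous weight modulation described above can be sketched as a conductance nudged up or down by programming pulses and clipped to the device range. The step size and conductance bounds below are illustrative assumptions, not device parameters from the work.

```python
def apply_pulse(g, direction, step=0.01, g_min=0.0, g_max=1.0):
    """Apply one potentiating (+1) or depressing (-1) pulse to an analogue
    memristive cell, clipping to the assumed conductance range."""
    return min(g_max, max(g_min, g + direction * step))

g = 0.5
g = apply_pulse(g, +1)   # one potentiating pulse
g = apply_pulse(g, -1)   # one depressing pulse returns near the start value
```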
17

18
Haghiri S, Ahmadi A, Saif M. Complete Neuron-Astrocyte Interaction Model: Digital Multiplierless Design and Networking Mechanism. IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS 2017; 11:117-127. [PMID: 27662685 DOI: 10.1109/tbcas.2016.2583920] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Glial cells, also known as neuroglia or glia, are non-neuronal cells providing support and protection for neurons in the central nervous system (CNS). Among the variety of glial cells, the star-shaped ones, i.e., astrocytes, are the largest cell population in the brain. Their important functions, such as neuronal synchronization, synaptic information regulation, feedback to neural activity, and extracellular regulation, give astrocytes a vital role in brain disease. This paper presents a modified complete neuron-astrocyte interaction model that is more suitable for efficient and large-scale biological neural network realization on digital platforms. Simulation results show that the modified complete interaction model can reproduce biological-like behavior of the original neuron-astrocyte mechanism. The modified interaction model is investigated in terms of digital realization feasibility and cost, targeting a low-cost hardware implementation. Networking behavior of this interaction is investigated and compared between two cases: i) the neuron spiking mechanism without astrocyte effects, and ii) the effect of the astrocyte in regulating neuron behavior and synaptic transmission via controlling the LTP and LTD processes. Hardware implementation on FPGA shows that the modified model mimics the main mechanism of neuron-astrocyte communication with higher performance and considerably lower hardware overhead cost compared with the original interaction model.
19
Hayati M, Nouri M, Haghiri S, Abbott D. A Digital Realization of Astrocyte and Neural Glial Interactions. IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS 2016; 10:518-529. [PMID: 26390499 DOI: 10.1109/tbcas.2015.2450837] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
The implementation of biological neural networks is a key objective of the neuromorphic research field. Astrocytes are the largest cell population in the brain. With the discovery of calcium wave propagation through astrocyte networks, it is now more evident that neuronal networks alone may not explain the functionality of the strongest natural computer, the brain. Models of cortical function must now account for astrocyte activities as well as their relationships with neurons in the encoding and manipulation of sensory information. From an engineering viewpoint, astrocytes provide feedback to both presynaptic and postsynaptic neurons to regulate their signaling behaviors. This paper presents a modified neural glial interaction model that allows a convenient digital implementation. The model can reproduce relevant biological astrocyte behaviors, which provide appropriate feedback control in regulating neuronal activities in the central nervous system (CNS). Accordingly, we investigate the feasibility of a digital implementation of a single astrocyte, constructed by connecting two coupled FitzHugh-Nagumo (FHN) neuron models to an implementation of the proposed astrocyte model using neuron-astrocyte interactions. Hardware synthesis, physical implementation on FPGA, and theoretical analysis confirm that the proposed neuron-astrocyte model, with significantly low hardware cost, can mimic biological behavior such as the regulation of postsynaptic neuron activity and the synaptic transmission mechanisms.
20
Digital implementations of thalamocortical neuron models and their application in thalamocortical control using FPGA for Parkinson's disease. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2015.11.026] [Citation(s) in RCA: 33] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
21
Mayr C, Partzsch J, Noack M, Hänzsche S, Scholze S, Höppner S, Ellguth G, Schüffny R. A Biological-Realtime Neuromorphic System in 28 nm CMOS Using Low-Leakage Switched Capacitor Circuits. IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS 2016; 10:243-254. [PMID: 25680215 DOI: 10.1109/tbcas.2014.2379294] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
A switched-capacitor (SC) neuromorphic system for closed-loop neural coupling in 28 nm CMOS is presented, occupying 600 μm × 600 μm. It offers 128 input channels (i.e., presynaptic terminals), 8192 synapses, and 64 output channels (i.e., neurons). Biologically realistic neuron and synapse dynamics are achieved via a faithful translation of the behavioural equations to SC circuits. As leakage currents significantly affect circuit behaviour at this technology node, dedicated compensation techniques are employed to achieve biological-realtime operation, with faithful reproduction of time constants of several hundred milliseconds at room temperature. The power draw of the overall system is 1.9 mW.
22
Partzsch J, Schüffny R. Network-driven design principles for neuromorphic systems. Front Neurosci 2015; 9:386. [PMID: 26539079 PMCID: PMC4611986 DOI: 10.3389/fnins.2015.00386] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2015] [Accepted: 10/05/2015] [Indexed: 11/17/2022] Open
Abstract
Synaptic connectivity is typically the most resource-demanding part of neuromorphic systems. Commonly, the architecture of these systems is chosen mainly on technical considerations. As a consequence, the potential for optimization arising from the inherent constraints of connectivity models is left unused. In this article, we develop an alternative, network-driven approach to neuromorphic architecture design. We describe methods to analyse the performance of existing neuromorphic architectures in emulating certain connectivity models. Furthermore, we show step by step how to derive a neuromorphic architecture from a given connectivity model. For this, we introduce a generalized description for architectures with a synapse matrix, which takes into account shared use of circuit components to reduce total silicon area. Architectures designed with this approach are fitted to a connectivity model, essentially adapting to its connection density. They guarantee faithful reproduction of the model on chip, while requiring less total silicon area. In total, our methods allow designers to implement more area-efficient neuromorphic systems and to verify usability of the connectivity resources in these systems.
Affiliation(s)
- Johannes Partzsch
- Chair for Highly Parallel VLSI Systems and Neuromorphic Circuits, Department of Electrical Engineering and Information Technology, Technische Universität Dresden, Dresden, Germany
- Rene Schüffny
- Chair for Highly Parallel VLSI Systems and Neuromorphic Circuits, Department of Electrical Engineering and Information Technology, Technische Universität Dresden, Dresden, Germany
23
Orchard G, Meyer C, Etienne-Cummings R, Posch C, Thakor N, Benosman R. HFirst: A Temporal Approach to Object Recognition. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2015; 37:2028-2040. [PMID: 26353184 DOI: 10.1109/tpami.2015.2392947] [Citation(s) in RCA: 77] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
This paper introduces a spiking hierarchical model for object recognition which utilizes the precise timing information inherently present in the output of biologically inspired asynchronous address event representation (AER) vision sensors. The asynchronous nature of these systems frees computation and communication from the rigid predetermined timing enforced by system clocks in conventional systems. Freedom from rigid timing constraints opens the possibility of using true timing to our advantage in computation. We show not only how timing can be used in object recognition, but also how it can in fact simplify computation. Specifically, we rely on a simple temporal winner-take-all rather than the more computationally intensive synchronous operations typically used in biologically inspired neural networks for object recognition. This approach to visual computation represents a major paradigm shift from conventional clocked systems and can find application in other sensory modalities and computational tasks. We showcase the effectiveness of the approach by achieving the highest reported accuracy to date (97.5% ± 3.5%) for a previously published four-class card pip recognition task and an accuracy of 84.9% ± 1.9% for a new, more difficult 36-class character recognition task.
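The temporal winner-take-all the abstract relies on reduces to a very cheap operation: among competing units, the first to spike wins. The `(unit_id, spike_time)` tuple representation below is an assumption made for illustration, not the paper's data format.

```python
def temporal_wta(spikes):
    """Temporal winner-take-all: return the unit with the earliest spike.

    spikes: iterable of (unit_id, spike_time) pairs; assumes at least one.
    """
    return min(spikes, key=lambda s: s[1])[0]

winner = temporal_wta([("A", 4.2), ("B", 1.7), ("C", 3.0)])  # -> "B"
```

Compared with the sum-and-compare operations of a synchronous max, this needs only a single pass over spike arrival order, which is the computational saving the paper exploits.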
24
Stromatias E, Neil D, Pfeiffer M, Galluppi F, Furber SB, Liu SC. Robustness of spiking Deep Belief Networks to noise and reduced bit precision of neuro-inspired hardware platforms. Front Neurosci 2015. [PMID: 26217169 PMCID: PMC4496577 DOI: 10.3389/fnins.2015.00222] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
Abstract
Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs), are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. The ongoing work on design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations, are studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision, down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time.
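The bit-precision constraint studied above can be illustrated with simple uniform weight quantization; this is a common generic scheme, and the exact quantization used in the paper may differ.

```python
def quantize(w, bits, w_max=1.0):
    """Uniformly quantize a weight in [-w_max, w_max] to 2**bits levels.

    Illustrative sketch only; level placement is an assumption.
    """
    levels = 2 ** bits - 1
    step = 2 * w_max / levels
    return round((w + w_max) / step) * step - w_max

# At 2 bits the representable weights are just -1, -1/3, 1/3, and 1:
w_q = quantize(0.37, 2)
```

That a trained network still functions when every weight is snapped to one of four such levels is the (perhaps surprising) robustness result the abstract reports.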
Affiliation(s)
- Evangelos Stromatias
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, UK
- Daniel Neil
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Michael Pfeiffer
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Francesco Galluppi
- Centre National de la Recherche Scientifique UMR 7210, Equipe de Vision et Calcul Naturel, Vision Institute, UMR S968 Inserm, CHNO des Quinze-Vingts, Université Pierre et Marie Curie, Paris, France
- Steve B Furber
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, UK
- Shih-Chii Liu
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
25
Thomas A, Niehörster S, Fabretti S, Shepheard N, Kuschel O, Küpper K, Wollschläger J, Krzysteczko P, Chicca E. Tunnel junction based memristors as artificial synapses. Front Neurosci 2015; 9:241. [PMID: 26217173 PMCID: PMC4493388 DOI: 10.3389/fnins.2015.00241] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2014] [Accepted: 06/24/2015] [Indexed: 11/30/2022] Open
Abstract
We prepared magnesia, tantalum oxide, and barium titanate based tunnel junction structures and investigated their memristive properties. The low amplitudes of the resistance change in these types of junctions are the major obstacle for their use. Here, we increased the amplitude of the resistance change from 10% up to 100%. Utilizing the memristive properties, we looked into the use of the junction structures as artificial synapses. We observed analogs of long-term potentiation, long-term depression and spike-time dependent plasticity in these simple two terminal devices. Finally, we suggest a possible pathway of these devices toward their integration in neuromorphic systems for storing analog synaptic weights and supporting the implementation of biologically plausible learning mechanisms.
Affiliation(s)
- Andy Thomas
- Thin Films and Physics of Nanostructures, Bielefeld University, Bielefeld, Germany
- IFW Dresden, Institute for Metallic Materials, Dresden, Germany
- Stefan Niehörster
- Thin Films and Physics of Nanostructures, Bielefeld University, Bielefeld, Germany
- Savio Fabretti
- Thin Films and Physics of Nanostructures, Bielefeld University, Bielefeld, Germany
- Norman Shepheard
- Thin Films and Physics of Nanostructures, Bielefeld University, Bielefeld, Germany
- Cognitive Interaction Technology Center of Excellence and Faculty of Technology, Bielefeld University, Bielefeld, Germany
- Olga Kuschel
- Fachbereich Physik and Center of Physics and Chemistry of New Materials, Osnabrück University, Osnabrück, Germany
- Karsten Küpper
- Fachbereich Physik and Center of Physics and Chemistry of New Materials, Osnabrück University, Osnabrück, Germany
- Joachim Wollschläger
- Fachbereich Physik and Center of Physics and Chemistry of New Materials, Osnabrück University, Osnabrück, Germany
- Patryk Krzysteczko
- Thin Films and Physics of Nanostructures, Bielefeld University, Bielefeld, Germany
- Physikalisch-Technische Bundesanstalt, Braunschweig, Germany
- Elisabetta Chicca
- Cognitive Interaction Technology Center of Excellence and Faculty of Technology, Bielefeld University, Bielefeld, Germany
26
Wang RM, Hamilton TJ, Tapson JC, van Schaik A. A neuromorphic implementation of multiple spike-timing synaptic plasticity rules for large-scale neural networks. Front Neurosci 2015; 9:180. [PMID: 26041985 PMCID: PMC4438254 DOI: 10.3389/fnins.2015.00180] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2014] [Accepted: 05/06/2015] [Indexed: 11/24/2022] Open
Abstract
We present a neuromorphic implementation of multiple synaptic plasticity learning rules, which include both Spike Timing Dependent Plasticity (STDP) and Spike Timing Dependent Delay Plasticity (STDDP). We present a fully digital implementation as well as a mixed-signal implementation, both of which use a novel dynamic-assignment time-multiplexing approach and support up to 2²⁶ (64M) synaptic plasticity elements. Rather than implementing dedicated synapses for particular types of synaptic plasticity, we implemented a more generic synaptic plasticity adaptor array that is separate from the neurons in the neural network. Each adaptor performs synaptic plasticity according to the arrival times of the pre- and post-synaptic spikes assigned to it, and sends out a weighted or delayed pre-synaptic spike to the post-synaptic neuron in the neural network. This strategy provides great flexibility for building complex large-scale neural networks, as a neural network can be configured for multiple synaptic plasticity rules without changing its structure. We validate the proposed neuromorphic implementations with measurement results and illustrate that the circuits are capable of performing both STDP and STDDP. We argue that it is practical to scale the work presented here up to 2³⁶ (64G) synaptic adaptors on a current high-end FPGA platform.
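The pair-based weight update an STDP adaptor performs from pre- and post-synaptic spike arrival times can be sketched with the standard exponential-window rule; the amplitudes and time constant below are illustrative assumptions, not the paper's circuit parameters.

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP weight change from one pre/post spike pair (times in ms).

    Exponential learning window; parameter values are illustrative.
    """
    dt = t_post - t_pre
    if dt > 0:   # pre before post: potentiation
        return a_plus * math.exp(-dt / tau)
    else:        # post before (or with) pre: depression
        return -a_minus * math.exp(dt / tau)

dw_pot = stdp_dw(10.0, 15.0)   # positive: potentiation
dw_dep = stdp_dw(15.0, 10.0)   # negative: depression
```

An STDDP adaptor would apply the same timing comparison to an axonal delay rather than a weight, which is why the paper can serve both rules from one generic adaptor array.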
Affiliation(s)
- Runchun M Wang
- The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia
- Tara J Hamilton
- The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia
- Jonathan C Tapson
- The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia
- André van Schaik
- The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia
27
Irizarry-Valle Y, Parker AC. An astrocyte neuromorphic circuit that influences neuronal phase synchrony. IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS 2015; 9:175-187. [PMID: 25934997 DOI: 10.1109/tbcas.2015.2417580] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Neuromorphic circuits are designed and simulated to emulate the role of astrocytes in phase synchronization of neuronal activity. We emulate, to a first order, the ability of slow inward currents (SICs) evoked by the astrocyte, acting on extrasynaptic N-methyl-D-aspartate receptors (NMDAR) of adjacent neurons, as a mechanism for phase synchronization. We run a simulation test incorporating two small networks of neurons interacting with astrocytic microdomains. These microdomains are designed using a resistive and capacitive ladder network and their interactions occur through pass transistors. Upon enough synaptic activity, the astrocytic microdomains interact with each other, generating SIC events on synapses of adjacent neurons. Since the amplitude of SICs is several orders of magnitude larger compared to synaptic currents, a SIC event drastically enhances the excitatory postsynaptic potential (EPSP) on adjacent neurons simultaneously. This causes neurons to fire synchronously in phase. Phase synchrony holds for a duration of time proportional to the time constant of the SIC decay. Once the SIC decay has completed, the neurons are able to go back to their natural phase difference, inducing desynchronization of their firing of spikes. This paper incorporates some biological aspects observed by recent experiments showing astrocytic influence on neuronal synchronization, and intends to offer a circuit view on the hypothesis of astrocytic role on synchronous activity that could potentially lead to the binding of neuronal information.
28
Noack M, Partzsch J, Mayr CG, Hänzsche S, Scholze S, Höppner S, Ellguth G, Schüffny R. Switched-capacitor realization of presynaptic short-term-plasticity and stop-learning synapses in 28 nm CMOS. Front Neurosci 2015; 9:10. [PMID: 25698914 PMCID: PMC4313588 DOI: 10.3389/fnins.2015.00010] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2014] [Accepted: 01/09/2015] [Indexed: 11/13/2022] Open
Abstract
Synaptic dynamics, such as long- and short-term plasticity, play an important role in the complexity and biological realism achievable when running neural networks on a neuromorphic IC. For example, they endow the IC with an ability to adapt and learn from its environment. In order to achieve the millisecond to second time constants required for these synaptic dynamics, analog subthreshold circuits are usually employed. However, due to process variation and leakage problems, it is almost impossible to port these types of circuits to modern sub-100 nm technologies. In contrast, we present a neuromorphic system in a 28 nm CMOS process that employs switched-capacitor (SC) circuits to implement 128 short-term-plasticity presynapses as well as 8192 stop-learning synapses. The neuromorphic system consumes an area of 0.36 mm² and runs at a power consumption of 1.9 mW. The circuit makes use of a technique for minimizing leakage effects, allowing for real-time operation with time constants up to several seconds. Since we rely on SC techniques for all calculations, the system is composed of only generic mixed-signal building blocks. These generic building blocks make the system easy to port between technologies, and the large digital circuit part inherent in an SC system benefits fully from technology scaling.
Affiliation(s)
- Marko Noack
- Chair of Highly-Parallel VLSI-Systems and Neuromorphic Circuits, Technische Universität Dresden, Dresden, Germany
- Johannes Partzsch
- Chair of Highly-Parallel VLSI-Systems and Neuromorphic Circuits, Technische Universität Dresden, Dresden, Germany
- Christian G. Mayr
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Stefan Hänzsche
- Chair of Highly-Parallel VLSI-Systems and Neuromorphic Circuits, Technische Universität Dresden, Dresden, Germany
- Stefan Scholze
- Chair of Highly-Parallel VLSI-Systems and Neuromorphic Circuits, Technische Universität Dresden, Dresden, Germany
- Sebastian Höppner
- Chair of Highly-Parallel VLSI-Systems and Neuromorphic Circuits, Technische Universität Dresden, Dresden, Germany
- Georg Ellguth
- Chair of Highly-Parallel VLSI-Systems and Neuromorphic Circuits, Technische Universität Dresden, Dresden, Germany
- Rene Schüffny
- Chair of Highly-Parallel VLSI-Systems and Neuromorphic Circuits, Technische Universität Dresden, Dresden, Germany
29
Soleimani H, Bavandpour M, Ahmadi A, Abbott D. Digital implementation of a biological astrocyte model and its application. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2015; 26:127-139. [PMID: 25532161 DOI: 10.1109/tnnls.2014.2311839] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
This paper presents a modified astrocyte model that allows a convenient digital implementation. This model is aimed at reproducing relevant biological astrocyte behaviors, which provide appropriate feedback control in regulating neuronal activities in the central nervous system. Accordingly, we investigate the feasibility of a digital implementation for a single astrocyte and a biological neuronal network model constructed by connecting two limit-cycle Hopf oscillators to an implementation of the proposed astrocyte model using oscillator-astrocyte interactions with weak coupling. Hardware synthesis, physical implementation on field-programmable gate array, and theoretical analysis confirm that the proposed astrocyte model, with considerably low hardware overhead, can mimic biological astrocyte model behaviors, resulting in desynchronization of the two coupled limit-cycle oscillators.
30
Petrovici MA, Vogginger B, Müller P, Breitwieser O, Lundqvist M, Muller L, Ehrlich M, Destexhe A, Lansner A, Schüffny R, Schemmel J, Meier K. Characterization and compensation of network-level anomalies in mixed-signal neuromorphic modeling platforms. PLoS One 2014; 9:e108590. [PMID: 25303102 PMCID: PMC4193761 DOI: 10.1371/journal.pone.0108590] [Citation(s) in RCA: 32] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2014] [Accepted: 08/22/2014] [Indexed: 11/18/2022] Open
Abstract
Advancing the size and complexity of neural network models leads to an ever increasing demand for computational resources for their simulation. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices, more specifically limited hardware resources, limited parameter configurability and parameter variations due to fixed-pattern noise and trial-to-trial variability. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require additional hardware configurability beyond the one required to emulate the benchmark networks in the first place. We hereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks.
Affiliation(s)
- Mihai A. Petrovici
- Ruprecht-Karls-Universität Heidelberg, Kirchhoff Institute for Physics, Heidelberg, Germany
- Bernhard Vogginger
- Technische Universität Dresden, Institute of Circuits and Systems, Dresden, Germany
- Paul Müller
- Ruprecht-Karls-Universität Heidelberg, Kirchhoff Institute for Physics, Heidelberg, Germany
- Oliver Breitwieser
- Ruprecht-Karls-Universität Heidelberg, Kirchhoff Institute for Physics, Heidelberg, Germany
- Mikael Lundqvist
- Department of Computational Biology, School of Computer Science and Communication, Stockholm University and Royal Institute of Technology, Stockholm, Sweden
- Lyle Muller
- CNRS, Unité de Neuroscience, Information et Complexité, Gif-sur-Yvette, France
- Matthias Ehrlich
- Technische Universität Dresden, Institute of Circuits and Systems, Dresden, Germany
- Alain Destexhe
- CNRS, Unité de Neuroscience, Information et Complexité, Gif-sur-Yvette, France
- Anders Lansner
- Department of Computational Biology, School of Computer Science and Communication, Stockholm University and Royal Institute of Technology, Stockholm, Sweden
- René Schüffny
- Technische Universität Dresden, Institute of Circuits and Systems, Dresden, Germany
- Johannes Schemmel
- Ruprecht-Karls-Universität Heidelberg, Kirchhoff Institute for Physics, Heidelberg, Germany
- Karlheinz Meier
- Ruprecht-Karls-Universität Heidelberg, Kirchhoff Institute for Physics, Heidelberg, Germany
31
Closed-loop brain-machine-body interfaces for noninvasive rehabilitation of movement disorders. Ann Biomed Eng 2014; 42:1573-93. PMID: 24833254; DOI: 10.1007/s10439-014-1032-6.
Abstract
Traditional approaches for neurological rehabilitation of patients affected with movement disorders, such as Parkinson's disease (PD), dystonia, and essential tremor (ET) consist mainly of oral medication, physical therapy, and botulinum toxin injections. Recently, the more invasive method of deep brain stimulation (DBS) showed significant improvement of the physical symptoms associated with these disorders. In the past several years, the adoption of feedback control theory helped DBS protocols to take into account the progressive and dynamic nature of these neurological movement disorders that had largely been ignored so far. As a result, a more efficient and effective management of PD cardinal symptoms has emerged. In this paper, we review closed-loop systems for rehabilitation of movement disorders, focusing on PD, for which several invasive and noninvasive methods have been developed during the last decade, reducing the complications and side effects associated with traditional rehabilitation approaches and paving the way for tailored individual therapeutics. We then present a novel, transformative, noninvasive closed-loop framework based on force neurofeedback and discuss several future developments of closed-loop systems that might bring us closer to individualized solutions for neurological rehabilitation of movement disorders.
32
Abstract
Machine-Learning tasks are becoming pervasive in a broad range of domains, and in a broad range of systems (from embedded systems to data centers). At the same time, a small set of machine-learning algorithms (especially Convolutional and Deep Neural Networks, i.e., CNNs and DNNs) are proving to be state-of-the-art across many applications. As architectures evolve towards heterogeneous multi-cores composed of a mix of cores and accelerators, a machine-learning accelerator can achieve the rare combination of efficiency (due to the small number of target algorithms) and broad application scope.
Until now, most machine-learning accelerator designs have focused on efficiently implementing the computational part of the algorithms. However, recent state-of-the-art CNNs and DNNs are characterized by their large size. In this study, we design an accelerator for large-scale CNNs and DNNs, with a special emphasis on the impact of memory on accelerator design, performance and energy.
We show that it is possible to design an accelerator with a high throughput, capable of performing 452 GOP/s (key NN operations such as synaptic weight multiplications and neuron output additions) in a small footprint of 3.02 mm² and 485 mW; compared to a 128-bit 2 GHz SIMD processor, the accelerator is 117.87x faster and can reduce the total energy by 21.08x. The accelerator characteristics are obtained after layout at 65 nm. Such a high throughput in a small footprint can open up the usage of state-of-the-art machine-learning algorithms in a broad set of systems and for a broad set of applications.
34
Wang RM, Hamilton TJ, Tapson JC, van Schaik A. A mixed-signal implementation of a polychronous spiking neural network with delay adaptation. Front Neurosci 2014; 8:51. PMID: 24672422; PMCID: PMC3957211; DOI: 10.3389/fnins.2014.00051.
Abstract
We present a mixed-signal implementation of a re-configurable polychronous spiking neural network capable of storing and recalling spatio-temporal patterns. The proposed neural network contains one neuron array and one axon array. Spike Timing Dependent Delay Plasticity is used to fine-tune delays and add dynamics to the network. In our mixed-signal implementation, the neurons and axons have been implemented as both analog and digital circuits. The system thus consists of one FPGA, containing the digital neuron array and the digital axon array, and one analog IC containing the analog neuron array and the analog axon array. The system can be easily configured to use different combinations of each. We present and discuss the experimental results of all combinations of the analog and digital axon arrays and the analog and digital neuron arrays. The test results show that the proposed neural network is capable of successfully recalling more than 85% of stored patterns using both analog and digital circuits.
Affiliation(s)
- Runchun M Wang
- Bioelectronics and Neuroscience, The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia
- Tara J Hamilton
- Bioelectronics and Neuroscience, The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia
- Jonathan C Tapson
- Bioelectronics and Neuroscience, The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia
- André van Schaik
- Bioelectronics and Neuroscience, The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia
35
Hsieh HY, Tang KT. Hardware friendly probabilistic spiking neural network with long-term and short-term plasticity. IEEE Trans Neural Netw Learn Syst 2013; 24:2063-2074. PMID: 24805223; DOI: 10.1109/tnnls.2013.2271644.
Abstract
This paper proposes a probabilistic spiking neural network (PSNN) with a unimodal weight distribution, possessing long- and short-term plasticity. The proposed algorithm is derived from both arithmetic gradient descent calculation and bioinspired algorithms. The algorithm is benchmarked on the Iris and Wisconsin breast cancer (WBC) data sets. The network features fast convergence and high accuracy. In the experiments, the PSNN took no more than 40 epochs to converge. The average testing accuracy for the Iris and WBC data is 96.7% and 97.2%, respectively. To test the usefulness of the PSNN in real-world applications, the PSNN was also tested with odor data collected by our self-developed electronic nose (e-nose). Compared with the algorithm (K-nearest neighbor) that has the highest classification accuracy in the e-nose for the same odor data, the classification accuracy of the PSNN is only 1.3% lower, but the memory requirement can be reduced by at least 40%. All the experiments suggest that the PSNN is hardware friendly. First, it requires only nine-bit weight resolution for training and testing. Second, the PSNN can learn complex data sets with a small number of neurons, which in turn reduces the cost of VLSI implementation. In addition, the algorithm is insensitive to synaptic noise and to the parameter variation induced by VLSI fabrication. Therefore, the algorithm can be implemented in either software or hardware, making it suitable for wider application.
36
Abstract
The quest to implement intelligent processing in electronic neuromorphic systems lacks methods for achieving reliable behavioral dynamics on substrates of inherently imprecise and noisy neurons. Here we report a solution to this problem that involves first mapping an unreliable hardware layer of spiking silicon neurons into an abstract computational layer composed of generic reliable subnetworks of model neurons and then composing the target behavioral dynamics as a "soft state machine" running on these reliable subnets. In the first step, the neural networks of the abstract layer are realized on the hardware substrate by mapping the neuron circuit bias voltages to the model parameters. This mapping is obtained by an automatic method in which the electronic circuit biases are calibrated against the model parameters by a series of population activity measurements. The abstract computational layer is formed by configuring neural networks as generic soft winner-take-all subnetworks that provide reliable processing by virtue of their active gain, signal restoration, and multistability. The necessary states and transitions of the desired high-level behavior are then easily embedded in the computational layer by introducing only sparse connections between some neurons of the various subnets. We demonstrate this synthesis method for a neuromorphic sensory agent that performs real-time context-dependent classification of motion patterns observed by a silicon retina.
37
Matsubara T, Torikai H. Asynchronous cellular automaton-based neuron: theoretical analysis and on-FPGA learning. IEEE Trans Neural Netw Learn Syst 2013; 24:736-748. PMID: 24808424; DOI: 10.1109/tnnls.2012.2230643.
Abstract
A generalized asynchronous cellular automaton-based neuron model is a special kind of cellular automaton that is designed to mimic the nonlinear dynamics of neurons. The model can be implemented as an asynchronous sequential logic circuit and its control parameter is the pattern of wires among the circuit elements that is adjustable after implementation in a field-programmable gate array (FPGA) device. In this paper, a novel theoretical analysis method for the model is presented. Using this method, stabilities of neuron-like orbits and occurrence mechanisms of neuron-like bifurcations of the model are clarified theoretically. Also, a novel learning algorithm for the model is presented. An equivalent experiment shows that an FPGA-implemented learning algorithm enables an FPGA-implemented model to automatically reproduce typical nonlinear responses and occurrence mechanisms observed in biological and model neurons.
38
Pfeil T, Grübl A, Jeltsch S, Müller E, Müller P, Petrovici MA, Schmuker M, Brüderle D, Schemmel J, Meier K. Six networks on a universal neuromorphic computing substrate. Front Neurosci 2013; 7:11. PMID: 23423583; PMCID: PMC3575075; DOI: 10.3389/fnins.2013.00011.
Abstract
In this study, we present a highly configurable neuromorphic computing substrate and use it for emulating several types of neural networks. At the heart of this system lies a mixed-signal chip, with analog implementations of neurons and synapses and digital transmission of action potentials. Major advantages of this emulation device, which has been explicitly designed as a universal neural network emulator, are its inherent parallelism and high acceleration factor compared to conventional computers. Its configurability allows the realization of almost arbitrary network topologies and the use of widely varied neuronal and synaptic parameters. Fixed-pattern noise inherent to analog circuitry is reduced by calibration routines. An integrated development environment allows neuroscientists to operate the device without any prior knowledge of neuromorphic circuit design. As a showcase for the capabilities of the system, we describe the successful emulation of six different neural networks which cover a broad spectrum of both structure and functionality.
Affiliation(s)
- Thomas Pfeil
- Kirchhoff-Institute for Physics, Universität Heidelberg, Heidelberg, Germany
39
Serrano-Gotarredona T, Masquelier T, Prodromakis T, Indiveri G, Linares-Barranco B. STDP and STDP variations with memristors for spiking neuromorphic learning systems. Front Neurosci 2013; 7:2. PMID: 23423540; PMCID: PMC3575074; DOI: 10.3389/fnins.2013.00002.
Abstract
In this paper we review several ways of realizing asynchronous Spike-Timing-Dependent Plasticity (STDP) using memristors as synapses. Our focus is on how to use individual memristors to implement synaptic weight multiplications in a way that makes it unnecessary to (a) introduce global synchronization or (b) separate memristor learning phases from memristor performing phases. In the approaches described, neurons fire spikes asynchronously when they wish, and memristive synapses perform computation and learn at their own pace, as happens in biological neural systems. We distinguish between two different memristor physics, depending on whether they respond to the original "moving wall" or to the "filament creation and annihilation" models. Independent of the memristor physics, we discuss two different types of STDP rules that can be implemented with memristors: either a pure timing-based rule that takes into account the arrival times of the spikes from the pre- and post-synaptic neurons, or a hybrid rule that takes into account only the timing of pre-synaptic spikes together with the membrane potential and other state variables of the post-synaptic neuron. We show how to implement these rules in cross-bar architectures that comprise massive arrays of memristors, and we discuss applications in artificial vision.
Affiliation(s)
- T. Serrano-Gotarredona
- Department of Analog and Mixed-Signal Design, Instituto de Microelectrónica de Sevilla, IMSE-CNM-CSIC, Sevilla, Spain
- T. Masquelier
- Unit for Brain and Cognition, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Laboratory of Neurobiology of Adaptive Processes, UMR 7102, CNRS - University Pierre and Marie Curie, Paris, France
- T. Prodromakis
- Centre for Bio-inspired Technology, Institute of Biomedical Engineering, Imperial College London
- G. Indiveri
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- B. Linares-Barranco
- Department of Analog and Mixed-Signal Design, Instituto de Microelectrónica de Sevilla, IMSE-CNM-CSIC, Sevilla, Spain
40
Yu T, Park J, Joshi S, Maier C, Cauwenberghs G. Event-driven neural integration and synchronicity in analog VLSI. Annu Int Conf IEEE Eng Med Biol Soc 2012:775-8. PMID: 23366007; DOI: 10.1109/embc.2012.6346046.
Abstract
Synchrony and temporal coding in the central nervous system, as the source of local field potentials and complex neural dynamics, arise from precise timing relationships between spiking population events across neuronal assemblies. Recently it has been shown that coincidence detection based on spike event timing also presents a robust neural code, invariant to additive incoherent noise from desynchronized and unrelated inputs. We present spike-based coincidence detection using integrate-and-fire neural membrane dynamics along with pooled conductance-based synaptic dynamics in a hierarchical address-event architecture. Within this architecture, we encode each synaptic event with parameters that govern synaptic connectivity, synaptic strength, and axonal delay, with additional globally configurable parameters that govern neural and synaptic temporal dynamics. Spike-based coincidence detection is observed and analyzed in measurements on a log-domain analog VLSI implementation of the integrate-and-fire neuron and conductance-based synapse dynamics.
Affiliation(s)
- Theodore Yu
- Silicon Valley Labs of Texas Instruments, Santa Clara, CA 95051, USA.
41
Zamarreno-Ramos C, Linares-Barranco A, Serrano-Gotarredona T, Linares-Barranco B. Multicasting mesh AER: a scalable assembly approach for reconfigurable neuromorphic structured AER systems. Application to ConvNets. IEEE Trans Biomed Circuits Syst 2013; 7:82-102. PMID: 23853282; DOI: 10.1109/tbcas.2012.2195725.
Abstract
This paper presents a modular, scalable approach to assembling hierarchically structured neuromorphic Address Event Representation (AER) systems. The method consists of arranging modules in a 2D mesh, each communicating bidirectionally with all four neighbors. Address events include a module label. Each module includes an AER router which decides how to route address events. Two routing approaches have been proposed, analyzed and tested, using either destination or source module labels. Our analyses reveal that depending on traffic conditions and network topologies either one or the other approach may result in better performance. Experimental results are given after testing the approach using high-end Virtex-6 FPGAs. The approach is proposed for both single and multiple FPGAs, in which case a special bidirectional parallel-serial AER link with flow control is exploited, using the FPGA Rocket-I/O interfaces. Extensive test results are provided exploiting convolution modules of 64 × 64 pixels with kernels with sizes up to 11 × 11, which process real sensory data from a Dynamic Vision Sensor (DVS) retina. One single Virtex-6 FPGA can hold up to 64 of these convolution modules, which is equivalent to a neural network with 262 × 10(3) neurons and almost 32 million synapses.
42
Pande S, Morgan F, Cawley S, Bruintjes T, Smit G, McGinley B, Carrillo S, Harkin J, McDaid L. Modular Neural Tile Architecture for Compact Embedded Hardware Spiking Neural Network. Neural Process Lett 2013. DOI: 10.1007/s11063-012-9274-5.
43
Zamarreño-Ramos C, Serrano-Gotarredona T, Linares-Barranco B. A 0.35 μm sub-ns wake-up time ON-OFF switchable LVDS driver-receiver chip I/O pad pair for rate-dependent power saving in AER bit-serial links. IEEE Trans Biomed Circuits Syst 2012; 6:486-497. PMID: 23853235; DOI: 10.1109/tbcas.2012.2186136.
Abstract
This paper presents a low power switchable current mode driver/receiver I/O pair for high speed serial transmission of asynchronous address event representation (AER) information. The sparse nature of AER packets (also called events) allows driver/receiver bias currents to be switched off to save power. The on/off times must be lower than the bit time to minimize the latency introduced by the switching mechanism. Using this technique, the link power consumption can be scaled down with the event rate without compromising the maximum system throughput. The proposed technique has been implemented on a typical push/pull low voltage differential signaling (LVDS) circuit, but it can easily be extended to other widely used current mode standards, such as current mode logic (CML) or low-voltage positive emitter-coupled logic (LVPECL). A proof of concept prototype has been fabricated in 0.35 μm CMOS incorporating the proposed driver/receiver pair along with a previously reported switchable serializer/deserializer scheme. At a 500 Mbps bit rate, the maximum event rate is 11 Mevent/s for 32-bit events. In this situation, current consumption is 7.5 mA and 9.6 mA for the driver and receiver, respectively, while differential voltage amplitude is ±300 mV. However, if the event rate is lower than 20-30 Kevent/s, current consumption has a floor of 270 μA for the driver and 570 μA for the receiver. The measured ON/OFF switching times are on the order of 1 ns. The serial link could be operated at up to a 710 Mbps bit rate, resulting in a maximum 32-bit event rate of 15 Mevent/s. This is the same peak event rate as that obtained with the same SerDes circuits and a non-switched driver/receiver pair.
44
VLSI circuits implementing computational models of neocortical circuits. J Neurosci Methods 2012; 210:93-109. DOI: 10.1016/j.jneumeth.2012.01.019.
45
Bamford SA, Murray AF, Willshaw DJ. Spike-timing-dependent plasticity with weight dependence evoked from physical constraints. IEEE Trans Biomed Circuits Syst 2012; 6:385-398. PMID: 23853183; DOI: 10.1109/tbcas.2012.2184285.
Abstract
Analogue and mixed-signal VLSI implementations of Spike-Timing-Dependent Plasticity (STDP) are reviewed. A circuit is presented with a compact implementation of STDP suitable for parallel integration in large synaptic arrays. In contrast to previously published circuits, it uses the limitations of the silicon substrate to achieve various forms and degrees of weight dependence of STDP. It also uses reverse-biased transistors to reduce leakage from a capacitance representing weight. Chip results are presented showing: various ways in which the learning rule may be shaped; how synaptic weights may retain some indication of their learned values over periods of minutes; and how distributions of weights for synapses convergent on single neurons may shift between more or less extreme bimodality according to the strength of correlational cues in their inputs.
Affiliation(s)
- Simeon A Bamford
- Neuroinformatics Doctoral Training Centre, University of Edinburgh, Edinburgh, Scotland EH8 9AB, UK.
46
Garg V, Shekhar R, Harris JG. Spiking neuron computation with the time machine. IEEE Trans Biomed Circuits Syst 2012; 6:142-155. PMID: 23852979; DOI: 10.1109/tbcas.2011.2179544.
Abstract
The Time Machine (TM) is a spike-based computation architecture that represents synaptic weights in time. This choice of weight representation allows the use of virtual synapses, providing an excellent tradeoff in terms of flexibility, arbitrary weight connections and hardware usage compared to dedicated synapse architectures. The TM supports an arbitrary number of synapses and is limited only by the number of simultaneously active synapses to each neuron. SpikeSim, a behavioral hardware simulator for the architecture, is described along with example algorithms for edge detection and object recognition. The TM can implement traditional spike-based processing as well as recently developed time mode operations where step functions serve as the input and output of each neuron block. A custom hybrid digital/analog implementation and a fully digital realization of the TM are discussed. An analog chip with 32 neurons, 1024 synapses and an address event representation (AER) block has been fabricated in 0.5 μm technology. A fully digital field-programmable gate array (FPGA)-based implementation of the architecture has 6,144 neurons and 100,352 simultaneously active synapses. Both implementations utilize a digital controller for routing spikes that can process up to 34 million synapses per second.
Affiliation(s)
- Vaibhav Garg
- Texas Instruments Incorporated, Dallas, TX 75266, USA.
47
Heo Y, Song H. Circuit modeling and implementation of a biological neuron using a negative resistor for neuron chip. BioChip J 2012. DOI: 10.1007/s13206-012-6103-x.
48
Rast A, Galluppi F, Davies S, Plana L, Patterson C, Sharp T, Lester D, Furber S. Concurrent heterogeneous neural model simulation on real-time neuromimetic hardware. Neural Netw 2011; 24:961-78. DOI: 10.1016/j.neunet.2011.06.014.
49
Folowosele F, Hamilton TJ, Etienne-Cummings R. Silicon modeling of the Mihalaş-Niebur neuron. IEEE Trans Neural Netw 2011; 22:1915-27. PMID: 21990331; DOI: 10.1109/tnn.2011.2167020.
Abstract
There are a number of spiking and bursting neuron models with varying levels of complexity, ranging from the simple integrate-and-fire model to the more complex Hodgkin-Huxley model. The simpler models tend to be easily implemented in silicon but yet not biologically plausible. Conversely, the more complex models tend to occupy a large area although they are more biologically plausible. In this paper, we present the 0.5 μm complementary metal-oxide-semiconductor (CMOS) implementation of the Mihalaş-Niebur neuron model--a generalized model of the leaky integrate-and-fire neuron with adaptive threshold--that is able to produce most of the known spiking and bursting patterns that have been observed in biology. Our implementation modifies the original proposed model, making it more amenable to CMOS implementation and more biologically plausible. All but one of the spiking properties--tonic spiking, class 1 spiking, phasic spiking, hyperpolarized spiking, rebound spiking, spike frequency adaptation, accommodation, threshold variability, integrator and input bistability--are demonstrated in this model.
Affiliation(s)
- Fopefolu Folowosele
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA.
50
Yu T, Sejnowski TJ, Cauwenberghs G. Biophysical Neural Spiking, Bursting, and Excitability Dynamics in Reconfigurable Analog VLSI. IEEE Trans Biomed Circuits Syst 2011; 5:420-9. PMID: 22227949; PMCID: PMC3251010; DOI: 10.1109/tbcas.2011.2169794.
Abstract
We study a range of neural dynamics under variations in biophysical parameters underlying extended Morris-Lecar and Hodgkin-Huxley models in three gating variables. The extended models are implemented in NeuroDyn, a four neuron, twelve synapse continuous-time analog VLSI programmable neural emulation platform with generalized channel kinetics and biophysical membrane dynamics. The dynamics exhibit a wide range of time scales extending beyond 100 ms neglected in typical silicon models of tonic spiking neurons. Circuit simulations and measurements show transition from tonic spiking to tonic bursting dynamics through variation of a single conductance parameter governing calcium recovery. We similarly demonstrate transition from graded to all-or-none neural excitability in the onset of spiking dynamics through the variation of channel kinetic parameters governing the speed of potassium activation. Other combinations of variations in conductance and channel kinetic parameters give rise to phasic spiking and spike frequency adaptation dynamics. The NeuroDyn chip consumes 1.29 mW and occupies 3 mm × 3 mm in 0.5 μm CMOS, supporting emerging developments in neuromorphic silicon-neuron interfaces.
Affiliation(s)
- Theodore Yu
- Department of Electrical and Computer Engineering, Jacobs School of Engineering and Institute of Neural Computation, University of California San Diego, La Jolla, CA 92093 USA
- Terrence J. Sejnowski
- Division of Biological Sciences and Institute of Neural Computation, University of California San Diego, La Jolla, CA 92093 USA and also with the Howard Hughes Medical Institute, Salk Institute, La Jolla, CA 92037 USA
- Gert Cauwenberghs
- Department of Bioengineering, Jacobs School of Engineering and Institute of Neural Computation, University of California San Diego, La Jolla, CA 92093 USA