1. Wang X, Zhong M, Cheng H, Xie J, Zhou Y, Ren J, Liu M. SpikeGoogle: Spiking Neural Networks with GoogLeNet-like inception module. CAAI Transactions on Intelligence Technology 2022. DOI: 10.1049/cit2.12082
Affiliation(s)
- Xuan Wang
- School of Intelligent Systems Engineering, Sun Yat-sen University, Shenzhen, China; Guangdong Provincial Key Laboratory of Fire Science and Intelligent Emergency Technology, Guangzhou, China
- Minghong Zhong
- School of Intelligent Systems Engineering, Sun Yat-sen University, Shenzhen, China; Guangdong Provincial Key Laboratory of Fire Science and Intelligent Emergency Technology, Guangzhou, China
- Hoiyuen Cheng
- School of Intelligent Systems Engineering, Sun Yat-sen University, Shenzhen, China; Guangdong Provincial Key Laboratory of Fire Science and Intelligent Emergency Technology, Guangzhou, China
- Junjie Xie
- School of Intelligent Systems Engineering, Sun Yat-sen University, Shenzhen, China; Guangdong Provincial Key Laboratory of Fire Science and Intelligent Emergency Technology, Guangzhou, China
- Yingchu Zhou
- Shenzhen Academy of Metrology and Quality Inspection, Shenzhen, China
- Jun Ren
- Infocare Systems Limited, New Zealand
- Mengyuan Liu
- School of Intelligent Systems Engineering, Sun Yat-sen University, Shenzhen, China; Guangdong Provincial Key Laboratory of Fire Science and Intelligent Emergency Technology, Guangzhou, China
2. Spiking Neural Networks for Computational Intelligence: An Overview. Big Data and Cognitive Computing 2021. DOI: 10.3390/bdcc5040067
Abstract
Deep neural networks with rate-based neurons have exhibited tremendous progress in the last decade. However, the same level of progress has not been observed in research on spiking neural networks (SNN), despite their capability to handle temporal data, their energy efficiency, and their low latency. This could be because the benchmarking techniques for SNNs are based on the methods used for evaluating deep neural networks, which do not provide a clear evaluation of the capabilities of SNNs. In particular, benchmarking SNN approaches with regard to energy efficiency and latency requires realization in suitable hardware, which imposes additional temporal and resource constraints on ongoing projects. This review aims to provide an overview of the current real-world applications of SNNs and identifies steps to accelerate research involving SNNs in the future.
3. Zahra O, Tolu S, Navarro-Alarcon D. Differential mapping spiking neural network for sensor-based robot control. Bioinspiration & Biomimetics 2021; 16:036008. PMID: 33706302. DOI: 10.1088/1748-3190/abedce
Abstract
In this work, a spiking neural network (SNN) is proposed for approximating differential sensorimotor maps of robotic systems. The computed model is used as a local Jacobian-like projection that relates changes in sensor space to changes in motor space. The SNN consists of an input (sensory) layer and an output (motor) layer connected through plastic synapses, with inter-inhibitory connections at the output layer. Spiking neurons are modeled as Izhikevich neurons with a synaptic learning rule based on spike timing-dependent plasticity. Feedback data from proprioceptive and exteroceptive sensors are encoded and fed into the input layer through a motor babbling process. A guideline for tuning the network parameters is proposed and applied along with the particle swarm optimization technique. Our proposed control architecture takes advantage of biologically plausible tools of an SNN to achieve the target reaching task while minimizing deviations from the desired path, and consequently minimizing the execution time. Thanks to the chosen architecture and optimization of the parameters, the number of neurons and the amount of data required for training are considerably low. The SNN is capable of handling noisy sensor readings to guide the robot movements in real-time. Experimental results are presented to validate the control methodology with a vision-guided robot.
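The spiking neurons here follow the Izhikevich model, a two-variable system cheap enough for real-time control yet capable of rich firing patterns. A minimal sketch of its dynamics, using the standard regular-spiking parameter set (a=0.02, b=0.2, c=-65, d=8) rather than the values tuned by the authors:

```python
def izhikevich(I, T=1000.0, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Simulate one Izhikevich neuron driven by a constant current I
    (model units). Euler integration with step dt (ms); returns spike times."""
    v, u = c, b * c          # membrane potential and recovery variable
    spikes = []
    for t in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:        # spike: record, then reset v and bump u
            spikes.append(t * dt)
            v, u = c, u + d
    return spikes

# A regular-spiking neuron fires tonically under sustained input:
print(len(izhikevich(10.0)))
```

With STDP added on the synapses (as in the paper), the relative timing of such spikes would drive the weight changes that shape the sensorimotor map.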
Affiliation(s)
- Omar Zahra
- The Hong Kong Polytechnic University, Hong Kong Special Administrative Region of China
- David Navarro-Alarcon
- The Hong Kong Polytechnic University, Hong Kong Special Administrative Region of China
4
|
Heidarpur M, Khosravifar P, Ahmadi A, Ahmadi M. CORDIC-Astrocyte: Tripartite Glutamate-IP3-Ca 2+ Interaction Dynamics on FPGA. IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS 2020; 14:36-47. [PMID: 31751284 DOI: 10.1109/tbcas.2019.2953631] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Real-time, large-scale simulation of biological systems is challenging due to the different types of nonlinear functions describing biochemical reactions in cells. The promise of high speed, cost effectiveness, and power efficiency, in addition to parallel processing, has made application-specific hardware an attractive simulation platform. This paper proposes high-speed and low-cost digital hardware to emulate a biologically plausible astrocyte and glutamate-release mechanism. The nonlinear terms of these models were calculated using a high-precision and cost-effective algorithm. Subsequently, the modified models were simulated to study and validate their functions. We developed several hardware versions by setting different constraints to investigate trade-offs and find the best possible design. FPGA implementation results confirmed the ability of the design to emulate biological cell behaviours in detail with high accuracy. As for performance, the proposed design turned out to be faster and more efficient than previously published works that targeted digital hardware for biologically plausible astrocytes.
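The paper's HDL is not reproduced here, but the core trick, evaluating nonlinear terms with shift-and-add CORDIC iterations, can be sketched in software. Below is a minimal hyperbolic CORDIC in rotation mode approximating exp(z) for |z| up to about 1; the repeated iterations at i = 4, 13, 40, ... are the standard convergence requirement for the hyperbolic variant. This is a floating-point illustration, not the authors' fixed-point design:

```python
import math

def cordic_exp(z, n=24):
    """Approximate exp(z) via hyperbolic CORDIC: only shifts, adds, and a
    small atanh lookup table are needed in the loop, which is what makes
    the method attractive for FPGA implementation."""
    # Iteration schedule with mandated repeats at i = 4, 13, 40, ...
    idxs, repeat = [], 4
    for i in range(1, n + 1):
        idxs.append(i)
        if i == repeat:
            idxs.append(i)
            repeat = 3 * repeat + 1
    # Aggregate gain of the pseudo-rotations; start scaled so it cancels.
    K = 1.0
    for i in idxs:
        K *= math.sqrt(1.0 - 2.0 ** (-2 * i))
    x, y, zr = 1.0 / K, 0.0, z
    for i in idxs:
        d = 1.0 if zr >= 0 else -1.0
        x, y = x + d * y * 2.0 ** (-i), y + d * x * 2.0 ** (-i)
        zr -= d * math.atanh(2.0 ** (-i))
    return x + y  # cosh(z) + sinh(z) = exp(z)

print(cordic_exp(0.5), math.exp(0.5))
```

In hardware, the atanh constants live in a small ROM and the 2^-i factors become barrel shifts, so no multiplier is needed per iteration.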
5. Taherkhani A, Belatreche A, Li Y, Cosma G, Maguire LP, McGinnity TM. A review of learning in biologically plausible spiking neural networks. Neural Networks 2019; 122:253-272. PMID: 31726331. DOI: 10.1016/j.neunet.2019.09.036
Abstract
Artificial neural networks have been used as powerful processing tools in areas such as pattern recognition, control, robotics, and bioinformatics. Their wide applicability has encouraged researchers to improve artificial neural networks by investigating the biological brain. Neurological research has progressed significantly in recent years and continues to reveal new characteristics of biological neurons. New technologies can now capture temporal changes in the internal activity of the brain in more detail and help clarify the relationship between brain activity and the perception of a given stimulus. This new knowledge has led to a new type of artificial neural network, the Spiking Neural Network (SNN), which draws more faithfully on biological properties to provide higher processing abilities. This paper presents a review of recent developments in the learning of spiking neurons. First, the biological background of SNN learning algorithms is reviewed. The important elements of a learning algorithm, such as the neuron model, synaptic plasticity, information encoding, and SNN topologies, are then presented. Next comes a critical review of the state-of-the-art learning algorithms for SNNs using single and multiple spikes. Additionally, deep spiking neural networks are reviewed, and challenges and opportunities in the SNN field are discussed.
Affiliation(s)
- Aboozar Taherkhani
- School of Computer Science and Informatics, Faculty of Computing, Engineering and Media, De Montfort University, Leicester, UK
- Ammar Belatreche
- Department of Computer and Information Sciences, Northumbria University, Newcastle upon Tyne, UK
- Yuhua Li
- School of Computer Science and Informatics, Cardiff University, Cardiff, UK
- Georgina Cosma
- Department of Computer Science, Loughborough University, Loughborough, UK
- Liam P Maguire
- Intelligent Systems Research Centre, Ulster University, Derry, Northern Ireland, UK
- T M McGinnity
- Intelligent Systems Research Centre, Ulster University, Derry, Northern Ireland, UK; School of Science and Technology, Nottingham Trent University, Nottingham, UK
6
|
Panda P, Srinivasa N. Learning to Recognize Actions From Limited Training Examples Using a Recurrent Spiking Neural Model. Front Neurosci 2018; 12:126. [PMID: 29551962 PMCID: PMC5840233 DOI: 10.3389/fnins.2018.00126] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2017] [Accepted: 02/16/2018] [Indexed: 11/13/2022] Open
Abstract
A fundamental challenge in machine learning today is to build a model that can learn from few examples. Here, we describe a reservoir-based spiking neural model for learning to recognize actions with a limited number of labeled videos. First, we propose a novel encoding, inspired by how microsaccades influence visual perception, to extract spike information from raw video data while preserving the temporal correlation across different frames. Using this encoding, we show that the reservoir generalizes its rich dynamical activity toward signature actions/movements, enabling it to learn from few training examples. We evaluate our approach on the UCF-101 dataset. Our experiments demonstrate that the proposed reservoir achieves 81.3% Top-1 and 87% Top-5 accuracy on the 101-class data while requiring just 8 video examples per class for training. Our results establish a new benchmark for action recognition from limited video examples for spiking neural models, while yielding competitive accuracy with respect to state-of-the-art non-spiking neural models.
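The exact microsaccade-inspired encoding is specific to the paper, but the family it belongs to, turning raw frames into spikes by thresholding temporal intensity changes so that inter-frame correlation survives the conversion, can be sketched as follows (the threshold value and the ON/OFF channel split are illustrative assumptions, not the authors' scheme):

```python
import numpy as np

def frames_to_spikes(frames, threshold=0.1):
    """Convert a (T, H, W) float video array into ON/OFF spike trains by
    thresholding frame-to-frame intensity differences. Pixels that brighten
    beyond the threshold emit ON spikes; pixels that darken emit OFF spikes."""
    diffs = np.diff(frames.astype(float), axis=0)   # (T-1, H, W)
    on = diffs > threshold
    off = diffs < -threshold
    return on, off

rng = np.random.default_rng(0)
video = rng.random((5, 8, 8))                       # toy stand-in for real frames
on, off = frames_to_spikes(video, 0.25)
print(on.shape, int(on.sum()), int(off.sum()))
```

Spike trains of this kind can then be fed into a recurrent reservoir, whose readout is the only part that needs the few labeled examples.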
Affiliation(s)
- Priyadarshini Panda
- Nanoelectronics Research Laboratory, School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, United States
7
|
Srinivasa N, Stepp ND, Cruz-Albrecht J. Criticality as a Set-Point for Adaptive Behavior in Neuromorphic Hardware. Front Neurosci 2015; 9:449. [PMID: 26648839 PMCID: PMC4664726 DOI: 10.3389/fnins.2015.00449] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2015] [Accepted: 11/13/2015] [Indexed: 11/13/2022] Open
Abstract
Neuromorphic hardware is designed by drawing inspiration from biology to overcome the limitations of current computer architectures, while forging the development of a new class of autonomous systems that can exhibit adaptive behaviors. Several recent designs are capable of emulating large-scale networks but avoid complexity in network dynamics by minimizing the number of dynamic variables that are supported and tunable in hardware. We believe that this is due to the lack of a clear understanding of how to design self-tuning complex systems. It has been widely demonstrated that criticality appears to be the default state of the brain and manifests in the form of spontaneous scale-invariant cascades of neural activity. Experiments, theory, and recent models have shown that neuronal networks at criticality demonstrate optimal information transfer, learning, and information-processing capabilities that affect behavior. In this perspective article, we argue that understanding how large-scale neuromorphic electronics can be designed to enable emergent adaptive behavior will require an understanding of how networks emulated by such hardware can self-tune local parameters to maintain criticality as a set-point. We believe that such a capability will enable the design of truly scalable intelligent systems using neuromorphic hardware that embraces complexity in network dynamics rather than avoiding it.
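A common operational proxy for the criticality set-point discussed here is the branching parameter sigma: the expected number of spikes triggered in the next time bin per spike in the current bin, with sigma = 1 marking the critical point between dying-out and runaway activity. A naive estimator (purely illustrative; the hardware self-tuning argued for in the article is far more involved):

```python
import numpy as np

def branching_ratio(activity):
    """Estimate the branching parameter sigma from a time series of
    population activity A[t] (spike counts per bin): the average ratio
    A[t+1] / A[t] over bins with nonzero activity."""
    a = np.asarray(activity, dtype=float)
    prev, nxt = a[:-1], a[1:]
    mask = prev > 0
    return float(np.mean(nxt[mask] / prev[mask]))

# Activity that on average just sustains itself has sigma near 1;
# decaying activity has sigma below 1.
print(branching_ratio([10, 11, 9, 10, 10, 11, 10]))
print(branching_ratio([8, 4, 2, 1]))
```

A self-tuning system in the article's sense would measure something like sigma locally and nudge parameters (e.g. synaptic gains) to hold it at 1.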
Affiliation(s)
- Narayan Srinivasa
- Information and System Sciences Lab, Center for Neural and Emergent Systems, HRL Laboratories LLC, Malibu, CA, USA
- Nigel D Stepp
- Information and System Sciences Lab, Center for Neural and Emergent Systems, HRL Laboratories LLC, Malibu, CA, USA
8
|
Srinivasa N, Cho Y. Unsupervised discrimination of patterns in spiking neural networks with excitatory and inhibitory synaptic plasticity. Front Comput Neurosci 2014; 8:159. [PMID: 25566045 PMCID: PMC4266024 DOI: 10.3389/fncom.2014.00159] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2014] [Accepted: 11/18/2014] [Indexed: 12/02/2022] Open
Abstract
A spiking neural network model is described for learning to discriminate among spatial patterns in an unsupervised manner. The network anatomy consists of source neurons that are activated by external inputs, a reservoir that resembles a generic cortical layer with an excitatory-inhibitory (EI) network, and a sink layer of neurons for readout. Synaptic plasticity in the form of STDP is imposed on all the excitatory and inhibitory synapses at all times. While long-term excitatory STDP enables sparse and efficient learning of the salient features in inputs, inhibitory STDP keeps this learning stable by establishing a balance between excitatory and inhibitory currents at each neuron in the network. The synaptic weights between source and reservoir neurons form a basis set for the input patterns. The neural trajectories generated in the reservoir by input stimulation and lateral connections between reservoir neurons can be read out by the sink layer neurons, and this activity is used to adapt the synapses between reservoir and sink layer neurons. A new measure called the discriminability index (DI) is introduced to assess whether the network can discriminate between old patterns already presented in an initial training session, and whether it adapts to new patterns without losing its ability to discriminate among the old ones. The final outcome is that the network correctly discriminates between all patterns, both old and new, as long as the inhibitory synapses employ STDP to continuously maintain current balance in the network. The results suggest a possible direction for future investigation into how spiking neural networks could address the stability-plasticity question despite having continuous synaptic plasticity.
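Pair-based STDP weight updates of the kind imposed here can be sketched as follows. The excitatory window is the classic asymmetric exponential rule; for the inhibitory synapses, a symmetric window that potentiates near-coincident pre/post spikes is shown, which is one common choice for maintaining E/I current balance. The amplitudes, time constant, and the inhibitory window shape are illustrative assumptions, not necessarily the rules used in the paper:

```python
import numpy as np

def stdp_dw(dt, A_plus=0.01, A_minus=0.012, tau=20.0, inhibitory=False):
    """Weight change for one pre/post spike pair with dt = t_post - t_pre (ms).
    Excitatory: pre-before-post potentiates, post-before-pre depresses.
    Inhibitory: near-coincident spikes potentiate, distant pairs depress,
    pushing inhibition to track local excitation."""
    if inhibitory:
        return A_plus * np.exp(-abs(dt) / tau) - 0.5 * A_plus
    if dt >= 0:
        return A_plus * np.exp(-dt / tau)      # causal pair: potentiate
    return -A_minus * np.exp(dt / tau)          # anti-causal pair: depress

print(stdp_dw(5.0), stdp_dw(-5.0), stdp_dw(5.0, inhibitory=True))
```

Applied continuously to every synapse, updates like these are what let the inhibitory population catch up with excitatory drive and stabilize learning.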
Affiliation(s)
- Narayan Srinivasa
- Center for Neural and Emergent Systems, Information and Systems Sciences Department, HRL Laboratories LLC, Malibu, CA, USA
- Youngkwan Cho
- Center for Neural and Emergent Systems, Information and Systems Sciences Department, HRL Laboratories LLC, Malibu, CA, USA
9. Rumbell T, Denham SL, Wennekers T. A spiking self-organizing map combining STDP, oscillations, and continuous learning. IEEE Transactions on Neural Networks and Learning Systems 2014; 25:894-907. PMID: 24808036. DOI: 10.1109/tnnls.2013.2283140
Abstract
The self-organizing map (SOM) is a neural network algorithm that creates topographically ordered spatial representations of an input data set using unsupervised learning. The SOM algorithm is inspired by the feature maps found in mammalian cortices but lacks some important functional properties of its biological equivalents: biological neurons have no direct access to global information, transmit information through spikes (possibly using phasic coding of spike times within synchronized oscillations), receive continuous input from the environment, do not necessarily alter network properties such as learning rate and lateral connectivity throughout training, and learn through the relative timing of action potentials across a synaptic connection. In this paper, a network of integrate-and-fire neurons is presented that incorporates solutions to each of these issues through the neuron model and network structure. Results of simulated experiments assessing map formation using artificial data as well as the Iris and Wisconsin Breast Cancer datasets show that this implementation maintains the fundamental properties of the conventional SOM. It thereby represents a step toward further understanding of the self-organizational properties of the brain, while providing an additional method for implementing SOMs in software or in special-purpose spiking neuron hardware.
10. Minkovich K, Thibeault CM, O'Brien MJ, Nogin A, Cho Y, Srinivasa N. HRLSim: a high performance spiking neural network simulator for GPGPU clusters. IEEE Transactions on Neural Networks and Learning Systems 2014; 25:316-331. PMID: 24807031. DOI: 10.1109/tnnls.2013.2276056
Abstract
Modeling of large-scale spiking neural models is an important tool in the quest to understand brain function and subsequently create real-world applications. This paper describes a spiking neural network simulator environment called HRL Spiking Simulator (HRLSim). This simulator is suitable for implementation on a cluster of general purpose graphical processing units (GPGPUs). Novel aspects of HRLSim are described and an analysis of its performance is provided for various configurations of the cluster. With the advent of inexpensive GPGPU cards and compute power, HRLSim offers an affordable and scalable tool for design, real-time simulation, and analysis of large-scale spiking neural networks.
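HRLSim itself is a CUDA/cluster codebase, but the reason GPGPUs suit this workload can be sketched with a dense, vectorized leaky integrate-and-fire update: the whole population advances with a handful of array operations, exactly the pattern GPUs parallelize well. All parameters and the random network below are illustrative, not HRLSim's defaults:

```python
import numpy as np

def lif_step(v, spikes_in, W, dt=1.0, tau=20.0,
             v_rest=-65.0, v_th=-50.0, v_reset=-65.0):
    """One vectorized update of N leaky integrate-and-fire neurons.
    W @ spikes_in delivers synaptic input from the previous step's spikes;
    neurons crossing threshold are reset and reported as new spikes."""
    I = W @ spikes_in
    v = v + dt / tau * (v_rest - v) + I   # leaky integration toward rest
    fired = v >= v_th
    v = np.where(fired, v_reset, v)
    return v, fired.astype(float)

rng = np.random.default_rng(1)
N = 100
W = rng.normal(0.0, 2.0, (N, N))          # toy random connectivity
v = np.full(N, -65.0)
s = (rng.random(N) < 0.2).astype(float)   # initial spike pattern
for _ in range(10):
    v, s = lif_step(v, s, W)
print(v.shape, int(s.sum()))
```

On a GPGPU cluster the matrix-vector product and the elementwise updates map onto thousands of threads, which is where the simulator's scalability comes from.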
11. Cruz-Albrecht JM, Derosier T, Srinivasa N. A scalable neural chip with synaptic electronics using CMOS integrated memristors. Nanotechnology 2013; 24:384011. PMID: 23999447. DOI: 10.1088/0957-4484/24/38/384011
Abstract
The design and simulation of a scalable neural chip with synaptic electronics using nanoscale memristors fully integrated with complementary metal-oxide-semiconductor (CMOS) is presented. The circuit consists of integrate-and-fire neurons and synapses with spike-timing dependent plasticity (STDP). The synaptic conductance values can be stored in memristors with eight levels, and the topology of connections between neurons is reconfigurable. The circuit has been designed using a 90 nm CMOS process with via connections to on-chip post-processed memristor arrays. The design has about 16 million CMOS transistors and 73 728 integrated memristors. We provide circuit level simulations of the entire chip performing neuronal and synaptic computations that result in biologically realistic functional behavior.
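Storing conductances at eight levels means any learned analog weight must be snapped to a small discrete set. A toy sketch of that quantization step (the uniform level spacing and [0, 1] range are assumptions for illustration; the chip's actual programming scheme operates at the analog-circuit level):

```python
def quantize_weight(w, levels=8, w_min=0.0, w_max=1.0):
    """Map an analog synaptic weight onto one of `levels` evenly spaced
    memristor conductance states, clipping to the representable range."""
    w = min(max(w, w_min), w_max)
    step = (w_max - w_min) / (levels - 1)
    return w_min + round((w - w_min) / step) * step

print([quantize_weight(x) for x in (0.0, 0.4, 0.93, 1.2)])
```

The interplay between such coarse weight storage and STDP updates is one of the design trade-offs a chip like this has to absorb.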
12. Thibeault CM, Srinivasa N. Using a hybrid neuron in physiologically inspired models of the basal ganglia. Frontiers in Computational Neuroscience 2013; 7:88. PMID: 23847524. PMCID: PMC3701869. DOI: 10.3389/fncom.2013.00088
Abstract
Our current understanding of the basal ganglia (BG) has facilitated the creation of computational models that have contributed novel theories, explored new functional anatomy, and demonstrated results complementing physiological experiments. However, the utility of these models extends beyond these applications, particularly in neuromorphic engineering, where the basal ganglia's role in computation is important for applications such as power-efficient autonomous agents and model-based control strategies. The neuron models used in existing computational models of the BG, however, are not amenable to many low-power hardware implementations. Motivated by a need for more hardware-accessible networks, we replicate four published models of the BG, spanning single neurons and small networks, replacing the more computationally expensive neuron models with an Izhikevich hybrid neuron. This begins with a network modeling action selection, in which the basal activity levels and the ability to appropriately select the most salient input are reproduced. A Parkinson's disease model is then explored under normal conditions, under Parkinsonian conditions, and during subthalamic nucleus deep brain stimulation (DBS). The resulting network is capable of replicating the loss of thalamic relay capabilities in the Parkinsonian state and its return under DBS; this is also demonstrated using a network capable of action selection. Finally, a study of correlation transfer under different patterns of Parkinsonian activity is presented. These networks successfully captured the significant results of the original studies. This not only creates a foundation for neuromorphic hardware implementations but may also support the development of large-scale biophysical models, the former potentially providing a way of improving the efficacy of DBS and the latter allowing for the efficient simulation of larger, more comprehensive networks.
Affiliation(s)
- Corey M Thibeault
- Center for Neural and Emergent Systems, Information and System Sciences Laboratory, HRL Laboratories LLC, Malibu, CA, USA; Department of Electrical and Biomedical Engineering, University of Nevada, Reno, NV, USA; Department of Computer Science and Engineering, University of Nevada, Reno, NV, USA