1. Towards the Simulation of a Realistic Large-Scale Spiking Network on a Desktop Multi-GPU System. Bioengineering (Basel) 2022; 9:543. [PMID: 36290510] [PMCID: PMC9598639] [DOI: 10.3390/bioengineering9100543] [Received: 08/29/2022] [Revised: 09/20/2022] [Accepted: 10/07/2022]
Abstract
The reproduction of the brain's activity and functionality is the main goal of modern neuroscience. To this end, several models have been proposed to describe the activity of single neurons at different levels of detail. Single neurons are then linked together to build a network, in order to reproduce complex behaviors. In the literature, different network-building rules and models have been described, targeting realistic distributions and connections of the neurons. In particular, the Granular layEr Simulator (GES) performs granular layer network reconstruction using biologically realistic rules to connect the neurons, and simulates the network with the Hodgkin–Huxley model. The work proposed in this paper adopts the network reconstruction model of GES and proposes a simulation module based on the Leaky Integrate-and-Fire (LIF) model. This simulator targets the reproduction of the activity of large-scale networks, exploiting GPU technology to reduce processing times. Experimental results show that a multi-GPU system reduces the simulation of a network with more than 1.8 million neurons from approximately 54 h to 13 h.
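The LIF model mentioned above reduces each neuron to a single membrane equation with a hard threshold and reset, which is what makes simulating millions of neurons tractable on GPUs. A minimal single-neuron sketch in Python (parameter values are illustrative assumptions, not those of the paper):

```python
import numpy as np

def simulate_lif(I, dt=0.1, tau=10.0, v_rest=-65.0, v_reset=-65.0,
                 v_thresh=-50.0, r_m=10.0):
    """Leaky Integrate-and-Fire: integrate dv/dt = (v_rest - v + R*I)/tau
    by forward Euler, emit a spike and reset whenever v crosses threshold."""
    v = v_rest
    spikes, vs = [], []
    for t, i_t in enumerate(I):
        v += dt / tau * (v_rest - v + r_m * i_t)
        if v >= v_thresh:
            spikes.append(t * dt)   # spike time in ms
            v = v_reset
        vs.append(v)
    return np.array(vs), spikes

# constant suprathreshold current (steady state above threshold) -> regular spiking
v_trace, spike_times = simulate_lif(np.full(1000, 2.0))
```

Because the state per neuron is a single scalar, the update loop vectorizes trivially across neurons, which is the property a multi-GPU simulator exploits.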
2. Electrical coupling regulated by GABAergic nucleo-olivary afferent fibres facilitates cerebellar sensory-motor adaptation. Neural Netw 2022; 155:422-438. [DOI: 10.1016/j.neunet.2022.08.020] [Received: 03/25/2022] [Revised: 07/16/2022] [Accepted: 08/24/2022]
3. Zhang A, Li X, Gao Y, Niu Y. Event-Driven Intrinsic Plasticity for Spiking Convolutional Neural Networks. IEEE Trans Neural Netw Learn Syst 2022; 33:1986-1995. [PMID: 34106868] [DOI: 10.1109/tnnls.2021.3084955]
Abstract
The biologically discovered intrinsic plasticity (IP) learning rule, which changes the intrinsic excitability of an individual neuron by adaptively tuning its firing threshold, has been shown to be crucial for efficient information processing. However, this learning rule requires extra updating operations at each step, causing extra energy consumption and reducing computational efficiency. The event-driven, spike-based coding strategy of spiking neural networks (SNNs), in which neurons are active only when driven by incoming spike trains, employs all-or-none pulses (spikes) to transmit information, contributing to sparseness in neuron activations. In this article, we propose two event-driven IP learning rules, namely input-driven and self-driven IP, based on basic IP learning. Input-driven means that IP updating occurs only when the neuron receives spiking inputs from its presynaptic neurons, whereas self-driven means that IP updating occurs only when the neuron generates a spike. A spiking convolutional neural network (SCNN) is developed based on the ANN2SNN conversion method, i.e., converting a well-trained rate-based artificial neural network to an SNN by directly mapping the connection weights. By comparing the computational performance of SCNNs with different IP rules on the MNIST, Fashion-MNIST, CIFAR-10, and SVHN recognition datasets, we demonstrate that the two event-based IP rules remarkably reduce IP updating operations, contributing to sparse computation and accelerating recognition. This work may give insights into the modeling of brain-inspired SNNs for low-power applications.
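The three update policies described above differ only in when the threshold adaptation fires. A toy sketch, assuming a simplified homeostatic rule (raise the threshold after a spike, lower it slightly otherwise) that is illustrative rather than the paper's exact rule:

```python
def lif_with_ip(inputs, mode, v_thresh=1.0, eta=0.05, tau=5.0, dt=1.0):
    """LIF neuron whose firing threshold adapts by an IP-like rule.
    mode='always': update every step (baseline IP);
    mode='input' : update only on steps with presynaptic input (input-driven);
    mode='self'  : update only when the neuron itself fires (self-driven).
    Returns (spike count, number of IP updates performed)."""
    v, updates, spikes = 0.0, 0, 0
    for x in inputs:                       # x: summed presynaptic input this step
        v += dt / tau * (-v) + x           # leaky integration
        fired = v >= v_thresh
        if fired:
            spikes += 1
            v = 0.0
        if (mode == "always" or
                (mode == "input" and x > 0) or
                (mode == "self" and fired)):
            # illustrative homeostatic rule, not the paper's exact update
            v_thresh += eta if fired else -eta * 0.1
            updates += 1
    return spikes, updates

# sparse input: one presynaptic volley every 4 steps
sparse = [0.6 if i % 4 == 0 else 0.0 for i in range(200)]
counts = {m: lif_with_ip(sparse, mode=m) for m in ("always", "input", "self")}
```

With sparse inputs the update counts drop from every step (always) to input events (input-driven) to the neuron's own spikes (self-driven), which is the source of the energy saving the abstract reports.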
4. Eshraghian JK, Wang X, Lu WD. Memristor-Based Binarized Spiking Neural Networks: Challenges and Applications. IEEE Nanotechnol Mag 2022. [DOI: 10.1109/mnano.2022.3141443]
5. Computational epidemiology study of homeostatic compensation during sensorimotor aging. Neural Netw 2021; 146:316-333. [PMID: 34923219] [DOI: 10.1016/j.neunet.2021.11.024] [Received: 04/16/2021] [Revised: 10/26/2021] [Accepted: 11/24/2021]
Abstract
The vestibulo-ocular reflex (VOR) stabilizes vision during head motion. Age-related changes of vestibular neuroanatomical properties predict a linear decay of VOR function. Nonetheless, human epidemiological data show a stable VOR function across the life span. In this study, we model cerebellum-dependent VOR adaptation to relate structural and functional changes throughout aging. We consider three neurosynaptic factors that may codetermine VOR adaptation during aging: the electrical coupling of inferior olive neurons, the long-term spike timing-dependent plasticity at parallel fiber-Purkinje cell synapses and mossy fiber-medial vestibular nuclei synapses, and the intrinsic plasticity of Purkinje cell synapses. Our cross-sectional aging analyses suggest that long-term plasticity acts as a global homeostatic mechanism that underpins the stable temporal profile of VOR function. The results also suggest that the intrinsic plasticity of Purkinje cell synapses operates as a local homeostatic mechanism that further sustains the VOR at older ages. Importantly, the computational epidemiology approach presented in this study allows discrepancies among human cross-sectional studies to be understood in terms of interindividual variability in older individuals. Finally, our longitudinal aging simulations show that the amount of residual fibers coding for the peak and trough of the VOR cycle constitutes a predictive hallmark of VOR trajectories over a lifetime.
6. Abadia I, Naveros F, Garrido JA, Ros E, Luque NR. On Robot Compliance: A Cerebellar Control Approach. IEEE Trans Cybern 2021; 51:2476-2489. [PMID: 31647453] [DOI: 10.1109/tcyb.2019.2945498]
Abstract
The work presented here is a novel biological approach for the compliant control of a robotic arm in real time (RT). We integrate a spiking cerebellar network at the core of a feedback control loop performing torque-driven control. The spiking cerebellar controller provides torque commands allowing for accurate and coordinated arm movements. To compute these output motor commands, the spiking cerebellar controller receives the robot's sensorial signals, the robot's goal behavior, and an instructive signal. These input signals are translated into a set of evolving spiking patterns representing univocally a specific system state at every point of time. Spike-timing-dependent plasticity (STDP) is then supported, allowing for building adaptive control. The spiking cerebellar controller continuously adapts the torque commands provided to the robot from experience as STDP is deployed. Adaptive torque commands, in turn, help the spiking cerebellar controller to cope with built-in elastic elements within the robot's actuators mimicking human muscles (inherently elastic). We propose a natural integration of a bioinspired control scheme, based on the cerebellum, with a compliant robot. We prove that our compliant approach outperforms the accuracy of the default factory-installed position control in a set of tasks used for addressing cerebellar motor behavior: controlling six degrees of freedom (DoF) in smooth movements, fast ballistic movements, and unstructured scenario compliant movements.
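The control scheme described above pairs a fixed feedback loop with a learned corrective torque that adapts from experience. A heavily simplified sketch of that idea on a 1-D plant (the plant, gains, and trial-by-trial learning rule are illustrative stand-ins, not the paper's spiking cerebellar controller):

```python
import numpy as np

def run_trials(n_trials=30, n_steps=100, lr=0.02):
    """Toy adaptive control loop in the spirit of cerebellar control:
    a fixed PD feedback controller plus a learned corrective torque that is
    updated from the error signal after each trial. The plant is a 1-D mass
    with an unknown constant disturbance (a stand-in for unmodeled elastics)."""
    target = np.sin(np.linspace(0, np.pi, n_steps))   # desired trajectory
    correction = np.zeros(n_steps)                    # learned corrective torque
    disturbance = -0.5                                # unmodeled constant load
    errors = []
    for _ in range(n_trials):
        pos, vel = 0.0, 0.0
        err_trace = np.zeros(n_steps)
        for t in range(n_steps):
            err = target[t] - pos
            u = 4.0 * err - 1.0 * vel + correction[t]  # PD feedback + learned term
            vel += 0.05 * (u + disturbance)            # Euler step, unit mass
            pos += 0.05 * vel
            err_trace[t] = err
        correction += lr * err_trace                   # trial-by-trial adaptation
        errors.append(float(np.abs(err_trace).mean()))
    return errors

errors = run_trials()
```

As in the paper's experiments, accuracy improves with experience: the learned term gradually absorbs the systematic error the pure feedback controller leaves behind.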
7.
Abstract
This paper presents contributions to the state of the art in graphics processing unit (GPU)-based embedded intelligence (EI) research for architectures and applications. It gives a comprehensive review and representative studies of the emerging and current paradigms for GPU-based EI, with a focus on architectures, technologies, and applications: (1) first, an overview and classification of GPU-based EI research are presented to give the full spectrum of this area, which also serves as a concise summary of the scope of the paper; (2) second, various architecture technologies for GPU-based deep learning techniques and applications are discussed in detail; and (3) third, various architecture technologies for machine learning techniques and applications are discussed. This paper aims to give useful insights into the research area and to motivate researchers towards the development of GPU-based EI for practical deployment and applications.
8. Kuriyama R, Casellato C, D'Angelo E, Yamazaki T. Real-Time Simulation of a Cerebellar Scaffold Model on Graphics Processing Units. Front Cell Neurosci 2021; 15:623552. [PMID: 33897369] [PMCID: PMC8058369] [DOI: 10.3389/fncel.2021.623552] [Received: 10/30/2020] [Accepted: 03/15/2021]
Abstract
Large-scale simulation of detailed computational models of neuronal microcircuits plays a prominent role in reproducing and predicting the dynamics of the microcircuits. To reconstruct a microcircuit, one must choose neuron and synapse models, placements, connectivity, and numerical simulation methods according to anatomical and physiological constraints. For reconstruction and refinement, it is useful to be able to replace one module easily while leaving the others as they are. One way to achieve this is via a scaffolding approach, in which a simulation code is built on independent modules for placements, connections, and network simulations. Owing to this modularity, researchers can improve the performance of the entire simulation simply by replacing a problematic module with an improved one. Casali et al. (2019) developed a spiking network model of the cerebellar microcircuit using this approach; while it reproduces the electrophysiological properties of cerebellar neurons, it requires substantial computational time. Here, we followed this scaffolding approach and replaced the simulation module with an accelerated version on graphics processing units (GPUs). Our cerebellar scaffold model ran roughly 100 times faster than the original version. In fact, our model runs faster than real time, with good weak and strong scaling properties. To demonstrate an application of real-time simulation, we implemented synaptic plasticity mechanisms at parallel fiber-Purkinje cell synapses and carried out a simulation of the behavioral experiment known as gain adaptation of the optokinetic response. We confirmed that the computer simulation reproduced the experimental findings while completing in real time; specifically, simulating 2 s of biological time completed within 750 ms.
These results suggest that the scaffolding approach is a promising concept for the gradual development and refactoring of simulation codes for large-scale, elaborate microcircuits. Moreover, a real-time version of the cerebellar scaffold model, enabled by GPU parallel computing, may be useful for large-scale simulations and engineering applications that require real-time signal processing and motor control.
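The scaffolding idea above is essentially a pipeline of interchangeable modules behind fixed interfaces. A minimal structural sketch (the module names and stub back-ends are hypothetical, for illustration only):

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class Network:
    positions: Dict[str, list] = field(default_factory=dict)
    connections: List[Tuple[int, int]] = field(default_factory=list)

def build_and_simulate(place: Callable[[Network], None],
                       connect: Callable[[Network], None],
                       simulate: Callable[[Network], dict]) -> dict:
    """Scaffold pipeline: placement, connection, and simulation are
    independent modules, so any one can be swapped (e.g., a CPU simulator
    for a GPU-accelerated one) without touching the others."""
    net = Network()
    place(net)
    connect(net)
    return simulate(net)

# two interchangeable simulation back-ends (stubs for illustration)
def cpu_sim(net): return {"backend": "cpu", "n_cells": len(net.positions)}
def gpu_sim(net): return {"backend": "gpu", "n_cells": len(net.positions)}

def place_grid(net): net.positions = {f"cell{i}": [i, 0, 0] for i in range(4)}
def connect_chain(net): net.connections = [(i, i + 1) for i in range(3)]

out = build_and_simulate(place_grid, connect_chain, gpu_sim)
```

Swapping `cpu_sim` for `gpu_sim` changes only the last argument, which mirrors how the authors replaced the simulation module while keeping placements and connections intact.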
Affiliation(s)
- Rin Kuriyama: Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
- Claudia Casellato: Neurophysiology Unit, Neurocomputational Laboratory, Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Egidio D'Angelo: Neurophysiology Unit, Neurocomputational Laboratory, Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy; IRCCS Mondino Foundation, Pavia, Italy
- Tadashi Yamazaki: Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
9. Florimbi G, Torti E, Masoli S, D'Angelo E, Leporati F. Granular layEr Simulator: Design and Multi-GPU Simulation of the Cerebellar Granular Layer. Front Comput Neurosci 2021; 15:630795. [PMID: 33833674] [PMCID: PMC8023391] [DOI: 10.3389/fncom.2021.630795] [Received: 11/18/2020] [Accepted: 02/17/2021]
Abstract
In modern computational modeling, neuroscientists need to reproduce the long-lasting activity of large-scale networks in which neurons are described by highly complex mathematical models. These aspects strongly increase the computational load of the simulations, which can be performed efficiently by exploiting parallel systems to reduce processing times. Graphics Processing Unit (GPU) devices meet this need by providing high-performance computing on the desktop. In this work, the authors describe the development of a novel Granular layEr Simulator, implemented on a multi-GPU system, capable of reconstructing the cerebellar granular layer in 3D space and reproducing its neuronal activity. The reconstruction is characterized by a high level of novelty and realism, considering axonal/dendritic field geometries oriented in 3D space and following convergence/divergence rates provided in the literature. Neurons are modeled using Hodgkin–Huxley representations. The network is validated by reproducing typical behaviors that are well documented in the literature, such as the center-surround organization. The reconstruction of a network whose volume is 600 × 150 × 1,200 μm³ with 432,000 granules, 972 Golgi cells, 32,399 glomeruli, and 4,051 mossy fibers takes 235 s on an Intel i9 processor. Reproducing 10 s of activity takes only 4.34 or 3.37 h on a single- or multi-GPU desktop system (with one or two NVIDIA RTX 2080 GPUs, respectively). Moreover, the code takes only 3.52 or 2.44 h if run on one or two NVIDIA V100 GPUs, respectively. The relevant speedups reached (up to ~38× in the single-GPU version and ~55× in the multi-GPU version) clearly demonstrate that GPU technology is highly suitable for realistic large-network simulations.
Affiliation(s)
- Giordana Florimbi: Custom Computing and Programmable Systems Laboratory, Department of Electrical, Computer and Biomedical Engineering, University of Pavia, Pavia, Italy
- Emanuele Torti: Custom Computing and Programmable Systems Laboratory, Department of Electrical, Computer and Biomedical Engineering, University of Pavia, Pavia, Italy
- Stefano Masoli: Neurocomputational Laboratory, Neurophysiology Unit, Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Egidio D'Angelo: Neurocomputational Laboratory, Neurophysiology Unit, Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy; Istituti di Ricovero e Cura a Carattere Scientifico (IRCCS) Mondino Foundation, Pavia, Italy
- Francesco Leporati: Custom Computing and Programmable Systems Laboratory, Department of Electrical, Computer and Biomedical Engineering, University of Pavia, Pavia, Italy
10. Yamazaki T, Igarashi J, Yamaura H. Human-scale Brain Simulation via Supercomputer: A Case Study on the Cerebellum. Neuroscience 2021; 462:235-246. [PMID: 33482329] [DOI: 10.1016/j.neuroscience.2021.01.014] [Received: 02/15/2020] [Revised: 12/30/2020] [Accepted: 01/06/2021]
Abstract
Performance of supercomputers has been steadily and exponentially increasing for the past 20 years, and is expected to increase further. This unprecedented computational power enables us to build and simulate large-scale neural network models composed of tens of billions of neurons and tens of trillions of synapses with detailed anatomical connections and realistic physiological parameters. Such "human-scale" brain simulation could be considered a milestone in computational neuroscience and even in general neuroscience. Towards this milestone, it is mandatory to introduce modern high-performance computing technology into neuroscience research. In this article, we provide an introductory landscape about large-scale brain simulation on supercomputers from the viewpoints of computational neuroscience and modern high-performance computing technology for specialists in experimental as well as computational neurosciences. This introduction to modeling and simulation methods is followed by a review of various representative large-scale simulation studies conducted to date. Then, we direct our attention to the cerebellum, with a review of more simulation studies specific to that region. Furthermore, we present recent simulation results of a human-scale cerebellar network model composed of 86 billion neurons on the Japanese flagship supercomputer K (now retired). Finally, we discuss the necessity and importance of human-scale brain simulation, and suggest future directions of such large-scale brain simulation research.
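To see why supercomputers are mandatory at this scale, a rough back-of-envelope memory estimate helps. The per-neuron and per-synapse state sizes below are assumptions for illustration, not figures from the paper:

```python
# Rough, illustrative memory estimate for a human-scale simulation
# (state sizes are assumptions, not figures from the paper).
n_neurons  = 86_000_000_000        # ~86 billion neurons
n_synapses = n_neurons * 1_000     # assume ~10^3 synapses per neuron
bytes_per_neuron  = 32             # e.g., a few double-precision state variables
bytes_per_synapse = 8              # e.g., weight + delay, compactly stored

total_bytes = n_neurons * bytes_per_neuron + n_synapses * bytes_per_synapse
total_pib = total_bytes / 2**50    # convert to pebibytes
```

Even with these conservative assumptions the state alone runs to a large fraction of a pebibyte, far beyond any single node, so the model must be distributed across a supercomputer.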
Affiliation(s)
- Tadashi Yamazaki: Graduate School of Informatics and Engineering, The University of Electro-Communications, Japan
- Hiroshi Yamaura: Graduate School of Informatics and Engineering, The University of Electro-Communications, Japan
11. Data-Driven Compartmental Modeling Method for Harmonic Analysis—A Study of the Electric Arc Furnace. Energies 2019. [DOI: 10.3390/en12224378]
Abstract
The electric arc furnace (EAF) accounts for almost one-third of global iron and steel production, and its harmonic pollution has drawn attention. An accurate EAF harmonic model is essential to evaluate the harmonic pollution of an EAF. In this paper, a data-driven compartmental modeling method (DCMM) is proposed for the multi-mode EAF harmonic model. The proposed DCMM considers the coupling relationships among harmonics of different frequencies to enhance modeling accuracy; meanwhile, the dimensions of the harmonic dataset are reduced to improve computational efficiency. Furthermore, the proposed DCMM is applicable to establishing a multi-mode EAF harmonic model by dividing the multi-mode EAF harmonic dataset into several clusters corresponding to the different modes of the EAF smelting process. The performance evaluation results show that the proposed DCMM is adaptive in establishing the multi-mode model, even if the data volumes, number of clusters, and sample distribution change significantly. Finally, a case study of EAF harmonic data is conducted to establish a multi-mode EAF harmonic model, showing that the proposed DCMM is effective and accurate in EAF modeling.
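The mode-splitting step above amounts to clustering harmonic measurements so each cluster corresponds to one operating stage. A plain k-means sketch on synthetic two-mode data stands in for the paper's DCMM clustering (the data and feature choice are invented for illustration):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: alternate nearest-center assignment and centroid update.
    Used here to split a multi-mode dataset into one cluster per operating mode."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers

# synthetic two-mode "harmonic" data: two stages with distinct amplitude profiles
rng = np.random.default_rng(1)
mode_a = rng.normal([1.0, 0.2], 0.05, size=(100, 2))
mode_b = rng.normal([0.3, 0.9], 0.05, size=(100, 2))
X = np.vstack([mode_a, mode_b])
labels, centers = kmeans(X, k=2)
```

Once each measurement carries a mode label, a separate harmonic model can be fitted per cluster, which is the compartmental part of the approach.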
12. Hu R, Chang S, Wang H, He J, Huang Q. Efficient Multispike Learning for Spiking Neural Networks Using Probability-Modulated Timing Method. IEEE Trans Neural Netw Learn Syst 2019; 30:1984-1997. [PMID: 30418889] [DOI: 10.1109/tnnls.2018.2875471]
Abstract
In supervised learning algorithms for spiking neural networks (SNNs), error functions are normally based on the distance between output spikes and target spikes. Due to the discontinuous nature of the internal state of spiking neurons, it is challenging to ensure that the numbers of output spikes and target spikes remain identical in multispike learning. This problem is conventionally dealt with by using the smaller of the number of desired spikes and the number of actual output spikes in learning. However, with this approach information is lost, as some spikes are neglected. In this paper, a probability-modulated timing mechanism is built on stochastic neurons, where the discontinuous spike patterns are converted into the likelihood of generating the desired output spike trains. By applying this mechanism to a probability-modulated spiking classifier, a probability-modulated SNN (PMSNN) is constructed. In its multilayer and multispike learning structure, more inputs are incorporated and mapped to the target spike trains. A clustering-rule connection mechanism is also applied to a reservoir to improve the efficiency of information transmission among synapses, mapping highly correlated inputs to adjacent neurons. Results of comparisons between the proposed method and popular SNN algorithms show that the PMSNN yields higher efficiency and requires fewer parameters.
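The core of the stochastic-neuron idea is to replace the hard firing threshold with a smooth firing probability, so the expected spike train becomes a continuous function of the membrane potential. A minimal sketch (the sigmoid form and parameter values are illustrative assumptions, not the paper's exact formulation):

```python
import math, random

def spike_probability(v, theta=1.0, beta=0.1):
    """Stochastic firing: instead of a hard threshold, the neuron fires with a
    probability that grows smoothly with membrane potential (sigmoid around
    the threshold theta with sharpness beta)."""
    return 1.0 / (1.0 + math.exp(-(v - theta) / beta))

def stochastic_spikes(v_trace, seed=0):
    """Sample a binary spike train from per-step firing probabilities."""
    rng = random.Random(seed)
    return [int(rng.random() < spike_probability(v)) for v in v_trace]

# far below threshold -> almost never fires; far above -> almost always
low  = stochastic_spikes([0.0] * 1000)
high = stochastic_spikes([2.0] * 1000)
```

Because the likelihood of a target spike train is smooth in `v`, no output or target spikes need to be discarded to match counts, which is the information-loss problem the paper addresses.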
13. Exploration of a mechanism to form bionic, self-growing and self-organizing neural network. Artif Intell Rev 2019. [DOI: 10.1007/s10462-018-9626-2]
14. Luque NR, Naveros F, Carrillo RR, Ros E, Arleo A. Spike burst-pause dynamics of Purkinje cells regulate sensorimotor adaptation. PLoS Comput Biol 2019; 15:e1006298. [PMID: 30860991] [PMCID: PMC6430425] [DOI: 10.1371/journal.pcbi.1006298] [Received: 06/12/2018] [Revised: 03/22/2019] [Accepted: 01/08/2019]
Abstract
Cerebellar Purkinje cells mediate accurate eye movement coordination. However, it remains unclear how oculomotor adaptation depends on the interplay between the characteristic Purkinje cell response patterns, namely tonic firing, bursting, and spike pauses. Here, a spiking cerebellar model assesses the role of Purkinje cell firing patterns in vestibulo-ocular reflex (VOR) adaptation. The model captures the cerebellar microcircuit properties and incorporates spike-based synaptic plasticity at multiple cerebellar sites. A detailed Purkinje cell model reproduces the three spike-firing patterns, which are shown to regulate the cerebellar output. Our results suggest that pauses following Purkinje complex spikes (bursts) encode transient disinhibition of target medial vestibular nuclei, critically gating the vestibular signals conveyed by mossy fibres. This gating mechanism accounts for early and coarse VOR acquisition, prior to late reflex consolidation. In addition, properly timed and sized Purkinje cell bursts allow the ratio between long-term depression and potentiation (LTD/LTP) to be finely shaped at mossy fibre-medial vestibular nuclei synapses, which optimises VOR consolidation. Tonic Purkinje cell firing maintains the consolidated VOR through time. Importantly, pauses are crucial to facilitate VOR phase-reversal learning by reshaping previously learnt synaptic weight distributions. Altogether, these results predict that Purkinje spike burst-pause dynamics are instrumental to VOR learning and reversal adaptation. Cerebellar Purkinje cells regulate accurate eye movement coordination; however, it remains unclear how cerebellar-dependent oculomotor adaptation depends on the interplay between their characteristic response patterns: tonic firing, high-frequency bursting, and post-complex-spike pauses. We explore the role of Purkinje spike burst-pause dynamics in VOR adaptation.
A biophysical model of the Purkinje cell is at the core of a spiking network model that captures the cerebellar microcircuit properties and incorporates spike-based synaptic plasticity mechanisms at different cerebellar sites. We show that Purkinje spike burst-pause dynamics are critical for (1) gating the vestibular-motor response association during VOR acquisition; (2) mediating the LTD/LTP balance for VOR consolidation; (3) reshaping synaptic efficacy distributions for VOR phase-reversal adaptation; and (4) explaining the reversal VOR gain discontinuities during sleep.
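The LTD/LTP balance discussed above is usually formalized as an asymmetric spike-timing window: pre-before-post pairings potentiate, post-before-pre pairings depress. A sketch of the classic exponential kernel (parameter values are illustrative, not those of the paper's plasticity rules):

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Classic exponential STDP kernel: dt_ms = t_post - t_pre.
    dt_ms > 0 (pre before post) -> potentiation (LTP lobe);
    dt_ms < 0 (post before pre) -> depression (LTD lobe).
    Shifting burst size/timing shifts which lobe pairings fall into,
    which is how the LTD/LTP ratio gets shaped."""
    if dt_ms >= 0:
        return a_plus * math.exp(-dt_ms / tau_plus)    # LTP lobe
    return -a_minus * math.exp(dt_ms / tau_minus)      # LTD lobe
```

Properly timed Purkinje bursts and pauses change the distribution of `dt_ms` values seen at the downstream synapses, and hence the net sign of plasticity.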
Affiliation(s)
- Niceto R. Luque: Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France. * E-mail: (NRL); (AA)
- Francisco Naveros: Department of Computer Architecture and Technology, CITIC-University of Granada, Granada, Spain
- Richard R. Carrillo: Department of Computer Architecture and Technology, CITIC-University of Granada, Granada, Spain
- Eduardo Ros: Department of Computer Architecture and Technology, CITIC-University of Granada, Granada, Spain
- Angelo Arleo: Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France. * E-mail: (NRL); (AA)
15. Naveros F, Luque NR, Ros E, Arleo A. VOR Adaptation on a Humanoid iCub Robot Using a Spiking Cerebellar Model. IEEE Trans Cybern 2019; 50:4744-4757. [PMID: 30835236] [DOI: 10.1109/tcyb.2019.2899246]
Abstract
We embed a spiking cerebellar model within an adaptive real-time (RT) control loop that is able to operate a real robotic body (iCub) performing different vestibulo-ocular reflex (VOR) tasks. The spiking neural network computation, including event- and time-driven neural dynamics, neural activity, and spike-timing-dependent plasticity (STDP) mechanisms, leads to a nondeterministic computation time caused by the neural activity volleys encountered during cerebellar simulation. This nondeterministic computation time motivates the integration of an RT supervisor module that ensures a well-orchestrated neural computation time and robot operation. Our neurorobotic experimental setup benefits from the biological sensorimotor delay between the cerebellum and the body to buffer computational overloads, as well as providing flexibility in adjusting the neural computation time and RT operation. The RT supervisor module provides incremental countermeasures that dynamically slow down or speed up the cerebellar simulation by either halting the simulation or disabling certain neural computation features (i.e., STDP mechanisms, spike propagation, and neural updates) to cope with the RT constraints imposed by real robot operation. This neurorobotic experimental setup is applied to different horizontal and vertical VOR adaptation tasks that are widely used by the neuroscience community to address cerebellar functioning. We aim to elucidate the manner in which the combination of the cerebellar neural substrate and the distributed plasticity shapes the cerebellar neural activity to mediate motor adaptation. This paper underlines the need for a two-stage learning process to facilitate VOR acquisition.
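The incremental-countermeasure policy of the RT supervisor can be sketched as a budgeted loop that sheds optional computation features under overload and restores them when there is slack. The structure, feature names as flags, and cost numbers below are hypothetical, for illustration only, not the paper's actual code:

```python
def supervise(step_costs, budget,
              features=("stdp", "spike_propagation", "neural_updates")):
    """RT-supervisor sketch (hypothetical): each step has a base cost, and each
    enabled optional feature adds extra cost. If a step exceeds the budget,
    disable the most expensive remaining feature; if there is ample slack,
    re-enable the most recently disabled one. Logs (pre-adjustment cost,
    post-adjustment enabled features) per step."""
    feature_cost = {"stdp": 0.4, "spike_propagation": 0.2, "neural_updates": 0.1}
    enabled = list(features)     # features ordered most to least expensive
    log = []
    for base in step_costs:
        cost = base + sum(feature_cost[f] for f in enabled)
        if cost > budget and enabled:
            enabled.pop(0)                              # shed most expensive
        elif cost < 0.8 * budget and len(enabled) < len(features):
            enabled.insert(0, features[len(features) - len(enabled) - 1])
        log.append((cost, tuple(enabled)))
    return log

# overload on the first steps sheds STDP; later slack restores it
log = supervise([0.5, 0.5, 0.1, 0.1], budget=1.0)
```

This mirrors the abstract's description: plasticity (STDP) is sacrificed first because the control loop can survive a pause in learning far more gracefully than a missed motor deadline.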
16. Antonietti A, Casellato C, D'Angelo E, Pedrocchi A. Model-Driven Analysis of Eyeblink Classical Conditioning Reveals the Underlying Structure of Cerebellar Plasticity and Neuronal Activity. IEEE Trans Neural Netw Learn Syst 2017; 28:2748-2762. [PMID: 27608482] [DOI: 10.1109/tnnls.2016.2598190]
Abstract
The cerebellum plays a critical role in sensorimotor control. However, how the specific circuits and plastic mechanisms of the cerebellum are engaged in closed-loop processing is still unclear. We developed an artificial sensorimotor control system embedding a detailed spiking cerebellar microcircuit with three bidirectional plasticity sites. This proved able to reproduce a cerebellar-driven associative paradigm, eyeblink classical conditioning (EBCC), in which a precise time relationship between an unconditioned stimulus (US) and a conditioned stimulus (CS) is established. We challenged the spiking model to fit an experimental data set from human subjects. Two subsequent sessions of EBCC acquisition and extinction were recorded, and transcranial magnetic stimulation (TMS) was applied to the cerebellum to alter circuit function and plasticity. Evolutionary algorithms were used to find the near-optimal model parameters to reproduce the behaviors of subjects in the different sessions of the protocol. The main finding is that the optimized cerebellar model was able to learn to anticipate (predict) conditioned responses with accurate timing and success rate, demonstrating fast acquisition, memory stabilization, rapid extinction, and faster reacquisition, as in EBCC in humans. The firing of Purkinje cells (PCs) and deep cerebellar nuclei (DCN) changed during learning under the control of synaptic plasticity, which evolved at different rates, with faster acquisition in the cerebellar cortex than at DCN synapses. Eventually, reduced PC activity released DCN discharge just after the CS, precisely anticipating the US and causing the eyeblink. Moreover, a specific alteration in cortical plasticity explained the EBCC changes induced by cerebellar TMS in humans. In this paper, for the first time, it is shown how closed-loop simulations using detailed cerebellar microcircuit models can be successfully used to fit real experimental data sets. Thus, the changes in model parameters across the different sessions of the protocol unveil how implicit microcircuit mechanisms can generate normal and altered associative behaviors.
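The evolutionary parameter search described in this abstract can be sketched, in heavily simplified form, as a (mu + lambda) evolution strategy. The fitness function, parameter bounds, and genetic operators below are illustrative assumptions, not the paper's actual implementation (which optimizes the parameters of a spiking cerebellar simulator against recorded behavior):

```python
import random

def fitness(params, target_cr_rates, simulate):
    """Mean squared error between simulated and observed conditioned-response rates."""
    sim = simulate(params)
    return sum((s - t) ** 2 for s, t in zip(sim, target_cr_rates)) / len(sim)

def evolve(target_cr_rates, simulate, n_params=3, pop_size=20,
           generations=50, sigma=0.1, seed=0):
    """(mu + lambda) evolution strategy over parameters bounded in [0, 1]."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by fitness and keep the best half as parents (elitist selection).
        pop.sort(key=lambda p: fitness(p, target_cr_rates, simulate))
        parents = pop[: pop_size // 2]
        # Refill the population with Gaussian-mutated offspring, clipped to bounds.
        children = [[min(1.0, max(0.0, x + rng.gauss(0.0, sigma))) for x in p]
                    for p in parents]
        pop = parents + children
    return min(pop, key=lambda p: fitness(p, target_cr_rates, simulate))
```

With a cerebellar simulator swapped in for `simulate`, the same loop would rank candidate plasticity parameters by how well they reproduce the conditioned-response rates of each session.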
Collapse
Affiliation(s)
- Alberto Antonietti
- Neuroengineering and Medical Robotics Laboratory, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Claudia Casellato
- Neuroengineering and Medical Robotics Laboratory, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Egidio D'Angelo
- Department of Brain and Behavioral Sciences, University of Pavia, and Brain Connectivity Center, Istituto di Ricovero e Cura a Carattere Scientifico, Istituto Neurologico Nazionale C. Mondino, Pavia, Italy
- Alessandra Pedrocchi
- Neuroengineering and Medical Robotics Laboratory, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
Collapse
|
17
|
Xie X, Qu H, Yi Z, Kurths J. Efficient Training of Supervised Spiking Neural Network via Accurate Synaptic-Efficiency Adjustment Method. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2017; 28:1411-1424. [PMID: 28113824 DOI: 10.1109/tnnls.2016.2541339] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
The spiking neural network (SNN) is the third generation of neural networks and performs remarkably well in cognitive tasks such as pattern recognition. The temporal neural encoding mechanism found in the biological hippocampus gives SNNs more powerful computational capability than networks with other encoding schemes. However, this temporal encoding approach requires neurons to process information serially in time, which reduces learning efficiency significantly. To retain the powerful computational capability of the temporal encoding mechanism while overcoming its low efficiency in the training of SNNs, a new training algorithm, the accurate synaptic-efficiency adjustment method, is proposed in this paper. Inspired by the selective attention mechanism of the primate visual system, our algorithm attends only to the target spike times and ignores the voltage states at non-target times, resulting in a significant reduction of training time. In addition, our algorithm employs a cost function based on the difference between the output neuron's membrane potential and the firing threshold of the SNN, instead of the traditional precise firing-time distance. A normalized spike-timing-dependent plasticity learning window is applied to assign this error to the individual synapses for instructing their training. Comprehensive simulations are conducted to investigate the learning properties of our algorithm, with input neurons emitting both single and multiple spikes. Simulation results indicate that our algorithm achieves higher learning performance than other existing methods and state-of-the-art efficiency in the training of SNNs.
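The core update in this abstract, a voltage-difference cost distributed over synapses by a normalized STDP window, can be sketched as follows. The exponential window, its time constant, and the single-target-time simplification are assumptions for illustration, not the paper's exact formulation:

```python
import math

def stdp_window(dt, tau=10.0):
    """Exponential learning window: presynaptic spikes shortly before the
    target time get larger credit; acausal (later) spikes get none."""
    return math.exp(-dt / tau) if dt >= 0 else 0.0

def adjust_weights(weights, pre_spike_times, t_target, v_at_target,
                   v_threshold, lr=0.1, tau=10.0):
    """Distribute the voltage error (threshold minus membrane potential at
    the target spike time) across synapses via a normalized STDP window."""
    error = v_threshold - v_at_target          # > 0: neuron fired too weakly
    credits = [stdp_window(t_target - t, tau) for t in pre_spike_times]
    total = sum(credits) or 1.0                # normalize the window
    return [w + lr * error * c / total for w, c in zip(weights, credits)]
```

Because only the target time is inspected, no membrane-potential trace over non-target times is needed, which is the source of the claimed speed-up.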
Collapse
|
18
|
Xie X, Qu H, Liu G, Zhang M. Efficient training of supervised spiking neural networks via the normalized perceptron based learning rule. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2017.01.086] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
19
|
Naveros F, Garrido JA, Carrillo RR, Ros E, Luque NR. Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks. Front Neuroinform 2017; 11:7. [PMID: 28223930 PMCID: PMC5293783 DOI: 10.3389/fninf.2017.00007] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2016] [Accepted: 01/18/2017] [Indexed: 12/12/2022] Open
Abstract
Modeling and simulating the neural structures that make up our central nervous system is instrumental for deciphering the computational principles beneath them. Higher levels of biological plausibility usually impose higher levels of complexity in mathematical modeling, from the neural to the behavioral level. This paper focuses on overcoming the simulation problems (accuracy and performance) derived from using higher levels of mathematical complexity at the neural level. This study proposes different techniques for simulating neural models with incremental levels of mathematical complexity: the leaky integrate-and-fire (LIF), adaptive exponential integrate-and-fire (AdEx), and Hodgkin-Huxley (HH) neural models (ranging from low to high neural complexity). The studied techniques are classified into two main families depending on how the neural-model dynamics are evaluated: the event-driven and the time-driven families. Whilst event-driven techniques pre-compile and store the neural dynamics within look-up tables, time-driven techniques compute the neural dynamics iteratively during the simulation. We propose two modifications for the event-driven family: a look-up table recombination to better cope with incremental neural complexity, together with better handling of synchronous input activity. For the time-driven family, we propose a modification in computing the neural dynamics: the bi-fixed-step integration method. This method automatically adjusts the simulation step size to better cope with the stiffness of the neural model dynamics running on CPU platforms. One version of this method is also implemented for hybrid CPU-GPU platforms. Finally, we analyze how the performance and accuracy of these modifications evolve with increasing levels of neural complexity. We also demonstrate how the proposed modifications, which constitute the main contribution of this study, systematically outperform the traditional event- and time-driven techniques under increasing levels of neural complexity.
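The bi-fixed-step idea for the time-driven family can be illustrated with a minimal LIF integrator that switches between a coarse and a fine fixed step near threshold. The 20% switching heuristic, the forward-Euler scheme, and all parameter values are illustrative assumptions, not the authors' actual implementation:

```python
def lif_bi_fixed_step(i_input, t_end, dt_long=1.0, dt_short=0.1,
                      tau_m=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Forward-Euler LIF integration with two fixed step sizes: a long
    step far from threshold, a short step in the stiff region near it."""
    v, t, spikes = v_rest, 0.0, []
    while t < t_end:
        # Switch to the fine step when the dynamics become stiff
        # (here: membrane potential within 20% of threshold).
        dt = dt_short if v > 0.8 * v_thresh else dt_long
        v += dt * (-(v - v_rest) + i_input(t)) / tau_m
        t += dt
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
    return spikes
```

The coarse step keeps most of the simulation cheap, while the fine step preserves spike-timing accuracy where the membrane potential changes fastest.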
Collapse
Affiliation(s)
- Francisco Naveros
- Department of Computer Architecture and Technology, Research Centre for Information and Communication Technologies, University of Granada, Granada, Spain
- Jesus A Garrido
- Department of Computer Architecture and Technology, Research Centre for Information and Communication Technologies, University of Granada, Granada, Spain
- Richard R Carrillo
- Department of Computer Architecture and Technology, Research Centre for Information and Communication Technologies, University of Granada, Granada, Spain
- Eduardo Ros
- Department of Computer Architecture and Technology, Research Centre for Information and Communication Technologies, University of Granada, Granada, Spain
- Niceto R Luque
- Vision Institute, Aging in Vision and Action Lab, Paris, France; CNRS, INSERM, Pierre and Marie Curie University, Paris, France
Collapse
|
20
|
Prieto A, Prieto B, Ortigosa EM, Ros E, Pelayo F, Ortega J, Rojas I. Neural networks: An overview of early research, current frameworks and new challenges. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2016.06.014] [Citation(s) in RCA: 161] [Impact Index Per Article: 17.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/24/2023]
|
21
|
An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks. PLoS One 2016; 11:e0150329. [PMID: 27044001 PMCID: PMC4820126 DOI: 10.1371/journal.pone.0150329] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2015] [Accepted: 02/11/2016] [Indexed: 11/21/2022] Open
Abstract
Spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike-emitting and information-processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and the temporal encoding approach require neurons to process information serially in space and time, respectively, which reduces training efficiency significantly. Most existing methods for training hierarchical SNNs are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and parameter sensitivity. To keep the powerful computational capability of the hierarchical structure and temporal encoding mechanism while overcoming the low efficiency of existing algorithms, a new training algorithm, Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are obtained by solving a quadratic function in the spike response model, instead of checking postsynaptic voltage states at all time points as traditional algorithms do. In the feedback weight modification, the computational error is propagated to previous layers by presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm establishes the mathematical relation between the weight variation and the voltage error change, which makes normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms traditional multi-layer SNN algorithms in terms of learning efficiency and parameter sensitivity, as demonstrated by the comprehensive experimental results in this paper.
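The closed-form feedforward step, solving a quadratic for the output spike time in the spike response model, can be sketched as follows. It assumes the common double-exponential kernel eps(t) = exp(-t/tau_m) - exp(-t/tau_s) with tau_s = tau_m/2, so that the substitution x = exp(-t/tau_m) makes the threshold condition quadratic in x; the kernel choice and the single-crossing simplification are assumptions, and the paper's exact model may differ:

```python
import math

def srm_spike_time(weights, spike_times, theta=1.0, tau_m=4.0):
    """Closed-form output spike time of a spike response model neuron.
    With tau_s = tau_m / 2, V(t) = sum_i w_i * (c_i*x - (c_i*x)**2)
    where x = exp(-t / tau_m) and c_i = exp(t_i / tau_m), so V(t) = theta
    is a quadratic a*x**2 + b*x - theta = 0 and needs no time stepping.
    Assumes the crossing happens after all presynaptic spikes."""
    c = [math.exp(t_i / tau_m) for t_i in spike_times]
    a = -sum(w * ci * ci for w, ci in zip(weights, c))  # x**2 coefficient
    b = sum(w * ci for w, ci in zip(weights, c))        # x coefficient
    disc = b * b + 4.0 * a * theta
    if disc < 0 or a == 0:
        return None  # membrane potential never reaches threshold
    x = (-b - math.sqrt(disc)) / (2.0 * a)  # larger root = earlier crossing
    if x <= 0 or x > 1:
        return None
    return -tau_m * math.log(x)
```

Avoiding the per-time-step threshold check is what makes the feedforward pass cheap relative to simulation-based evaluation.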
Collapse
|
22
|
Gosui M, Yamazaki T. Real-World-Time Simulation of Memory Consolidation in a Large-Scale Cerebellar Model. Front Neuroanat 2016; 10:21. [PMID: 26973472 PMCID: PMC4776399 DOI: 10.3389/fnana.2016.00021] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2015] [Accepted: 02/18/2016] [Indexed: 11/23/2022] Open
Abstract
We report the development of a large-scale spiking network model of the cerebellum composed of more than 1 million neurons. The model is implemented on graphics processing units (GPUs), which are dedicated hardware for parallel computing. Using 4 GPUs simultaneously, we achieve real-time simulation, in which 1 s of cerebellar activity is simulated within 1 s of real-world time at a temporal resolution of 1 ms. This allows us to carry out very long-term computer simulations of cerebellar activity in a practical time with millisecond temporal resolution. Using the model, we carry out a computer simulation of long-term gain adaptation of optokinetic response (OKR) eye movements over 5 days, aimed at studying the neural mechanisms of post-training memory consolidation. The simulation results are consistent with animal experiments and our theory of post-training memory consolidation. These results suggest that real-time computing provides a useful means to study very slow neural processes such as memory consolidation in the brain.
Collapse
Affiliation(s)
- Masato Gosui
- Department of Communication Engineering and Informatics, Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
- Tadashi Yamazaki
- Department of Communication Engineering and Informatics, Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
- Neuroinformatics Japan Center, RIKEN Brain Science Institute, Saitama, Japan
- Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology, Ibaraki, Japan
Collapse
|
23
|
Luque NR, Garrido JA, Naveros F, Carrillo RR, D'Angelo E, Ros E. Distributed Cerebellar Motor Learning: A Spike-Timing-Dependent Plasticity Model. Front Comput Neurosci 2016; 10:17. [PMID: 26973504 PMCID: PMC4773604 DOI: 10.3389/fncom.2016.00017] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2015] [Accepted: 02/15/2016] [Indexed: 11/13/2022] Open
Abstract
Deep cerebellar nuclei neurons receive both inhibitory (GABAergic) synaptic currents from Purkinje cells (within the cerebellar cortex) and excitatory (glutamatergic) synaptic currents from mossy fibers. These two deep cerebellar nucleus inputs are thought to be adaptive as well, embedding interesting properties in the framework of accurate movements. We show that distributed spike-timing-dependent plasticity (STDP) mechanisms located at different cerebellar sites (parallel fibers to Purkinje cells, mossy fibers to deep cerebellar nucleus cells, and Purkinje cells to deep cerebellar nucleus cells) in closed-loop simulations provide an explanation for the complex learning properties of the cerebellum in motor learning. Concretely, we propose a new mechanistic cerebellar spiking model in which the deep cerebellar nuclei embed a dual functionality, acting both as a gain adaptation mechanism and as a facilitator of slow memory consolidation at mossy fiber to deep cerebellar nucleus synapses. Equipping the cerebellum with excitatory (e-STDP) and inhibitory (i-STDP) mechanisms at the deep cerebellar nuclei afferents allows the accommodation of synaptic memories that were formed at parallel fiber to Purkinje cell synapses and then transferred to mossy fiber to deep cerebellar nucleus synapses. These adaptive mechanisms also contribute to modulating the deep cerebellar nucleus output firing rate (output gain modulation toward optimizing its working range).
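A minimal sketch of pair-based e-STDP/i-STDP at deep-nuclei afferents is given below. The exponential kernels, the amplitudes, and in particular the mirrored window for the inhibitory case are illustrative assumptions rather than the kernel shapes used in the paper's model:

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau=20.0, inhibitory=False, w_min=0.0, w_max=1.0):
    """Pair-based STDP at a deep-nuclei afferent. The excitatory rule
    (e-STDP, e.g. mossy fiber -> DCN) potentiates on pre-before-post
    pairings; the inhibitory rule (i-STDP, e.g. Purkinje cell -> DCN)
    is modeled here, as an assumption, with the mirrored window."""
    dt = t_post - t_pre
    if dt >= 0:
        dw = a_plus * math.exp(-dt / tau)    # causal pairing: LTP
    else:
        dw = -a_minus * math.exp(dt / tau)   # acausal pairing: LTD
    if inhibitory:
        dw = -dw                             # mirrored window for i-STDP
    return min(w_max, max(w_min, w + dw))
```

Applying the two rules to the glutamatergic and GABAergic afferents of the same nucleus neuron is what lets the model shift synaptic memory from the cortex to the nuclei while keeping the output firing rate in its working range.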
Collapse
Affiliation(s)
- Niceto R Luque
- Department of Computer Architecture and Technology, Research Centre for Information and Communications Technologies of the University of Granada (CITIC-UGR), Granada, Spain
- Jesús A Garrido
- Department of Computer Architecture and Technology, Research Centre for Information and Communications Technologies of the University of Granada (CITIC-UGR), Granada, Spain
- Francisco Naveros
- Department of Computer Architecture and Technology, Research Centre for Information and Communications Technologies of the University of Granada (CITIC-UGR), Granada, Spain
- Richard R Carrillo
- Department of Computer Architecture and Technology, Research Centre for Information and Communications Technologies of the University of Granada (CITIC-UGR), Granada, Spain
- Egidio D'Angelo
- Brain Connectivity Center, Istituto di Ricovero e Cura a Carattere Scientifico, Istituto Neurologico Nazionale Casimiro Mondino, Pavia, Italy; Department of Brain and Behavioural Sciences, University of Pavia, Pavia, Italy
- Eduardo Ros
- Department of Computer Architecture and Technology, Research Centre for Information and Communications Technologies of the University of Granada (CITIC-UGR), Granada, Spain
Collapse
|