1
Perez-Peña F, Cifredo-Chacon MA, Quiros-Olozabal A. Digital neuromorphic real-time platform. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.09.004]
2
Antonietti A, Monaco J, D'Angelo E, Pedrocchi A, Casellato C. Dynamic Redistribution of Plasticity in a Cerebellar Spiking Neural Network Reproducing an Associative Learning Task Perturbed by TMS. Int J Neural Syst 2018; 28:1850020. [PMID: 29914314 DOI: 10.1142/s012906571850020x]
Abstract
During natural learning, synaptic plasticity is thought to evolve dynamically and redistribute within and among subcircuits. This process should emerge in plastic neural networks evolving under behavioral feedback and should involve changes distributed across multiple synaptic sites. In eyeblink classical conditioning (EBCC), the cerebellum learns to predict the precise timing between two stimuli; EBCC thus represents an elementary yet meaningful paradigm for investigating cerebellar network function. We simulated EBCC mechanisms by reconstructing a realistic cerebellar microcircuit model and embedding multiple plasticity rules imitating those revealed experimentally. The model was tuned to fit experimental human EBCC data, estimating the underlying learning time constants. Learning started rapidly, with plastic changes in the cerebellar cortex followed by slower changes in the deep cerebellar nuclei. This process was characterized by differential development of long-term potentiation and depression at individual synapses, with a progressive accumulation of plasticity distributed over the whole network. The experimental data included two EBCC sessions interleaved by transcranial magnetic stimulation (TMS). The experimental and model response data were not significantly different in any learning phase, and the model goodness-of-fit was [Formula: see text] for all experimental conditions. The models fitted on TMS data revealed a slower re-acquisition (session 2) compared to the control condition ([Formula: see text]). The plasticity parameters characterizing each model differed significantly among conditions, thus mechanistically explaining these response changes. Importantly, the model captured the alteration in EBCC consolidation caused by TMS and showed that TMS affected plasticity at cortical synapses, thereby altering the fast learning phase. This, in turn, also affected plasticity in the deep cerebellar nuclei, altering learning dynamics in the entire sensorimotor loop. These observations reveal a dynamic redistribution of changes over the entire network and suggest how TMS affects local circuit computation and memory processing in the cerebellum.
Affiliation(s)
- Alberto Antonietti: Department of Electronics, Information and Bioengineering, Politecnico di Milano, Piazza Leonardo Da Vinci 32, 20133 Milano, Italy
- Jessica Monaco: Department of Brain and Behavioral Sciences, University of Pavia, Via Forlanini 6, Pavia, Italy; Brain Connectivity Center, Istituto Neurologico IRCCS Fondazione C. Mondino, Via Mondino 2, I-27100 Pavia, Italy
- Egidio D'Angelo: Department of Brain and Behavioral Sciences, University of Pavia, Via Forlanini 6, Pavia, Italy; Brain Connectivity Center, Istituto Neurologico IRCCS Fondazione C. Mondino, Via Mondino 2, I-27100 Pavia, Italy
- Alessandra Pedrocchi: Department of Electronics, Information and Bioengineering, Politecnico di Milano, Piazza Leonardo Da Vinci 32, 20133 Milano, Italy
- Claudia Casellato: Department of Brain and Behavioral Sciences, University of Pavia, Via Forlanini 6, Pavia, Italy
3
Antonietti A, Casellato C, D'Angelo E, Pedrocchi A. Model-Driven Analysis of Eyeblink Classical Conditioning Reveals the Underlying Structure of Cerebellar Plasticity and Neuronal Activity. IEEE Trans Neural Netw Learn Syst 2017; 28:2748-2762. [PMID: 27608482 DOI: 10.1109/tnnls.2016.2598190]
Abstract
The cerebellum plays a critical role in sensorimotor control. However, how the specific circuits and plastic mechanisms of the cerebellum are engaged in closed-loop processing is still unclear. We developed an artificial sensorimotor control system embedding a detailed spiking cerebellar microcircuit with three bidirectional plasticity sites. This proved able to reproduce a cerebellar-driven associative paradigm, eyeblink classical conditioning (EBCC), in which a precise time relationship between an unconditioned stimulus (US) and a conditioned stimulus (CS) is established. We challenged the spiking model to fit an experimental data set from human subjects. Two subsequent sessions of EBCC acquisition and extinction were recorded, and transcranial magnetic stimulation (TMS) was applied to the cerebellum to alter circuit function and plasticity. Evolutionary algorithms were used to find near-optimal model parameters reproducing the behavior of subjects in the different sessions of the protocol. The main finding is that the optimized cerebellar model was able to learn to anticipate (predict) conditioned responses with accurate timing and success rate, demonstrating fast acquisition, memory stabilization, rapid extinction, and faster reacquisition, as in human EBCC. The firing of Purkinje cells (PCs) and deep cerebellar nuclei (DCN) changed during learning under the control of synaptic plasticity, which evolved at different rates, with faster acquisition in the cerebellar cortex than at DCN synapses. Eventually, reduced PC activity released DCN discharge just after the CS, precisely anticipating the US and causing the eyeblink. Moreover, a specific alteration in cortical plasticity explained the EBCC changes induced by cerebellar TMS in humans. This paper shows, for the first time, how closed-loop simulations using detailed cerebellar microcircuit models can successfully fit real experimental data sets. Thus, the changes of the model parameters across the different sessions of the protocol unveil how implicit microcircuit mechanisms can generate normal and altered associative behaviors.
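The parameter-fitting strategy described in this abstract (evolutionary search over model parameters against behavioral data) can be sketched in a few lines of Python. This is an illustrative toy, not the authors' code: the "simulation" is collapsed to a one-parameter saturating learning curve and the "observed" responses are hypothetical, but the selection-mutation loop is the same shape of algorithm.

```python
import math
import random

blocks = list(range(10))
# Hypothetical "observed" conditioned-response curve (rate constant 1/3),
# standing in for the human EBCC data fitted in the paper.
observed = [1 - math.exp(-b / 3.0) for b in blocks]

def simulate(rate):
    """Stand-in for the full spiking simulation: a saturating learning curve
    governed by a single learning-rate constant."""
    return [1 - math.exp(-b * rate) for b in blocks]

def fitness(rate):
    """Negative sum of squared errors between simulation and observations."""
    return -sum((s - o) ** 2 for s, o in zip(simulate(rate), observed))

def evolve(pop_size=20, generations=50, sigma=0.05, seed=1):
    """Tiny (mu + lambda) evolution strategy over the one free parameter."""
    rng = random.Random(seed)
    pop = [rng.uniform(0.01, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = [max(1e-3, p + rng.gauss(0.0, sigma)) for p in parents]
        pop = parents + children                # (mu + lambda) survival
    return max(pop, key=fitness)

best_rate = evolve()   # should land close to the generating value of 1/3
```

In the paper the evaluated individual is a full spiking-network simulation with several plasticity parameters; the loop structure is the same, only `simulate` is vastly more expensive.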
Affiliation(s)
- Alberto Antonietti: Neuroengineering and Medical Robotics Laboratory, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Claudia Casellato: Neuroengineering and Medical Robotics Laboratory, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Egidio D'Angelo: Department of Brain and Behavioral Sciences, Brain Connectivity Center, Istituto di Ricovero e Cura a Carattere Scientifico, Istituto Neurologico Nazionale C. Mondino, University of Pavia, Pavia, Italy
- Alessandra Pedrocchi: Neuroengineering and Medical Robotics Laboratory, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
4
Naveros F, Garrido JA, Carrillo RR, Ros E, Luque NR. Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks. Front Neuroinform 2017; 11:7. [PMID: 28223930 PMCID: PMC5293783 DOI: 10.3389/fninf.2017.00007]
Abstract
Modeling and simulating the neural structures that make up our central nervous system is instrumental for deciphering the underlying neural computations. Higher levels of biological plausibility usually impose higher levels of complexity in mathematical modeling, from the neural to the behavioral level. This paper focuses on overcoming the simulation problems (accuracy and performance) that derive from using higher levels of mathematical complexity at the neural level. The study proposes different techniques for simulating neural models with incremental levels of mathematical complexity: leaky integrate-and-fire (LIF), adaptive exponential integrate-and-fire (AdEx), and Hodgkin-Huxley (HH) neural models (ranging from low to high neural complexity). The studied techniques are classified into two main families depending on how the neural-model dynamics are evaluated: event-driven and time-driven. Whilst event-driven techniques pre-compile and store the neural dynamics in look-up tables, time-driven techniques compute the neural dynamics iteratively during the simulation. We propose two modifications for the event-driven family: a look-up table recombination to better cope with the incremental neural complexity, together with better handling of synchronous input activity. For the time-driven family, we propose a modification in computing the neural dynamics: the bi-fixed-step integration method. This method automatically adjusts the simulation step size to better cope with the stiffness of the neural-model dynamics running on CPU platforms. One version of this method is also implemented for hybrid CPU-GPU platforms. Finally, we analyze how the performance and accuracy of these modifications evolve with increasing levels of neural complexity. We also demonstrate how the proposed modifications, which constitute the main contribution of this study, systematically outperform the traditional event- and time-driven techniques under increasing levels of neural complexity.
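The event-driven versus time-driven distinction at the heart of this and the related simulator papers can be illustrated with a leaky integrate-and-fire membrane (a minimal sketch with arbitrary parameter values, not the simulator's code): between input events the LIF dynamics have a closed-form solution, so an event-driven scheme jumps directly to the next event, while a time-driven scheme iterates small fixed steps.

```python
import math

TAU = 20.0   # membrane time constant (ms), chosen arbitrarily

def decay_exact(v, dt):
    """Event-driven update: jump straight to the next event time using the
    closed-form solution of the leaky integrator dv/dt = -v / TAU."""
    return v * math.exp(-dt / TAU)

def decay_euler(v, dt, step=0.01):
    """Time-driven update: iterate many small fixed steps (forward Euler),
    paying per-step cost even when nothing is happening."""
    t = 0.0
    while t < dt:
        v += step * (-v / TAU)
        t += step
    return v

v0 = 1.0
exact = decay_exact(v0, 5.0)     # one multiplication per event
stepped = decay_euler(v0, 5.0)   # ~500 iterations for the same interval
```

The two answers agree closely, but the event-driven path does in one operation what the time-driven path does in hundreds; for stiff models (AdEx, HH) no closed form exists, which is what motivates look-up tables and the bi-fixed-step method.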
Affiliation(s)
- Francisco Naveros: Department of Computer Architecture and Technology, Research Centre for Information and Communication Technologies, University of Granada, Granada, Spain
- Jesus A Garrido: Department of Computer Architecture and Technology, Research Centre for Information and Communication Technologies, University of Granada, Granada, Spain
- Richard R Carrillo: Department of Computer Architecture and Technology, Research Centre for Information and Communication Technologies, University of Granada, Granada, Spain
- Eduardo Ros: Department of Computer Architecture and Technology, Research Centre for Information and Communication Technologies, University of Granada, Granada, Spain
- Niceto R Luque: Vision Institute, Aging in Vision and Action Lab, Paris, France; CNRS, INSERM, Pierre and Marie Curie University, Paris, France
5
Falotico E, Vannucci L, Ambrosano A, Albanese U, Ulbrich S, Vasquez Tieck JC, Hinkel G, Kaiser J, Peric I, Denninger O, Cauli N, Kirtay M, Roennau A, Klinker G, Von Arnim A, Guyot L, Peppicelli D, Martínez-Cañada P, Ros E, Maier P, Weber S, Huber M, Plecher D, Röhrbein F, Deser S, Roitberg A, van der Smagt P, Dillman R, Levi P, Laschi C, Knoll AC, Gewaltig MO. Connecting Artificial Brains to Robots in a Comprehensive Simulation Framework: The Neurorobotics Platform. Front Neurorobot 2017; 11:2. [PMID: 28179882 PMCID: PMC5263131 DOI: 10.3389/fnbot.2017.00002]
Abstract
Combined efforts in the fields of neuroscience, computer science, and biology have made it possible to design biologically realistic models of the brain based on spiking neural networks. Proper validation of these models requires an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task. Because of the complexity of these brain models, which at the current stage cannot meet real-time constraints, it is not possible to embed them in a real-world task; the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, so far no tool has made it easy to establish communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure for connecting brain models to detailed simulations of robot bodies and environments, and for using the resulting neurorobotic systems for in silico experimentation. To simplify the workflow and reduce the required programming skills, the platform provides editors for specifying experimental sequences and conditions, environments, robots, and brain-body connectors. In addition, a variety of existing robots and environments are provided. This work presents the architecture of the first release of the Neurorobotics Platform, developed in subproject 10 "Neurorobotics" of the Human Brain Project (HBP). In its current state, the Neurorobotics Platform allows researchers to design and run basic neurorobotics experiments using simulated robots and simulated environments linked to simplified versions of brain models. We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking task embedding a retina model on the iCub humanoid robot. These use cases demonstrate the applicability of the Neurorobotics Platform to robotic tasks as well as neuroscientific experiments.
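The Braitenberg use case gives a feel for what a closed-loop neurorobotic experiment couples together. The sketch below is a stand-alone toy, not Neurorobotics Platform code (the platform wires full brain simulators to simulated robot bodies): two light-driven spiking neurons steer a simulated robot toward the brighter side, closing the sensor-brain-actuator loop in a few lines.

```python
class LIF:
    """Minimal leaky integrate-and-fire unit (illustrative only; the platform
    itself delegates to full neural simulators)."""
    def __init__(self, tau=10.0, threshold=1.0):
        self.tau, self.threshold, self.v = tau, threshold, 0.0

    def step(self, current, dt=1.0):
        self.v += dt * (-self.v / self.tau + current)
        if self.v >= self.threshold:
            self.v = 0.0
            return 1          # emit a spike and reset
        return 0

def run_trial(light_left, light_right, steps=200):
    """One closed-loop episode: each light sensor drives one neuron, and each
    neuron's spikes drive the opposite wheel (crossed Braitenberg wiring),
    so the robot turns toward the brighter side. Negative heading = left turn."""
    left_n, right_n = LIF(), LIF()
    heading = 0.0
    for _ in range(steps):
        ls = left_n.step(light_left)      # spikes of the left-sensor neuron
        rs = right_n.step(light_right)    # spikes of the right-sensor neuron
        # left wheel <- right sensor, right wheel <- left sensor
        heading += 0.01 * (rs - ls)       # wheel-speed difference = turn rate
    return heading
```

Brighter light on the left yields a negative (leftward) heading change, and vice versa; in the platform, the same loop runs with a physics-simulated robot and a spiking brain model on either side of the transfer functions.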
Affiliation(s)
- Egidio Falotico: The BioRobotics Institute, Scuola Superiore Sant’Anna, Pontedera, Italy
- Lorenzo Vannucci: The BioRobotics Institute, Scuola Superiore Sant’Anna, Pontedera, Italy
- Ugo Albanese: The BioRobotics Institute, Scuola Superiore Sant’Anna, Pontedera, Italy
- Stefan Ulbrich: Department of Intelligent Systems and Production Engineering (ISPE – IDS/TKS), FZI Research Center for Information Technology, Karlsruhe, Germany
- Juan Camilo Vasquez Tieck: Department of Intelligent Systems and Production Engineering (ISPE – IDS/TKS), FZI Research Center for Information Technology, Karlsruhe, Germany
- Georg Hinkel: Department of Software Engineering (SE), FZI Research Center for Information Technology, Karlsruhe, Germany
- Jacques Kaiser: Department of Intelligent Systems and Production Engineering (ISPE – IDS/TKS), FZI Research Center for Information Technology, Karlsruhe, Germany
- Igor Peric: Department of Intelligent Systems and Production Engineering (ISPE – IDS/TKS), FZI Research Center for Information Technology, Karlsruhe, Germany
- Oliver Denninger: Department of Software Engineering (SE), FZI Research Center for Information Technology, Karlsruhe, Germany
- Nino Cauli: Computer and Robot Vision Laboratory, Instituto de Sistemas e Robotica, Instituto Superior Tecnico, Lisbon, Portugal
- Murat Kirtay: The BioRobotics Institute, Scuola Superiore Sant’Anna, Pontedera, Italy
- Arne Roennau: Department of Intelligent Systems and Production Engineering (ISPE – IDS/TKS), FZI Research Center for Information Technology, Karlsruhe, Germany
- Gudrun Klinker: Department of Informatics, Technical University of Munich, Garching, Germany
- Luc Guyot: Blue Brain Project (BBP), École polytechnique fédérale de Lausanne (EPFL), Genève, Switzerland
- Daniel Peppicelli: Blue Brain Project (BBP), École polytechnique fédérale de Lausanne (EPFL), Genève, Switzerland
- Pablo Martínez-Cañada: Department of Computer Architecture and Technology, CITIC, University of Granada, Granada, Spain
- Eduardo Ros: Department of Computer Architecture and Technology, CITIC, University of Granada, Granada, Spain
- Patrick Maier: Department of Informatics, Technical University of Munich, Garching, Germany
- Sandro Weber: Department of Informatics, Technical University of Munich, Garching, Germany
- Manuel Huber: Department of Informatics, Technical University of Munich, Garching, Germany
- David Plecher: Department of Informatics, Technical University of Munich, Garching, Germany
- Florian Röhrbein: Department of Informatics, Technical University of Munich, Garching, Germany
- Stefan Deser: Department of Informatics, Technical University of Munich, Garching, Germany
- Alina Roitberg: Department of Informatics, Technical University of Munich, Garching, Germany
- Rüdiger Dillman: Department of Intelligent Systems and Production Engineering (ISPE – IDS/TKS), FZI Research Center for Information Technology, Karlsruhe, Germany
- Paul Levi: Department of Intelligent Systems and Production Engineering (ISPE – IDS/TKS), FZI Research Center for Information Technology, Karlsruhe, Germany
- Cecilia Laschi: The BioRobotics Institute, Scuola Superiore Sant’Anna, Pontedera, Italy
- Alois C. Knoll: Department of Informatics, Technical University of Munich, Garching, Germany
- Marc-Oliver Gewaltig: Blue Brain Project (BBP), École polytechnique fédérale de Lausanne (EPFL), Genève, Switzerland
6
Prieto A, Prieto B, Ortigosa EM, Ros E, Pelayo F, Ortega J, Rojas I. Neural networks: An overview of early research, current frameworks and new challenges. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2016.06.014]
7
AER-SRT: Scalable spike distribution by means of synchronous serial ring topology address event representation. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2015.07.080]
8
Naveros F, Luque NR, Garrido JA, Carrillo RR, Anguita M, Ros E. A Spiking Neural Simulator Integrating Event-Driven and Time-Driven Computation Schemes Using Parallel CPU-GPU Co-Processing: A Case Study. IEEE Trans Neural Netw Learn Syst 2015; 26:1567-1574. [PMID: 25167556 DOI: 10.1109/tnnls.2014.2345844]
Abstract
Time-driven simulation methods on traditional CPU architectures perform well and precisely when simulating small-scale spiking neural networks. Nevertheless, they still have drawbacks when simulating large-scale systems. Conversely, event-driven simulation methods on CPUs and time-driven simulation methods on graphics processing units (GPUs) can outperform CPU time-driven methods under certain conditions. With this performance improvement in mind, we have developed an event- and time-driven spiking neural network simulator suitable for a hybrid CPU-GPU platform. Our neural simulator is able to efficiently simulate bio-inspired spiking neural networks consisting of different neural models, which can be distributed heterogeneously across both small and large layers or subsystems. For the sake of efficiency, the low-activity parts of the neural network can be simulated on the CPU using event-driven methods, while the high-activity subsystems can be simulated either on the CPU (a few neurons) or on the GPU (thousands or millions of neurons) using time-driven methods. In this brief, we have undertaken a comparative study of these different simulation methods. For benchmarking the different simulation methods and platforms, we have used a cerebellar-inspired neural-network model consisting of a very dense granular layer and a Purkinje layer with a smaller number of cells (according to biological ratios). Thus, this cerebellar-like network includes a dense diverging neural layer (increasing the dimensionality of its internal representation and sparse coding) and a converging neural layer (integration), similar to many other biologically inspired and artificial neural networks.
9
Matsubara T, Torikai H. Asynchronous cellular automaton-based neuron: theoretical analysis and on-FPGA learning. IEEE Trans Neural Netw Learn Syst 2013; 24:736-748. [PMID: 24808424 DOI: 10.1109/tnnls.2012.2230643]
Abstract
A generalized asynchronous cellular automaton-based neuron model is a special kind of cellular automaton that is designed to mimic the nonlinear dynamics of neurons. The model can be implemented as an asynchronous sequential logic circuit and its control parameter is the pattern of wires among the circuit elements that is adjustable after implementation in a field-programmable gate array (FPGA) device. In this paper, a novel theoretical analysis method for the model is presented. Using this method, stabilities of neuron-like orbits and occurrence mechanisms of neuron-like bifurcations of the model are clarified theoretically. Also, a novel learning algorithm for the model is presented. An equivalent experiment shows that an FPGA-implemented learning algorithm enables an FPGA-implemented model to automatically reproduce typical nonlinear responses and occurrence mechanisms observed in biological and model neurons.
10
Caron LC, D'Haene M, Mailhot F, Schrauwen B, Rouat J. Event management for large scale event-driven digital hardware spiking neural networks. Neural Netw 2013; 45:83-93. [PMID: 23522624 DOI: 10.1016/j.neunet.2013.02.005]
Abstract
The interest in brain-like computation has led to the design of a plethora of innovative neuromorphic systems. Individually, spiking neural networks (SNNs), event-driven simulation, and digital hardware neuromorphic systems each receive a lot of attention. Despite the popularity of event-driven SNNs in software, very few digital hardware architectures are found. This is because existing hardware solutions for event management scale badly with the number of events. This paper introduces the structured heap queue, a pipelined digital hardware data structure, and demonstrates its suitability for event management. The structured heap queue scales gracefully with the number of events, allowing the efficient implementation of large-scale digital hardware event-driven SNNs. The scaling is linear for memory, logarithmic for logic resources, and constant for processing time. The use of the structured heap queue is demonstrated on a field-programmable gate array (FPGA) with an image segmentation experiment and an SNN of 65,536 neurons and 513,184 synapses. Events can be processed at a rate of one every 7 clock cycles, and a 406×158-pixel image is segmented in 200 ms.
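A software analogue of the event-management problem this paper solves in hardware: a binary heap keyed on delivery time gives logarithmic insertion and extraction in timestamp order, which is what makes large event-driven SNNs tractable. Python's `heapq` is used purely for illustration; the paper's contribution is a pipelined hardware structure with constant processing time, which a software heap does not reproduce. Neuron names below are made up.

```python
import heapq

class EventQueue:
    """Priority queue of pending spike deliveries, keyed on delivery time.
    Push and pop are both O(log n) in the number of queued events."""
    def __init__(self):
        self._heap = []
        self._seq = 0        # tie-breaker: equal timestamps pop in push order

    def push(self, time, target_neuron):
        heapq.heappush(self._heap, (time, self._seq, target_neuron))
        self._seq += 1

    def pop(self):
        time, _, target = heapq.heappop(self._heap)
        return time, target

    def __len__(self):
        return len(self._heap)

q = EventQueue()
# Spikes get scheduled out of order, e.g. via different axonal delays.
for t, n in [(5.0, "PC1"), (1.5, "GR7"), (3.2, "GR2"), (1.5, "PC1")]:
    q.push(t, n)
delivered = [q.pop() for _ in range(len(q))]   # drained in timestamp order
```

The simulation loop then simply pops the earliest event, updates the target neuron, and pushes any spikes that update causes, so simulated time advances from event to event.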
Affiliation(s)
- Louis-Charles Caron: NECOTIS, Université de Sherbrooke, 2500 boul. de l'Université, Sherbrooke (Québec), J1K 2R1, Canada
11
Pande S, Morgan F, Cawley S, Bruintjes T, Smit G, McGinley B, Carrillo S, Harkin J, McDaid L. Modular Neural Tile Architecture for Compact Embedded Hardware Spiking Neural Network. Neural Process Lett 2013. [DOI: 10.1007/s11063-012-9274-5]
12
Luque NR, Garrido JA, Carrillo RR, Coenen OJMD, Ros E. Cerebellar input configuration toward object model abstraction in manipulation tasks. IEEE Trans Neural Netw 2011; 22:1321-8. [PMID: 21708499 DOI: 10.1109/tnn.2011.2156809]
Abstract
It is widely assumed that the cerebellum is one of the main nervous centers involved in correcting and refining planned movement and accounting for disturbances occurring during movement, for instance, due to the manipulation of objects which affect the kinematics and dynamics of the robot-arm plant model. In this brief, we evaluate a way in which a cerebellar-like structure can store a model in the granular and molecular layers. Furthermore, we study how its microstructure and input representations (context labels and sensorimotor signals) can efficiently support model abstraction toward delivering accurate corrective torque values for increasing precision during different-object manipulation. We also describe how the explicit (object-related input labels) and implicit state input representations (sensorimotor signals) complement each other to better handle different models and allow interpolation between two already stored models. This facilitates accurate corrections during manipulations of new objects taking advantage of already stored models.
Affiliation(s)
- Niceto R Luque: Department of Computer Architecture and Technology, University of Granada, Granada, Spain
13
Luque NR, Garrido JA, Carrillo RR, Coenen OJMD, Ros E. Cerebellarlike corrective model inference engine for manipulation tasks. IEEE Trans Syst Man Cybern B Cybern 2011; 41:1299-312. [PMID: 21536535 DOI: 10.1109/tsmcb.2011.2138693]
Abstract
This paper presents how a simple cerebellum-like architecture can infer corrective models in the framework of a control task when manipulating objects that significantly affect the dynamics model of the system. The main motivation is to evaluate a simplified bio-mimetic approach in the framework of a manipulation task. More concretely, the paper focuses on how the model-inference process takes place within a feedforward control loop based on the cerebellar structure, and on how these internal models are built up by means of biologically plausible synaptic adaptation mechanisms. This kind of investigation may provide clues on how biology achieves accurate control of non-stiff joints with low-power actuators, which involves controlling systems with high inertial components. This paper studies how a basic temporal-correlation kernel, combining long-term depression (LTD) with a constant long-term potentiation (LTP) at parallel fiber-Purkinje cell synapses, can effectively infer corrective models. We evaluate how this spike-timing-dependent plasticity correlates sensorimotor activity arriving through the parallel fibers with teaching signals (dependent on error estimates) arriving through the climbing fibers from the inferior olive. This paper addresses how these LTD and LTP components need to be well balanced with each other to achieve accurate learning. This is of interest for evaluating the relevant role of homeostatic mechanisms in biological systems, where adaptation occurs in a distributed manner. Furthermore, we illustrate how the temporal-correlation kernel can also work in the presence of transmission delays in sensorimotor pathways. We use a cerebellum-like spiking neural network which stores the corrective models as well-structured weight patterns distributed among the parallel-fiber-to-Purkinje-cell connections.
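The LTD/LTP balance described in this abstract can be made concrete with a toy weight-update rule (an illustrative sketch with arbitrary constants, not the paper's exact kernel): every parallel-fiber (PF) spike contributes a fixed LTP, while PF spikes that precede a climbing-fiber (CF) error spike are depressed through an exponential temporal-correlation kernel.

```python
import math

LTP = 0.002        # constant potentiation per PF spike (arbitrary value)
LTD_PEAK = 0.01    # peak depression when a PF spike just precedes the error
TAU_KERNEL = 50.0  # width of the temporal-correlation kernel (ms)

def weight_change(pf_spikes, cf_spikes):
    """Net change at one PF-Purkinje synapse: each PF spike adds fixed LTP,
    and each PF spike preceding a CF (error) spike subtracts LTD weighted by
    an exponential kernel of the PF-to-CF delay (times in ms)."""
    dw = LTP * len(pf_spikes)
    for cf in cf_spikes:
        for pf in pf_spikes:
            if pf <= cf:
                dw -= LTD_PEAK * math.exp(-(cf - pf) / TAU_KERNEL)
    return dw

# PF activity that reliably precedes the error signal is depressed overall...
correlated = weight_change(pf_spikes=[90.0, 95.0], cf_spikes=[100.0])
# ...while PF activity far from any error keeps its net potentiation.
uncorrelated = weight_change(pf_spikes=[90.0, 95.0], cf_spikes=[1000.0])
```

With these constants, error-correlated inputs lose weight and uncorrelated inputs slowly gain it, which is the balance the paper argues must be tuned for the network to store a stable corrective model.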
Affiliation(s)
- Niceto Rafael Luque: Department of Computer Architecture and Technology, University of Granada, Granada, Spain
15
Maguire L. Does Soft Computing Classify Research in Spiking Neural Networks? Int J Comput Intell Syst 2010. [DOI: 10.1080/18756891.2010.9727688]
16
Koickal TJ, Gouveia LC, Hamilton A. A programmable spike-timing based circuit block for reconfigurable neuromorphic computing. Neurocomputing 2009. [DOI: 10.1016/j.neucom.2008.12.036]
17
Lim H, Choe Y. Extrapolative delay compensation through facilitating synapses and its relation to the flash-lag effect. IEEE Trans Neural Netw 2008; 19:1678-88. [PMID: 18842473 DOI: 10.1109/tnn.2008.2001002]
Abstract
Neural conduction delay is a serious issue for organisms that need to act in real time. Various forms of flash-lag effect (FLE) suggest that the nervous system may perform extrapolation to compensate for delay. For example, in motion FLE, the position of a moving object is perceived to be ahead of a brief flash when they are actually colocalized. However, the precise mechanism for extrapolation at a single-neuron level has not been fully investigated. Our hypothesis is that facilitating synapses, with their dynamic sensitivity to the rate of change in the input, can serve as a neural basis for extrapolation. To test this hypothesis, we constructed and tested models of facilitating dynamics. First, we derived a spiking neuron model of facilitating dynamics at a single-neuron level, and tested it in the luminance FLE domain. Second, the spiking neuron model was extended to include multiple neurons and spike-timing-dependent plasticity (STDP), and was tested with orientation FLE. The results showed a strong relationship between delay compensation, FLE, and facilitating synapses/STDP. The results are expected to shed new light on real time and predictive processing in the brain, at the single neuron level.
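The paper's hypothesis, that facilitating synapses are sensitive to the rate of change of their input, can be sketched with a single facilitation variable that jumps at each presynaptic spike and decays between spikes (an illustrative model with arbitrary constants, in the spirit of standard facilitating-synapse dynamics rather than the paper's exact equations):

```python
import math

U = 0.2        # facilitation increment per spike (arbitrary)
TAU_F = 200.0  # facilitation decay time constant in ms (arbitrary)

def responses(spike_times):
    """Per-spike efficacy of a facilitating synapse: the facilitation
    variable u decays between spikes and jumps toward 1 at each spike, so
    shorter inter-spike intervals leave more residual facilitation."""
    u, last_t, out = 0.0, None, []
    for t in spike_times:       # times in ms, strictly increasing
        if last_t is not None:
            u *= math.exp(-(t - last_t) / TAU_F)   # decay since last spike
        u += U * (1.0 - u)                          # facilitation jump
        out.append(u)
        last_t = t
    return out

# An accelerating train (shrinking inter-spike intervals): efficacy ramps up,
# so the synapse effectively amplifies inputs whose rate is increasing --
# the property proposed here as a neural basis for extrapolative
# delay compensation.
accel = responses([0.0, 80.0, 140.0, 180.0, 200.0])
```

Because the efficacy grows fastest exactly when the input rate is rising, a downstream neuron driven through such a synapse responds as if it were reading a short extrapolation of the input, which is the link the paper draws to the flash-lag effect.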
Affiliation(s)
- Heejin Lim
- Department of Neurobiology and Anatomy, University of Texas Medical School at Houston, Houston, TX 77030, USA.
18
Carrillo RR, Ros E, Boucheny C, Coenen OJMD. A real-time spiking cerebellum model for learning robot control. Biosystems 2008; 94:18-27. [PMID: 18616974] [DOI: 10.1016/j.biosystems.2008.05.008]
Abstract
We describe a neural network model of the cerebellum based on integrate-and-fire spiking neurons with conductance-based synapses. The neuron characteristics are derived from our earlier detailed models of the different cerebellar neurons. We tested the cerebellum model in a real-time control application with a robotic platform. Delays were introduced in the different sensorimotor pathways according to the biological system. The main plasticity in the cerebellar model is a spike-timing dependent plasticity (STDP) at the parallel fiber to Purkinje cell connections. This STDP is driven by the inferior olive (IO) activity, which encodes an error signal using a novel probabilistic low frequency model. We demonstrate the cerebellar model in a robot control system using a target-reaching task. We test whether the system learns to reach different target positions in a non-destructive way, therefore abstracting a general dynamics model. To test the system's ability to self-adapt to different dynamical situations, we present results obtained after changing the dynamics of the robotic platform significantly (its friction and load). The experimental results show that the cerebellar-based system is able to adapt dynamically to different contexts.
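The plasticity described — depression of parallel-fiber synapses driven by inferior-olive error activity, against a background of slow potentiation — can be sketched as a simple error-driven update rule. This is an illustrative toy, not the paper's STDP kernel; the constants and function name are assumptions:

```python
# Toy PF->PC plasticity: an IO (error) spike depresses parallel-fiber
# synapses that were just active (LTD); PF activity without an error
# spike slowly potentiates the synapse (LTP).

def update_weight(w, pf_spike, io_spike, ltp=0.002, ltd=0.05):
    """One trial's update of a PF->PC weight (illustrative constants)."""
    if pf_spike and io_spike:
        w -= ltd * w          # error-driven depression
    elif pf_spike:
        w += ltp * (1.0 - w)  # slow potentiation toward a ceiling
    return min(max(w, 0.0), 1.0)

# Repeated pairing with the error signal drives the weight down,
# while error-free activity lets it creep up:
w_err, w_noerr = 0.5, 0.5
for _ in range(100):
    w_err = update_weight(w_err, pf_spike=True, io_spike=True)
    w_noerr = update_weight(w_noerr, pf_spike=True, io_spike=False)
```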
Affiliation(s)
- Richard R Carrillo
- Department of Computer Architecture and Technology, ETSI Informática y de Telecomunicación, University of Granada, Spain.
19
Voutsas K, Adamy J. A biologically inspired spiking neural network for sound source lateralization. IEEE Trans Neural Netw 2007; 18:1785-99. [PMID: 18051193] [DOI: 10.1109/tnn.2007.899623]
Abstract
In this paper, a binaural sound source lateralization spiking neural network (NN) will be presented which is inspired by most recent neurophysiological studies on the role of certain nuclei in the superior olivary complex (SOC) and the inferior colliculus (IC). The binaural sound source lateralization neural network (BiSoLaNN) is a spiking NN based on neural mechanisms, utilizing complex neural models, and attempting to simulate certain parts of nuclei of the auditory system in detail. The BiSoLaNN utilizes both excitatory and inhibitory ipsilateral and contralateral influences arrayed in only one delay line originating in the contralateral side to achieve a sharp azimuthal localization. It will be shown that the proposed model can be used both for purposes of understanding the mechanisms of an NN of the auditory system and for sound source lateralization tasks in technical applications, e.g., its use with the Darmstadt robotic head (DRH).
Affiliation(s)
- Kyriakos Voutsas
- Control Theory and Robotics Laboratory, Darmstadt University of Technology, Darmstadt 64283, Germany.
20
Maguire L, McGinnity T, Glackin B, Ghani A, Belatreche A, Harkin J. Challenges for large-scale implementations of spiking neural networks on FPGAs. Neurocomputing 2007. [DOI: 10.1016/j.neucom.2006.11.029]
21
Vogelstein RJ, Mallik U, Culurciello E, Cauwenberghs G, Etienne-Cummings R. A multichip neuromorphic system for spike-based visual information processing. Neural Comput 2007; 19:2281-300. [PMID: 17650061] [DOI: 10.1162/neco.2007.19.9.2281]
Abstract
We present a multichip, mixed-signal VLSI system for spike-based vision processing. The system consists of an 80 x 60 pixel neuromorphic retina and a 4800 neuron silicon cortex with 4,194,304 synapses. Its functionality is illustrated with experimental data on multiple components of an attention-based hierarchical model of cortical object recognition, including feature coding, salience detection, and foveation. This model exploits arbitrary and reconfigurable connectivity between cells in the multichip architecture, achieved by asynchronously routing neural spike events within and between chips according to a memory-based look-up table. Synaptic parameters, including conductance and reversal potential, are also stored in memory and are used to dynamically configure synapse circuits within the silicon neurons.
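The memory-based routing the abstract describes — each outgoing spike address indexes a table that yields target neurons plus per-synapse conductance and reversal potential — can be sketched schematically. The addresses and parameter values below are invented for illustration:

```python
# Address-event routing: a look-up table maps a presynaptic address to
# (postsynaptic address, conductance, reversal potential) entries, so
# connectivity is defined by memory contents rather than fixed wiring.

routing_table = {
    0: [(10, 0.5, 0.0), (11, 0.3, -70.0)],  # neuron 0 drives one excitatory, one inhibitory target
    1: [(10, 0.2, 0.0)],
}

def route_event(presyn_address):
    """Expand one incoming spike event into deliverable synaptic events."""
    return [
        {"target": post, "g": g, "e_rev": e_rev}
        for (post, g, e_rev) in routing_table.get(presyn_address, [])
    ]

events = route_event(0)
```

Reconfiguring the network then amounts to rewriting `routing_table`, which is what makes the multichip architecture's connectivity "arbitrary and reconfigurable".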
Affiliation(s)
- R Jacob Vogelstein
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA.
22
Pearson MJ, Pipe AG, Mitchinson B, Gurney K, Melhuish C, Gilhespy I, Nibouche M. Implementing Spiking Neural Networks for Real-Time Signal-Processing and Control Applications: A Model-Validated FPGA Approach. IEEE Trans Neural Netw 2007; 18:1472-87. [DOI: 10.1109/tnn.2007.891203]
23
Duren RW, Marks RJ, Reynolds PD, Trumbo ML. Real-time neural network inversion on the SRC-6e reconfigurable computer. IEEE Trans Neural Netw 2007; 18:889-901. [PMID: 17526353] [DOI: 10.1109/tnn.2007.891679]
Abstract
Implementation of real-time neural network inversion on the SRC-6e, a computer that uses multiple field-programmable gate arrays (FPGAs) as reconfigurable computing elements, is examined using a sonar application as a specific case study. A feedforward multilayer perceptron neural network is used to estimate the performance of the sonar system (Jung et al., 2001). A particle swarm algorithm uses the trained network to perform a search for the control parameters required to optimize the output performance of the sonar system in the presence of imposed environmental constraints (Fox et al., 2002). The particle swarm optimization (PSO) requires repetitive queries of the neural network. Alternatives for implementing neural networks and particle swarm algorithms in reconfigurable hardware are contrasted. The final implementation provides nearly two orders of magnitude of speed increase over a state-of-the-art personal computer (PC), providing a real-time solution.
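The optimization loop described — a particle swarm repeatedly querying the trained network as its objective — can be sketched as follows, with a stand-in quadratic in place of the sonar-performance network (all names and coefficients are illustrative assumptions):

```python
import random

# Minimal particle swarm: each particle tracks its personal best and
# the swarm tracks a global best. Every iteration queries the objective
# (in the paper, the trained neural network) once per particle, which
# is why the repetitive network evaluation dominates the runtime.

def surrogate(x):                 # stand-in for the trained NN's output
    return (x - 3.0) ** 2

def pso(objective, n_particles=20, iters=100, lo=-10.0, hi=10.0):
    random.seed(0)
    xs = [random.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = list(xs)
    gbest = min(xs, key=objective)
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vs[i] = 0.7 * vs[i] + 1.4 * r1 * (pbest[i] - xs[i]) + 1.4 * r2 * (gbest - xs[i])
            xs[i] += vs[i]
            if objective(xs[i]) < objective(pbest[i]):
                pbest[i] = xs[i]
            if objective(xs[i]) < objective(gbest):
                gbest = xs[i]
    return gbest

best = pso(surrogate)
```

Because each iteration performs `n_particles` independent objective queries, pipelining the network evaluation in FPGA fabric — as the paper does — parallelizes exactly the inner loop above.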
Affiliation(s)
- Russell W Duren
- Department of Electrical and Computer Engineering, Baylor University, Waco, TX 76798, USA.
24
Abstract
An excitable membrane is described which can perform different visual tasks such as contour detection, contour propagation, image segmentation, and motion detection. The membrane is designed to fit into a neuromorphic multichip system. It consists of a single two-dimensional (2-D) layer of locally connected integrate-and-fire neurons and propagates input in the subthreshold and the above-threshold range. It requires adjustment of only one parameter to switch between the visual tasks. The performance of two spiking membranes of different connectivity is compared, a hexagonally and an octagonally connected membrane. Their hardware and system suitability is discussed.
Affiliation(s)
- Christoph Rasche
- Department of Psychology, Justus-Liebig University, Giessen, 35390 Germany.
25
Vogelstein RJ, Mallik U, Vogelstein JT, Cauwenberghs G. Dynamically reconfigurable silicon array of spiking neurons with conductance-based synapses. IEEE Trans Neural Netw 2007; 18:253-65. [PMID: 17278476] [DOI: 10.1109/tnn.2006.883007]
Abstract
A mixed-signal very large scale integration (VLSI) chip for large scale emulation of spiking neural networks is presented. The chip contains 2400 silicon neurons with fully programmable and reconfigurable synaptic connectivity. Each neuron implements a discrete-time model of a single-compartment cell. The model allows for analog membrane dynamics and an arbitrary number of synaptic connections, each with tunable conductance and reversal potential. The array of silicon neurons functions as an address-event (AE) transceiver, with incoming and outgoing spikes communicated over an asynchronous event-driven digital bus. Address encoding and conflict resolution of spiking events are implemented via a randomized arbitration scheme that ensures balanced servicing of event requests across the array. Routing of events is implemented externally using dynamically programmable random-access memory that stores a postsynaptic address, the conductance, and the reversal potential of each synaptic connection. Here, we describe the silicon neuron circuits, present experimental data characterizing the 3 mm x 3 mm chip fabricated in 0.5-microm complementary metal-oxide-semiconductor (CMOS) technology, and demonstrate its utility by configuring the hardware to emulate a model of attractor dynamics and waves of neural activity during sleep in rat hippocampus.
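The single-compartment model described — membrane dynamics with per-synapse tunable conductance and reversal potential — follows the usual conductance-based update, sketched here in discrete time. The constants are illustrative, not the chip's actual parameters:

```python
# Discrete-time conductance-based membrane: each active synapse pulls
# the membrane potential toward its reversal potential in proportion to
# its conductance, while a leak term pulls it back toward rest.

def step_membrane(v, synapses, v_rest=-65.0, g_leak=0.05, dt=1.0, c_m=1.0):
    """One Euler step. synapses: list of (g, e_rev) for active synapses."""
    i_total = g_leak * (v_rest - v)
    for g, e_rev in synapses:
        i_total += g * (e_rev - v)
    return v + dt * i_total / c_m

# An excitatory synapse (e_rev = 0 mV) depolarizes the cell toward a
# steady state set by the conductance ratio:
v = -65.0
for _ in range(20):
    v = step_membrane(v, [(0.2, 0.0)])
```

Because each synaptic term is just a `(g, e_rev)` pair, storing those pairs in external RAM — as the chip does — lets one physical synapse circuit serve an arbitrary number of virtual connections.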
Affiliation(s)
- R Jacob Vogelstein
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD 21205, USA.