1. Igarashi J. Future projections for mammalian whole-brain simulations based on technological trends in related fields. Neurosci Res 2024:S0168-0102(24)00138-X. PMID: 39571736. DOI: 10.1016/j.neures.2024.11.005.
Abstract
Large-scale brain simulation allows us to study the interactions of vast numbers of neurons with nonlinear dynamics, helping to elucidate the information-processing mechanisms of the brain. The scale of brain simulations continues to rise as computer performance improves exponentially. However, a simulation of the whole human brain has not yet been achieved as of 2024 due to insufficient computational performance and brain measurement data. This paper examines technological trends in supercomputers, cell-type classification, connectomics, and large-scale activity measurements relevant to whole-brain simulation. Based on these trends, we attempt to predict a feasible timeframe for mammalian whole-brain simulation. Our estimates suggest that mouse whole-brain simulation at the cellular level could be realized around 2034, marmoset around 2044, and human likely later than 2044.
Affiliation(s)
- Jun Igarashi
- High Performance Artificial Intelligence Systems Research Team, Center for Computational Science, RIKEN, Japan.
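As a rough illustration of the trend extrapolation described in the abstract above, the sketch below computes when an assumed exponential growth in supercomputer performance would meet a given compute requirement. All numbers (baseline performance, growth rate, requirement) are hypothetical placeholders, not values from the paper, which bases its estimates on several measured technology trends.

```python
# Hedged sketch: exponential extrapolation of compute capability.
# Every number here is an illustrative assumption, not a figure from the paper.
import math

def year_feasible(required_flops, baseline_flops=1e18, baseline_year=2024,
                  annual_growth=1.4):
    """Year when performance first reaches required_flops, assuming
    performance = baseline_flops * annual_growth ** (year - baseline_year)."""
    years = math.log(required_flops / baseline_flops) / math.log(annual_growth)
    return baseline_year + math.ceil(max(years, 0))

# Example: a hypothetical requirement 100x today's assumed baseline.
print(year_feasible(1e20))
```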
2. Tsuzuki S. Extreme value statistics of nerve transmission delay. PLoS One 2024; 19:e0306605. PMID: 38968286. PMCID: PMC11226101. DOI: 10.1371/journal.pone.0306605.
Abstract
Delays in nerve transmission are an important topic in the field of neuroscience. Spike signals generated by a neuron travel along its axon to the presynaptic terminal. The spike then triggers a chemical reaction at the synapse, in which the presynaptic cell transfers neurotransmitters to the postsynaptic cell, which regenerates electrical signals through ion channels and transmits them to neighboring neurons. Describing this complex physiological reaction process as a stochastic process, this study aimed to show that the distribution of the maximum time interval between spike signals follows extreme value statistics. By allowing statistical variation in the time constant of the leaky integrate-and-fire model, a deterministic time-evolution model for spike signals, we introduced randomness into the time intervals of the spike signals. When the time constant follows an exponential distribution, the time interval of the spike signal also follows an exponential distribution. In this case, our theory and simulations confirmed that the histogram of the maximum time interval follows the Gumbel distribution, one of the three forms of extreme-value statistics. We further confirmed that the histogram of the maximum time interval follows a Fréchet distribution when the time interval of the spike signal follows a Pareto distribution. These findings confirm that nerve transmission delay can be described using extreme value statistics and can therefore be used as a new indicator of transmission delay.
Affiliation(s)
- Satori Tsuzuki
- Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, Japan
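The central claim of the abstract above, that the maximum of exponentially distributed inter-spike intervals follows a Gumbel distribution, can be checked numerically in a few lines. This is a minimal sketch under assumed parameter values, not the author's implementation.

```python
# Hedged sketch: maxima of exponentially distributed inter-spike intervals
# approach a Gumbel distribution (all parameters below are assumptions).
import numpy as np

rng = np.random.default_rng(0)
n_intervals = 1000          # intervals per spike train
n_trials = 10000            # number of simulated trains
mean_isi = 20e-3            # mean inter-spike interval in seconds

isi = rng.exponential(mean_isi, size=(n_trials, n_intervals))
max_isi = isi.max(axis=1)

# Classical normalization for exponential maxima:
# (M_n - mean_isi * ln n) / mean_isi converges to a standard Gumbel law.
z = (max_isi - mean_isi * np.log(n_intervals)) / mean_isi
print("normalized maxima: mean %.3f, std %.3f" % (z.mean(), z.std()))
# A standard Gumbel has mean ~0.577 (Euler-Mascheroni) and std ~1.283 (pi/sqrt(6)).
```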
3. Igarashi J, Yamaura H, Yamazaki T. Large-Scale Simulation of a Layered Cortical Sheet of Spiking Network Model Using a Tile Partitioning Method. Front Neuroinform 2019; 13:71. PMID: 31849631. PMCID: PMC6895031. DOI: 10.3389/fninf.2019.00071.
Abstract
One of the grand challenges for computational neuroscience and high-performance computing is the simulation of a human-scale whole-brain model with spiking neurons and synaptic plasticity on supercomputers. To achieve such a simulation, the target network model must be partitioned across a number of computational nodes, and the sub-network models are executed in parallel while exchanging spike information between nodes. However, it remains unclear how the target network model should be partitioned for efficient computing on the next generation of supercomputers. In particular, reducing the communication of spike information across compute nodes is essential, because network performance is slow relative to that of processors and memory. From the viewpoint of biological features, the cerebral cortex and cerebellum contain 99% of neurons and synapses and form layered sheet structures, so an efficient method of splitting the network should exploit these layered sheets. In this study, we show that a tile partitioning method leads to efficient communication. To demonstrate this, we developed simulation software called MONET (Millefeuille-like Organization NEural neTwork simulator) that partitions a network model as described above. The MONET simulator was implemented on the Japanese flagship supercomputer K, which is composed of 82,944 computational nodes. We examined the calculation, communication, and memory-consumption performance of the tile partitioning method for a cortical model with realistic anatomical and physiological parameters. The results showed that tile partitioning drastically reduced the amount of communicated data by replacing network communication with DRAM access and by sharing communication data among neighboring neurons. We confirmed the scalability and efficiency of the tile partitioning method on up to 63,504 compute nodes of the K computer for the cortical model. In the companion paper by Yamaura et al., the performance for a cerebellar model is examined. These results suggest that the tile partitioning method will be advantageous for human-scale whole-brain simulation on exascale computers.
Affiliation(s)
- Jun Igarashi
- Computational Engineering Applications Unit, Head Office for Information Systems and Cybersecurity, RIKEN, Saitama, Japan
- Hiroshi Yamaura
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
- Tadashi Yamazaki
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
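The tile partitioning idea summarized in the abstract above, mapping contiguous patches of a layered cortical sheet to compute nodes so that most spike traffic stays between neighboring nodes, can be sketched in a few lines. This is an illustrative toy, not MONET code; the sheet size, tile grid, and connection radius are assumptions.

```python
# Hedged sketch of tile partitioning on a 2-D cortical sheet (not MONET itself).
sheet_size = 1024            # neurons per side of the sheet (assumed)
tiles_per_side = 8           # grid of compute nodes (assumed)
tile_size = sheet_size // tiles_per_side

def tile_of(x, y):
    """Return the (row, col) of the tile (compute node) owning neuron (x, y)."""
    return (y // tile_size, x // tile_size)

def neighbour_tiles(row, col):
    """Tiles a node must exchange spikes with, assuming the connection radius
    is smaller than a tile (8-neighbourhood plus the tile itself)."""
    return [((row + dr) % tiles_per_side, (col + dc) % tiles_per_side)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)]

# Example: where neuron (700, 130) lives and which nodes its owner talks to.
row, col = tile_of(700, 130)
print("owner tile:", (row, col))
print("communication partners:", neighbour_tiles(row, col))
```

Because each node exchanges spikes only with its immediate neighbours, rather than with every node holding a connected neuron, the number of communication partners stays constant as the sheet and the machine grow.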
4. Experimental Study of Reinforcement Learning in Mobile Robots Through Spiking Architecture of Thalamo-Cortico-Thalamic Circuitry of Mammalian Brain. Robotica 2019. DOI: 10.1017/s0263574719001632.
Abstract
In this paper, the behavioral learning of robots through spiking neural networks is studied, in which the architecture of the network is based on the thalamo-cortico-thalamic circuitry of the mammalian brain. To represent the behavior of a variety of neuron types, the Izhikevich single-neuron model is used. The network comprises 1090 spiking neurons. The spiking model of the proposed architecture is derived and prepared for the robot learning problem. The reinforcement learning algorithm is based on spike-timing-dependent plasticity with dopamine release as a reward, which strengthens the synaptic weights of the neurons involved in the robot's proper performance. Sensory and motor neurons are placed in the thalamic and cortical modules, respectively. The inputs to the thalamo-cortico-thalamic circuitry are signals related to the distance of the target from the robot, and the outputs are the velocities of the actuators. A target attraction task is used as an example to validate the proposed method, in which dopamine is released when the robot catches the target. Simulation studies, as well as an experimental implementation, are carried out on a mobile robot named Tabrizbot. The experimental studies show that, after successful learning, the mean time to catch the target decreases by about 36%. These results demonstrate that, with the proposed method, the thalamo-cortical structure can be trained to perform various robotic tasks.
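The learning rule described in the abstract above combines spike-timing-dependent plasticity with dopamine as a reward signal. A common way to express this is an eligibility trace that accumulates STDP updates and is converted into weight change only when dopamine arrives; the sketch below illustrates that scheme with assumed constants and is not the paper's implementation.

```python
# Hedged sketch of reward-modulated STDP (all constants are assumptions).
import math

tau_elig = 1.0                   # eligibility-trace time constant, s
a_plus, a_minus = 0.01, 0.012    # STDP amplitudes
tau_stdp = 0.02                  # STDP window, s
dt = 0.001                       # simulation step, s

def stdp(delta_t):
    """Pair-based STDP kernel; delta_t = t_post - t_pre."""
    if delta_t >= 0:
        return a_plus * math.exp(-delta_t / tau_stdp)
    return -a_minus * math.exp(delta_t / tau_stdp)

w, elig = 0.5, 0.0
for step in range(2000):
    elig -= dt / tau_elig * elig         # trace decays over time
    if step == 100:                      # a pre-before-post pairing is observed
        elig += stdp(0.005)
    if step == 1500:                     # dopamine released: robot caught the target
        w += elig                        # only now does the trace change the weight
print("weight after reward:", round(w, 4))
```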
5. Yang S, Wang J, Deng B, Liu C, Li H, Fietkiewicz C, Loparo KA. Real-Time Neuromorphic System for Large-Scale Conductance-Based Spiking Neural Networks. IEEE Trans Cybern 2019; 49:2490-2503. PMID: 29993922. DOI: 10.1109/tcyb.2018.2823730.
Abstract
The investigation of human intelligence, cognitive systems, and the functional complexity of the human brain is significantly facilitated by high-performance computational platforms. In this paper, we present a real-time digital neuromorphic system for the simulation of large-scale conductance-based spiking neural networks (LaCSNN), which has the advantages of both high biological realism and large network scale. Using this system, a detailed large-scale cortico-basal ganglia-thalamocortical loop is simulated on a scalable 3-D network-on-chip (NoC) topology in which six Altera Stratix III field-programmable gate arrays simulate 1 million neurons. A novel router architecture is presented to handle the communication of multiple data flows in the multinuclei neural network, a problem not solved in previous NoC studies. At the single-neuron level, cost-efficient conductance-based neuron models are proposed; their multiplier-less realization uses, on average, 95% fewer memory resources and no DSP resources, which is the foundation of the large-scale realization. An analysis of the modified models, including their bifurcation behaviors and ionic dynamics, demonstrates that the required range of dynamics is retained at a much lower resource cost. The proposed LaCSNN system is shown to outperform alternative state-of-the-art approaches previously used to implement large-scale spiking neural networks, and its real-time computational power enables a broad range of potential applications.
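The defining feature of the conductance-based neurons mentioned in the abstract above is that synaptic input enters as a time-varying conductance multiplying a driving force, rather than as a fixed injected current. The sketch below shows that structure for a single conductance-based integrate-and-fire cell; it is a generic illustration with assumed constants, not the LaCSNN hardware model.

```python
# Hedged sketch of a conductance-based integrate-and-fire neuron (assumed values).
dt, n_steps = 1e-4, 2000                   # 0.1 ms step, 0.2 s of simulated time
C, g_leak, E_leak = 200e-12, 10e-9, -70e-3
E_exc, tau_syn = 0.0, 5e-3
v_thresh, v_reset = -50e-3, -65e-3

presyn_steps = {200, 500, 510, 520, 1200}  # assumed input spike arrival steps
v, g_exc, out_spikes = E_leak, 0.0, []

for step in range(n_steps):
    if step in presyn_steps:
        g_exc += 5e-9                      # each input spike opens conductance
    g_exc -= dt / tau_syn * g_exc          # conductance decays exponentially
    # Current = conductance * driving force: the hallmark of conductance-based models.
    dv = (g_leak * (E_leak - v) + g_exc * (E_exc - v)) / C
    v += dt * dv
    if v >= v_thresh:
        out_spikes.append(step * dt)
        v = v_reset

print("output spike times (s):", [round(t, 4) for t in out_spikes])
```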
6. Zeng Y, Wang G, Xu B. A Basal Ganglia Network Centric Reinforcement Learning Model and Its Application in Unmanned Aerial Vehicle. IEEE Trans Cogn Dev Syst 2018. DOI: 10.1109/tcds.2017.2649564.
7. Yang S, Deng B, Li H, Liu C, Wang J, Yu H, Qin Y. FPGA implementation of hippocampal spiking network and its real-time simulation on dynamical neuromodulation of oscillations. Neurocomputing 2018. DOI: 10.1016/j.neucom.2017.12.031.
8. A real-time FPGA implementation of a biologically inspired central pattern generator network. Neurocomputing 2017. DOI: 10.1016/j.neucom.2017.03.028.
9. Gosui M, Yamazaki T. Real-World-Time Simulation of Memory Consolidation in a Large-Scale Cerebellar Model. Front Neuroanat 2016; 10:21. PMID: 26973472. PMCID: PMC4776399. DOI: 10.3389/fnana.2016.00021.
Abstract
We report the development of a large-scale spiking network model of the cerebellum composed of more than 1 million neurons. The model is implemented on graphics processing units (GPUs), which are dedicated hardware for parallel computing. Using 4 GPUs simultaneously, we achieve realtime simulation, in which a computer simulation of 1 s of cerebellar activity completes within 1 s of real-world time at a temporal resolution of 1 ms. This allows us to carry out very long-term simulations of cerebellar activity in a practical time with millisecond temporal resolution. Using the model, we simulate long-term gain adaptation of optokinetic response (OKR) eye movements over 5 days, aimed at studying the neural mechanisms of posttraining memory consolidation. The simulation results are consistent with animal experiments and with our theory of posttraining memory consolidation. These results suggest that realtime computing provides a useful means to study very slow neural processes such as memory consolidation in the brain.
Affiliation(s)
- Masato Gosui
- Department of Communication Engineering and Informatics, Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
- Tadashi Yamazaki
- Department of Communication Engineering and Informatics, Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
- Neuroinformatics Japan Center, RIKEN Brain Science Institute, Saitama, Japan
- Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology, Ibaraki, Japan
10. Orts-Escolano S, Garcia-Rodriguez J, Serra-Perez JA, Jimeno-Morenilla A, Garcia-Garcia A, Morell V, Cazorla M. 3D model reconstruction using neural gas accelerated on GPU. Appl Soft Comput 2015. DOI: 10.1016/j.asoc.2015.03.042.
11. Parallel Computational Intelligence-Based Multi-Camera Surveillance System. J Sens Actuator Netw 2014. DOI: 10.3390/jsan3020095.
12. Carlson KD, Nageswaran JM, Dutt N, Krichmar JL. An efficient automated parameter tuning framework for spiking neural networks. Front Neurosci 2014; 8:10. PMID: 24550771. PMCID: PMC3912986. DOI: 10.3389/fnins.2014.00010.
Abstract
As the desire for biologically realistic spiking neural networks (SNNs) increases, tuning the enormous number of open parameters in these models becomes a difficult challenge. SNNs have been used to successfully model complex neural circuits that explore various neural phenomena such as neural plasticity, vision systems, auditory systems, neural oscillations, and many other important topics of neural function. Additionally, SNNs are particularly well-adapted to run on neuromorphic hardware that will support biological brain-scale architectures. Although the inclusion of realistic plasticity equations, neural dynamics, and recurrent topologies has increased the descriptive power of SNNs, it has also made the task of tuning these biologically realistic SNNs difficult. To meet this challenge, we present an automated parameter tuning framework capable of tuning SNNs quickly and efficiently using evolutionary algorithms (EAs) and inexpensive, readily accessible graphics processing units (GPUs). A sample SNN with 4104 neurons was tuned to give V1 simple-cell-like tuning curve responses and to produce self-organizing receptive fields (SORFs) when presented with a random sequence of counterphase sinusoidal grating stimuli. A performance analysis comparing the GPU-accelerated implementation to a single-threaded central processing unit (CPU) implementation showed a 65× speedup of the GPU implementation over the CPU implementation, or 0.35 h per generation for the GPU vs. 23.5 h per generation for the CPU. Additionally, the parameter value solutions found in the tuned SNN were studied and found to be stable and repeatable. The automated parameter tuning framework presented here will be of use to both the computational neuroscience and neuromorphic engineering communities, making the process of constructing and tuning large-scale SNNs much quicker and easier.
Affiliation(s)
- Kristofor D Carlson
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, USA
- Nikil Dutt
- Department of Computer Science, University of California, Irvine, Irvine, CA, USA
- Jeffrey L Krichmar
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, USA; Department of Computer Science, University of California, Irvine, Irvine, CA, USA
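The framework described in the abstract above searches the parameter space of an SNN with an evolutionary algorithm, using GPU-accelerated simulations to score each candidate. The loop below shows the general shape of such a search; the fitness function, population size, and mutation scale are placeholders, not the published framework.

```python
# Hedged sketch of evolutionary parameter tuning (not the published framework).
import numpy as np

rng = np.random.default_rng(1)
pop_size, n_generations, n_params = 32, 50, 4
low, high = 0.1, 10.0                         # assumed search range per parameter

def fitness(params):
    """Placeholder: in practice this would run the SNN (e.g. on a GPU) and score
    how closely its responses match the target tuning curves."""
    target = np.array([2.0, 5.0, 1.0, 7.0])   # assumed optimum, for illustration
    return -np.sum((params - target) ** 2)

pop = rng.uniform(low, high, size=(pop_size, n_params))
for _ in range(n_generations):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[-pop_size // 4:]]            # keep the top 25%
    parents = elite[rng.integers(len(elite), size=pop_size)]    # resample elites
    pop = np.clip(parents + rng.normal(0.0, 0.2, parents.shape), low, high)  # mutate

best = pop[int(np.argmax([fitness(p) for p in pop]))]
print("best parameter set found:", np.round(best, 2))
```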
13. Minkovich K, Thibeault CM, O'Brien MJ, Nogin A, Cho Y, Srinivasa N. HRLSim: a high performance spiking neural network simulator for GPGPU clusters. IEEE Trans Neural Netw Learn Syst 2014; 25:316-331. PMID: 24807031. DOI: 10.1109/tnnls.2013.2276056.
Abstract
Modeling of large-scale spiking neural networks is an important tool in the quest to understand brain function and subsequently create real-world applications. This paper describes a spiking neural network simulation environment called HRL Spiking Simulator (HRLSim). This simulator is suitable for implementation on a cluster of general-purpose graphical processing units (GPGPUs). Novel aspects of HRLSim are described, and an analysis of its performance is provided for various configurations of the cluster. With the advent of inexpensive GPGPU cards and compute power, HRLSim offers an affordable and scalable tool for the design, real-time simulation, and analysis of large-scale spiking neural networks.
14. Hoang RV, Tanna D, Jayet Bray LC, Dascalu SM, Harris FC. A novel CPU/GPU simulation environment for large-scale biologically realistic neural modeling. Front Neuroinform 2013; 7:19. PMID: 24106475. PMCID: PMC3788332. DOI: 10.3389/fninf.2013.00019.
Abstract
Computational neuroscience is an emerging field that provides unique opportunities to study complex brain structures through realistic neural simulations. However, as biological details are added to models, the execution time for the simulation becomes longer. Graphics Processing Units (GPUs) are now being utilized to accelerate simulations due to their ability to perform computations in parallel. As such, they have shown significant improvement in execution time compared to Central Processing Units (CPUs). Most neural simulators utilize either multiple CPUs or a single GPU for better performance, but still show limitations in execution time when biological details are not sacrificed. Therefore, we present a novel CPU/GPU simulation environment for large-scale biological networks, the NeoCortical Simulator version 6 (NCS6). NCS6 is a free, open-source, parallelizable, and scalable simulator, designed to run on clusters of multiple machines, potentially with high-performance computing devices in each of them. It has built-in leaky integrate-and-fire (LIF) and Izhikevich (IZH) neuron models, but users also have the capability to design their own plug-in interface for different neuron types as desired. NCS6 is currently able to simulate one million cells and 100 million synapses in quasi-real time by distributing data across eight machines, each with two video cards.
Affiliation(s)
- Roger V Hoang
- Brain Computation Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, NV, USA
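NCS6's built-in leaky integrate-and-fire model, mentioned in the abstract above, reduces each neuron to a single membrane-potential equation with a threshold and reset. The vectorized update below is a generic sketch of that model with assumed constants, not NCS6 code.

```python
# Hedged sketch of a population of leaky integrate-and-fire neurons (assumed values).
import numpy as np

dt = 1e-3                          # 1 ms time step
tau_m, v_rest = 20e-3, -70e-3      # membrane time constant and resting potential
v_thresh, v_reset = -54e-3, -70e-3
r_m = 10e6                         # membrane resistance, ohms

rng = np.random.default_rng(2)
i_ext = rng.uniform(1.4e-9, 2.2e-9, size=1000)   # constant drive per neuron
v = np.full(1000, v_rest)
spike_counts = np.zeros(1000, dtype=int)

for _ in range(int(1.0 / dt)):                   # simulate 1 s
    v += dt / tau_m * (v_rest - v + r_m * i_ext) # leaky integration
    fired = v >= v_thresh
    spike_counts += fired                        # count threshold crossings
    v[fired] = v_reset                           # reset after a spike
print("mean firing rate (Hz): %.1f" % spike_counts.mean())
```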
15. Thibeault CM, Srinivasa N. Using a hybrid neuron in physiologically inspired models of the basal ganglia. Front Comput Neurosci 2013; 7:88. PMID: 23847524. PMCID: PMC3701869. DOI: 10.3389/fncom.2013.00088.
Abstract
Our current understanding of the basal ganglia (BG) has facilitated the creation of computational models that have contributed novel theories, explored new functional anatomy, and demonstrated results complementing physiological experiments. However, the utility of these models extends beyond these applications, particularly in neuromorphic engineering, where the basal ganglia's role in computation is important for applications such as power-efficient autonomous agents and model-based control strategies. The neurons used in existing computational models of the BG, however, are not amenable to many low-power hardware implementations. Motivated by a need for more hardware-accessible networks, we replicate four published models of the BG, spanning single neurons and small networks, replacing the more computationally expensive neuron models with an Izhikevich hybrid neuron. This begins with a network modeling action selection, in which the basal activity levels and the ability to appropriately select the most salient input are reproduced. A Parkinson's disease model is then explored under normal conditions, Parkinsonian conditions, and subthalamic nucleus deep brain stimulation (DBS). The resulting network is capable of replicating the loss of thalamic relay capabilities in the Parkinsonian state and its return under DBS; this is also demonstrated using a network capable of action selection. Finally, a study of correlation transfer under different patterns of Parkinsonian activity is presented. These networks successfully captured the significant results of the original studies. This not only creates a foundation for neuromorphic hardware implementations but may also support the development of large-scale biophysical models, the former potentially providing a way of improving the efficacy of DBS and the latter allowing for the efficient simulation of larger, more comprehensive networks.
Affiliation(s)
- Corey M Thibeault
- Center for Neural and Emergent Systems, Information and System Sciences Laboratory, HRL Laboratories LLC, Malibu, CA, USA; Department of Electrical and Biomedical Engineering, University of Nevada, Reno, NV, USA; Department of Computer Science and Engineering, University of Nevada, Reno, NV, USA
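The Izhikevich hybrid neuron that the study above substitutes for more expensive models consists of two coupled equations plus a reset rule, which is what makes it attractive for low-power hardware. The sketch below integrates the published regular-spiking parameter set; the input current and step size are assumptions.

```python
# Hedged sketch of the Izhikevich hybrid neuron (regular-spiking parameters).
a, b, c, d = 0.02, 0.2, -65.0, 8.0      # Izhikevich (2003) regular-spiking set
dt = 0.5                                 # integration step in ms (assumed)

v, u = -65.0, b * -65.0
spike_times = []
for step in range(int(1000 / dt)):       # 1 s of simulated time
    t = step * dt
    i_ext = 10.0 if t > 100 else 0.0     # assumed step current after 100 ms
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_ext)
    u += dt * a * (b * v - u)
    if v >= 30.0:                        # spike cutoff
        spike_times.append(t)
        v, u = c, u + d
print("spike count in 1 s:", len(spike_times))
```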
16. Yamazaki T, Igarashi J. Realtime cerebellum: a large-scale spiking network model of the cerebellum that runs in realtime using a graphics processing unit. Neural Netw 2013; 47:103-11. PMID: 23434303. DOI: 10.1016/j.neunet.2013.01.019.
Abstract
The cerebellum plays an essential role in adaptive motor control. Once we are able to build a cerebellar model that runs in realtime, meaning that a computer simulation of 1 s in the simulated world completes within 1 s in the real world, the model could be used as a realtime adaptive neural controller for physical hardware such as humanoid robots. In this paper, we introduce "Realtime Cerebellum" (RC), a new implementation on a graphics processing unit (GPU) of our large-scale spiking network model of the cerebellum, which was originally built to study cerebellar mechanisms for simultaneous gain and timing control and which acts as a general-purpose supervised learning machine for spatiotemporal information in the manner of reservoir computing. Owing to the massive parallel computing capability of a GPU, RC runs in realtime while qualitatively reproducing the same simulation results of Pavlovian delay eyeblink conditioning as the previous version. RC is then adopted as a realtime adaptive controller of a humanoid robot, which is instructed to learn online the proper timing to swing a bat and hit a flying ball. These results suggest that RC provides a means to apply the computational power of the cerebellum as a versatile supervised learning machine to engineering applications.
Affiliation(s)
- Tadashi Yamazaki
- RIKEN Brain Science Institute, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan.
17.
Abstract
Modern graphics cards contain hundreds of cores that can be programmed for intensive calculations. They are beginning to be used for spiking neural network simulations. The goal is to make parallel simulation of spiking neural networks available to a large audience, without the requirements of a cluster. We review the ongoing efforts towards this goal, and we outline the main difficulties.
Affiliation(s)
- Romain Brette
- Laboratoire Psychologie de la Perception, CNRS and Université Paris Descartes, Paris, France.