1. Livinț-Popa L, Dragoș H, Vlad I, Dăbală V, Chelaru V, Ștefănescu E, Crecan-Suciu B, Mureșanu D. New Horizons in Neuroscience: The Summer School of Brain Mapping and Stimulation Techniques. J Med Life 2025;18:265-269. PMID: 40405926; PMCID: PMC12094304; DOI: 10.25122/jml-2025-1001.
Affiliations:
- Livia Livinț-Popa: Department of Neurosciences, Iuliu Hațieganu University of Medicine and Pharmacy; RoNeuro Institute for Neurological Research and Diagnostic; County Emergency Clinical Hospital, Cluj-Napoca, Romania
- Hanna Dragoș: Department of Neurosciences, Iuliu Hațieganu University of Medicine and Pharmacy; RoNeuro Institute for Neurological Research and Diagnostic; County Emergency Clinical Hospital, Cluj-Napoca, Romania
- Irina Vlad: Department of Neurosciences, Iuliu Hațieganu University of Medicine and Pharmacy; RoNeuro Institute for Neurological Research and Diagnostic, Cluj-Napoca, Romania
- Victor Dăbală: Department of Neurosciences, Iuliu Hațieganu University of Medicine and Pharmacy; RoNeuro Institute for Neurological Research and Diagnostic; County Emergency Clinical Hospital, Cluj-Napoca, Romania
- Vlad Chelaru: Department of Neurosciences and Faculty of Medicine, Iuliu Hațieganu University of Medicine and Pharmacy; RoNeuro Institute for Neurological Research and Diagnostic, Cluj-Napoca, Romania
- Emanuel Ștefănescu: Department of Neurosciences, Iuliu Hațieganu University of Medicine and Pharmacy; RoNeuro Institute for Neurological Research and Diagnostic, Cluj-Napoca, Romania
- Bianca Crecan-Suciu: Department of Neurosciences, Iuliu Hațieganu University of Medicine and Pharmacy; RoNeuro Institute for Neurological Research and Diagnostic; County Emergency Clinical Hospital, Cluj-Napoca, Romania
- Dafin Mureșanu: Department of Neurosciences, Iuliu Hațieganu University of Medicine and Pharmacy; RoNeuro Institute for Neurological Research and Diagnostic; County Emergency Clinical Hospital, Cluj-Napoca, Romania
2. Wu H, Feng E, Yin H, Zhang Y, Chen G, Zhu B, Yue X, Zhang H, Liu Q, Xiong L. Biomaterials for neuroengineering: applications and challenges. Regen Biomater 2025;12:rbae137. PMID: 40007617; PMCID: PMC11855295; DOI: 10.1093/rb/rbae137.
Abstract
Neurological injuries and diseases are a leading cause of disability worldwide, underscoring the urgent need for effective therapies. Therapies that restore or enhance neural function are seen as the most promising strategies, offering hope for individuals affected by these conditions. Despite their promise, the path from animal research to clinical application is fraught with challenges. Neuroengineering, particularly through the use of biomaterials, has emerged as a key field paving the way for innovative solutions to these challenges. It seeks to understand and treat neurological disorders, unravel the nature of consciousness, and explore the mechanisms of memory and the brain's relationship with behavior, offering solutions for neural tissue engineering, neural interfaces, and targeted drug delivery systems. These biomaterials, both natural and synthetic, are designed to replicate the cellular environment of the brain, thereby facilitating neural repair. This review aims to provide a comprehensive overview of biomaterials in neuroengineering, highlighting their application to restoring and enhancing neural function in both basic research and clinical practice. It covers recent developments in biomaterial-based products, including 2D to 3D bioprinted scaffolds for cell and organoid culture, brain-on-a-chip systems, biomimetic electrodes, and brain-computer interfaces. It also explores artificial synapses and neural networks, discussing their applications in modeling neural microenvironments for repair and regeneration, in neural modulation and manipulation, and in the integration of traditional Chinese medicine. This review serves as a comprehensive guide to the role of biomaterials in advancing neuroengineering solutions, providing insights into ongoing efforts to bridge the gap between innovation and clinical application.
Affiliations:
- Huanghui Wu, Enduo Feng, Huanxin Yin, Yuxin Zhang, Guozhong Chen, Beier Zhu, Lize Xiong: Translational Research Institute of Brain and Brain-Like Intelligence, Shanghai Key Laboratory of Anesthesiology and Brain Functional Modulation, Clinical Research Center for Anesthesiology and Perioperative Medicine, Department of Anesthesiology and Perioperative Medicine, Shanghai Fourth People’s Hospital, School of Medicine, Tongji University, Shanghai 200434, China
- Xuezheng Yue: School of Materials and Chemistry, University of Shanghai for Science and Technology, Shanghai 200093, China
- Haiguang Zhang: Rapid Manufacturing Engineering Center, School of Mechatronical Engineering and Automation, Shanghai University, Shanghai 200444, China; Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, Shanghai University, Shanghai 200072, China
- Qiong Liu: Translational Research Institute of Brain and Brain-Like Intelligence, Shanghai Fourth People’s Hospital, School of Medicine, Tongji University, Shanghai 200434, China; State Key Laboratory of Molecular Engineering of Polymers, Department of Macromolecular Science, Fudan University, Shanghai 200438, China
3. Schilling A, Sedley W, Gerum R, Metzner C, Tziridis K, Maier A, Schulze H, Zeng FG, Friston KJ, Krauss P. Predictive coding and stochastic resonance as fundamental principles of auditory phantom perception. Brain 2023;146:4809-4825. PMID: 37503725; PMCID: PMC10690027; DOI: 10.1093/brain/awad255.
Abstract
Mechanistic insight is achieved only when experiments are employed to test formal or computational models. Furthermore, in analogy to lesion studies, phantom perception may serve as a vehicle to understand the fundamental processing principles underlying healthy auditory perception. With a special focus on tinnitus, the prime example of auditory phantom perception, we review recent work at the intersection of artificial intelligence, psychology, and neuroscience. In particular, we discuss why everyone with tinnitus suffers from (at least hidden) hearing loss, but not everyone with hearing loss suffers from tinnitus. We argue that intrinsic neural noise is generated and amplified along the auditory pathway as a compensatory mechanism to restore normal hearing, based on adaptive stochastic resonance. The increased neural noise can then be misinterpreted as auditory input and perceived as tinnitus. This mechanism can be formalized in the Bayesian brain framework, where the percept (posterior) assimilates a prior prediction (the brain's expectations) and a likelihood (the bottom-up neural signal). A higher mean and lower variance (i.e., enhanced precision) of the likelihood shifts the posterior, evincing a misinterpretation of sensory evidence, which may be further confounded by plastic changes in the brain that underwrite prior predictions. Hence, two fundamental processing principles provide the most explanatory power for the emergence of auditory phantom perceptions: predictive coding as a top-down mechanism and adaptive stochastic resonance as a complementary bottom-up mechanism. We conclude that both principles also play a crucial role in healthy auditory perception. Finally, in the context of neuroscience-inspired artificial intelligence, both processing principles may serve to improve contemporary machine learning techniques.
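The precision-weighted Bayesian update this abstract describes can be made concrete with two Gaussians. The sketch below is ours, not the authors' code, and all numbers are illustrative: the posterior mean is the precision-weighted average of the prior and likelihood means, so raising the likelihood's mean and precision pulls the percept away from the prior's prediction of silence.

```python
# Minimal sketch (illustrative, not from the paper): precision-weighted
# Gaussian fusion, the Bayesian-brain update behind the tinnitus account.

def posterior(mu_prior, var_prior, mu_like, var_like):
    """Posterior mean/variance of a Gaussian prior times a Gaussian likelihood."""
    w_p, w_l = 1.0 / var_prior, 1.0 / var_like   # precisions
    var_post = 1.0 / (w_p + w_l)
    mu_post = var_post * (w_p * mu_prior + w_l * mu_like)
    return mu_post, var_post

# Prior expects silence (mean 0); amplified noise floor looks like input (mean 1).
broad = posterior(0.0, 1.0, 1.0, 1.0)    # low-precision likelihood
sharp = posterior(0.0, 1.0, 1.0, 0.1)    # high-precision likelihood
# The sharper likelihood pulls the percept further toward the phantom signal.
assert sharp[0] > broad[0]
```

With equal precisions the posterior mean sits halfway (0.5); with a ten-fold more precise likelihood it moves to 10/11 ≈ 0.91, which is the shift toward "hearing something" that the review formalizes.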
Affiliations:
- Achim Schilling: Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany; Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg, 91058 Erlangen, Germany
- William Sedley: Translational and Clinical Research Institute, Newcastle University Medical School, Newcastle upon Tyne NE2 4HH, UK
- Richard Gerum: Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg, 91058 Erlangen, Germany; Department of Physics and Astronomy and Center for Vision Research, York University, Toronto, ON M3J 1P3, Canada
- Claus Metzner: Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany
- Andreas Maier: Pattern Recognition Lab, University Erlangen-Nürnberg, 91058 Erlangen, Germany
- Holger Schulze: Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany
- Fan-Gang Zeng: Center for Hearing Research, Departments of Anatomy and Neurobiology, Biomedical Engineering, Cognitive Sciences, Otolaryngology–Head and Neck Surgery, University of California Irvine, Irvine, CA 92697, USA
- Karl J Friston: Wellcome Centre for Human Neuroimaging, Institute of Neurology, University College London, London WC1N 3AR, UK
- Patrick Krauss: Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany; Cognitive Computational Neuroscience Group and Pattern Recognition Lab, University Erlangen-Nürnberg, 91058 Erlangen, Germany
4. Park J, Ha S, Yu T, Neftci E, Cauwenberghs G. A 22-pJ/spike 73-Mspikes/s 130k-compartment neural array transceiver with conductance-based synaptic and membrane dynamics. Front Neurosci 2023;17:1198306. PMID: 37700751; PMCID: PMC10493285; DOI: 10.3389/fnins.2023.1198306.
Abstract
Neuromorphic cognitive computing offers a bio-inspired means to approach the natural intelligence of biological neural systems in silicon integrated circuits. Typically, such circuits either reproduce biophysical neuronal dynamics in great detail as tools for computational neuroscience, or abstract away the biology by simplifying the functional forms of neural computation in large-scale systems for machine intelligence with high integration density and energy efficiency. Here we report a hybrid that offers biophysical realism in the emulation of multi-compartmental neuronal network dynamics at very large scale with high implementation efficiency, yet with high flexibility in configuring the functional form and the network topology. The integrate-and-fire array transceiver (IFAT) chip emulates the continuous-time analog membrane dynamics of 65k two-compartment neurons with conductance-based synapses. Fired action potentials are registered as address-event encoded output spikes, while the four types of synapses coupling to each neuron are activated by address-event decoded input spikes for fully reconfigurable synaptic connectivity, facilitating virtual wiring as implemented by routing address-event spikes externally through a synaptic routing table. The peak conductance strength of synapse activation specified by the address-event input spans three decades of dynamic range, digitally controlled by pulse width and amplitude modulation (PWAM) of the drive voltage activating the log-domain linear synapse circuit. Two nested levels of micro-pipelining in the IFAT architecture improve both throughput and efficiency of synaptic input. This two-tier micro-pipelining results in a measured sustained peak throughput of 73 Mspikes/s and an overall chip-level energy efficiency of 22 pJ/spike. Non-uniformity in digitally encoded synapse strength due to analog mismatch is mitigated through single-point digital offset calibration. Combined with the flexibly layered and recurrent synaptic connectivity provided by hierarchical address-event routing of registered spike events through external memory, the IFAT lends itself to efficient large-scale emulation of general biophysical spiking neural networks, as well as rate-based mapping of rectified linear unit (ReLU) neural activations.
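The virtual-wiring scheme the abstract describes, routing address-events through an external synaptic routing table, can be sketched in a few lines. This is an illustrative software analogy, not the IFAT hardware, and the table contents are invented for the example.

```python
# Minimal sketch (software analogy, not the IFAT chip): fan address-events
# out to synaptic targets through an external routing table, so connectivity
# is fully reconfigurable without rewiring. All addresses/weights are invented.

from collections import defaultdict

# Routing table: presynaptic address -> [(postsynaptic address, synapse type, weight)]
routing_table = {
    0: [(1, "exc", 0.8), (2, "inh", 0.3)],
    1: [(2, "exc", 0.5)],
}

def route(spikes, table):
    """Deliver each output address-event to its programmed synaptic targets."""
    delivered = defaultdict(list)
    for pre in spikes:
        for post, syn_type, weight in table.get(pre, []):
            delivered[post].append((syn_type, weight))
    return delivered

events = route([0, 1], routing_table)
# Neuron 2 receives an inhibitory event from 0 and an excitatory event from 1.
```

Changing connectivity then means editing the table (held in external memory in the IFAT scheme), not the neuron array itself.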
Affiliations:
- Jongkil Park: Center for Neuromorphic Engineering, Korea Institute of Science and Technology (KIST), Seoul, Republic of Korea; Institute for Neural Computation and Department of Electrical and Computer Engineering, Jacobs School of Engineering, University of California, San Diego, La Jolla, CA, United States
- Sohmyung Ha: Institute for Neural Computation and Department of Bioengineering, Jacobs School of Engineering, University of California, San Diego, La Jolla, CA, United States; Division of Engineering, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Theodore Yu: Institute for Neural Computation and Department of Electrical and Computer Engineering, Jacobs School of Engineering, University of California, San Diego, La Jolla, CA, United States
- Emre Neftci: Peter Grünberg Institute, Forschungszentrum Jülich, RWTH, Aachen, Germany
- Gert Cauwenberghs: Institute for Neural Computation and Department of Bioengineering, Jacobs School of Engineering, University of California, San Diego, La Jolla, CA, United States
5. Kauth K, Stadtmann T, Sobhani V, Gemmeke T. neuroAIx-Framework: design of future neuroscience simulation systems exhibiting execution of the cortical microcircuit model 20× faster than biological real-time. Front Comput Neurosci 2023;17:1144143. PMID: 37152299; PMCID: PMC10156974; DOI: 10.3389/fncom.2023.1144143.
Abstract
Introduction: Research in the field of computational neuroscience relies on highly capable simulation platforms. With real-time capabilities surpassed for established models like the cortical microcircuit, it is time to conceive next-generation systems: neuroscience simulators providing significant acceleration, even for larger networks with natural density, biologically plausible multi-compartment models, and the modeling of long-term and structural plasticity.
Methods: Stressing the need for agility to adapt to new concepts or findings in the domain of neuroscience, we have developed the neuroAIx-Framework, consisting of an empirical modeling tool, a virtual prototype, and a cluster of FPGA boards. This framework is designed to support and accelerate the continuous development of such platforms driven by new insights in neuroscience.
Results: Based on design space explorations using this framework, we devised and realized an FPGA cluster consisting of 35 NetFPGA SUME boards.
Discussion: This system functions as an evaluation platform for our framework. At the same time, it resulted in a fully deterministic neuroscience simulation system surpassing the state of the art in both performance and energy efficiency. It is capable of simulating the microcircuit with 20× acceleration compared to biological real-time and achieves an energy efficiency of 48 nJ per synaptic event.
6. Integrative modeling of the cell. Acta Biochim Biophys Sin (Shanghai) 2022;54:1213-1221. PMID: 36017893; PMCID: PMC9909318; DOI: 10.3724/abbs.2022115.
Abstract
A whole-cell model represents certain aspects of the cell's structure and/or function. Due to the high complexity of the cell, an integrative modeling approach is often taken to utilize all available information, including experimental data, prior knowledge, and prior models. In this review, we summarize an emerging workflow of whole-cell modeling in five steps: (i) gather information; (ii) represent the modeled system as modules; (iii) translate the input information into a scoring function; (iv) sample the whole-cell model; (v) validate and interpret the model. In particular, we propose the integrative modeling of the cell by combining available (whole-cell) models to maximize accuracy, precision, and completeness. In addition, we list quantitative predictions of various aspects of cell biology from existing whole-cell models. Moreover, we discuss the remaining challenges and future directions, and highlight the opportunity to establish an integrative spatiotemporal multi-scale whole-cell model based on a community approach.
7. Wei J, Wang Z, Li Y, Lu J, Jiang H, An J, Li Y, Gao L, Zhang X, Shi T, Liu Q. FangTianSim: High-Level Cycle-Accurate Resistive Random-Access Memory-Based Multi-Core Spiking Neural Network Processor Simulator. Front Neurosci 2022;15:806325. PMID: 35126046; PMCID: PMC8811373; DOI: 10.3389/fnins.2021.806325.
Abstract
Realization of spiking neural network (SNN) hardware with high energy efficiency and high integration may provide a promising solution to data processing challenges in the future internet of things (IoT) and artificial intelligence (AI). Recently, the design of multi-core reconfigurable SNN chips based on resistive random-access memory (RRAM) has drawn great attention, owing to the unique properties of RRAM: high integration density, low power consumption, and processing-in-memory (PIM). RRAM-based SNN chips may therefore offer further improvements in integration and energy efficiency. The design of such a chip faces the following problems: significant delay in pulse transmission due to complex logic control and inter-core communication; the high risk of a hybrid digital, analog, and RRAM design; and the non-ideal characteristics of analog circuits and RRAM. In order to effectively bridge the gap between device, circuit, algorithm, and architecture, this paper proposes a simulation model, FangTianSim, which covers the analog neuron circuit, the RRAM model, and the multi-core architecture, and is accurate at the clock level. This model can be used to verify the functionality, delay, and power consumption of an SNN chip. This information can not only verify the rationality of the architecture but also guide the chip design. In order to map different network topologies onto the chip, an SNN representation format, an interpreter, and an instruction generator are designed. Finally, the function of FangTianSim is verified on a liquid state machine (LSM), a fully connected neural network (FCNN), and a convolutional neural network (CNN).
Affiliations:
- Jinsong Wei: Zhejiang Laboratory, Institute of Intelligent Computing, Hangzhou, China; Institute of Microelectronics, Chinese Academy of Sciences, Beijing, China
- Zhibin Wang: Zhejiang Laboratory, Institute of Intelligent Computing, Hangzhou, China
- Ye Li: Zhejiang Laboratory, Institute of Intelligent Computing, Hangzhou, China
- Jikai Lu: Zhejiang Laboratory, Institute of Intelligent Computing, Hangzhou, China; School of Microelectronics, University of Science and Technology of China, Hefei, China
- Hao Jiang: Zhejiang Laboratory, Institute of Intelligent Computing, Hangzhou, China; School of Microelectronics, University of Science and Technology of China, Hefei, China
- Junjie An: Institute of Microelectronics, Chinese Academy of Sciences, Beijing, China; School of Microelectronics, University of Science and Technology of China, Hefei, China
- Yiqi Li: Zhejiang Laboratory, Institute of Intelligent Computing, Hangzhou, China
- Lili Gao: Zhejiang Laboratory, Institute of Intelligent Computing, Hangzhou, China
- Xumeng Zhang: Frontier Institute of Chip and System, Fudan University, Shanghai, China
- Tuo Shi (corresponding author): Zhejiang Laboratory, Institute of Intelligent Computing, Hangzhou, China; Institute of Microelectronics, Chinese Academy of Sciences, Beijing, China
- Qi Liu: Frontier Institute of Chip and System, Fudan University, Shanghai, China
8. Serruya MD. Connecting the Brain to Itself through an Emulation. Front Neurosci 2017;11:373. PMID: 28713235; PMCID: PMC5492113; DOI: 10.3389/fnins.2017.00373.
Abstract
Pilot clinical trials of human patients implanted with devices that can chronically record and stimulate ensembles of hundreds to thousands of individual neurons offer the possibility of expanding the substrate of cognition. Parallel trains of firing rate activity can be delivered in real-time to an array of intermediate external modules that in turn can trigger parallel trains of stimulation back into the brain. These modules may be built in software, VLSI firmware, or biological tissue as in vitro culture preparations or in vivo ectopic construct organoids. Arrays of modules can be constructed as early stage whole brain emulators, following canonical intra- and inter-regional circuits. By using machine learning algorithms and classic tasks known to activate quasi-orthogonal functional connectivity patterns, bedside testing can rapidly identify ensemble tuning properties and in turn cycle through a sequence of external module architectures to explore which can causatively alter perception and behavior. Whole brain emulation both (1) serves to augment human neural function, compensating for disease and injury as an auxiliary parallel system, and (2) has its independent operation bootstrapped by a human-in-the-loop to identify optimal micro- and macro-architectures, update synaptic weights, and entrain behaviors. In this manner, closed-loop brain-computer interface pilot clinical trials can advance strong artificial intelligence development and forge new therapies to restore independence in children and adults with neurological conditions.
Affiliations:
- Mijail D Serruya: Neurology, Thomas Jefferson University, Philadelphia, PA, United States
9.
10. Prieto A, Prieto B, Ortigosa EM, Ros E, Pelayo F, Ortega J, Rojas I. Neural networks: An overview of early research, current frameworks and new challenges. Neurocomputing 2016. DOI: 10.1016/j.neucom.2016.06.014.
11. McDougal RA, Bulanova AS, Lytton WW. Reproducibility in Computational Neuroscience Models and Simulations. IEEE Trans Biomed Eng 2016;63:2021-2035. PMID: 27046845; PMCID: PMC5016202; DOI: 10.1109/tbme.2016.2539602.
Abstract
Objective: Like all scientific research, computational neuroscience research must be reproducible. Big data science, including simulation research, cannot depend exclusively on journal articles as the method to provide the sharing and transparency required for reproducibility.
Methods: Ensuring model reproducibility requires the use of multiple standard software practices and tools, including version control, strong commenting and documentation, and code modularity.
Results: Building on these standard practices, model-sharing sites and tools have been developed that fit into several categories: (1) standardized neural simulators; (2) shared computational resources; (3) declarative model descriptors, ontologies, and standardized annotations; and (4) model-sharing repositories and sharing standards.
Conclusion: A number of complementary innovations have been proposed to enhance sharing, transparency, and reproducibility. The individual user can be encouraged to make use of version control, commenting, documentation, and modularity in the development of models. The community can help by requiring model sharing as a condition of publication and funding.
Significance: Model management will become increasingly important as multiscale models become larger, more detailed, and correspondingly more difficult to manage by any single investigator or laboratory. Additional big-data management complexity will come as the models become more useful in interpreting experiments, increasing the need to ensure clear alignment between modeling data, both parameters and results, and experiments.
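One practice the review recommends, keeping model parameters declarative and stored alongside results so that modeling data stay aligned with experiments, can be sketched as follows. The parameter names and values here are illustrative assumptions, not from the paper.

```python
# Minimal sketch (illustrative) of a reproducibility practice: declarative
# parameters serialized next to results, with a hash as a provenance tag.

import hashlib
import json

params = {"n_neurons": 100, "dt_ms": 0.025, "g_leak": 0.0003, "seed": 42}

def run_model(p):
    """Stand-in for a simulation; a real model would consume these parameters."""
    return {"mean_rate_hz": 12.5}

record = {
    "parameters": params,
    # Hashing the sorted parameter set gives a compact, stable run identifier.
    "param_hash": hashlib.sha256(
        json.dumps(params, sort_keys=True).encode()
    ).hexdigest()[:12],
    "results": run_model(params),
}
archive = json.dumps(record, indent=2)  # version-control this artifact with the code
```

Committing such records alongside the code is one concrete way to meet the version-control and documentation practices the review calls for.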
12. Cheung K, Schultz SR, Luk W. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors. Front Neurosci 2016;9:516. PMID: 26834542; PMCID: PMC4712299; DOI: 10.3389/fnins.2015.00516.
Abstract
NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation and deliver optimized performance, for example in the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current- or conductance-based neuronal models, such as the integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve real-time performance for up to 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times over an 8-core processor, or 2.83 times over GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation.
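The integrate-and-fire model that NeuroFlow supports can be illustrated with a plain-Python leaky integrate-and-fire sketch. This is not NeuroFlow's FPGA implementation and not PyNN; the constants and input are illustrative assumptions.

```python
# Minimal sketch (plain Python, illustrative) of a leaky integrate-and-fire
# neuron, one of the neuronal models platforms like NeuroFlow support.

def simulate_lif(current, dt=1.0, tau=20.0,
                 v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0):
    """Euler-integrate dV/dt = (v_rest - V + I) / tau; return spike times (ms)."""
    v, spikes = v_rest, []
    for step, i_in in enumerate(current):
        v += dt * (v_rest - v + i_in) / tau
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset  # reset membrane after each spike
    return spikes

# A constant 20.0 drive pushes V toward -45 mV, above the -50 mV threshold,
# so the neuron fires periodically over the 200 ms window.
spike_times = simulate_lif([20.0] * 200)
```

Hardware platforms evaluate essentially this update rule, in parallel, for hundreds of thousands of neurons per time step, which is where the FPGA speedups quoted above come from.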
Affiliations:
- Kit Cheung: Custom Computing Research Group, Department of Computing, Imperial College London, London, UK; Centre for Neurotechnology, Department of Bioengineering, Imperial College London, London, UK
- Simon R. Schultz: Centre for Neurotechnology, Department of Bioengineering, Imperial College London, London, UK
- Wayne Luk: Custom Computing Research Group, Department of Computing, Imperial College London, London, UK
13. Givon LE, Lazar AA. Neurokernel: An Open Source Platform for Emulating the Fruit Fly Brain. PLoS One 2016;11:e0146581. PMID: 26751378; PMCID: PMC4709234; DOI: 10.1371/journal.pone.0146581.
Abstract
We have developed an open software platform called Neurokernel for the collaborative development of comprehensive models of the brain of the fruit fly Drosophila melanogaster and their execution and testing on multiple Graphics Processing Units (GPUs). Neurokernel provides a programming model that capitalizes upon the structural organization of the fly brain into a fixed number of functional modules to distinguish between these modules' local information processing capabilities and the connectivity patterns that link them. By defining mandatory communication interfaces that specify how data is transmitted between models of each of these modules regardless of their internal design, Neurokernel explicitly enables multiple researchers to collaboratively model the fruit fly's entire brain by integration of their independently developed models of its constituent processing units. We demonstrate the power of Neurokernel's model integration by combining independently developed models of the retina and lamina neuropils in the fly's visual system and by demonstrating their neuroinformation processing capability. We also illustrate Neurokernel's ability to take advantage of direct GPU-to-GPU data transfers with benchmarks that demonstrate scaling of Neurokernel's communication performance, both over the number of interface ports exposed by an emulation's constituent modules and over the total number of modules an emulation comprises.
Affiliation(s)
- Lev E. Givon
- Department of Electrical Engineering, Columbia University, New York, NY 10027, United States of America
- Aurel A. Lazar
- Department of Electrical Engineering, Columbia University, New York, NY 10027, United States of America
14
Stefanini F, Neftci EO, Sheik S, Indiveri G. PyNCS: a microkernel for high-level definition and configuration of neuromorphic electronic systems. Front Neuroinform 2014; 8:73. PMID: 25232314; PMCID: PMC4152885; DOI: 10.3389/fninf.2014.00073.
Abstract
Neuromorphic hardware offers an electronic substrate for the realization of asynchronous event-based sensory-motor systems and large-scale spiking neural network architectures. In order to characterize these systems, configure them, and carry out modeling experiments, it is often necessary to interface them to workstations. The software used for this purpose typically consists of a large monolithic block of code which is highly specific to the hardware setup used. While this approach can lead to highly integrated hardware/software systems, it hampers the development of modular and reconfigurable infrastructures thus preventing a rapid evolution of such systems. To alleviate this problem, we propose PyNCS, an open-source front-end for the definition of neural network models that is interfaced to the hardware through a set of Python Application Programming Interfaces (APIs). The design of PyNCS promotes modularity, portability and expandability and separates implementation from hardware description. The high-level front-end that comes with PyNCS includes tools to define neural network models as well as to create, monitor and analyze spiking data. Here we report the design philosophy behind the PyNCS framework and describe its implementation. We demonstrate its functionality with two representative case studies, one using an event-based neuromorphic vision sensor, and one using a set of multi-neuron devices for carrying out a cognitive decision-making task involving state-dependent computation. PyNCS, already applicable to a wide range of existing spike-based neuromorphic setups, will accelerate the development of hybrid software/hardware neuromorphic systems, thanks to its code flexibility. The code is open-source and available online at https://github.com/inincs/pyNCS.
Affiliation(s)
- Fabio Stefanini
- Department of Information Technology and Electrical Engineering, Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Emre O Neftci
- Department of Bioengineering, Institute for Neural Computation, University of California at San Diego, La Jolla, CA, USA
- Sadique Sheik
- Department of Information Technology and Electrical Engineering, Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Giacomo Indiveri
- Department of Information Technology and Electrical Engineering, Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
15
Camuñas-Mesa LA, Serrano-Gotarredona T, Ieng SH, Benosman RB, Linares-Barranco B. On the use of orientation filters for 3D reconstruction in event-driven stereo vision. Front Neurosci 2014; 8:48. PMID: 24744694; PMCID: PMC3978326; DOI: 10.3389/fnins.2014.00048.
Abstract
The recently developed Dynamic Vision Sensors (DVS) sense visual information asynchronously and code it into trains of events with sub-microsecond temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, therefore increasing the number of constraints applied to the matching algorithm. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction.
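The orientation constraint described above can be illustrated with a small sketch (a toy example with assumed kernel parameters, not the authors' implementation): candidate event pairs are accepted only when local patches around them share the same dominant Gabor orientation.

```python
import numpy as np

def gabor_kernel(size, theta, wavelength=4.0, sigma=2.0):
    """Real-valued Gabor kernel oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def dominant_orientation(patch, thetas):
    """Index of the Gabor orientation responding most strongly to a patch."""
    responses = [abs(np.sum(patch * gabor_kernel(patch.shape[0], t))) for t in thetas]
    return int(np.argmax(responses))

def orientation_consistent(patch_left, patch_right, thetas):
    """Extra stereo-matching constraint: a candidate event pair is kept only if
    it lies on edges with the same dominant orientation in both retinas."""
    return dominant_orientation(patch_left, thetas) == dominant_orientation(patch_right, thetas)
```

With, say, `thetas = np.linspace(0, np.pi, 4, endpoint=False)`, a vertical and a horizontal edge yield different dominant orientations and would be rejected as a match.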
Affiliation(s)
- Luis A Camuñas-Mesa
- Instituto de Microelectrónica de Sevilla (IMSE-CNM), CSIC y Universidad de Sevilla, Sevilla, Spain
- Sio H Ieng
- UMR_S968 Inserm/UPMC/CNRS 7210, Institut de la Vision, Université de Pierre et Marie Curie, Paris, France
- Ryad B Benosman
- UMR_S968 Inserm/UPMC/CNRS 7210, Institut de la Vision, Université de Pierre et Marie Curie, Paris, France
- Bernabe Linares-Barranco
- Instituto de Microelectrónica de Sevilla (IMSE-CNM), CSIC y Universidad de Sevilla, Sevilla, Spain
16
Macklin DN, Ruggero NA, Covert MW. The future of whole-cell modeling. Curr Opin Biotechnol 2014; 28:111-5. PMID: 24556244; DOI: 10.1016/j.copbio.2014.01.012.
Abstract
Integrated whole-cell modeling is poised to make a dramatic impact on molecular and systems biology, bioengineering, and medicine, once certain obstacles are overcome. From our group's experience building a whole-cell model of Mycoplasma genitalium, we identified several significant challenges to building models of more complex cells. Here we review and discuss these challenges in seven areas: (1) experimental interrogation; (2) data curation; (3) model building and integration; (4) accelerated computation; (5) analysis and visualization; (6) model validation; and (7) collaboration and community development. Surmounting these challenges will require the cooperation of an interdisciplinary group of researchers to create increasingly sophisticated whole-cell models and make data, models, and simulations more accessible to the wider community.
Affiliation(s)
- Derek N Macklin
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Nicholas A Ruggero
- Department of Chemical Engineering, Stanford University, Stanford, CA, USA
- Markus W Covert
- Department of Bioengineering, Stanford University, Stanford, CA, USA
17
Moradi S, Indiveri G. An event-based neural network architecture with an asynchronous programmable synaptic memory. IEEE Trans Biomed Circuits Syst 2014; 8:98-107. PMID: 24681923; DOI: 10.1109/tbcas.2013.2255873.
Abstract
We present a hybrid analog/digital very large scale integration (VLSI) implementation of a spiking neural network with programmable synaptic weights. The synaptic weight values are stored in an asynchronous Static Random Access Memory (SRAM) module, which is interfaced to a fast current-mode event-driven DAC for producing synaptic currents with the appropriate amplitude values. These currents are further integrated by current-mode integrator synapses to produce biophysically realistic temporal dynamics. The synapse output currents are then integrated by compact and efficient integrate and fire silicon neuron circuits with spike-frequency adaptation and adjustable refractory period and spike-reset voltage settings. The fabricated chip comprises a total of 32 × 32 SRAM cells, 4 × 32 synapse circuits and 32 × 1 silicon neurons. It acts as a transceiver, receiving asynchronous events in input, performing neural computation with hybrid analog/digital circuits on the input spikes, and eventually producing digital asynchronous events in output. Input, output, and synaptic weight values are transmitted to/from the chip using a common communication protocol based on the Address Event Representation (AER). Using this representation it is possible to interface the device to a workstation or a micro-controller and explore the effect of different types of Spike-Timing Dependent Plasticity (STDP) learning algorithms for updating the synaptic weights values in the SRAM module. We present experimental results demonstrating the correct operation of all the circuits present on the chip.
18
Neftci E, Das S, Pedroni B, Kreutz-Delgado K, Cauwenberghs G. Event-driven contrastive divergence for spiking neuromorphic systems. Front Neurosci 2014; 7:272. PMID: 24574952; PMCID: PMC3922083; DOI: 10.3389/fnins.2013.00272.
Abstract
Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetic which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train an RBM constructed with Integrate & Fire (I&F) neurons, that is constrained by the limitations of existing and near future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike Time Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality.
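The STDP rule that carries out the weight updates can be sketched in its textbook pair-based form (illustrative constants; the paper's event-driven online variant differs in detail):

```python
import numpy as np

def stdp_updates(pre_spikes, post_spikes, a_plus=0.01, a_minus=0.0105,
                 tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP: potentiate when a presynaptic spike precedes a
    postsynaptic one, depress otherwise, with exponential timing windows.
    Returns the net weight change for one pre/post spike-train pair (times in ms)."""
    dw = 0.0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if dt > 0:       # pre before post -> LTP
                dw += a_plus * np.exp(-dt / tau_plus)
            elif dt < 0:     # post before pre -> LTD
                dw -= a_minus * np.exp(dt / tau_minus)
    return dw
```

Applied asynchronously as spikes arrive, updates of this form replace the discrete CD weight-update step.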
Affiliation(s)
- Emre Neftci
- Institute for Neural Computation, University of California San Diego, La Jolla, CA, USA
- Srinjoy Das
- Institute for Neural Computation, University of California San Diego, La Jolla, CA, USA
- Electrical and Computer Engineering Department, University of California San Diego, La Jolla, CA, USA
- Bruno Pedroni
- Department of Bioengineering, University of California San Diego, La Jolla, CA, USA
- Kenneth Kreutz-Delgado
- Institute for Neural Computation, University of California San Diego, La Jolla, CA, USA
- Electrical and Computer Engineering Department, University of California San Diego, La Jolla, CA, USA
- Gert Cauwenberghs
- Institute for Neural Computation, University of California San Diego, La Jolla, CA, USA
- Department of Bioengineering, University of California San Diego, La Jolla, CA, USA
19
Pérez-Carrasco JA, Zhao B, Serrano C, Acha B, Serrano-Gotarredona T, Chen S, Linares-Barranco B. Mapping from frame-driven to frame-free event-driven vision systems by low-rate rate coding and coincidence processing - application to feedforward ConvNets. IEEE Trans Pattern Anal Mach Intell 2013; 35:2706-2719. PMID: 24051730; DOI: 10.1109/tpami.2013.71.
Abstract
Event-driven visual sensors have attracted interest from a number of different research communities. They provide visual information in quite a different way from conventional video systems consisting of sequences of still images rendered at a given "frame rate." Event-driven vision sensors take inspiration from biology. Each pixel sends out an event (spike) when it senses something meaningful is happening, without any notion of a frame. A special type of event-driven sensor is the so-called dynamic vision sensor (DVS) where each pixel computes relative changes of light or "temporal contrast." The sensor output consists of a continuous flow of pixel events that represent the moving objects in the scene. Pixel events become available with microsecond delays with respect to "reality." These events can be processed "as they flow" by a cascade of event (convolution) processors. As a result, input and output event flows are practically coincident in time, and objects can be recognized as soon as the sensor provides enough meaningful events. In this paper, we present a methodology for mapping from a properly trained neural network in a conventional frame-driven representation to an event-driven representation. The method is illustrated by studying event-driven convolutional neural networks (ConvNet) trained to recognize rotating human silhouettes or high speed poker card symbols. The event-driven ConvNet is fed with recordings obtained from a real DVS camera. The event-driven ConvNet is simulated with a dedicated event-driven simulator and consists of a number of event-driven processing modules, the characteristics of which are obtained from individually manufactured hardware modules.
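A minimal sketch of the rate-coding idea, converting a static frame into a stream of timestamped pixel events (an assumed Poisson encoding for illustration, not the paper's exact mapping scheme):

```python
import numpy as np

def frame_to_events(frame, duration_ms=10.0, max_rate_hz=200.0, rng=None):
    """Illustrative rate coding: emit Poisson spike events per pixel with a rate
    proportional to normalized intensity. Returns (timestamp_ms, x, y) tuples
    sorted by time, i.e. a frame-free event stream."""
    rng = np.random.default_rng() if rng is None else rng
    norm = frame.astype(float) / max(frame.max(), 1e-9)
    rates = norm * max_rate_hz                      # events per second, per pixel
    counts = rng.poisson(rates * duration_ms / 1000.0)
    events = []
    for (y, x), n in np.ndenumerate(counts):
        for t in rng.uniform(0.0, duration_ms, size=n):
            events.append((t, x, y))
    events.sort(key=lambda e: e[0])
    return events
```

Feeding such a stream through event-driven convolution stages is what allows recognition to begin before a full "frame" of information has accumulated.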
20
Indiveri G, Linares-Barranco B, Legenstein R, Deligeorgis G, Prodromakis T. Integration of nanoscale memristor synapses in neuromorphic computing architectures. Nanotechnology 2013; 24:384010. PMID: 23999381; DOI: 10.1088/0957-4484/24/38/384010.
Abstract
Conventional neuro-computing architectures and artificial neural networks have often been developed with no or loose connections to neuroscience. As a consequence, they have largely ignored key features of biological neural processing systems, such as their extremely low-power consumption features or their ability to carry out robust and efficient computation using massively parallel arrays of limited precision, highly variable, and unreliable components. Recent developments in nano-technologies are making available extremely compact and low power, but also variable and unreliable solid-state devices that can potentially extend the offerings of available CMOS technologies. In particular, memristors are regarded as a promising solution for modeling key features of biological synapses due to their nanoscale dimensions, their capacity to store multiple bits of information per element and the low energy required to write distinct states. In this paper, we first review the neuro- and neuromorphic computing approaches that can best exploit the properties of memristor and scale devices, and then propose a novel hybrid memristor-CMOS neuromorphic circuit which represents a radical departure from conventional neuro-computing approaches, as it uses memristors to directly emulate the biophysics and temporal dynamics of real synapses. We point out the differences between the use of memristors in conventional neuro-computing architectures and the hybrid memristor-CMOS circuit proposed, and argue how this circuit represents an ideal building block for implementing brain-inspired probabilistic computing paradigms that are robust to variability and fault tolerant by design.
Affiliation(s)
- Giacomo Indiveri
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland.
21
Hasler J, Marr B. Finding a roadmap to achieve large neuromorphic hardware systems. Front Neurosci 2013; 7:118. PMID: 24058330; PMCID: PMC3767911; DOI: 10.3389/fnins.2013.00118.
Abstract
Neuromorphic systems are gaining increasing importance in an era where CMOS digital computing techniques are reaching physical limits. These silicon systems mimic extremely energy efficient neural computing structures, potentially both for solving engineering applications as well as understanding neural computation. Toward this end, the authors provide a glimpse at what the technology evolution roadmap looks like for these systems so that Neuromorphic engineers may gain the same benefit of anticipation and foresight that IC designers gained from Moore's law many years ago. Scaling of energy efficiency, performance, and size will be discussed as well as how the implementation and application space of Neuromorphic systems are expected to evolve over time.
Affiliation(s)
- Jennifer Hasler
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA
22
Crane EA, Rothman ED, Childers D, Gerstner GE. Analysis of temporal variation in human masticatory cycles during gum chewing. Arch Oral Biol 2013; 58:1464-74. PMID: 23915677; DOI: 10.1016/j.archoralbio.2013.06.009.
Abstract
OBJECTIVE The study investigated modulation of fast and slow opening (FO, SO) and closing (FC, SC) chewing cycle phases using gum-chewing sequences in humans. DESIGN Twenty-two healthy adult subjects participated by chewing gum for at least 20 s on the right side and at least 20 s on the left side while jaw movements were tracked with a 3D motion analysis system. Jaw movement data were digitized, and chewing cycle phases were identified and analysed for all chewing cycles in a complete sequence. RESULTS All four chewing cycle phase durations were more variable than total cycle durations, a result found in other non-human primates. Significant negative correlations existed between the opening phases, SO and FO, and between the closing phases, SC and FC; however, there was less consistency in terms of which phases were negatively correlated both between subjects, and between chewing sides within subjects, compared with results reported in other species. CONCLUSIONS The coordination of intra-cycle phases appears to be flexible and to follow complex rules during gum-chewing in humans. Alternatively, the observed intra-cycle phase relationships could simply reflect: (1) variation in jaw kinematics due to variation in how gum was handled by the tongue on a chew-by-chew basis in our experimental design or (2) variation due to data sampling noise and/or how phases were defined and identified.
Affiliation(s)
- Elizabeth A Crane
- Department of Biologic and Materials Sciences, School of Dentistry, Ann Arbor, MI 48109-1078, USA.
23
Abstract
The quest to implement intelligent processing in electronic neuromorphic systems lacks methods for achieving reliable behavioral dynamics on substrates of inherently imprecise and noisy neurons. Here we report a solution to this problem that involves first mapping an unreliable hardware layer of spiking silicon neurons into an abstract computational layer composed of generic reliable subnetworks of model neurons and then composing the target behavioral dynamics as a "soft state machine" running on these reliable subnets. In the first step, the neural networks of the abstract layer are realized on the hardware substrate by mapping the neuron circuit bias voltages to the model parameters. This mapping is obtained by an automatic method in which the electronic circuit biases are calibrated against the model parameters by a series of population activity measurements. The abstract computational layer is formed by configuring neural networks as generic soft winner-take-all subnetworks that provide reliable processing by virtue of their active gain, signal restoration, and multistability. The necessary states and transitions of the desired high-level behavior are then easily embedded in the computational layer by introducing only sparse connections between some neurons of the various subnets. We demonstrate this synthesis method for a neuromorphic sensory agent that performs real-time context-dependent classification of motion patterns observed by a silicon retina.
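The reliable soft winner-take-all subnetworks described above can be caricatured with a simple rate model (a toy sketch; the paper uses spiking silicon neurons, and the parameters here are invented):

```python
import numpy as np

def soft_wta(inputs, w_exc=0.4, w_inh=0.5, steps=100):
    """Rate-based soft winner-take-all sketch: each unit receives self-excitation
    plus shared inhibition proportional to total activity, so the strongest
    input is amplified while its competitors are suppressed."""
    x = np.zeros_like(inputs, dtype=float)
    for _ in range(steps):
        x = np.maximum(inputs + w_exc * x - w_inh * x.sum(), 0.0)
    return x
```

It is this gain-plus-restoration behavior that makes WTA subnets a reliable computational layer on top of imprecise neurons: small input differences are pulled apart into a near-discrete decision.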
24
Cassidy AS, Georgiou J, Andreou AG. Design of silicon brains in the nano-CMOS era: spiking neurons, learning synapses and neural architecture optimization. Neural Netw 2013; 45:4-26. PMID: 23886551; DOI: 10.1016/j.neunet.2013.05.011.
Abstract
We present a design framework for neuromorphic architectures in the nano-CMOS era. Our approach to the design of spiking neurons and STDP learning circuits relies on parallel computational structures where neurons are abstracted as digital arithmetic logic units and communication processors. Using this approach, we have developed arrays of silicon neurons that scale to millions of neurons in a single state-of-the-art Field Programmable Gate Array (FPGA). We demonstrate the validity of the design methodology through the implementation of cortical development in a circuit of spiking neurons, STDP synapses, and neural architecture optimization.
Affiliation(s)
- Andrew S Cassidy
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA.
25
Wang Y, Liu SC. Active processing of spatio-temporal input patterns in silicon dendrites. IEEE Trans Biomed Circuits Syst 2013; 7:307-318. PMID: 23853330; DOI: 10.1109/tbcas.2012.2199487.
Abstract
Capturing the functionality of active dendritic processing into abstract mathematical models will help us to understand the role of complex biophysical neurons in neuronal computation and to build future useful neuromorphic analog Very Large Scale Integrated (aVLSI) neuronal devices. Previous work based on an aVLSI multi-compartmental neuron model demonstrates that the compartmental response in the presence of either of two widely studied classes of active mechanisms, is a nonlinear sigmoidal function of the degree of either input temporal synchrony OR input clustering level. Using the same silicon model, this work expounds the interaction between both active mechanisms in a compartment receiving input patterns of varying temporal AND spatial clustering structure and demonstrates that this compartmental response can be captured by a combined sigmoid and radial-basis function over both input dimensions. This paper further shows that the response to input spatio-temporal patterns in a one-dimensional multi-compartmental dendrite, can be described by a radial-basis like function of the degree of temporal synchrony between the inter-compartmental inputs.
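The reported response surface, sigmoidal in temporal synchrony and radial-basis-like in spatial clustering, can be sketched as a separable product (an assumed functional form with purely illustrative parameters, not the paper's fitted model):

```python
import numpy as np

def sigmoid(x, x0, k):
    """Logistic function centered at x0 with slope k."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

def rbf(x, center, width):
    """Gaussian radial-basis bump peaked at `center`."""
    return np.exp(-((x - center) ** 2) / (2 * width ** 2))

def compartment_response(synchrony, clustering,
                         sync_mid=0.5, sync_slope=10.0,
                         cluster_center=0.7, cluster_width=0.2):
    """Toy fit of the described surface: sigmoidal in input temporal synchrony,
    radial-basis-like in spatial clustering (both inputs normalized to [0, 1])."""
    return sigmoid(synchrony, sync_mid, sync_slope) * rbf(clustering, cluster_center, cluster_width)
```

The product form encodes the qualitative finding: the response saturates with synchrony but is tuned, rather than monotonic, in spatial clustering.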
Affiliation(s)
- Yingxue Wang
- Institute of Neuroinformatics, University of Zürich and ETH Zürich, CH-8057 Zürich, Switzerland.
26
Dethier J, Nuyujukian P, Ryu SI, Shenoy KV, Boahen K. Design and validation of a real-time spiking-neural-network decoder for brain-machine interfaces. J Neural Eng 2013; 10:036008. PMID: 23574919; DOI: 10.1088/1741-2560/10/3/036008.
Abstract
OBJECTIVE Cortically-controlled motor prostheses aim to restore functions lost to neurological disease and injury. Several proof of concept demonstrations have shown encouraging results, but barriers to clinical translation still remain. In particular, intracortical prostheses must satisfy stringent power dissipation constraints so as not to damage cortex. APPROACH One possible solution is to use ultra-low power neuromorphic chips to decode neural signals for these intracortical implants. The first step is to explore in simulation the feasibility of translating decoding algorithms for brain-machine interface (BMI) applications into spiking neural networks (SNNs). MAIN RESULTS Here we demonstrate the validity of the approach by implementing an existing Kalman-filter-based decoder in a simulated SNN using the Neural Engineering Framework (NEF), a general method for mapping control algorithms onto SNNs. To measure this system's robustness and generalization, we tested it online in closed-loop BMI experiments with two rhesus monkeys. Across both monkeys, a Kalman filter implemented using a 2000-neuron SNN has comparable performance to that of a Kalman filter implemented using standard floating point techniques. SIGNIFICANCE These results demonstrate the tractability of SNN implementations of statistical signal processing algorithms on different monkeys and for several tasks, suggesting that a SNN decoder, implemented on a neuromorphic chip, may be a feasible computational platform for low-power fully-implanted prostheses. The validation of this closed-loop decoder system and the demonstration of its robustness and generalization hold promise for SNN implementations on an ultra-low power neuromorphic chip using the NEF.
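The classical Kalman filter that the SNN decoder emulates takes the standard predict/update form (generic textbook version; the matrices here are placeholders, not the paper's fitted decoder):

```python
import numpy as np

def kalman_step(x, P, z, A, C, Q, R):
    """One predict/update step of a Kalman filter: x is the state estimate
    (e.g., cursor kinematics), z the observed neural features."""
    # Predict forward through the state model
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update: blend prediction with the new observation via the Kalman gain
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```

The Neural Engineering Framework then maps the matrix-vector operations of this recursion onto the connection weights of a spiking network.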
Affiliation(s)
- Julie Dethier
- Department of Bioengineering, Stanford University, Stanford, CA 94305, USA
27
Adaptive training of cortical feature maps for a robot sensorimotor controller. Neural Netw 2013; 44:6-21. PMID: 23545539; DOI: 10.1016/j.neunet.2013.03.004.
Abstract
This work investigates self-organising cortical feature maps (SOFMs) based upon the Kohonen Self-Organising Map (SOM) but implemented with spiking neural networks. In future work, the feature maps are intended as the basis for a sensorimotor controller for an autonomous humanoid robot. Traditional SOM methods require some modifications to be useful for autonomous robotic applications. Ideally the map training process should be self-regulating and not require predefined training files or the usual SOM parameter reduction schedules. It would also be desirable if the organised map had some flexibility to accommodate new information whilst preserving previous learnt patterns. Here methods are described which have been used to develop a cortical motor map training system which goes some way towards addressing these issues. The work is presented under the general term 'Adaptive Plasticity' and the main contribution is the development of a 'plasticity resource' (PR) which is modelled as a global parameter which expresses the rate of map development and is related directly to learning on the afferent (input) connections. The PR is used to control map training in place of a traditional learning rate parameter. In conjunction with the PR, random generation of inputs from a set of exemplar patterns is used rather than predefined datasets and enables maps to be trained without deciding in advance how much data is required. An added benefit of the PR is that, unlike a traditional learning rate, it can increase as well as decrease in response to the demands of the input and so allows the map to accommodate new information when the inputs are changed during training.
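The "plasticity resource" mechanism can be sketched as a globally gated SOM update (a simplified winner-only caricature; the names and constants below are illustrative, and the real system uses spiking neurons and neighborhood learning):

```python
import numpy as np

def som_train_step(weights, x, pr, pr_gain=0.1, pr_decay=0.01):
    """One SOM update gated by a global 'plasticity resource' (PR): a large
    quantization error replenishes PR (more learning), good matches let it
    decay, so PR can rise again if the input statistics change."""
    dists = np.linalg.norm(weights - x, axis=1)
    winner = int(np.argmin(dists))
    error = dists[winner]
    pr = min(1.0, max(0.0, pr + pr_gain * error - pr_decay))  # PR tracks demand
    weights[winner] += pr * (x - weights[winner])             # PR replaces the learning rate
    return weights, pr
```

Unlike a scheduled learning-rate decay, nothing here depends on knowing the training length in advance: PR self-regulates from the match quality alone.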
28
Choudhary S, Sloan S, Fok S, Neckar A, Trautmann E, Gao P, Stewart T, Eliasmith C, Boahen K. Silicon Neurons That Compute. Artificial Neural Networks and Machine Learning - ICANN 2012. DOI: 10.1007/978-3-642-33269-2_16.
29
A biophysically-based neuromorphic model of spike rate- and timing-dependent plasticity. Proc Natl Acad Sci U S A 2011; 108:E1266-74. PMID: 22089232; DOI: 10.1073/pnas.1106161108.
Abstract
Current advances in neuromorphic engineering have made it possible to emulate complex neuronal ion channel and intracellular ionic dynamics in real time using highly compact and power-efficient complementary metal-oxide-semiconductor (CMOS) analog very-large-scale-integrated circuit technology. Recently, there has been growing interest in the neuromorphic emulation of the spike-timing-dependent plasticity (STDP) Hebbian learning rule by phenomenological modeling using CMOS, memristor or other analog devices. Here, we propose a CMOS circuit implementation of a biophysically grounded neuromorphic (iono-neuromorphic) model of synaptic plasticity that is capable of capturing both the spike rate-dependent plasticity (SRDP, of the Bienenstock-Cooper-Munro or BCM type) and STDP rules. The iono-neuromorphic model reproduces bidirectional synaptic changes with NMDA receptor-dependent and intracellular calcium-mediated long-term potentiation or long-term depression assuming retrograde endocannabinoid signaling as a second coincidence detector. Changes in excitatory or inhibitory synaptic weights are registered and stored in a nonvolatile and compact digital format analogous to the discrete insertion and removal of AMPA or GABA receptor channels. The versatile Hebbian synapse device is applicable to a variety of neuroprosthesis, brain-machine interface, neurorobotics, neuromimetic computation, machine learning, and neural-inspired adaptive control problems.
30
Neftci E, Chicca E, Indiveri G, Douglas R. A Systematic Method for Configuring VLSI Networks of Spiking Neurons. Neural Comput 2011; 23:2457-97. DOI: 10.1162/neco_a_00182.
Abstract
An increasing number of research groups are developing custom hybrid analog/digital very large scale integration (VLSI) chips and systems that implement hundreds to thousands of spiking neurons with biophysically realistic dynamics, with the intention of emulating brainlike real-world behavior in hardware and robotic systems rather than simply simulating their performance on general-purpose digital computers. Although the electronic engineering aspects of these emulation systems are proceeding well, progress toward the actual emulation of brainlike tasks is restricted by the lack of suitable high-level configuration methods of the kind that have already been developed over many decades for simulations on general-purpose computers. The key difficulty is that the dynamics of the CMOS electronic analogs are determined by transistor biases that do not map simply to the parameter types and values used in typical abstract mathematical models of neurons and their networks. Here we provide a general method for resolving this difficulty. We describe a parameter mapping technique that permits an automatic configuration of VLSI neural networks so that their electronic emulation conforms to a higher-level neuronal simulation. We show that the neurons configured by our method exhibit spike timing statistics and temporal dynamics that are the same as those observed in the software simulated neurons and, in particular, that the key parameters of recurrent VLSI neural networks (e.g., implementing soft winner-take-all) can be precisely tuned. The proposed method permits a seamless integration between software simulations with hardware emulations and intertranslatability between the parameters of abstract neuronal models and their emulation counterparts. Most importantly, our method offers a route toward a high-level task configuration language for neuromorphic VLSI systems.
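The calibration idea, measuring population activity at several bias settings and inverting the fitted response curve to obtain the bias that realizes a desired model parameter, can be sketched as follows (a toy linear stand-in for the real, nonlinear transistor response; all names are illustrative):

```python
import numpy as np

def calibrate_bias(bias_values, measured_rates, target_rate):
    """Fit a monotonic response curve to (bias, measured activity) pairs from
    the chip, then invert it to find the bias realizing a target firing rate.
    A first-order polynomial stands in for the actual bias-to-rate curve."""
    slope, intercept = np.polyfit(bias_values, measured_rates, 1)
    return (target_rate - intercept) / slope
```

Repeating this per parameter yields the automatic bias-to-model-parameter mapping that lets a hardware emulation be configured from a software-level neuron specification.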
Affiliation(s)
- Emre Neftci
- Institute of Neuroinformatics, ETH, and University of Zurich, Zurich 8057, Switzerland
| | - Elisabetta Chicca
- Institute of Neuroinformatics, ETH, and University of Zurich, Zurich 8057, Switzerland
| | - Giacomo Indiveri
- Institute of Neuroinformatics, ETH, and University of Zurich, Zurich 8057, Switzerland
| | - Rodney Douglas
- Institute of Neuroinformatics, ETH, and University of Zurich, Zurich 8057, Switzerland
| |
|
31
|
Indiveri G, Linares-Barranco B, Hamilton TJ, van Schaik A, Etienne-Cummings R, Delbruck T, Liu SC, Dudek P, Häfliger P, Renaud S, Schemmel J, Cauwenberghs G, Arthur J, Hynna K, Folowosele F, Saighi S, Serrano-Gotarredona T, Wijekoon J, Wang Y, Boahen K. Neuromorphic silicon neuron circuits. Front Neurosci 2011; 5:73. [PMID: 21747754 PMCID: PMC3130465 DOI: 10.3389/fnins.2011.00073] [Citation(s) in RCA: 358] [Impact Index Per Article: 25.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2010] [Accepted: 05/07/2011] [Indexed: 11/13/2022] Open
Abstract
Hardware implementations of spiking neurons can be extremely useful for a large variety of applications, ranging from high-speed modeling of large-scale neural systems to real-time behaving systems and bidirectional brain–machine interfaces. The specific circuit solutions used to implement silicon neurons depend on the application requirements. In this paper we describe the most common building blocks and techniques used to implement these circuits, and present an overview of a wide range of neuromorphic silicon neurons, which implement different computational models, ranging from biophysically realistic and conductance-based Hodgkin–Huxley models to bi-dimensional generalized adaptive integrate-and-fire models. We compare the different design methodologies used for each silicon neuron design described, and demonstrate their features with experimental results measured from a wide range of fabricated VLSI chips.
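The simplest of the model families surveyed, the leaky integrate-and-fire (LIF) neuron, can be simulated in a few lines; the parameter values below are illustrative, not taken from any of the fabricated chips described.

```python
# Minimal leaky integrate-and-fire simulation (illustrative parameters).
def simulate_lif(i_in, t_stop=1.0, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0):
    """Euler-integrate dV/dt = (-V + i_in) / tau and return spike times.
    A spike is emitted whenever V crosses v_th, after which V resets."""
    v, spikes = 0.0, []
    for step in range(int(t_stop / dt)):
        v += dt * (-v + i_in) / tau
        if v >= v_th:              # threshold crossing: spike and reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A supra-threshold input drives regular firing; a sub-threshold one is silent.
print(len(simulate_lif(1.5)), len(simulate_lif(0.5)))
```

The two-dimensional generalized adaptive models mentioned in the abstract add a slow adaptation variable coupled to the membrane potential; the silicon circuits realize these same dynamics with subthreshold analog currents rather than numerical integration.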
Affiliation(s)
- Giacomo Indiveri
- Institute of Neuroinformatics, University of Zurich and ETH Zurich Zurich, Switzerland
| |
|
32
|
Dethier J, Gilja V, Nuyujukian P, Elassaad SA, Shenoy KV, Boahen K. Spiking Neural Network Decoder for Brain-Machine Interfaces. INTERNATIONAL IEEE/EMBS CONFERENCE ON NEURAL ENGINEERING : [PROCEEDINGS]. INTERNATIONAL IEEE EMBS CONFERENCE ON NEURAL ENGINEERING 2011. [PMID: 24352611 DOI: 10.1109/ner.2011.5910570] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
We used a spiking neural network (SNN) to decode neural data recorded from a 96-electrode array in premotor/motor cortex while a rhesus monkey performed a point-to-point reaching arm movement task. We mapped a Kalman-filter neural prosthetic decode algorithm, developed to predict the arm's velocity, onto the SNN using the Neural Engineering Framework and simulated it using Nengo, a freely available software package. A 20,000-neuron network matched the standard decoder's prediction to within 0.03% (normalized by maximum arm velocity). A 1,600-neuron version of this network was within 0.27% and ran in real time on a 3 GHz PC. These results demonstrate that an SNN can implement a statistical signal processing algorithm widely used as the decoder in high-performance neural prostheses (the Kalman filter) and achieve similar results with just a few thousand neurons. Hardware SNN implementations (neuromorphic chips) may offer power savings, essential for realizing fully implantable cortically controlled prostheses.
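The decoder the SNN approximates is a standard Kalman-filter predict/update cycle over arm velocity. The sketch below shows that cycle with illustrative stand-ins (a random 96-channel observation model and assumed noise covariances), not the fitted models from the experiments.

```python
import numpy as np

# Illustrative 2-D velocity Kalman filter over 96 recording channels.
A = np.array([[0.95, 0.0], [0.0, 0.95]])   # velocity dynamics (assumed)
C = np.random.RandomState(0).randn(96, 2)  # observation model (assumed)
Q = np.eye(2) * 0.01                       # process noise covariance
R = np.eye(96)                             # observation noise covariance

def kalman_step(x, P, y):
    """One predict/update cycle: prior state x, covariance P, firing rates y."""
    x_pred = A @ x                          # predict state forward
    P_pred = A @ P @ A.T + Q                # predict covariance forward
    K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)  # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)   # correct with the innovation
    P_new = (np.eye(2) - K @ C) @ P_pred
    return x_new, P_new

x, P = np.zeros(2), np.eye(2)
y = C @ np.array([0.3, -0.1])              # noiseless rates for a known velocity
for _ in range(20):                         # iterating converges toward it
    x, P = kalman_step(x, P, y)
print(np.round(x, 2))
```

Under the Neural Engineering Framework, the linear maps in this recursion are absorbed into the connection weights of a recurrently connected spiking population, which is what lets the filter run on neuromorphic hardware.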
Affiliation(s)
- Julie Dethier
- Department of Bioengineering, Stanford University, Stanford, CA 94305, USA
| | - Vikash Gilja
- Department of Computer Science and Stanford Institute for Neuro-Innovation and Translational Neuroscience, Stanford University, Stanford, CA 94305, USA
| | - Paul Nuyujukian
- Department of Bioengineering and MSTP, Stanford University, Stanford, CA 94305, USA
| | - Shauki A Elassaad
- Department of Bioengineering, Stanford University, Stanford, CA 94305, USA
| | - Krishna V Shenoy
- Departments of Electrical Engineering and Bioengineering, and Neurosciences Program, Stanford University, Stanford, CA 94305, USA
| | - Kwabena Boahen
- Department of Bioengineering, Stanford University, Stanford, CA 94305, USA
| |
|
33
|
Abstract
Social species are so characterized because they form organizations that extend beyond the individual. The goal of social neuroscience is to investigate the biological mechanisms that underlie these social structures, processes, and behavior and the influences between social and neural structures and processes. Such an endeavor is challenging because it necessitates the integration of multiple levels. Mapping across systems and levels (from genome to social groups and cultures) requires interdisciplinary expertise, comparative studies, innovative methods, and integrative conceptual analysis. Examples of how social neuroscience is contributing to our understanding of the functions of the brain and nervous system are described, and societal implications of social neuroscience are considered.
Affiliation(s)
- John T Cacioppo
- Center for Cognitive and Social Neuroscience, University of Chicago, Chicago, Illinois
| | - Jean Decety
- Center for Cognitive and Social Neuroscience, University of Chicago, Chicago, Illinois
| |
|
34
|
Dethier J, Nuyujukian P, Eliasmith C, Stewart T, Elassaad SA, Shenoy KV, Boahen K. A Brain-Machine Interface Operating with a Real-Time Spiking Neural Network Control Algorithm. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 2011; 2011:2213-2221. [PMID: 25309106 PMCID: PMC4190036] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Motor prostheses aim to restore function to disabled patients. Despite compelling proof-of-concept systems, barriers to clinical translation remain. One challenge is to develop a low-power, fully implantable system that dissipates only minimal power so as not to damage tissue. To this end, we implemented a Kalman-filter-based decoder via a spiking neural network (SNN) and tested it in brain-machine interface (BMI) experiments with a rhesus monkey. The Kalman filter was trained to predict the arm's velocity and mapped onto the SNN using the Neural Engineering Framework (NEF). A 2,000-neuron embedded Matlab SNN implementation runs in real time, and its closed-loop performance is quite comparable to that of the standard Kalman filter. The success of this closed-loop decoder holds promise for hardware SNN implementations of statistical signal processing algorithms on neuromorphic chips, which may offer the power savings necessary to overcome a major obstacle to the successful clinical translation of neural motor prostheses.
Affiliation(s)
- Julie Dethier
- Department of Bioengineering, Stanford University, CA 94305
| | - Paul Nuyujukian
- Department of Bioengineering, School of Medicine, Stanford University, CA 94305
| | - Chris Eliasmith
- Centre for Theoretical Neuroscience, University of Waterloo, Canada
| | - Terry Stewart
- Centre for Theoretical Neuroscience, University of Waterloo, Canada
| | - Krishna V. Shenoy
- Department of Electrical Engineering, Department of Bioengineering, Department of Neurobiology, Stanford University, CA 94305
| | - Kwabena Boahen
- Department of Bioengineering, Stanford University, CA 94305
| |
|
35
|
|
36
|
Neuromorphic sensory systems. Curr Opin Neurobiol 2010; 20:288-95. [DOI: 10.1016/j.conb.2010.03.007] [Citation(s) in RCA: 210] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2010] [Revised: 03/22/2010] [Accepted: 03/24/2010] [Indexed: 11/17/2022]
|
37
|
Glover JC. "The developmental and functional logic of neuronal circuits": commentary on the Kavli Prize in Neuroscience. Neuroscience 2009; 163:977-84. [PMID: 19664740 DOI: 10.1016/j.neuroscience.2009.07.047] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2009] [Revised: 07/13/2009] [Accepted: 07/24/2009] [Indexed: 11/27/2022]
Abstract
The first Kavli Prize in Neuroscience recognizes a confluence of career achievements that together provide a fundamental understanding of how brain and spinal cord circuits are assembled during development and function in the adult. The members of the Kavli Neuroscience Prize Committee have decided to reward three scientists (Sten Grillner, Thomas Jessell, and Pasko Rakic) jointly "for discoveries on the developmental and functional logic of neuronal circuits". Pasko Rakic performed groundbreaking studies of the developing cerebral cortex, including the discovery of how radial glia guide the neuronal migration that establishes cortical layers, and formulated the radial unit hypothesis and its implications for cortical connectivity and evolution. Thomas Jessell discovered molecular principles governing the specification and patterning of different neuron types and the development of their synaptic interconnection into sensorimotor circuits. Sten Grillner elucidated principles of network organization in the vertebrate locomotor central pattern generator, along with its command systems and sensory and higher-order control. The discoveries of Rakic, Jessell, and Grillner provide a framework for how neurons obtain their identities and ultimate locations, how they establish appropriate connections with each other, and how the resultant neuronal networks operate. Their work has significantly advanced our understanding of brain development and function and created new opportunities for the treatment of neurological disorders. Each has pioneered an important area of neuroscience research and left a legacy of exceptional scientific achievement, insight, communication, mentoring, and leadership.
Affiliation(s)
- J C Glover
- Department of Physiology, Institute of Basic Medical Sciences, University of Oslo, Blindern, Oslo, Norway
| |
|
38
|
Ballerini L. Bridging multiple levels of exploration: towards a neuroengineering-based approach to physiological and pathological problems in neuroscience. Front Neurosci 2008; 2:24-5. [PMID: 18982103 PMCID: PMC2570066 DOI: 10.3389/neuro.01.024.2008] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2008] [Accepted: 06/29/2008] [Indexed: 11/17/2022] Open
|