1
Kudithipudi D, Schuman C, Vineyard CM, Pandit T, Merkel C, Kubendran R, Aimone JB, Orchard G, Mayr C, Benosman R, Hays J, Young C, Bartolozzi C, Majumdar A, Cardwell SG, Payvand M, Buckley S, Kulkarni S, Gonzalez HA, Cauwenberghs G, Thakur CS, Subramoney A, Furber S. Neuromorphic computing at scale. Nature 2025; 637:801-812. [PMID: 39843589 DOI: 10.1038/s41586-024-08253-8]
Abstract
Neuromorphic computing is a brain-inspired approach to hardware and algorithm design that efficiently realizes artificial neural networks. Neuromorphic designers apply the principles of biointelligence discovered by neuroscientists to design efficient computational systems, often for applications with size, weight and power constraints. With this research field at a critical juncture, it is crucial to chart the course for the development of future large-scale neuromorphic systems. We describe approaches for creating scalable neuromorphic architectures and identify key features. We discuss potential applications that can benefit from scaling and the main challenges that need to be addressed. Furthermore, we examine a comprehensive ecosystem necessary to sustain growth and the new opportunities that lie ahead when scaling neuromorphic systems. Our work distils ideas from several computing sub-fields, providing guidance to researchers and practitioners of neuromorphic computing who aim to push the frontier forward.
Affiliation(s)
- Tej Pandit
- University of Texas at San Antonio, San Antonio, TX, USA
- Cory Merkel
- Rochester Institute of Technology, Rochester, NY, USA
- Joe Hays
- U.S. Naval Research Laboratory, Washington, DC, USA
- Melika Payvand
- Institute of Neuroinformatics, University of Zürich and ETH Zürich, Zürich, Switzerland
- Sonia Buckley
- National Institute of Standards and Technology, Boulder, CO, USA
2
Balkenhol J, Händel B, Biswas S, Grohmann J, Kistowski JV, Prada J, Bosman CA, Ehrenreich H, Wojcik SM, Kounev S, Blum R, Dandekar T. Beyond-local neural information processing in neuronal networks. Comput Struct Biotechnol J 2024; 23:4288-4305. [PMID: 39687759 PMCID: PMC11647244 DOI: 10.1016/j.csbj.2024.10.040]
Abstract
While there is much knowledge about local neuronal circuitry, considerably less is known about how neuronal input is integrated and combined across neuronal networks to encode higher order brain functions. One challenge lies in the large number of complex neural interactions. Neural networks use oscillating activity for information exchange between distributed nodes. To better understand the building principles underlying the observation of synchronized oscillatory activity in a large-scale network, we developed a reductionistic neuronal network model. The fundamental building principle is laterally and temporally interconnected virtual nodes (microcircuits), each modeled as a local oscillator. By this building principle, the neuronal network model can integrate information in time and space. The simulation gives rise to a wave interference pattern that spreads over all simulated columns in the form of a travelling wave. The model design stabilizes states of efficient information processing across all participating neuronal equivalents. Model-specific oscillatory patterns, generated by complex input stimuli, were similar to electrophysiological high-frequency signals that we could confirm in the primate visual cortex during a visual perception task. Important oscillatory model forerunners, as well as the limitations and strengths of our reductionistic model, are discussed. Our simple scalable model shows unique integration properties and successfully reproduces a variety of biological phenomena such as harmonics, coherence patterns, frequency-speed relationships, and oscillatory activities. We suggest that our scalable model simulates aspects of a basic building principle underlying the oscillatory, large-scale integration of information in small and large brains.
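To make the coupled-oscillator picture above concrete, here is a minimal sketch (Python/NumPy) of a chain of local phase oscillators in which each node follows its neighbour with a fixed phase lag standing in for conduction delay, so that the locked activity propagates as a travelling wave. This is a generic illustration, not the authors' model; every parameter value below is an assumption.

import numpy as np

rng = np.random.default_rng(0)
n_nodes, dt, steps = 64, 1e-4, 50000
omega = 2 * np.pi * 40.0      # intrinsic frequency, roughly gamma band (assumed)
coupling = 200.0              # lateral coupling strength (assumed)
lag = 0.3                     # phase lag per connection, standing in for delay (assumed)

theta = 0.01 * rng.standard_normal(n_nodes)   # initial phases of the "columns"
for _ in range(steps):
    drive = np.zeros(n_nodes)
    # each node (except the first) is pulled toward its left neighbour with a lag
    drive[1:] = coupling * np.sin(theta[:-1] - theta[1:] - lag)
    theta += dt * (omega + drive)

offsets = np.diff(np.unwrap(theta))
print(offsets[:5])            # approximately -lag for every pair: a travelling wave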
Affiliation(s)
- Johannes Balkenhol
- Department of Bioinformatics, Biocenter, University of Würzburg, 97074 Würzburg, Germany
- Barbara Händel
- Department of Psychology (III), University of Würzburg, 97070 Würzburg, Germany
- Department of Neurology, University Hospital Würzburg, 97080 Würzburg, Germany
- Sounak Biswas
- Department of Theoretical Physics I, University of Würzburg, 97074 Würzburg, Germany
- Johannes Grohmann
- Institute of Computer Science, Chair of Software Engineering (Computer Science II), University of Würzburg, 97074 Würzburg, Germany
- Jóakim v. Kistowski
- Institute of Computer Science, Chair of Software Engineering (Computer Science II), University of Würzburg, 97074 Würzburg, Germany
- Juan Prada
- Department of Bioinformatics, Biocenter, University of Würzburg, 97074 Würzburg, Germany
- Conrado A. Bosman
- Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, Center for Neuroscience, University of Amsterdam, 1105 BA Amsterdam, Netherlands
- Hannelore Ehrenreich
- Experimentelle Medizin, Zentralinstitut für Seelische Gesundheit, 68159 Mannheim, Germany
- Sonja M. Wojcik
- Neurosciences, Max-Planck-Institut für Multidisziplinäre Naturwissenschaften, 37075 Göttingen, Germany
- Samuel Kounev
- Institute of Computer Science, Chair of Software Engineering (Computer Science II), University of Würzburg, 97074 Würzburg, Germany
- Robert Blum
- Department of Neurology, University Hospital Würzburg, 97080 Würzburg, Germany
- Thomas Dandekar
- Department of Bioinformatics, Biocenter, University of Würzburg, 97074 Würzburg, Germany
- European Molecular Biology Laboratory (EMBL), 69012 Heidelberg, Germany
3
Piombo R, Ritarossi S, Mazzarello R. Ab Initio Study of Novel Phase-Change Heterostructures. Adv Sci (Weinh) 2024; 11:e2402375. [PMID: 38812119 PMCID: PMC11304324 DOI: 10.1002/advs.202402375]
Abstract
Neuromorphic devices constitute a novel approach to computing that takes inspiration from the brain to unify the processing and storage units. Memories based on phase-change materials (PCMs) are potential candidates for such devices due to their non-volatility and excellent scalability; however, their use is hindered by conductance variability and temporal drift in resistance. Recently, it has been shown that the use of phase-change heterostructures consisting of nanolayers of the Sb2Te3 PCM interleaved with a transition-metal dichalcogenide, acting as a confinement material, strongly mitigates these problems. In this work, superlattice heterostructures made of TiTe2 and two prototypical PCMs, GeTe and Ge2Sb2Te5, are considered. By performing ab initio molecular dynamics simulations, it is shown that it is possible to switch the PCMs without destroying the superlattice structure and without diffusion of PCM atoms across the TiTe2 nanolayers. In particular, the model containing Ge2Sb2Te5 shows weak coupling between the two materials during the switching process, which, combined with the high stability of the amorphous state of Ge2Sb2Te5, makes it a very promising candidate for neuromorphic computing applications.
Affiliation(s)
- Riccardo Piombo
- Dipartimento di Fisica, Università di Roma “La Sapienza”, 00185 Rome, Italy
- Simone Ritarossi
- Dipartimento di Fisica, Università di Roma “La Sapienza”, 00185 Rome, Italy
4
Bianchi S, Muñoz-Martin I, Covi E, Bricalli A, Piccolboni G, Regev A, Molas G, Nodin JF, Andrieu F, Ielmini D. A self-adaptive hardware with resistive switching synapses for experience-based neurocomputing. Nat Commun 2023; 14:1565. [PMID: 36944647 PMCID: PMC10030830 DOI: 10.1038/s41467-023-37097-5]
Abstract
Neurobiological systems continually interact with the surrounding environment to refine their behaviour toward the best possible reward. Achieving such learning by experience is one of the main challenges of artificial intelligence, but currently it is hindered by the lack of hardware capable of plastic adaptation. Here, we propose a bio-inspired recurrent neural network, mastered by a digital system on chip with resistive-switching synaptic arrays of memory devices, which exploits homeostatic Hebbian learning for improved efficiency. All the results are discussed experimentally and theoretically, proposing a conceptual framework for benchmarking the main outcomes in terms of accuracy and resilience. To test the proposed architecture for reinforcement learning tasks, we study the autonomous exploration of continually evolving environments and verify the results for the Mars rover navigation. We also show that, compared to conventional deep learning techniques, our in-memory hardware has the potential to achieve a significant boost in speed and power-saving.
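The homeostatic Hebbian flavour of learning described above can be illustrated with a small software sketch (Python/NumPy): Hebbian potentiation on a quantized, memristor-like weight array, followed by a scaling step that pulls each neuron's summed conductance back toward a fixed budget. This is a generic illustration under assumed constants, not the paper's on-chip implementation.

import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post, levels = 64, 16, 32                       # array size and conductance levels
w = rng.integers(1, levels, size=(n_post, n_pre)).astype(float)
target = w.sum(axis=1).mean()                            # per-neuron conductance budget

def present(w, x, lr=2.0, k=4):
    """One pattern presentation: winner-take-all response, Hebbian potentiation,
    homeostatic scaling back to the budget, and quantization to device levels."""
    a = w @ x
    winners = np.zeros(n_post)
    winners[np.argsort(a)[-k:]] = 1.0                    # k most active neurons fire (assumed WTA)
    w_new = w + lr * np.outer(winners, x)                # Hebbian term
    w_new *= target / w_new.sum(axis=1, keepdims=True)   # homeostatic synaptic scaling
    return np.clip(np.round(w_new), 0, levels - 1)       # discrete, memristor-like levels

for _ in range(200):
    x = (rng.random(n_pre) < 0.2).astype(float)          # random sparse input pattern
    w = present(w, x)
print(w.sum(axis=1))                                     # summed conductances stay near the budget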
Affiliation(s)
- S Bianchi
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano and IUNET, Milano, 20133, Italy
- Infineon Technologies, Villach, Austria
- I Muñoz-Martin
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano and IUNET, Milano, 20133, Italy
- Infineon Technologies, Villach, Austria
- E Covi
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano and IUNET, Milano, 20133, Italy
- NaMLab gGmbH, Dresden, Germany
- A Regev
- Weebit Nano, Hod Hasharon, Israel
- G Molas
- Weebit Nano, Hod Hasharon, Israel
- J F Nodin
- Univ. Grenoble Alpes, CEA, Leti, F-38000, Grenoble, France
- F Andrieu
- Univ. Grenoble Alpes, CEA, Leti, F-38000, Grenoble, France
- D Ielmini
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano and IUNET, Milano, 20133, Italy
5
Rostami A, Vogginger B, Yan Y, Mayr CG. E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Front Neurosci 2022; 16:1018006. [DOI: 10.3389/fnins.2022.1018006]
Abstract
Introduction: In recent years, the application of deep learning models at the edge has gained attention. Typically, artificial neural networks (ANNs) are trained on graphics processing units (GPUs) and optimized for efficient execution on edge devices. Training ANNs directly at the edge is the next step, with many applications such as the adaptation of models to specific situations like changes in environmental settings or optimization for individuals, e.g., optimization for speakers in speech processing. Local training can also preserve privacy. Over the last few years, many algorithms have been developed to reduce memory footprint and computation.
Methods: A specific challenge when training recurrent neural networks (RNNs) for processing sequential data is that the Back Propagation Through Time (BPTT) algorithm must store the network state at all time steps. This limitation is resolved by the biologically inspired E-prop approach for training Spiking Recurrent Neural Networks (SRNNs). We implement the E-prop algorithm on a prototype of the SpiNNaker 2 neuromorphic system. A parallelization strategy is developed to split and train networks on the ARM cores of SpiNNaker 2 to make efficient use of both memory and compute resources. We trained an SRNN from scratch on SpiNNaker 2 in real time on the Google Speech Commands dataset for keyword spotting.
Results: We achieved an accuracy of 91.12% while requiring only 680 KB of memory for training the network with 25 K weights. Compared to other spiking neural networks with equal or better accuracy, our work is significantly more memory-efficient.
Discussion: In addition, we performed memory and time profiling of the E-prop algorithm. This is used, on the one hand, to discuss whether E-prop or BPTT is better suited for training a model at the edge and, on the other hand, to explore architecture modifications to SpiNNaker 2 that would speed up online learning. Finally, energy estimations predict that the SRNN can be trained on SpiNNaker 2 with 12 times less energy than on an NVIDIA V100 GPU.
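The key idea behind E-prop, as opposed to BPTT, is that each synapse maintains a local eligibility trace that is combined online with a broadcast learning signal, so the full network history never has to be stored. The sketch below (Python/NumPy) illustrates this for a simplified leaky integrate-and-fire layer with a regression readout and symmetric feedback; it is a schematic of the algorithm family, not the SpiNNaker 2 implementation, and all constants are assumptions.

import numpy as np

rng = np.random.default_rng(1)
n_in, n_rec, n_out, T = 20, 50, 2, 100
dt, tau_m, thr, lr = 1.0, 20.0, 0.6, 1e-3
alpha = np.exp(-dt / tau_m)                       # membrane decay per step

w_in = rng.normal(0, 0.3, (n_rec, n_in))
w_out = rng.normal(0, 0.3, (n_out, n_rec))

def pseudo_derivative(v):
    # surrogate gradient of the spike nonlinearity (triangular shape, assumed)
    return np.maximum(0.0, 1.0 - np.abs((v - thr) / thr))

x = (rng.random((T, n_in)) < 0.1).astype(float)   # Poisson-like input spikes
y_target = rng.normal(0, 1, (T, n_out))           # placeholder regression target

v = np.zeros(n_rec)                               # membrane potentials
z = np.zeros(n_rec)                               # spikes at the previous step
trace_in = np.zeros((n_rec, n_in))                # filtered presynaptic activity
grad_in = np.zeros_like(w_in)

for t in range(T):
    v = alpha * v + w_in @ x[t] - z * thr         # leaky integration with reset
    z = (v > thr).astype(float)
    trace_in = alpha * trace_in + x[t][None, :]
    elig = pseudo_derivative(v)[:, None] * trace_in      # local eligibility trace
    y = w_out @ z
    learning_signal = w_out.T @ (y - y_target[t])        # broadcast error per neuron
    grad_in += learning_signal[:, None] * elig           # online gradient accumulation

w_in -= lr * grad_in                              # single online weight update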
6
Acharya J, Basu A, Legenstein R, Limbacher T, Poirazi P, Wu X. Dendritic Computing: Branching Deeper into Machine Learning. Neuroscience 2021; 489:275-289. [PMID: 34656706 DOI: 10.1016/j.neuroscience.2021.10.001]
Abstract
In this paper, we discuss the nonlinear computational power provided by dendrites in biological and artificial neurons. We start by briefly presenting biological evidence about the types of dendritic nonlinearities, the respective plasticity rules, and their effect on biological learning as assessed by computational models. Four major computational implications are identified: improved expressivity, more efficient use of resources, the use of internal learning signals, and support for continual learning. We then discuss examples of how dendritic computations have been used to solve real-world classification problems, with performance reported on well-known datasets used in machine learning. The works are categorized according to the three primary methods of plasticity used: structural plasticity, weight plasticity, or plasticity of synaptic delays. Finally, we show the recent trend of confluence between concepts of deep learning and dendritic computations and highlight some future research directions.
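A common abstraction in this literature is the "two-layer" neuron model: synapses are grouped onto dendritic branches, each branch applies its own nonlinearity, and the soma sums the branch outputs. The minimal sketch below (Python/NumPy) shows that structure; branch count, nonlinearity, and weights are illustrative assumptions rather than any specific model from the review.

import numpy as np

rng = np.random.default_rng(2)
n_branches, syn_per_branch = 8, 16
w = rng.normal(0, 1.0, (n_branches, syn_per_branch))   # synaptic weights per branch
v = rng.normal(0, 1.0, n_branches)                     # branch-to-soma couplings

def branch_nonlinearity(u):
    # sigmoidal dendritic nonlinearity (assumed functional form)
    return 1.0 / (1.0 + np.exp(-4.0 * (u - 1.0)))

def neuron_output(x):
    """x: (n_branches, syn_per_branch) presynaptic activity for one neuron."""
    branch_drive = np.sum(w * x, axis=1)                   # linear integration within each branch
    return np.dot(v, branch_nonlinearity(branch_drive))    # somatic sum of branch outputs

x = (rng.random((n_branches, syn_per_branch)) < 0.3).astype(float)
print(neuron_output(x))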
Affiliation(s)
- Arindam Basu
- Department of Electrical Engineering, City University of Hong Kong, Hong Kong
- Robert Legenstein
- Institute of Theoretical Computer Science, Graz University of Technology, Austria
- Thomas Limbacher
- Institute of Theoretical Computer Science, Graz University of Technology, Austria
- Panayiota Poirazi
- Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology-Hellas (FORTH), Greece
- Xundong Wu
- School of Computer Science, Hangzhou Dianzi University, China
7
Muñoz-Martin I, Bianchi S, Hashemkhani S, Pedretti G, Melnic O, Ielmini D. A Brain-Inspired Homeostatic Neuron Based on Phase-Change Memories for Efficient Neuromorphic Computing. Front Neurosci 2021; 15:709053. [PMID: 34489628 PMCID: PMC8417123 DOI: 10.3389/fnins.2021.709053]
Abstract
One of the main goals of neuromorphic computing is the implementation and design of systems capable of dynamic evolution with respect to their own experience. In biology, synaptic scaling is the homeostatic mechanism that keeps the frequency of neural spikes within stable boundaries for improved learning activity. To introduce such a control mechanism into a hardware spiking neural network (SNN), we present here a novel artificial neuron based on phase change memory (PCM) devices capable of internal regulation via homeostatic and plastic phenomena. We experimentally show that this mechanism increases the robustness of the system, thus optimizing multi-pattern learning under spike-timing-dependent plasticity (STDP). It also improves the continual learning capability of hybrid supervised-unsupervised convolutional neural networks (CNNs), in terms of both resilience and accuracy. Furthermore, the use of neurons capable of self-regulating their firing responsivity as a function of the PCM internal state enables the design of dynamic networks. In this scenario, we propose to use the PCM-based neurons to design bio-inspired recurrent networks for autonomous decision making in navigation tasks. The agent relies on neuronal spike-frequency adaptation (SFA) to explore the environment via penalties and rewards. Finally, we show that the conductance drift of the PCM devices, in contrast to its detrimental effect in neural network accelerators, can improve the overall energy efficiency of neuromorphic computing by implementing bio-plausible active forgetting.
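A purely software analogue of the homeostatic behaviour described above is sketched below (Python/NumPy): a leaky integrate-and-fire neuron with a fast spike-frequency-adaptation current and a slow threshold adjustment that keeps the firing rate near a target value. This is a generic illustration of synaptic-scaling-like regulation, not the PCM neuron circuit, and all constants are assumptions.

import numpy as np

rng = np.random.default_rng(3)
dt, steps = 1e-3, 20000                     # 1 ms steps, 20 s of simulated time
tau_m, tau_adapt, tau_rate = 20e-3, 200e-3, 2.0
thr, b_adapt, eta_homeo = 1.0, 0.1, 0.05
target_rate = 10.0                          # desired firing rate in Hz (assumed)

v = a = rate_est = 0.0
for _ in range(steps):
    i_in = 1.5 + 0.5 * rng.standard_normal()            # noisy input drive (assumed)
    v += dt / tau_m * (-v + i_in - a)                    # leaky integration minus adaptation
    a += dt / tau_adapt * (-a)
    spike = v > thr
    if spike:
        v = 0.0
        a += b_adapt                                     # spike-frequency adaptation
    rate_est += dt / tau_rate * (spike / dt - rate_est)  # running firing-rate estimate
    thr += dt * eta_homeo * (rate_est - target_rate)     # slow homeostatic threshold scaling

print(f"rate approx {rate_est:.1f} Hz, adapted threshold {thr:.2f}")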
Affiliation(s)
- Daniele Ielmini
- Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Politecnico di Milano, Milan, Italy
8
Gonzalez H, George R, Muzaffar S, Acevedo J, Hoppner S, Mayr C, Yoo J, Fitzek F, Elfadel I. Hardware Acceleration of EEG-Based Emotion Classification Systems: A Comprehensive Survey. IEEE Trans Biomed Circuits Syst 2021; 15:412-442. [PMID: 34125683 DOI: 10.1109/tbcas.2021.3089132]
Abstract
Recent years have witnessed a growing interest in EEG-based wearable classifiers of emotions, which could enable the real-time monitoring of patients suffering from neurological disorders such as Amyotrophic Lateral Sclerosis (ALS), Autism Spectrum Disorder (ASD), or Alzheimer's disease. The hope is that such wearable emotion classifiers would facilitate the patients' social integration and lead to improved healthcare outcomes for them and their loved ones. Yet in spite of their direct relevance to neuro-medicine, the hardware platforms for emotion classification have yet to fill some important gaps in their various approaches to emotion classification in a healthcare context. In this paper, we present the first hardware-focused critical review of EEG-based wearable classifiers of emotions and survey their implementation perspectives, their algorithmic foundations, and their feature extraction methodologies. We further provide a neuroscience-based analysis of current hardware accelerators of emotion classifiers and use it to map out several research opportunities, including multi-modal hardware platforms, accelerators with tightly coupled cores operating robustly in the near/supra-threshold region, and pre-processing libraries for universal EEG-based datasets.
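As a point of reference for the feature-extraction methodologies surveyed, the following sketch (Python/NumPy/SciPy) computes per-channel band-power features from an EEG epoch, a front end that many of the reviewed classifiers implement in hardware as filter banks or FFT pipelines. The sampling rate, band edges, and epoch length are assumed values for illustration only.

import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(4)
fs = 256                                    # sampling rate in Hz (assumed)
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_power_features(eeg):
    """eeg: (n_channels, n_samples) epoch -> flat vector of log band powers."""
    feats = []
    for ch in eeg:
        f, psd = welch(ch, fs=fs, nperseg=fs)           # Welch PSD with 1-s windows
        for lo, hi in bands.values():
            mask = (f >= lo) & (f < hi)
            feats.append(np.trapz(psd[mask], f[mask]))  # integrated band power
    return np.log(np.array(feats) + 1e-12)              # log power is common practice

epoch = rng.standard_normal((8, 4 * fs))                # 8 channels, 4 s of dummy data
print(band_power_features(epoch).shape)                 # (channels x bands,) = (32,)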
9
Covi E, Donati E, Liang X, Kappel D, Heidari H, Payvand M, Wang W. Adaptive Extreme Edge Computing for Wearable Devices. Front Neurosci 2021; 15:611300. [PMID: 34045939 PMCID: PMC8144334 DOI: 10.3389/fnins.2021.611300]
Abstract
Wearable devices are a fast-growing technology with an impact on personal healthcare for both society and the economy. Due to the widespread use of sensors in pervasive and distributed networks, power consumption, processing speed, and system adaptation are vital for future smart wearable devices. The visioning and forecasting of how to bring computation to the edge in smart sensors have already begun, with an aspiration to provide adaptive extreme edge computing. Here, we provide a holistic view of hardware and theoretical solutions toward smart wearable devices that can provide guidance to research in this pervasive computing era. We propose various solutions for biologically plausible models for continual learning in neuromorphic computing technologies for wearable sensors. To envision this concept, we provide a systematic outline of the prospective low-power and low-latency scenarios expected for wearable sensors on neuromorphic platforms. We then describe the key potential landscapes of neuromorphic processors exploiting complementary metal-oxide semiconductors (CMOS) and emerging memory technologies (e.g., memristive devices). Furthermore, we evaluate the requirements for edge computing within wearable devices in terms of footprint, power consumption, latency, and data size. We additionally investigate the challenges beyond neuromorphic computing hardware, algorithms, and devices that could impede the enhancement of adaptive edge computing in smart wearable devices.
Affiliation(s)
- Elisa Donati
- Institute of Neuroinformatics, University of Zurich, Eidgenössische Technische Hochschule Zürich (ETHZ), Zurich, Switzerland
- Xiangpeng Liang
- Microelectronics Lab, James Watt School of Engineering, University of Glasgow, Glasgow, United Kingdom
- David Kappel
- Bernstein Center for Computational Neuroscience, III Physikalisches Institut–Biophysik, Georg-August Universität, Göttingen, Germany
- Hadi Heidari
- Microelectronics Lab, James Watt School of Engineering, University of Glasgow, Glasgow, United Kingdom
- Melika Payvand
- Institute of Neuroinformatics, University of Zurich, Eidgenössische Technische Hochschule Zürich (ETHZ), Zurich, Switzerland
- Wei Wang
- The Andrew and Erna Viterbi Department of Electrical Engineering, Technion–Israel Institute of Technology, Haifa, Israel
10
George R, Chiappalone M, Giugliano M, Levi T, Vassanelli S, Partzsch J, Mayr C. Plasticity and Adaptation in Neuromorphic Biohybrid Systems. iScience 2020; 23:101589. [PMID: 33083749 PMCID: PMC7554028 DOI: 10.1016/j.isci.2020.101589]
Abstract
Neuromorphic systems take inspiration from the principles of biological information processing to form hardware platforms that enable the large-scale implementation of neural networks. Recent years have seen both advances in the theoretical aspects of spiking neural networks for use in classification and control tasks and progress in electrophysiological methods that is pushing the frontiers of intelligent neural interfacing and signal processing technologies. At the forefront of these new technologies, artificial and biological neural networks are tightly coupled, offering a novel "biohybrid" experimental framework for engineers and neurophysiologists. Indeed, biohybrid systems can constitute a new class of neuroprostheses, opening important perspectives in the treatment of neurological disorders. Moreover, the use of biologically plausible learning rules allows the formation of an overall fault-tolerant system of co-developing subsystems. To identify opportunities and challenges in neuromorphic biohybrid systems, we discuss the field from the perspectives of neurobiology, computational neuroscience, and neuromorphic engineering.
Affiliation(s)
- Richard George
- Department of Electrical Engineering and Information Technology, Technical University of Dresden, Dresden, Germany
- Michele Giugliano
- Neuroscience Area, International School of Advanced Studies, Trieste, Italy
- Timothée Levi
- Laboratoire de l’Intégration du Matériau au Système, University of Bordeaux, Bordeaux, France
- LIMMS/CNRS, Institute of Industrial Science, The University of Tokyo, Tokyo, Japan
- Stefano Vassanelli
- Department of Biomedical Sciences and Padova Neuroscience Center, University of Padova, Padova, Italy
- Johannes Partzsch
- Department of Electrical Engineering and Information Technology, Technical University of Dresden, Dresden, Germany
- Christian Mayr
- Department of Electrical Engineering and Information Technology, Technical University of Dresden, Dresden, Germany
11
Structural plasticity on an accelerated analog neuromorphic hardware system. Neural Netw 2020; 133:11-20. [PMID: 33091719 DOI: 10.1016/j.neunet.2020.09.024]
Abstract
In computational neuroscience, as well as in machine learning, neuromorphic devices promise an accelerated and scalable alternative to neural network simulations. Their neural connectivity and synaptic capacity depend on their specific design choices, but are always intrinsically limited. Here, we present a strategy to achieve structural plasticity that optimizes resource allocation under these constraints by constantly rewiring the pre- and postsynaptic partners while keeping the neuronal fan-in constant and the connectome sparse. In particular, we implemented this algorithm on the analog neuromorphic system BrainScaleS-2. It was executed on a custom embedded digital processor located on chip, accompanying the mixed-signal substrate of spiking neurons and synapse circuits. We evaluated our implementation in a simple supervised learning scenario, showing its ability to optimize the network topology with respect to the nature of its training data, as well as its overall computational efficiency.
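The rewiring strategy can be illustrated with a minimal sketch (Python/NumPy): each neuron keeps a fixed fan-in of synapses; weights evolve under some plasticity rule, and synapses whose weights fall below a threshold are pruned and reconnected to new, randomly chosen presynaptic partners, keeping the connectome sparse while its topology adapts. The pruning criterion and constants below are assumptions, not the BrainScaleS-2 implementation.

import numpy as np

rng = np.random.default_rng(5)
n_pre, n_post, fan_in = 256, 32, 16
pre_idx = np.stack([rng.choice(n_pre, fan_in, replace=False) for _ in range(n_post)])
weights = rng.uniform(0.0, 1.0, (n_post, fan_in))

def rewire(prune_threshold=0.05):
    """Replace synapses whose weight fell below the threshold with fresh partners,
    keeping the fan-in of every neuron constant."""
    for j in range(n_post):
        for s in range(fan_in):
            if weights[j, s] < prune_threshold:
                candidates = np.setdiff1d(np.arange(n_pre), pre_idx[j])  # not yet connected
                pre_idx[j, s] = rng.choice(candidates)
                weights[j, s] = prune_threshold          # re-initialize small but alive

for epoch in range(20):
    weights += 0.1 * rng.standard_normal(weights.shape)  # stand-in for a plasticity rule
    np.clip(weights, 0.0, 1.0, out=weights)
    rewire()
print(pre_idx[0])                                        # fan-in stays at 16, partners changed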
12
Liu H, Guo D, Sun F, Yang W, Furber S, Sun T. Embodied tactile perception and learning. Brain Sci Adv 2020. [DOI: 10.26599/bsa.2020.9050012]
Abstract
Various living creatures exhibit embodiment intelligence, which is reflected by a collaborative interaction of the brain, body, and environment. The actual behavior of embodiment intelligence is generated by a continuous and dynamic interaction between a subject and the environment through information perception and physical manipulation. The physical interaction between a robot and the environment is the basis for realizing embodied perception and learning. Tactile information plays a critical role in this physical interaction process. It can be used to ensure safety, stability, and compliance, and can provide unique information that is difficult to capture using other perception modalities. However, due to the limitations of existing sensors and of current perception and learning methods, the development of robotic tactile research lags significantly behind other sensing modalities, such as vision and hearing, thereby seriously restricting the development of robotic embodiment intelligence. This paper presents the current challenges related to robotic tactile embodiment intelligence and reviews the theory and methods of robotic embodied tactile intelligence. Tactile perception and learning methods for embodiment intelligence can be designed based on the development of new large-scale tactile array sensing devices, with the aim of making breakthroughs in the neuromorphic computing technology of tactile intelligence.
Affiliation(s)
- Huaping Liu
- Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
- Di Guo
- Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
- Fuchun Sun
- Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
- Wuqiang Yang
- Department of Electrical and Electronic Engineering, The University of Manchester, Manchester M13 9PL, UK
- Steve Furber
- Department of Computer Science, The University of Manchester, Manchester M13 9PL, UK
- Tengchen Sun
- Beijing Tashan Technology Co., Ltd., Beijing 102300, China