1
Wang B, Zhang X, Wang S, Lin N, Li Y, Yu Y, Zhang Y, Yang J, Wu X, He Y, Wang S, Wan T, Chen R, Li G, Deng Y, Qi X, Wang Z, Shang D. Topology optimization of random memristors for input-aware dynamic SNN. Sci Adv 2025;11:eads5340. PMID: 40238875; PMCID: PMC12002125; DOI: 10.1126/sciadv.ads5340.
Abstract
Machine learning has advanced at an unprecedented pace, exemplified by GPT-4 and Sora. However, such models cannot parallel human brains in efficiency and adaptability, owing to differences in signal representation, optimization, runtime reconfigurability, and hardware architecture. To address these challenges, we introduce pruning optimization for input-aware dynamic memristive spiking neural networks (PRIME). PRIME uses spiking neurons to emulate the brain's spiking mechanisms and optimizes the topology of random memristive SNNs inspired by structural plasticity, effectively mitigating memristor programming stochasticity. It also uses an input-aware early-stop policy to reduce latency and leverages memristive in-memory computing to mitigate the von Neumann bottleneck. Validated on a 40-nm, 256-K memristor-based macro, PRIME achieves classification accuracy and inception score comparable to software baselines, with energy efficiency improvements of 37.8× and 62.5×. In addition, it reduces computational loads by 77% and 12.5% with minimal performance degradation and demonstrates robustness to stochastic memristor noise. PRIME paves the way for brain-inspired neuromorphic computing.
Affiliation(s)
- Bo Wang
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
- ACCESS – AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
- Xinyuan Zhang
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
- ACCESS – AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
- Shaocong Wang
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
- ACCESS – AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
- Ning Lin
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
- ACCESS – AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
- Yi Li
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
- State Key Lab of Fabrication Technologies for Integrated Circuits, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, China
- Laboratory of Microelectronic Devices & Integrated Technology, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, China
- Yifei Yu
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
- ACCESS – AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
- Yue Zhang
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
- ACCESS – AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
- Jichang Yang
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
- ACCESS – AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
- Xiaoshan Wu
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
- ACCESS – AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
- Yangu He
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
- ACCESS – AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
- Songqi Wang
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
- ACCESS – AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
- Tao Wan
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
- Rui Chen
- State Key Lab of Fabrication Technologies for Integrated Circuits, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Guoqi Li
- University of Chinese Academy of Sciences, Beijing 100049, China
- Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Yue Deng
- School of Artificial Intelligence, Beihang University, Beijing 100191, China
- School of Astronautics, Beihang University, Beijing 100191, China
- Xiaojuan Qi
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
- Zhongrui Wang
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
- ACCESS – AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
- Dashan Shang
- State Key Lab of Fabrication Technologies for Integrated Circuits, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, China
- Laboratory of Microelectronic Devices & Integrated Technology, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, China
- University of Chinese Academy of Sciences, Beijing 100049, China
2
Muir DR, Sheik S. The road to commercial success for neuromorphic technologies. Nat Commun 2025;16:3586. PMID: 40234391; PMCID: PMC12000578; DOI: 10.1038/s41467-025-57352-1.
Abstract
Neuromorphic technologies adapt biological neural principles to synthesise high-efficiency computational devices, characterised by continuous real-time operation and sparse event-based communication. After several false starts, a confluence of advances now promises widespread commercial adoption. Gradient-based training of deep spiking neural networks is now an off-the-shelf technique for building general-purpose neuromorphic applications, with open-source tools underwritten by theoretical results. Analog and mixed-signal neuromorphic circuit designs are being replaced by digital equivalents in newer devices, simplifying application deployment while maintaining computational benefits. Designs for in-memory computing are also approaching commercial maturity. Solving two key problems, namely how to program general neuromorphic applications and how to deploy them at scale, clears the way to commercial success of neuromorphic processors. Ultra-low-power neuromorphic technology will find a home in battery-powered systems, local compute for internet-of-things devices, and consumer wearables. Inspiration from the uptake of tensor processors and GPUs can help the field overcome the remaining hurdles.
Affiliation(s)
- Dylan Richard Muir
- SynSense, Zürich, Switzerland
- University of Western Australia, Perth, Australia
- Sadique Sheik
- SynSense, Zürich, Switzerland
- Unique, Zürich, Switzerland
3
Zhang T, Wozniak S, Syed GS, Mannocci P, Farronato M, Ielmini D, Sebastian A, Yang Y. Emerging Materials and Computing Paradigms for Temporal Signal Analysis. Adv Mater 2025;37:e2408566. PMID: 39935172; DOI: 10.1002/adma.202408566.
Abstract
In the era of relentless data generation and dynamic information streams, the demand for efficient and robust temporal signal analysis has intensified across diverse domains such as healthcare, finance, and telecommunications. This perspective study explores the unfolding landscape of emerging materials and computing paradigms that are reshaping the way temporal signals are analyzed and interpreted. Traditional signal processing techniques often fall short when confronted with the intricacies of time-varying data, prompting the exploration of innovative approaches. The rise of emerging materials and devices empowers real-time analysis by processing temporal signals in situ, mitigating latency concerns. Through this perspective, the untapped potential of emerging materials and computing paradigms for temporal signal analysis is highlighted, offering valuable insights into both challenges and opportunities. Standing on the cusp of a new era in computing, understanding and harnessing these paradigms is pivotal for unraveling the complexities embedded within the temporal dimensions of data, propelling signal analysis into realms previously deemed inaccessible.
Affiliation(s)
- Teng Zhang
- Beijing Advanced Innovation Center for Integrated Circuits, School of Integrated Circuits, Peking University, Beijing, 100871, China
- Piergiulio Mannocci
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano and IU.NET, Piazza Leonardo da Vinci 32, Milano, 20133, Italy
- Matteo Farronato
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano and IU.NET, Piazza Leonardo da Vinci 32, Milano, 20133, Italy
- Daniele Ielmini
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano and IU.NET, Piazza Leonardo da Vinci 32, Milano, 20133, Italy
- Abu Sebastian
- IBM Research - Europe, Rüschlikon, 8803, Switzerland
- Yuchao Yang
- Beijing Advanced Innovation Center for Integrated Circuits, School of Integrated Circuits, Peking University, Beijing, 100871, China
- Guangdong Provincial Key Laboratory of In-Memory Computing Chips, School of Electronic and Computer Engineering, Peking University, Shenzhen, 518055, China
- Institute for Artificial Intelligence, Frontiers Science Center for Nano-optoelectronics, Peking University, Beijing, 100871, China
4
Arnold E, Spilger P, Straub JV, Müller E, Dold D, Meoni G, Schemmel J. Scalable network emulation on analog neuromorphic hardware. Front Neurosci 2025;18:1523331. PMID: 39975540; PMCID: PMC11835975; DOI: 10.3389/fnins.2024.1523331.
Abstract
We present a novel software feature for the BrainScaleS-2 accelerated neuromorphic platform that facilitates the partitioned emulation of large-scale spiking neural networks. This approach is well suited for deep spiking neural networks and allows for sequential model emulation on undersized neuromorphic resources if the largest recurrent subnetwork and the required neuron fan-in fit on the substrate. We demonstrate the training of two deep spiking neural network models-using the MNIST and EuroSAT datasets-that exceed the physical size constraints of a single-chip BrainScaleS-2 system. The ability to emulate and train networks larger than the substrate provides a pathway for accurate performance evaluation in planned or scaled systems, ultimately advancing the development and understanding of large-scale models and neuromorphic computing architectures.
Affiliation(s)
- Elias Arnold
- European Institute for Neuromorphic Computing, Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany
- Philipp Spilger
- European Institute for Neuromorphic Computing, Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany
- Jan V. Straub
- European Institute for Neuromorphic Computing, Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany
- Eric Müller
- European Institute for Neuromorphic Computing, Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany
- Dominik Dold
- Advanced Concepts Team, European Space Research and Technology Centre, European Space Agency, Noordwijk, Netherlands
- Gabriele Meoni
- Faculty of Aerospace Engineering, Delft University of Technology, Delft, Netherlands
- Johannes Schemmel
- European Institute for Neuromorphic Computing, Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany
5
Ma SY, Wang T, Laydevant J, Wright LG, McMahon PL. Quantum-limited stochastic optical neural networks operating at a few quanta per activation. Nat Commun 2025;16:359. PMID: 39753530; PMCID: PMC11698857; DOI: 10.1038/s41467-024-55220-y.
Abstract
Energy efficiency in computation is ultimately limited by noise, with quantum limits setting the fundamental noise floor. Analog physical neural networks hold promise for improved energy efficiency compared to digital electronic neural networks. However, they are typically operated in a relatively high-power regime so that the signal-to-noise ratio (SNR) is large (>10), and the noise can be treated as a perturbation. We study optical neural networks where all layers except the last are operated in the limit that each neuron can be activated by just a single photon, and as a result the noise on neuron activations is no longer merely perturbative. We show that by using a physics-based probabilistic model of the neuron activations in training, it is possible to perform accurate machine-learning inference in spite of the extremely high shot noise (SNR ~ 1). We experimentally demonstrated MNIST handwritten-digit classification with a test accuracy of 98% using an optical neural network with a hidden layer operating in the single-photon regime; the optical energy used to perform the classification corresponds to just 0.038 photons per multiply-accumulate (MAC) operation. Our physics-aware stochastic training approach might also prove useful with non-optical ultra-low-power hardware.
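The single-photon operating regime described in this abstract can be illustrated with a toy Poisson-noise model. This is an illustrative sketch only, not the authors' experimental setup: the layer shape, the ReLU-style nonnegativity, and the mean-count scaling rule are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def photon_limited_layer(x, W, photons_per_neuron=1.0):
    """Linear layer whose outputs are detected as Poisson photon counts.

    The noise-free activation z = max(W @ x, 0) is scaled so that the
    mean detected count per neuron equals `photons_per_neuron`; the
    detector then reports a Poisson sample. In the single-photon regime
    the shot noise is of the same order as the signal (SNR ~ 1), which
    is the regime the paper trains through.
    """
    z = np.maximum(W @ x, 0.0)                 # nonnegative optical intensity
    scale = photons_per_neuron / (z.mean() + 1e-12)
    counts = rng.poisson(z * scale)            # quantum-limited measurement
    return counts

x = rng.random(64)
W = rng.random((32, 64))
noisy = photon_limited_layer(x, W, photons_per_neuron=1.0)
```

Averaging many such stochastic forward passes recovers the deterministic activations, which is why a probabilistic model of the counts can be used during training.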
Affiliation(s)
- Shi-Yuan Ma
- School of Applied and Engineering Physics, Cornell University, Ithaca, NY, USA
- Tianyu Wang
- School of Applied and Engineering Physics, Cornell University, Ithaca, NY, USA
- Jérémie Laydevant
- School of Applied and Engineering Physics, Cornell University, Ithaca, NY, USA
- USRA Research Institute for Advanced Computer Science, Mountain View, CA, USA
- Logan G Wright
- School of Applied and Engineering Physics, Cornell University, Ithaca, NY, USA
- NTT Physics and Informatics Laboratories, NTT Research, Inc., Sunnyvale, CA, USA
- Department of Applied Physics, Yale University, New Haven, CT, USA
- Peter L McMahon
- School of Applied and Engineering Physics, Cornell University, Ithaca, NY, USA
- Kavli Institute at Cornell for Nanoscale Science, Cornell University, Ithaca, NY, USA
6
Lee YJ, Choi ES, Baek JH, Yang J, Kim J, Kim JY, Kim B, Shin D, Park SH, Im IH, Lee H, Kim Y, Choi D, Lee S, Jang HW. Memristive Artificial Synapses Based on Brownmillerite for Endurable Weight Modulation. Small 2025;21:e2405749. PMID: 39468890; DOI: 10.1002/smll.202405749.
Abstract
Exploring a computing paradigm that blends memory and computation functions is essential for artificial synapses. Although memristors are widely studied for artificial synapses owing to their energy-efficient structures, the random filament conduction in conventional memristors makes them less suited to endurable long-term synaptic modulation. Herein, the topotactic phase transition (TPT) in brownmillerite-phased (110)-SrCoO2.5 (SCO2.5) is harnessed to enhance the reversibility of oxygen ion migration through 1D oxygen vacancy channels. By employing a heteroepitaxial two-terminal configuration of Au/SCO2.5/SrRuO3/SrTiO3, brownmillerite SCO2.5-based artificial synapses are demonstrated. The TPT behavior is corroborated by comparing the oxygen migration energies from density-functional theory calculations with experimental results, and by monitoring the voltage pulse-induced peak shift in the Raman spectra of SCO2.5. Driven by the voltage pulse-induced TPT, the devices reliably exhibit linear, symmetric, and endurable long-term potentiation and depression. Notably, the durability of the TPT-based weight-control mechanism is demonstrated by consistent, noise-free weight updates over 32,000 iterations across 640 cycles. Furthermore, learning with deep neural networks and convolutional neural networks on various image datasets yields very high recognition accuracy. The work offers valuable insights into designing memristive synapses that enable reliable weight updates in neural networks.
Affiliation(s)
- Yoon Jung Lee
- Department of Material Science and Engineering, Research Institute of Advanced Materials, Seoul National University, Seoul, 08826, Republic of Korea
- Department of Chemistry, Northwestern University, Evanston, IL, 60208, USA
- Eun Seok Choi
- School of Materials Science and Engineering, Gwangju Institute of Science and Technology, Gwangju, 61005, Republic of Korea
- Ji Hyun Baek
- Department of Material Science and Engineering, Research Institute of Advanced Materials, Seoul National University, Seoul, 08826, Republic of Korea
- Jiwoong Yang
- School of Materials Science and Engineering, Gwangju Institute of Science and Technology, Gwangju, 61005, Republic of Korea
- Jaehyun Kim
- Department of Material Science and Engineering, Research Institute of Advanced Materials, Seoul National University, Seoul, 08826, Republic of Korea
- Jae Young Kim
- Department of Material Science and Engineering, Research Institute of Advanced Materials, Seoul National University, Seoul, 08826, Republic of Korea
- Byungsoo Kim
- Department of Material Science and Engineering, Research Institute of Advanced Materials, Seoul National University, Seoul, 08826, Republic of Korea
- Donghoon Shin
- Department of Chemistry, Northwestern University, Evanston, IL, 60208, USA
- Sung Hyuk Park
- Department of Material Science and Engineering, Research Institute of Advanced Materials, Seoul National University, Seoul, 08826, Republic of Korea
- In Hyuk Im
- Department of Material Science and Engineering, Research Institute of Advanced Materials, Seoul National University, Seoul, 08826, Republic of Korea
- Hyeonji Lee
- Department of Material Science and Engineering, Research Institute of Advanced Materials, Seoul National University, Seoul, 08826, Republic of Korea
- Youngmin Kim
- Department of Material Science and Engineering, Research Institute of Advanced Materials, Seoul National University, Seoul, 08826, Republic of Korea
- Deokjae Choi
- Department of Chemistry, Northwestern University, Evanston, IL, 60208, USA
- Sanghan Lee
- School of Materials Science and Engineering, Gwangju Institute of Science and Technology, Gwangju, 61005, Republic of Korea
- Ho Won Jang
- Department of Material Science and Engineering, Research Institute of Advanced Materials, Seoul National University, Seoul, 08826, Republic of Korea
- Advanced Institute of Convergence Technology, Seoul National University, Suwon, 16229, Republic of Korea
7
Senn W, Dold D, Kungl AF, Ellenberger B, Jordan J, Bengio Y, Sacramento J, Petrovici MA. A neuronal least-action principle for real-time learning in cortical circuits. eLife 2024;12:RP89674. PMID: 39704647; DOI: 10.7554/eLife.89674.
Abstract
One of the most fundamental laws of physics is the principle of least action. Motivated by its predictive power, we introduce a neuronal least-action principle for cortical processing of sensory streams to produce appropriate behavioral outputs in real time. The principle postulates that the voltage dynamics of cortical pyramidal neurons prospectively minimizes the local somato-dendritic mismatch error within individual neurons. For output neurons, the principle implies minimizing an instantaneous behavioral error. For deep network neurons, it implies the prospective firing to overcome integration delays and correct for possible output errors right in time. The neuron-specific errors are extracted in the apical dendrites of pyramidal neurons through a cortical microcircuit that tries to explain away the feedback from the periphery, and correct the trajectory on the fly. Any motor output is in a moving equilibrium with the sensory input and the motor feedback during the ongoing sensory-motor transform. Online synaptic plasticity reduces the somatodendritic mismatch error within each cortical neuron and performs gradient descent on the output cost at any moment in time. The neuronal least-action principle offers an axiomatic framework to derive local neuronal and synaptic laws for global real-time computation and learning in the brain.
Affiliation(s)
- Walter Senn
- Department of Physiology, University of Bern, Bern, Switzerland
- Dominik Dold
- Department of Physiology, University of Bern, Bern, Switzerland
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- European Space Research and Technology Centre, European Space Agency, Noordwijk, Netherlands
- Akos F Kungl
- Department of Physiology, University of Bern, Bern, Switzerland
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Benjamin Ellenberger
- Department of Physiology, University of Bern, Bern, Switzerland
- Insel Data Science Center, University Hospital Bern, Bern, Switzerland
- Jakob Jordan
- Department of Physiology, University of Bern, Bern, Switzerland
- Electrical Engineering, Yale University, New Haven, United States
- João Sacramento
- Department of Computer Science, ETH Zurich, Zurich, Switzerland
- Mihai A Petrovici
- Department of Physiology, University of Bern, Bern, Switzerland
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
8
Ravichandran N, Lansner A, Herman P. Spiking representation learning for associative memories. Front Neurosci 2024;18:1439414. PMID: 39371606; PMCID: PMC11450452; DOI: 10.3389/fnins.2024.1439414.
Abstract
Networks of interconnected neurons communicating through spiking signals offer the bedrock of neural computations. Our brain's spiking neural networks have the computational capacity to achieve complex pattern recognition and cognitive functions effortlessly. However, solving real-world problems with artificial spiking neural networks (SNNs) has proved difficult for a variety of reasons. Crucially, scaling SNNs to large networks and to large-scale real-world datasets has been challenging, especially compared with their non-spiking deep learning counterparts. The critical capability SNNs need is to learn distributed representations from data and use these representations for perceptual, cognitive, and memory operations. In this work, we introduce a novel SNN that performs unsupervised representation learning and associative memory operations, leveraging Hebbian synaptic and activity-dependent structural plasticity coupled with neuron units modelled as Poisson spike generators with sparse firing (~1 Hz mean and ~100 Hz maximum firing rate). Crucially, the architecture of our model derives from the neocortical columnar organization and combines feedforward projections for learning hidden representations with recurrent projections for forming associative memories. We evaluated the model on properties relevant to attractor-based associative memories, such as pattern completion, perceptual rivalry, distortion resistance, and prototype extraction.
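The sparse Poisson firing regime this model assumes (~1 Hz mean, ~100 Hz maximum) can be sketched in a few lines. This is an illustrative sketch, not the authors' code; the 1-ms bin width, the hard rate clipping, and the Bernoulli-per-bin approximation are assumptions of the example.

```python
import numpy as np

rng = np.random.default_rng(42)

def poisson_spikes(rates_hz, duration_s, dt=0.001):
    """Generate spike trains for units modelled as Poisson generators.

    Each unit fires independently in every time bin with probability
    rate * dt (a valid approximation when rate * dt << 1). Rates are
    clipped at 100 Hz to match the sparse-firing regime described in
    the abstract.
    """
    rates_hz = np.clip(np.asarray(rates_hz, dtype=float), 0.0, 100.0)
    n_bins = int(round(duration_s / dt))
    p = rates_hz[:, None] * dt               # per-bin firing probability
    return rng.random((len(rates_hz), n_bins)) < p

spikes = poisson_spikes([1.0, 10.0, 100.0], duration_s=10.0)
mean_rates = spikes.mean(axis=1) / 0.001     # empirical firing rates in Hz
```

The empirical rates recovered from the boolean spike raster track the requested rates, which is the property such generators are used for.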
Affiliation(s)
- Naresh Ravichandran
- Computational Cognitive Brain Science Group, Department of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Anders Lansner
- Computational Cognitive Brain Science Group, Department of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Department of Mathematics, Stockholm University, Stockholm, Sweden
- Pawel Herman
- Computational Cognitive Brain Science Group, Department of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Digital Futures, KTH Royal Institute of Technology, Stockholm, Sweden
- Swedish e-Science Research Centre (SeRC), Stockholm, Sweden
9
Chen H, Hong Q, Wang Z, Wang C, Zeng X, Zhang J. Memristive Circuit Implementation of Caenorhabditis Elegans Mechanism for Neuromorphic Computing. IEEE Trans Neural Netw Learn Syst 2024;35:12015-12026. PMID: 37028291; DOI: 10.1109/TNNLS.2023.3250655.
Abstract
To overcome the energy-efficiency bottleneck of the von Neumann architecture and the scaling limit of silicon transistors, an emerging but promising solution is neuromorphic computing, a new computing paradigm inspired by how biological neural networks handle massive amounts of information in a parallel and efficient way. Recently, there has been a surge of interest in the nematode worm Caenorhabditis elegans (C. elegans), an ideal model organism for probing the mechanisms of biological neural networks. In this article, we propose a neuron model for C. elegans with leaky integrate-and-fire (LIF) dynamics and an adjustable integration time. We utilize these neurons to build the C. elegans neural network according to its neurophysiology, which comprises: 1) sensory modules; 2) interneuron modules; and 3) motoneuron modules. Leveraging these block designs, we develop a serpentine robot system that mimics the locomotion behavior of C. elegans upon external stimuli. Moreover, the experimental results for C. elegans neurons presented in this article reveal the robustness (1% error w.r.t. 10% random noise) and flexibility of our design in terms of parameter setting. The work paves the way for future intelligent systems that mimic the C. elegans neural system.
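The LIF dynamics with adjustable integration time mentioned in this abstract can be sketched as a minimal discrete-time simulation. This is an illustration of the generic LIF model, not the authors' memristive circuit; the parameter values, units, and reset rule are assumptions of the example.

```python
import numpy as np

def lif_neuron(current, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """Leaky integrate-and-fire neuron: dv/dt = (-v + I) / tau.

    A spike is emitted and the membrane potential reset whenever v
    crosses v_th. `tau` plays the role of the adjustable integration
    time: a larger tau integrates input over a longer window before
    the threshold is reached.
    """
    v = v_reset
    spikes = []
    for i_t in current:
        v += dt * (-v + i_t) / tau     # leaky integration step
        if v >= v_th:                  # threshold crossing
            spikes.append(1)
            v = v_reset                # reset after spike
        else:
            spikes.append(0)
    return np.array(spikes)

out = lif_neuron(np.full(200, 2.0), tau=20.0)   # constant suprathreshold drive
```

Under constant suprathreshold drive the neuron fires periodically, and increasing `tau` lowers the firing rate, which is the knob the adjustable integration time provides.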
10
Chakraborty S, Mishra J, Roy A, Niharika, Manna S, Baral T, Nandi P, Patra S, Patra SK. Liquid-liquid phase separation in subcellular assemblages and signaling pathways: Chromatin modifications induced gene regulation for cellular physiology and functions including carcinogenesis. Biochimie 2024;223:74-97. PMID: 38723938; DOI: 10.1016/j.biochi.2024.05.007.
Abstract
Liquid-liquid phase separation (LLPS) underlies many biochemical processes, including hydrogel formation, the integrity of macromolecular assemblages, and the existence of membraneless organelles such as the ribosome, nucleolus, nuclear speckles, paraspeckles, promyelocytic leukemia (PML) bodies, and Cajal bodies, all of which play crucial roles in cellular physiology, and supporting evidence continues to accumulate. Phase separation is also well documented in the generation of plasma membrane subdomains and in the interplay between membranous and membraneless organelles. Intrinsically disordered regions (IDRs) of biopolymers/proteins are the most critical sticky regions that drive the formation of such condensates. Remarkably, phase-separated condensates are also involved in epigenetic regulation of gene expression, chromatin remodeling, and heterochromatinization. Epigenetic marks on DNA and histones cooperate with RNA-binding proteins through their IDRs to trigger LLPS and facilitate transcription. How phase separation coalesces mutant oncoproteins, orchestrates tumor suppressor gene expression, and facilitates cancer-associated signaling pathways is being unravelled. That autophagosome formation and DYRK3-mediated cancer stem cell modification also depend on phase separation has been deciphered in part. In view of this, and to provide insight into membraneless organelle assembly, gene activation, enzyme-catalyzed biological reactions, and the downstream physiological functions, this article summarizes how these events are facilitated by LLPS, how LLPS shapes epigenetic modulation of gene expression in this scenario, and how these processes go awry in cancer progression.
Affiliation(s)
- Subhajit Chakraborty
- Epigenetics and Cancer Research Laboratory, Biochemistry and Molecular Biology Group, Department of Life Science, National Institute of Technology, Rourkela, India
- Jagdish Mishra
- Epigenetics and Cancer Research Laboratory, Biochemistry and Molecular Biology Group, Department of Life Science, National Institute of Technology, Rourkela, India
- Ankan Roy
- Epigenetics and Cancer Research Laboratory, Biochemistry and Molecular Biology Group, Department of Life Science, National Institute of Technology, Rourkela, India
- Niharika
- Epigenetics and Cancer Research Laboratory, Biochemistry and Molecular Biology Group, Department of Life Science, National Institute of Technology, Rourkela, India
- Soumen Manna
- Epigenetics and Cancer Research Laboratory, Biochemistry and Molecular Biology Group, Department of Life Science, National Institute of Technology, Rourkela, India
- Tirthankar Baral
- Epigenetics and Cancer Research Laboratory, Biochemistry and Molecular Biology Group, Department of Life Science, National Institute of Technology, Rourkela, India
- Piyasa Nandi
- Epigenetics and Cancer Research Laboratory, Biochemistry and Molecular Biology Group, Department of Life Science, National Institute of Technology, Rourkela, India
- Subhajit Patra
- Department of Chemical Engineering, Maulana Azad National Institute of Technology, Bhopal, India
- Samir Kumar Patra
- Epigenetics and Cancer Research Laboratory, Biochemistry and Molecular Biology Group, Department of Life Science, National Institute of Technology, Rourkela, India
11
|
Wang X, Li H. Reservoir computing with a random memristor crossbar array. NANOTECHNOLOGY 2024; 35:415205. [PMID: 38991518 DOI: 10.1088/1361-6528/ad61ee] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/26/2024] [Accepted: 07/11/2024] [Indexed: 07/13/2024]
Abstract
Physical implementations of reservoir computing (RC) based on emerging memristors have become promising candidates for unconventional computing paradigms. Traditionally, sequential approaches that time-multiplex volatile memristors have been prevalent because of their low hardware overhead. However, they suffer from speed degradation and fall short of capturing the spatial relationships among the time-domain inputs. Here, we explore a new avenue for RC using memristor crossbar arrays with device-to-device variations, which serve as physical random weight matrices of the reservoir layers, enabling faster computation thanks to the parallelism of matrix-vector multiplication, the most intensive operation in RC. To achieve this new RC architecture, ultralow-current, self-selective memristors are fabricated and integrated without the need for transistors, showing greater potential for high scalability and three-dimensional integrability than previous realizations. The information-processing ability of our RC system is demonstrated in tasks of recognizing digit images and waveforms. This work indicates that the 'nonidealities' of emerging memristor devices and circuits are a useful source of inspiration for new computing paradigms.
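The abstract above rests on a simple idea: the reservoir's weights are never trained, only the linear readout is. A minimal NumPy sketch of that idea follows; it is illustrative only, drawing the "random" weights from a Gaussian where the paper would obtain them from device-to-device conductance variations in the crossbar, and all names and values here are assumptions for the sketch, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir weights: in the paper's setting these would come from
# memristor conductance variations; here we simply draw them at random.
N_IN, N_RES = 8, 100
W_in = rng.normal(0.0, 0.5, (N_RES, N_IN))
W_res = rng.normal(0.0, 1.0, (N_RES, N_RES))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # spectral radius < 1

def run_reservoir(inputs):
    """Drive the reservoir with a (T, N_IN) sequence; return (T, N_RES) states."""
    x = np.zeros(N_RES)
    states = []
    for u in inputs:
        # The matrix-vector products below are the intensive operations that a
        # crossbar array would perform in parallel in the analog domain.
        x = np.tanh(W_in @ u + W_res @ x)
        states.append(x)
    return np.array(states)

# Only the linear readout is trained (ridge regression); the reservoir is fixed.
T = 200
U = rng.normal(size=(T, N_IN))
y_target = np.sin(np.arange(T) / 10.0)  # toy target waveform
S = run_reservoir(U)
ridge = 1e-2
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(N_RES), S.T @ y_target)
y_pred = S @ W_out
```

The design point the paper exploits is visible here: replacing the `@` products with a physical crossbar removes the dominant computational cost, while training touches only `W_out`.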
Affiliation(s)
- Xinxin Wang
- Department of Precision Instrument, Center for Brain Inspired Computing Research, Tsinghua University, Beijing 100084, People's Republic of China
- Huanglong Li
- Department of Precision Instrument, Center for Brain Inspired Computing Research, Tsinghua University, Beijing 100084, People's Republic of China
- Chinese Institute for Brain Research, Beijing 102206, People's Republic of China
|
12
|
Tenzin S, Rassau A, Chai D. Application of Event Cameras and Neuromorphic Computing to VSLAM: A Survey. Biomimetics (Basel) 2024; 9:444. [PMID: 39056885 PMCID: PMC11274992 DOI: 10.3390/biomimetics9070444] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2024] [Revised: 07/15/2024] [Accepted: 07/18/2024] [Indexed: 07/28/2024] Open
Abstract
Simultaneous Localization and Mapping (SLAM) is a crucial function for most autonomous systems, allowing them to both navigate through and create maps of unfamiliar surroundings. Traditional Visual SLAM, also commonly known as VSLAM, relies on frame-based cameras and structured processing pipelines, which face challenges in dynamic or low-light environments. However, recent advancements in event camera technology and neuromorphic processing offer promising opportunities to overcome these limitations. Event cameras, inspired by biological vision systems, capture scenes asynchronously, consuming minimal power while offering higher temporal resolution. Neuromorphic processors, which are designed to mimic the parallel processing capabilities of the human brain, offer efficient computation for real-time processing of event-based data streams. This paper provides a comprehensive overview of recent research efforts to integrate event cameras and neuromorphic processors into VSLAM systems. It discusses the principles behind event cameras and neuromorphic processors, highlighting their advantages over traditional sensing and processing methods. Furthermore, an in-depth survey of state-of-the-art approaches to event-based SLAM was conducted, including feature extraction, motion estimation, and map reconstruction techniques. Additionally, the integration of event cameras with neuromorphic processors was explored, focusing on their synergistic benefits in terms of energy efficiency, robustness, and real-time performance. The paper also discusses the challenges and open research questions in this emerging field, such as sensor calibration, data fusion, and algorithmic development. Finally, the potential applications and future directions for event-based SLAM systems are outlined, ranging from robotics and autonomous vehicles to augmented reality.
Affiliation(s)
- Alexander Rassau
- School of Engineering, Edith Cowan University, Perth, WA 6027, Australia; (S.T.); (D.C.)
|
13
|
Lu W, Zeng L, Wang J, Xiang S, Qi Y, Zheng Q, Xu N, Feng J. Imitating and exploring the human brain's resting and task-performing states via brain computing: scaling and architecture. Natl Sci Rev 2024; 11:nwae080. [PMID: 38803564 PMCID: PMC11129584 DOI: 10.1093/nsr/nwae080] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2023] [Revised: 12/19/2023] [Accepted: 01/31/2024] [Indexed: 05/29/2024] Open
Abstract
A computational human brain model with a voxel-wise assimilation method was established based on individual structural and functional imaging data. We found that the more similar the brain model is to its biological counterpart in both scale and architecture, the greater the similarity, by quantitative metrics, between the assimilated model and the biological brain, both in resting states and during tasks. The hypothesis that resting-state activity reflects internal body states was validated by the interoceptive circuit's capability to enhance the similarity between the simulation model and the biological brain. We identified that removing connections from the primary visual cortex (V1) to downstream visual pathways significantly decreased the similarity at the hippocampus between the model and its biological counterpart, despite a slight influence on the whole brain. In conclusion, the model and methodology present a solid quantitative framework for a digital twin brain for discovering the relationship between brain architecture and functions, and for digitally trying and testing diverse cognitive, medical and lesioning approaches that would otherwise be infeasible in real subjects.
Affiliation(s)
- Wenlian Lu
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China
- Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence (Fudan University), Ministry of Education, Fudan University, Shanghai 200433, China
- Shanghai Center for Mathematical Sciences, Fudan University, Shanghai 200433, China
- Longbin Zeng
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China
- Jiexiang Wang
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China
- Shitong Xiang
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China
- Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence (Fudan University), Ministry of Education, Fudan University, Shanghai 200433, China
- Yang Qi
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China
- Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence (Fudan University), Ministry of Education, Fudan University, Shanghai 200433, China
- Qibao Zheng
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China
- Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence (Fudan University), Ministry of Education, Fudan University, Shanghai 200433, China
- Ningsheng Xu
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China
- Jianfeng Feng
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China
- Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence (Fudan University), Ministry of Education, Fudan University, Shanghai 200433, China
- Shanghai Center for Mathematical Sciences, Fudan University, Shanghai 200433, China
- Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK
- Zhangjiang Fudan International Innovation Center, Fudan University, Shanghai 200433, China
|
14
|
Wang Y, Wang Y, Zhang X, Du J, Zhang T, Xu B. Brain topology improved spiking neural network for efficient reinforcement learning of continuous control. Front Neurosci 2024; 18:1325062. [PMID: 38694900 PMCID: PMC11062182 DOI: 10.3389/fnins.2024.1325062] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2023] [Accepted: 03/27/2024] [Indexed: 05/04/2024] Open
Abstract
Brain topology strongly reflects the complex cognitive functions of the biological brain after millions of years of evolution. Learning from these biological topologies is a smarter and easier way to achieve brain-like intelligence with features of efficiency, robustness, and flexibility. Here we propose a brain topology-improved spiking neural network (BT-SNN) for efficient reinforcement learning. First, hundreds of biological topologies are generated and selected as subsets of the Allen mouse brain topology with the help of the Tanimoto hierarchical clustering algorithm, which has been widely used in analyzing key features of the brain connectome. Second, a few biological constraints are used to filter out three key topology candidates, including but not limited to the proportion of node functions (e.g., sensation, memory, and motor types) and network sparsity. Third, the network topology is integrated with hybrid-numerical-solver-improved leaky integrate-and-fire neurons. Fourth, the algorithm is tuned with an evolutionary algorithm named adaptive random search, instead of backpropagation, to guide synaptic modifications without affecting the raw key features of the topology. Fifth, under the test of four animal-survival-like RL tasks (i.e., dynamic control in MuJoCo), the BT-SNN achieves higher scores than not only a counterpart SNN using a random topology but also some classical ANNs (i.e., long short-term memory and multi-layer perceptrons). This result indicates that the research effort of incorporating biological topology and evolutionary learning rules has much in store for the future.
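The leaky integrate-and-fire (LIF) neuron named in the abstract above is the workhorse unit of most SNN work in this list. A minimal Euler-step sketch of its dynamics follows; all parameter values (time constant, threshold, input current) are illustrative assumptions, not the BT-SNN's hybrid-solver variant.

```python
def lif_step(v, i_in, dt=1.0, tau=20.0, v_rest=0.0, v_th=1.0, v_reset=0.0):
    """One Euler step of a leaky integrate-and-fire neuron.

    The membrane potential leaks toward v_rest, is driven by the input
    current, and resets after crossing the threshold.
    Returns (new membrane potential, whether a spike was emitted).
    """
    v = v + dt / tau * (v_rest - v + i_in)
    if v >= v_th:
        return v_reset, True
    return v, False

# Drive a single neuron with a constant suprathreshold current: it charges up,
# fires, resets, and repeats, producing a regular spike train.
v, spikes = 0.0, 0
for _ in range(200):
    v, fired = lif_step(v, i_in=1.5)
    spikes += fired
```

With a constant drive of 1.5 against a threshold of 1.0, the neuron settles into periodic firing; lowering the drive below threshold silences it, which is the basic rate-coding behavior SNN layers build on.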
Affiliation(s)
- Yongjian Wang
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Yansong Wang
- Institute of Neuroscience, State Key Laboratory of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing, China
- Xinhe Zhang
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Jiulin Du
- Institute of Neuroscience, State Key Laboratory of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- School of Life Science and Technology, ShanghaiTech University, Shanghai, China
- Tielin Zhang
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Institute of Neuroscience, State Key Laboratory of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- Bo Xu
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Institute of Neuroscience, State Key Laboratory of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing, China
|
15
|
Donati E, Valle G. Neuromorphic hardware for somatosensory neuroprostheses. Nat Commun 2024; 15:556. [PMID: 38228580 PMCID: PMC10791662 DOI: 10.1038/s41467-024-44723-3] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2022] [Accepted: 01/03/2024] [Indexed: 01/18/2024] Open
Abstract
In individuals with sensory-motor impairments, missing limb functions can be restored using neuroprosthetic devices that directly interface with the nervous system. However, restoring the natural tactile experience through electrical neural stimulation requires complex encoding strategies, which are presently limited by bandwidth constraints in effectively conveying or restoring tactile sensations. Neuromorphic technology, which mimics the natural behavior of neurons and synapses, holds promise for replicating the encoding of natural touch, potentially informing neurostimulation design. In this perspective, we propose that incorporating neuromorphic technologies into neuroprostheses could be an effective approach for developing more natural human-machine interfaces, potentially leading to advancements in device performance, acceptability, and embeddability. We also highlight ongoing challenges and the actions required to facilitate the future integration of these advanced technologies.
Affiliation(s)
- Elisa Donati
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Giacomo Valle
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL, USA
|
16
|
Zheng H, Zheng Z, Hu R, Xiao B, Wu Y, Yu F, Liu X, Li G, Deng L. Temporal dendritic heterogeneity incorporated with spiking neural networks for learning multi-timescale dynamics. Nat Commun 2024; 15:277. [PMID: 38177124 PMCID: PMC10766638 DOI: 10.1038/s41467-023-44614-z] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2023] [Accepted: 12/21/2023] [Indexed: 01/06/2024] Open
Abstract
It is widely believed that brain-inspired spiking neural networks are capable of processing temporal information owing to their dynamic attributes. However, which mechanisms contribute to this learning ability, and how to exploit the rich dynamic properties of spiking neural networks to satisfactorily solve complex temporal computing tasks in practice, remain to be explored. In this article, we identify the importance of capturing multi-timescale components and, on this basis, propose a multi-compartment spiking neural model with temporal dendritic heterogeneity. The model enables multi-timescale dynamics by automatically learning heterogeneous timing factors on different dendritic branches. Two breakthroughs are made through extensive experiments: the working mechanism of the proposed model is revealed via an elaborated temporal spiking XOR problem that analyzes temporal feature integration at different levels; and comprehensive performance benefits of the model over ordinary spiking neural networks are achieved on several temporal computing benchmarks for speech recognition, visual recognition, electroencephalogram signal recognition, and robot place recognition, showing the best-reported accuracy and model compactness, promising robustness and generalization, and high execution efficiency on neuromorphic hardware. This work moves neuromorphic computing a significant step toward real-world applications by appropriately exploiting biological observations.
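The core mechanism in the abstract above, heterogeneous timing factors on different dendritic branches, can be sketched with a few leaky accumulators that share an input but decay at different rates. The time constants and branch weights below are illustrative placeholders (the paper learns them), and the somatic combination is simplified to a weighted sum.

```python
import numpy as np

# Heterogeneous timing factors: each dendritic branch leaks at its own rate,
# so the branches retain the same input over different timescales.
taus = np.array([2.0, 10.0, 50.0])    # per-branch time constants (assumed)
decays = np.exp(-1.0 / taus)          # per-step decay factors
w_branch = np.array([0.5, 0.3, 0.2])  # branch-to-soma coupling (assumed)

def run(inputs):
    """Integrate a shared scalar input on three branches with different
    decays, combining them into a somatic drive at every step."""
    d = np.zeros(3)
    soma_trace = []
    for x in inputs:
        d = decays * d + x            # branch-local leaky integration
        soma_trace.append(w_branch @ d)
    return np.array(soma_trace)

# A single pulse at t=0: fast branches forget it within a few steps,
# while the slow branch keeps a trace of it for tens of steps.
trace = run([1.0] + [0.0] * 99)
```

The multi-timescale memory is the point: after the pulse, the somatic drive decays as a mixture of exponentials rather than a single one, which is what lets such models integrate features separated by very different delays.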
Affiliation(s)
- Hanle Zheng
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China
- Zhong Zheng
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China
- Rui Hu
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China
- Bo Xiao
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China
- Yujie Wu
- Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Fangwen Yu
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China
- Xue Liu
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China
- Guoqi Li
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Lei Deng
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China
|
17
|
Sakemi Y, Yamamoto K, Hosomi T, Aihara K. Sparse-firing regularization methods for spiking neural networks with time-to-first-spike coding. Sci Rep 2023; 13:22897. [PMID: 38129555 PMCID: PMC10739753 DOI: 10.1038/s41598-023-50201-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2023] [Accepted: 12/16/2023] [Indexed: 12/23/2023] Open
Abstract
The training of multilayer spiking neural networks (SNNs) using the error backpropagation algorithm has made significant progress in recent years. Among the various training schemes, the error backpropagation method that directly uses the firing time of neurons has attracted considerable attention because it can realize ideal temporal coding. This method uses time-to-first-spike (TTFS) coding, in which each neuron fires at most once, and this restriction on the number of firings enables information to be processed at a very low firing frequency. This low firing frequency increases the energy efficiency of information processing in SNNs. However, only an upper limit has been provided for TTFS-coded SNNs, and the information-processing capability of SNNs at lower firing frequencies has not been fully investigated. In this paper, we propose two spike-timing-based sparse-firing (SSR) regularization methods to further reduce the firing frequency of TTFS-coded SNNs. Both methods are characterized by the fact that they only require information about the firing timing and associated weights. The effects of these regularization methods were investigated on the MNIST, Fashion-MNIST, and CIFAR-10 datasets using multilayer perceptron networks and convolutional neural network structures.
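The time-to-first-spike (TTFS) coding described in the abstract above has a very compact core: stronger inputs fire earlier, and each neuron fires at most once. The sketch below shows one simple latency code; the linear mapping, the `t_max` parameter, and the use of `inf` for silent neurons are illustrative choices, not the paper's formulation.

```python
import numpy as np

def ttfs_encode(x, t_max=10.0):
    """Time-to-first-spike encoding: stronger inputs fire earlier, and each
    input neuron fires at most once. Inputs are assumed normalized to [0, 1];
    zero inputs never fire (their spike time is +inf)."""
    x = np.asarray(x, dtype=float)
    times = np.full(x.shape, np.inf)
    active = x > 0
    times[active] = t_max * (1.0 - x[active])  # linear latency code (one choice)
    return times

spike_times = ttfs_encode([1.0, 0.5, 0.0])
```

The "at most one spike" restriction is exactly what the paper's sparse-firing regularization pushes further: the total spike count of a TTFS network is already bounded by the neuron count, and regularization suppresses even those single spikes where they carry little information.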
Affiliation(s)
- Yusuke Sakemi
- Research Center for Mathematical Engineering, Chiba Institute of Technology, Narashino, Japan
- International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo, Tokyo, Japan
- Kazuyuki Aihara
- Research Center for Mathematical Engineering, Chiba Institute of Technology, Narashino, Japan
- International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo, Tokyo, Japan
|
18
|
Capone C, Lupo C, Muratore P, Paolucci PS. Beyond spiking networks: The computational advantages of dendritic amplification and input segregation. Proc Natl Acad Sci U S A 2023; 120:e2220743120. [PMID: 38019856 PMCID: PMC10710097 DOI: 10.1073/pnas.2220743120] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2022] [Accepted: 10/11/2023] [Indexed: 12/01/2023] Open
Abstract
The brain can efficiently learn a wide range of tasks, motivating the search for biologically inspired learning rules for improving current artificial intelligence technology. Most biological models are composed of point neurons and cannot achieve state-of-the-art performance in machine learning. Recent works have proposed that input segregation (neurons receive sensory information and higher-order feedback in segregated compartments) and nonlinear dendritic computation would support error backpropagation in biological neurons. However, these approaches require propagating errors with a fine spatiotemporal structure to all the neurons, which is unlikely to be feasible in a biological network. To relax this assumption, we suggest that bursts and dendritic input segregation provide a natural support for target-based learning, which propagates targets rather than errors. A coincidence mechanism between the basal and the apical compartments allows for generating high-frequency bursts of spikes. This architecture supports a burst-dependent learning rule, based on the comparison between the target bursting activity triggered by the teaching signal and the one caused by the recurrent connections, providing support for target-based learning. We show that this framework can be used to efficiently solve spatiotemporal tasks, such as context-dependent storage and recall of three-dimensional trajectories, and navigation tasks. Finally, we suggest that this neuronal architecture naturally allows for orchestrating "hierarchical imitation learning", enabling the decomposition of challenging long-horizon decision-making tasks into simpler subtasks. We show a possible implementation of this in a two-level network, where the higher-level network produces the contextual signal for the lower-level network.
Affiliation(s)
- Cristiano Capone
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome 00185, Italy
- Cosimo Lupo
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome 00185, Italy
- Paolo Muratore
- Scuola Internazionale Superiore di Studi Avanzati (SISSA), Visual Neuroscience Lab, Trieste 34136, Italy
|
19
|
Ma G, Yan R, Tang H. Exploiting noise as a resource for computation and learning in spiking neural networks. PATTERNS (NEW YORK, N.Y.) 2023; 4:100831. [PMID: 37876899 PMCID: PMC10591140 DOI: 10.1016/j.patter.2023.100831] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/25/2023] [Revised: 07/06/2023] [Accepted: 08/07/2023] [Indexed: 10/26/2023]
Abstract
Networks of spiking neurons underpin the extraordinary information-processing capabilities of the brain and have become pillar models in neuromorphic artificial intelligence. Despite extensive research on spiking neural networks (SNNs), most studies are established on deterministic models, overlooking the inherent non-deterministic, noisy nature of neural computations. This study introduces the noisy SNN (NSNN) and the noise-driven learning (NDL) rule by incorporating noisy neuronal dynamics to exploit the computational advantages of noisy neural processing. The NSNN provides a theoretical framework that yields scalable, flexible, and reliable computation and learning. We demonstrate that this framework leads to spiking neural models with competitive performance, improved robustness against challenging perturbations compared with deterministic SNNs, and better reproducing probabilistic computation in neural coding. Generally, this study offers a powerful and easy-to-use tool for machine learning, neuromorphic intelligence practitioners, and computational neuroscience researchers.
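A common way to express the "noisy neuronal dynamics" the abstract above builds on is to replace the hard firing threshold with a probabilistic one: the closer the membrane potential is to threshold, the more likely the neuron fires. The sigmoid form and parameters below are one simple illustrative choice, not the NSNN's specific noise model.

```python
import numpy as np

rng = np.random.default_rng(42)

def noisy_spike(v, v_th=1.0, beta=5.0):
    """Probabilistic firing: instead of a hard threshold, the spike is a
    Bernoulli draw whose probability grows smoothly with (v - v_th).
    beta controls how sharp the transition is (large beta ~ deterministic).
    Returns (spiked?, firing probability)."""
    p = 1.0 / (1.0 + np.exp(-beta * (v - v_th)))
    return bool(rng.random() < p), p

# Far below threshold the neuron almost never fires; far above, almost always.
_, p_low = noisy_spike(0.0)
_, p_high = noisy_spike(2.0)
```

A side benefit that motivates noise-driven learning: the smooth firing probability is differentiable in `v`, whereas the hard threshold is not, so the noise model itself supplies a gradient path.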
Affiliation(s)
- Gehua Ma
- College of Computer Science and Technology, Zhejiang University, Hangzhou, PRC
- Rui Yan
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, PRC
- Huajin Tang
- College of Computer Science and Technology, Zhejiang University, Hangzhou, PRC
- State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou, PRC
|
20
|
Zhang Y, Xiang S, Jiang S, Han Y, Guo X, Zheng L, Shi Y, Hao Y. Hybrid photonic deep convolutional residual spiking neural networks for text classification. OPTICS EXPRESS 2023; 31:28489-28502. [PMID: 37710902 DOI: 10.1364/oe.497218] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/05/2023] [Accepted: 07/30/2023] [Indexed: 09/16/2023]
Abstract
Spiking neural networks (SNNs) offer powerful computation capability due to their event-driven nature and temporal processing. However, they are still limited to shallow structures and simple tasks owing to the difficulty of training. In this work, we propose a deep convolutional residual spiking neural network (DCRSNN) for text classification tasks. In the DCRSNN, feature extraction is achieved via a convolutional SNN with residual connections, trained with the surrogate gradient direct training technique. Classification is performed by a fully connected network. We also propose a hybrid photonic DCRSNN, in which photonic SNNs are used for classification with a converted training method. The accuracy of hard and soft reset methods, as well as three different surrogate functions, was evaluated and compared across four different datasets. Results indicated a maximum accuracy of 76.36% for MR, 91.03% for AG News, 88.06% for IMDB, and 93.99% for Yelp review polarity. Soft reset methods used in the deep convolutional SNN yielded slightly better accuracy than their hard reset counterparts. We also considered the effects of different pooling methods and observation time windows and found that the convergence accuracy achieved by convolutional SNNs was comparable to that of convolutional neural networks under the same conditions. Moreover, the hybrid photonic DCRSNN also shows comparable testing accuracy. This work provides new insights into extending SNN applications in the fields of text classification and natural language processing, which is of interest for resource-constrained scenarios.
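The surrogate gradient training named in the abstract above works around the fact that the spike function (a Heaviside step) has a derivative of zero almost everywhere: the forward pass keeps the hard threshold, while the backward pass substitutes a smooth bump around it. The fast-sigmoid-derivative surrogate and the `alpha` value below are one common choice among the several the paper compares, shown here only as an illustration.

```python
import numpy as np

def spike_forward(v, v_th=1.0):
    """Forward pass: hard threshold (the non-differentiable Heaviside step)."""
    return (v >= v_th).astype(float)

def spike_surrogate_grad(v, v_th=1.0, alpha=2.0):
    """Backward-pass stand-in: a smooth bump centered on the threshold
    (derivative of a fast sigmoid) replaces the Heaviside's zero-almost-
    everywhere derivative, so gradients can flow through spiking layers."""
    return alpha / (2.0 * (1.0 + alpha * np.abs(v - v_th)) ** 2)

v = np.array([0.0, 1.0, 2.0])
s = spike_forward(v)       # actual spikes used in the forward pass
g = spike_surrogate_grad(v)  # pseudo-derivative used only in backprop
```

The gradient is largest for membrane potentials near the threshold and falls off symmetrically on both sides, which is what lets backpropagation assign credit to neurons that almost fired as well as those that just did.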
|
21
|
McDonnell KJ. Leveraging the Academic Artificial Intelligence Silecosystem to Advance the Community Oncology Enterprise. J Clin Med 2023; 12:4830. [PMID: 37510945 PMCID: PMC10381436 DOI: 10.3390/jcm12144830] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2023] [Revised: 07/05/2023] [Accepted: 07/07/2023] [Indexed: 07/30/2023] Open
Abstract
Over the last 75 years, artificial intelligence has evolved from a theoretical concept and novel paradigm describing the role that computers might play in our society to a tool with which we daily engage. In this review, we describe AI in terms of its constituent elements, the synthesis of which we refer to as the AI Silecosystem. Herein, we provide an historical perspective of the evolution of the AI Silecosystem, conceptualized and summarized as a Kuhnian paradigm. This manuscript focuses on the role that the AI Silecosystem plays in oncology and its emerging importance in the care of the community oncology patient. We observe that this important role arises out of a unique alliance between the academic oncology enterprise and community oncology practices. We provide evidence of this alliance by illustrating the practical establishment of the AI Silecosystem at the City of Hope Comprehensive Cancer Center and its team utilization by community oncology providers.
Affiliation(s)
- Kevin J McDonnell
- Center for Precision Medicine, Department of Medical Oncology & Therapeutics Research, City of Hope Comprehensive Cancer Center, Duarte, CA 91010, USA
|
22
|
Dorzhigulov A, Saxena V. Spiking CMOS-NVM mixed-signal neuromorphic ConvNet with circuit- and training-optimized temporal subsampling. Front Neurosci 2023; 17:1177592. [PMID: 37534034 PMCID: PMC10390782 DOI: 10.3389/fnins.2023.1177592] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2023] [Accepted: 06/26/2023] [Indexed: 08/04/2023] Open
Abstract
We increasingly rely on deep learning algorithms to process colossal amounts of unstructured visual data. Commonly, these deep learning algorithms are deployed as software models on digital hardware, predominantly in data centers. The intrinsically high energy consumption of cloud-based deployment of deep neural networks (DNNs) has inspired researchers to look for alternatives, resulting in high interest in spiking neural networks (SNNs) and dedicated mixed-signal neuromorphic hardware. As a result, there is an emerging challenge to transfer DNN architecture functionality to energy-efficient spiking non-volatile memory (NVM)-based hardware with minimal loss in visual data processing accuracy. The Convolutional Neural Network (CNN) is the staple choice of DNN for visual data processing. However, the lack of analog-friendly spiking implementations and alternatives for some core CNN functions, such as MaxPool, hinders the conversion of CNNs into the spike domain, thus hampering neuromorphic hardware development. To address this gap, in this work, we propose MaxPool with temporal multiplexing for spiking CNNs (SCNNs), which is amenable to implementation in mixed-signal circuits. We leverage the temporal dynamics of the internal membrane potential of integrate-and-fire neurons to enable MaxPool decision-making in the spiking domain. The proposed MaxPool models are implemented and tested within the SCNN architecture using a modified version of the aihwkit framework, a PyTorch-based toolkit for modeling and simulating hardware-based neural networks. The proposed spiking MaxPool scheme can decide even before the complete spatiotemporal input is applied, thus selectively trading off latency with accuracy. It is observed that by allocating just 10% of the spatiotemporal input window for a pooling decision, the proposed spiking MaxPool achieves up to 61.74% accuracy on the CIFAR10 classification task with a 2-bit weight resolution after training with backpropagation, only about a 1% drop from the 62.78% accuracy of the full (100%) spatiotemporal window case; the 2-bit weight resolution was chosen to reflect foundry-integrated ReRAM limitations. In addition, we propose the realization of one of the proposed spiking MaxPool techniques in an NVM crossbar array along with periphery circuits designed in a 130 nm CMOS technology. Energy-efficiency estimates show competitive performance compared to recent neuromorphic chip designs.
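The latency-accuracy trade-off described in the abstract above, deciding the pooling winner before the full spatiotemporal window has elapsed, can be illustrated with spike counts over a prefix of the window. This sketch substitutes simple Poisson-like spike trains and a count-based winner for the paper's membrane-potential circuit mechanism; the names, rates, and `frac` parameter are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def early_maxpool(spike_trains, frac=0.1):
    """Pick the pooling winner from only the first `frac` of the time window,
    mimicking an early decision that trades accuracy for latency. The winner
    is the input channel with the most spikes in the observed prefix."""
    T = spike_trains.shape[1]
    t_cut = max(1, int(frac * T))
    counts = spike_trains[:, :t_cut].sum(axis=1)
    return int(np.argmax(counts))

# Four input channels over 100 time steps; channel 2 fires most often.
rates = [0.1, 0.2, 0.8, 0.3]
trains = np.array([rng.random(100) < r for r in rates])
winner_early = early_maxpool(trains, frac=0.1)  # decides after 10 steps
winner_full = early_maxpool(trains, frac=1.0)   # decides after all 100 steps
```

With well-separated firing rates the 10%-window decision usually matches the full-window one, which is the intuition behind the paper's small accuracy penalty for a large latency saving; closely matched channels are where early decisions go wrong.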
|
23
|
Buckley SM, Tait AN, McCaughan AN, Shastri BJ. Photonic online learning: a perspective. NANOPHOTONICS 2023; 12:833-845. [PMID: 36909290 PMCID: PMC9995662 DOI: 10.1515/nanoph-2022-0553] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/09/2022] [Revised: 10/31/2022] [Accepted: 12/03/2022] [Indexed: 06/18/2023]
Abstract
Emerging neuromorphic hardware promises to solve certain problems faster and with higher energy efficiency than traditional computing by using physical processes that take place at the device level as the computational primitives in neural networks. While initial results in photonic neuromorphic hardware are very promising, such hardware requires programming or "training" that is often power-hungry and time-consuming. In this article, we examine the online learning paradigm, where the machinery for training is built deeply into the hardware itself. We argue that some form of online learning will be necessary if photonic neuromorphic hardware is to achieve its true potential.
Affiliation(s)
- Sonia Mary Buckley
- Applied Physics Division, National Institute of Standards and Technology, Boulder, CO 80305, USA
| | - Alexander N. Tait
- Department of Physics, Engineering Physics and Astronomy, Queen’s University, Kingston, ON, Canada
| | - Adam N. McCaughan
- Applied Physics Division, National Institute of Standards and Technology, Boulder, CO 80305, USA
| | - Bhavin J. Shastri
- Department of Physics, Engineering Physics and Astronomy, Queen’s University, Kingston, ON, Canada
| |
|
24
|
Szegedi V, Bakos E, Furdan S, Kovács BH, Varga D, Erdélyi M, Barzó P, Szücs A, Tamás G, Lamsa K. HCN channels at the cell soma ensure the rapid electrical reactivity of fast-spiking interneurons in human neocortex. PLoS Biol 2023; 21:e3002001. [PMID: 36745683 PMCID: PMC9934405 DOI: 10.1371/journal.pbio.3002001]
Abstract
Accumulating evidence indicates that there are substantial species differences in the properties of mammalian neurons, yet theories of circuit activity and information processing in the human brain are based heavily on results obtained from rodents and other experimental animals. This knowledge gap may be particularly important for understanding the neocortex, the brain area responsible for the most complex neuronal operations and showing the greatest evolutionary divergence. Here, we examined differences in the electrophysiological properties of human and mouse fast-spiking GABAergic basket cells, among the most abundant inhibitory interneurons in the cortex. Analyses of membrane potential responses to current input, pharmacologically isolated somatic leak currents, isolated-soma outside-out patch recordings, and immunohistochemical staining revealed that human neocortical basket cells abundantly express the hyperpolarization-activated cyclic nucleotide-gated cation (HCN) channel isoforms HCN1 and HCN2 at the cell soma membrane, whereas these channels are sparse at the rodent basket cell soma membrane. Antagonist experiments showed that HCN channels in human neurons contribute to the resting membrane potential and cell excitability at the soma, accelerate somatic membrane potential kinetics, and shorten the lag between excitatory postsynaptic potentials and action potential generation. These effects are important because the somata of human fast-spiking neurons without HCN channels exhibit a low persistent ion leak and slow membrane potential kinetics compared with mouse fast-spiking neurons. HCN channels speed up human cell membrane potential kinetics and help attain an input-output rate close to that of rodent cells. Computational modeling demonstrated that HCN channel activity at the human fast-spiking cell soma membrane is sufficient to accelerate the input-output function as observed in cell recordings.
Thus, human and mouse fast-spiking neurons exhibit functionally significant differences in ion channel composition at the cell soma membrane to set the speed and fidelity of their input-output function. These HCN channels ensure fast electrical reactivity of fast-spiking cells in human neocortex.
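The biophysical argument, that a standing somatic conductance shortens the membrane time constant tau_m = C_m / g_total and thereby the EPSP-to-spike lag, can be checked with a minimal passive-soma simulation (a sketch only; the parameter values below are illustrative, not those measured in the study):

```python
import numpy as np

def measured_tau_m(g_total_nS, c_m_pF=100.0, i_step_pA=50.0, dt_ms=0.01, t_max_ms=100.0):
    """Simulate a passive soma (RC circuit) responding to a current step and
    return the time to reach 63.2% of the steady-state voltage, i.e. the
    effective membrane time constant tau_m = C_m / g_total (in ms).
    A standing HCN-like conductance adds to g_total and so shortens tau_m."""
    v = 0.0
    v_inf = i_step_pA / g_total_nS                 # steady-state depolarization (mV)
    for k in range(int(t_max_ms / dt_ms)):
        v += (-g_total_nS * v + i_step_pA) / c_m_pF * dt_ms  # pA/pF = mV/ms
        if v >= 0.632 * v_inf:
            return k * dt_ms
    return t_max_ms

tau_leak_only = measured_tau_m(g_total_nS=5.0)        # low-leak soma: slow kinetics
tau_with_hcn = measured_tau_m(g_total_nS=5.0 + 10.0)  # extra HCN conductance: faster
```

With these toy values the time constant drops from C_m/g = 100/5 = 20 ms to 100/15 ≈ 6.7 ms, the direction of the effect the antagonist experiments describe.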
Affiliation(s)
- Viktor Szegedi
- Department of Physiology, Anatomy and Neuroscience, University of Szeged, Szeged, Hungary
- Hungarian Centre of Excellence for Molecular Medicine Research Group for Human neuron physiology and therapy, Szeged, Hungary
| | - Emőke Bakos
- Department of Physiology, Anatomy and Neuroscience, University of Szeged, Szeged, Hungary
- Hungarian Centre of Excellence for Molecular Medicine Research Group for Human neuron physiology and therapy, Szeged, Hungary
| | - Szabina Furdan
- Department of Physiology, Anatomy and Neuroscience, University of Szeged, Szeged, Hungary
- Hungarian Centre of Excellence for Molecular Medicine Research Group for Human neuron physiology and therapy, Szeged, Hungary
| | - Bálint H. Kovács
- Department of Optics and Quantum Electronics, University of Szeged, Szeged, Hungary
| | - Dániel Varga
- Department of Optics and Quantum Electronics, University of Szeged, Szeged, Hungary
| | - Miklós Erdélyi
- Department of Optics and Quantum Electronics, University of Szeged, Szeged, Hungary
| | - Pál Barzó
- Department of Neurosurgery, University of Szeged, Szeged, Hungary
| | - Attila Szücs
- Hungarian Centre of Excellence for Molecular Medicine Research Group for Human neuron physiology and therapy, Szeged, Hungary
- Neuronal Cell Biology Research Group, Eötvös Loránd University, Budapest, Hungary
| | - Gábor Tamás
- MTA-SZTE Research Group for Cortical Microcircuits, Department of Physiology, Anatomy and Neuroscience, University of Szeged, Szeged, Hungary
| | - Karri Lamsa
- Department of Physiology, Anatomy and Neuroscience, University of Szeged, Szeged, Hungary
- Hungarian Centre of Excellence for Molecular Medicine Research Group for Human neuron physiology and therapy, Szeged, Hungary
| |
|
25
|
Pehle C, Wetterich C. Neuromorphic quantum computing. Phys Rev E 2022; 106:045311. [PMID: 36397478 DOI: 10.1103/physreve.106.045311]
Abstract
Quantum computation builds on the use of correlations. Correlations could also play a central role in artificial intelligence, neuromorphic computing, or "biological computing." As a step toward a systematic exploration of "correlated computing," we demonstrate that neuromorphic computing can perform quantum operations. Spiking neurons in the active or silent state are mapped to the two states of Ising spins. A quantum density matrix is constructed from the expectation values and correlations of the Ising spins. We show for a two-qubit system that quantum gates can be learned as a change of parameters for the neural network dynamics. These changes respect restrictions that ensure the quantum correlations. Our proposal for probabilistic computing goes beyond Markov chains and is not based on transition probabilities. Constraints on classical probability distributions relate changes made in one part of the system to other parts, similar to entangled quantum systems.
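The spin-to-density-matrix construction can be sketched in a few lines (restricted, for illustration, to classical Z-basis correlations; the paper's full construction also encodes off-diagonal quantum correlations and gate learning, which this toy omits):

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])                       # Pauli-Z

def density_matrix_from_spins(samples):
    """Build a two-qubit density matrix from classical Ising-spin statistics.
    samples: (n, 2) array of +/-1 spin pairs (e.g. active/silent neuron states).
    Only Z-type expectation values are used, so rho comes out diagonal: its
    entries are the empirical probabilities of the four spin configurations."""
    s1, s2 = samples[:, 0], samples[:, 1]
    m1, m2, c12 = s1.mean(), s2.mean(), (s1 * s2).mean()
    rho = 0.25 * (np.kron(I2, I2) + m1 * np.kron(Z, I2)
                  + m2 * np.kron(I2, Z) + c12 * np.kron(Z, Z))
    return rho

rng = np.random.default_rng(1)
spins = rng.choice([-1.0, 1.0], size=(10_000, 2))   # uncorrelated toy spins
rho = density_matrix_from_spins(spins)               # Hermitian, trace 1, PSD
```

Because the underlying spin statistics come from a genuine probability distribution, the resulting rho is automatically a valid (unit-trace, positive semidefinite) density matrix.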
Affiliation(s)
- Christian Pehle
- Kirchhoff-Institute for Physics, Heidelberg University, Im Neuenheimer Feld 227, 69120 Heidelberg, Germany
| | - Christof Wetterich
- Institute for Theoretical Physics, Heidelberg University, Philosophenweg 16, 69120 Heidelberg, Germany
| |
|
26
|
Ivanov D, Chezhegov A, Kiselev M, Grunin A, Larionov D. Neuromorphic artificial intelligence systems. Front Neurosci 2022; 16:959626. [PMID: 36188479 PMCID: PMC9516108 DOI: 10.3389/fnins.2022.959626]
Abstract
Modern artificial intelligence (AI) systems, based on von Neumann architecture and classical neural networks, have a number of fundamental limitations in comparison with the mammalian brain. In this article, we discuss these limitations and ways to mitigate them. Next, we present an overview of currently available neuromorphic AI projects in which these limitations are overcome by bringing some brain features into the functioning and organization of computing systems (TrueNorth, Loihi, Tianjic, SpiNNaker, BrainScaleS, NeuronFlow, DYNAP, Akida, Mythic). We also present a principle for classifying neuromorphic AI systems by the brain features they use: connectionism, parallelism, asynchrony, the impulse nature of information transfer, on-device learning, local learning, sparsity, analog computation, and in-memory computing. In addition to reviewing new architectural approaches used by neuromorphic devices based on existing silicon microelectronics technologies, we also discuss the prospects for using memristors as a new device base. Examples of recent advances in the use of memristors in neuromorphic applications are also given.
Affiliation(s)
- Dmitry Ivanov
- Cifrum, Moscow, Russia
- Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University, Moscow, Russia
- *Correspondence: Dmitry Ivanov
| | | | - Mikhail Kiselev
- Cifrum, Moscow, Russia
- Laboratory of Neuromorphic Computations, Department of Physics, Chuvash State University, Cheboksary, Russia
| | - Andrey Grunin
- Faculty of Physics, Lomonosov Moscow State University, Moscow, Russia
| | | |
|
27
|
Müller E, Schmitt S, Mauch C, Billaudelle S, Grübl A, Güttler M, Husmann D, Ilmberger J, Jeltsch S, Kaiser J, Klähn J, Kleider M, Koke C, Montes J, Müller P, Partzsch J, Passenberg F, Schmidt H, Vogginger B, Weidner J, Mayr C, Schemmel J. The operating system of the neuromorphic BrainScaleS-1 system. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.05.081]
|
28
|
Müller E, Arnold E, Breitwieser O, Czierlinski M, Emmel A, Kaiser J, Mauch C, Schmitt S, Spilger P, Stock R, Stradmann Y, Weis J, Baumbach A, Billaudelle S, Cramer B, Ebert F, Göltz J, Ilmberger J, Karasenko V, Kleider M, Leibfried A, Pehle C, Schemmel J. A Scalable Approach to Modeling on Accelerated Neuromorphic Hardware. Front Neurosci 2022; 16:884128. [PMID: 35663548 PMCID: PMC9157770 DOI: 10.3389/fnins.2022.884128]
Abstract
Neuromorphic systems open up opportunities to enlarge the explorative space for computational research. However, it is often challenging to unite efficiency and usability. This work presents the software aspects of this endeavor for the BrainScaleS-2 system, a hybrid accelerated neuromorphic hardware architecture based on physical modeling. We introduce key aspects of the BrainScaleS-2 Operating System: experiment workflow, API layering, software design, and platform operation. We present use cases to discuss and derive requirements for the software and showcase the implementation. The focus lies on novel system and software features such as multi-compartmental neurons, fast re-configuration for hardware-in-the-loop training, applications for the embedded processors, the non-spiking operation mode, interactive platform access, and sustainable hardware/software co-development. Finally, we discuss further developments in terms of hardware scale-up, system usability, and efficiency.
Affiliation(s)
- Eric Müller
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Elias Arnold
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Oliver Breitwieser
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Milena Czierlinski
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Arne Emmel
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Jakob Kaiser
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Christian Mauch
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Sebastian Schmitt
- Third Institute of Physics, University of Göttingen, Göttingen, Germany
| | - Philipp Spilger
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Raphael Stock
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Yannik Stradmann
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Johannes Weis
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Andreas Baumbach
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Department of Physiology, University of Bern, Bern, Switzerland
| | | | - Benjamin Cramer
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Falk Ebert
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Julian Göltz
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Department of Physiology, University of Bern, Bern, Switzerland
| | - Joscha Ilmberger
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Vitali Karasenko
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Mitja Kleider
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Aron Leibfried
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Christian Pehle
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Johannes Schemmel
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| |
|
29
|
Pehle C, Billaudelle S, Cramer B, Kaiser J, Schreiber K, Stradmann Y, Weis J, Leibfried A, Müller E, Schemmel J. The BrainScaleS-2 Accelerated Neuromorphic System With Hybrid Plasticity. Front Neurosci 2022; 16:795876. [PMID: 35281488 PMCID: PMC8907969 DOI: 10.3389/fnins.2022.795876]
Abstract
Since the beginning of information processing by electronic components, the nervous system has served as a metaphor for the organization of computational primitives. Brain-inspired computing today encompasses a class of approaches ranging from the use of novel nanodevices for computation to research into large-scale neuromorphic architectures such as TrueNorth, SpiNNaker, BrainScaleS, Tianjic, and Loihi. While implementation details differ, spiking neural networks, sometimes referred to as the third generation of neural networks, are the common abstraction used to model computation with such systems. Here we describe the second generation of the BrainScaleS neuromorphic architecture, emphasizing the applications it enables. The architecture combines a custom analog accelerator core, supporting accelerated physical emulation of bio-inspired spiking neural network primitives, with a tightly coupled digital processor and a digital event-routing network.
Affiliation(s)
| | | | | | | | | | | | | | | | | | - Johannes Schemmel
- Electronic Visions, Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| |
|
30
|
Masset P, Zavatone-Veth JA, Connor JP, Murthy VN, Pehlevan C. Natural gradient enables fast sampling in spiking neural networks. Adv Neural Inf Process Syst 2022; 35:22018-22034. [PMID: 37476623 PMCID: PMC10358281]
Abstract
For animals to navigate an uncertain world, their brains need to estimate uncertainty at the timescales of sensations and actions. Sampling-based algorithms afford a theoretically grounded framework for probabilistic inference in neural circuits, but it remains unknown how fast sampling algorithms can be implemented in biologically plausible spiking networks. Here, we propose to leverage the population geometry, controlled by the neural code and the neural dynamics, to implement fast samplers in spiking neural networks. We first show that two classes of spiking samplers (efficient balanced spiking networks that simulate Langevin sampling, and networks with probabilistic spike rules that implement Metropolis-Hastings sampling) can be unified within a common framework. We then show that a careful choice of population geometry, corresponding to the natural space of parameters, enables rapid inference of parameters drawn from strongly correlated high-dimensional distributions in both networks. Our results suggest design principles for sampling-based probabilistic inference algorithms in spiking neural networks, yielding potential inspiration for neuromorphic computing and testable predictions for neurobiology.
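The benefit of choosing the right geometry can be sketched with a non-spiking toy model (a hedged sketch: this is plain preconditioned Langevin dynamics on a Gaussian, not the paper's spiking networks). Rescaling the Langevin update by the target covariance, a natural-gradient-like metric, equalizes relaxation timescales across strongly correlated dimensions:

```python
import numpy as np

def langevin_sample(cov, n_steps=20000, eta=0.05, precondition=True, seed=0):
    """Unadjusted Langevin sampling of a zero-mean Gaussian N(0, cov).
    With precondition=True, the update is rescaled by the target covariance
    (a natural-gradient-like metric), which equalizes relaxation timescales
    across strongly correlated dimensions and speeds mixing."""
    rng = np.random.default_rng(seed)
    prec = np.linalg.inv(cov)                      # grad of U(x) = 0.5 x' prec x is prec @ x
    P = cov if precondition else np.eye(len(cov))  # preconditioning metric
    P_half = np.linalg.cholesky(P)
    x = np.zeros(len(cov))
    out = np.empty((n_steps, len(cov)))
    for t in range(n_steps):
        x = x - eta * (P @ (prec @ x)) + np.sqrt(2.0 * eta) * (P_half @ rng.standard_normal(len(cov)))
        out[t] = x
    return out

cov = np.array([[1.0, 0.95], [0.95, 1.0]])         # strongly correlated target
samples = langevin_sample(cov)
emp_cov = np.cov(samples[2000:].T)                 # empirical covariance after burn-in
```

With the preconditioner the update reduces to an isotropic Ornstein-Uhlenbeck process, so both the stiff and the soft direction of the correlated target relax at the same rate; without it, mixing along the smallest eigendirection of `cov` is the bottleneck.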
Affiliation(s)
- Paul Masset
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138
| | - Jacob A Zavatone-Veth
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Department of Physics, Harvard University, Cambridge, MA 02138
| | - J Patrick Connor
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138
| | - Venkatesh N Murthy
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138
| | - Cengiz Pehlevan
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138
| |
|