1. Chakraborty S, Mishra J, Roy A, Niharika, Manna S, Baral T, Nandi P, Patra S, Patra SK. Liquid-liquid phase separation in subcellular assemblages and signaling pathways: Chromatin modifications induced gene regulation for cellular physiology and functions including carcinogenesis. Biochimie 2024;223:74-97. PMID: 38723938. DOI: 10.1016/j.biochi.2024.05.007.
Abstract
Liquid-liquid phase separation (LLPS) underlies many biochemical processes, including hydrogel formation, the integrity of macromolecular assemblages, and the existence of membraneless organelles such as the ribosome, nucleolus, nuclear speckles, paraspeckles, promyelocytic leukemia (PML) bodies, and Cajal bodies, all of which play crucial roles in cellular physiology; supporting evidence continues to accumulate. Phase separation is also well documented in the generation of plasma membrane subdomains and in the interplay between membranous and membraneless organelles. Intrinsically disordered regions (IDRs) of biopolymers/proteins are the most critical sticky regions that drive the formation of such condensates. Remarkably, phase-separated condensates are also involved in the epigenetic regulation of gene expression, chromatin remodeling, and heterochromatinization. Epigenetic marks on DNA and histones cooperate with RNA-binding proteins through their IDRs to trigger LLPS and thereby facilitate transcription. How phase separation coalesces mutant oncoproteins, orchestrates tumor suppressor gene expression, and facilitates cancer-associated signaling pathways is now being unravelled. That autophagosome formation and DYRK3-mediated cancer stem cell modification also depend on phase separation has been deciphered in part. In view of this, and to provide insight into subcellular membraneless organelle assembly, gene activation, enzyme-catalyzed biological reactions, and the downstream physiological functions they support, this article summarizes how these events are facilitated by LLPS, how LLPS shapes organelle function and the epigenetic modulation of gene expression, and how these processes go awry in cancer progression.
Affiliation(s)
- Subhajit Chakraborty
- Epigenetics and Cancer Research Laboratory, Biochemistry and Molecular Biology Group, Department of Life Science, National Institute of Technology, Rourkela, India
- Jagdish Mishra
- Epigenetics and Cancer Research Laboratory, Biochemistry and Molecular Biology Group, Department of Life Science, National Institute of Technology, Rourkela, India
- Ankan Roy
- Epigenetics and Cancer Research Laboratory, Biochemistry and Molecular Biology Group, Department of Life Science, National Institute of Technology, Rourkela, India
- Niharika
- Epigenetics and Cancer Research Laboratory, Biochemistry and Molecular Biology Group, Department of Life Science, National Institute of Technology, Rourkela, India
- Soumen Manna
- Epigenetics and Cancer Research Laboratory, Biochemistry and Molecular Biology Group, Department of Life Science, National Institute of Technology, Rourkela, India
- Tirthankar Baral
- Epigenetics and Cancer Research Laboratory, Biochemistry and Molecular Biology Group, Department of Life Science, National Institute of Technology, Rourkela, India
- Piyasa Nandi
- Epigenetics and Cancer Research Laboratory, Biochemistry and Molecular Biology Group, Department of Life Science, National Institute of Technology, Rourkela, India
- Subhajit Patra
- Department of Chemical Engineering, Maulana Azad National Institute of Technology, Bhopal, India
- Samir Kumar Patra
- Epigenetics and Cancer Research Laboratory, Biochemistry and Molecular Biology Group, Department of Life Science, National Institute of Technology, Rourkela, India
2. Sakemi Y, Yamamoto K, Hosomi T, Aihara K. Sparse-firing regularization methods for spiking neural networks with time-to-first-spike coding. Sci Rep 2023;13:22897. PMID: 38129555. PMCID: PMC10739753. DOI: 10.1038/s41598-023-50201-5.
Abstract
The training of multilayer spiking neural networks (SNNs) with the error backpropagation algorithm has made significant progress in recent years. Among the various training schemes, error backpropagation that acts directly on neuronal firing times has attracted considerable attention because it can realize ideal temporal coding. This approach uses time-to-first-spike (TTFS) coding, in which each neuron fires at most once; this restriction on the number of firings enables information to be processed at a very low firing frequency, which increases the energy efficiency of information processing in SNNs. However, only an upper limit has been provided for TTFS-coded SNNs, and the information-processing capability of SNNs at lower firing frequencies has not been fully investigated. In this paper, we propose two spike-timing-based sparse-firing (SSR) regularization methods to further reduce the firing frequency of TTFS-coded SNNs. Both methods require only information about the firing timings and the associated weights. The effects of these regularization methods were investigated on the MNIST, Fashion-MNIST, and CIFAR-10 datasets using multilayer perceptron networks and convolutional neural network structures.
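To make the coding scheme concrete, here is a minimal sketch of TTFS encoding together with a timing-based sparsity penalty: pushing first-spike times later is one simple way to lower firing activity. The function names, the penalty form, and the constants are illustrative assumptions, not the paper's SSR methods.

```python
# Minimal sketch (PyTorch): TTFS coding plus a timing-based sparsity penalty.
import torch

def ttfs_encode(x: torch.Tensor, t_max: float = 1.0) -> torch.Tensor:
    """Map intensities in [0, 1] to first-spike times: strong inputs fire early."""
    return t_max * (1.0 - x.clamp(0.0, 1.0))

def timing_sparsity_penalty(spike_times: torch.Tensor, t_max: float = 1.0) -> torch.Tensor:
    """Penalize early first spikes, nudging neurons toward later (rarer) firing --
    a stand-in for sparse-firing regularization, not the paper's SSR terms."""
    return ((t_max - spike_times).relu() ** 2).mean()

x = torch.rand(8, 784)            # a batch of normalized inputs
t = ttfs_encode(x)                # at most one spike time per input neuron
loss_reg = timing_sparsity_penalty(t)
print(loss_reg.item())
```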
Affiliation(s)
- Yusuke Sakemi
- Research Center for Mathematical Engineering, Chiba Institute of Technology, Narashino, Japan
- International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo, Tokyo, Japan
- Kazuyuki Aihara
- Research Center for Mathematical Engineering, Chiba Institute of Technology, Narashino, Japan
- International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo, Tokyo, Japan
3. Liu S, Leung VCH, Dragotti PL. First-spike coding promotes accurate and efficient spiking neural networks for discrete events with rich temporal structures. Front Neurosci 2023;17:1266003. PMID: 37849889. PMCID: PMC10577212. DOI: 10.3389/fnins.2023.1266003.
Abstract
Spiking neural networks (SNNs) are well suited to processing asynchronous event-based data. Most existing SNNs use rate-coding schemes that focus on the firing rate (FR), and so they generally ignore spike timing within events. In contrast, methods based on temporal coding, particularly time-to-first-spike (TTFS) coding, can be accurate and efficient, but they are difficult to train. There is currently limited research on applying TTFS coding to real events, since traditional TTFS-based methods impose a one-spike constraint that is unrealistic for event-based data. In this study, we present a novel decision-making strategy based on first-spike (FS) coding that encodes the FS timings of the output neurons to investigate the role of first-spike timing in classifying real-world event sequences with complex temporal structures. To achieve FS coding, we propose a novel surrogate gradient learning method for discrete spike trains. In the forward pass, output spikes are encoded into discrete times to generate FS times. In backpropagation, we develop an error-assignment method that propagates error from FS times to spikes through a Gaussian window, and supervised learning for spikes is then implemented through a surrogate gradient approach. Additional strategies are introduced to facilitate the training of FS timings, such as adding empty sequences and employing different parameters for different layers. We make a comprehensive comparison between FS and FR coding in the experiments. Our results show that FS coding achieves accuracy comparable to FR coding while yielding superior energy efficiency and distinct neuronal dynamics on data sequences with very rich temporal structures. Additionally, a longer delay before the first spike leads to higher accuracy, indicating that important information is encoded in the timing of the first spike.
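As a rough illustration of the error-assignment step, the sketch below spreads a gradient defined on first-spike times back over a discrete spike train through a Gaussian window. The names, the window width, and the exact weighting are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch (PyTorch): Gaussian-window error assignment from
# first-spike times back to a binary spike train.
import torch

def first_spike_time(spikes: torch.Tensor, t_max: int) -> torch.Tensor:
    """spikes: (T, N) binary train; per-neuron first-spike index (t_max if silent)."""
    T, N = spikes.shape
    idx = torch.arange(T).unsqueeze(1).expand(T, N).float()
    masked = torch.where(spikes > 0, idx, torch.full_like(idx, float(t_max)))
    return masked.min(dim=0).values

def gaussian_window_grad(spikes, t_first, grad_t, sigma=2.0):
    """Spread dLoss/dt_first over spikes near t_first with Gaussian weights."""
    T, N = spikes.shape
    idx = torch.arange(T).unsqueeze(1).float()
    w = torch.exp(-((idx - t_first.unsqueeze(0)) ** 2) / (2 * sigma ** 2))
    return w * grad_t.unsqueeze(0) * spikes   # only existing spikes receive error

T, N = 20, 5
spikes = (torch.rand(T, N) < 0.2).float()
t_first = first_spike_time(spikes, T)
grad_spikes = gaussian_window_grad(spikes, t_first, grad_t=torch.randn(N))
print(grad_spikes.shape)   # torch.Size([20, 5])
```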
Affiliation(s)
- Siying Liu
- Communications and Signal Processing Group, Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom
4. Bacho F, Chu D. Exploring Trade-Offs in Spiking Neural Networks. Neural Comput 2023;35:1627-1656. PMID: 37523463. DOI: 10.1162/neco_a_01609.
Abstract
Spiking neural networks (SNNs) have emerged as a promising alternative to traditional deep neural networks for low-power computing. However, the effectiveness of SNNs is not solely determined by their performance but also by their energy consumption, prediction speed, and robustness to noise. The recent method Fast & Deep, along with others, achieves fast and energy-efficient computation by constraining neurons to fire at most once. Known as time-to-first-spike (TTFS), this constraint, however, restricts the capabilities of SNNs in many aspects. In this work, we explore the relationships of performance, energy consumption, speed, and stability when using this constraint. More precisely, we highlight the existence of trade-offs where performance and robustness are gained at the cost of sparsity and prediction latency. To improve these trade-offs, we propose a relaxed version of Fast & Deep that allows for multiple spikes per neuron. Our experiments show that relaxing the spike constraint provides higher performance while also benefiting from faster convergence, similar sparsity, comparable prediction latency, and better robustness to noise compared to TTFS SNNs. By highlighting the limitations of TTFS and demonstrating the advantages of unconstrained SNNs, we provide valuable insight for the development of effective learning strategies for neuromorphic computing.
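The trade-off being relaxed is easiest to see in simulation. Below is a minimal sketch of a leaky integrate-and-fire neuron run with and without the one-spike (TTFS) constraint; the parameters and names are illustrative assumptions, not the Fast & Deep implementation.

```python
# Minimal sketch (PyTorch): LIF dynamics with an optional "at most one spike"
# (TTFS) constraint, to compare spike counts under the two regimes.
import torch

def lif_run(current, tau=10.0, v_th=1.0, one_spike=False):
    """current: (T, N). Returns a binary spike train of the same shape."""
    T, N = current.shape
    v = torch.zeros(N)
    fired = torch.zeros(N, dtype=torch.bool)
    spikes = torch.zeros(T, N)
    for t in range(T):
        v = v + (current[t] - v) / tau              # leaky integration
        s = v >= v_th
        if one_spike:
            s = s & ~fired                          # TTFS: suppress later spikes
            fired = fired | s
        spikes[t] = s.float()
        v = torch.where(s, torch.zeros_like(v), v)  # reset to zero on spike
    return spikes

I = 1.5 + torch.rand(100, 4)                        # drive strong enough to spike
print(lif_run(I, one_spike=True).sum().item(),      # at most 4 spikes total
      lif_run(I, one_spike=False).sum().item())     # many more without the constraint
```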
Affiliation(s)
- Florian Bacho
- CEMS, School of Computing, University of Kent, Canterbury CT2 7NF, U.K.
- Dominique Chu
- CEMS, School of Computing, University of Kent, Canterbury CT2 7NF, U.K.
5. Zhang H, Li Y, He B, Fan X, Wang Y, Zhang Y. Direct training high-performance spiking neural networks for object recognition and detection. Front Neurosci 2023;17:1229951. PMID: 37614339. PMCID: PMC10442545. DOI: 10.3389/fnins.2023.1229951.
Abstract
Introduction: The spiking neural network (SNN) is a bionic model that is energy-efficient when implemented on neuromorphic hardware. The non-differentiability of spiking signals and the complicated neural dynamics make direct training of high-performance SNNs a great challenge. Numerous crucial issues remain to be explored before direct-training SNNs can be deployed, such as gradient vanishing and explosion, spiking-signal decoding, and applications to upstream tasks.
Methods: To address gradient vanishing, we introduce a binary selection gate into the basic residual block and propose spiking gate (SG) ResNet to implement residual learning in SNNs. We propose two appropriate representations of the gate signal and verify, by analyzing gradient backpropagation, that SG ResNet can overcome gradient vanishing and explosion. For spiking-signal decoding, our attention spike decoder (ASD) achieves a better decoding scheme than rate coding by dynamically assigning weights to spiking signals along the temporal, channel, and spatial dimensions.
Results and discussion: The SG ResNet and ASD modules are evaluated on multiple object recognition datasets, including the static ImageNet, CIFAR-100, and CIFAR-10 datasets and the neuromorphic DVS-CIFAR10 dataset. Superior accuracy is demonstrated with a tiny simulation time step of four: specifically, 94.52% top-1 accuracy on CIFAR-10 and 75.64% top-1 accuracy on CIFAR-100. Spiking RetinaNet, which uses SG ResNet as the backbone and the ASD module for information decoding, is proposed as the first direct-training hybrid SNN-ANN detector for RGB images. Spiking RetinaNet with an SG ResNet34 backbone achieves an mAP of 0.296 on the object detection dataset MSCOCO.
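As a rough sketch of the decoding idea, the toy module below assigns learned softmax weights to spiking activity along the temporal dimension only; the real ASD also attends over channel and spatial dimensions, and all names here are illustrative assumptions.

```python
# Minimal sketch (PyTorch): decoding a spike train with learned attention
# weights over time steps instead of plain rate (mean) decoding.
import torch
import torch.nn as nn

class TemporalSpikeDecoder(nn.Module):
    def __init__(self, t_steps: int):
        super().__init__()
        self.score = nn.Linear(t_steps, t_steps)   # one score per time step

    def forward(self, spikes: torch.Tensor) -> torch.Tensor:
        # spikes: (batch, t_steps, features)
        rates = spikes.mean(dim=2)                 # (batch, t_steps) summary per step
        w = torch.softmax(self.score(rates), dim=1)
        return (w.unsqueeze(2) * spikes).sum(dim=1)  # weighted readout: (batch, features)

dec = TemporalSpikeDecoder(t_steps=4)
out = dec((torch.rand(2, 4, 128) < 0.3).float())
print(out.shape)   # torch.Size([2, 128])
```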
Affiliation(s)
- Hong Zhang
- State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou, China
- Yang Li
- State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou, China
- Bin He
- State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou, China
- Xiongfei Fan
- State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou, China
- Yue Wang
- State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou, China
- Yu Zhang
- State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou, China
- Key Laboratory of Collaborative Sensing and Autonomous Unmanned Systems of Zhejiang Province, Hangzhou, China
6. Mysin I. Phase relations of interneuronal activity relative to theta rhythm. Front Neural Circuits 2023;17:1198573. PMID: 37484208. PMCID: PMC10358363. DOI: 10.3389/fncir.2023.1198573.
Abstract
The theta rhythm plays a crucial role in synchronizing neural activity during attention and memory processes. However, the mechanisms behind the formation of neural activity during theta rhythm generation remain unknown. To address this, we propose a mathematical model that explains the distribution of interneuron activity in the CA1 field across theta rhythm phases. Our model consists of a network of seven types of interneurons in the CA1 field that receive inputs from the CA3 field, the entorhinal cortex, and local pyramidal neurons in the CA1 field. By adjusting the parameters of the connections in the model, we demonstrate that it is possible to replicate the experimentally observed phase relations between interneurons and the theta rhythm. Our model predicts that populations of interneurons receive unimodal excitation and inhibition with coinciding peaks, and that excitation dominates in determining the firing dynamics of interneurons.
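A toy calculation along the lines of the model's prediction: if excitation and inhibition are both unimodal in theta phase with coinciding peaks, and excitation is stronger, their difference sets the preferred firing phase. The sketch below illustrates only this arithmetic, with illustrative constants, not the paper's seven-population CA1 network.

```python
# Minimal sketch (PyTorch): coinciding unimodal (von Mises-shaped) excitation
# and inhibition; the dominant excitation fixes the preferred firing phase.
import torch

theta = torch.linspace(0, 2 * torch.pi, 360)
peak = torch.pi                                        # shared peak phase
exc = 1.0 * torch.exp(2.0 * torch.cos(theta - peak))   # excitatory drive
inh = 0.6 * torch.exp(2.0 * torch.cos(theta - peak))   # weaker inhibitory drive
rate = (exc - inh).clamp(min=0.0)                      # excitation dominates
print(float(theta[rate.argmax()]))                     # preferred phase ~= pi
```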
7. Winston CN, Mastrovito D, Shea-Brown E, Mihalas S. Heterogeneity in Neuronal Dynamics Is Learned by Gradient Descent for Temporal Processing Tasks. Neural Comput 2023;35:555-592. PMID: 36827598. PMCID: PMC10044000. DOI: 10.1162/neco_a_01571.
Abstract
Individual neurons in the brain have complex intrinsic dynamics that are highly diverse. We hypothesize that the complex dynamics produced by networks of complex and heterogeneous neurons may contribute to the brain's ability to process and respond to temporally complex data. To study the role of complex and heterogeneous neuronal dynamics in network computation, we develop a rate-based neuronal model, the generalized-leaky-integrate-and-fire-rate (GLIFR) model, which is a rate equivalent of the generalized-leaky-integrate-and-fire model. The GLIFR model has multiple dynamical mechanisms, which add to the complexity of its activity while maintaining differentiability. We focus on the role of after-spike currents, currents induced or modulated by neuronal spikes, in producing rich temporal dynamics. We use machine learning techniques to learn both synaptic weights and parameters underlying intrinsic dynamics to solve temporal tasks. The GLIFR model allows the use of standard gradient descent techniques rather than surrogate gradient descent, which has been used in spiking neural networks. After establishing the ability to optimize parameters using gradient descent in single neurons, we ask how networks of GLIFR neurons learn and perform on temporally challenging tasks, such as sequential MNIST. We find that these networks learn diverse parameters, which gives rise to diversity in neuronal dynamics, as demonstrated by clustering of neuronal parameters. GLIFR networks have mixed performance when compared to vanilla recurrent neural networks, with higher performance in pixel-by-pixel MNIST but lower in line-by-line MNIST. However, they appear to be more robust to random silencing. We find that the ability to learn heterogeneity and the presence of after-spike currents contribute to these gains in performance. Our work demonstrates both the computational robustness of neuronal complexity and diversity in networks and a feasible method of training such models using exact gradients.
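The key property, differentiability with after-spike currents, can be sketched minimally as follows. The update equations and constants are illustrative assumptions in the general spirit of a GLIFR-style unit, not the paper's model, but they show why exact autograd gradients apply with no surrogate.

```python
# Minimal sketch (PyTorch): a smooth rate neuron with an after-spike current;
# everything is differentiable, so standard autograd computes exact gradients.
import torch

def glifr_step(v, a, x, tau_v=10.0, tau_a=20.0, k_a=-0.5, v_th=1.0):
    r = torch.sigmoid(5.0 * (v - v_th))   # smooth "firing rate" replaces a hard spike
    a = a - a / tau_a + k_a * r           # after-spike current driven by the rate
    v = v + (x + a - v) / tau_v           # leaky voltage with after-spike feedback
    return v, a, r

v0 = torch.zeros(3, requires_grad=True)
v, a, rates = v0, torch.zeros(3), []
for t in range(50):
    v, a, r = glifr_step(v, a, x=torch.ones(3) * 1.5)
    rates.append(r)
loss = torch.stack(rates).mean()
loss.backward()                           # exact gradient, no surrogate needed
print(v0.grad)
```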
Affiliation(s)
- Chloe N Winston
- Departments of Neuroscience and Computer Science, University of Washington, Seattle, WA 98195, U.S.A.
- University of Washington Computational Neuroscience Center, Seattle, WA 98195, U.S.A.
- Dana Mastrovito
- Allen Institute for Brain Science, Seattle, WA 98109, U.S.A.
- Eric Shea-Brown
- University of Washington Computational Neuroscience Center, Seattle, WA 98195, U.S.A.
- Allen Institute for Brain Science, Seattle, WA 98109, U.S.A.
- Department of Applied Mathematics, University of Washington, Seattle, WA 98195, U.S.A.
- Stefan Mihalas
- University of Washington Computational Neuroscience Center, Seattle, WA 98195, U.S.A.
- Allen Institute for Brain Science, Seattle, WA 98109, U.S.A.
- Department of Applied Mathematics, University of Washington, Seattle, WA 98195, U.S.A.
8. Bauer FC, Lenz G, Haghighatshoar S, Sheik S. EXODUS: Stable and efficient training of spiking neural networks. Front Neurosci 2023;17:1110444. PMID: 36845419. PMCID: PMC9945199. DOI: 10.3389/fnins.2023.1110444.
Abstract
Introduction: Spiking neural networks (SNNs) are gaining significant traction in machine learning tasks where energy efficiency is of utmost importance. Training such networks with the state-of-the-art backpropagation through time (BPTT) is, however, very time-consuming. Previous work employs an efficient GPU-accelerated backpropagation algorithm called SLAYER, which speeds up training considerably. SLAYER, however, does not take the neuron reset mechanism into account while computing the gradients, which we argue is the source of numerical instability. To counteract this, SLAYER introduces a gradient scale hyperparameter for each layer, which needs manual tuning.
Methods: In this paper, we modify SLAYER and design an algorithm called EXODUS that accounts for the neuron reset mechanism and applies the implicit function theorem (IFT) to calculate the correct gradients (equivalent to those computed by BPTT). We furthermore eliminate the need for ad hoc gradient scaling, thus reducing training complexity tremendously.
Results: We demonstrate, via computer simulations, that EXODUS is numerically stable and achieves performance comparable to or better than SLAYER, especially on tasks in which SNNs rely on temporal features.
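A minimal sketch of the point at issue: whether the membrane reset stays inside the autograd graph. In the toy loop below, the spike (and hence the reset) participates in differentiation via a surrogate; all names and the surrogate form are illustrative assumptions, not SLAYER's or EXODUS's code.

```python
# Minimal sketch (PyTorch): a surrogate spike function where the reset term
# remains differentiable, so the reset contributes to the computed gradient.
import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 1.0).float()

    @staticmethod
    def backward(ctx, g):
        (v,) = ctx.saved_tensors
        return g / (1.0 + 10.0 * (v - 1.0).abs()) ** 2   # fast-sigmoid surrogate

w = torch.tensor(0.8, requires_grad=True)
v = torch.zeros(())
for t in range(10):
    v = v + w * 1.0                 # integrate a constant input
    s = SurrogateSpike.apply(v)
    v = v - s * 1.0                 # soft reset; s stays in the graph
loss = v
loss.backward()                     # gradient flows through spikes AND resets
print(w.grad)
```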
9. Müller E, Schmitt S, Mauch C, Billaudelle S, Grübl A, Güttler M, Husmann D, Ilmberger J, Jeltsch S, Kaiser J, Klähn J, Kleider M, Koke C, Montes J, Müller P, Partzsch J, Passenberg F, Schmidt H, Vogginger B, Weidner J, Mayr C, Schemmel J. The operating system of the neuromorphic BrainScaleS-1 system. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.05.081.
10. Meng Q, Yan S, Xiao M, Wang Y, Lin Z, Luo ZQ. Training much deeper spiking neural networks with a small number of time-steps. Neural Netw 2022;153:254-268. PMID: 35759953. DOI: 10.1016/j.neunet.2022.06.001.
Abstract
Spiking neural networks (SNNs) are a promising energy-efficient neural architecture when implemented on neuromorphic hardware. The artificial neural network (ANN) to SNN conversion method, currently the most effective SNN training method, has successfully converted moderately deep ANNs to SNNs with satisfactory performance. However, this method requires a large number of time-steps, which hurts the energy efficiency of SNNs. How to effectively convert a very deep ANN (e.g., more than 100 layers) to an SNN with a small number of time-steps remains a difficult task. To tackle this challenge, this paper makes the first attempt to propose a novel error analysis framework that takes both the "quantization error" and the "deviation error" into account; these come from the discretization of SNN dynamics (the neuron's coding scheme) and the inconstant input currents at intermediate layers, respectively. In particular, our theory reveals that the "deviation error" depends on both the spike threshold and the input variance. Based on our theoretical analysis, we further propose the Threshold Tuning and Residual Block Restructuring (TTRBR) method, which can convert very deep ANNs (>100 layers) to SNNs with negligible accuracy degradation while requiring only a small number of time-steps. With very deep networks, our TTRBR method achieves state-of-the-art (SOTA) performance on the CIFAR-10, CIFAR-100, and ImageNet classification tasks.
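For orientation, here is a minimal sketch of the generic threshold-calibration step in ANN-to-SNN conversion: a spiking layer's threshold is set from the ANN's activation statistics so that firing rates approximate ReLU outputs. The percentile choice and all names are illustrative assumptions, and the paper's TTRBR method (threshold tuning plus residual-block restructuring) goes well beyond this.

```python
# Minimal sketch (PyTorch): percentile-based threshold calibration and an
# integrate-and-fire layer whose rates approximate the ANN's ReLU outputs.
import torch
import torch.nn as nn

@torch.no_grad()
def calibrate_threshold(layer: nn.Linear, samples: torch.Tensor, pct: float = 99.0) -> float:
    """Use a high percentile of the ANN's activations as the spike threshold."""
    acts = torch.relu(layer(samples))
    return torch.quantile(acts.flatten(), pct / 100.0).item()

def if_layer_rate(layer: nn.Linear, x: torch.Tensor, v_th: float, t_steps: int) -> torch.Tensor:
    """Integrate-and-fire over t_steps; returns spike rates scaled by v_th."""
    v = torch.zeros(x.shape[0], layer.out_features)
    count = torch.zeros_like(v)
    for _ in range(t_steps):
        v = v + layer(x)                     # constant-current coding of the input
        s = (v >= v_th).float()
        count, v = count + s, v - s * v_th   # soft reset keeps residual charge
    return count / t_steps * v_th

layer = nn.Linear(16, 8)
x = torch.rand(32, 16)
v_th = calibrate_threshold(layer, x)
err = (if_layer_rate(layer, x, v_th, t_steps=64) - torch.relu(layer(x))).abs().mean()
print(err.item())   # small: rates approximate the ReLU activations
```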
Affiliation(s)
- Qingyan Meng
- The Chinese University of Hong Kong, Shenzhen, China; Shenzhen Research Institute of Big Data, Shenzhen 518115, China
- Shen Yan
- Center for Data Science, Peking University, China
- Mingqing Xiao
- Key Laboratory of Machine Perception (MOE), School of Artificial Intelligence, Peking University, China
- Yisen Wang
- Key Laboratory of Machine Perception (MOE), School of Artificial Intelligence, Peking University, China; Institute for Artificial Intelligence, Peking University, China
- Zhouchen Lin
- Key Laboratory of Machine Perception (MOE), School of Artificial Intelligence, Peking University, China; Institute for Artificial Intelligence, Peking University, China; Peng Cheng Laboratory, China
- Zhi-Quan Luo
- The Chinese University of Hong Kong, Shenzhen, China; Shenzhen Research Institute of Big Data, Shenzhen 518115, China
11. Zou Z, Alimohamadi H, Zakeri A, Imani F, Kim Y, Najafi MH, Imani M. Memory-inspired spiking hyperdimensional network for robust online learning. Sci Rep 2022;12:7641. PMID: 35538126. PMCID: PMC9090930. DOI: 10.1038/s41598-022-11073-3.
Abstract
Recently, brain-inspired computing models have shown great potential to outperform today's deep learning solutions in terms of robustness and energy efficiency. In particular, spiking neural networks (SNNs) and hyperdimensional computing (HDC) have shown promising results in enabling efficient and robust cognitive learning. Despite this success, the two brain-inspired models have different strengths: while the SNN mimics the physical properties of the human brain, HDC models the brain at a more abstract and functional level. Their design philosophies demonstrate complementary patterns that motivate their combination. With the help of a classical psychological model of memory, we propose SpikeHD, the first framework that fundamentally combines spiking neural networks and hyperdimensional computing. SpikeHD generates a scalable and strong cognitive learning system that better mimics brain functionality. SpikeHD exploits spiking neural networks to extract low-level features while preserving the spatial and temporal correlation of raw event-based spike data. It then utilizes HDC to operate over the SNN output by mapping the signal into high-dimensional space, learning the abstract information, and classifying the data. Our extensive evaluation on a set of benchmark classification problems shows that, compared to an SNN architecture alone, SpikeHD (1) significantly enhances learning capability by exploiting two-stage information processing, (2) provides substantial robustness to noise and failure, and (3) reduces the network size and the parameters required to learn complex information.
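The HDC half of such a two-stage pipeline can be sketched minimally as follows: project feature vectors (stand-ins for SNN outputs) into high-dimensional bipolar hypervectors, accumulate class prototypes, and classify by cosine similarity. The random-projection encoder and all names are illustrative assumptions, not the paper's code.

```python
# Minimal sketch (PyTorch): random-projection HD encoding and
# nearest-centroid classification over (stand-in) SNN features.
import torch

D = 10_000                                    # hypervector dimensionality
torch.manual_seed(0)
proj = torch.randn(128, D)                    # random projection: features -> HD space

def hd_encode(features: torch.Tensor) -> torch.Tensor:
    return torch.sign(features @ proj)        # bipolar hypervectors

feats = torch.rand(100, 128)                  # stand-in for SNN-extracted features
labels = torch.randint(0, 4, (100,))
hv = hd_encode(feats)
centroids = torch.stack([hv[labels == c].sum(0) for c in range(4)])  # class prototypes

query = hd_encode(torch.rand(1, 128))
sims = torch.cosine_similarity(query, centroids)   # compare to each class prototype
print(int(sims.argmax()))                          # predicted class
```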
Affiliation(s)
- Zhuowen Zou
- University of California San Diego, La Jolla, CA, 92093, USA
- University of California Irvine, Irvine, CA, 92697, USA
- Ali Zakeri
- University of California Irvine, Irvine, CA, 92697, USA
- Farhad Imani
- University of Connecticut, Storrs, CT, 06269, USA
- Yeseong Kim
- Daegu Gyeongbuk Institute of Science and Technology, Daegu, South Korea
- Mohsen Imani
- University of California Irvine, Irvine, CA, 92697, USA
12. Pehle C, Billaudelle S, Cramer B, Kaiser J, Schreiber K, Stradmann Y, Weis J, Leibfried A, Müller E, Schemmel J. The BrainScaleS-2 Accelerated Neuromorphic System With Hybrid Plasticity. Front Neurosci 2022;16:795876. PMID: 35281488. PMCID: PMC8907969. DOI: 10.3389/fnins.2022.795876.
Abstract
Since the beginning of information processing by electronic components, the nervous system has served as a metaphor for the organization of computational primitives. Brain-inspired computing today encompasses a class of approaches ranging from the use of novel nano-devices for computation to research into large-scale neuromorphic architectures such as TrueNorth, SpiNNaker, BrainScaleS, Tianjic, and Loihi. While implementation details differ, spiking neural networks (sometimes referred to as the third generation of neural networks) are the common abstraction used to model computation with such systems. Here we describe the second generation of the BrainScaleS neuromorphic architecture, emphasizing the applications it enables. The architecture combines a custom analog accelerator core, supporting the accelerated physical emulation of bio-inspired spiking neural network primitives, with a tightly coupled digital processor and a digital event-routing network.
Affiliation(s)
- Johannes Schemmel
- Electronic Visions, Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
13. Büchel J, Zendrikov D, Solinas S, Indiveri G, Muir DR. Supervised training of spiking neural networks for robust deployment on mixed-signal neuromorphic processors. Sci Rep 2021;11:23376. PMID: 34862429. PMCID: PMC8642544. DOI: 10.1038/s41598-021-02779-x.
Abstract
Mixed-signal analog/digital circuits emulate spiking neurons and synapses with extremely high energy efficiency, an approach known as "neuromorphic engineering". However, analog circuits are sensitive to process-induced variation among transistors in a chip ("device mismatch"). For neuromorphic implementation of Spiking Neural Networks (SNNs), mismatch causes parameter variation between identically-configured neurons and synapses. Each chip exhibits a different distribution of neural parameters, causing deployed networks to respond differently between chips. Current solutions to mitigate mismatch based on per-chip calibration or on-chip learning entail increased design complexity, area and cost, making deployment of neuromorphic devices expensive and difficult. Here we present a supervised learning approach that produces SNNs with high robustness to mismatch and other common sources of noise. Our method trains SNNs to perform temporal classification tasks by mimicking a pre-trained dynamical system, using a local learning rule from non-linear control theory. We demonstrate our method on two tasks requiring temporal memory, and measure the robustness of our approach to several forms of noise and mismatch. We show that our approach is more robust than common alternatives for training SNNs. Our method provides robust deployment of pre-trained networks on mixed-signal neuromorphic hardware, without requiring per-device training or calibration.
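To make "device mismatch" concrete, the sketch below perturbs per-neuron time constants by roughly 20% and measures how much a fixed network's spike output drifts. This illustrates only the problem setting, with illustrative constants; the paper's training method (mimicking a pre-trained dynamical system with a local learning rule) is not shown.

```python
# Minimal sketch (PyTorch): device mismatch modeled as per-neuron parameter
# spread, with a simple before/after comparison of the spike output.
import torch

def run_lif(current, tau, v_th=1.0):
    v = torch.zeros(current.shape[1])
    out = []
    for t in range(current.shape[0]):
        v = v + (current[t] - v) / tau        # per-neuron time constant
        s = (v >= v_th).float()
        v = v * (1 - s)                       # reset on spike
        out.append(s)
    return torch.stack(out)

torch.manual_seed(0)
I = torch.rand(200, 32) * 3.0                              # shared input drive
tau = torch.full((32,), 10.0)                              # nominal design value
mismatch = (tau * (1 + 0.2 * torch.randn(32))).clamp(min=1.0)  # ~20% spread per "chip"
drift = (run_lif(I, tau) - run_lif(I, mismatch)).abs().mean()
print(float(drift))                                        # response differs per chip
```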
Affiliation(s)
- Julian Büchel
- SynSense, Thurgauerstrasse 40, 8050, Zurich, Switzerland
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstrasse 190, 8057, Zurich, Switzerland
- Dmitrii Zendrikov
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstrasse 190, 8057, Zurich, Switzerland
- Sergio Solinas
- Department of Biomedical Science, University of Sassari, Piazza Università, 21, 07100, Sassari, Sardegna, Italy
- Giacomo Indiveri
- SynSense, Thurgauerstrasse 40, 8050, Zurich, Switzerland
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstrasse 190, 8057, Zurich, Switzerland
- Dylan R Muir
- SynSense, Thurgauerstrasse 40, 8050, Zurich, Switzerland
|
14
|
Yin B, Corradi F, Bohté SM. Accurate and efficient time-domain classification with adaptive spiking recurrent neural networks. NAT MACH INTELL 2021. [DOI: 10.1038/s42256-021-00397-w] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|