1. Galinsky VL, Frank LR. Critically synchronized brain waves form an effective, robust and flexible basis for human memory and learning. Sci Rep 2023; 13:4343. PMID: 36928606; PMCID: PMC10020450; DOI: 10.1038/s41598-023-31365-6.
Abstract
The effectiveness, robustness, and flexibility of memory and learning constitute the very essence of human natural intelligence, cognition, and consciousness. However, currently accepted views on these subjects have, to date, been put forth without any basis in a true physical theory of how the brain communicates internally via its electrical signals. This lack of a solid theoretical framework has implications not only for our understanding of how the brain works, but also for the wide range of computational models developed from the standard orthodox view of brain neuronal organization and brain-network-derived functioning based on ad hoc Hodgkin-Huxley circuit analogies, which have produced a multitude of Artificial, Recurrent, Convolutional, Spiking, etc., Neural Networks (ARCSe NNs) that in turn underlie the standard algorithms of artificial intelligence (AI) and machine learning (ML). Our hypothesis, based upon our recently developed physical model of weakly evanescent brain wave propagation (WETCOW), is that, contrary to the current orthodox model in which brain neurons merely integrate and fire while slowly leaking, they can instead perform the much more sophisticated task of efficient coherent synchronization/desynchronization guided by the collective influence of propagating nonlinear near-critical brain waves, waves that are currently assumed to be nothing but inconsequential subthreshold noise. In this paper we highlight the learning and memory capabilities of our WETCOW framework and then apply it to the specific application of AI/ML and neural networks. We demonstrate that learning inspired by these critically synchronized brain waves is shallow, yet its timing and accuracy outperform deep ARCSe counterparts on standard test datasets. These results have implications both for our understanding of brain function and for the wide range of AI/ML applications.
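The WETCOW equations themselves are not given in this abstract. As a loose, generic illustration of the kind of collective phase synchronization among weakly coupled oscillatory units that the hypothesis invokes, here is a minimal Kuramoto-style sketch in Python; the model and every parameter value are standard textbook assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64                                  # number of oscillatory units
K = 1.5                                 # global coupling strength (illustrative)
omega = rng.normal(0.0, 0.2, size=N)    # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)    # initial phases
dt, steps = 0.01, 5000

for _ in range(steps):
    # Mean-field coupling: each unit is pulled toward the population phase.
    coupling = (K / N) * np.sum(np.sin(theta[None, :] - theta[:, None]), axis=1)
    theta += dt * (omega + coupling)

# Order parameter r in [0, 1]: r -> 1 indicates coherent synchronization.
r = np.abs(np.mean(np.exp(1j * theta)))
print(f"phase coherence r = {r:.3f}")
```

The order parameter r rises toward 1 as the units phase-lock; the paper's claim is that near-critical wave dynamics can exploit such synchronization/desynchronization transitions for learning and memory.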
Affiliation(s)
- Vitaly L Galinsky
- Center for Scientific Computation in Imaging, University of California at San Diego, La Jolla, CA, 92037-0854, USA.
- Lawrence R Frank
- Center for Scientific Computation in Imaging, University of California at San Diego, La Jolla, CA, 92037-0854, USA.
- Center for Functional MRI, University of California at San Diego, La Jolla, CA, 92037-0677, USA.
2. Guo W, Yantir HE, Fouda ME, Eltawil AM, Salama KN. Toward the Optimal Design and FPGA Implementation of Spiking Neural Networks. IEEE Trans Neural Netw Learn Syst 2022; 33:3988-4002. PMID: 33571097; DOI: 10.1109/tnnls.2021.3055421.
Abstract
The performance of a biologically plausible spiking neural network (SNN) largely depends on the model parameters and neural dynamics. This article proposes a parameter optimization scheme for improving the performance of a biologically plausible SNN, together with a parallel field-programmable gate array (FPGA) online-learning neuromorphic platform for its digital implementation based on two numerical methods, namely, the Euler and third-order Runge-Kutta (RK3) methods. The optimization scheme explores the impact of biological time constants on information transmission in the SNN and improves the convergence rate of the SNN on digit recognition with a suitable choice of the time constants. The parallel digital implementation leads to a significant speedup over software simulation on a general-purpose CPU. The parallel implementation with the Euler method enables around 180× (20×) training (inference) speedup over a PyTorch-based SNN simulation on CPU. Moreover, compared with previous work, our parallel implementation shows more than 300× (240×) improvement in speed and 180× (250×) reduction in energy consumption for training (inference). In addition, due to its high-order accuracy, the RK3 method is demonstrated to gain a 2× training speedup over the Euler method, which makes it suitable for online training in real-time applications.
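The article's specific parameter-optimization scheme is not reproduced in the abstract. The sketch below only illustrates the two integration methods it compares, applied to a generic leaky integrate-and-fire membrane equation; the neuron constants, input current, and time step are assumed values for illustration:

```python
import numpy as np

tau, v_rest, v_th, v_reset, R = 20.0, -65.0, -50.0, -65.0, 1.0  # ms, mV (assumed)

def dvdt(v, I):
    """Leaky integrate-and-fire membrane equation."""
    return (-(v - v_rest) + R * I) / tau

def step_euler(v, I, dt):
    """Forward Euler: one derivative evaluation per step."""
    return v + dt * dvdt(v, I)

def step_rk3(v, I, dt):
    """Kutta's third-order Runge-Kutta step: three derivative evaluations,
    but higher-order accuracy allows a larger dt."""
    k1 = dvdt(v, I)
    k2 = dvdt(v + 0.5 * dt * k1, I)
    k3 = dvdt(v - dt * k1 + 2.0 * dt * k2, I)
    return v + dt * (k1 + 4.0 * k2 + k3) / 6.0

def simulate(step, dt, I=20.0, T=100.0):
    v, spikes = v_rest, 0
    for _ in range(int(T / dt)):
        v = step(v, I, dt)
        if v >= v_th:          # threshold crossing: emit spike, reset
            v, spikes = v_reset, spikes + 1
    return spikes

print("Euler spikes:", simulate(step_euler, dt=0.1))
print("RK3 spikes  :", simulate(step_rk3, dt=0.1))
```

The RK3 step costs three derivative evaluations instead of one, but tolerates a larger dt at the same accuracy, which is the source of the training speedup the authors report.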
3. Wang H, He Z, Wang T, He J, Zhou X, Wang Y, Liu L, Wu N, Tian M, Shi C. TripleBrain: A Compact Neuromorphic Hardware Core With Fast On-Chip Self-Organizing and Reinforcement Spike-Timing Dependent Plasticity. IEEE Trans Biomed Circuits Syst 2022; 16:636-650. PMID: 35802542; DOI: 10.1109/tbcas.2022.3189240.
Abstract
The human brain cortex is a rich source of inspiration for constructing efficient artificial cognitive systems. In this paper, we investigate incorporating multiple brain-inspired computing paradigms into a compact, fast, and high-accuracy neuromorphic hardware implementation. We propose the TripleBrain hardware core, which tightly combines three common brain-inspired factors: spike-based processing and plasticity, the self-organizing map (SOM) mechanism, and the reinforcement learning scheme, to improve object recognition accuracy and processing throughput while keeping resource costs low. The proposed hardware core is fully event-driven to mitigate unnecessary operations, and enables various on-chip learning rules (including the proposed SOM-STDP & R-STDP rule and the R-SOM-STDP rule, two variants of our TripleBrain learning rule) with different accuracy-latency tradeoffs to satisfy user requirements. An FPGA prototype of the neuromorphic core was implemented and thoroughly tested. It realized high-speed learning (1349 frame/s) and inference (2698 frame/s), and obtained comparably high recognition accuracies of 95.10%, 80.89%, 100%, 94.94%, 82.32%, 100% and 97.93% on the MNIST, ETH-80, ORL-10, Yale-10, N-MNIST, Poker-DVS and Posture-DVS datasets, respectively, while consuming only 4146 (7.59%) slices, 32 (3.56%) DSPs and 131 (24.04%) Block RAMs on a Xilinx Zynq-7045 FPGA chip. Our neuromorphic core is very attractive for real-time, resource-limited edge intelligent systems.
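The SOM-STDP & R-STDP and R-SOM-STDP rules are specific to TripleBrain. As a generic illustration of the pair-based STDP primitive such rules build on, here is a minimal trace-based update in Python; the amplitudes, time constants, and spike timings are all assumed for illustration:

```python
import numpy as np

A_plus, A_minus = 0.01, 0.012      # potentiation/depression amplitudes (assumed)
tau_pre, tau_post = 20.0, 20.0     # trace time constants in ms (assumed)
dt = 1.0

def stdp_step(w, x_pre, x_post, pre_spike, post_spike):
    """One time step of pair-based STDP with exponential eligibility traces."""
    x_pre = x_pre * np.exp(-dt / tau_pre) + pre_spike
    x_post = x_post * np.exp(-dt / tau_post) + post_spike
    # Pre-before-post potentiates; post-before-pre depresses.
    w += A_plus * x_pre * post_spike - A_minus * x_post * pre_spike
    return float(np.clip(w, 0.0, 1.0)), x_pre, x_post

w, x_pre, x_post = 0.5, 0.0, 0.0
pre = [1, 0, 0, 0, 0, 0]           # pre spike at t = 0 ms
post = [0, 0, 1, 0, 0, 0]          # post spike at t = 2 ms -> potentiation
for p, q in zip(pre, post):
    w, x_pre, x_post = stdp_step(w, x_pre, x_post, p, q)
print(f"weight after pre->post pairing: {w:.4f}")  # rises above 0.5
```

In a reward-modulated (R-STDP) variant, an update like this would typically be accumulated into an eligibility trace and applied only when scaled by a later reward signal.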
4. Klassert R, Baumbach A, Petrovici MA, Gärttner M. Variational learning of quantum ground states on spiking neuromorphic hardware. iScience 2022; 25:104707. PMID: 35992070; PMCID: PMC9386107; DOI: 10.1016/j.isci.2022.104707.
Abstract
Recent research has demonstrated the usefulness of neural networks as variational ansatz functions for quantum many-body states. However, high-dimensional sampling spaces and transient autocorrelations confront these approaches with a challenging computational bottleneck. Compared to conventional neural networks, physical model devices offer a fast, efficient and inherently parallel substrate capable of related forms of Markov chain Monte Carlo sampling. Here, we demonstrate the ability of a neuromorphic chip to represent the ground states of quantum spin models by variational energy minimization. We develop a training algorithm and apply it to the transverse field Ising model, showing good performance at moderate system sizes (N≤10). A systematic hyperparameter study shows that performance depends on sample quality, which is limited by temporal parameter variations on the analog neuromorphic chip. Our work thus provides an important step towards harnessing the capabilities of neuromorphic hardware for tackling the curse of dimensionality in quantum many-body problems.
Highlights:
- Variational scheme for representing quantum ground states with neuromorphic hardware
- Accelerated physical system yields system-size independent sample generation time
- Accurate learning of ground states across a quantum phase transition
- Detailed analysis of algorithmic and technical limitations
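The neuromorphic sampling substrate cannot be reproduced in software, but the variational principle can be sketched. The snippet below estimates the transverse-field Ising energy of a simple product-state ansatz from Monte Carlo samples; the ansatz, couplings, and sample count are illustrative assumptions, far simpler than the spiking network used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
N, J, h = 8, 1.0, 0.5          # chain length and TFIM couplings (assumed)
a = rng.normal(0.0, 0.1, N)    # variational parameters of a product ansatz

def sample(n_samples):
    """Draw spin configurations s_i = +/-1 from |psi|^2 (product state),
    where log psi(s) = sum_i a_i s_i, so P(s_i = +1) = sigmoid(4 a_i)."""
    p_up = 1.0 / (1.0 + np.exp(-4.0 * a))
    return np.where(rng.random((n_samples, N)) < p_up, 1, -1)

def local_energy(s):
    """E_loc(s) = -J sum_i s_i s_{i+1} - h sum_i psi(flip_i s)/psi(s);
    for this ansatz the single-flip ratio is exp(-2 a_i s_i)."""
    e_zz = -J * np.sum(s[:, :-1] * s[:, 1:], axis=1)   # open boundary
    e_x = -h * np.sum(np.exp(-2.0 * a * s), axis=1)
    return e_zz + e_x

s = sample(20000)
E = local_energy(s).mean() / N
print(f"variational energy per spin: {E:.4f}")
```

A full variational loop would additionally estimate the gradient of the sampled energy with respect to the ansatz parameters and descend it; the paper's contribution is performing the sampling step on analog neuromorphic hardware.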
5. Yang S, Gao T, Wang J, Deng B, Lansdell B, Linares-Barranco B. Efficient Spike-Driven Learning With Dendritic Event-Based Processing. Front Neurosci 2021; 15:601109. PMID: 33679295; PMCID: PMC7933681; DOI: 10.3389/fnins.2021.601109.
Abstract
A critical challenge in neuromorphic computing is to devise computationally efficient learning algorithms. When implementing gradient-based learning, error information must be routed through the network so that each neuron knows its contribution to the output, and thus how to adjust its weight. This is known as the credit assignment problem. Exactly implementing a solution like backpropagation involves weight sharing, which requires additional bandwidth and computation in a neuromorphic system. Instead, models of learning from neuroscience can provide inspiration for how to communicate error information efficiently, without weight sharing. Here we present a novel dendritic event-based processing (DEP) algorithm, using a two-compartment leaky integrate-and-fire neuron with partially segregated dendrites, that effectively solves the credit assignment problem. To optimize the proposed algorithm, a dynamic fixed-point representation method and a piecewise linear approximation approach are presented, while the synaptic events are binarized during learning. These optimizations make the proposed DEP algorithm well suited for implementation in digital or mixed-signal neuromorphic hardware. The experimental results show that spiking representations can learn rapidly, achieving high performance with the proposed DEP algorithm. We find that the learning capability is affected by the degree of dendritic segregation and the form of the synaptic feedback connections. This study provides a bridge between biological learning and neuromorphic learning, and is relevant for real-time applications in the field of artificial intelligence.
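The full DEP algorithm (dynamic fixed-point representation, binarized synaptic events) is not spelled out in the abstract. The following is a minimal sketch of the two-compartment neuron it builds on, with feedforward input driving the soma and feedback (credit) input arriving through a partially segregated dendrite; all constants are assumed:

```python
tau_s, tau_d, g_d = 20.0, 10.0, 0.3   # time constants (ms) and coupling (assumed)
v_th, dt = 1.0, 0.5

def step(v_soma, v_dend, I_ff, I_fb):
    """Two-compartment LIF: feedforward input drives the soma directly,
    feedback (credit/error) input arrives via the segregated dendrite."""
    v_dend += dt * (-v_dend + I_fb) / tau_d
    v_soma += dt * (-v_soma + I_ff + g_d * v_dend) / tau_s
    spike = v_soma >= v_th
    if spike:
        v_soma = 0.0                   # reset after spike
    return v_soma, v_dend, spike

v_s, v_d, n_spikes = 0.0, 0.0, 0
for t in range(400):
    I_ff = 1.2                        # constant feedforward drive (assumed)
    I_fb = 0.8 if t > 200 else 0.0    # feedback switched on halfway
    v_s, v_d, spk = step(v_s, v_d, I_ff, I_fb)
    n_spikes += spk
print("total spikes:", n_spikes)
```

Because the dendritic compartment only couples into the soma through g_d, the feedback signal modulates the firing rate without being confounded with the feedforward drive, which is the property DEP exploits for credit assignment.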
Affiliation(s)
- Shuangming Yang
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Tian Gao
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Jiang Wang
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Bin Deng
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Benjamin Lansdell
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, United States
6. Xu C, Zhang W, Liu Y, Li P. Boosting Throughput and Efficiency of Hardware Spiking Neural Accelerators Using Time Compression Supporting Multiple Spike Codes. Front Neurosci 2020; 14:104. PMID: 32140093; PMCID: PMC7043203; DOI: 10.3389/fnins.2020.00104.
Abstract
Spiking neural networks (SNNs) are the third generation of neural networks and can explore both rate and temporal coding for energy-efficient event-driven computation. However, the decision accuracy of existing SNN designs is contingent upon processing a large number of spikes over a long period. Meanwhile, the switching power of SNN hardware accelerators is proportional to the number of spikes processed, while the length of the spike trains limits throughput and static power efficiency. This paper presents the first study on developing temporal compression to significantly boost throughput and reduce the energy dissipation of digital hardware SNN accelerators while remaining applicable to multiple spike codes. The proposed compression architectures consist of low-cost input spike compression units, novel input-and-output-weighted spiking neurons, and reconfigurable time constant scaling to support large and flexible time compression ratios. Our compression architectures can be transparently applied to any given pre-designed SNN employing either rate or temporal codes, while incurring minimal modification of the neural models, learning algorithms, and hardware design. Using spiking speech and image recognition datasets, we demonstrate the feasibility of supporting large time compression ratios of up to 16×, delivering up to 15.93×, 13.88×, and 86.21× improvements in throughput, energy dissipation, and the tradeoff among hardware area, runtime, energy, and classification accuracy, respectively, depending on the spike code, on a Xilinx Zynq-7000 FPGA. These results are achieved while incurring little extra hardware overhead.
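The compression architecture itself is hardware-specific. As a software sketch of the underlying idea, the snippet below lumps each window of k time steps of a rate-coded spike train into one weighted input event and rescales the membrane time constant by the compression ratio; the spike rate, synaptic weight, and constants are assumed:

```python
import numpy as np

rng = np.random.default_rng(2)

k = 4                                  # time compression ratio (assumed)
T = 256
spikes = (rng.random(T) < 0.2).astype(float)   # rate-coded input spike train

# Compress: each window of k steps becomes one weighted input event
# (an "input-weighted spike" carrying the spike count of that window).
weighted = spikes.reshape(T // k, k).sum(axis=1)

def lif_response(inp, tau, w=0.5, v_th=1.0):
    """Count output spikes of a simple LIF neuron driven by `inp`."""
    v, out = 0.0, 0
    for x in inp:
        v = v * np.exp(-1.0 / tau) + w * x
        if v >= v_th:
            v, out = 0.0, out + 1
    return out

tau = 16.0
print("original   :", lif_response(spikes, tau))
print("compressed :", lif_response(weighted, tau / k))  # rescaled time constant
```

The output spike counts of the two runs are comparable rather than identical, since lumping discards within-window spike timing; the paper's input-and-output-weighted neurons are designed to keep this error small across both rate and temporal codes.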
Affiliation(s)
- Changqing Xu
- School of Microelectronics, Xidian University, Xi'an, China
- Wenrui Zhang
- Department of Electrical and Computer Engineering, University of California, Santa Barbara, Santa Barbara, CA, United States
- Yu Liu
- Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX, United States
- Peng Li
- Department of Electrical and Computer Engineering, University of California, Santa Barbara, Santa Barbara, CA, United States