1
Liu Y, Tian H, Wu F, Liu A, Li Y, Sun H, Lanza M, Ren TL. Cellular automata imbedded memristor-based recirculated logic in-memory computing. Nat Commun 2023; 14:2695. [PMID: 37165017 PMCID: PMC10172358 DOI: 10.1038/s41467-023-38299-7] [Received: 08/02/2022] [Accepted: 04/20/2023] Open
Abstract
Memristor-based circuits offer low hardware costs and in-memory computing, but full-memristive circuit integration for different algorithms remains limited. Cellular automata (CA) have attracted attention for their well-known parallel, bio-inspired computational characteristics. Running CA on conventional chips suffers from low parallelism and high hardware costs, and dedicated hardware for CA remains elusive. We propose a recirculated logic operation scheme (RLOS) using memristive hardware and 2D transistors for CA evolution, significantly reducing hardware complexity. RLOS's versatility supports multiple CA algorithms on a single circuit, including elementary CA rules and more complex majority-classification and edge-detection algorithms. Results demonstrate up to a 79-fold reduction in hardware costs compared to FPGA-based approaches. RLOS-based reservoir computing is proposed for edge-computing development, boasting the lowest hardware cost (6 components per cell) among existing implementations. This work advances efficient, low-cost CA hardware and encourages the exploration of edge-computing hardware.
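The elementary CA rules this entry refers to evolve a 1-D binary lattice by a local update rule. As a purely illustrative software sketch (not the paper's memristive circuit), one synchronous update step can be written as a lookup into the 8-bit rule table; lattice size, boundary handling, and the choice of Rule 90 here are assumptions for the demo:

```python
import numpy as np

def eca_step(state, rule):
    # Each cell's next value is read from the 8-bit rule table,
    # indexed by its (left, self, right) neighborhood (periodic boundary).
    left = np.roll(state, 1)
    right = np.roll(state, -1)
    idx = (left << 2) | (state << 1) | right
    return ((rule >> idx) & 1).astype(np.uint8)

state = np.zeros(11, dtype=np.uint8)
state[5] = 1  # single seed cell in the middle
for _ in range(3):
    state = eca_step(state, 90)  # Rule 90: XOR of the two neighbors
print(state.tolist())  # → [0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0]
```

Every cell updates from the same small table in parallel, which is exactly the property that makes dedicated (rather than sequential) CA hardware attractive.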
Affiliation(s)
- Yanming Liu
- School of Integrated Circuits, Tsinghua University, 100084, Beijing, China
- Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua University, 100084, Beijing, China
- He Tian
- School of Integrated Circuits, Tsinghua University, 100084, Beijing, China
- Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua University, 100084, Beijing, China
- Fan Wu
- School of Integrated Circuits, Tsinghua University, 100084, Beijing, China
- Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua University, 100084, Beijing, China
- Anhan Liu
- School of Integrated Circuits, Tsinghua University, 100084, Beijing, China
- Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua University, 100084, Beijing, China
- Yihao Li
- Weiyang College, Tsinghua University, 100084, Beijing, China
- Hao Sun
- School of Integrated Circuits, Tsinghua University, 100084, Beijing, China
- Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua University, 100084, Beijing, China
- Mario Lanza
- Physical Science and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia
- Tian-Ling Ren
- School of Integrated Circuits, Tsinghua University, 100084, Beijing, China
- Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua University, 100084, Beijing, China
2
Teeters JL, Kleyko D, Kanerva P, Olshausen BA. On separating long- and short-term memories in hyperdimensional computing. Front Neurosci 2023; 16:867568. [PMID: 36699525 PMCID: PMC9869149 DOI: 10.3389/fnins.2022.867568] [Received: 02/01/2022] [Accepted: 10/05/2022] Open
Abstract
Operations on high-dimensional, fixed-width vectors can be used to distribute information from several vectors over a single vector of the same width. For example, a set of key-value pairs can be encoded into a single vector with multiplication and addition of the corresponding key and value vectors: the keys are bound to their values with component-wise multiplication, and the key-value pairs are combined into a single superposition vector with component-wise addition. The superposition vector is thus a memory which can be queried for the value of any of the keys, but the result of the query is approximate. The exact vector is retrieved from a codebook (a.k.a. item memory), which contains the vectors defined in the system. To perform these operations, the item-memory vectors and the superposition vector must be the same width, so increasing the capacity of the memory requires increasing the width of both. In this article, we demonstrate that in a regime where many (e.g., 1,000 or more) key-value pairs are stored, an associative memory which maps key vectors to value vectors requires less memory and less computing to obtain the same reliability of storage as a superposition vector. These advantages arise because the number of storage locations in an associative memory can be increased without increasing the width of the vectors in the item memory. An associative memory would not replace a superposition vector as a medium of storage, but could augment it, because data recalled from an associative memory can be used in algorithms that operate on a superposition vector. This is analogous to how human working memory (which stores about seven items) uses information recalled from long-term memory (which is much larger than working memory). We demonstrate the advantages of an associative memory experimentally using the storage of large finite-state automata, which could model the storage and recall of state-dependent behavior by brains.
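The bind-then-superpose scheme the abstract describes can be sketched with bipolar hypervectors. This is an illustrative toy (key/value names and dimensions are made up, and it shows only the superposition-vector side, not the associative memory the paper advocates):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector width

# Codebook (item memory): one random bipolar hypervector per symbol.
codebook = {name: rng.choice([-1, 1], size=D)
            for name in ["k1", "k2", "k3", "v1", "v2", "v3"]}

# Bind each key to its value (component-wise multiply),
# then superpose the pairs into one vector (component-wise add).
memory = (codebook["k1"] * codebook["v1"]
          + codebook["k2"] * codebook["v2"]
          + codebook["k3"] * codebook["v3"])

# Query: unbinding with a key yields a noisy copy of its value;
# clean-up is a nearest-neighbor search over the codebook.
noisy = memory * codebook["k2"]
best = max(codebook, key=lambda n: np.dot(noisy, codebook[n]))
print(best)  # → v2
```

At this width the cross-terms from the other pairs are tiny relative to the matched value, so the clean-up recovers the exact stored vector; the paper's point is that as the number of pairs grows, this only keeps working if D grows too.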
Affiliation(s)
- Jeffrey L. Teeters
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, CA, United States
- Denis Kleyko
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, CA, United States
- Intelligent Systems Lab, Research Institutes of Sweden, Kista, Sweden
- Pentti Kanerva
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, CA, United States
- Bruno A. Olshausen
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, CA, United States
3
Kleyko D, Davies M, Frady EP, Kanerva P, Kent SJ, Olshausen BA, Osipov E, Rabaey JM, Rachkovskij DA, Rahimi A, Sommer FT. Vector Symbolic Architectures as a Computing Framework for Emerging Hardware. Proc IEEE Inst Electr Electron Eng 2022; 110:1538-1571. [PMID: 37868615 PMCID: PMC10588678]
Abstract
This article reviews recent progress in the development of the computing framework of Vector Symbolic Architectures (also known as Hyperdimensional Computing). This framework is well suited for implementation in stochastic, emerging hardware, and it naturally expresses the types of cognitive operations required for Artificial Intelligence (AI). We demonstrate in this article that the field-like algebraic structure of Vector Symbolic Architectures offers simple but powerful operations on high-dimensional vectors that can support all data structures and manipulations relevant to modern computing. In addition, we illustrate the distinguishing feature of Vector Symbolic Architectures, "computing in superposition," which sets them apart from conventional computing and opens the door to efficient solutions of the difficult combinatorial search problems inherent in AI applications. We sketch ways of demonstrating that Vector Symbolic Architectures are computationally universal, and we see them acting as a framework for computing with distributed representations that can serve as an abstraction layer for emerging computing hardware. This article serves as a reference for computer architects by illustrating the philosophy behind Vector Symbolic Architectures, techniques of distributed computing with them, and their relevance to emerging computing hardware such as neuromorphic computing.
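One of the data structures such frameworks can represent is a sequence, encoded by marking each position with a fixed permutation before superposing. A minimal sketch (the cyclic shift as the permutation and the tiny three-symbol alphabet are assumptions for the demo, not the review's specific construction):

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000
item_memory = {c: rng.choice([-1, 1], size=D) for c in "abc"}

def encode_sequence(s):
    # Position i is marked by applying a fixed permutation
    # (here a cyclic shift) i times before superposing.
    return sum(np.roll(item_memory[c], i) for i, c in enumerate(s))

seq = encode_sequence("abc")  # one vector holds the whole sequence

# Query position 1: invert the permutation once, then clean up
# against the item memory (nearest neighbor by dot product).
query = np.roll(seq, -1)
best = max(item_memory, key=lambda c: np.dot(query, item_memory[c]))
print(best)  # → b
```

All elements coexist in one vector, and any position is recovered with the same fixed-cost operations, which is the "computing in superposition" idea the review highlights.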
Affiliation(s)
- Denis Kleyko
- Redwood Center for Theoretical Neuroscience at the University of California at Berkeley, CA 94720, USA and also with the Intelligent Systems Lab at Research Institutes of Sweden, 16440 Kista, Sweden
- Mike Davies
- Neuromorphic Computing Lab, Intel Labs, Santa Clara, CA 95054, USA
- E Paxon Frady
- Neuromorphic Computing Lab, Intel Labs, Santa Clara, CA 95054, USA
- Pentti Kanerva
- Redwood Center for Theoretical Neuroscience at the University of California at Berkeley, CA 94720, USA
- Spencer J Kent
- Redwood Center for Theoretical Neuroscience at the University of California at Berkeley, CA 94720, USA
- Bruno A Olshausen
- Redwood Center for Theoretical Neuroscience at the University of California at Berkeley, CA 94720, USA
- Evgeny Osipov
- Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, 97187 Luleå, Sweden
- Jan M Rabaey
- Department of Electrical Engineering and Computer Sciences at the University of California at Berkeley, CA 94720, USA
- Dmitri A Rachkovskij
- International Research and Training Center for Information Technologies and Systems, 03680 Kyiv, Ukraine, and the Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, 97187 Luleå, Sweden
- Abbas Rahimi
- IBM Research - Zurich, 8803 Rüschlikon, Switzerland
- Friedrich T Sommer
- Neuromorphic Computing Lab, Intel Labs, Santa Clara, CA 95054, USA, and the Redwood Center for Theoretical Neuroscience at the University of California at Berkeley, CA 94720, USA
4
Menon A, Sun D, Sabouri S, Lee K, Aristio M, Liew H, Rabaey JM. A Highly Energy-Efficient Hyperdimensional Computing Processor for Biosignal Classification. IEEE Trans Biomed Circuits Syst 2022; 16:524-534. [PMID: 35776812 DOI: 10.1109/tbcas.2022.3187944]
Abstract
Hyperdimensional computing (HDC) is a brain-inspired computing paradigm that operates on pseudo-random hypervectors to perform high-accuracy classifications for biomedical applications. The energy efficiency of prior HDC processors for this computationally minimal algorithm is dominated by costly hypervector memory storage, which grows linearly with the number of sensors. To address this, the memory is replaced with a lightweight cellular automaton for on-the-fly hypervector generation. This technique is explored in conjunction with vector folding for various real-time classification latencies in post-layout simulation on an emotion-recognition dataset with 200 channels. The proposed architecture achieves 39.1 nJ per prediction, a 4.9× energy-efficiency improvement (9.5× per channel) over the state-of-the-art HDC processor; at maximum throughput, the improvement is 10.7× (33.5× per channel). An optimized support vector machine (SVM) processor is designed in this work for the same use case. HDC is 9.5× more energy-efficient than the SVM, paving the way for it to become the paradigm of choice for high-accuracy, on-board biosignal classification.
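The memory-replacement trick can be sketched in software: instead of storing one hypervector per channel, keep a single seed and regenerate each vector deterministically by iterating a cellular automaton. This is a sketch of the idea, not the paper's circuit; Rule 90 (a common choice for this in the HDC literature) and the vector width are assumptions:

```python
import numpy as np

def rule90_step(state):
    # Rule 90: each cell becomes the XOR of its two neighbors
    # (periodic boundary) -- a very cheap pseudo-random update.
    return np.roll(state, 1) ^ np.roll(state, -1)

D = 1024
seed = np.random.default_rng(42).integers(0, 2, size=D, dtype=np.uint8)

def hypervector(n):
    # Regenerate the n-th channel's hypervector on the fly by
    # iterating the CA n steps from the single stored seed.
    s = seed.copy()
    for _ in range(n):
        s = rule90_step(s)
    return s

# Only the seed is stored; every hypervector is reproducible on demand,
# so storage no longer grows with the number of channels.
assert np.array_equal(hypervector(7), hypervector(7))
```

The trade is a few XOR-and-shift steps per lookup in exchange for eliminating the per-channel memory that dominated prior designs' energy.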
5
Menon A, Natarajan A, Agashe R, Sun D, Aristio M, Liew H, Shao YS, Rabaey JM. Efficient emotion recognition using hyperdimensional computing with combinatorial channel encoding and cellular automata. Brain Inform 2022; 9:14. [PMID: 35759153 PMCID: PMC9237202 DOI: 10.1186/s40708-022-00162-8] [Received: 03/29/2021] [Accepted: 06/15/2022] Open
Abstract
In this paper, a hardware-optimized approach to emotion recognition based on the efficient brain-inspired hyperdimensional computing (HDC) paradigm is proposed. Emotion recognition provides valuable information for human-computer interaction; however, the large number of input channels (>200) and modalities (>3) involved makes it significantly expensive from a memory perspective. To address this, methods for memory reduction and optimization are proposed, including a novel approach that takes advantage of the combinatorial nature of the encoding process, and an elementary cellular automaton. HDC with early sensor fusion is implemented alongside the proposed techniques, achieving two-class multi-modal classification accuracies of >76% for valence and >73% for arousal on the multi-modal AMIGOS and DEAP datasets, almost always better than the state of the art. The required vector storage is reduced by 98% and the frequency of vector requests by at least 1/5. The results demonstrate the potential of efficient hyperdimensional computing for low-power, multi-channel emotion recognition tasks.
Affiliation(s)
- Alisha Menon
- Department of Electrical Engineering and Computer Science, University of California at Berkeley, Berkeley, CA, USA
- Anirudh Natarajan
- Department of Electrical Engineering and Computer Science, University of California at Berkeley, Berkeley, CA, USA
- Reva Agashe
- Department of Electrical Engineering and Computer Science, University of California at Berkeley, Berkeley, CA, USA
- Daniel Sun
- Department of Electrical Engineering and Computer Science, University of California at Berkeley, Berkeley, CA, USA
- Melvin Aristio
- Department of Electrical Engineering and Computer Science, University of California at Berkeley, Berkeley, CA, USA
- Harrison Liew
- Department of Electrical Engineering and Computer Science, University of California at Berkeley, Berkeley, CA, USA
- Yakun Sophia Shao
- Department of Electrical Engineering and Computer Science, University of California at Berkeley, Berkeley, CA, USA
- Jan M. Rabaey
- Department of Electrical Engineering and Computer Science, University of California at Berkeley, Berkeley, CA, USA