1. Raviv N. Linear Codes for Hyperdimensional Computing. Neural Comput 2024; 36:1084-1120. PMID: 38669691. DOI: 10.1162/neco_a_01665.
Abstract
Hyperdimensional computing (HDC) is an emerging computational paradigm for representing compositional information as high-dimensional vectors and has promising potential in applications ranging from machine learning to neuromorphic computing. One of the long-standing challenges in HDC is factoring a compositional representation into its constituent factors, also known as the recovery problem. In this article, we take a novel approach to the recovery problem and propose the use of random linear codes. These codes are subspaces over the Boolean field and are a well-studied topic in information theory with various applications in digital communication. We begin by showing that hyperdimensional encoding using random linear codes retains favorable properties of the prevalent (ordinary) random codes; hence, HD representations using the two methods have comparable information storage capabilities. We proceed to show that random linear codes offer a rich subcode structure that can be used to form key-value stores, which encapsulate the most common use cases of HDC. Most importantly, we show that under the framework we develop, random linear codes admit simple recovery algorithms to factor (either bundled or bound) compositional representations. The former relies on constructing certain linear equation systems over the Boolean field, the solution to which reduces the search space dramatically and strictly outperforms exhaustive search in many cases. The latter employs the subspace structure of these codes to achieve provably correct factorization. Both methods are strictly faster than the state-of-the-art resonator networks, often by an order of magnitude. We implemented our techniques in Python using a benchmark software library and demonstrated promising experimental results.
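To make the linearity this abstract leverages concrete, the following minimal NumPy sketch (illustrative only; the dimensions, generator matrix, and encode function are assumptions, and this is not the paper's recovery algorithm) builds hypervectors as codewords of a random linear code over GF(2) and checks that XOR binding keeps the result inside the code, which is the property that enables recovery via linear systems over the Boolean field.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 20, 10000                         # message length and hypervector dimension (illustrative)
G = rng.integers(0, 2, size=(k, n))      # random generator matrix over GF(2)

def encode(message):
    """Hypervector = codeword of the random linear code: message @ G over GF(2)."""
    return (message @ G) % 2

m1 = rng.integers(0, 2, size=k)
m2 = rng.integers(0, 2, size=k)
h1, h2 = encode(m1), encode(m2)

# XOR binding of two codewords is again a codeword (the codeword of the XOR
# of the two messages), so bound representations stay inside the subspace.
bound = (h1 + h2) % 2
assert np.array_equal(bound, encode((m1 + m2) % 2))
print("bound vector is again a codeword")
```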
Affiliation(s)
- Netanel Raviv
- Washington University in St. Louis, St. Louis, MO, U.S.A.
2. Osipov E, Kahawala S, Haputhanthri D, Kempitiya T, De Silva D, Alahakoon D, Kleyko D. Hyperseed: Unsupervised Learning With Vector Symbolic Architectures. IEEE Trans Neural Netw Learn Syst 2024; 35:6583-6597. PMID: 36383581. DOI: 10.1109/tnnls.2022.3211274.
Abstract
Motivated by recent innovations in biologically inspired neuromorphic hardware, this article presents a novel unsupervised machine learning algorithm named Hyperseed that draws on the principles of vector symbolic architectures (VSAs) for fast learning of a topology-preserving feature map of unlabeled data. It relies on two major VSA operations, binding and bundling. The algorithmic part of Hyperseed is expressed within the Fourier holographic reduced representations (FHRR) model, which is specifically suited for implementation on spiking neuromorphic hardware. The two primary contributions of the Hyperseed algorithm are few-shot learning and a learning rule based on a single vector operation. These properties are empirically evaluated on synthetic datasets and on illustrative benchmark use cases: IRIS classification and a language identification task using n-gram statistics. The results of these experiments confirm the capabilities of Hyperseed and its applications in neuromorphic hardware.
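For readers unfamiliar with the two FHRR operations named here, the sketch below shows the standard form of binding (elementwise complex multiplication of unit-magnitude phasors) and bundling (elementwise addition followed by renormalization); it is a generic FHRR illustration with arbitrary dimensions, not the Hyperseed learning rule itself.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 1024

def random_fhrr(dim):
    """FHRR hypervector: unit-magnitude complex phasors with random phases."""
    return np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, dim))

def bind(a, b):
    return a * b                          # elementwise complex product

def unbind(a, b):
    return a * np.conj(b)                 # multiply by the conjugate to invert binding

def bundle(*vs):
    v = np.sum(vs, axis=0)
    return v / (np.abs(v) + 1e-12)        # superpose, then renormalize to phasors

def similarity(a, b):
    return float(np.real(np.vdot(a, b)) / a.size)

x, y = random_fhrr(dim), random_fhrr(dim)
pair = bind(x, y)
print(similarity(unbind(pair, y), x))     # ~1.0: unbinding recovers x
print(similarity(pair, x))                # ~0.0: the bound pair resembles neither factor
```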
3. Hernández-Cano A, Ni Y, Zou Z, Zakeri A, Imani M. Hyperdimensional computing with holographic and adaptive encoder. Front Artif Intell 2024; 7:1371988. PMID: 38655269. PMCID: PMC11037243. DOI: 10.3389/frai.2024.1371988.
Abstract
Introduction: Brain-inspired computing has become an emerging field, where a growing number of works focus on developing algorithms that bring machine learning closer to human brains at the functional level. As one of the promising directions, Hyperdimensional Computing (HDC) is centered around the idea of having holographic and high-dimensional representation as the neural activities in our brains. Such representation is the fundamental enabler for the efficiency and robustness of HDC. However, existing HDC-based algorithms suffer from limitations within the encoder. To some extent, they all rely on manually selected encoders, meaning that the resulting representation is never adapted to the tasks at hand.
Methods: In this paper, we propose FLASH, a novel hyperdimensional learning method that incorporates an adaptive and learnable encoder design, aiming at better overall learning performance while maintaining good properties of HDC representation. Current HDC encoders leverage Random Fourier Features (RFF) for kernel correspondence and enable locality-preserving encoding. We propose to learn the encoder matrix distribution via gradient descent and effectively adapt the kernel for a more suitable HDC encoding.
Results: Our experiments on various regression datasets show that tuning the HDC encoder can significantly boost the accuracy, surpassing the current HDC-based algorithm and providing faster inference than other baselines, including RFF-based kernel ridge regression.
Discussion: The results indicate the importance of an adaptive encoder and customized high-dimensional representation in HDC.
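As a point of reference for the encoder discussion above, here is a minimal sketch of the fixed Random Fourier Feature encoder that the abstract describes as the non-adaptive starting point; FLASH's learned encoder distribution is not reproduced here, and the names, dimensions, and lengthscale are illustrative assumptions.

```python
import numpy as np

def make_rff_encoder(n_features, dim, lengthscale=1.0, rng=None):
    """Random Fourier Feature encoder: W sets the kernel bandwidth, b is a
    random phase, and the hypervector is cos(W x + b)."""
    rng = rng or np.random.default_rng()
    W = rng.standard_normal((dim, n_features)) / lengthscale
    b = rng.uniform(0.0, 2.0 * np.pi, dim)
    return lambda x: np.cos(W @ x + b)

rng = np.random.default_rng(0)
dim = 10000
encode = make_rff_encoder(n_features=8, dim=dim, lengthscale=2.0, rng=rng)
x1, x2 = rng.standard_normal(8), rng.standard_normal(8)

# Scaled inner products of the encodings approximate a shift-invariant
# (Gaussian) kernel, which is the locality-preserving property of RFF encoding.
print(2.0 * (encode(x1) @ encode(x2)) / dim)
print(np.exp(-np.sum((x1 - x2) ** 2) / (2.0 * 2.0 ** 2)))
```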
Affiliation(s)
- Alejandro Hernández-Cano
- Department of Computer Science, École polytechnique fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Yang Ni
- Department of Computer Science, University of California, Irvine, Irvine, CA, United States
- Zhuowen Zou
- Department of Computer Science, University of California, Irvine, Irvine, CA, United States
- Ali Zakeri
- Department of Computer Science, University of California, Irvine, Irvine, CA, United States
- Mohsen Imani
- Department of Computer Science, University of California, Irvine, Irvine, CA, United States
4. Kempitiya T, Alahakoon D, Osipov E, Kahawala S, De Silva D. A Two-Layer Self-Organizing Map with Vector Symbolic Architecture for Spatiotemporal Sequence Learning and Prediction. Biomimetics (Basel) 2024; 9:175. PMID: 38534860. DOI: 10.3390/biomimetics9030175.
Abstract
We propose a new nature- and neuroscience-inspired algorithm for spatiotemporal learning and prediction based on sequential recall and vector symbolic architecture. A key novelty is the learning of spatial and temporal patterns as decoupled concepts, where the temporal pattern sequences are constructed using the learned spatial patterns as an alphabet of elements. The decoupling, motivated by cognitive neuroscience research, provides the flexibility for fast and adaptive learning with dynamic changes to data and concept drift, and as such is better suited for real-time learning and prediction. The algorithm further addresses several key computational requirements for predicting the next occurrences based on real-life spatiotemporal data, which have been found to be challenging for current state-of-the-art algorithms. Firstly, spatial and temporal patterns are detected using unsupervised learning from unlabeled data streams in changing environments; secondly, vector symbolic architecture (VSA) is used to manage variable-length sequences; and thirdly, hyperdimensional (HD) computing-based associative memory is used to facilitate the continuous prediction of the next occurrences in sequential patterns. The algorithm has been empirically evaluated on two benchmark and three time-series datasets to demonstrate its advantages over the state of the art in spatiotemporal unsupervised sequence learning; the proposed ST-SOM algorithm achieves a 45% error reduction compared to the HTM algorithm.
Affiliation(s)
- Thimal Kempitiya
- Centre for Data Analytics and Cognition, La Trobe University, Melbourne, VIC 3086, Australia
- Damminda Alahakoon
- Centre for Data Analytics and Cognition, La Trobe University, Melbourne, VIC 3086, Australia
- Evgeny Osipov
- Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, 971 87 Luleå, Sweden
- Sachin Kahawala
- Centre for Data Analytics and Cognition, La Trobe University, Melbourne, VIC 3086, Australia
- Daswin De Silva
- Centre for Data Analytics and Cognition, La Trobe University, Melbourne, VIC 3086, Australia
5. Kleyko D, Bybee C, Huang PC, Kymn CJ, Olshausen BA, Frady EP, Sommer FT. Efficient Decoding of Compositional Structure in Holistic Representations. Neural Comput 2023; 35:1159-1186. PMID: 37187162. DOI: 10.1162/neco_a_01590.
Abstract
We investigate the task of retrieving information from compositional distributed representations formed by hyperdimensional computing/vector symbolic architectures and present novel techniques that achieve new information rate bounds. First, we provide an overview of the decoding techniques that can be used to approach the retrieval task. The techniques are categorized into four groups. We then evaluate the considered techniques in several settings that involve, for example, inclusion of external noise and storage elements with reduced precision. In particular, we find that the decoding techniques from the sparse coding and compressed sensing literature (rarely used for hyperdimensional computing/vector symbolic architectures) are also well suited for decoding information from the compositional distributed representations. Combining these decoding techniques with interference cancellation ideas from communications improves previously reported bounds (Hersche et al., 2021) of the information rate of the distributed representations from 1.20 to 1.40 bits per dimension for smaller codebooks and from 0.60 to 1.26 bits per dimension for larger codebooks.
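To illustrate two of the decoding ideas surveyed here, the toy sketch below bundles several bipolar codevectors and retrieves them first with a simple matched-filter readout and then with successive interference cancellation (explaining away already-decoded items); the dimensions and codebook sizes are arbitrary, and the sketch is not tied to the specific techniques evaluated in the article.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_symbols, n_stored = 1000, 64, 10
codebook = rng.choice([-1.0, 1.0], size=(n_symbols, dim))
stored = rng.choice(n_symbols, size=n_stored, replace=False)
bundle = codebook[stored].sum(axis=0)      # superposition of n_stored codevectors

def decode_matched_filter(bundle, codebook, k):
    """Pick the k codevectors most similar to the bundle."""
    return np.argsort(codebook @ bundle)[-k:]

def decode_with_cancellation(bundle, codebook, k):
    """Decode the strongest codevector, subtract its contribution, repeat."""
    residual = bundle.copy()
    found = []
    for _ in range(k):
        best = int(np.argmax(codebook @ residual))
        found.append(best)
        residual = residual - codebook[best]
    return found

print(sorted(stored))
print(sorted(decode_matched_filter(bundle, codebook, n_stored)))
print(sorted(decode_with_cancellation(bundle, codebook, n_stored)))
```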
Affiliation(s)
- Denis Kleyko
- Redwood Center for Theoretical Neuroscience, University of California at Berkeley, Berkeley, CA 94720, U.S.A.
- Intelligent Systems Laboratory, Research Institutes of Sweden, 16440 Kista, Sweden
- Connor Bybee
- Redwood Center for Theoretical Neuroscience, University of California at Berkeley, Berkeley, CA 94720, U.S.A.
- Ping-Chen Huang
- Redwood Center for Theoretical Neuroscience, University of California at Berkeley, Berkeley, CA 94720, U.S.A.
- Christopher J Kymn
- Redwood Center for Theoretical Neuroscience, University of California at Berkeley, Berkeley, CA 94720, U.S.A.
- Bruno A Olshausen
- Redwood Center for Theoretical Neuroscience, University of California at Berkeley, Berkeley, CA 94720, U.S.A.
- E Paxon Frady
- Neuromorphic Computing Laboratory, Intel Labs, Santa Clara, CA 95054, U.S.A.
- Friedrich T Sommer
- Redwood Center for Theoretical Neuroscience, University of California at Berkeley, Berkeley, CA 94720, U.S.A.
- Neuromorphic Computing Laboratory, Intel Labs, Santa Clara, CA 95054, U.S.A.
6. Frady EP, Kleyko D, Sommer FT. Variable Binding for Sparse Distributed Representations: Theory and Applications. IEEE Trans Neural Netw Learn Syst 2023; 34:2191-2204. PMID: 34478381. DOI: 10.1109/tnnls.2021.3105949.
Abstract
Variable binding is a cornerstone of symbolic reasoning and cognition. But how binding can be implemented in connectionist models has puzzled neuroscientists, cognitive psychologists, and neural network researchers for many decades. One type of connectionist model that naturally includes a binding operation is vector symbolic architectures (VSAs). In contrast to other proposals for variable binding, the binding operation in VSAs is dimensionality-preserving, which enables representing complex hierarchical data structures, such as trees, while avoiding a combinatoric expansion of dimensionality. Classical VSAs encode symbols by dense randomized vectors, in which information is distributed throughout the entire neuron population. By contrast, in the brain, features are encoded more locally, by the activity of single neurons or small groups of neurons, often forming sparse vectors of neural activation. Following Laiho et al. (2015), we explore symbolic reasoning with a special case of sparse distributed representations. Using techniques from compressed sensing, we first show that variable binding in classical VSAs is mathematically equivalent to tensor product binding between sparse feature vectors, another well-known binding operation that increases dimensionality. This theoretical result motivates us to study two dimensionality-preserving binding methods that include a reduction of the tensor matrix into a single sparse vector. One binding method, for general sparse vectors, uses random projections; the other, block-local circular convolution, is defined for sparse vectors with block structure (sparse block-codes). Our experiments reveal that block-local circular convolution binding has ideal properties, whereas random-projection-based binding also works but is lossy. We demonstrate in example applications that a VSA with block-local circular convolution and sparse block-codes reaches performance similar to that of classical VSAs. Finally, we discuss our results in the context of neuroscience and neural networks.
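A minimal sketch of the blockwise binding operation discussed above: hypervectors are split into blocks, and corresponding blocks are combined by circular convolution, so that sparse block-codes (one active unit per block) remain sparse block-codes after binding; the vector sizes here are arbitrary, and the code only illustrates the operation, not the paper's experiments.

```python
import numpy as np

def block_circular_convolution(x, y, block_size):
    """Bind two hypervectors block by block with circular convolution (via FFT)."""
    assert x.shape == y.shape and x.size % block_size == 0
    xb = x.reshape(-1, block_size)
    yb = y.reshape(-1, block_size)
    zb = np.fft.irfft(np.fft.rfft(xb, axis=1) * np.fft.rfft(yb, axis=1),
                      n=block_size, axis=1)
    return zb.reshape(-1)

def random_sparse_block_code(n_blocks, block_size, rng):
    """Sparse block-code: exactly one active unit per block."""
    v = np.zeros((n_blocks, block_size))
    v[np.arange(n_blocks), rng.integers(0, block_size, n_blocks)] = 1.0
    return v.reshape(-1)

rng = np.random.default_rng(0)
n_blocks, block_size = 100, 32
a = random_sparse_block_code(n_blocks, block_size, rng)
b = random_sparse_block_code(n_blocks, block_size, rng)
bound = block_circular_convolution(a, b, block_size)

# For sparse block-codes, binding shifts the active index of each block of a
# by the active index of the corresponding block of b (modulo block_size).
print(bound.reshape(n_blocks, block_size).argmax(axis=1)[:5])
print((a.reshape(n_blocks, block_size).argmax(axis=1)[:5]
       + b.reshape(n_blocks, block_size).argmax(axis=1)[:5]) % block_size)
```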
7. Rachkovskij DA. Representation of spatial objects by shift-equivariant similarity-preserving hypervectors. Neural Comput Appl 2022. DOI: 10.1007/s00521-022-07619-1.
8. Kleyko D, Frady EP, Kheffache M, Osipov E. Integer Echo State Networks: Efficient Reservoir Computing for Digital Hardware. IEEE Trans Neural Netw Learn Syst 2022; 33:1688-1701. PMID: 33351770. DOI: 10.1109/tnnls.2020.3043309.
Abstract
We propose an approximation of echo state networks (ESNs) that can be efficiently implemented on digital hardware based on the mathematics of hyperdimensional computing. The reservoir of the proposed integer ESN (intESN) is a vector containing only n-bit integers (a small n is normally sufficient for satisfactory performance). The recurrent matrix multiplication is replaced with an efficient cyclic shift operation. The proposed intESN approach is verified on typical reservoir computing tasks: memorizing a sequence of inputs, classifying time series, and learning dynamic processes. This architecture results in dramatic improvements in memory footprint and computational efficiency, with minimal performance loss. Experiments on a field-programmable gate array confirm that the proposed intESN approach is much more energy efficient than the conventional ESN.
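A minimal sketch of the kind of integer reservoir update described here: the recurrent matrix multiplication is replaced by a cyclic shift of the integer state, the encoded input is added, and values are clipped so that each entry stays representable in a few bits; the clipping threshold and dimensions are illustrative assumptions, not values from the paper.

```python
import numpy as np

def int_esn_step(state, input_hv, kappa=7):
    """One reservoir update: cyclic shift + input injection + clipping."""
    state = np.roll(state, 1)              # cheap "recurrence": cyclic shift
    state = state + input_hv               # inject the encoded (bipolar) input
    return np.clip(state, -kappa, kappa)   # keep entries as small integers

dim = 1000
rng = np.random.default_rng(0)
state = np.zeros(dim, dtype=np.int32)
for _ in range(20):                        # feed a short random input sequence
    token = rng.choice([-1, 1], size=dim).astype(np.int32)
    state = int_esn_step(state, token)
print(state[:10])
```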
9. Osipov GS, Panov AI. Planning Rational Behavior of Cognitive Semiotic Agents in a Dynamic Environment. Sci Tech Inf Process 2022. DOI: 10.3103/s0147688221060113.
10
Abstract
Vector Symbolic Architectures combine a high-dimensional vector space with a set of carefully designed operators in order to perform symbolic computations with large numerical vectors. Major goals are the exploitation of their representational power and their ability to deal with fuzziness and ambiguity. Over the past years, several VSA implementations have been proposed. The available implementations differ in the underlying vector space and the particular implementations of the VSA operators. This paper provides an overview of eleven available VSA implementations and discusses their commonalities and differences in the underlying vector space and operators. We create a taxonomy of available binding operations and show an important ramification for non-self-inverse binding operations using an example from analogical reasoning. A main contribution is the experimental comparison of the available implementations in order to evaluate (1) the capacity of bundles, (2) the approximation quality of non-exact unbinding operations, (3) the influence of combining binding and bundling operations on the query answering performance, and (4) the performance on two example applications: visual place recognition and language recognition. We expect this comparison and systematization to be relevant for the development of VSAs and to support the selection of an appropriate VSA for a particular task. The implementations are available.
11. Watkinson N, Givargis T, Joe V, Nicolau A, Veidenbaum A. Detecting COVID-19 Related Pneumonia On CT Scans Using Hyperdimensional Computing. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:3970-3973. PMID: 34892100. DOI: 10.1109/embc46164.2021.9630898.
Abstract
Pneumonia is a common complication associated with COVID-19 infections. Unlike common forms of pneumonia that spread quickly through large lung regions, COVID-19-related pneumonia starts in small localized pockets before spreading over the course of several days. This makes the infection more resilient and gives it a high probability of progressing to acute respiratory distress syndrome. Because of this peculiar spread pattern, the use of pulmonary computerized tomography (CT) scans was key in identifying COVID-19 infections. Identifying uncommon pulmonary diseases could be a strong line of defense in the early detection of new respiratory infection-causing viruses. In this paper we describe a classification algorithm based on hyperdimensional computing for the detection of COVID-19 pneumonia in CT scans. We test our algorithm on three different datasets. The highest reported accuracy is 95.2% with an F1 score of 0.90, and all three models had a precision of 1 (no false positives).
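The abstract does not spell out the classifier, so the following is only a generic hyperdimensional classification sketch of the same flavor: feature vectors are projected into a high-dimensional bipolar space, class prototypes are formed by bundling training encodings, and a query is assigned to the most similar prototype; the encoder, dimensions, and toy data are assumptions, not the paper's pipeline.

```python
import numpy as np

def encode(features, projection):
    """Bipolar hypervector via random projection and a sign nonlinearity."""
    return np.sign(projection @ features)

def train_prototypes(X, y, projection, n_classes):
    """Bundle (sum) the encoded training examples of each class."""
    prototypes = np.zeros((n_classes, projection.shape[0]))
    for features, label in zip(X, y):
        prototypes[label] += encode(features, projection)
    return prototypes

def classify(features, prototypes, projection):
    """Return the class whose prototype has the highest cosine similarity."""
    hv = encode(features, projection)
    sims = prototypes @ hv / (np.linalg.norm(prototypes, axis=1) * np.linalg.norm(hv) + 1e-9)
    return int(np.argmax(sims))

rng = np.random.default_rng(0)
n_features, dim = 64, 10000
projection = rng.standard_normal((dim, n_features))
X = rng.standard_normal((200, n_features))
y = (X[:, 0] > 0).astype(int)                          # toy labels
prototypes = train_prototypes(X, y, projection, n_classes=2)
print(classify(X[0], prototypes, projection), y[0])
```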
12. Watkinson N, Givargis T, Joe V, Nicolau A, Veidenbaum A. Class-Modeling of Septic Shock With Hyperdimensional Computing. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:1653-1659. PMID: 34891603. DOI: 10.1109/embc46164.2021.9630353.
Abstract
Sepsis arises when a patient's immune system has an extreme reaction to an infection. It is followed by septic shock if damage to organ tissue is so extensive that it causes total systemic failure. Early detection of septic shock among septic patients could save critical time for preparation and preventive treatment. Due to the high variance in symptoms and patient state before shock, it is challenging to create a protocol that would be effective across patients. However, since septic shock is an acute change in patient state, modeling patient stability could be more effective in detecting a condition that departs from it. In this paper we present a one-class classification approach to septic shock using hyperdimensional computing. We built various models that consider different contexts and can be adapted according to a target priority. Among septic patients, the models detect septic shock with 90% sensitivity and 60% overall accuracy up to three hours before the onset of septic shock, with the ability to adjust predictions according to incoming data. Additionally, the models can be easily adapted to prioritize sensitivity (increase true positives) or specificity (decrease false positives).
13. Mitrokhin A, Sutor P, Summers-Stay D, Fermüller C, Aloimonos Y. Symbolic Representation and Learning With Hyperdimensional Computing. Front Robot AI 2021; 7:63. PMID: 33501231. PMCID: PMC7805681. DOI: 10.3389/frobt.2020.00063.
Abstract
It has been proposed that machine learning techniques can benefit from symbolic representations and reasoning systems. We describe a method in which the two can be combined in a natural and direct way by use of hyperdimensional vectors and hyperdimensional computing. By using hashing neural networks to produce binary vector representations of images, we show how hyperdimensional vectors can be constructed such that vector-symbolic inference arises naturally out of their output. We design the Hyperdimensional Inference Layer (HIL) to facilitate this process and evaluate its performance compared to baseline hashing networks. In addition to this, we show that separate network outputs can directly be fused at the vector symbolic level within HILs to improve performance and robustness of the overall model. Furthermore, to the best of our knowledge, this is the first instance in which meaningful hyperdimensional representations of images are created on real data, while still maintaining hyperdimensionality.
Affiliation(s)
- Anton Mitrokhin
- Computer Vision Laboratory, Department of Computer Science, University of Maryland Institute for Advanced Computer Studies, University of Maryland, College Park, MD, United States
- Peter Sutor
- Computer Vision Laboratory, Department of Computer Science, University of Maryland Institute for Advanced Computer Studies, University of Maryland, College Park, MD, United States
- Douglas Summers-Stay
- Computational and Information Sciences Directorate, Army Research Laboratory, Adelphi, MD, United States
- Cornelia Fermüller
- Computer Vision Laboratory, Department of Computer Science, University of Maryland Institute for Advanced Computer Studies, University of Maryland, College Park, MD, United States
- Yiannis Aloimonos
- Computer Vision Laboratory, Department of Computer Science, University of Maryland Institute for Advanced Computer Studies, University of Maryland, College Park, MD, United States
14. Frady EP, Kent SJ, Olshausen BA, Sommer FT. Resonator Networks, 1: An Efficient Solution for Factoring High-Dimensional, Distributed Representations of Data Structures. Neural Comput 2020; 32:2311-2331. PMID: 33080162. DOI: 10.1162/neco_a_01331.
Abstract
The ability to encode and manipulate data structures with distributed neural representations could qualitatively enhance the capabilities of traditional neural networks by supporting rule-based symbolic reasoning, a central property of cognition. Here we show how this may be accomplished within the framework of Vector Symbolic Architectures (VSAs) (Plate, 1991; Gayler, 1998; Kanerva, 1996), whereby data structures are encoded by combining high-dimensional vectors with operations that together form an algebra on the space of distributed representations. In particular, we propose an efficient solution to a hard combinatorial search problem that arises when decoding elements of a VSA data structure: the factorization of products of multiple codevectors. Our proposed algorithm, called a resonator network, is a new type of recurrent neural network that interleaves VSA multiplication operations and pattern completion. We show in two examples, parsing of a tree-like data structure and parsing of a visual scene, how the factorization problem arises and how the resonator network can solve it. More broadly, resonator networks open the possibility of applying VSAs to myriad artificial intelligence problems in real-world domains. The companion article in this issue (Kent, Frady, Sommer, & Olshausen, 2020) presents a rigorous analysis and evaluation of the performance of resonator networks, showing that they outperform alternative approaches.
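To show what "interleaving VSA multiplication operations and pattern completion" looks like in the simplest (bipolar) setting, here is a toy resonator-style iteration: each factor estimate is updated by unbinding the other estimates from the composite vector, projecting the result onto its codebook, and re-binarizing; the dimensions, codebook sizes, and iteration count are arbitrary choices for illustration.

```python
import numpy as np

def resonator_factorize(s, codebooks, n_iters=100):
    """Toy resonator iteration for a composite s = elementwise product of one
    +/-1 codevector per codebook; returns the decoded index per factor."""
    dim = s.size
    # start each estimate at the (binarized) superposition of its codebook
    estimates = [np.sign(cb.sum(axis=0) + 1e-9) for cb in codebooks]
    for _ in range(n_iters):
        for f, cb in enumerate(codebooks):
            others = np.ones(dim)
            for g, est in enumerate(estimates):
                if g != f:
                    others = others * est
            unbound = s * others                                   # unbind the other factors
            estimates[f] = np.sign(cb.T @ (cb @ unbound) + 1e-9)   # codebook clean-up
    return [int(np.argmax(cb @ est)) for cb, est in zip(codebooks, estimates)]

rng = np.random.default_rng(0)
dim, n_codevectors = 2000, 15
codebooks = [rng.choice([-1.0, 1.0], size=(n_codevectors, dim)) for _ in range(3)]
true_idx = [3, 7, 11]
s = np.prod([cb[i] for cb, i in zip(codebooks, true_idx)], axis=0)
print(resonator_factorize(s, codebooks), true_idx)
```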
Affiliation(s)
- E Paxon Frady
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, CA 94720, U.S.A., and Intel Laboratories, Neuromorphic Computing Lab, San Francisco, CA, 94111, U.S.A.
- Spencer J Kent
- Redwood Center for Theoretical Neuroscience and Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA 94720, U.S.A.
- Bruno A Olshausen
- Redwood Center for Theoretical Neuroscience, Helen Wills Neuroscience Institute, and School of Optometry, University of California, Berkeley, Berkeley, CA 94720, U.S.A.
- Friedrich T Sommer
- Redwood Center for Theoretical Neuroscience and Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, U.S.A., and Intel Laboratories, Neuromorphic Computing Lab, San Francisco, CA 94111, U.S.A.
15. Neubert P, Schubert S, Protzel P. An Introduction to Hyperdimensional Computing for Robotics. Künstliche Intelligenz 2019. DOI: 10.1007/s13218-019-00623-z.