1
Ravichandran N, Lansner A, Herman P. Spiking representation learning for associative memories. Front Neurosci 2024;18:1439414. PMID: 39371606; PMCID: PMC11450452; DOI: 10.3389/fnins.2024.1439414.
Abstract
Networks of interconnected neurons communicating through spiking signals offer the bedrock of neural computations. Our brain's spiking neural networks have the computational capacity to achieve complex pattern recognition and cognitive functions effortlessly. However, solving real-world problems with artificial spiking neural networks (SNNs) has proved to be difficult for a variety of reasons. Crucially, scaling SNNs to large networks and processing large-scale real-world datasets have been challenging, especially when compared to their non-spiking deep learning counterparts. The critical operation that is needed of SNNs is the ability to learn distributed representations from data and use these representations for perceptual, cognitive and memory operations. In this work, we introduce a novel SNN that performs unsupervised representation learning and associative memory operations leveraging Hebbian synaptic and activity-dependent structural plasticity coupled with neuron-units modelled as Poisson spike generators with sparse firing (~1 Hz mean and ~100 Hz maximum firing rate). Crucially, the architecture of our model derives from the neocortical columnar organization and combines feedforward projections for learning hidden representations and recurrent projections for forming associative memories. We evaluated the model on properties relevant for attractor-based associative memories such as pattern completion, perceptual rivalry, distortion resistance, and prototype extraction.
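The abstract describes neuron units as Poisson spike generators with sparse firing, trained by Hebbian plasticity over feedforward projections. Purely for orientation, the snippet below is a minimal numpy sketch of that combination, not the authors' model: the layer sizes, the soft competition step, and the learning rate are assumptions, and the structural-plasticity and recurrent attractor parts are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID = 64, 32      # assumed layer sizes (not from the paper)
DT = 0.001                # 1 ms simulation step
MAX_RATE = 100.0          # ~100 Hz maximum firing rate, as quoted in the abstract
ETA = 0.01                # assumed Hebbian learning rate

W = rng.normal(0.0, 0.01, size=(N_IN, N_HID))   # feedforward weights

def poisson_spikes(rates_hz):
    """Draw one time step of Poisson spikes given per-unit firing rates in Hz."""
    return (rng.random(rates_hz.shape) < rates_hz * DT).astype(float)

def step(input_rates_hz, W):
    """One step: sample input spikes, drive hidden units, sample hidden spikes,
    and apply a Hebbian outer-product update on coincident pre/post spikes."""
    s_in = poisson_spikes(input_rates_hz)
    drive = s_in @ W
    p = np.exp(drive - drive.max())
    p /= p.sum()                              # soft competition keeps hidden activity sparse
    s_hid = poisson_spikes(p * MAX_RATE)      # hidden rates bounded by MAX_RATE
    W += ETA * np.outer(s_in, s_hid)          # in-place Hebbian update
    return s_hid

input_rates = rng.uniform(0.0, 10.0, N_IN)    # assumed input rates (Hz)
for _ in range(1000):                         # simulate roughly one second
    step(input_rates, W)
```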
Affiliation(s)
- Naresh Ravichandran
- Computational Cognitive Brain Science Group, Department of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Anders Lansner
- Computational Cognitive Brain Science Group, Department of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Department of Mathematics, Stockholm University, Stockholm, Sweden
- Pawel Herman
- Computational Cognitive Brain Science Group, Department of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Digital Futures, KTH Royal Institute of Technology, Stockholm, Sweden
- Swedish e-Science Research Centre (SeRC), Stockholm, Sweden
2
Sacouto L, Wichert A. Competitive learning to generate sparse representations for associative memory. Neural Netw 2023;168:32-43. PMID: 37734137; DOI: 10.1016/j.neunet.2023.09.005.
Abstract
One of the most well-established brain principles, Hebbian learning, has led to the theoretical concept of neural assemblies, and many interesting brain theories have been built on it. Palm's work implements this concept through multiple binary Willshaw associative memories, in a model that not only has wide cognitive explanatory power but also makes neuroscientific predictions. Yet Willshaw's associative memory can only achieve top capacity when the stored vectors are extremely sparse (the number of active bits can grow logarithmically with the vector's length). This strict requirement makes it difficult to apply any model that uses this associative memory, like Palm's, to real data; hence most works apply the memory to optimal randomly generated codes that do not represent any information. This creates the need for encoders that take real data and produce sparse representations, a problem also raised by Barlow's efficient coding principle. In this work, we propose a biologically constrained network that encodes images into codes suitable for Willshaw's associative memory. The network is organized into groups of neurons that specialize on local receptive fields and learn through a competitive scheme. After conducting auto- and hetero-association experiments on two visual data sets, we conclude that our network not only beats sparse coding baselines but also comes close to the performance achieved using optimal random codes.
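The encoder idea described above can be sketched very compactly: neurons are organized into groups, each group looks at one local patch of the image, only the within-group winner becomes active, and the winner's weights move toward its input. The code below is a minimal illustration under assumed group counts, patch sizes, and learning rate; it is not the authors' network.

```python
import numpy as np

rng = np.random.default_rng(1)

N_GROUPS, UNITS_PER_GROUP, PATCH_DIM = 16, 8, 49   # assumed sizes (e.g., 7x7 patches)
ETA = 0.05                                         # assumed learning rate

# One receptive-field template per unit, organized by group.
weights = rng.random((N_GROUPS, UNITS_PER_GROUP, PATCH_DIM))

def encode(patches, learn=True):
    """Winner-take-all within each group -> binary code with one active bit per group."""
    code = np.zeros((N_GROUPS, UNITS_PER_GROUP), dtype=np.uint8)
    for g in range(N_GROUPS):
        sims = weights[g] @ patches[g]                 # match each unit against its local patch
        winner = int(np.argmax(sims))
        code[g, winner] = 1
        if learn:
            # competitive (winner-only) update: move the winner toward the input
            weights[g, winner] += ETA * (patches[g] - weights[g, winner])
    return code.ravel()                                # sparse code a Willshaw memory can store

patches = rng.random((N_GROUPS, PATCH_DIM))            # stand-in for local patches of one image
code = encode(patches)
print(int(code.sum()), "active bits out of", code.size)
```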
Affiliation(s)
- Luis Sacouto
- INESC-id & Instituto Superior Tecnico, University of Lisbon, Av. Rovisco Pais 1, Lisbon, 1049-001, Portugal.
- Andreas Wichert
- INESC-id & Instituto Superior Tecnico, University of Lisbon, Av. Rovisco Pais 1, Lisbon, 1049-001, Portugal.
3
Frady EP, Kleyko D, Sommer FT. Variable Binding for Sparse Distributed Representations: Theory and Applications. IEEE Trans Neural Netw Learn Syst 2023;34:2191-2204. PMID: 34478381; DOI: 10.1109/tnnls.2021.3105949.
Abstract
Variable binding is a cornerstone of symbolic reasoning and cognition. But how binding can be implemented in connectionist models has puzzled neuroscientists, cognitive psychologists, and neural network researchers for many decades. One type of connectionist model that naturally includes a binding operation is vector symbolic architectures (VSAs). In contrast to other proposals for variable binding, the binding operation in VSAs is dimensionality-preserving, which enables representing complex hierarchical data structures, such as trees, while avoiding a combinatoric expansion of dimensionality. Classical VSAs encode symbols by dense randomized vectors, in which information is distributed throughout the entire neuron population. By contrast, in the brain, features are encoded more locally, by the activity of single neurons or small groups of neurons, often forming sparse vectors of neural activation. Following Laiho et al. (2015), we explore symbolic reasoning with a special case of sparse distributed representations. Using techniques from compressed sensing, we first show that variable binding in classical VSAs is mathematically equivalent to tensor product binding between sparse feature vectors, another well-known binding operation which increases dimensionality. This theoretical result motivates us to study two dimensionality-preserving binding methods that include a reduction of the tensor matrix into a single sparse vector. One binding method for general sparse vectors uses random projections, the other, block-local circular convolution, is defined for sparse vectors with block structure, sparse block-codes. Our experiments reveal that block-local circular convolution binding has ideal properties, whereas random projection based binding also works, but is lossy. We demonstrate in example applications that a VSA with block-local circular convolution and sparse block-codes reaches similar performance as classical VSAs. Finally, we discuss our results in the context of neuroscience and neural networks.
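As a concrete illustration of the binding operation singled out above, the sketch below binds two sparse block-codes (one active entry per block) by circularly convolving corresponding blocks, and unbinds by block-wise circular correlation. Block counts and lengths are assumptions; this is a toy rendering of the operation, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

N_BLOCKS, BLOCK_LEN = 10, 32      # assumed block structure

def random_block_code(n_blocks=N_BLOCKS, block_len=BLOCK_LEN):
    """Sparse block-code: exactly one active entry per block."""
    v = np.zeros((n_blocks, block_len))
    v[np.arange(n_blocks), rng.integers(0, block_len, n_blocks)] = 1.0
    return v

def bind(a, b):
    """Block-local circular convolution: convolve block i of a with block i of b."""
    out = np.zeros_like(a)
    for i in range(a.shape[0]):
        out[i] = np.real(np.fft.ifft(np.fft.fft(a[i]) * np.fft.fft(b[i])))
    return out

def unbind(c, a):
    """Approximate inverse: block-local circular correlation with a."""
    out = np.zeros_like(c)
    for i in range(c.shape[0]):
        out[i] = np.real(np.fft.ifft(np.fft.fft(c[i]) * np.conj(np.fft.fft(a[i]))))
    return out

a, b = random_block_code(), random_block_code()
c = bind(a, b)                    # the bound vector is again one-hot per block
b_hat = unbind(c, a)
print(np.array_equal(b_hat.argmax(axis=1), b.argmax(axis=1)))
```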
4
Khona M, Fiete IR. Attractor and integrator networks in the brain. Nat Rev Neurosci 2022;23:744-766. DOI: 10.1038/s41583-022-00642-0.
5
Yang C, Xiong Z, Liu J, Chao L, Chen Y. A Path Integration Approach Based on Multiscale Grid Cells for Large-Scale Navigation. IEEE Trans Cogn Dev Syst 2022. DOI: 10.1109/tcds.2021.3092609.
Affiliation(s)
- Chuang Yang
- Navigation Research Center, School of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Zhi Xiong
- Navigation Research Center, School of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Jianye Liu
- Navigation Research Center, School of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Lijun Chao
- Navigation Research Center, School of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Yudi Chen
- Navigation Research Center, School of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
6
Mofrad AA, Mofrad SA, Yazidi A, Parker MG. On Neural Associative Memory Structures: Storage and Retrieval of Sequences in a Chain of Tournaments. Neural Comput 2021;33:2550-2577. PMID: 34412117; DOI: 10.1162/neco_a_01417.
Abstract
Associative memories enjoy many interesting properties in terms of error correction capabilities, robustness to noise, storage capacity, and retrieval performance, and their usage spans a large set of applications. In this letter, we investigate and extend tournament-based neural networks, originally proposed by Jiang, Gripon, Berrou, and Rabbat (2016), a sequence-storage associative memory architecture with high memory efficiency and accurate sequence retrieval. We propose a more general method for learning the sequences, which we call feedback tournament-based neural networks. The retrieval process is also extended to both directions, forward and backward; in other words, any large-enough segment of a sequence can produce the whole sequence. Furthermore, two retrieval algorithms, cache-winner and explore-winner, are introduced to increase retrieval performance. Through simulation results, we shed light on the strengths and weaknesses of each algorithm.
Affiliation(s)
- Samaneh Abolpour Mofrad
- Department of Computer Science, Electrical Engineering, and Mathematical Sciences, Western Norway University of Applied Sciences, 5063 Bergen, Norway, and Mohn Medical Imaging and Visualization Center, Haukeland University Hospital, 5021 Bergen, Norway
- Anis Yazidi
- Department of Computer Science, OsloMet, Oslo Metropolitan University, 0130 Oslo, Norway and Department of Plastic and Reconstructive Surgery, Oslo University Hospital, 0318 Oslo, Norway
7
Knoblauch A, Palm G. Iterative Retrieval and Block Coding in Autoassociative and Heteroassociative Memory. Neural Comput 2020;32:205-260. DOI: 10.1162/neco_a_01247.
Abstract
Neural associative memories (NAM) are perceptron-like single-layer networks with fast synaptic learning typically storing discrete associations between pairs of neural activity patterns. Gripon and Berrou (2011) investigated NAM employing block coding, a particular sparse coding method, and reported a significant increase in storage capacity. Here we verify and extend their results for both heteroassociative and recurrent autoassociative networks. For this we provide a new analysis of iterative retrieval in finite autoassociative and heteroassociative networks that allows estimating storage capacity for random and block patterns. Furthermore, we have implemented various retrieval algorithms for block coding and compared them in simulations to our theoretical results and previous simulation data. With theory and experiments in good agreement, we find that finite networks employing block coding can store significantly more memory patterns. However, due to the reduced information per block pattern, it is not possible to significantly increase the stored information per synapse. Asymptotically, the information retrieval capacity converges to the known limits [Formula: see text] and [Formula: see text] also for block coding. We have also implemented very large recurrent networks of up to [Formula: see text] neurons, showing that maximal capacity [Formula: see text] bit per synapse occurs for finite networks having a size [Formula: see text] similar to cortical macrocolumns.
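To make the setting concrete, here is a small sketch (not the authors' code) of a clipped-Hebbian, Willshaw-style autoassociative matrix storing block-coded patterns and retrieving them by iterated winner-take-all within each block; the network size, pattern count, and retrieval rule are simplified assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

N_BLOCKS, BLOCK_LEN = 8, 16            # assumed block-code geometry
N = N_BLOCKS * BLOCK_LEN               # total number of neurons
M = 25                                 # number of stored patterns (kept modest for reliable recall)

def random_block_pattern():
    """Binary pattern with exactly one active neuron per block."""
    p = np.zeros(N, dtype=np.uint8)
    offsets = np.arange(N_BLOCKS) * BLOCK_LEN
    p[offsets + rng.integers(0, BLOCK_LEN, N_BLOCKS)] = 1
    return p

patterns = [random_block_pattern() for _ in range(M)]

# Willshaw (clipped Hebbian) storage: a synapse is set if the pair was ever co-active.
W = np.zeros((N, N), dtype=np.uint8)
for p in patterns:
    W |= np.outer(p, p)
np.fill_diagonal(W, 0)

def retrieve(cue, n_iter=5):
    """Iterative retrieval: sum synaptic input, then winner-take-all inside each block."""
    x = cue.copy()
    for _ in range(n_iter):
        h = W @ x
        x = np.zeros(N, dtype=np.uint8)
        for b in range(N_BLOCKS):
            sl = slice(b * BLOCK_LEN, (b + 1) * BLOCK_LEN)
            x[sl][np.argmax(h[sl])] = 1
    return x

# Cue with half of the blocks erased; check whether the stored pattern is recovered.
target = patterns[0]
cue = target.copy()
cue[: N // 2] = 0
print(np.array_equal(retrieve(cue), target))
```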
Affiliation(s)
- Andreas Knoblauch
- KEIM Institute, Albstadt-Sigmaringen University, D-72458 Albstadt, Germany
- Günther Palm
- Ulm University, Institute for Neural Information Processing, D-89081 Ulm, Germany
8
Wouafo H, Chavet C, Coussy P. Clone-Based Encoded Neural Networks to Design Efficient Associative Memories. IEEE Trans Neural Netw Learn Syst 2019;30:3186-3199. PMID: 30703044; DOI: 10.1109/tnnls.2018.2890658.
Abstract
In this paper, we introduce a neural network (NN) model named the clone-based neural network (CbNN) to design associative memories. Neurons in a CbNN can be cloned statically or dynamically, which makes it possible to increase the amount of data that can be stored and retrieved. Thanks to their plasticity, CbNNs can handle correlated information more robustly than existing models and thus provide better memory capacity. We test this model in encoded neural networks, also known as Gripon-Berrou NNs. Numerical simulations demonstrate that the memory and recall abilities of CbNNs outperform the state of the art for the same memory footprint.
9
Abstract
It is still unknown how associative biological memories operate. Hopfield networks are popular models of associative memory, but they suffer from spurious memories and low efficiency. Here, we present a new model of an associative memory that overcomes these deficiencies. We call this model sparse associative memory (SAM) because it is based on sparse projections from neural patterns to pattern-specific neurons. These sparse projections have been shown to be sufficient to uniquely encode a neural pattern. Based on this principle, we investigate theoretically and in simulation our SAM model, which turns out to have high memory efficiency and a vanishingly small probability of spurious memories. This model may serve as a basic building block of brain functions involving associative memory.
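The citation details for this entry are missing above, so the sketch below should be read only as one plausible rendering of the abstract's stated principle: a sparse projection from a pattern onto a pattern-specific neuron suffices to index that pattern, and recall selects the best-matching index neuron and plays back its pattern. All sizes and rules are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

N, K, FAN_IN, M = 200, 10, 6, 50   # pattern length, active bits, projection fan-in, #patterns (assumed)

patterns = np.zeros((M, N), dtype=np.uint8)
projections = []                    # sparse inputs of each pattern-specific neuron
for m in range(M):
    active = rng.choice(N, K, replace=False)
    patterns[m, active] = 1
    projections.append(rng.choice(active, FAN_IN, replace=False))

def recall(cue):
    """Activate the pattern-specific neuron whose sparse inputs best match the cue,
    then return the full pattern it encodes."""
    scores = np.array([cue[p].sum() for p in projections])
    return patterns[int(np.argmax(scores))]

noisy = patterns[7].copy()
noisy[rng.choice(np.flatnonzero(noisy), 2, replace=False)] = 0   # erase 2 of the 10 active bits
print(np.array_equal(recall(noisy), patterns[7]))
```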
10
Wang Y, Mi X, Rosa GJM, Chen Z, Lin P, Wang S, Bao Z. Technical note: an R package for fitting sparse neural networks with application in animal breeding. J Anim Sci 2018. PMID: 29529218; DOI: 10.1093/jas/sky071.
Abstract
Neural networks (NNs) have emerged as a new tool for genomic selection (GS) in animal breeding. However, the properties of NNs used in GS for predicting phenotypic outcomes are not well characterized, owing to the over-parameterization of NNs and the difficulty of using whole-genome marker sets as high-dimensional NN input. In this note, we have developed an R package called snnR that finds an optimal sparse structure of an NN by minimizing the squared error subject to a penalty on the L1-norm of the parameters (weights and biases), thereby addressing the over-parameterization problem. We have also tested several models fitted with the snnR package to demonstrate their feasibility and effectiveness in a number of example cases. In a comparison of snnR with the R package brnn (Bayesian regularized single-layer NNs), with both using the entries of a genotype matrix or a genomic relationship matrix as inputs, snnR greatly improved computational efficiency and prediction ability for GS in animal breeding, because snnR implements a sparse NN with many hidden layers.
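snnR itself is an R package; purely to illustrate the objective described above (squared error plus an L1 penalty on weights and biases), here is a small numpy sketch of a one-hidden-layer regression fitted with proximal (soft-thresholding) gradient steps. The layer width, learning rate, penalty strength, and toy data are assumptions, not the package's defaults.

```python
import numpy as np

rng = np.random.default_rng(5)

def soft_threshold(w, t):
    """Proximal operator of the L1 norm: shrink parameters toward zero."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def fit_sparse_nn(X, y, n_hidden=20, lam=0.01, lr=0.05, n_iter=2000):
    """Minimize ||y - f(X)||^2 / (2n) + lam * ||params||_1 for a one-hidden-layer tanh network."""
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.1, (n_hidden, 1)); b2 = np.zeros(1)
    y = y.reshape(-1, 1)
    for _ in range(n_iter):
        H = np.tanh(X @ W1 + b1)              # forward pass
        err = (H @ W2 + b2 - y) / n           # gradient of the smooth (squared-error) term
        gW2 = H.T @ err; gb2 = err.sum(0)
        dH = (err @ W2.T) * (1.0 - H**2)      # backprop through tanh
        gW1 = X.T @ dH; gb1 = dH.sum(0)
        # gradient step on the smooth part, then an L1 proximal step for sparsity
        W1 = soft_threshold(W1 - lr * gW1, lr * lam); b1 = soft_threshold(b1 - lr * gb1, lr * lam)
        W2 = soft_threshold(W2 - lr * gW2, lr * lam); b2 = soft_threshold(b2 - lr * gb2, lr * lam)
    return W1, b1, W2, b2

X = rng.normal(size=(100, 50))                            # toy stand-in for a genotype matrix
y = X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=100)  # toy phenotype
W1, b1, W2, b2 = fit_sparse_nn(X, y)
print("nonzero first-layer weights:", int((np.abs(W1) > 1e-8).sum()), "of", W1.size)
```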
Affiliation(s)
- Yangfan Wang
- Ministry of Education Key Laboratory of Marine Genetics and Breeding, Ocean University of China, Qingdao, China
- Xue Mi
- Ministry of Education Key Laboratory of Marine Genetics and Breeding, Ocean University of China, Qingdao, China
- Zhihui Chen
- Division of Cell and Developmental Biology, College of Life Science, University of Dundee, Dundee, UK
- Ping Lin
- Division of Mathematics, University of Dundee, Dundee, UK
- Shi Wang
- Ministry of Education Key Laboratory of Marine Genetics and Breeding, Ocean University of China, Qingdao, China; Laboratory for Marine Biology and Biotechnology, Qingdao National Laboratory for Marine Science and Technology, Qingdao, China
- Zhenmin Bao
- Ministry of Education Key Laboratory of Marine Genetics and Breeding, Ocean University of China, Qingdao, China; Laboratory for Marine Fisheries Science and Food Production Processes, Qingdao National Laboratory for Marine Science and Technology, Qingdao, China
11
Hillar CJ, Tran NM. Robust Exponential Memory in Hopfield Networks. J Math Neurosci 2018;8:1. PMID: 29340803; PMCID: PMC5770423; DOI: 10.1186/s13408-017-0056-2.
Abstract
The Hopfield recurrent neural network is a classical auto-associative model of memory, in which collections of symmetrically coupled McCulloch-Pitts binary neurons interact to perform emergent computation. Although previous researchers have explored the potential of this network to solve combinatorial optimization problems or store reoccurring activity patterns as attractors of its deterministic dynamics, a basic open problem is to design a family of Hopfield networks with a number of noise-tolerant memories that grows exponentially with neural population size. Here, we discover such networks by minimizing probability flow, a recently proposed objective for estimating parameters in discrete maximum entropy models. By descending the gradient of the convex probability flow, our networks adapt synaptic weights to achieve robust exponential storage, even when presented with vanishingly small numbers of training patterns. In addition to providing a new set of low-density error-correcting codes that achieve Shannon's noisy channel bound, these networks also efficiently solve a variant of the hidden clique problem in computer science, opening new avenues for real-world applications of computational models originating from biology.
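For orientation, the sketch below spells out the Hopfield energy, asynchronous recall, and the (minimum) probability flow objective referenced above; the paper fits weights by descending the gradient of that objective, whereas the Hebbian weights, network size, and noise level in this toy example are assumptions, not the authors' construction.

```python
import numpy as np

rng = np.random.default_rng(6)

def energy(x, J, theta):
    """Hopfield/Ising energy for x in {-1,+1}^n, with symmetric J and zero diagonal."""
    return -0.5 * x @ J @ x - theta @ x

def recall(x, J, theta, n_sweeps=10):
    """Asynchronous updates: each unit aligns with its local field, descending the energy."""
    x = x.copy()
    for _ in range(n_sweeps):
        for i in rng.permutation(len(x)):
            x[i] = 1 if J[i] @ x + theta[i] >= 0 else -1
    return x

def probability_flow(data, J, theta):
    """Minimum probability flow objective: sum over data points and single-bit flips of
    exp((E(x) - E(x_flipped)) / 2). Since E(x) - E(x with bit i flipped) = -2 * x_i * h_i,
    each term is exp(-x_i * h_i); the objective is small when patterns sit in deep minima."""
    K = 0.0
    for x in data:
        h = J @ x + theta              # local fields
        K += np.exp(-x * h).sum()
    return K / len(data)

# Toy usage: Hebbian weights for a few random patterns, then recall from a corrupted cue.
n, m = 50, 3
P = rng.choice([-1, 1], size=(m, n)).astype(float)
J = (P.T @ P) / n
np.fill_diagonal(J, 0.0)
theta = np.zeros(n)
cue = P[0] * np.where(rng.random(n) < 0.1, -1.0, 1.0)    # flip roughly 10% of the bits
print(np.array_equal(recall(cue, J, theta), P[0]), round(probability_flow(P, J, theta), 3))
```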
12
Bakhtiary AH, Lapedriza A, Masip D. Winner takes all hashing for speeding up the training of neural networks in large class problems. Pattern Recognit Lett 2017. DOI: 10.1016/j.patrec.2017.01.001.
13
Mofrad AA, Parker MG. Nested-Clique Network Model of Neural Associative Memory. Neural Comput 2017;29:1681-1695. PMID: 28410053; DOI: 10.1162/neco_a_00964.
Abstract
Clique-based neural associative memories, introduced by Gripon and Berrou (GB), have been shown to perform well, and in our previous work we improved the learning capacity and retrieval rate through local coding and precoding in the presence of partial erasures. We now take a step forward and consider nested-clique graph structures for the network. The GB model stores patterns as small cliques, and here we replace these with nested cliques. Simulation results show that the nested-clique structure enhances the clique-based model.
Affiliation(s)
- Matthew G Parker
- Selmer Center, Department of Informatics, University of Bergen, Bergen 5020, Norway
14
Mofrad AA, Parker MG, Ferdosi Z, Tadayon MH. Clique-Based Neural Associative Memories with Local Coding and Precoding. Neural Comput 2016;28:1553-1573. PMID: 27348736; DOI: 10.1162/neco_a_00856.
Abstract
Techniques from coding theory can improve the efficiency of neuro-inspired and neural associative memories by imposing particular constructions and constraints on the network. In this letter, the approach is to embed coding techniques into neural associative memories in order to increase their performance in the presence of partial erasures. The motivation comes from recent work by Gripon, Berrou, and coauthors, which revisited Willshaw networks and presented a neural network of interacting neurons partitioned into clusters. The model introduced stores patterns as small cliques that can be retrieved in spite of partial errors. We focus on improving retrieval success by applying two techniques: local coding in each cluster followed by a precoding step. We use a slightly different decoding scheme, which is appropriate for partial erasures and converges faster. Although the ideas of local coding and precoding are not new, the way we apply them is different. Simulations show an increase in pattern retrieval capacity for both techniques. Moreover, we use self-dual additive codes over the field [Formula: see text], which have very interesting properties and a simple-graph representation.
Affiliation(s)
- Matthew G Parker
- Selmer Center, Department of Informatics, University of Bergen, Bergen 5020, Norway
- Zahra Ferdosi
- Department of Mathematics and Computer Science, Amirkabir University of Technology, Tehran, Iran
15
The sound of emotions: Towards a unifying neural network perspective of affective sound processing. Neurosci Biobehav Rev 2016;68:96-110. PMID: 27189782; DOI: 10.1016/j.neubiorev.2016.05.002.
Abstract
Affective sounds are an integral part of the natural and social environment that shape and influence behavior across a multitude of species. In human primates, these affective sounds span a repertoire of environmental and human sounds when we vocalize or produce music. In terms of neural processing, cortical and subcortical brain areas constitute a distributed network that supports our listening experience to these affective sounds. Taking an exhaustive cross-domain view, we accordingly suggest a common neural network that facilitates the decoding of the emotional meaning from a wide source of sounds rather than a traditional view that postulates distinct neural systems for specific affective sound types. This new integrative neural network view unifies the decoding of affective valence in sounds, and ascribes differential as well as complementary functional roles to specific nodes within a common neural network. It also highlights the importance of an extended brain network beyond the central limbic and auditory brain systems engaged in the processing of affective sounds.
16
Jiang X, Gripon V, Berrou C, Rabbat M. Storing Sequences in Binary Tournament-Based Neural Networks. IEEE Trans Neural Netw Learn Syst 2016;27:913-925. PMID: 27101078; DOI: 10.1109/tnnls.2015.2431319.
Abstract
An extension to a recently introduced architecture of clique-based neural networks is presented. This extension makes it possible to store sequences with high efficiency. To obtain this property, network connections are given an orientation and flexible redundancy, carried by both spatial and temporal redundancy, with a mechanism of anticipation introduced into the model. In addition to efficient sequence storage, this new scheme also offers biological plausibility. To achieve accurate sequence retrieval, a double-layered structure combining heteroassociation and autoassociation is also proposed.
17
Boguslawski B, Gripon V, Seguin F, Heitzmann F. Twin Neurons for Efficient Real-World Data Distribution in Networks of Neural Cliques: Applications in Power Management in Electronic Circuits. IEEE Trans Neural Netw Learn Syst 2016;27:375-387. PMID: 26513805; DOI: 10.1109/tnnls.2015.2480545.
Abstract
Associative memories are data structures that allow retrieval of previously stored messages given part of their content. They thus behave similarly to the human brain's memory, which is capable, for instance, of retrieving the end of a song given its beginning. Among the different families of associative memories, sparse ones are known to provide the best efficiency (the ratio of the number of bits stored to the number of bits used). Recently, a new family of sparse associative memories achieving almost optimal efficiency has been proposed. Their structure, relying on binary connections and neurons, induces a direct mapping between input messages and stored patterns. Nevertheless, it is well known that nonuniformity of the stored messages can lead to a dramatic decrease in performance. In this paper, we show the impact of nonuniformity on the performance of this recent model, and we exploit the structure of the model to improve its performance in practical applications, where data are not necessarily uniform. In order to approach the performance of networks with uniformly distributed messages reported in theoretical studies, twin neurons are introduced. To assess the adapted model, twin neurons are used with real-world data to optimize the power consumption of electronic circuits in practical test cases.
18
19
Karbasi A, Salavati AH, Shokrollahi A, Varshney LR. Noise Facilitation in Associative Memories of Exponential Capacity. Neural Comput 2014;26:2493-2526. DOI: 10.1162/neco_a_00655.
Abstract
Recent advances in associative memory design through structured pattern sets and graph-based inference algorithms have allowed reliable learning and recall of an exponential number of patterns that satisfy certain subspace constraints. Although these designs correct external errors in recall, they assume neurons that compute noiselessly, in contrast to the highly variable neurons in brain regions thought to operate associatively, such as hippocampus and olfactory cortex. Here we consider associative memories with boundedly noisy internal computations and analytically characterize performance. As long as the internal noise level is below a specified threshold, the error probability in the recall phase can be made exceedingly small. More surprising, we show that internal noise improves the performance of the recall phase while the pattern retrieval capacity remains intact: the number of stored patterns does not reduce with noise (up to a threshold). Computational experiments lend additional support to our theoretical analysis. This work suggests a functional benefit to noisy neurons in biological neuronal networks.
Affiliation(s)
- Amin Shokrollahi
- Ecole Polytechnique Federale de Lausanne, Lausanne 1015, Switzerland
- Lav R. Varshney
- University of Illinois at Urbana-Champaign, Urbana, IL 61801, U.S.A
20
Aliabadi BK, Berrou C, Gripon V, Jiang X. Storing sparse messages in networks of neural cliques. IEEE Trans Neural Netw Learn Syst 2014;25:980-989. PMID: 24808043; DOI: 10.1109/tnnls.2013.2285253.
Abstract
An extension to a recently introduced binary neural network is proposed to allow the storage of sparse messages, in large numbers and with high memory efficiency. This new network is justified both in biological and informational terms. The storage and retrieval rules are detailed and illustrated by various simulation results.
21
Salavati AH, Kumar KR, Shokrollahi A. Nonbinary associative memory with exponential pattern retrieval capacity and iterative learning. IEEE Trans Neural Netw Learn Syst 2014;25:557-570. PMID: 24807451; DOI: 10.1109/tnnls.2013.2277608.
Abstract
We consider the problem of neural association for a network of nonbinary neurons. Here, the task is to first memorize a set of patterns using a network of neurons whose states assume values from a finite number of integer levels. Later, the same network should be able to recall the previously memorized patterns from their noisy versions. Prior work in this area considers storing a finite number of purely random patterns and has shown that the pattern retrieval capacities (maximum number of patterns that can be memorized) scale only linearly with the number of neurons in the network. In our formulation of the problem, we concentrate on exploiting redundancy and internal structure of the patterns to improve the pattern retrieval capacity. Our first result shows that if the given patterns have a suitable linear-algebraic structure, i.e., comprise a subspace of the set of all possible patterns, then the pattern retrieval capacity is exponential in terms of the number of neurons. The second result extends the previous finding to cases where the patterns have weak minor components, i.e., the smallest eigenvalues of the correlation matrix tend toward zero. We will use these minor components (or the basis vectors of the pattern null space) to increase both the pattern retrieval capacity and error correction capabilities. An iterative algorithm is proposed for the learning phase, and two simple algorithms are presented for the recall phase. Using analytical methods and simulations, we show that the proposed methods can tolerate a fair amount of errors in the input while being able to memorize an exponentially large number of patterns.
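To make the subspace idea tangible, here is a toy sketch (a simplified stand-in for the paper's iterative learning and recall algorithms): patterns are integer combinations of a basis, a set of constraint directions orthogonal to all of them is kept, and a noisy pattern is corrected by greedily nudging single entries until the constraints are satisfied again. The sizes, noise model, and greedy rule are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

n, k = 40, 10                                      # neurons and subspace dimension (assumed)
G = rng.integers(-2, 3, size=(n, k)).astype(float) # basis whose integer combinations are the patterns

# Directions orthogonal to every stored pattern (left null space of G): B @ G == 0.
U, _, _ = np.linalg.svd(G)
B = U[:, k:].T                                     # (n - k) x n constraint matrix

def make_pattern():
    """Every pattern is an integer combination of the basis, so it satisfies B @ x = 0."""
    return G @ rng.integers(-3, 4, size=k)

def recall(y, n_iter=50):
    """Greedy correction: repeatedly nudge the single entry (by +/- 1) that most
    reduces the constraint violation ||B @ y||, stopping when no move helps."""
    y = y.astype(float).copy()
    for _ in range(n_iter):
        r = B @ y
        best, best_gain = None, 1e-9
        for i in range(n):
            for d in (1.0, -1.0):
                gain = np.linalg.norm(r) - np.linalg.norm(r + d * B[:, i])
                if gain > best_gain:
                    best, best_gain = (i, d), gain
        if best is None:
            break
        y[best[0]] += best[1]
    return y

x = make_pattern()
noisy = x.copy()
err_pos = rng.choice(n, 3, replace=False)
noisy[err_pos] += rng.choice([-1.0, 1.0], size=3)  # three +/- 1 errors
print(np.allclose(recall(noisy), x))
```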
22
Saad E, Prokhorov D, Wunsch D. Comparative study of stock trend prediction using time delay, recurrent and probabilistic neural networks. IEEE Trans Neural Netw 1998;9:1456-1470. DOI: 10.1109/72.728395.