1
Blanco Malerba S, Pieropan M, Burak Y, Azeredo da Silveira R. Random compressed coding with neurons. Cell Rep 2025; 44:115412. [PMID: 40111998] [DOI: 10.1016/j.celrep.2025.115412]
Abstract
Classical models of efficient coding in neurons assume simple mean responses ("tuning curves"), such as bell-shaped or monotonic functions of a stimulus feature. Real neurons, however, can be more complex: grid cells, for example, exhibit periodic responses that endow the neural population code with high accuracy. But do highly accurate codes require fine-tuning of the response properties? We address this question using a simple model: a population of neurons with random, spatially extended, and irregular tuning curves. Irregularity enhances the local resolution of the code but gives rise to catastrophic, global errors. At the optimal smoothness of the tuning curves, where local and global errors balance out, the neural population compresses information about a continuous stimulus into a low-dimensional representation, and the resulting distributed code achieves exponential accuracy. An analysis of recordings from monkey motor cortex points to such "compressed efficient coding." Efficient codes do not require a finely tuned design; they emerge robustly from irregularity or randomness.
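The trade-off described here lends itself to a small numerical illustration. Below is a minimal sketch (an illustration under stated assumptions, not the authors' code): tuning curves are drawn from a Gaussian process whose length scale ell sets their smoothness, responses are corrupted by Gaussian noise, and the stimulus is decoded by maximum likelihood over a grid. All names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def gp_tuning_curves(n_neurons, x_grid, ell):
    """Draw random smooth tuning curves from a Gaussian process
    with a squared-exponential kernel of length scale ell."""
    d = x_grid[:, None] - x_grid[None, :]
    K = np.exp(-0.5 * (d / ell) ** 2) + 1e-6 * np.eye(len(x_grid))
    L = np.linalg.cholesky(K)
    return (L @ rng.standard_normal((len(x_grid), n_neurons))).T  # (n_neurons, n_x)

x_grid = np.linspace(0.0, 1.0, 400)
sigma = 0.5  # response noise standard deviation

for ell in [0.02, 0.1, 0.5]:  # rough -> smooth tuning curves
    f = gp_tuning_curves(n_neurons=20, x_grid=x_grid, ell=ell)
    sq_errs = []
    for _ in range(200):
        ix = rng.integers(len(x_grid))  # true stimulus
        r = f[:, ix] + sigma * rng.standard_normal(f.shape[0])
        # ML decoding under Gaussian noise = nearest point on the coding curve
        ix_hat = np.argmin(((f - r[:, None]) ** 2).sum(axis=0))
        sq_errs.append((x_grid[ix_hat] - x_grid[ix]) ** 2)
    print(f"ell={ell:4.2f}  RMSE={np.sqrt(np.mean(sq_errs)):.4f}")
```

Rough curves (small ell) discriminate nearby stimuli well but suffer occasional catastrophic errors when noise makes a distant point on the coding curve look closer than the true one; very smooth curves avoid such errors at the cost of local precision, so the error is minimized at an intermediate smoothness.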
Affiliation(s)
- Simone Blanco Malerba
- Laboratoire de Physique de l'Ecole Normale Supérieure, ENS, Université PSL, CNRS, Sorbonne Université, Université de Paris, 75005 Paris, France; Institute for Neural Information Processing, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf, 20251 Hamburg, Germany
- Mirko Pieropan
- Laboratoire de Physique de l'Ecole Normale Supérieure, ENS, Université PSL, CNRS, Sorbonne Université, Université de Paris, 75005 Paris, France
- Yoram Burak
- Racah Institute of Physics, Hebrew University of Jerusalem, Jerusalem 9190401, Israel; Edmond and Lily Safra Center for Brain Sciences, Hebrew University of Jerusalem, Jerusalem 9190401, Israel
- Rava Azeredo da Silveira
- Laboratoire de Physique de l'Ecole Normale Supérieure, ENS, Université PSL, CNRS, Sorbonne Université, Université de Paris, 75005 Paris, France; Institute of Molecular and Clinical Ophthalmology Basel, 4031 Basel, Switzerland; Faculty of Science, University of Basel, 4056 Basel, Switzerland; Department of Economics, University of Zurich, 8001 Zurich, Switzerland
2
Kymn CJ, Kleyko D, Frady EP, Bybee C, Kanerva P, Sommer FT, Olshausen BA. Computing With Residue Numbers in High-Dimensional Representation. Neural Comput 2024; 37:1-37. [PMID: 39556514] [DOI: 10.1162/neco_a_01723]
Abstract
We introduce residue hyperdimensional computing, a computing framework that unifies residue number systems with an algebra defined over random, high-dimensional vectors. We show how residue numbers can be represented as high-dimensional vectors in a manner that allows algebraic operations to be performed with component-wise, parallelizable operations on the vector elements. The resulting framework, when combined with an efficient method for factorizing high-dimensional vectors, can represent and operate on numerical values over a large dynamic range using resources that scale only logarithmically with the range, a vast improvement over previous methods. It also exhibits impressive robustness to noise. We demonstrate the potential for this framework to solve computationally difficult problems in visual perception and combinatorial optimization, showing improvement over baseline methods. More broadly, the framework provides a possible account for the computational operations of grid cells in the brain, and it suggests new machine learning architectures for representing and manipulating numerical data.
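The encoding idea admits a compact sketch: each modulus is assigned a random phasor base vector whose phases are multiples of 2π/m, so componentwise integer powers represent residues, and adding two numbers corresponds to multiplying their codes componentwise. The snippet below is a minimal illustration under these assumptions, not the paper's reference implementation; the dimensionality and names are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 1000  # vector dimensionality

def base_vector(m):
    """Random phasor vector whose phases are multiples of 2*pi/m,
    so componentwise powers cycle with period m."""
    k = rng.integers(0, m, size=D)
    return np.exp(2j * np.pi * k / m)

def encode(x, z):
    return z ** x  # componentwise exponentiation encodes x mod m

def decode(v, z, m):
    codebook = np.stack([encode(r, z) for r in range(m)])
    sims = np.real(codebook.conj() @ v) / D  # normalized inner products
    return int(np.argmax(sims))

m = 7
z = base_vector(m)
a, b = 5, 4
v = encode(a, z) * encode(b, z)  # componentwise product adds exponents
print("decoded (a + b) mod m:", decode(v, z, m))  # 2
```

The inner product with the correct codebook entry equals 1 exactly, while cross terms are sums of random unit phasors of size O(1/sqrt(D)), which is the source of the noise robustness noted above.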
Affiliation(s)
- Christopher J Kymn
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, CA 94720, U.S.A.
- Denis Kleyko
- Centre for Applied Autonomous Sensor Systems, Örebro University, SE-701 82 Örebro, Sweden
- Intelligent Systems Lab, Research Institutes of Sweden, 164 40 Kista, Sweden
- E Paxon Frady
- Neuromorphic Computing Lab, Intel, Santa Clara, CA 95054, U.S.A.
- Connor Bybee
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, CA 94720, U.S.A.
- Pentti Kanerva
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, CA 94720, U.S.A.
- Friedrich T Sommer
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, CA 94720, U.S.A.
- Neuromorphic Computing Lab, Intel, Santa Clara, CA 95054, U.S.A.
- Bruno A Olshausen
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, CA 94720, U.S.A.
3
Lei W, Clark DA, Demb JB. Compartmentalized pooling generates orientation selectivity in wide-field amacrine cells. Proc Natl Acad Sci U S A 2024; 121:e2411130121. [PMID: 39602271] [PMCID: PMC11626119] [DOI: 10.1073/pnas.2411130121]
Abstract
Orientation is one of the most salient features in visual scenes. Neurons at multiple levels of the visual system detect orientation, but in many cases, the underlying biophysical mechanisms remain unresolved. Here, we studied mechanisms for orientation detection at the earliest stage in the visual system, in B/K wide-field amacrine cells (B/K WACs), a group of giant, nonspiking interneurons in the mouse retina that coexpress Bhlhe22 (B) and Kappa Opioid Receptor (K). B/K WACs exhibit orientation-tuned calcium signals along their long, straight, unbranching dendrites, which contain both synaptic inputs and outputs. Simultaneous dendritic calcium imaging and somatic voltage recordings reveal that individual B/K dendrites are electrotonically isolated, exhibiting a spatially confined yet extended receptive field along the dendrite, which we term "compartmentalized pooling." Further, the receptive field of a B/K WAC dendrite exhibits center-surround antagonism. Phenomenological receptive field models demonstrate that compartmentalized pooling generates orientation selectivity, and center-surround antagonism shapes band-pass spatial frequency tuning. At the microcircuit level, B/K WACs receive excitation driven by one contrast polarity (e.g., "ON") and glycinergic inhibition driven by the opposite polarity (e.g., "OFF"). However, this "crossover" inhibition is not essential for generating orientation selectivity. A minimal biophysical model reproduced compartmentalized pooling from feedforward excitatory inputs combined with a substantial increase in the specific membrane resistance between somatic and dendritic compartments. Collectively, our results reveal the biophysical mechanism for generating orientation selectivity in dendrites of B/K WACs, enriching our understanding of the diverse strategies employed throughout the visual system to detect orientation.
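As a rough illustration of how spatially extended pooling with center-surround antagonism can by itself produce orientation tuning, the following toy linear-nonlinear sketch (a simplification under assumed parameters, not the paper's fitted phenomenological model) probes an elongated difference-of-Gaussians profile with gratings at several orientations and reports the rectified, phase-maximized response.

```python
import numpy as np

n = 128
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2].astype(float)

def elongated_dog(sx_c, sy_c, sx_s, sy_s, w_s):
    """Elongated center-surround profile: long axis along x (the 'dendrite')."""
    center = np.exp(-0.5 * ((xx / sx_c) ** 2 + (yy / sy_c) ** 2))
    surround = np.exp(-0.5 * ((xx / sx_s) ** 2 + (yy / sy_s) ** 2))
    return center / center.sum() - w_s * surround / surround.sum()

rf = elongated_dog(sx_c=30.0, sy_c=3.0, sx_s=45.0, sy_s=9.0, w_s=0.8)

sf = 2 * np.pi / 20.0  # spatial frequency of the test grating
for theta_deg in range(0, 180, 30):
    th = np.deg2rad(theta_deg)
    best = 0.0
    for phase in np.linspace(0, 2 * np.pi, 8, endpoint=False):
        grating = np.cos(sf * (xx * np.cos(th) + yy * np.sin(th)) + phase)
        best = max(best, float(np.maximum((rf * grating).sum(), 0.0)))  # rectified
    print(f"orientation {theta_deg:3d} deg -> response {best:.4f}")
```

Gratings whose contours run along the long axis drive the profile coherently, while gratings varying along that axis largely cancel, yielding orientation tuning; the surround term additionally suppresses low spatial frequencies, consistent with the band-pass tuning described above.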
Affiliation(s)
- Wanyu Lei
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511
- Integrated Graduate Program in Physical and Engineering Biology, Yale University, New Haven, CT 06511
- Damon A. Clark
- Department of Molecular, Cellular and Developmental Biology, Yale University, New Haven, CT 06511
- Department of Physics, Yale University, New Haven, CT 06511
- Quantitative Biology Institute, Yale University, New Haven, CT 06511
- Department of Neuroscience, Yale University, New Haven, CT 06511
- Wu Tsai Institute, Yale University, New Haven, CT 06511
- Jonathan B. Demb
- Department of Neuroscience, Yale University, New Haven, CT 06511
- Wu Tsai Institute, Yale University, New Haven, CT 06511
- Department of Ophthalmology and Visual Science, Yale University, New Haven, CT 06511
- Department of Cellular and Molecular Physiology, Yale University, New Haven, CT 06511
4
Ruben BS, Pehlevan C. Learning Curves for Noisy Heterogeneous Feature-Subsampled Ridge Ensembles. arXiv 2024; arXiv:2307.03176v3. [PMID: 37461424] [PMCID: PMC10350086]
Abstract
Feature bagging is a well-established ensembling method that aims to reduce prediction variance by combining predictions of many estimators trained on subsets or projections of features. Here, we develop a theory of feature bagging in noisy least-squares ridge ensembles and simplify the resulting learning curves in the special case of equicorrelated data. Using analytical learning curves, we demonstrate that subsampling shifts the double-descent peak of a linear predictor. This leads us to introduce heterogeneous feature ensembling, with estimators built on varying numbers of feature dimensions, as a computationally efficient method to mitigate double descent. Then, we compare the performance of a feature-subsampling ensemble to a single linear predictor, describing a trade-off between noise amplification due to subsampling and noise reduction due to ensembling. Our qualitative insights carry over to linear classifiers applied to image classification tasks with realistic datasets constructed using a state-of-the-art deep learning feature map.
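A minimal sketch of this setup on synthetic data can make the trade-off concrete (sizes, noise level, and ridge penalty below are illustrative assumptions, not the paper's): each ensemble member is a ridge regressor fit on k randomly subsampled features, and its test error is compared to that of a single predictor trained on all features.

```python
import numpy as np

rng = np.random.default_rng(2)

def ridge_fit(X, y, lam=1e-2):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Synthetic noisy linear-regression data
n_train, n_test, d = 100, 2000, 200
w_true = rng.standard_normal(d) / np.sqrt(d)
X = rng.standard_normal((n_train, d))
y = X @ w_true + 0.5 * rng.standard_normal(n_train)
X_te = rng.standard_normal((n_test, d))
y_te = X_te @ w_true

def ensemble_mse(k, n_members):
    """Feature bagging: average predictions of ridge fits on k random features."""
    preds = np.zeros(n_test)
    for _ in range(n_members):
        idx = rng.choice(d, size=k, replace=False)
        preds += X_te[:, idx] @ ridge_fit(X[:, idx], y)
    return np.mean((preds / n_members - y_te) ** 2)

print("single predictor, all features:", ensemble_mse(d, 1))
for k in [25, 50, 100, 150]:
    print(f"ensemble of 16, k={k:3d} features:", ensemble_mse(k, 16))
```

Sweeping k moves the test-error peak associated with double descent (near k ≈ n_train when the ridge penalty is weak), which is the effect that heterogeneous member sizes are introduced to mitigate.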
Affiliation(s)
- Cengiz Pehlevan
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138
- Kempner Institute for the Study of Natural and Artificial Intelligence, Harvard University, Cambridge, MA 02138
5
Kymn CJ, Kleyko D, Frady EP, Bybee C, Kanerva P, Sommer FT, Olshausen BA. Computing with Residue Numbers in High-Dimensional Representation. arXiv 2023; arXiv:2311.04872v1. [PMID: 37986727] [PMCID: PMC10659444]
Abstract
We introduce Residue Hyperdimensional Computing, a computing framework that unifies residue number systems with an algebra defined over random, high-dimensional vectors. We show how residue numbers can be represented as high-dimensional vectors in a manner that allows algebraic operations to be performed with component-wise, parallelizable operations on the vector elements. The resulting framework, when combined with an efficient method for factorizing high-dimensional vectors, can represent and operate on numerical values over a large dynamic range using vastly fewer resources than previous methods, and it exhibits impressive robustness to noise. We demonstrate the potential for this framework to solve computationally difficult problems in visual perception and combinatorial optimization, showing improvement over baseline methods. More broadly, the framework provides a possible account for the computational operations of grid cells in the brain, and it suggests new machine learning architectures for representing and manipulating numerical data.
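This is the preprint of the Neural Computation article in entry 2. Complementing the encoding sketch there, the snippet below illustrates the decoding direction: each residue is recovered by a nearest-codebook lookup, and the integer is reconstructed with the Chinese remainder theorem, consistent with the logarithmic resource scaling and noise robustness claimed above. Moduli, dimensionality, and noise level are illustrative assumptions.

```python
import numpy as np
from math import prod

rng = np.random.default_rng(3)
D = 1000
moduli = [3, 5, 7]  # pairwise coprime; range M = 105 from three small codebooks

bases = {m: np.exp(2j * np.pi * rng.integers(0, m, size=D) / m) for m in moduli}

def encode(x):
    # One phasor vector per modulus; resources grow with len(moduli), not with M
    return {m: bases[m] ** x for m in moduli}

def decode(vecs):
    # Recover each residue by nearest-codebook lookup, then combine via CRT
    M = prod(moduli)
    x = 0
    for m, v in vecs.items():
        sims = [np.real(np.vdot(bases[m] ** r, v)) for r in range(m)]
        r_hat = int(np.argmax(sims))
        Mi = M // m
        x += r_hat * Mi * pow(Mi, -1, m)  # pow(..., -1, m): modular inverse
    return x % M

x = 73
noisy = {m: v + 0.3 * (rng.standard_normal(D) + 1j * rng.standard_normal(D))
         for m, v in encode(x).items()}
print("decoded:", decode(noisy))  # recovers 73 despite the added noise
```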
Affiliation(s)
- Christopher J Kymn
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, CA
- Denis Kleyko
- Centre for Applied Autonomous Sensor Systems, Örebro University, Sweden
- Intelligent Systems Lab, Research Institutes of Sweden, Kista, Sweden
- E Paxon Frady
- Neuromorphic Computing Lab, Intel, Santa Clara, CA
- Connor Bybee
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, CA
- Pentti Kanerva
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, CA
- Friedrich T Sommer
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, CA
- Neuromorphic Computing Lab, Intel, Santa Clara, CA
- Bruno A Olshausen
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, CA
6
Qin S, Farashahi S, Lipshutz D, Sengupta AM, Chklovskii DB, Pehlevan C. Coordinated drift of receptive fields in Hebbian/anti-Hebbian network models during noisy representation learning. Nat Neurosci 2023; 26:339-349. [PMID: 36635497] [DOI: 10.1038/s41593-022-01225-z]
Abstract
Recent experiments have revealed that neural population codes in many brain areas continuously change even when animals have fully learned and stably perform their tasks. This representational 'drift' naturally leads to questions about its causes, dynamics and functions. Here we explore the hypothesis that neural representations optimize a representational objective with a degenerate solution space, and noisy synaptic updates drive the network to explore this (near-)optimal space, causing representational drift. We illustrate this idea and explore its consequences in simple, biologically plausible Hebbian/anti-Hebbian network models of representation learning. We find that the drifting receptive fields of individual neurons can be characterized by a coordinated random walk, with effective diffusion constants depending on various parameters such as learning rate, noise amplitude and input statistics. Despite such drift, the representational similarity of population codes is stable over time. Our model recapitulates experimental observations in the hippocampus and posterior parietal cortex and makes testable predictions that can be probed in future experiments.
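A hedged sketch in the spirit of the models described here (the particular similarity-matching update rule, learning rate, and noise scale are assumptions, not the paper's exact setup): a two-neuron Hebbian/anti-Hebbian network learns a principal subspace from Gaussian inputs while its synaptic updates are jittered. Individual receptive fields should drift appreciably, while the projector onto the represented subspace, and hence the representational similarity, should change far less.

```python
import numpy as np

rng = np.random.default_rng(4)
n_in, n_out = 10, 2
eta, noise = 0.02, 0.02

# Inputs with a dominant two-dimensional subspace
C = np.diag([4.0, 3.0] + [0.1] * (n_in - 2))
Q, _ = np.linalg.qr(rng.standard_normal((n_in, n_in)))
cov_sqrt = Q @ np.sqrt(C) @ Q.T

W = 0.1 * rng.standard_normal((n_out, n_in))  # feedforward (Hebbian) weights
M = np.eye(n_out)                             # lateral (anti-Hebbian) weights

snapshots = []
for t in range(40000):
    x = cov_sqrt @ rng.standard_normal(n_in)
    y = np.linalg.solve(M, W @ x)  # steady state of fast recurrent dynamics
    W += eta * (np.outer(y, x) - W) + noise * eta * rng.standard_normal(W.shape)
    M += eta / 2 * (np.outer(y, y) - M) + noise * eta * rng.standard_normal(M.shape)
    if t % 10000 == 9999:
        snapshots.append(np.linalg.solve(M, W))  # effective receptive fields

# Receptive fields drift over time...
print("RF change:", np.linalg.norm(snapshots[-1] - snapshots[0]))
# ...while the represented subspace (row space of F) stays comparatively stable
P = [F.T @ np.linalg.pinv(F @ F.T) @ F for F in snapshots]
print("subspace change:", np.linalg.norm(P[-1] - P[0]))
```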
Affiliation(s)
- Shanshan Qin
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Shiva Farashahi
- Center for Computational Neuroscience, Flatiron Institute, New York, NY, USA
- David Lipshutz
- Center for Computational Neuroscience, Flatiron Institute, New York, NY, USA
- Anirvan M Sengupta
- Center for Computational Neuroscience, Flatiron Institute, New York, NY, USA
- Department of Physics and Astronomy, Rutgers University, New Brunswick, NJ, USA
- Dmitri B Chklovskii
- Center for Computational Neuroscience, Flatiron Institute, New York, NY, USA
- NYU Langone Medical Center, New York, NY, USA
- Cengiz Pehlevan
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA