1
Blanco Malerba S, Pieropan M, Burak Y, Azeredo da Silveira R. Random compressed coding with neurons. Cell Rep 2025; 44:115412. PMID: 40111998. DOI: 10.1016/j.celrep.2025.115412.
Abstract
Classical models of efficient coding in neurons assume simple mean responses ("tuning curves"), such as bell-shaped or monotonic functions of a stimulus feature. Real neurons, however, can be more complex: grid cells, for example, exhibit periodic responses that endow the neural population code with high accuracy. But do highly accurate codes require fine-tuning of the response properties? We address this question using a simple model: a population of neurons with random, spatially extended, and irregular tuning curves. Irregularity enhances the local resolution of the code but gives rise to catastrophic, global errors. For optimal smoothness of the tuning curves, when local and global errors balance out, the neural population compresses information about a continuous stimulus into a low-dimensional representation, and the resulting distributed code achieves exponential accuracy. An analysis of recordings from monkey motor cortex points to such "compressed efficient coding." Efficient codes do not require a finely tuned design; they emerge robustly from irregularity or randomness.
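To make the local-versus-global error trade-off concrete, here is a minimal, self-contained sketch (not the authors' code; the tuning-curve smoothness, gain, and population size are illustrative assumptions): random smooth tuning curves are generated by filtering noise, a stimulus is encoded in Poisson spike counts, and maximum-likelihood decoding tallies small (local) versus large (catastrophic) errors as the smoothness varies.

```python
# Minimal sketch (illustrative parameters, not the paper's model): random, spatially
# extended tuning curves over a 1-D stimulus in [0, 1], Poisson spiking, and
# maximum-likelihood decoding on a grid. Smoother curves reduce catastrophic
# (global) errors at the cost of coarser local resolution.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_grid = 20, 400
x_grid = np.linspace(0.0, 1.0, n_grid)

def random_tuning_curves(length_scale, gain=10.0):
    """Smooth white noise with a Gaussian kernel to obtain random tuning curves."""
    sigma = length_scale * n_grid                          # kernel width in grid bins
    k = np.exp(-0.5 * (np.arange(-3 * sigma, 3 * sigma + 1) / sigma) ** 2)
    noise = rng.standard_normal((n_neurons, n_grid + k.size - 1))
    g = np.array([np.convolve(row, k, mode="valid") for row in noise])
    g /= g.std(axis=1, keepdims=True)
    return gain * np.exp(g - g.max(axis=1, keepdims=True))  # positive rates, peak = gain

def decoding_errors(rates, n_trials=2000):
    """Encode a random stimulus with Poisson counts and decode by maximum likelihood."""
    errs = np.empty(n_trials)
    for i in range(n_trials):
        idx = rng.integers(n_grid)
        counts = rng.poisson(rates[:, idx])
        loglik = counts @ np.log(rates + 1e-12) - rates.sum(axis=0)
        errs[i] = abs(x_grid[np.argmax(loglik)] - x_grid[idx])
    return errs

for ls in (0.3, 0.1, 0.03):     # decreasing smoothness (length scale of the curves)
    e = decoding_errors(random_tuning_curves(ls))
    print(f"length scale {ls:4.2f}: median |error| = {np.median(e):.4f}, "
          f"P(|error| > 0.2) = {(e > 0.2).mean():.3f}")
```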
Affiliation(s)
- Simone Blanco Malerba
- Laboratoire de Physique de l'Ecole Normale Supérieure, ENS, Université PSL, CNRS, Sorbonne Université, Université de Paris, 75005 Paris, France; Institute for Neural Information Processing, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf, 20251 Hamburg, Germany
- Mirko Pieropan
- Laboratoire de Physique de l'Ecole Normale Supérieure, ENS, Université PSL, CNRS, Sorbonne Université, Université de Paris, 75005 Paris, France
- Yoram Burak
- Racah Institute of Physics, Hebrew University of Jerusalem, Jerusalem 9190401, Israel; Edmond and Lily Safra Center for Brain Sciences, Hebrew University of Jerusalem, Jerusalem 9190401, Israel
- Rava Azeredo da Silveira
- Laboratoire de Physique de l'Ecole Normale Supérieure, ENS, Université PSL, CNRS, Sorbonne Université, Université de Paris, 75005 Paris, France; Institute of Molecular and Clinical Ophthalmology Basel, 4031 Basel, Switzerland; Faculty of Science, University of Basel, 4056 Basel, Switzerland; Department of Economics, University of Zurich, 8001 Zurich, Switzerland.
2
Ghazinouri B, Nejad MM, Cheng S. Navigation and the efficiency of spatial coding: insights from closed-loop simulations. Brain Struct Funct 2024; 229:577-592. PMID: 37029811. PMCID: PMC10978723. DOI: 10.1007/s00429-023-02637-8.
Abstract
Spatial learning is critical for survival and its underlying neuronal mechanisms have been studied extensively. These studies have revealed a wealth of information about the neural representations of space, such as place cells and boundary cells. While many studies have focused on how these representations emerge in the brain, their functional role in driving spatial learning and navigation has received much less attention. We extended an existing computational modeling tool-chain to study the functional role of spatial representations using closed-loop simulations of spatial learning. At the heart of the model agent was a spiking neural network that formed a ring attractor. This network received inputs from place and boundary cells and the location of the activity bump in this network was the output. This output determined the movement directions of the agent. We found that the navigation performance depended on the parameters of the place cell input, such as their number, the place field sizes, and peak firing rate, as well as, unsurprisingly, the size of the goal zone. The dependence on the place cell parameters could be accounted for by just a single variable, the overlap index, but this dependence was nonmonotonic. By contrast, performance scaled monotonically with the Fisher information of the place cell population. Our results therefore demonstrate that efficiently encoding spatial information is critical for navigation performance.
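As a companion to the Fisher-information result, the following minimal sketch (with assumed place-field parameters, not the values from the paper's tool-chain) computes the Fisher information of a one-dimensional population of Gaussian place fields with Poisson spiking and the corresponding Cramér-Rao limit on positional accuracy; the paper's overlap index is not reproduced here.

```python
# Minimal sketch (assumed parameters): Fisher information of a population of
# Gaussian place fields with Poisson spiking on a 2 m linear track,
# I(x) = sum_i f_i'(x)^2 / f_i(x), and the implied Cramer-Rao bound on position.
import numpy as np

def place_fields(x, centers, width, r_max):
    """Gaussian place-field firing rates, shape (n_cells, len(x))."""
    return r_max * np.exp(-0.5 * ((x[None, :] - centers[:, None]) / width) ** 2)

def fisher_information(x, centers, width, r_max):
    f = place_fields(x, centers, width, r_max)
    df = -f * (x[None, :] - centers[:, None]) / width ** 2   # analytic df/dx
    return np.sum(df ** 2 / (f + 1e-12), axis=0)             # per unit time

track = np.linspace(0.0, 2.0, 500)                           # positions in metres
for n_cells, width in [(50, 0.05), (50, 0.20), (200, 0.20)]:
    centers = np.linspace(0.0, 2.0, n_cells)
    fi = fisher_information(track, centers, width, r_max=20.0)  # 20 Hz peak rate
    sd_cm = 100.0 / np.sqrt(fi.mean())                        # Cramer-Rao limit, in cm
    print(f"{n_cells:3d} cells, width {width:.2f} m: mean FI = {fi.mean():8.1f} /m^2, "
          f"positional SD >= {sd_cm:.2f} cm (per second of spiking)")
```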
Affiliation(s)
- Behnam Ghazinouri
- Faculty of Computer Science, Institute for Neural Computation, Ruhr University Bochum, Universitätsstrasse 150, 44801, Bochum, Germany
- Mohammadreza Mohagheghi Nejad
- Faculty of Computer Science, Institute for Neural Computation, Ruhr University Bochum, Universitätsstrasse 150, 44801, Bochum, Germany
- Sen Cheng
- Faculty of Computer Science, Institute for Neural Computation, Ruhr University Bochum, Universitätsstrasse 150, 44801, Bochum, Germany.
3
Johnston WJ, Freedman DJ. Redundant representations are required to disambiguate simultaneously presented complex stimuli. PLoS Comput Biol 2023; 19:e1011327. PMID: 37556470. PMCID: PMC10442167. DOI: 10.1371/journal.pcbi.1011327.
Abstract
A pedestrian crossing a street during rush hour often looks and listens for potential danger. When they hear several different horns, they localize the cars that are honking and decide whether or not they need to modify their motor plan. How does the pedestrian use this auditory information to pick out the corresponding cars in visual space? The integration of distributed representations like these is called the assignment problem, and it must be solved to integrate distinct representations both across and within sensory modalities. Here, we identify and analyze a solution to the assignment problem: the representation of one or more common stimulus features in pairs of relevant brain regions; for example, estimates of the spatial position of cars are represented in both the visual and auditory systems. We characterize how the reliability of this solution depends on different features of the stimulus set (e.g., the size of the set and the complexity of the stimuli) and the details of the split representations (e.g., the precision of each stimulus representation and the amount of overlapping information). Next, we implement this solution in a biologically plausible receptive field code and show how constraints on the number of neurons and spikes used by the code force the brain to navigate a tradeoff between local and catastrophic errors. We show that, when many spikes and neurons are available, representing stimuli from a single sensory modality can be done more reliably across multiple brain regions, despite the risk of assignment errors. Finally, we show that a feedforward neural network can learn the optimal solution to the assignment problem, even when it receives inputs in two distinct representational formats. We also discuss relevant results on assignment errors from the human working-memory literature and show that several key predictions of our theory already have support.
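The dependence of assignment errors on stimulus number and representational precision can be illustrated with a small simulation; the sketch below uses a simplified Gaussian-noise model (not the paper's receptive-field code) in which two regions each carry a noisy estimate of a shared feature and stimuli are linked by rank matching.

```python
# Minimal sketch (simplified noise model, not the paper's receptive-field code):
# two regions each estimate the shared feature (e.g., spatial position) of K stimuli
# with independent Gaussian noise; stimuli are linked across regions by matching the
# rank order of the estimates, and an assignment error is counted whenever the two
# regions disagree on that order.
import numpy as np

rng = np.random.default_rng(1)

def assignment_error_rate(k_stimuli, noise_sd, n_trials=5000):
    errors = 0
    for _ in range(n_trials):
        true_pos = rng.uniform(0.0, 1.0, k_stimuli)              # shared feature
        est_a = true_pos + rng.normal(0.0, noise_sd, k_stimuli)  # e.g. visual estimate
        est_b = true_pos + rng.normal(0.0, noise_sd, k_stimuli)  # e.g. auditory estimate
        if not np.array_equal(np.argsort(est_a), np.argsort(est_b)):
            errors += 1                                          # crossed assignment
    return errors / n_trials

for k in (2, 4, 8):                      # more stimuli -> more chances to cross
    for sd in (0.02, 0.10):              # lower precision -> more crossings
        p = assignment_error_rate(k, sd)
        print(f"K = {k}, noise SD = {sd:.2f}: P(assignment error) = {p:.3f}")
```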
Affiliation(s)
- W. Jeffrey Johnston
- Graduate Program in Computational Neuroscience and the Department of Neurobiology, The University of Chicago, Chicago, Illinois, United States of America
- Center for Theoretical Neuroscience and Mortimer B. Zuckerman Mind, Brain and Behavior Institute, Columbia University, New York, New York, United States of America
- David J. Freedman
- Graduate Program in Computational Neuroscience and the Department of Neurobiology, The University of Chicago, Chicago, Illinois, United States of America
- Neuroscience Institute, The University of Chicago, Chicago, Illinois, United States of America
4
Lenninger M, Skoglund M, Herman PA, Kumar A. Are single-peaked tuning curves tuned for speed rather than accuracy? eLife 2023; 12:e84531. PMID: 37191292. PMCID: PMC10259479. DOI: 10.7554/elife.84531.
Abstract
According to the efficient coding hypothesis, sensory neurons are adapted to provide maximal information about the environment, given some biophysical constraints. In early visual areas, stimulus-induced modulations of neural activity (or tunings) are predominantly single-peaked. However, periodic tuning, as exhibited by grid cells, has been linked to a significant increase in decoding performance. Does this imply that the tuning curves in early visual areas are sub-optimal? We argue that the time scale at which neurons encode information is crucial for understanding the respective advantages of single-peaked and periodic tuning curves. Here, we show that the possibility of catastrophic (large) errors creates a trade-off between decoding time and decoding ability. We investigate how decoding time and stimulus dimensionality affect the optimal shape of tuning curves for removing catastrophic errors, focusing in particular on the spatial periods of a class of circular tuning curves. We show an overall trend for the minimal decoding time to increase with increasing Fisher information, implying a trade-off between accuracy and speed. This trade-off is reinforced whenever the stimulus dimensionality is high or there is ongoing activity. Thus, given constraints on processing speed, we present normative arguments for the existence of the single-peaked tuning organization observed in early visual areas.
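The speed-accuracy trade-off can be probed with a small simulation; the sketch below uses assumed von Mises tuning parameters (not the paper's model) to compare a single-peaked population with a two-module periodic population, matched in cell number and peak rate, when a circular stimulus is decoded from Poisson counts collected over windows of different length T.

```python
# Minimal sketch (assumed tuning parameters): maximum-likelihood decoding of a
# circular stimulus from Poisson counts collected over a window of length T, for a
# single-peaked population versus a periodic (grid-like) two-module population with
# the same number of cells and peak rate. The periodic code tends to win at long T
# but is more prone to catastrophic errors when T is short.
import numpy as np

rng = np.random.default_rng(2)
n_grid = 360
theta = np.linspace(0.0, 2 * np.pi, n_grid, endpoint=False)

def von_mises_population(n_cells, period_frac, kappa=4.0, r_max=20.0):
    """Circular tuning curves whose spatial period is period_frac * 2*pi."""
    prefs = np.linspace(0.0, 2 * np.pi, n_cells, endpoint=False)
    phase = (theta[None, :] - prefs[:, None]) / period_frac
    return r_max * np.exp(kappa * (np.cos(phase) - 1.0))

def rmse(rates, T, n_trials=3000):
    errs = np.empty(n_trials)
    for i in range(n_trials):
        idx = rng.integers(n_grid)
        counts = rng.poisson(rates[:, idx] * T)
        loglik = counts @ np.log(rates * T + 1e-12) - T * rates.sum(axis=0)
        errs[i] = np.angle(np.exp(1j * (theta[np.argmax(loglik)] - theta[idx])))
    return np.sqrt(np.mean(errs ** 2))

single = von_mises_population(24, period_frac=1.0)
periodic = np.vstack([von_mises_population(12, 1.0),        # coarse module
                      von_mises_population(12, 0.25)])      # fine module, period pi/2
for T in (0.02, 0.1, 0.5):                                  # decoding window (s)
    print(f"T = {T:4.2f} s: single-peaked RMSE = {rmse(single, T):.3f} rad, "
          f"periodic RMSE = {rmse(periodic, T):.3f} rad")
```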
Affiliation(s)
- Movitz Lenninger
- Division of Information Science and Engineering, KTH Royal Institute of Technology, Stockholm, Sweden
- Mikael Skoglund
- Division of Information Science and Engineering, KTH Royal Institute of Technology, Stockholm, Sweden
- Pawel Andrzej Herman
- Division of Computational Science and Technology, KTH Royal Institute of Technology, Stockholm, Sweden
- Arvind Kumar
- Division of Computational Science and Technology, KTH Royal Institute of Technology, Stockholm, Sweden
- Science for Life Laboratory, Stockholm, Sweden
5
Vaccari FE, Diomedi S, Filippini M, Hadjidimitrakis K, Fattori P. New insights on single-neuron selectivity in the era of population-level approaches. Front Integr Neurosci 2022; 16:929052. PMID: 36249900. PMCID: PMC9554653. DOI: 10.3389/fnint.2022.929052.
Abstract
In the past, neuroscience focused on individual neurons as the functional units of the nervous system, but over time this approach fell short of accounting for new experimental evidence, especially in associative and motor cortices. For this reason, and thanks to major technological advances, much of modern research has shifted its focus from the responses of single neurons to the activity of neural ensembles, now considered the real functional units of the system. On a microscale, however, individual neurons remain the computational components of these networks, so the study of population dynamics cannot dispense with the study of individual neurons, which constitute their natural substrate. In this new framework, ideas such as the capability of single cells to encode a specific stimulus (neural selectivity) may become obsolete and need to be profoundly revised. One step in this direction was made by introducing the concept of "mixed selectivity," the capacity of single cells to integrate multiple variables in a flexible way, allowing individual neurons to participate in different networks. In this review, we outline the most important features of mixed selectivity and present recent work demonstrating its presence in the associative areas of the posterior parietal cortex. Finally, in discussing these findings, we present some open questions that could be addressed by future studies.
Affiliation(s)
- Stefano Diomedi
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Matteo Filippini
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Alma Mater Research Institute for Human-Centered Artificial Intelligence, University of Bologna, Bologna, Italy
- Patrizia Fattori
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Alma Mater Research Institute for Human-Centered Artificial Intelligence, University of Bologna, Bologna, Italy
- Correspondence: Matteo Filippini, Patrizia Fattori
6
Sarel A, Palgi S, Blum D, Aljadeff J, Las L, Ulanovsky N. Natural switches in behaviour rapidly modulate hippocampal coding. Nature 2022; 609:119-127. PMID: 36002570. PMCID: PMC9433324. DOI: 10.1038/s41586-022-05112-2.
Abstract
Throughout their daily lives, animals and humans often switch between different behaviours. However, neuroscience research typically studies the brain while the animal is performing one behavioural task at a time, and little is known about how brain circuits represent switches between different behaviours. Here we tested this question in an ethological setting: two bats flew together in a long 135 m tunnel and switched between navigation when flying alone (solo) and collision avoidance as they flew past each other (cross-over). Bats increased their echolocation click rate before each cross-over, indicating attention to the other bat. Hippocampal CA1 neurons represented the bat's own position when flying alone (place coding). Notably, during cross-overs, neurons switched rapidly to jointly represent the interbat distance together with self-position. This neuronal switch was very fast, occurring within as little as 100 ms, and could be revealed owing to the very rapid natural behavioural switch. The neuronal switch correlated with the attention signal, as indexed by echolocation. Interestingly, the different place fields of the same neuron often exhibited very different tuning to interbat distance, creating a complex, non-separable coding of position by distance. Theoretical analysis showed that this complex representation yields more efficient coding. Overall, our results suggest that during dynamic natural behaviour, hippocampal neurons can rapidly switch their core computation to represent the relevant behavioural variables, supporting behavioural flexibility.
Affiliation(s)
- Ayelet Sarel
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel
- Shaked Palgi
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel
- Dan Blum
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel
- Johnatan Aljadeff
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel; Department of Neurobiology, University of California, San Diego, CA, USA
- Liora Las
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel
- Nachum Ulanovsky
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel
7
Perry BAL, Lomi E, Mitchell AS. Thalamocortical interactions in cognition and disease: the mediodorsal and anterior thalamic nuclei. Neurosci Biobehav Rev 2021; 130:162-177. PMID: 34216651. DOI: 10.1016/j.neubiorev.2021.05.032.
Abstract
The mediodorsal thalamus (MD) and anterior thalamic nuclei (ATN) are two adjacent brain nodes that support our ability to make decisions, learn, update information, form and retrieve memories, and find our way around. The MD and the prefrontal cortex (PFC) work in partnership to support cognitive processes linked to successful learning and decision-making, while the ATN and the extended hippocampal system together coordinate the encoding and retrieval of memories and successful spatial navigation. Although these roles may appear segregated, the MD and ATN together support higher cognitive functions, as both regulate and are influenced by interconnected fronto-temporal neural networks and subcortical inputs. Our review focuses on recent studies in animal models and in humans. This evidence is reshaping our understanding of the importance of MD and ATN cortico-thalamocortical pathways in influencing complex cognitive functions. Given the evidence from clinical settings and neuroscience research labs, the MD and ATN should be considered targets for effective treatments in neuropsychiatric disorders and neurodegeneration.
Affiliation(s)
- Brook A L Perry
- Department of Experimental Psychology, Oxford University, The Tinsley Building, Mansfield Road, OX1 3SR, United Kingdom
- Eleonora Lomi
- Department of Experimental Psychology, Oxford University, The Tinsley Building, Mansfield Road, OX1 3SR, United Kingdom
- Anna S Mitchell
- Department of Experimental Psychology, Oxford University, The Tinsley Building, Mansfield Road, OX1 3SR, United Kingdom.
8
Aulet LS, Lourenco SF. Numerosity and cumulative surface area are perceived holistically as integral dimensions. J Exp Psychol Gen 2020; 150:145-156. PMID: 32567881. DOI: 10.1037/xge0000874.
Abstract
Human and nonhuman animals have a remarkable capacity to rapidly estimate the quantity of objects in the environment. The dominant view of this ability posits an abstract numerosity code, uncontaminated by nonnumerical visual information. The present study provides novel evidence against this view by demonstrating that number and cumulative surface area are perceived holistically, behaving as classically defined integral dimensions. Whether assessed explicitly (Experiment 1) or implicitly (Experiment 2), perceived similarity for dot arrays that varied parametrically in number and cumulative area was best modeled by Euclidean, as opposed to city-block, distance within the stimulus space, comparable to other integral dimensions (brightness/saturation and radial frequency components) but different from separable dimensions (shape/color and brightness/size). Moreover, Euclidean distance remained the best-performing model even when compared to models that controlled for other magnitude properties (e.g., density) or image similarity. These findings suggest that numerosity perception entails the obligatory processing of nonnumerical magnitude.
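For readers unfamiliar with the two distance metrics, the following tiny worked example (hypothetical, normalized stimulus coordinates) shows how Euclidean and city-block distances differ for a pair of points in a two-dimensional (numerosity, cumulative area) stimulus space.

```python
# Tiny worked example (hypothetical, normalised coordinates): Euclidean versus
# city-block distance between two stimuli in a (numerosity, cumulative area) space.
# Integral dimensions are better fit by Euclidean distance, separable dimensions
# by city-block distance.
import numpy as np

a = np.array([0.2, 0.6])                    # stimulus 1: (numerosity, area)
b = np.array([0.5, 0.1])                    # stimulus 2

euclidean = np.sqrt(np.sum((a - b) ** 2))   # sqrt(0.3**2 + 0.5**2) ~= 0.583
city_block = np.sum(np.abs(a - b))          # 0.3 + 0.5 = 0.8
print(f"Euclidean = {euclidean:.3f}, city-block = {city_block:.3f}")
```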
9
Harel Y, Meir R. Optimal Multivariate Tuning with Neuron-Level and Population-Level Energy Constraints. Neural Comput 2020; 32:794-828. PMID: 32069175. DOI: 10.1162/neco_a_01267.
Abstract
Optimality principles have been useful in explaining many aspects of biological systems. In the context of neural encoding in sensory areas, optimality is naturally formulated in a Bayesian setting as the neural tuning that minimizes the mean decoding error. Many works optimize Fisher information, which approximates the minimum mean square error (MMSE) of the optimal decoder for long encoding times but may be misleading for short encoding times. We study MMSE-optimal neural encoding of a multivariate stimulus by uniform populations of spiking neurons, under firing-rate constraints for each neuron as well as for the entire population. We show that the population-level constraint is essential for the formulation of a well-posed problem with finite optimal tuning widths, and that the optimal tuning aligns with the principal components of the prior distribution. Numerical evaluation of the two-dimensional case shows that, for short encoding times, it is optimal to encode only the dimension with higher variance. We also compare direct MMSE optimization to optimization of several proxies for the MMSE: Fisher information, maximum-likelihood estimation error, and the Bayesian Cramér-Rao bound. We find that optimizing these proxies yields qualitatively misleading results regarding MMSE-optimal tuning and its dependence on encoding time and energy constraints.
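The gap between Fisher-information-based bounds and the achievable MMSE at short encoding times can be seen in a small simulation; the sketch below (assumed Gaussian tuning curves, a uniform prior on [0, 1], and illustrative rates, not the paper's uniform-population construction) compares the mean squared error of the posterior-mean decoder with the averaged Cramér-Rao bound 1/(T I_F(x)) as the encoding time T varies.

```python
# Minimal sketch (assumed tuning and prior, not the paper's setup): posterior-mean
# (MMSE) decoding of a uniform stimulus on [0, 1] from Poisson counts over time T,
# compared with the averaged Cramer-Rao bound 1 / (T * I_F(x)). For short T the
# Fisher-information bound is typically far more optimistic than the achievable MMSE.
import numpy as np

rng = np.random.default_rng(3)
n_grid, n_cells = 200, 20
x = np.linspace(0.0, 1.0, n_grid)
centers = np.linspace(0.0, 1.0, n_cells)
width, r_max = 0.04, 15.0
rates = r_max * np.exp(-0.5 * ((x[None, :] - centers[:, None]) / width) ** 2)

def mse_posterior_mean(T, n_trials=4000):
    errs = np.empty(n_trials)
    for i in range(n_trials):
        idx = rng.integers(n_grid)
        counts = rng.poisson(rates[:, idx] * T)
        logpost = counts @ np.log(rates * T + 1e-12) - T * rates.sum(axis=0)
        post = np.exp(logpost - logpost.max())
        post /= post.sum()
        errs[i] = post @ x - x[idx]                    # posterior mean minus truth
    return np.mean(errs ** 2)

drates = np.gradient(rates, x, axis=1)
fisher = np.sum(drates ** 2 / (rates + 1e-12), axis=0)  # Fisher information per unit time
for T in (0.02, 0.1, 1.0):
    print(f"T = {T:4.2f} s: empirical MMSE ~ {mse_posterior_mean(T):.4f}, "
          f"mean Cramer-Rao bound = {np.mean(1.0 / (T * fisher)):.4f}")
```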
Affiliation(s)
- Yuval Harel
- Department of Electrical Engineering, Technion-Israel Institute of Technology, Haifa 3200003, Israel
- Ron Meir
- Department of Electrical Engineering, Technion-Israel Institute of Technology, Haifa 3200003, Israel
10
Johnston WJ, Palmer SE, Freedman DJ. Nonlinear mixed selectivity supports reliable neural computation. PLoS Comput Biol 2020; 16:e1007544. PMID: 32069273. PMCID: PMC7048320. DOI: 10.1371/journal.pcbi.1007544.
Abstract
Neuronal activity in the brain is variable, yet both perception and behavior are generally reliable. How does the brain achieve this? Here, we show that the conjunctive coding of multiple stimulus features, commonly known as nonlinear mixed selectivity, may be used by the brain to support reliable information transmission using unreliable neurons. Nonlinearly mixed feature representations have been observed throughout primary sensory, decision-making, and motor brain areas. In these areas, different features are almost always nonlinearly mixed to some degree, rather than represented separately or with only additive (linear) mixing, which we refer to as pure selectivity. Mixed selectivity has been previously shown to support flexible linear decoding for complex behavioral tasks. Here, we show that it has another important benefit: in many cases, it makes orders of magnitude fewer decoding errors than pure selectivity even when both forms of selectivity use the same number of spikes. This benefit holds for sensory, motor, and more abstract, cognitive representations. Further, we show experimental evidence that mixed selectivity exists in the brain even when it does not enable behaviorally useful linear decoding. This suggests that nonlinear mixed selectivity may be a general coding scheme exploited by the brain for reliable and efficient neural computation.
Neurons in the brain are unreliable, while both perception and behavior are generally reliable. In this work, we study how the neural population response to sensory, motor, and cognitive features can produce this reliability. Across the brain, single neurons have been shown to respond to particular conjunctions of multiple features, termed nonlinear mixed selectivity. In this work, we show that populations of these mixed selective neurons lead to many fewer decoding errors than populations without mixed selectivity, even when both neural codes are given the same number of spikes. We show that the reliability benefits from mixed selectivity are quite general, holding under different assumptions about metabolic costs and neural noise as well as for both categorical and sensory errors. Further, previous theoretical work has shown that mixed selectivity enables the learning of complex behaviors with simple decoders. Through the analysis of neural data, we show that the brain implements mixed selectivity even when it would not serve this purpose. Thus, we argue that the brain also implements mixed selectivity to exploit its general benefits for reliable and efficient neural computation.
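A toy version of the pure-versus-mixed comparison is sketched below (illustrative rates and a shared signal-spike budget; this is not the paper's analysis): two categorical features are decoded by maximum likelihood from Poisson counts, using either one subpopulation per feature (pure selectivity) or one cell per feature conjunction (nonlinear mixed selectivity).

```python
# Minimal sketch (illustrative rates, not the paper's analysis): decode a pair of
# categorical features from Poisson spike counts with either a "pure" code (one
# one-hot subpopulation per feature) or a conjunctive, nonlinearly mixed code (one
# cell per feature combination), giving both codes the same expected number of
# signal spikes per stimulus.
import numpy as np
from itertools import product

rng = np.random.default_rng(4)
K = 5                    # values per feature
baseline = 0.1           # expected background spikes per cell
signal_spikes = 8.0      # expected signal spikes per stimulus (shared budget)

def rates_pure(a, b):
    """2K cells: K one-hot cells for feature A and K for feature B."""
    r = np.full(2 * K, baseline)
    r[a] += signal_spikes / 2
    r[K + b] += signal_spikes / 2
    return r

def rates_mixed(a, b):
    """K*K conjunctive cells: a single cell for each (a, b) combination."""
    r = np.full(K * K, baseline)
    r[a * K + b] += signal_spikes
    return r

def error_rate(rate_fn, n_trials=20000):
    combos = list(product(range(K), repeat=2))
    table = np.array([rate_fn(a, b) for a, b in combos])   # candidate rate vectors
    log_table = np.log(table)
    errors = 0
    for _ in range(n_trials):
        idx = rng.integers(len(combos))
        counts = rng.poisson(table[idx])
        loglik = counts @ log_table.T - table.sum(axis=1)
        errors += int(np.argmax(loglik) != idx)
    return errors / n_trials

print(f"pure selectivity error rate:  {error_rate(rates_pure):.4f}")
print(f"mixed selectivity error rate: {error_rate(rates_mixed):.4f}")
```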
Affiliation(s)
- W. Jeffrey Johnston
- Graduate Program in Computational Neuroscience, The University of Chicago, Chicago, Illinois, United States of America
- Department of Neurobiology, The University of Chicago, Chicago, Illinois, United States of America
- Stephanie E. Palmer
- Graduate Program in Computational Neuroscience, The University of Chicago, Chicago, Illinois, United States of America
- Grossman Institute for Neuroscience, Quantitative Biology, and Human Behavior, The University of Chicago, Chicago, Illinois, United States of America
- Department of Organismal Biology and Anatomy, The University of Chicago, Chicago, Illinois, United States of America
- Department of Physics, The University of Chicago, Chicago, Illinois, United States of America
- David J. Freedman
- Graduate Program in Computational Neuroscience, The University of Chicago, Chicago, Illinois, United States of America
- Department of Neurobiology, The University of Chicago, Chicago, Illinois, United States of America
- Grossman Institute for Neuroscience, Quantitative Biology, and Human Behavior, The University of Chicago, Chicago, Illinois, United States of America
11
Angelaki DE, Laurens J. The head direction cell network: attractor dynamics, integration within the navigation system, and three-dimensional properties. Curr Opin Neurobiol 2020; 60:136-144. PMID: 31877492. PMCID: PMC7002189. DOI: 10.1016/j.conb.2019.12.002.
Abstract
Knowledge of head direction cell function has progressed remarkably in recent years. The predominant theory that they form an attractor has been confirmed by several experiments. Candidate pathways that may convey visual input have been identified. The pre-subicular circuitry that conveys head direction signals to the medial entorhinal cortex, potentially sustaining path integration by grid cells, has been resolved. Although the neuronal substrate of the attractor remains unknown in mammals, a simple head direction network, whose structure is astoundingly similar to neuronal models theorized decades earlier, has been identified in insects. Finally, recent experiments have revealed that these cells do not encode head direction in the horizontal plane only, but also in vertical planes, thus providing a 3D orientation signal.
Affiliation(s)
- Dora E Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas, USA; Center for Neural Science and Tandon School of Engineering, New York University, NY, USA
- Jean Laurens
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas, USA; Ernst Strüngmann Institute for Neuroscience, Frankfurt, Germany.
12
Yu F, Shang J, Hu Y, Milford M. NeuroSLAM: a brain-inspired SLAM system for 3D environments. Biol Cybern 2019; 113:515-545. PMID: 31571007. DOI: 10.1007/s00422-019-00806-9.
Abstract
Roboticists have long drawn inspiration from nature to develop navigation and simultaneous localization and mapping (SLAM) systems such as RatSLAM. Animals such as birds and bats possess superlative navigation capabilities, robustly navigating over large, three-dimensional environments by leveraging an internal neural representation of space combined with external sensory cues and self-motion cues. This paper presents a novel neuro-inspired 4DoF (degrees of freedom) SLAM system named NeuroSLAM, based upon computational models of 3D grid cells and multilayered head direction cells, integrated with a vision system that provides external visual cues and self-motion cues. NeuroSLAM's neural network activity drives the creation of a multilayered graphical experience map in real time, enabling relocalization and loop closure through sequences of familiar local visual cues. A multilayered experience map relaxation algorithm is used to correct cumulative errors in path integration after loop closure. Using both synthetic and real-world datasets comprising complex, multilayered indoor and outdoor environments, we demonstrate that NeuroSLAM consistently produces topologically correct three-dimensional maps.
Affiliation(s)
- Fangwen Yu
- Faculty of Information Engineering, China University of Geosciences and National Engineering Research Center for Geographic Information System, Wuhan, 430074, China
- Science and Engineering Faculty, Queensland University of Technology and Australian Centre for Robotic Vision, Brisbane, QLD, 4000, Australia
- Jianga Shang
- Faculty of Information Engineering, China University of Geosciences and National Engineering Research Center for Geographic Information System, Wuhan, 430074, China.
- Youjian Hu
- Faculty of Information Engineering, China University of Geosciences and National Engineering Research Center for Geographic Information System, Wuhan, 430074, China
- Michael Milford
- Science and Engineering Faculty, Queensland University of Technology and Australian Centre for Robotic Vision, Brisbane, QLD, 4000, Australia
13
Anticipatory Neural Activity Improves the Decoding Accuracy for Dynamic Head-Direction Signals. J Neurosci 2019; 39:2847-2859. PMID: 30692223. DOI: 10.1523/jneurosci.2605-18.2019.
Abstract
Insects and vertebrates harbor specific neurons that encode the animal's head direction (HD) and provide an internal compass for spatial navigation. Each HD cell fires most strongly in one preferred direction. As the animal turns its head, however, HD cells in the rat anterodorsal thalamic nucleus (ADN) and other brain areas fire before their preferred direction is reached, as if the neurons anticipated the future HD. This phenomenon has been explained at a mechanistic level, but a functional interpretation is still missing. To close this gap, we use a computational approach based on the movement statistics of male rats and a simple model for the neural responses within the ADN HD network. Network activity is read out using population vectors in a biologically plausible manner, so that only past spikes are taken into account. We find that anticipatory firing improves the representation of the present HD by reducing the motion-induced temporal bias inherent in causal decoding. The amount of anticipation observed in ADN enhances the precision of the HD compass read-out by up to 40%. More generally, our theoretical framework predicts that neural integration times not only reflect biophysical constraints but also the statistics of behaviorally relevant stimuli; in particular, anticipatory tuning should be found wherever neurons encode sensory signals that change gradually in time.
SIGNIFICANCE STATEMENT: Across different brain regions, populations of noisy neurons encode dynamically changing stimuli. Decoding a time-varying stimulus from the population response involves a trade-off: for short read-out times, stimulus estimates are unreliable because the number of stochastic spikes is small; for long read-outs, estimates are biased because they lag behind the true stimulus. We show that optimal decoding of temporally correlated stimuli not only relies on finding the right read-out time window but requires neurons to anticipate future stimulus values. We apply this general framework to the rodent head-direction system and show that the experimentally observed anticipation of future head directions can be explained at a quantitative level from the neuronal tuning properties, network size, and the animal's head-movement statistics.
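The effect of anticipatory firing on a causal population-vector read-out can be illustrated with a small simulation; the sketch below uses illustrative parameters (a ring of von Mises-tuned HD cells, constant turning speed, an 80 ms read-out window, and a 25 ms anticipatory interval), not the paper's fitted model of ADN.

```python
# Minimal sketch (illustrative parameters, not the paper's fitted ADN model): a ring
# of von Mises-tuned head-direction cells fires Poisson spikes while the head turns
# at constant angular velocity. Head direction is decoded with a causal population
# vector over the last `window` seconds, either without anticipation (ati = 0) or
# with tuning shifted forward in time by an anticipatory interval `ati`.
import numpy as np

rng = np.random.default_rng(5)
dt, duration = 0.001, 20.0                     # time step and simulated time (s)
n_cells, r_max, kappa = 60, 40.0, 4.0
omega = 3.0                                    # turning speed (rad/s)
prefs = np.linspace(0.0, 2 * np.pi, n_cells, endpoint=False)
t = np.arange(0.0, duration, dt)
hd = (omega * t) % (2 * np.pi)                 # true head direction

def mean_abs_error(ati, window=0.08):
    # anticipatory cells fire according to the head direction `ati` seconds ahead
    rates = r_max * np.exp(kappa * (np.cos(hd[None, :] + omega * ati - prefs[:, None]) - 1.0))
    spikes = rng.poisson(rates * dt)                       # (n_cells, n_bins)
    z = np.exp(1j * prefs) @ spikes.astype(float)          # per-bin population vector
    w = int(window / dt)
    z_causal = np.convolve(z, np.ones(w))[: len(t)]        # running sum over past window
    err = np.angle(np.exp(1j * (np.angle(z_causal) - hd)))
    return np.mean(np.abs(err[w:]))                        # skip the start-up transient

for ati in (0.0, 0.025):                                   # 0 ms vs 25 ms anticipation
    print(f"ATI = {ati * 1000:4.0f} ms: mean |decoding error| = "
          f"{np.degrees(mean_abs_error(ati)):.2f} deg")
```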