1. Stavropoulos A, Lakshminarasimhan KJ, Angelaki DE. Belief embodiment through eye movements facilitates memory-guided navigation. bioRxiv 2023:2023.08.21.554107. PMID: 37662309; PMCID: PMC10473632; DOI: 10.1101/2023.08.21.554107.
Abstract
Neural network models optimized for task performance often excel at predicting neural activity but do not explain other properties, such as the distributed representation across functionally distinct areas. Distributed representations may arise from animals' strategies for resource utilization; however, fixation-based paradigms deprive animals of a vital resource: eye movements. During a naturalistic task in which humans use a joystick to steer and catch flashing fireflies in a virtual environment lacking position cues, subjects physically track the latent task variable with their gaze. We show that this strategy also holds during an inertial version of the task without optic flow, and demonstrate that these task-relevant eye movements reflect an embodiment of the subjects' dynamically evolving internal beliefs about the goal. A neural network model with tuned recurrent connectivity between oculomotor and evidence-integrating frontoparietal circuits accounted for this behavioral strategy. Critically, this model explained neural data from monkeys' posterior parietal cortex better than task-optimized models unconstrained by such an oculomotor-based cognitive strategy. These results highlight the importance of unconstrained movement in working-memory computations and establish a functional significance of oculomotor signals, via embodied cognition, for evidence-integration and navigation computations.
Affiliation(s)
- Dora E. Angelaki
- Center for Neural Science, New York University, New York, NY, USA
- Tandon School of Engineering, New York University, New York, NY, USA
2. Jin Y, Jensen G, Gottlieb J, Ferrera V. Superstitious learning of abstract order from random reinforcement. Proc Natl Acad Sci U S A 2022;119:e2202789119. PMID: 35998221; PMCID: PMC9436361; DOI: 10.1073/pnas.2202789119.
Abstract
Humans and other animals often infer spurious associations among unrelated events. However, such superstitious learning is usually accounted for by conditioned associations, raising the question of whether an animal could develop more complex cognitive structures independent of reinforcement. Here, we tasked monkeys with discovering the serial order of two pictorial sets: a "learnable" set, in which the stimuli were implicitly ordered and monkeys were rewarded for choosing the higher-rank stimulus, and an "unlearnable" set, in which stimuli were unordered and feedback was random regardless of the choice. We replicated prior results that monkeys reliably learned the implicit order of the learnable set. Surprisingly, the monkeys behaved as though some ordering also existed in the unlearnable set, showing consistent choice preferences that transferred to novel untrained pairs in this set, even under a preference-discouraging reward schedule that rewarded the less-often-selected stimulus more frequently. In simulations, a model-free reinforcement learning algorithm (Q-learning) displayed a degree of consistent ordering within the unlearnable set but, unlike the monkeys, failed to do so under the preference-discouraging reward schedule. Our results suggest that monkeys infer abstract structures from objectively random events using heuristics that extend beyond stimulus-outcome conditional learning to more cognitive, model-based learning mechanisms.
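The Q-learning simulation described in the abstract can be sketched in a few lines. This is a generic illustration, not the authors' implementation: the number of stimuli, learning rate, greedy choice rule, and seed are all illustrative assumptions. It shows how delta-rule value updates under fully random reward can still settle into an arbitrary but internally consistent ranking:

```python
import random

def q_learning_random_feedback(n_stimuli=7, n_trials=2000, alpha=0.1, seed=0):
    """Model-free Q-learning on pairwise choices with fully random reward.
    All parameters are illustrative, not the paper's task settings."""
    rng = random.Random(seed)
    q = [0.0] * n_stimuli                            # one learned value per stimulus
    for _ in range(n_trials):
        a, b = rng.sample(range(n_stimuli), 2)       # present a random pair
        choice = a if q[a] >= q[b] else b            # greedy choice on current values
        reward = 1.0 if rng.random() < 0.5 else 0.0  # feedback is pure noise
        q[choice] += alpha * (reward - q[choice])    # delta-rule update
    # ranking by learned value: arbitrary, but consistent for a given history
    return sorted(range(n_stimuli), key=lambda s: -q[s])
```

With a fixed seed the ranking is reproducible, mirroring the consistent (but spurious) preferences the abstract reports; a greedy learner like this has no mechanism for the model-based inference the monkeys appear to use.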
Affiliation(s)
- Yuhao Jin
- Department of Biological Sciences, Columbia University, New York, NY 10027
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027
- Greg Jensen
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027
- Department of Psychology, Reed College, Portland, OR 97202
- Department of Neuroscience, Columbia University, New York, NY 10027
- Jacqueline Gottlieb
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027
- Department of Neuroscience, Columbia University, New York, NY 10027
- Kavli Institute for Brain Science, Columbia University, New York, NY 10027
- Vincent Ferrera
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027
- Department of Neuroscience, Columbia University, New York, NY 10027
- Kavli Institute for Brain Science, Columbia University, New York, NY 10027
3. Song M, Wang X, Zhang H, Li J. Proactive Information Sampling in Value-Based Decision-Making: Deciding When and Where to Saccade. Front Hum Neurosci 2019;13:35. PMID: 30804770; PMCID: PMC6378309; DOI: 10.3389/fnhum.2019.00035.
Abstract
Evidence accumulation has been the core component in recent developments of perceptual and value-based decision-making theories. Most studies have focused on the evaluation of evidence between alternative options. What remains largely unknown is the process that prepares evidence: how should a decision-maker sample different sources of information sequentially when only one source can be sampled at a time? Here we propose a theoretical framework prescribing how different sources of information should be sampled to facilitate the decision process: beliefs about different noisy sources are updated in a Bayesian manner, and participants can proactively allocate sampling resources (i.e., saccades) among the sources to maximize the information gained in this process. We show that our framework can account for human participants' actual choice and saccade behavior in a two-alternative value-based decision-making task. Moreover, our framework makes novel predictions about empirical eye-movement patterns.
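The sampling policy the abstract describes can be illustrated with a minimal Gaussian sketch. This is not the authors' model; the conjugate-Gaussian belief and the closed-form information-gain criterion below are simplifying assumptions. Beliefs are updated by Bayes' rule, and the next saccade goes to the source whose sample is expected to reduce uncertainty most:

```python
import math

def posterior_update(mu, var, x, noise_var):
    """Conjugate Gaussian belief update after observing a noisy sample x."""
    k = var / (var + noise_var)              # Kalman-style gain
    return mu + k * (x - mu), (1.0 - k) * var

def expected_info_gain(var, noise_var):
    """Expected entropy reduction (nats) from one more noisy sample."""
    post_var = var * noise_var / (var + noise_var)
    return 0.5 * math.log(var / post_var)

def choose_fixation(beliefs, noise_var):
    """Saccade to the source (mu, var) whose next sample is most informative."""
    return max(range(len(beliefs)),
               key=lambda i: expected_info_gain(beliefs[i][1], noise_var))
```

For Gaussians with equal sampling noise, this criterion reduces to fixating the currently most uncertain source, one concrete sense in which saccades can "maximize information gain".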
Affiliation(s)
- Mingyu Song
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, United States
- Xingyu Wang
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Department of Industrial Engineering and Management Sciences, Northwestern University, Evanston, IL, United States
- Hang Zhang
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing, China
- Peking-Tsinghua Center for Life Sciences, Beijing, China
- Jian Li
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
4. Ryali CK, Reddy G, Yu AJ. Demystifying excessively volatile human learning: A Bayesian persistent prior and a neural approximation. Adv Neural Inf Process Syst 2018;31:2781-2790. PMID: 34366637; PMCID: PMC8341474.
Abstract
Understanding how humans and animals learn statistical regularities in stable and volatile environments, and how they use these regularities to make predictions and decisions, is an important problem in neuroscience and psychology. Using a Bayesian modeling framework, specifically the Dynamic Belief Model (DBM), it has previously been shown that humans tend to make the default assumption that environmental statistics undergo abrupt, unsignaled changes, even when those statistics are actually stable. Because exact Bayesian inference in this setting, an example of switching state-space models, is computationally intensive, a number of approximately Bayesian and heuristic algorithms have been proposed to account for learning and prediction in the brain. Here, we examine a neurally plausible algorithm, a special case of leaky integration dynamics we denote EXP (for exponential filtering), that is significantly simpler than all previously suggested algorithms except the delta-learning rule, and that far outperforms the delta rule in approximating Bayesian prediction performance. We derive the theoretical relationship between DBM and EXP, and show that EXP gains computational efficiency by forgoing the representation of inferential uncertainty (as does the delta rule), but that it nevertheless achieves near-Bayesian performance because it incorporates a "persistent prior" influence unique to DBM and absent from the other algorithms. Furthermore, we show that EXP is comparable to DBM, and better than all other models, in reproducing human behavior in a visual search task, suggesting that human learning and prediction also incorporate an element of persistent prior. More broadly, our work demonstrates that when observations are information-poor, detecting changes or modulating the learning rate is difficult and, consequently, unnecessary for making Bayes-optimal predictions.
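A minimal reading of the EXP (leaky-integration) predictor can be sketched as follows. The recursion and all parameter values are illustrative assumptions, not the paper's fitted model; the point is only the qualitative structure, a leaky running average plus a constant pull toward a prior:

```python
def exp_filter(observations, eta=0.2, gamma=0.7, prior=0.5):
    """Leaky integration with a persistent-prior pull: each prediction mixes
    the latest observation, the decayed previous estimate, and a constant
    bias toward the prior (eta + gamma < 1 keeps the mixture convex).
    Parameter values are illustrative only."""
    p = prior
    preds = []
    for x in observations:
        preds.append(p)  # predict before seeing the next observation
        p = eta * x + gamma * p + (1.0 - eta - gamma) * prior
    return preds
```

Because of the constant pull toward `prior`, predictions never fully converge to the empirical rate (with these illustrative parameters, a stream of 1s settles at 5/6 rather than 1), which is the "persistent prior" signature distinguishing this scheme from a plain delta rule.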
Affiliation(s)
- Chaitanya K Ryali
- Department of Computer Science and Engineering, University of California San Diego, 9500 Gilman Drive, La Jolla, CA 92093
- Gautam Reddy
- Department of Physics, University of California San Diego, 9500 Gilman Drive, La Jolla, CA 92093
- Angela J Yu
- Department of Cognitive Science, University of California San Diego, 9500 Gilman Drive, La Jolla, CA 92093
5. Efficient Active Sensing with Categorized Further Explorations for a Home Behavior-Monitoring Robot. J Healthc Eng 2017;2017:6952695. PMID: 29359038; PMCID: PMC5735316; DOI: 10.1155/2017/6952695.
Abstract
Mobile robotics is a potential solution to home behavior monitoring for the elderly. For a mobile robot in the real world, perception is subject to several types of uncertainty, such as ambiguity between a target object and surrounding objects, and occlusion by furniture. These problems are more serious for a home behavior-monitoring system, which aims to recognize the activity of a target person accurately in spite of such uncertainties. The system proposed here detects irregularities and categorizes situations requiring further exploration, which strategically maximizes the information gained for activity recognition while minimizing sensing costs. Two active-sensing schemes, based on heuristic and template-matching irregularity detection, were implemented and examined for body-contour-based activity recognition. Their time cost and activity-recognition accuracy were evaluated through experiments in both a controlled scenario and a home-living scenario. The results showed that the categorized further explorations guided the robot to sense the target person actively, and that the proposed approach achieved higher activity-recognition accuracy.
6. Sutton EE, Demir A, Stamper SA, Fortune ES, Cowan NJ. Dynamic modulation of visual and electrosensory gains for locomotor control. J R Soc Interface 2016;13:20160057. PMID: 27170650; DOI: 10.1098/rsif.2016.0057.
Abstract
Animal nervous systems resolve sensory conflict for the control of movement. For example, the glass knifefish, Eigenmannia virescens, relies on visual and electrosensory feedback as it swims to maintain position within a moving refuge. To study how signals from these two parallel sensory streams are used in refuge tracking, we constructed a novel augmented-reality apparatus that enables the independent manipulation of visual and electrosensory cues to freely swimming fish (n = 5). We evaluated the linearity of multisensory integration, how the relative perceptual weights given to vision and electrosense change with sensory salience, and the effect of the magnitude of sensory conflict on sensorimotor gain. First, we found that tracking behaviour obeys superposition of the sensory inputs, suggesting linear sensorimotor integration. In addition, fish rely more on vision when electrosensory salience is reduced, suggesting that fish dynamically alter sensorimotor gains in a manner consistent with Bayesian integration. However, the magnitude of sensory conflict did not significantly affect sensorimotor gain. These studies lay the theoretical and experimental groundwork for future work investigating multisensory control of locomotion.
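The Bayesian-integration interpretation (relative weights shifting toward the more reliable sense) corresponds to standard inverse-variance cue combination for independent Gaussian estimates. A minimal sketch, with illustrative variances rather than the paper's fitted gains:

```python
def bayes_weights(var_vision, var_electro):
    """Inverse-variance (reliability) weights for two independent Gaussian cues."""
    prec_v, prec_e = 1.0 / var_vision, 1.0 / var_electro
    w_v = prec_v / (prec_v + prec_e)
    return w_v, 1.0 - w_v

def fuse(est_vision, var_vision, est_electro, var_electro):
    """Combined position estimate under reliability weighting."""
    w_v, w_e = bayes_weights(var_vision, var_electro)
    return w_v * est_vision + w_e * est_electro
```

Degrading electrosensory salience maps here to raising `var_electro`, which increases the weight on vision, the qualitative pattern the abstract reports.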
Affiliation(s)
- Erin E Sutton
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Alican Demir
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Sarah A Stamper
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Eric S Fortune
- Department of Biological Sciences, New Jersey Institute of Technology, Newark, NJ, USA
- Noah J Cowan
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA
7.
Abstract
A key component of interacting with the world is how to direct one's sensors so as to extract task-relevant information, a process referred to as active sensing. In this review, we present a framework for active sensing that forms a closed loop between an ideal observer, which extracts task-relevant information from a sequence of observations, and an ideal planner, which specifies the actions that lead to the most informative observations. We discuss active sensing as an approximation to exploration in the wider framework of reinforcement learning and, conversely, discuss several sensory, perceptual, and motor processes as approximations to active sensing. Based on this framework, we introduce a taxonomy of sensing strategies, identify hallmarks of active sensing, and discuss recent advances in formalizing and quantifying active sensing.
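The observer-planner loop the review describes can be sketched for a toy discrete case. The two-hypothesis setup and sensor accuracies below are illustrative assumptions, not from the review: the observer applies Bayes' rule, and the planner chooses the sensor expected to leave the least posterior uncertainty:

```python
import math

def entropy(p):
    """Binary entropy (nats) of a belief P(H=1) = p."""
    return -sum(q * math.log(q) for q in (p, 1.0 - p) if q > 0)

def posterior(p, acc, obs):
    """Ideal observer: Bayes update of P(H=1) given a binary sensor of accuracy acc."""
    like1 = acc if obs == 1 else 1.0 - acc
    like0 = 1.0 - acc if obs == 1 else acc
    return p * like1 / (p * like1 + (1.0 - p) * like0)

def plan(p, accuracies):
    """Ideal planner: pick the sensor minimizing expected posterior entropy."""
    def expected_entropy(acc):
        p_obs1 = p * acc + (1.0 - p) * (1.0 - acc)
        return (p_obs1 * entropy(posterior(p, acc, 1))
                + (1.0 - p_obs1) * entropy(posterior(p, acc, 0)))
    return min(range(len(accuracies)), key=lambda i: expected_entropy(accuracies[i]))
```

Alternating `plan` (to pick a sensor) and `posterior` (to absorb its reading) closes the loop; richer action costs or reward structures would push this toward the full reinforcement-learning setting the review situates active sensing within.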
Affiliation(s)
- Scott Cheng-Hsin Yang
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, UK
- Daniel M Wolpert
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, UK
- Máté Lengyel
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, UK
- Department of Cognitive Science, Central European University, Budapest H-1051, Hungary