1
Dong Y, Lengyel G, Shivkumar S, Anzai A, DiRisio GF, Haefner RM, DeAngelis GC. How to reward animals based on their subjective percepts: A Bayesian approach to online estimation of perceptual biases. bioRxiv 2025:2024.07.25.605047. PMID: 39091868; PMCID: PMC11291170; DOI: 10.1101/2024.07.25.605047
Abstract
Elucidating the neural basis of perceptual biases, such as those produced by visual illusions, can provide powerful insights into the neural mechanisms of perceptual inference. However, studying the subjective percepts of animals poses a fundamental challenge: unlike human participants, animals cannot be verbally instructed to report what they see, hear, or feel. Instead, they must be trained to perform a task for reward, and researchers must infer from their responses what the animal perceived. However, animals' responses are shaped by reward feedback, thus raising the major concern that the reward regimen may alter the animal's decision strategy or even their intrinsic perceptual biases. Using simulations of a reinforcement learning agent, we demonstrate that conventional reward strategies fail to allow accurate estimation of perceptual biases. We developed a method that estimates perceptual bias during task performance and then computes the reward for each trial based on the evolving estimate of the animal's perceptual bias. Our approach makes use of multiple stimulus contexts to dissociate perceptual biases from decision-related biases. Starting with an informative prior, our Bayesian method updates a posterior over the perceptual bias after each trial. The prior can be specified based on data from past sessions, thus reducing the variability of the online estimates and allowing them to converge to a stable estimate over a small number of trials. After validating our method on synthetic data, we apply it to estimate perceptual biases of monkeys in a motion direction discrimination task in which varying background optic flow induces robust perceptual biases. This method overcomes an important challenge to understanding the neural basis of subjective percepts.
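The trial-by-trial posterior update over the bias that this abstract describes can be illustrated with a simple grid-based sketch, assuming a cumulative-Gaussian psychometric function with known slope. All names, the grid, and the noise parameter `sigma` below are illustrative, not the authors' implementation:

```python
import math
import random

def psychometric(stim, bias, sigma=1.0):
    """P(choice = 1, e.g. 'rightward') for stimulus value stim, given a perceptual bias."""
    return 0.5 * (1.0 + math.erf((stim - bias) / (sigma * math.sqrt(2.0))))

def update_posterior(grid, log_post, stim, choice, sigma=1.0):
    """One trial's Bayesian update of the (log) posterior over candidate biases."""
    new = []
    for b, lp in zip(grid, log_post):
        p = min(max(psychometric(stim, b, sigma), 1e-12), 1.0 - 1e-12)  # guard log(0)
        new.append(lp + math.log(p if choice == 1 else 1.0 - p))
    m = max(new)
    z = m + math.log(sum(math.exp(v - m) for v in new))  # log normalizer
    return [v - z for v in new]

def posterior_mean(grid, log_post):
    return sum(b * math.exp(lp) for b, lp in zip(grid, log_post))

# Simulate an observer with a true perceptual bias and recover it online.
random.seed(1)
true_bias = 1.0
grid = [i * 0.1 for i in range(-50, 51)]          # candidate biases from -5 to +5
log_post = [-math.log(len(grid))] * len(grid)      # flat prior; past sessions could sharpen it
for _ in range(500):
    stim = random.uniform(-3.0, 3.0)
    choice = 1 if random.random() < psychometric(stim, true_bias) else 0
    log_post = update_posterior(grid, log_post, stim, choice)
```

After a few hundred simulated trials, the posterior mean recovers the bias that generated the choices; that evolving estimate is the quantity on which a bias-corrected reward rule of the kind described above would be based.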
Affiliation(s)
- Yelin Dong
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Gabor Lengyel
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Sabyasachi Shivkumar
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Akiyuki Anzai
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Grace F DiRisio
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Ralf M Haefner
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
2
Giguere AP, Cavanaugh MR, Huxlin KR, Tadin D, Fajen BR, Diaz GJ. The effect of unilateral cortical blindness on lane position and gaze behavior in a virtual reality steering task. bioRxiv 2025:2025.02.06.636925. PMID: 39974989; PMCID: PMC11839085; DOI: 10.1101/2025.02.06.636925
Abstract
Adults with cortically-induced blindness (CB) affecting a quarter to a half of their visual field show greater variability in lane positioning when driving compared to those with intact vision. Because humans rely on visual information from optic flow to control steering, we hypothesized that these lane biases are caused in part by a disruption to motion processing caused by CB. To investigate, we examined the steering behavior of 21 CB drivers (11 left-sided, 10 right-sided visual deficits) and 9 visually intact controls in a naturalistic virtual environment. Participants were instructed to maintain a central lane position while traveling at 19 m/s along a procedurally generated single-lane road. Turn direction (left/right) and turn radius (35m/55m/75m) varied between trials, and the quality of optic flow information was indirectly manipulated by altering the environmental texture density (low/medium/high). Right-sided CB participants maintained a similar average distance from the inner road edge as controls. Those with left-sided CB were less affected by changes in optic flow and turn direction. These differences were not explained by age, time since stroke, sparing of central vision, gaze direction, or saccade rate. Our results suggest that some left-sided CB participants place a lower weighting on optic flow information in the control of steering, possibly as a result of lateralization in the processing of motion. More broadly, our findings show that CB steering and gaze behavior are remarkably preserved despite the presence of visual deficits across large portions of the visual field.
3
Liang J, Zhaoping L. Trans-saccadic integration for object recognition peters out with pre-saccadic object eccentricity as target-directed saccades become more saliency-driven. Vision Res 2025; 226:108500. PMID: 39608201; DOI: 10.1016/j.visres.2024.108500. Received 2024-03-13; revised 2024-07-23; accepted 2024-10-02.
Abstract
Bringing objects from peripheral locations to the fovea via saccades facilitates their recognition. Human observers integrate pre- and post-saccadic information for recognition. This integration has only been investigated using instructed saccades to prescribed locations. Typically, the target has a fixed pre-saccadic location in an uncluttered scene and is viewed for a pre-determined post-saccadic duration. Consequently, whether trans-saccadic integration is limited or absent when the pre-saccadic target eccentricity is too large in cluttered scenes is unknown. Our study revealed this limit during visual exploration, when observers decided themselves when and where to make their saccades. We asked thirty observers (400 trials each) to find and report as quickly as possible a target amongst 404 non-targets in an image spanning 57.3°×33.8° in visual angle. We measured the target's pre-saccadic eccentricity e, the duration Tpre of the fixation before the saccade, and the post-saccadic foveal viewing duration Tpost. This Tpost increased with e before starting to saturate around eccentricity ep=10°-20°. Meanwhile, Tpre increased much more slowly with e and started decreasing before ep. These observations imply the following at sufficiently large pre-saccadic eccentricities: trans-saccadic integration ceases, target recognition relies exclusively on post-saccadic foveal vision, and the decision to saccade to the target relies exclusively on target saliency rather than identification. These implications should be applicable to general behavior, although ep should depend on object and scene properties. They are consistent with the Central-peripheral Dichotomy that central and peripheral vision are specialized for seeing and looking, respectively.
Affiliation(s)
- Junhao Liang
- Eberhard Karls University of Tübingen and Max Planck Institute for Biological Cybernetics, Tübingen, 72076, Germany
- Li Zhaoping
- Eberhard Karls University of Tübingen and Max Planck Institute for Biological Cybernetics, Tübingen, 72076, Germany
4
Vafaii H, Yates JL, Butts DA. Hierarchical VAEs provide a normative account of motion processing in the primate brain. bioRxiv 2023:2023.09.27.559646. PMID: 37808629; PMCID: PMC10557690; DOI: 10.1101/2023.09.27.559646
Abstract
The relationship between perception and inference, as postulated by Helmholtz in the 19th century, is paralleled in modern machine learning by generative models like Variational Autoencoders (VAEs) and their hierarchical variants. Here, we evaluate the role of hierarchical inference and its alignment with brain function in the domain of motion perception. We first introduce a novel synthetic data framework, Retinal Optic Flow Learning (ROFL), which enables control over motion statistics and their causes. We then present a new hierarchical VAE and test it against alternative models on two downstream tasks: (i) predicting ground truth causes of retinal optic flow (e.g., self-motion); and (ii) predicting the responses of neurons in the motion processing pathway of primates. We manipulate the model architectures (hierarchical versus non-hierarchical), loss functions, and the causal structure of the motion stimuli. We find that hierarchical latent structure in the model leads to several improvements. First, it improves the linear decodability of ground truth factors and does so in a sparse and disentangled manner. Second, our hierarchical VAE outperforms previous state-of-the-art models in predicting neuronal responses and exhibits sparse latent-to-neuron relationships. These results depend on the causal structure of the world, indicating that alignment between brains and artificial neural networks depends not only on architecture but also on matching ecologically relevant stimulus statistics. Taken together, our results suggest that hierarchical Bayesian inference underlies the brain's understanding of the world, and hierarchical VAEs can effectively model this understanding.
5
Tas AC, Parker JL. The role of color in transsaccadic object correspondence. J Vis 2023; 23:5. PMID: 37535373; PMCID: PMC10408768; DOI: 10.1167/jov.23.8.5. Received 2023-02-03; accepted 2023-06-29. Open access.
Abstract
With each saccade, visual information is disrupted, and the visual system is tasked with establishing object correspondence between the presaccadic and postsaccadic representations of the saccade target. There is substantial evidence that the visual system consults spatiotemporal continuity when determining object correspondence across saccades. The evidence for surface feature continuity, however, is mixed. Surface features that are integral to the saccade target object's identity (e.g., shape and contrast polarity) are informative of object continuity, but features that may only imply the state of the object (e.g., orientation) are ignored. The present study tested whether color information is consulted to determine transsaccadic object continuity. We used two variations of the intrasaccadic target displacement task. In Experiments 1 and 2, participants reported the direction of the target displacement. In Experiments 3 and 4, they instead reported whether they detected any target movement. In all experiments, we manipulated the saccade target's continuity by removing it briefly (i.e., blanking) and by changing its color. We found that large color changes can disrupt stability and increase sensitivity to displacements for both direction and movement reports, although not as strongly as long blank durations (250 ms). Interestingly, even smaller color changes, but not blanking, reduced response biases. These results indicate that disrupting surface feature continuity may impact the process of transsaccadic object correspondence more strongly than spatiotemporal disruptions by both increasing the sensitivity and decreasing the response bias.
Affiliation(s)
- A Caglar Tas
- Department of Psychology, University of Tennessee, Knoxville, TN, USA
- Jessica L Parker
- Department of Psychology, University of Tennessee, Knoxville, TN, USA
6
Laurin AS, Bleau M, Gedjakouchian J, Fournet R, Pisella L, Khan AZ. Post-saccadic changes disrupt attended pre-saccadic object memory. J Vis 2021; 21:8. PMID: 34347017; PMCID: PMC8340665; DOI: 10.1167/jov.21.8.8. Open access.
Abstract
Trans-saccadic memory consists of keeping track of objects' locations and features across saccades; pre-saccadic information is remembered and compared with post-saccadic information. It has been shown to have limited resources and to involve attention with respect to the selection of objects and features. In support, a previous study showed that recognition of distinct post-saccadic objects in the visual scene is impaired when pre-saccadic objects are relevant and thus already encoded in memory (Poth, Herwig, & Schneider, 2015). Here, we investigated the inverse (i.e., how the memory of pre-saccadic objects is affected by abrupt but irrelevant changes in the post-saccadic visual scene). We also modulated the amount of attention to the relevant pre-saccadic object by having participants either make a saccade to it or elsewhere, and observed that pre-saccadic attentional facilitation affected how much post-saccadic changes disrupted trans-saccadic memory of pre-saccadic objects. Participants identified a flashed symbol (d, b, p, or q, among distracters) at one of six placeholders (figure "8"s) arranged in a circle around fixation while planning a saccade to one of them. They reported the identity of the symbol after the saccade. In Experiment 1, we changed the post-saccadic scene by removing the entire scene, only the placeholder where the pre-saccadic symbol was presented, or all other placeholders except this one. We observed reduced identification performance when only the saccade-target placeholder disappeared after the saccade. In Experiment 2, we changed one placeholder location (inward/outward shift or rotation relative to the saccade vector) after the saccade and observed that identification performance decreased with increasing shift/rotation of the saccade-target placeholder.
We conclude that pre-saccadic memory is disrupted by abrupt, attention-capturing post-saccadic changes to the visual scene, particularly when these changes involve the object prioritized as the goal of a saccade. These findings support the notion that limited trans-saccadic memory resources are disrupted when object correspondence at the saccade goal is broken through removal or location change.
Affiliation(s)
- Anne-Sophie Laurin
- University of Montreal, Department of Psychology, Montreal, Quebec, Canada
- Maxime Bleau
- University of Montreal, School of Optometry, Montreal, Quebec, Canada
- Romain Fournet
- University of Montreal, School of Optometry, Montreal, Quebec, Canada
- Laure Pisella
- ImpAct, INSERM UM1028, CNRS UMR 5292, University Claude Bernard Lyon 1, Lyon, France
7
Kong G, Aagten-Murphy D, McMaster JMV, Bays PM. Transsaccadic integration operates independently in different feature dimensions. J Vis 2021; 21:7. PMID: 34264290; PMCID: PMC8288057; DOI: 10.1167/jov.21.7.7. Open access.
Abstract
Our knowledge about objects in our environment reflects an integration of current visual input with information from preceding gaze fixations. Such a mechanism may reduce uncertainty but requires the visual system to determine which information obtained in different fixations should be combined or kept separate. To investigate the basis of this decision, we conducted three experiments. Participants viewed a stimulus in their peripheral vision and then made a saccade that shifted the object into the opposite hemifield. During the saccade, the object underwent changes of varying magnitude in two feature dimensions (Experiment 1, color and location; Experiments 2 and 3, color and orientation). Participants reported whether they detected any change and estimated one of the postsaccadic features. Integration of presaccadic with postsaccadic input was observed as a bias in estimates toward the presaccadic feature value. In all experiments, presaccadic bias weakened as the magnitude of the transsaccadic change in the estimated feature increased. Changes in the other feature, despite having a similar probability of detection, had no effect on integration. Results were quantitatively captured by an observer model where the decision whether to integrate information from sequential fixations is made independently for each feature and coupled to awareness of a feature change.
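Integration of this kind is commonly modeled as reliability-weighted (inverse-variance) averaging of the pre- and post-saccadic estimates. The sketch below illustrates that standard assumption only; the authors' observer model additionally gates integration on awareness of a feature change, which is omitted here:

```python
def integrate(pre, post, var_pre, var_post):
    """Reliability-weighted average of pre- and post-saccadic feature estimates.

    Weights are inverse variances, so the noisier (peripheral) pre-saccadic
    estimate contributes less; the combined variance is below either input's.
    """
    w_pre, w_post = 1.0 / var_pre, 1.0 / var_post
    est = (w_pre * pre + w_post * post) / (w_pre + w_post)
    var = 1.0 / (w_pre + w_post)
    return est, var

# Peripheral pre-saccadic view (noisy) vs. foveal post-saccadic view (reliable):
est, var = integrate(pre=10.0, post=12.0, var_pre=4.0, var_post=1.0)
# est lies between the two values, pulled toward the reliable foveal estimate,
# which appears behaviorally as the presaccadic bias described in the abstract
```

Under this scheme, shrinking the weight on the pre-saccadic value as the transsaccadic change in that feature grows reproduces the weakening of the bias with change magnitude, while changes in an independently handled feature leave it untouched.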
Affiliation(s)
- Garry Kong
- Department of Psychology, University of Cambridge, Cambridge, UK
- Paul M Bays
- Department of Psychology, University of Cambridge, Cambridge, UK
8
Abstract
Visual processing varies dramatically across the visual field. These differences start in the retina and continue all the way to the visual cortex. Despite these differences in processing, the perceptual experience of humans is remarkably stable and continuous across the visual field. Research in the last decade has shown that processing in peripheral and foveal vision is not independent, but is more directly connected than previously thought. We address three core questions on how peripheral and foveal vision interact, and review recent findings on potentially related phenomena that could provide answers to these questions. First, how is the processing of peripheral and foveal signals related during fixation? Peripheral signals seem to be processed in foveal retinotopic areas to facilitate peripheral object recognition, and foveal information seems to be extrapolated toward the periphery to generate a homogeneous representation of the environment. Second, how are peripheral and foveal signals re-calibrated? Transsaccadic changes in object features lead to a reduction in the discrepancy between peripheral and foveal appearance. Third, how is peripheral and foveal information stitched together across saccades? Peripheral and foveal signals are integrated across saccadic eye movements to average percepts and to reduce uncertainty. Together, these findings illustrate that peripheral and foveal processing are closely connected, mastering the compromise between a large peripheral visual field and high resolution at the fovea.
Affiliation(s)
- Emma E M Stewart
- Allgemeine und Biologische Psychologie, Philipps-Universität Marburg, Marburg, Germany
- Matteo Valsecchi
- Dipartimento di Psicologia, Università di Bologna, Bologna, Italy
- Alexander C Schütz
- Allgemeine und Biologische Psychologie, Philipps-Universität Marburg, Marburg, Germany
- Center for Mind, Brain and Behavior, Philipps-Universität Marburg, Marburg, Germany
- https://www.uni-marburg.de/en/fb04/team-schuetz/team/alexander-schutz