1
Nagarajan S, Meethal NSK, Pel JJM, Asokan R, Negiloni K, George R. Characterization of Gaze Metrics and Visual Search Pattern Among Glaucoma Patients While Viewing Real-World Images. J Glaucoma 2024; 33:987-996. [PMID: 39235404] [DOI: 10.1097/ijg.0000000000002493]
Abstract
PRCIS: We quantified and compared gaze metrics during target-oriented visual search tasks between glaucoma patients and healthy controls. On the basis of a mathematical concept, we showed that, due to glaucoma, focal search becomes more prominent than global search. PURPOSE: Visual search (VS), which is essential for target identification and navigation, is significantly impacted by glaucoma. VS metrics can be influenced by differences in cultural exposure or coping strategies, leading to varying VS patterns. This study aimed to explore and label VS patterns based on gaze metrics quantified using eye-tracking technology. METHODS: Twenty-seven glaucoma subjects and 30 healthy controls [median age 51 (14) and 54 (19) y, respectively] underwent a VS experiment in which they had to identify specific targets in real-world images. Eye movements were recorded using a remote eye tracker, and gaze metrics (fixation count [FC], fixation duration [FD], saccade amplitude [SA], and VS time [VST]) were computed and compared between the study groups. A Z-score-based coefficient "K" was derived to label the search patterns as global (K ≤ -0.1: short FD with long SA), focal (K ≥ +0.1: long FD with short SA), or a combination (K between ±0.1). RESULTS: Like glaucoma patients of other ethnicities, Indian glaucoma subjects exhibited statistically significantly increased FC, FD, and VST (P = 0.01). Healthy controls presented comparable proportions of focal (47%) and global (42%) search patterns, whereas glaucoma subjects exhibited predominantly focal (56%) rather than global (26%) search patterns (P = 0.008). CONCLUSIONS: This study suggests that glaucoma subjects perform more focal searches during active gaze scanning. This change in viewing behavior reflects underlying compensatory strategies adopted to cope with their visual impairment. These search patterns can be influenced by factors such as saliency, which requires further investigation.
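For readers who want to reproduce this kind of labeling, a minimal Python sketch of one common formulation of the ambient/focal coefficient K is given below (the z-scored duration of each fixation minus the z-scored amplitude of the following saccade, averaged over the trial), together with the thresholds quoted in the abstract. The exact formula used in the paper is not reproduced here, so the function and variable names are illustrative assumptions.

```python
import numpy as np

def k_coefficient(fix_durations, sacc_amplitudes):
    """Ambient/focal coefficient K: mean difference between the z-scored
    duration of fixation i and the z-scored amplitude of the saccade that
    follows it (assumed to be sacc_amplitudes[i]). Positive values indicate
    focal viewing (long fixations, short saccades); negative values indicate
    global/ambient viewing."""
    d = np.asarray(fix_durations, dtype=float)
    a = np.asarray(sacc_amplitudes, dtype=float)
    n = min(len(d), len(a))                     # pair fixation i with following saccade
    z_d = (d[:n] - d[:n].mean()) / d[:n].std(ddof=1)
    z_a = (a[:n] - a[:n].mean()) / a[:n].std(ddof=1)
    return float(np.mean(z_d - z_a))

def label_search_pattern(k, threshold=0.1):
    """Apply the thresholds reported in the abstract."""
    if k >= threshold:
        return "focal"
    if k <= -threshold:
        return "global"
    return "combination"

# Toy trial: fixation durations (ms) and amplitudes (deg) of the saccades that follow them
k = k_coefficient([220, 340, 410, 180], [6.2, 2.1, 1.5, 7.8])
print(k, label_search_pattern(k))
```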
Affiliation(s)
- Sangeetha Nagarajan
- Medical Research Foundation
- Elite School of Optometry Affiliated to SASTRA Deemed University, Chennai, Tamil Nadu, India
- Department of Neuroscience, Erasmus Medical Centre, Rotterdam, The Netherlands
- Najiya Sundu K Meethal
- Medical Research Foundation
- Department of Neuroscience, Erasmus Medical Centre, Rotterdam, The Netherlands
- Johan J M Pel
- Department of Neuroscience, Erasmus Medical Centre, Rotterdam, The Netherlands
- Rashima Asokan
- Medical Research Foundation
- Elite School of Optometry Affiliated to SASTRA Deemed University, Chennai, Tamil Nadu, India
2
Baltaretu BR, Schuetz I, Võ MLH, Fiehler K. Scene semantics affects allocentric spatial coding for action in naturalistic (virtual) environments. Sci Rep 2024; 14:15549. [PMID: 38969745] [PMCID: PMC11226608] [DOI: 10.1038/s41598-024-66428-9]
Abstract
Interacting with objects in our environment requires determining their locations, often with respect to surrounding objects (i.e., allocentrically). According to the scene grammar framework, these usually small, local objects are movable within a scene and represent the lowest level of a scene's hierarchy. How do higher hierarchical levels of scene grammar influence allocentric coding for memory-guided actions? Here, we focused on the effect of large, immovable objects (anchors) on the encoding of local object positions. In a virtual reality study, participants (n = 30) viewed one of four possible scenes (two kitchens or two bathrooms), with two anchors connected by a shelf on which three local objects (congruent with one anchor) were presented (Encoding). The scene was re-presented (Test) with (1) the local objects missing and (2) one of the anchors shifted (Shift) or not (No shift). Participants then saw a floating local object (target), which they grabbed and placed back on the shelf in its remembered position (Response). Eye-tracking data revealed that both local objects and anchors were fixated, with a preference for local objects. Additionally, anchors guided allocentric coding of local objects, despite being task-irrelevant. Overall, anchors implicitly influence spatial coding of local object locations for memory-guided actions within naturalistic (virtual) environments.
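One plausible way to quantify whether placements follow a shifted anchor is to project each placement displacement onto the anchor's shift vector; the sketch below illustrates that idea only and is not the authors' actual analysis (all names and numbers are hypothetical).

```python
import numpy as np

def shift_following_index(encoded_pos, replaced_pos, anchor_shift):
    """Fraction of the anchor's shift reflected in the object placement:
    ~1.0 means the object was replaced as if it had moved with the anchor
    (anchor-relative coding), ~0.0 means it stayed in its original spot."""
    displacement = np.asarray(replaced_pos, dtype=float) - np.asarray(encoded_pos, dtype=float)
    shift = np.asarray(anchor_shift, dtype=float)
    return float(np.dot(displacement, shift) / np.dot(shift, shift))

# Example: anchor shifted 0.30 m along x; the object was put back 0.24 m along x
print(shift_following_index([1.0, 0.9, 0.5], [1.24, 0.9, 0.5], [0.3, 0.0, 0.0]))  # -> 0.8
```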
Affiliation(s)
- Bianca R Baltaretu
- Department of Experimental Psychology, Justus Liebig University Giessen, Otto-Behaghel-Strasse 10F, 35394, Giessen, Hesse, Germany.
- Immo Schuetz
- Department of Experimental Psychology, Justus Liebig University Giessen, Otto-Behaghel-Strasse 10F, 35394, Giessen, Hesse, Germany
- Melissa L-H Võ
- Department of Psychology, Goethe University Frankfurt, 60323, Frankfurt am Main, Hesse, Germany
- Katja Fiehler
- Department of Experimental Psychology, Justus Liebig University Giessen, Otto-Behaghel-Strasse 10F, 35394, Giessen, Hesse, Germany
3
Beitner J, Helbing J, David EJ, Võ MLH. Using a flashlight-contingent window paradigm to investigate visual search and object memory in virtual reality and on computer screens. Sci Rep 2024; 14:8596. [PMID: 38615047] [PMCID: PMC11379806] [DOI: 10.1038/s41598-024-58941-8]
Abstract
A popular technique to modulate visual input during search is to use gaze-contingent windows. However, these are often rather discomforting, providing the impression of visual impairment. To counteract this, we asked participants in this study to search through illuminated as well as dark three-dimensional scenes using a more naturalistic flashlight with which they could illuminate the rooms. In a surprise incidental memory task, we tested the identities and locations of objects encountered during search. Importantly, we tested this study design in both immersive virtual reality (VR; Experiment 1) and on a desktop-computer screen (Experiment 2). As hypothesized, searching with a flashlight increased search difficulty and memory usage during search. We found a memory benefit for identities of distractors in the flashlight condition in VR but not in the computer screen experiment. Surprisingly, location memory was comparable across search conditions despite the enormous difference in visual input. Subtle differences across experiments only appeared in VR after accounting for previous recognition performance, hinting at a benefit of flashlight search in VR. Our findings highlight that removing visual information does not necessarily impair location memory, and that screen experiments using virtual environments can elicit the same major effects as VR setups.
Affiliation(s)
- Julia Beitner
- Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany.
- Jason Helbing
- Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Erwan Joël David
- Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- LIUM, Le Mans Université, Le Mans, France
- Melissa Lê-Hoa Võ
- Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
4
Callahan-Flintoft C, Jensen E, Naeem J, Nonte MW, Madison AM, Ries AJ. A Comparison of Head Movement Classification Methods. Sensors (Basel) 2024; 24:1260. [PMID: 38400418] [PMCID: PMC10893452] [DOI: 10.3390/s24041260]
Abstract
To understand human behavior, it is essential to study it in the context of natural movement in immersive, three-dimensional environments. Virtual reality (VR), with head-mounted displays, offers an unprecedented compromise between ecological validity and experimental control. However, such technological advancements mean that new data streams will become more widely available, and therefore, a need arises to standardize methodologies by which these streams are analyzed. One such data stream is that of head position and rotation tracking, now made easily available from head-mounted systems. The current study presents five candidate algorithms of varying complexity for classifying head movements. Each algorithm is compared against human rater classifications and graded based on the overall agreement as well as biases in metrics such as movement onset/offset time and movement amplitude. Finally, we conclude this article by offering recommendations for the best practices and considerations for VR researchers looking to incorporate head movement analysis in their future studies.
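The abstract does not spell out the five candidate algorithms, but the simplest baseline in this family is a velocity threshold on head orientation. The sketch below is a generic, hedged example of such a baseline, not one of the paper's algorithms; the threshold, sampling rate, and data are illustrative assumptions.

```python
import numpy as np

def classify_head_movements(yaw_deg, pitch_deg, timestamps, vel_threshold=20.0):
    """Generic velocity-threshold baseline: label each inter-sample interval as
    'movement' when angular head velocity exceeds vel_threshold (deg/s),
    otherwise 'stable'. Returns one label per interval."""
    yaw = np.radians(yaw_deg)
    pitch = np.radians(pitch_deg)
    # Great-circle angle between successive head directions on the sphere
    d_sigma = np.arccos(np.clip(
        np.sin(pitch[:-1]) * np.sin(pitch[1:]) +
        np.cos(pitch[:-1]) * np.cos(pitch[1:]) * np.cos(np.diff(yaw)), -1.0, 1.0))
    velocity = np.degrees(d_sigma) / np.diff(timestamps)
    return np.where(velocity > vel_threshold, "movement", "stable")

# Example: 90 Hz samples with a head turn starting at the fourth sample
t = np.arange(6) / 90.0
print(classify_head_movements([0, 0.1, 0.2, 2.0, 4.0, 6.0], [0, 0, 0, 0, 0, 0], t))
```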
Affiliation(s)
- Chloe Callahan-Flintoft
- U.S. Army Combat Capabilities Development Command (DEVCOM) Army Research Laboratory, Aberdeen, MD 21005, USA
- Emily Jensen
- Department of Computer Science, University of Colorado Boulder, Boulder, CO 80303, USA
- Jasim Naeem
- DCS Corporation, Alexandria, VA 22310, USA
- Anna M. Madison
- U.S. Army Combat Capabilities Development Command (DEVCOM) Army Research Laboratory, Aberdeen, MD 21005, USA
- Warfighter Effectiveness Research Center, United States Air Force Academy, Colorado Springs, CO 80840, USA
- Anthony J. Ries
- U.S. Army Combat Capabilities Development Command (DEVCOM) Army Research Laboratory, Aberdeen, MD 21005, USA
- Warfighter Effectiveness Research Center, United States Air Force Academy, Colorado Springs, CO 80840, USA
5
Draschkow D, Anderson NC, David E, Gauge N, Kingstone A, Kumle L, Laurent X, Nobre AC, Shiels S, Võ MLH. Using XR (Extended Reality) for Behavioral, Clinical, and Learning Sciences Requires Updates in Infrastructure and Funding. Policy Insights from the Behavioral and Brain Sciences 2023; 10:317-323. [PMID: 37900910] [PMCID: PMC10602770] [DOI: 10.1177/23727322231196305]
Abstract
Extended reality (XR, including augmented and virtual reality) creates a powerful intersection between information technology and cognitive, clinical, and education sciences. XR technology has long captured the public imagination, and its development is the focus of major technology companies. This article demonstrates the potential of XR to (1) deliver behavioral insights, (2) transform clinical treatments, and (3) improve learning and education. However, without appropriate policy, funding, and infrastructural investment, many research institutions will struggle to keep pace with the advances and opportunities of XR. To realize the full potential of XR for basic and translational research, funding should incentivize (1) appropriate training, (2) open software solutions, and (3) collaborations between complementary academic and industry partners. Bolstering the XR research infrastructure with the right investments and incentives is vital for delivering on the potential for transformative discoveries, innovations, and applications.
Affiliation(s)
- Dejan Draschkow
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, UK
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Nicola C. Anderson
- Department of Psychology, University of British Columbia, Vancouver, Canada
- Erwan David
- Department of Psychology, Scene Grammar Lab, Goethe University Frankfurt, Frankfurt am Main, Germany
- Nathan Gauge
- OxSTaR Oxford Simulation Teaching and Research, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- Alan Kingstone
- Department of Psychology, University of British Columbia, Vancouver, Canada
- Levi Kumle
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, UK
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Xavier Laurent
- Centre for Teaching and Learning, University of Oxford, Oxford, UK
- Anna C. Nobre
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, UK
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Wu Tsai Institute, Yale University, New Haven, USA
- Sally Shiels
- OxSTaR Oxford Simulation Teaching and Research, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- Melissa L.-H. Võ
- Department of Psychology, Scene Grammar Lab, Goethe University Frankfurt, Frankfurt am Main, Germany
6
Kasowski J, Johnson BA, Neydavood R, Akkaraju A, Beyeler M. A systematic review of extended reality (XR) for understanding and augmenting vision loss. J Vis 2023; 23:5. [PMID: 37140911] [PMCID: PMC10166121] [DOI: 10.1167/jov.23.5.5]
Abstract
Over the past decade, extended reality (XR) has emerged as an assistive technology not only to augment residual vision of people losing their sight but also to study the rudimentary vision restored to blind people by a visual neuroprosthesis. A defining quality of these XR technologies is their ability to update the stimulus based on the user's eye, head, or body movements. To make the best use of these emerging technologies, it is valuable and timely to understand the state of this research and identify any shortcomings that are present. Here we present a systematic literature review of 227 publications from 106 different venues assessing the potential of XR technology to further visual accessibility. In contrast to other reviews, we sample studies from multiple scientific disciplines, focus on technology that augments a person's residual vision, and require studies to feature a quantitative evaluation with appropriate end users. We summarize prominent findings from different XR research areas, show how the landscape has changed over the past decade, and identify scientific gaps in the literature. Specifically, we highlight the need for real-world validation, the broadening of end-user participation, and a more nuanced understanding of the usability of different XR-based accessibility aids.
Affiliation(s)
- Justin Kasowski
- Graduate Program in Dynamical Neuroscience, University of California, Santa Barbara, CA, USA
- Byron A Johnson
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA
- Ryan Neydavood
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA
- Anvitha Akkaraju
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA
- Michael Beyeler
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA
- Department of Computer Science, University of California, Santa Barbara, CA, USA
7
Beitner J, Helbing J, Draschkow D, David EJ, Võ MLH. Flipping the world upside down: Using eye tracking in virtual reality to study visual search in inverted scenes. J Eye Mov Res 2023; 15(3). [PMID: 37215533] [PMCID: PMC10195094] [DOI: 10.16910/jemr.15.3.5]
Abstract
Image inversion is a powerful tool for investigating the cognitive mechanisms of visual perception. However, studies have mainly used inversion in paradigms presented on two-dimensional computer screens. It remains open whether the disruptive effects of inversion also hold in more naturalistic scenarios. In our study, we used scene inversion in virtual reality in combination with eye tracking to investigate the mechanisms of repeated visual search through three-dimensional immersive indoor scenes. Scene inversion affected all gaze and head measures except fixation durations and saccade amplitudes. Surprisingly, our behavioral results did not entirely follow our hypotheses: while search efficiency dropped significantly in inverted scenes, participants did not utilize more memory as measured by search time slopes. This indicates that, despite the disruption, participants did not try to compensate for the increased difficulty by using more memory. Our study highlights the importance of investigating classical experimental paradigms in more naturalistic scenarios to advance research on everyday human behavior.
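Search time slopes of the kind referred to here are usually just the fitted slope of search time over repeated searches of the same scene; a minimal sketch under that assumption (a simple linear fit per participant and scene, with illustrative toy data):

```python
import numpy as np

def search_time_slope(search_times):
    """Slope of search time across repeated searches of the same scene
    (seconds per repetition). More negative slopes indicate a stronger
    benefit from memory built up over repetitions."""
    repetitions = np.arange(1, len(search_times) + 1)
    slope, _intercept = np.polyfit(repetitions, search_times, deg=1)
    return float(slope)

# Example: search times (s) for one participant over five repeated searches
print(search_time_slope([8.4, 6.9, 6.1, 5.2, 4.8]))  # negative: search speeds up
```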
Affiliation(s)
- Julia Beitner
- Department of Psychology, Goethe University Frankfurt, Germany
- Corresponding author
- Jason Helbing
- Department of Psychology, Goethe University Frankfurt, Germany
- Dejan Draschkow
- Department of Experimental Psychology, University of Oxford, UK
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, UK
- Erwan J David
- Department of Psychology, Goethe University Frankfurt, Germany
- Melissa L-H Võ
- Department of Psychology, Goethe University Frankfurt, Germany
8
Bischof WF, Anderson NC, Kingstone A. Eye and head movements while encoding and recognizing panoramic scenes in virtual reality. PLoS One 2023; 18:e0282030. [PMID: 36800398] [PMCID: PMC9937482] [DOI: 10.1371/journal.pone.0282030]
Abstract
One approach to studying the recognition of scenes and objects relies on the comparison of eye movement patterns during encoding and recognition. Past studies typically analyzed the perception of flat stimuli of limited extent presented on a computer monitor that did not require head movements. In contrast, participants in the present study saw omnidirectional panoramic scenes through an immersive 3D virtual reality viewer, and they could move their head freely to inspect different parts of the visual scenes. This allowed us to examine how unconstrained observers use their head and eyes to encode and recognize visual scenes. By studying head and eye movement within a fully immersive environment, and applying cross-recurrence analysis, we found that eye movements are strongly influenced by the content of the visual environment, as are head movements, though to a much lesser degree. Moreover, we found that the head and eyes are linked, with the head supporting, and by and large mirroring, the movements of the eyes, consistent with the notion that the head operates to support the acquisition of visual information by the eyes.
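Cross-recurrence between eye and head scanpaths can be illustrated with a small sketch: two direction time series are compared pairwise, and a pair of time points counts as recurrent when the angular distance between them falls below a radius. This is a generic implementation for illustration only, not the paper's exact analysis pipeline, and the radius and toy data are assumptions.

```python
import numpy as np

def cross_recurrence_rate(eye_dirs, head_dirs, radius_deg=5.0):
    """Proportion of (i, j) sample pairs for which the eye direction at time i
    and the head direction at time j lie within radius_deg of each other.
    Directions are 3D vectors; distance is the angle between them."""
    e = np.asarray(eye_dirs, dtype=float)
    h = np.asarray(head_dirs, dtype=float)
    e /= np.linalg.norm(e, axis=1, keepdims=True)
    h /= np.linalg.norm(h, axis=1, keepdims=True)
    angles = np.degrees(np.arccos(np.clip(e @ h.T, -1.0, 1.0)))  # pairwise angles
    return float(np.mean(angles < radius_deg))

# Toy example: eye leads the head toward the same region of a panoramic scene
eye = [[1, 0, 0], [0.9, 0.1, 0], [0.8, 0.2, 0]]
head = [[1, 0, 0], [0.95, 0.05, 0], [0.9, 0.1, 0]]
print(cross_recurrence_rate(eye, head))
```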
Affiliation(s)
- Walter F. Bischof
- Department of Psychology, University of British Columbia, Vancouver, BC, Canada
- Nicola C. Anderson
- Department of Psychology, University of British Columbia, Vancouver, BC, Canada
- Alan Kingstone
- Department of Psychology, University of British Columbia, Vancouver, BC, Canada
9
Abstract
This chapter explores the current state of the art in eye tracking within 3D virtual environments. It begins with the motivation for eye tracking in Virtual Reality (VR) in psychological research, followed by descriptions of the hardware and software used for presenting virtual environments as well as for tracking eye and head movements in VR. This is followed by a detailed description of an example project on eye and head tracking while observers look at 360° panoramic scenes. The example is illustrated with descriptions of the user interface and program excerpts to show the measurement of eye and head movements in VR. The chapter continues with fundamentals of data analysis, in particular methods for the determination of fixations and saccades when viewing spherical displays. We then extend these methodological considerations to determining the spatial and temporal coordination of the eyes and head in VR perception. The chapter concludes with a discussion of outstanding problems and future directions for conducting eye- and head-tracking research in VR. We hope that this chapter will serve as a primer for those intending to implement VR eye tracking in their own research.
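For the fixation and saccade determination step on spherical displays, a common adaptation is to replace pixel distances with the angle between successive 3D gaze direction vectors before applying a velocity threshold. The sketch below shows that idea in an I-VT style; the threshold and data are illustrative assumptions rather than the chapter's specific parameters.

```python
import numpy as np

def detect_saccade_samples(gaze_dirs, timestamps, vel_threshold=100.0):
    """Velocity-threshold classification adapted to spherical viewing: compute
    the angle between consecutive 3D gaze direction vectors (instead of pixel
    distances on a flat screen) and mark intervals whose angular velocity
    exceeds vel_threshold (deg/s) as saccadic."""
    g = np.asarray(gaze_dirs, dtype=float)
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    cos_angle = np.clip(np.sum(g[:-1] * g[1:], axis=1), -1.0, 1.0)
    velocity = np.degrees(np.arccos(cos_angle)) / np.diff(timestamps)
    return velocity > vel_threshold  # True = saccadic interval, False = fixational

# Example: 120 Hz gaze samples with one large shift between samples 3 and 4
t = np.arange(5) / 120.0
gaze = [[0, 0, 1], [0.001, 0, 1], [0.002, 0, 1], [0.25, 0, 1], [0.26, 0, 1]]
print(detect_saccade_samples(gaze, t))
```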
10
Srikantharajah J, Ellard C. How central and peripheral vision influence focal and ambient processing during scene viewing. J Vis 2022; 22:4. [PMID: 36322076] [PMCID: PMC9639699] [DOI: 10.1167/jov.22.12.4]
Abstract
Central and peripheral vision carry out different functions during scene processing. The ambient mode of visual processing is more likely to involve peripheral visual processes, whereas the focal mode of visual processing is more likely to involve central visual processes. Although the ambient mode is responsible for navigating space and comprehending scene layout, the focal mode gathers detailed information as central vision is oriented to salient areas of the visual field. Previous work suggests that during the time course of scene viewing, there is a transition from ambient processing during the first few seconds to focal processing during later time intervals, characterized by longer fixations and shorter saccades. In this study, we identify the influence of central and peripheral vision on changes in eye movements and the transition from ambient to focal processing during the time course of scene processing. Using a gaze-contingent protocol, we restricted the visual field to central or peripheral vision while participants freely viewed scenes for 20 seconds. Results indicated that fixation durations are shorter when vision is restricted to central vision than under normal vision. During late visual processing, fixations in peripheral vision were longer than those in central vision. We show that a transition from more ambient to more focal processing during scene viewing occurs even when vision is restricted to only central or peripheral vision.
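The ambient-to-focal transition described here is typically summarized by binning fixations over viewing time and tracking mean fixation duration and saccade amplitude per bin; a small illustrative sketch follows (the 4-second bins and toy numbers are assumptions, not the study's parameters).

```python
import numpy as np

def ambient_focal_time_course(onsets, fix_durations, sacc_amplitudes, bin_s=4.0, total_s=20.0):
    """Mean fixation duration and saccade amplitude per time bin of a trial.
    Rising durations and falling amplitudes over bins are the usual signature
    of a shift from ambient to focal processing."""
    edges = np.arange(0.0, total_s + bin_s, bin_s)
    bins = np.digitize(onsets, edges) - 1
    course = []
    for b in range(len(edges) - 1):
        mask = bins == b
        course.append((np.mean(np.asarray(fix_durations)[mask]),
                       np.mean(np.asarray(sacc_amplitudes)[mask])))
    return course  # list of (mean duration, mean amplitude) per bin

# Toy trial: fixation onsets (s), durations (ms), following saccade amplitudes (deg)
print(ambient_focal_time_course([0.5, 3.0, 6.5, 9.0, 14.0, 18.0],
                                [180, 220, 260, 300, 330, 350],
                                [8.0, 7.0, 5.0, 4.0, 3.0, 2.5]))
```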
Affiliation(s)
- Colin Ellard
- Department of Psychology, University of Waterloo, Waterloo, Canada
11
Helbing J, Draschkow D, Võ MLH. Auxiliary Scene-Context Information Provided by Anchor Objects Guides Attention and Locomotion in Natural Search Behavior. Psychol Sci 2022; 33:1463-1476. [PMID: 35942922] [DOI: 10.1177/09567976221091838]
Abstract
Successful adaptive behavior requires efficient attentional and locomotive systems. Previous research has thoroughly investigated how we achieve this efficiency during natural behavior by exploiting prior knowledge related to targets of our actions (e.g., attending to metallic targets when looking for a pot) and to the environmental context (e.g., looking for the pot in the kitchen). Less is known about whether and how individual nontarget components of the environment support natural behavior. In our immersive virtual reality task, 24 adult participants searched for objects in naturalistic scenes in which we manipulated the presence and arrangement of large, static objects that anchor predictions about targets (e.g., the sink provides a prediction for the location of the soap). Our results show that gaze and body movements in this naturalistic setting are strongly guided by these anchors. These findings demonstrate that objects auxiliary to the target are incorporated into the representations guiding attention and locomotion.
Affiliation(s)
- Jason Helbing
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt
- Dejan Draschkow
- Brain and Cognition Laboratory, Department of Experimental Psychology, University of Oxford
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford
- Melissa L-H Võ
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt
12
David EJ, Lebranchu P, Perreira Da Silva M, Le Callet P. What are the visuo-motor tendencies of omnidirectional scene free-viewing in virtual reality? J Vis 2022; 22:12. [PMID: 35323868] [PMCID: PMC8963670] [DOI: 10.1167/jov.22.4.12]
Abstract
Central and peripheral vision during visual tasks have been extensively studied on two-dimensional screens, highlighting their perceptual and functional disparities. This study has two objectives: replicating on-screen gaze-contingent experiments that remove the central or peripheral field of view in virtual reality, and identifying visuo-motor biases specific to the exploration of 360° scenes with a wide field of view. Our results are useful for vision modelling, with applications in gaze position prediction (e.g., content compression and streaming). We ask how previous on-screen findings translate to conditions where observers can use their head to explore stimuli. We implemented a gaze-contingent paradigm to simulate loss of vision in virtual reality while participants freely viewed omnidirectional natural scenes. This protocol allows the simulation of vision loss with an extended field of view (>80°) and the study of the head's contributions to visual attention. The time course of visuo-motor variables in our pure free-viewing task reveals long fixations and short saccades during the first seconds of exploration, contrary to the literature on visual tasks guided by instructions. We show that the effect of vision loss is reflected primarily in eye movements, in a manner consistent with the two-dimensional screen literature. We hypothesize that head movements mainly serve to explore the scenes during free viewing; the presence of masks did not significantly impact head scanning behaviours. We present new fixational and saccadic visuo-motor tendencies in a 360° context that we hope will help in the creation of gaze prediction models dedicated to virtual reality.
Affiliation(s)
- Erwan Joël David
- Department of Psychology, Goethe-Universität, Frankfurt, Germany
- Pierre Lebranchu
- LS2N UMR CNRS 6004, University of Nantes and Nantes University Hospital, Nantes, France
- Patrick Le Callet
- LS2N UMR CNRS 6004, University of Nantes, Nantes, France
- http://pagesperso.ls2n.fr/~lecallet-p/index.html
13
Peacock CE, Zhang T, David-John B, Murdison TS, Boring MJ, Benko H, Jonker TR. Gaze dynamics are sensitive to target orienting for working memory encoding in virtual reality. J Vis 2022; 22:2. [PMID: 34982104] [PMCID: PMC8742516] [DOI: 10.1167/jov.22.1.2]
Abstract
Numerous studies have demonstrated that visuospatial attention is a requirement for successful working memory encoding. It is unknown, however, whether this established relationship manifests in consistent gaze dynamics as people orient their visuospatial attention toward an encoding target when searching for information in naturalistic environments. To test this hypothesis, participants' eye movements were recorded while they searched for and encoded objects in a virtual apartment (Experiment 1). We decomposed gaze into 61 features that capture gaze dynamics and trained a sliding window logistic regression model, which has potential for use in real-time systems, to predict when participants found target objects for working memory encoding. A model trained on group data successfully predicted when people oriented to a target for encoding, both for the trained task (Experiment 1) and for a novel task (Experiment 2) in which a new set of participants found objects and encoded an associated nonword in a cluttered virtual kitchen. Six of these features were predictive of target orienting for encoding, even during the novel task, including decreased distances between subsequent fixation/saccade events, increased fixation probabilities, and slower saccade decelerations before encoding. This suggests that as people orient toward a target to encode new information at the end of search, they decrease task-irrelevant, exploratory sampling behaviors. This behavior was common across the two studies. Together, this research demonstrates how gaze dynamics can be used to capture target orienting for working memory encoding and has implications for real-world use in technology and for special populations.
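As a rough illustration of this modeling approach, the sketch below builds windowed summaries of gaze events and fits a logistic regression to predict a binary orienting label. It uses only two toy features and random labels; the study's actual model used 61 features and real annotations, so everything below is an assumption for demonstration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def windowed_features(fix_dur, sacc_amp, window=5):
    """Slide a window over per-event gaze measures and summarize each window.
    Illustrative two-feature stand-in for a richer feature set."""
    feats = []
    for i in range(len(fix_dur) - window + 1):
        feats.append([np.mean(fix_dur[i:i + window]),
                      np.mean(sacc_amp[i:i + window])])
    return np.array(feats)

# Toy data: gaze events across trials; label 1 would mark windows ending at
# the moment the participant orients to the to-be-encoded target
rng = np.random.default_rng(0)
X = windowed_features(rng.normal(250, 50, 200), rng.normal(5, 2, 200))
y = rng.integers(0, 2, len(X))

model = LogisticRegression().fit(X, y)
print(model.predict_proba(X[:3])[:, 1])  # predicted probability of target orienting
```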
Affiliation(s)
- Ting Zhang
- Reality Labs Research, Redmond, WA, USA
14
Correction: David et al. Effects of Transient Loss of Vision on Head and Eye Movements during Visual Search in a Virtual Environment. Brain Sci. 2020, 10, 841. Brain Sci 2021; 11:1215. [PMID: 34573274] [PMCID: PMC8470248] [DOI: 10.3390/brainsci11091215]
Abstract
We wish to make the following correction to the published paper "Effects of Transient Loss of Vision on Head and Eye Movements during Visual Search in a Virtual Environment" [...].
15
David EJ, Beitner J, Võ MLH. The importance of peripheral vision when searching 3D real-world scenes: A gaze-contingent study in virtual reality. J Vis 2021; 21:3. [PMID: 34251433] [PMCID: PMC8287039] [DOI: 10.1167/jov.21.7.3]
Abstract
Visual search in natural scenes is a complex task relying on peripheral vision to detect potential targets and central vision to verify them. The segregation of these roles across the visual field has been established mainly by on-screen experiments. We conducted a gaze-contingent experiment in virtual reality to test how the perceived roles of central and peripheral vision translate to more natural settings. The use of everyday scenes in virtual reality allowed us to study visual attention with a fairly ecological protocol that cannot be implemented in the real world. Central or peripheral vision was masked during visual search, with target objects selected according to scene semantic rules. Analyzing the resulting search behavior, we found that target objects that were not spatially constrained to a probable location within the scene impacted search measures negatively. Our results diverge from on-screen studies in that search performance was only slightly affected by central vision loss. In particular, a central mask did not impact verification times when the target was grammatically constrained to an anchor object. Our findings demonstrate that the role of central vision (up to 6 degrees of eccentricity) in identifying objects in natural scenes seems to be minor, whereas the role of peripheral preprocessing of targets in immersive real-world searches may have been underestimated by on-screen experiments.
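The gaze-contingent masking logic can be illustrated with a small geometric check on eccentricity from the current gaze direction. This is a schematic sketch only: the 6-degree radius follows the eccentricity mentioned in the abstract, while the function and condition names are illustrative assumptions.

```python
import numpy as np

def is_masked(gaze_dir, point_dir, condition, mask_radius_deg=6.0):
    """Decide whether scene content at point_dir should be hidden given the
    current gaze direction. 'central' hides everything within the radius
    (simulating central vision loss); 'peripheral' hides everything outside it."""
    g = np.asarray(gaze_dir, dtype=float)
    p = np.asarray(point_dir, dtype=float)
    cos_angle = np.dot(g, p) / (np.linalg.norm(g) * np.linalg.norm(p))
    eccentricity = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    if condition == "central":
        return eccentricity <= mask_radius_deg
    if condition == "peripheral":
        return eccentricity > mask_radius_deg
    return False  # control condition: nothing masked

# Example: content 3 degrees from gaze is hidden under a central mask
print(is_masked([0, 0, 1], [np.sin(np.radians(3)), 0, np.cos(np.radians(3))], "central"))
```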
Affiliation(s)
- Erwan Joël David
- Department of Psychology, Goethe-Universität, Frankfurt, Germany
- Julia Beitner
- Department of Psychology, Goethe-Universität, Frankfurt, Germany
16
Kristjánsson Á, Draschkow D. Keeping it real: Looking beyond capacity limits in visual cognition. Atten Percept Psychophys 2021; 83:1375-1390. [PMID: 33791942] [PMCID: PMC8084831] [DOI: 10.3758/s13414-021-02256-7]
Abstract
Research within visual cognition has made tremendous strides in uncovering the basic operating characteristics of the visual system by reducing the complexity of natural vision to artificial but well-controlled experimental tasks and stimuli. This reductionist approach has for example been used to assess the basic limitations of visual attention, visual working memory (VWM) capacity, and the fidelity of visual long-term memory (VLTM). The assessment of these limits is usually made in a pure sense, irrespective of goals, actions, and priors. While it is important to map out the bottlenecks our visual system faces, we focus here on selected examples of how such limitations can be overcome. Recent findings suggest that during more natural tasks, capacity may be higher than reductionist research suggests and that separable systems subserve different actions, such as reaching and looking, which might provide important insights about how pure attentional or memory limitations could be circumvented. We also review evidence suggesting that the closer we get to naturalistic behavior, the more we encounter implicit learning mechanisms that operate "for free" and "on the fly." These mechanisms provide a surprisingly rich visual experience, which can support capacity-limited systems. We speculate whether natural tasks may yield different estimates of the limitations of VWM, VLTM, and attention, and propose that capacity measurements should also pass the real-world test within naturalistic frameworks. Our review highlights various approaches for this and suggests that our understanding of visual cognition will benefit from incorporating the complexities of real-world cognition in experimental approaches.
Affiliation(s)
- Árni Kristjánsson
- School of Health Sciences, University of Iceland, Reykjavík, Iceland.
- School of Psychology, National Research University Higher School of Economics, Moscow, Russia.
- Dejan Draschkow
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK.