1. Harris DJ, O'Malley CA, Arthur T, Evans J, Buckingham G. Comparing object lifting kinematics and the size-weight illusion between physical reality and virtual reality. Atten Percept Psychophys 2025. PMID: 40426004; DOI: 10.3758/s13414-025-03091-w
Abstract
This study compared the size-weight illusion (SWI) and object lifting kinematics between physical and virtual conditions, shedding light on disparities in perception and action across these two mediums. We examined whether prior expectations about object weight based on size cues, which shape real-world object interactions, differ in virtual reality (VR). Employing a highly realistic virtual environment with precisely matched visual size and haptic cues, we tested the hypothesis that VR, which may be experienced as uncertain, unfamiliar, or unpredictable, would induce a smaller SWI due to a diminished effect of prior expectations. Participants (N = 25) reported the felt heaviness of lifted objects that varied in both volume and mass in physical reality and in a VR environment. Reach and lift kinematics, and self-reported presence, were also recorded. We found no difference in how participants perceived the SWI between real and virtual environments, although there was a trend towards a smaller illusion in VR. Contrary to our predictions, participants who experienced more presence in VR did not experience a larger SWI; instead, the inverse relationship was observed. Notably, differences in reach velocities between physical and virtual conditions suggested a more controlled approach in VR. These findings highlight the intricate relationship between immersion and sensorimotor processes in virtual environments, emphasising the need for deeper exploration of the underlying mechanisms that shape human interactions with immersive technologies, particularly the prior expectations associated with virtual environments.
Affiliation(s)
- David John Harris: School of Public Health and Sport Sciences, Faculty of Health and Life Sciences, Medical School, University of Exeter, St Luke's Campus, Exeter, EX1 2LU, UK
- Callum Aaron O'Malley: School of Public Health and Sport Sciences, Faculty of Health and Life Sciences, Medical School, University of Exeter, St Luke's Campus, Exeter, EX1 2LU, UK
- Tom Arthur: School of Public Health and Sport Sciences, Faculty of Health and Life Sciences, Medical School, University of Exeter, St Luke's Campus, Exeter, EX1 2LU, UK
- Jack Evans: School of Public Health and Sport Sciences, Faculty of Health and Life Sciences, Medical School, University of Exeter, St Luke's Campus, Exeter, EX1 2LU, UK
- Gavin Buckingham: School of Public Health and Sport Sciences, Faculty of Health and Life Sciences, Medical School, University of Exeter, St Luke's Campus, Exeter, EX1 2LU, UK
2. Patel KY, Wilcox LM, Maloney LT, Ehinger KA, Patel JY, Murray RF. An equivalent illuminant analysis of lightness constancy with physical objects and in virtual reality. Behav Res Methods 2025; 57:170. PMID: 40360714; DOI: 10.3758/s13428-025-02688-1
Abstract
Several previous studies have found significant differences between visual perception in real and virtual environments. Given the increasing use of virtual reality (VR) in performance-critical applications such as medical training and vision research, it is important to understand these differences. Here, we compared lightness constancy in physical and VR environments using a task where viewers matched the reflectance of a fronto-parallel match patch to the reflectance of a reference patch at a range of 3D orientations relative to a light source. We used a custom-built physical apparatus and four VR conditions: (1) All-Cue (replicating the physical apparatus), (2) Reduced-Depth (no disparity or parallax), (3) Shadowless (no cast shadows), and (4) Reduced-Context (no surrounding objects). Lightness constancy was markedly better in the physical condition than in all four VR conditions. Surprisingly, viewers achieved a degree of lightness constancy even in the Reduced-Context condition, despite the absence of lighting cues. In a follow-up experiment, we re-tested the All-Cue and Reduced-Context conditions in VR with new observers, each participating in only one condition. Here, we found lower levels of constancy than in the first experiment, suggesting that experience across multiple experimental settings and possibly exposure to the physical apparatus during instructions had enhanced performance. We conclude that even when robust lighting and shape cues are available, lightness constancy is substantially better in real environments than in virtual environments. We consider possible explanations for this finding, such as the imperfect models of materials and lighting that are used for rendering in real-time VR.
Affiliation(s)
- Khushbu Y Patel: Department of Psychology and Centre for Vision Research, York University, Toronto, Canada
- Laurie M Wilcox: Department of Psychology and Centre for Vision Research, York University, Toronto, Canada
- Krista A Ehinger: School of Computing and Information Systems, The University of Melbourne, Melbourne, Australia
- Jaykishan Y Patel: Department of Psychology and Centre for Vision Research, York University, Toronto, Canada
- Richard F Murray: Department of Psychology and Centre for Vision Research, York University, Toronto, Canada
3. Kisker J, Johnsdorf M, Sagehorn M, Hofmann T, Gruber T, Schöne B. Comparative analysis of early visual processes across presentation modalities: The event-related potential evoked by real-life, virtual reality, and planar objects. Cogn Affect Behav Neurosci 2025. PMID: 40199787; DOI: 10.3758/s13415-025-01294-0
Abstract
Characteristics of real-life objects, such as binocular depth, may engage visual processes beyond what planar pictures used as experimental stimuli can reveal. While virtual reality (VR) is used to approximate real-life features in experimental settings, this approach fundamentally hinges on whether the distinct modalities are processed in a similar way. To examine which stages of early visual processing depend on modality-specific characteristics, our study compares the electrophysiological responses to 2D (PC), VR, and real-life (RL) objects. To this end, participants passively explored abstract objects in one of these modalities, followed by active exploration in a delayed matching-to-sample task. Our results indicate that all modalities fundamentally yield comparable visual processes. Remarkably, our RL setup evoked the P1-N1-P2 complex corresponding to the well-established ERP morphology. However, the magnitude of the ERP response during real-life visual processing was more comparable to the response to VR than to PC. Indicating effects of stereoscopy on the earliest processing stages, the P1 differentiated only between PC and RL, and the N1 differentiated PC from both other conditions. In contrast, the P2 distinguished VR from both other conditions, potentially indicating stereoscopic visual fatigue. Complementary analysis of the alpha-band response revealed higher attentional demands in response to PC and VR compared with RL, ruling out that the ERP-based results are exclusively driven by attentional effects. Whereas comparable fundamental processes likely occur under all modalities, our study advises using VR when the magnitude of these processes is of relevance, emphasizing its value for approximating real-life visual processing.
Affiliation(s)
- Joanna Kisker: Experimental Psychology I, Institute of Psychology, Osnabrück University, Lise-Meitner-Straße 3, 49076 Osnabrück, Germany
- Marike Johnsdorf: Experimental Psychology I, Institute of Psychology, Osnabrück University, Lise-Meitner-Straße 3, 49076 Osnabrück, Germany
- Merle Sagehorn: Experimental Psychology I, Institute of Psychology, Osnabrück University, Lise-Meitner-Straße 3, 49076 Osnabrück, Germany
- Thomas Hofmann: Industrial Design, Engineering and Computer Science, University of Applied Sciences Osnabrück, Osnabrück, Germany
- Thomas Gruber: Experimental Psychology I, Institute of Psychology, Osnabrück University, Lise-Meitner-Straße 3, 49076 Osnabrück, Germany
- Benjamin Schöne: Experimental Psychology I, Institute of Psychology, Osnabrück University, Lise-Meitner-Straße 3, 49076 Osnabrück, Germany; Department of Psychology, Norwegian University of Science and Technology, Trondheim, Norway
4. Cheng M, Chukoskie L. Impact of Visual Clutter in VR on Visuomotor Integration in Autistic Individuals. IEEE Trans Neural Syst Rehabil Eng 2025; 33:829-840. PMID: 40031526; DOI: 10.1109/tnsre.2025.3543131
Abstract
Autistic individuals often exhibit superior local visual sensitivity but may struggle with global visual processing, affecting their visuomotor integration (VMI). Goal-directed overhand throwing is common in both physical-environment (PE) and virtual reality (VR) games, demanding spatial and temporal accuracy to perceive position and motion, and precise VMI. Understanding VMI in autistic individuals and exploring supportive designs in VR are crucial for rehabilitation and improving accessibility. We assessed static visuospatial accuracy and VMI in autistic and non-autistic adults using spatial estimation and overhand throwing tasks with eye and hand tracking, comparing VR to PE. In VR, all participants exhibited reduced visual accuracy, increased visual scanning, and shortened quiet-eye duration and eye-following duration after ball release, which led to decreased throwing performance. However, simplifying visual information in VR throwing improved these measures and resulted in autistic individuals outperforming non-autistic peers.
5. Chiossi F, Trautmannsheimer I, Ou C, Gruenefeld U, Mayer S. Searching Across Realities: Investigating ERPs and Eye-Tracking Correlates of Visual Search in Mixed Reality. IEEE Trans Vis Comput Graph 2024; 30:6997-7007. PMID: 39264778; DOI: 10.1109/tvcg.2024.3456172
Abstract
Mixed Reality allows us to integrate virtual and physical content into users' environments seamlessly. Yet how this fusion affects perceptual and cognitive resources and our ability to find virtual or physical objects remains uncertain. Displaying virtual and physical information simultaneously might lead to divided attention and increased visual complexity, impacting users' visual processing, performance, and workload. In a visual search task, we asked participants to locate virtual and physical objects in Augmented Reality and Augmented Virtuality to understand the effects on performance. We evaluated search efficiency and attention allocation for virtual and physical objects using event-related potentials, fixation and saccade metrics, and behavioral measures. We found that users were more efficient in identifying objects in Augmented Virtuality, while virtual objects gained saliency in Augmented Virtuality. This suggests that visual fidelity might increase the perceptual load of the scene. Reduced amplitude of the distractor-positivity ERP and fixation patterns supported improved distractor suppression and search efficiency in Augmented Virtuality. We discuss design implications for mixed reality adaptive systems based on physiological inputs for interaction.
6. Patel KY, Wilcox LM, Maloney LT, Ehinger KA, Patel JY, Wiedenmann E, Murray RF. Lightness constancy in reality, in virtual reality, and on flat-panel displays. Behav Res Methods 2024; 56:6389-6407. PMID: 38443726; DOI: 10.3758/s13428-024-02352-0
Abstract
Virtual reality (VR) displays are being used in an increasingly wide range of applications. However, previous work shows that viewers often perceive scene properties very differently in real and virtual environments, and so realistic perception of virtual stimuli should always be a carefully tested conclusion, not an assumption. One important property for realistic scene perception is surface color. To evaluate how well virtual platforms support realistic perception of achromatic surface color, we assessed lightness constancy in a physical apparatus with real lights and surfaces, in a commercial VR headset, and on a traditional flat-panel display. We found that lightness constancy was good in all three environments, though significantly better in the real environment than on the flat-panel display. We also found that variability across observers was significantly greater in VR and on the flat-panel display than in the physical environment. We conclude that these discrepancies should be taken into account in applications where realistic perception is critical, but also that in many cases VR can be used as a flexible alternative to flat-panel displays and a reasonable proxy for real environments.
Affiliation(s)
- Khushbu Y Patel: Department of Psychology and Centre for Vision Research, York University, Toronto, Canada
- Laurie M Wilcox: Department of Psychology and Centre for Vision Research, York University, Toronto, Canada
- Krista A Ehinger: School of Computing and Information Systems, University of Melbourne, Melbourne, Australia
- Jaykishan Y Patel: Department of Psychology and Centre for Vision Research, York University, Toronto, Canada
- Emma Wiedenmann: Department of Psychology and Centre for Vision Research, York University, Toronto, Canada; Department of Psychology, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Richard F Murray: Department of Psychology and Centre for Vision Research, York University, Toronto, Canada
7. Yildiz GY, Skarbez R, Sperandio I, Chen SJ, Mulder IJ, Chouinard PA. Linear perspective cues have a greater effect on the perceptual rescaling of distant stimuli than textures in the virtual environment. Atten Percept Psychophys 2024; 86:653-665. PMID: 38182938; DOI: 10.3758/s13414-023-02834-x
Abstract
The presence of pictorial depth cues in virtual environments is important for minimising distortions driven by unnatural viewing conditions (e.g., the vergence-accommodation conflict). Our aim was to determine how different pictorial depth cues affect size constancy in virtual environments under binocular and monocular viewing conditions. We systematically removed linear perspective cues and textures from a hallway in a virtual environment. The experiment was performed using the method of constant stimuli. The task required participants to compare the size of 'far' (10 m) and 'near' (5 m) circles displayed inside a virtual environment containing one, both, or neither of these pictorial depth cues. Participants performed the experiment under binocular and monocular viewing conditions while wearing a virtual reality headset. ANOVA revealed that size constancy was greater for both the far and the near circles in the virtual environment with pictorial depth cues than in the one without cues. However, the effect of linear perspective cues was stronger than that of textures, especially for the far circle. We found no difference between the binocular and monocular viewing conditions across the different virtual environments. We conclude that linear perspective cues exert a stronger effect than textures on the perceptual rescaling of far stimuli placed in the virtual environment, and that this effect does not vary between binocular and monocular viewing conditions.
Affiliation(s)
- Gizem Y Yildiz: Department of Psychology, Counselling, and Therapy, La Trobe University, George Singer Building, Room 460, 75 Kingsbury Drive, Bundoora, Victoria, 3086, Australia; Institute of Neuroscience and Medicine (INM-3), Forschungszentrum Jülich GmbH, Jülich, Germany
- Richard Skarbez: Department of Computer Science and Information Technology, La Trobe University, Melbourne, VIC, Australia
- Irene Sperandio: Department of Psychology and Cognitive Science, University of Trento, Rovereto, TN, Italy
- Sandra J Chen: Department of Psychology, Counselling, and Therapy, La Trobe University, George Singer Building, Room 460, 75 Kingsbury Drive, Bundoora, Victoria, 3086, Australia
- Indiana J Mulder: Department of Psychology, Counselling, and Therapy, La Trobe University, George Singer Building, Room 460, 75 Kingsbury Drive, Bundoora, Victoria, 3086, Australia
- Philippe A Chouinard: Department of Psychology, Counselling, and Therapy, La Trobe University, George Singer Building, Room 460, 75 Kingsbury Drive, Bundoora, Victoria, 3086, Australia
8. Yoo SA, Lee S, Joo SJ. Monocular cues are superior to binocular cues for size perception when they are in conflict in virtual reality. Cortex 2023; 166:80-90. PMID: 37343313; DOI: 10.1016/j.cortex.2023.05.010
Abstract
Three-dimensional (3D) depth information is important for estimating object sizes. The visual system extracts 3D depth information using both binocular cues and monocular cues. However, how these different depth signals interact to compute object size in 3D space is unclear. Here, we study the relative contributions of monocular and binocular depth information to size perception in a modified Ponzo context by manipulating their relations in a virtual reality environment. Specifically, we compared the amount of the size illusion in two conditions, in which monocular cues and binocular disparity in the Ponzo context indicated either the same depth sign (congruent) or opposite depth signs (incongruent). Our results show an increase in the amount of the Ponzo illusion in the congruent condition. In contrast, in the incongruent condition, the two cues indicating opposite depth signs do not cancel out the Ponzo illusion, suggesting that the effects of the two cues are not equal. Rather, binocular disparity information seems to be suppressed, and the size judgment depends mainly on the monocular depth information when the two cues are in conflict. Our results suggest that monocular and binocular depth signals are fused for size perception only when they indicate the same depth sign, and that top-down 3D depth information based on monocular cues contributes more to size perception than binocular disparity when the cues are in conflict in virtual reality.
Affiliation(s)
- Sang-Ah Yoo: Department of Psychology, Pusan National University, Busan, Republic of Korea
- Suhyun Lee: Department of Psychology, Pusan National University, Busan, Republic of Korea
- Sung Jun Joo: Department of Psychology, Pusan National University, Busan, Republic of Korea
9. Mangalam M, Yarossi M, Furmanek MP, Krakauer JW, Tunik E. Investigating and acquiring motor expertise using virtual reality. J Neurophysiol 2023; 129:1482-1491. PMID: 37194954; PMCID: PMC10281781; DOI: 10.1152/jn.00088.2023
Abstract
After just months of simulated training, on January 19, 2019, a 23-year-old esports pro gamer, Enzo Bonito, took to the racetrack and beat Lucas di Grassi, a Formula E and ex-Formula 1 driver with decades of real-world racing experience. This event raised the possibility that practicing in virtual reality can be surprisingly effective for acquiring motor expertise in real-world tasks. Here, we evaluate the potential of virtual reality to serve as a space for training to expert levels in highly complex real-world tasks, in time windows much shorter than those required in the real world, at much lower financial cost, and without real-world hazards. We also discuss how VR can serve as an experimental platform for exploring the science of expertise more generally.
Affiliation(s)
- Madhur Mangalam: Department of Physical Therapy, Movement, and Rehabilitation Science, Northeastern University, Boston, Massachusetts, United States; Division of Biomechanics and Research Development, Department of Biomechanics, University of Nebraska at Omaha, Omaha, Nebraska, United States; Center for Research in Human Movement Variability, University of Nebraska at Omaha, Omaha, Nebraska, United States
- Mathew Yarossi: Department of Physical Therapy, Movement, and Rehabilitation Science, Northeastern University, Boston, Massachusetts, United States; Department of Electrical and Computer Engineering, Northeastern University, Boston, Massachusetts, United States
- Mariusz P Furmanek: Department of Physical Therapy, Movement, and Rehabilitation Science, Northeastern University, Boston, Massachusetts, United States; Institute of Sport Sciences, The Jerzy Kukuczka Academy of Physical Education in Katowice, Katowice, Poland; Physical Therapy Department, University of Rhode Island, Kingston, Rhode Island, United States
- John W Krakauer: Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, Maryland, United States; Department of Neuroscience, The Johns Hopkins University School of Medicine, Baltimore, Maryland, United States; Department of Physical Medicine and Rehabilitation, The Johns Hopkins University School of Medicine, Baltimore, Maryland, United States; The Santa Fe Institute, Santa Fe, New Mexico, United States
- Eugene Tunik: Department of Physical Therapy, Movement, and Rehabilitation Science, Northeastern University, Boston, Massachusetts, United States; Department of Electrical and Computer Engineering, Northeastern University, Boston, Massachusetts, United States