1
Lin LPY, Linkenauger SA. Jumping and leaping estimations using optic flow. Psychon Bull Rev 2024; 31:1759-1767. [PMID: 38286911] [PMCID: PMC11358219] [DOI: 10.3758/s13423-024-02459-7]
Abstract
Optic flow provides information on movement direction and speed during locomotion. Changing the relationship between optic flow and walking speed via training has been shown to influence subsequent distance and hill steepness estimations. Previous research has shown that experience with slow optic flow at a given walking speed was associated with increased effort and distance overestimation, in comparison to experience with fast optic flow at the same walking speed. Here, we investigated whether exposure to different optic flow speeds relative to gait influences perceptions of leaping and jumping ability. Participants estimated their maximum leaping and jumping ability after exposure to either fast or moderate optic flow at the same walking speed. Those calibrated to fast optic flow estimated farther leaping and jumping abilities than those calibrated to moderate optic flow. Findings suggest that recalibration between optic flow and walking speed may specify an action boundary when calibrated or scaled to actions such as leaping; alternatively, the manipulation of optic flow speed may have changed the anticipated effort of walking a prescribed distance, which in turn influences one's perceived action capabilities for jumping and leaping.
Affiliation(s)
- Lisa P Y Lin
- Department of General Psychology, Justus-Liebig University Gießen, Gießen, Germany.
2
Zhou L, Wei W, Ooi TL, He ZJ. An allocentric human odometer for perceiving distances on the ground plane. eLife 2024; 12:RP88095. [PMID: 39023517] [PMCID: PMC11257686] [DOI: 10.7554/elife.88095]
Abstract
We reliably judge locations of static objects when we walk despite the retinal images of these objects moving with every step we take. Here, we showed our brains solve this optical illusion by adopting an allocentric spatial reference frame. We measured perceived target location after the observer walked a short distance from the home base. Supporting the allocentric coding scheme, we found the intrinsic bias, which acts as a spatial reference frame for perceiving the location of a dimly lit target in the dark, remained grounded at the home base rather than traveling along with the observer. The path-integration mechanism responsible for this can utilize both active and passive (vestibular) translational motion signals, but only along the horizontal direction. This asymmetric path-integration finding in human visual space perception is reminiscent of the asymmetric spatial memory finding in desert ants, pointing to nature's wondrous and logically simple design for terrestrial creatures.
Affiliation(s)
- Liu Zhou
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, United States
- Wei Wei
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, United States
- College of Optometry, The Ohio State University, Columbus, United States
- Teng Leng Ooi
- College of Optometry, The Ohio State University, Columbus, United States
- Zijiang J He
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, United States
3
Zhou L, Wei W, Ooi TL, He ZJ. An allocentric human odometer for perceiving distances on the ground plane. bioRxiv 2024:2023.03.22.533725. [PMID: 38645085] [PMCID: PMC11030244] [DOI: 10.1101/2023.03.22.533725]
Abstract
We reliably judge locations of static objects when we walk despite the retinal images of these objects moving with every step we take. Here, we showed our brains solve this optical illusion by adopting an allocentric spatial reference frame. We measured perceived target location after the observer walked a short distance from the home base. Supporting the allocentric coding scheme, we found the intrinsic bias [1,2], which acts as a spatial reference frame for perceiving the location of a dimly lit target in the dark, remained grounded at the home base rather than traveling along with the observer. The path-integration mechanism responsible for this can utilize both active and passive (vestibular) translational motion signals, but only along the horizontal direction. This anisotropic path-integration finding in human visual space perception is reminiscent of the anisotropic spatial memory finding in desert ants [3], pointing to nature's wondrous and logically simple design for terrestrial creatures.
4
Zanchi S, Cuturi LF, Sandini G, Gori M, Ferrè ER. Vestibular contribution to spatial encoding. Eur J Neurosci 2023; 58:4034-4042. [PMID: 37688501] [DOI: 10.1111/ejn.16146]
Abstract
Determining the spatial relation between objects and our location in the surroundings is essential for survival. Vestibular inputs provide key information about the position and movement of our head in three-dimensional space, contributing to spatial navigation. Yet, their role in encoding spatial localisation of environmental targets remains to be fully understood. We probed the accuracy and precision of healthy participants' representations of environmental space by measuring their ability to encode the spatial location of visual targets (Experiment 1). Participants were asked to detect a visual light and then walk towards it. Vestibular signalling was artificially disrupted using stochastic galvanic vestibular stimulation (sGVS) applied selectively while participants encoded the targets' locations. sGVS impaired the accuracy and precision of locating the environmental visual targets. Importantly, this effect was specific to the visual modality. The location of acoustic targets was not influenced by vestibular alterations (Experiment 2). Our findings indicate that the vestibular system plays a role in localising visual targets in the surrounding environment, suggesting a crucial functional interaction between vestibular and visual signals for the encoding of the spatial relationship between our body position and the surrounding objects.
Affiliation(s)
- Silvia Zanchi
- Unit of Visually Impaired People, Italian Institute of Technology, Genoa, Italy
- Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Luigi F Cuturi
- Unit of Visually Impaired People, Italian Institute of Technology, Genoa, Italy
- Department of Cognitive Sciences, Psychology, Education and Cultural Studies, University of Messina, Messina, Italy
- Giulio Sandini
- Robotics Brain and Cognitive Sciences, Italian Institute of Technology, Genoa, Italy
- Monica Gori
- Unit of Visually Impaired People, Italian Institute of Technology, Genoa, Italy
- Elisa R Ferrè
- Department of Psychological Sciences, Birkbeck, University of London, London, UK
5
Gao L, Huang Y, Zhang Y, Zhang X, Liu Z, Pan JS, Yu M. Monocular information for perceiving large egocentric distance: A comparison between monocularly blind patients and normally sighted observers. Vision Res 2023; 211:108279. [PMID: 37422937] [DOI: 10.1016/j.visres.2023.108279]
Abstract
The debate surrounding the advantages of binocular versus monocular vision has persisted for decades. This study aimed to investigate whether individuals with monocular vision loss could accurately and precisely perceive large egocentric distances in real-world environments, under natural viewing conditions, comparable to those with normal vision. A total of 49 participants took part in the study, divided into three groups based on their viewing conditions. Two experiments were conducted to assess the accuracy and precision of estimating egocentric distances to visual targets and the coordination of actions during blind walking. In Experiment 1, participants were positioned in both a hallway and a large open field, tasked with judging the midpoint of self-to-target distances spanning from 5 to 30 m. Experiment 2 involved a blind walking task, where participants attempted to walk towards the same targets without visual or environmental feedback at an unusually rapid pace. The findings revealed that perceptual accuracy and precision were primarily influenced by the environmental context, motion condition, and target distance, rather than the visual conditions. Surprisingly, individuals with monocular vision loss demonstrated comparable accuracy and precision in perceiving egocentric distances to those of individuals with normal vision.
Affiliation(s)
- Le Gao
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Yiru Huang
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Yuning Zhang
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Xinyi Zhang
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Zitian Liu
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Jing S Pan
- Department of Psychology, Sun Yat-sen University, Guangzhou 510275, China
- Minbin Yu
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
6
Machowska-Krupa W, Cych P. Differences in Coordination Motor Abilities between Orienteers and Athletics Runners. Int J Environ Res Public Health 2023; 20:2643. [PMID: 36768012] [PMCID: PMC9915626] [DOI: 10.3390/ijerph20032643]
Abstract
This study aimed to examine the differences in coordination motor abilities between track and field (T&F) runners and foot orienteers (Foot-O). Another purpose of this study was to analyse gender differences in terms of coordination motor abilities. Coordination skills tests were undertaken by 11 Foot-O and 11 T&F runners. Each group consisted of five women and six men who lived in the Lower Silesia region of Poland. The Foot-O group consisted of 11 orienteers aged 24.09 (±4.78) years, with a minimum 10 years of experience, while the T&F group consisted of 11 long-distance runners aged 24.91 (±4.04) years and with a performance level at distances of 5 km and 10 km equivalent to that for orienteering. Some of the participants represented world-class level (e.g., world junior medallists), and most of them were of national elite level. Coordination tests of motor abilities were chosen for their reliability and repeatability and included tests of spatial orientation, rhythmisation of movements, balance and kinaesthetic differentiation. The Foot-O group performed significantly better than the T&F group in terms of some coordination abilities. Differences were observed between the Foot-O and T&F runners in balance ability measured during the "Walk on the bench" test. Further research should be carried out in this area in order to confirm these differences.
7
Creem-Regehr SH, Stefanucci JK, Bodenheimer B. Perceiving distance in virtual reality: theoretical insights from contemporary technologies. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210456. [PMID: 36511405] [PMCID: PMC9745869] [DOI: 10.1098/rstb.2021.0456]
Abstract
Decades of research have shown that absolute egocentric distance is underestimated in virtual environments (VEs) when compared with the real world. This finding has implications for the use of VEs in applications that require an accurate sense of absolute scale. Fortunately, this underperception of scale can be attenuated by several factors, making perception more similar to (but still not the same as) that of the real world. Here, we examine these factors as two categories: (i) experience inherent to the observer, and (ii) characteristics inherent to the display technology. We analyse how these factors influence the sources of information for absolute distance perception with the goal of understanding how the scale of virtual spaces is calibrated. We identify six types of cues that change with these approaches, contributing both to a theoretical understanding of depth perception in VEs and a call for future research that can benefit from changing technologies. This article is part of the theme issue 'New approaches to 3D vision'.
Affiliation(s)
- Bobby Bodenheimer
- Department of Computer Science, Vanderbilt University, Nashville, TN 37235, USA
8
Bosco A, Sanz Diez P, Filippini M, Fattori P. The influence of action on perception spans different effectors. Front Syst Neurosci 2023; 17:1145643. [PMID: 37205054] [PMCID: PMC10185787] [DOI: 10.3389/fnsys.2023.1145643]
Abstract
Perception and action are fundamental processes that characterize our life and our possibility to modify the world around us. Several pieces of evidence have shown an intimate and reciprocal interaction between perception and action, leading us to believe that these processes rely on a common set of representations. The present review focuses on one particular aspect of this interaction: the influence of action on perception from a motor effector perspective during two phases, action planning and the phase following execution of the action. The movements performed by eyes, hands, and legs have a different impact on object and space perception; studies that use different approaches and paradigms have formed an interesting general picture that demonstrates the existence of an action effect on perception, before as well as after its execution. Although the mechanisms of this effect are still being debated, different studies have demonstrated that most of the time this effect pragmatically shapes and primes perception of relevant features of the object or environment which calls for action; at other times it improves our perception through motor experience and learning. Finally, a future perspective is provided, in which we suggest that these mechanisms can be exploited to increase trust in artificial intelligence systems that are able to interact with humans.
Affiliation(s)
- Annalisa Bosco
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Alma Mater Research Institute for Human-Centered Artificial Intelligence (Alma Human AI), University of Bologna, Bologna, Italy
- Correspondence: Annalisa Bosco
- Pablo Sanz Diez
- Carl Zeiss Vision International GmbH, Aalen, Germany
- Institute for Ophthalmic Research, Eberhard Karls University Tübingen, Tübingen, Germany
- Matteo Filippini
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Patrizia Fattori
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Alma Mater Research Institute for Human-Centered Artificial Intelligence (Alma Human AI), University of Bologna, Bologna, Italy
9
Does path integration contribute to human navigation in large-scale space? Psychon Bull Rev 2022. [DOI: 10.3758/s13423-022-02216-8]
10
Zhu H, Gu Z, Ohno R, Kong Y. Effect of landscape design on depth perception in classical Chinese gardens: A quantitative analysis using virtual reality simulation. Front Psychol 2022; 13:963600. [DOI: 10.3389/fpsyg.2022.963600]
Abstract
It is common for visitors to have rich and varied experiences in the limited space of a classical Chinese garden. This leads to the sense that the garden's scale is much larger than it really is. A main reason for this perceptual bias is the gardener's manipulation of visual information. Most studies have discussed this phenomenon in terms of qualitative description with fragmented perspectives taken from static points, without considering ambient visual information or continuously changing observation points. A general question arises, then, on why depth perception can vary from one observation point to another along a garden path. To better understand the spatial experience in classical Chinese gardens, this study focused on variations in perceived depth among different observation points and aimed to identify influential visual information through psychophysical experimentation. As stimuli for the experiment, panoramic photos of Liu garden were taken from three positions at Lvyin Pavilion. Considering the effects of pictorial visual cues on depth perception, the photos were processed to create 18 kinds of stimuli (six image treatments × three positions). Two tasks were presented to the participants. In Task 1, 71 participants were asked to rate the depth value of the garden using the magnitude estimation method in a cave automatic virtual environment (CAVE). Statistical analysis of Task 1 revealed that depth values differed significantly among different viewpoints. In Task 2, participants were asked to compare 18 stimuli and 3D images presented on three connected monitors and to judge the depth of the garden using the adjustment method. The results of Task 2 again showed that depth values differed significantly among different viewpoints. In both tasks, ambient information (i.e., the perspective of interior space) significantly influenced depth perception.
11
Chen S, Li Y, Pan JS. Monocular Perception of Equidistance: The Effects of Viewing Experience and Motion-generated Information. Optom Vis Sci 2022; 99:470-478. [PMID: 35149634] [DOI: 10.1097/opx.0000000000001878]
Abstract
SIGNIFICANCE: Using static depth information, normal observers monocularly perceived equidistance with high accuracy. With dynamic depth information and/or monocular viewing experience, they perceived with high precision. Therefore, monocular patients, who were adapted to monocular viewing, should be able to perceive equidistance and perform related tasks.
PURPOSE: This study investigated whether normal observers could accurately and precisely perceive equidistance with one eye, in different viewing environments, with various optical information and monocular viewing experience.
METHODS: Sixteen normally sighted observers monocularly perceived the distance (5 to 30 m) between a target and the self and replicated it either in hallways that contained ample static monocular depth information but had a limited field of view or on a lawn that contained less depth information but had a large field of view. Participants remained stationary or walked 5 m before performing the task, as a manipulation of the availability of dynamic depth information. Eight observers wore eye patches for 3 hours before the experiment and gained monocular viewing experience, whereas the others did not. Both accuracy and precision were measured.
RESULTS: As long as static monocular depth information was available, equidistance perception was effectively accurate, despite minute underestimation. Perception precision was improved by prior monocular walking and/or experience with monocularity. Accuracy and precision were not affected by the viewing environments.
CONCLUSIONS: Using static and dynamic monocular depth information and/or with monocular experience, normal observers judged equidistance with reliable accuracy and precision. This implies that patients with monocular vision, who are better adapted than participants of this study, should also be able to perceive equidistance and perform distance-dependent tasks in natural viewing environments.
Affiliation(s)
- Shenying Chen
- Department of Psychology, Sun Yat-sen University, Guangzhou, China
- Yusi Li
- Department of Psychology, Sun Yat-sen University, Guangzhou, China
12
Dukes JM, Norman JF, Shartzer CD. Visual distance perception indoors, outdoors, and in the dark. Vision Res 2022; 194:107992. [PMID: 35030510] [DOI: 10.1016/j.visres.2021.107992]
Abstract
The ability to visually perceive distances in depth was evaluated in two experiments. In both experiments, the observers were required to bisect a distance interval oriented in depth (8 m total extent in Experiment 1 and 7 m in Experiment 2). The purpose of Experiment 1 was to examine the effects of environmental context (indoors in the dark, indoors in the light, and outdoors) and monocular versus binocular viewing. The purpose of Experiment 2 was to manipulate linear perspective to determine its importance for perceiving depth interval magnitudes. In the outdoor environment, the observers' bisection judgments indicated perceptual compression of farther distances similar to that obtained in many previous studies. In contrast, the observers' judgments in the indoor lighted environment were consistent with the perceptual expansion of farther distances. There was also a beneficial effect of binocular viewing upon the precision of the observers' repeated judgments, but the size of this effect was large only within the dark environment. Finally, linear perspective was found to significantly modulate the observers' bisection judgments such that they became accurate only when perspective was available.
Affiliation(s)
- Jessica M Dukes
- Department of Psychology, Justus Liebig University Giessen, Giessen, Germany
- J Farley Norman
- Department of Psychological Sciences, Ogden College of Science and Engineering, Western Kentucky University, Bowling Green, KY, USA
- Challee D Shartzer
- Department of Psychological Sciences, Ogden College of Science and Engineering, Western Kentucky University, Bowling Green, KY, USA
13
Baxter BA, Warren WH. A day at the beach: Does visually perceived distance depend on the energetic cost of walking? J Vis 2021; 21:13. [PMID: 34812836] [PMCID: PMC8626849] [DOI: 10.1167/jov.21.12.13]
Abstract
It takes less effort to walk from here to the Tiki Hut on the brick walkway than on the sandy beach. Does that influence how far away the Tiki Hut looks? The energetic cost of walking on dry sand is twice that of walking on firm ground (Lejeune et al., 1998). If perceived distance depends on the energetic cost or anticipated effort of walking (Proffitt, 2006), then the distance of a target viewed over sand should appear much greater than one viewed over brick. If perceived distance is specified by optical information (e.g., declination angle from the horizon; Ooi et al., 2001), then the distances should appear similar. Participants (N = 13) viewed a target at a distance of 5, 7, 9, or 11 m over sand or brick and then blind-walked an equivalent distance on the same or different terrain. First, we observed no main effect of walked terrain; walked distances on sand and brick were the same (p = 0.46), indicating that locomotion was calibrated to each substrate. Second, responses were actually greater after viewing over brick than over sand (p < 0.001), opposite to the prediction of the energetic hypothesis. This unexpected overshooting can be explained by the slight incline of the brick walkway, which partially raises the visually perceived eye level (VPEL) and increases the target distance specified by the declination angle. The result is thus consistent with the information hypothesis. We conclude that visually perceived egocentric distance depends on optical information and not on the anticipated energetic cost of walking.
Affiliation(s)
- Brittany A Baxter
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA
- William H Warren
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA
14
The role of vision and proprioception in self-motion encoding: An immersive virtual reality study. Atten Percept Psychophys 2021; 83:2865-2878. [PMID: 34341941] [PMCID: PMC8460581] [DOI: 10.3758/s13414-021-02344-8]
Abstract
Past research on the advantages of multisensory input for remembering spatial information has mainly focused on memory for objects or surrounding environments. Less is known about the role of cue combination in memory for own body location in space. In a previous study, we investigated participants' accuracy in reproducing a rotation angle in a self-rotation task. Here, we focus on the memory aspect of the task. Participants had to rotate themselves back to a specified starting position in three different sensory conditions: a blind condition, a condition with disrupted proprioception, and a condition where both vision and proprioception were reliably available. To investigate the difference between encoding and storage phases of remembering proprioceptive information, rotation amplitude and recall delay were manipulated. The task was completed in a real testing room and in immersive virtual reality (IVR) simulations of the same environment. We found that proprioceptive accuracy is lower when vision is not available and that performance is generally less accurate in IVR. In reality conditions, the degree of rotation affected accuracy only in the blind condition, whereas in IVR, it caused more errors in both the blind condition and to a lesser degree when proprioception was disrupted. These results indicate an improvement in encoding own body location when vision and proprioception are optimally integrated. No reliable effect of delay was found.
15
Zhang J, Yang X, Jin Z, Li L. Distance Estimation in Virtual Reality Is Affected by Both the Virtual and the Real-World Environments. Iperception 2021; 12:20416695211023956. [PMID: 34211686] [PMCID: PMC8216372] [DOI: 10.1177/20416695211023956]
Abstract
The experience in virtual reality (VR) is unique, in that observers are in a real-world location while browsing through a virtual scene. Previous studies have investigated the effect of the virtual environment on distance estimation. However, it is unclear how the real-world environment influences distance estimation in VR. Here, we measured distance estimation using a bisection (Experiment 1) and a blind-walking (Experiments 2 and 3) method. Participants performed distance judgments in VR, which rendered either virtual indoor or outdoor scenes. Experiments were also carried out in either real-world indoor or outdoor locations. In the bisection experiment, judged distance in the virtual outdoor scene was greater than that in the virtual indoor scene. However, the real-world environment had no impact on distance judgment estimated by bisection. In the blind-walking experiment, judged distance in the real-world outdoor location was greater than that in the real-world indoor location. On the other hand, the virtual environment had no impact on distance judgment estimated by blind-walking. Generally, our results suggest that both the virtual and real-world environments have an impact on distance judgment in VR. In particular, the real-world environment where a person is physically located during a VR experience influences the person's distance estimation in VR.
Affiliation(s)
- Junjun Zhang
- MOE Key Lab for Neuroinformation, The Clinical Hospital of Chengdu Brain Science Institute, University of Electronic Science and Technology of China, Chengdu, China
- Xiaoyan Yang
- MOE Key Lab for Neuroinformation, The Clinical Hospital of Chengdu Brain Science Institute, University of Electronic Science and Technology of China, Chengdu, China
- Zhenlan Jin
- MOE Key Lab for Neuroinformation, The Clinical Hospital of Chengdu Brain Science Institute, University of Electronic Science and Technology of China, Chengdu, China
- Ling Li
- MOE Key Lab for Neuroinformation, The Clinical Hospital of Chengdu Brain Science Institute, University of Electronic Science and Technology of China, Chengdu, China
16
Abstract
With the increase in popularity of consumer virtual reality headsets, for research and other applications, it is important to understand the accuracy of 3D perception in VR. We investigated the perceptual accuracy of near-field virtual distances using a size and shape constancy task, in two commercially available devices. Participants wore either the HTC Vive or the Oculus Rift and adjusted the size of a virtual stimulus to match the geometric qualities (size and depth) of a physical stimulus they were able to refer to haptically. The judgments participants made allowed for an indirect measure of their perception of the egocentric, virtual distance to the stimuli. The data show under-constancy and are consistent with research from carefully calibrated psychophysical techniques. There was no difference in the degree of constancy found in the two headsets. We conclude that consumer virtual reality headsets provide a sufficiently high degree of accuracy in distance perception to allow them to be used confidently in future experimental vision science and other research applications in psychology.
17
Feldstein IT, Kölsch FM, Konrad R. Egocentric Distance Perception: A Comparative Study Investigating Differences Between Real and Virtual Environments. Perception 2020; 49:940-967. [PMID: 33002392 DOI: 10.1177/0301006620951997] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Virtual reality systems are a popular tool in the behavioral sciences. Participants' behavior is, however, a response to cognitively processed stimuli. Consequently, researchers must ensure that virtually perceived stimuli resemble those present in the real world to ensure the ecological validity of their findings. Our article provides a literature review relating to distance perception in virtual reality. Furthermore, we present a new study that compares verbal distance estimates within real and virtual environments. The virtual space, a replica of a real outdoor area, was displayed using a state-of-the-art head-mounted display. Investigated distances ranged from 8 to 13 m. Overall, the results show no significant difference between egocentric distance estimates in real and virtual environments. However, a more in-depth analysis suggests that the order in which participants were exposed to the two environments may affect the outcome. The study also suggests that a growing sense of immersion brings estimates of virtual distances into alignment with real ones, and that the discrepancy between estimates of real and virtual distances increases with the incongruity between virtual and actual eye heights, demonstrating the importance of an accurately set virtual eye height.
Affiliation(s)
- Ilja T Feldstein
- Harvard Medical School, Department of Ophthalmology, United States
- Felix M Kölsch
- Technical University of Munich, Department of Mechanical Engineering, Germany
- Robert Konrad
- Stanford University, Department of Electrical Engineering, United States

18
Abstract
Judging the poses, sizes, and shapes of objects accurately is necessary for organisms and machines to operate successfully in the world. Retinal images of three-dimensional objects are mapped by the rules of projective geometry and preserve the invariants of that geometry. Since Plato, it has been debated whether geometry is innate to the human brain, and Poincaré and Einstein thought it worth examining whether formal geometry arises from experience with the world. We examine whether humans have learned to exploit projective geometry to estimate sizes and aspects of three-dimensional shape that are related to relative lengths and aspect ratios. Numerous studies have examined size invariance as a function of physical distance, which changes scale on the retina. Surprisingly, however, possible constancy or inconstancy of relative size seems not to have been investigated for object pose, which changes retinal image size differently along different axes. We show systematic underestimation of length for extents pointing toward or away from the observer, both for static and for dynamically rotating objects. Observers do correct for projected shortening according to the optimal back-transform, obtained by inverting the projection function, but the correction falls short by a multiplicative factor. The clue is provided by the greater underestimation for longer objects, together with the observation that they seem more slanted toward the observer. Adding a multiplicative factor for perceived slant to the back-transform model provides good fits to the corrections used by observers. We quantify the slant illusion with two different slant-matching measurements, and use a dynamic demonstration to show that the slant illusion perceptually dominates length nonrigidity. In biological and mechanical objects, distortions of shape are manifold, and changes in aspect ratio and relative limb sizes are functionally important.
Our model shows that observers try to keep these aspects of shape invariant under three-dimensional rotation by correcting retinal image distortions due to perspective projection, but the corrections can fall short. We discuss how these results imply that humans have internalized particular aspects of projective geometry through evolution or learning. If humans assume that images preserve the continuity, collinearity, and convergence invariances of projective geometry, that would explain why illusions such as the Ames chair appear cohesive despite being projections of disjointed elements, and would thus supplement the generic viewpoint assumption.
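The back-transform idea in this abstract can be sketched numerically. The following is an illustrative orthographic simplification, not the authors' full perspective model: an extent slanted away from the frontoparallel plane projects at a length shortened by the cosine of the slant, the optimal back-transform divides that shortening out, and a correction applied at less than full strength (the factor 0.7 below is a made-up value) reproduces the multiplicative-underestimation pattern the abstract describes.

```python
import math

def projected_length(length, slant_deg):
    # Orthographic simplification: an extent of physical length `length`,
    # slanted `slant_deg` away from the frontoparallel plane, projects to
    # length * cos(slant) on the image plane.
    return length * math.cos(math.radians(slant_deg))

def back_transform(proj, slant_deg):
    # Optimal inverse of the projection above: recover the physical
    # length from the projected length, given the slant.
    return proj / math.cos(math.radians(slant_deg))

length, slant = 1.0, 60.0
proj = projected_length(length, slant)       # 0.5
optimal = back_transform(proj, slant)        # recovers 1.0 exactly
# A correction short of optimal (hypothetical strength 0.7) leaves a
# residual underestimation, as reported for human observers.
partial = proj + 0.7 * (optimal - proj)      # 0.85 < 1.0
```

Under this toy scheme the shortfall grows with slant, matching the report that extents pointing toward or away from the observer are underestimated most.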
Affiliation(s)
- Akihito Maruya
- Graduate Center for Vision Research, State University of New York, New York, NY
- Qasim Zaidi
- Graduate Center for Vision Research, State University of New York, New York, NY

19
Muryy A, Siddharth N, Nardelli N, Glennerster A, Torr PHS. Lessons from reinforcement learning for biological representations of space. Vision Res 2020; 174:79-93. [PMID: 32683096 DOI: 10.1016/j.visres.2020.05.009] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2019] [Revised: 04/26/2020] [Accepted: 05/26/2020] [Indexed: 10/23/2022]
Abstract
Neuroscientists postulate 3D representations in the brain in a variety of different coordinate frames (e.g. 'head-centred', 'hand-centred' and 'world-based'). Recent advances in reinforcement learning demonstrate a quite different approach that may provide a more promising model for biological representations underlying spatial perception and navigation. In this paper, we focus on reinforcement learning methods that reward an agent for arriving at a target image without any attempt to build up a 3D 'map'. We test the ability of this type of representation to support geometrically consistent spatial tasks such as interpolating between learned locations using decoding of feature vectors. We introduce a hand-crafted representation that has, by design, a high degree of geometric consistency and demonstrate that, in this case, information about the persistence of features as the camera translates (e.g. distant features persist) can improve performance on the geometric tasks. These examples avoid Cartesian (in this case, 2D) representations of space. Non-Cartesian, learned representations provide an important stimulus in neuroscience to the search for alternatives to a 'cognitive map'.
Affiliation(s)
- Alex Muryy
- School of Psychology and Clinical Language Sciences, University of Reading, UK
- N Siddharth
- Department of Engineering Science, University of Oxford, UK
- Andrew Glennerster
- School of Psychology and Clinical Language Sciences, University of Reading, UK

20
Viewpoint oscillation improves the perception of distance travelled in static observers but not during treadmill walking. Exp Brain Res 2020; 238:1073-1083. [PMID: 32211928 PMCID: PMC7181415 DOI: 10.1007/s00221-020-05786-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2018] [Accepted: 03/16/2020] [Indexed: 11/25/2022]
Abstract
Optic flow has been found to be a significant cue in static observers' perception of distance travelled. In previous research conducted in a large-scale immersive display (CAVE), adding viewpoint oscillations to radial optic flow simulating forward self-motion modulated this perception. In the present two experiments, we investigated (1) whether the improved perception of distance travelled observed with an oscillating viewpoint in a CAVE also occurs when subjects wear a head-mounted display (HMD, an Oculus Rift) and (2) whether the absence of viewpoint oscillations during treadmill walking affects subjects' perception of self-motion. In Experiment 1, static observers performed a distance-travelled estimation task while viewing either a purely linear visual simulation of self-motion (in depth) or the same flow plus viewpoint oscillations based on the subjects' own head oscillations, previously recorded during treadmill walking. Results show that the benefits of viewpoint oscillations observed in a CAVE persisted when participants wore an HMD. In Experiment 2, participants carried out the same task while walking on a treadmill under two visual conditions simulating self-motion in depth: one with and one without the visual consequences of their head translations. Here, viewpoint oscillations did not improve the accuracy of subjects' distance-travelled estimates. A comparison between the two experiments showed that adding internal dynamic information about actual self-motion to visual information did not enable participants to make better estimates.
21
St George RJ, Day BL, Butler AA, Fitzpatrick RC. Stepping in circles: how locomotor signals of rotation adapt over time. J Physiol 2020; 598:2125-2136. [PMID: 32133628 DOI: 10.1113/jp279171] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2019] [Accepted: 03/02/2020] [Indexed: 11/08/2022] Open
Abstract
KEY POINTS: While it is well described that prolonged rotational stepping adapts the podokinetic sense of rotation, the mechanisms involved are not clearly understood. By studying podokinetic after-rotations following conditioning rotations not previously reported, we show that slower rotational velocities are more readily adapted than faster velocities and that adaptation occurs more quickly than previously thought. We propose a dynamic feedback model of vestibular and podokinetic adaptation that fits rotation trajectories across multiple conditions and data sets. Two adaptation processes were identified that may reflect central and peripheral processes, and the discussion unifies prior findings in the podokinetic literature under this new framework. The findings show the technique is feasible for people with locomotor turning problems.

ABSTRACT: After a prolonged period of stepping in circles, people walk with a curved trajectory when attempting to walk in a straight line without vision. Podokinetic adaptation shows promise in clinical populations for improving locomotor turning; however, the adaptive mechanisms involved are poorly understood. The first phase of this study asks: how does the podokinetic conditioning velocity affect the response velocity, and how quickly can adaptation occur? The second phase asks: can a mathematical feedback model account for the rotation trajectories across different conditioning parameters and different datasets? Twelve healthy participants stepped in place on the axis of a surface rotating at 4 to 20 deg/s for durations of 1-10 min, while using visual cues to maintain a constant heading direction. Afterward, on solid ground, participants were blindfolded and attempted to step without rotating. Participants unknowingly stepped in circles opposite to the direction of the prior platform rotation in all conditions. The angular velocity of this response peaked within 1 min, and the ratio of stimulus to response peak velocity fitted a decreasing power function. The response then decayed exponentially. The feedback model of podokinetic and vestibular adaptive processes fitted the data well and suggested that podokinetic adaptation is explained by a short (141 s) and a long (27 min) time constant. The podokinetic system adapts more quickly than previously thought, and subjects adapt more readily to slower rotation than to faster rotation. These findings have implications for clinical applications of the technique.
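As a loose numerical illustration (not the paper's dynamic feedback model, whose structure is not given in the abstract), the reported short and long time constants can be read as a two-component exponential decay of the after-rotation velocity; the initial velocity and component weights below are made-up values.

```python
import math

TAU_SHORT = 141.0        # short time constant, seconds (from the abstract)
TAU_LONG = 27.0 * 60.0   # long time constant, seconds (from the abstract)

def after_rotation_velocity(t, v0=10.0, w_short=0.5):
    """Hypothetical after-rotation velocity (deg/s) t seconds after its peak,
    decaying as a mixture of the short and long adaptation processes."""
    w_long = 1.0 - w_short
    return v0 * (w_short * math.exp(-t / TAU_SHORT)
                 + w_long * math.exp(-t / TAU_LONG))
```

With these constants, by ten minutes the short process has essentially vanished while roughly two-thirds of the long process remains, so any residual after-rotation is carried almost entirely by the slow component.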
Affiliation(s)
- Rebecca J St George
- Sensorimotor Neuroscience and Ageing Research Group, School of Psychological Sciences, College of Health and Medicine, University of Tasmania, Hobart, Australia
- Brian L Day
- Department of Clinical and Movement Neurosciences, UCL Queen Square Institute of Neurology, London, UK
- Annie A Butler
- Neuroscience Research Australia, Sydney, Australia; School of Medical Sciences, University of New South Wales, Sydney, Australia

22
Guth D, LaDuke R. Veering by Blind Pedestrians: Individual Differences and Their Implications for Instruction. JOURNAL OF VISUAL IMPAIRMENT & BLINDNESS 2020. [DOI: 10.1177/0145482x9508900107] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
This article reports the measurement of the “veering tendency” of four blind pedestrians over three 15-trial test sessions. The findings illustrate between- and within-subject differences in patterns of veering, and the implications of these differences for orientation and mobility instruction are discussed.
Affiliation(s)
- D. Guth
- Department of Blind Rehabilitation, Western Michigan University, Kalamazoo, MI 49008
- R. LaDuke
- Department of Blind Rehabilitation, Western Michigan University, Kalamazoo, MI 49008

23
Spatially incongruent sounds affect visual localization in virtual environments. Atten Percept Psychophys 2020; 82:2067-2075. [PMID: 31900858 DOI: 10.3758/s13414-019-01929-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Distance underestimation along the depth plane is widely found in virtual environments. However, past findings have shown that changes in the visual aspects of virtual reality settings do not lead to more accurate depth estimates. We therefore examined whether nonvisual stimuli, namely sounds, could serve as cues that affect observers' depth perception. Accordingly, we conducted two distance discrimination tasks to examine whether observers' depth localization is affected by a spatially incongruent sound. In Experiment 1, a spatially incongruent sound made a visual target appear farther away than a visual target presented with no sound, but only at a far distance range (i.e., longer than 12 m). Experiment 2 further indicated that the sound shifted visual localization only when the audiovisual spatial disparity did not exceed 4°. Taken together, our findings suggest that the depth localization of a visual object in virtual reality can be altered by a spatially incongruent sound, and point to a potential approach in which a spatially incongruent sound serves as a cue to reduce depth compression in VR.
24
Machowska W, Cych P, Siemieński A, Migasiewicz J. Effect of orienteering experience on walking and running in the absence of vision and hearing. PeerJ 2019; 7:e7736. [PMID: 31579610 PMCID: PMC6766364 DOI: 10.7717/peerj.7736] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2019] [Accepted: 08/25/2019] [Indexed: 12/02/2022] Open
Abstract
Purpose: This study aimed to examine differences between track and field (T&F) runners and foot orienteers (Foot-O) in walking and running tests performed in the absence of vision and hearing. We attempted to determine whether experienced foot orienteers are better able to maintain an indicated direction than track and field runners. Methods: The study examined 11 Foot-O and 11 T&F runners. It consisted of an interview, a field experiment of walking and running in a straight line in the absence of vision and hearing, and coordination skills tests. Results: Participants moved straight for a minimum of 20 m and a maximum of 40 m in the walking test, and a minimum of 20 m and a maximum of 125 m in the running test, after which they began to move in a circle. Significant differences between groups were found for the distance covered by walking; differences between sexes were documented for the distance covered by running and for angular deviations. No relationship between lateralization and the tendency to veer was found. Differences between the Foot-O and T&F groups were observed in coordination abilities. Conclusions: Participants moved in circles irrespective of the type of movement and their experience in the sport. Orienteers may use information about their tendency to turn more often left or right to correct for it during races in dense forests with limited visibility or during night orienteering competitions.
Affiliation(s)
- Weronika Machowska
- Department of Sports Didactics, University School of Physical Education in Wrocław, Wrocław, Lower Silesia, Poland
- Piotr Cych
- Department of Sports Didactics, University School of Physical Education in Wrocław, Wrocław, Lower Silesia, Poland
- Adam Siemieński
- Department of Biomechanics, University School of Physical Education in Wrocław, Wrocław, Lower Silesia, Poland
- Juliusz Migasiewicz
- Department of Sports Didactics, University School of Physical Education in Wrocław, Wrocław, Lower Silesia, Poland

25
Burkitt JJ, Campos JL, Lyons JL. Iterative Spatial Updating During Forward Linear Walking Revealed Using a Continuous Pointing Task. J Mot Behav 2019; 52:145-166. [PMID: 30982465 DOI: 10.1080/00222895.2019.1599807] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Abstract
The continuous pointing task uses target-directed pointing responses to determine how perceived distance traveled is estimated during forward linear walking. To examine this online process more precisely, the current study measured upper-extremity joint angles and step-cycle kinematics during full-vision and no-vision continuous pointing movements. Results show perceptual underestimation of traveled distance in no-vision trials compared with full-vision trials. Additionally, parsing the shoulder plane-of-elevation trajectories revealed discontinuities that reflected this perceptual underestimation and that were most frequently coupled with the early portion of the right-foot swing phase of the step cycle. This suggests that spatial updating may be composed of discrete iterations associated with gait parameters.
Affiliation(s)
- James J Burkitt
- Department of Kinesiology, McMaster University, Hamilton, ON, Canada; Faculty of Health Sciences, University of Ontario Institute of Technology, Oshawa, ON, Canada
- Jennifer L Campos
- Toronto Rehabilitation Institute - University Health Network, Toronto, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada
- James L Lyons
- Department of Kinesiology, McMaster University, Hamilton, ON, Canada

26
Abstract
An experiment was conducted to evaluate the ability of 28 younger and older adults to visually bisect distances in depth both indoors and outdoors; half of the observers were male and half were female. Observers viewed 15-m and 30-m distance extents in four different environmental settings (two outdoor grassy fields and an indoor hallway and atrium) and were required to adjust the position of a marker to place it at the midpoint of each stimulus distance interval. Overall, the observers' judgments were more accurate indoors than outdoors. In outdoor environments, many individual observers exhibited perceptual compression of farther distances (e.g., these observers placed the marker closer than the actual physical midpoints of the stimulus distance intervals). There were significant modulatory effects of both age and sex upon the accuracy and precision of the observers' judgments. The judgments of the male observers were more accurate than those of the female observers and they were less influenced by environmental context. In addition, the accuracies of the younger observers' judgments were less influenced by context than those of the older observers. With regard to the precision of the observers' judgments, the older females exhibited much more variability across repeated judgments than the other groups of observers (younger males, younger females, and older males). The results of our study demonstrate that age and sex are important variables that significantly affect the visual perception of distance.
27
Going the distance and beyond: simulated low vision increases perception of distance traveled during locomotion. PSYCHOLOGICAL RESEARCH 2018; 83:1349-1362. [PMID: 29680863 DOI: 10.1007/s00426-018-1019-2] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2018] [Accepted: 04/13/2018] [Indexed: 10/17/2022]
Abstract
In a series of experiments, we tested the hypothesis that severely degraded viewing conditions during locomotion distort the perception of distance traveled. Some research suggests that there is little-to-no systematic error in perceiving closer distances from a static viewpoint with severely degraded acuity and contrast sensitivity (which we will refer to as blur). However, several related areas of research-extending across domains of perception, attention, and spatial learning-suggest that degraded acuity and contrast sensitivity would affect estimates of distance traveled during locomotion. In a first experiment, we measured estimations of distance traveled in a real-world locomotion task and found that distances were overestimated with blur compared to normal vision using two measures: verbal reports and visual matching (Experiments 1 a, b, and c). In Experiment 2, participants indicated their estimate of the length of a previously traveled path by actively walking an equivalent distance in a viewing condition that either matched their initial path (e.g., blur/blur) or did not match (e.g., blur/normal). Overestimation in blur was found only when participants learned the path in blur and made estimates in normal vision (not in matched blur learning/judgment trials), further suggesting a reliance on dynamic visual information in estimates of distance traveled. In Experiment 3, we found evidence that perception of speed is similarly affected by the blur vision condition, showing an overestimation in perception of speed experienced in wheelchair locomotion during blur compared to normal vision. Taken together, our results demonstrate that severely degraded acuity and contrast sensitivity may increase people's tendency to overestimate perception of distance traveled, perhaps because of an increased perception of speed of self-motion.
28
Adams H, Narasimham G, Rieser J, Creem-Regehr S, Stefanucci J, Bodenheimer B. Locomotive Recalibration and Prism Adaptation of Children and Teens in Immersive Virtual Environments. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2018. [PMID: 29543159 DOI: 10.1109/tvcg.2018.2794072] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/05/2023]
Abstract
As virtual reality expands in popularity, an increasingly diverse audience is gaining exposure to immersive virtual environments (IVEs). A significant body of research has demonstrated how perception and action work in such environments, but most of this work has been done studying adults. Less is known about how physical and cognitive development affect perception and action in IVEs, particularly as applied to preteen and teenage children. Accordingly, in the current study we assess how preteens (children aged 8-12 years) and teenagers (children aged 15-18 years) respond to mismatches between their motor behavior and the visual information presented by an IVE. Over two experiments, we evaluate how these individuals recalibrate their actions across functionally distinct systems of movement. The first experiment analyzed forward walking recalibration after exposure to an IVE with either increased or decreased visual flow. Visual flow during normal bipedal locomotion was manipulated to be either twice or half as fast as the physical gait. The second experiment leveraged a prism throwing adaptation paradigm to test the effect of recalibration on throwing movement. In the first experiment, our results show no differences across age groups, although subjects generally experienced a post-exposure effect of shortened distance estimation after experiencing visually faster flow and longer distance estimation after experiencing visually slower flow. In the second experiment, subjects generally showed the typical prism adaptation behavior of a throwing after-effect error. The error lasted longer for preteens than older children. Our results have implications for the design of virtual systems with children as a target audience.
29
Hecht H, Ramdohr M, von Castell C. Underestimation of large distances in active and passive locomotion. Exp Brain Res 2018; 236:1603-1609. [PMID: 29582108 DOI: 10.1007/s00221-018-5245-z] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2017] [Accepted: 03/22/2018] [Indexed: 11/29/2022]
Abstract
Our ability to estimate distances, be it verbally or by locomotion, is exquisite at close range (action space). At distances above 100 m (vista space), verbal estimates continue to be quite accurate, whereas locomotor estimates have been found to be grossly underestimated. Until now, however, the latter have been performed on a treadmill, which might not translate to real-world walking. We investigated if the motor underestimation found on the treadmill holds up in a natural environment. Observers viewed pictures of objects at distances between 10 and 245 m and were asked to reproduce these distances in a blindfolded walking task (using passive movement or an active production method). Active and passive locomotor judgments underestimated far distances above 100 m. We conclude that underestimation of large distances does not depend on the medium (treadmill vs. real-world) but rather on the sensory modality and effort involved in the task.
Affiliation(s)
- Heiko Hecht
- Psychologisches Institut, Johannes Gutenberg-Universität Mainz, Wallstraße 3, 55122 Mainz, Germany
- Max Ramdohr
- Psychologisches Institut, Johannes Gutenberg-Universität Mainz, Wallstraße 3, 55122 Mainz, Germany
- Christoph von Castell
- Psychologisches Institut, Johannes Gutenberg-Universität Mainz, Wallstraße 3, 55122 Mainz, Germany

30
Abstract
When walking to intercept a moving target, people take an interception path that appears to anticipate the target's trajectory. According to the constant bearing strategy, the observer holds the bearing direction of the target constant based on current visual information, consistent with on-line control. Alternatively, the interception path might be based on an internal model of the target's motion, known as model-based control. To investigate these two accounts, participants walked to intercept a moving target in a virtual environment. We degraded the target's visibility by blurring the target to varying degrees in the midst of a trial, in order to influence its perceived speed and position. Reduced levels of visibility progressively impaired interception accuracy and precision; total occlusion impaired performance most and yielded nonadaptive heading adjustments. Thus, performance strongly depended on current visual information and deteriorated qualitatively when it was withdrawn. The results imply that locomotor interception is normally guided by current information rather than an internal model of target motion, consistent with on-line control.
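The constant bearing strategy lends itself to a compact simulation. The sketch below (illustrative values only; it does not reproduce the paper's walking task) chooses, at every step, the velocity that nulls the rotation of the target's bearing, i.e. a collision course recomputed from current information, which is the on-line control idea the abstract supports.

```python
import math

def constant_bearing_velocity(agent, target, target_vel, speed):
    """Velocity of magnitude `speed` that holds the target's bearing constant:
    match the target's velocity component perpendicular to the line of sight,
    and spend the remaining speed closing along it (assumes speed > |perp|)."""
    dx, dy = target[0] - agent[0], target[1] - agent[1]
    d = math.hypot(dx, dy)
    ux, uy = dx / d, dy / d                      # line-of-sight unit vector
    px, py = -uy, ux                             # perpendicular unit vector
    perp = target_vel[0] * px + target_vel[1] * py
    along = math.sqrt(speed ** 2 - perp ** 2)    # remaining speed closes the gap
    return (along * ux + perp * px, along * uy + perp * py)

# A walker at 1.2 m/s intercepts a target crossing at 0.6 m/s.
agent, target, tvel, dt = [0.0, 0.0], [2.0, 5.0], (0.6, 0.0), 0.05
intercepted = False
for _ in range(300):
    if math.hypot(target[0] - agent[0], target[1] - agent[1]) < 0.1:
        intercepted = True
        break
    vx, vy = constant_bearing_velocity(agent, target, tvel, speed=1.2)
    agent[0] += vx * dt
    agent[1] += vy * dt
    target[0] += tvel[0] * dt
    target[1] += tvel[1] * dt
```

Because the perpendicular component of relative velocity is zeroed at every step, the line of sight never rotates and the walker closes along a straight relative path; degrading the target's visibility, as in the experiment, corresponds to corrupting the inputs to this per-step computation.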
Affiliation(s)
- Huaiyong Zhao
- Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, RI, USA. Current affiliation: Department of Psychology, Technical University Darmstadt, Darmstadt, Hesse, Germany
- William H Warren
- Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, RI, USA

31
Direct-location versus verbal report methods for measuring auditory distance perception in the far field. Behav Res Methods 2017; 50:1234-1247. [PMID: 28786043 DOI: 10.3758/s13428-017-0939-x] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
In this study we evaluated whether a direct-location method is an appropriate response method for measuring auditory distance perception of far-field sound sources. We designed an experimental set-up that allows participants to indicate the distance at which they perceive a sound source by moving a visual marker. We termed this method Cross-Modal Direct Location (CMDL), since the response procedure involves the visual modality while the stimulus is presented through the auditory modality. Three experiments were conducted with sound sources located 1 to 6 m away. The first compared the perceived distances obtained using either the CMDL device or verbal report (VR), the response method most frequently used for reporting auditory distance in the far field, and found differences in response compression and bias. In Experiment 2, participants' visual distance estimates to the marker were found to be highly accurate; we then asked the same participants to report VR estimates of auditory distance and found that the spatial visual information obtained from the previous task did not influence their reports. Finally, Experiment 3 compared the same responses as Experiment 1 but with the methods interleaved, revealing a weak but complex mutual influence; the estimates obtained with each method nevertheless remained statistically different. Our results show that auditory distance psychophysical functions obtained with the CMDL method are less susceptible to the previously reported underestimation of distances over 2 m.
32
Piekarski S, Lajoie Y, Paquet N. Effect of Transient Perturbations of Short-Term Memory on Target-Directed Blind Locomotion. J Mot Behav 2017. [PMID: 28632102 DOI: 10.1080/00222895.2016.1271301] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
It is difficult to walk without vision to a nearby destination when there is a time delay between viewing the destination and walking toward it. Indeed, path deviation occurs when delays are introduced before initiating straight-ahead blindfolded walking (R. A. Tyrrell, K. K. Rudolph, B. G. Eggers, & H. W. Leibowitz, 1993). The questions addressed here are whether the location of a 60-s delay along the walking path, and whether performing a cognitive task during the delay, influence the accuracy of reaching a previously seen target while walking without vision. Thirty young adults walked blindfolded and stopped when they believed they had reached a target 8 m away. Delays lasted 60 s, were located at 0, 4, or 7 m, and involved either waiting or backward counting. Significant differences were found between the 0-m and 4-m delay locations for distance to target, distance travelled, and path deviation (p < .05). A significant effect of backward counting during the 60-s delay was found at the 0-m location for distance travelled (p < .05). The interaction between retaining visual guidance information for 60 s and performing a cognitive task likely influenced target-directed blind navigation.
Affiliation(s)
- Sarah Piekarski, School of Interdisciplinary Health Sciences, Faculty of Health Sciences, University of Ottawa, Canada
- Yves Lajoie, School of Human Kinetics, Faculty of Health Sciences, University of Ottawa, Canada
- Nicole Paquet, School of Human Kinetics and School of Rehabilitation Sciences, Faculty of Health Sciences, University of Ottawa, Canada
33
Abstract
Attention readily facilitates the detection and discrimination of objects, but it is not known whether it helps to form the vast volume of visual space that contains the objects and where actions are implemented. Conventional wisdom suggests not, given the effortless ease with which we perceive three-dimensional (3D) scenes on opening our eyes. Here, we show evidence to the contrary. In Experiment 1, the observer judged the location of a briefly presented target, placed either on the textured ground or ceiling surface. Judged location was more accurate for a target on the ground, provided that the ground was visible and that the observer directed attention to the lower visual field, not the upper field. This reveals that attention facilitates space perception with reference to the ground. Experiment 2 showed that judged location of a target in mid-air, with both ground and ceiling surfaces present, was more accurate when the observer directed their attention to the lower visual field; this indicates that the attention effect extends to visual space above the ground. These findings underscore the role of attention in anchoring visual orientation in space, which is arguably a primal event that enhances one's ability to interact with objects and surface layouts within the visual space. The fact that the effect of attention was contingent on the ground being visible suggests that our terrestrial visual system is best served by its ecological niche.
Affiliation(s)
- Liu Zhou, Key Laboratory of Brain Functional Genomics (MOE & STCSM), Institute of Cognitive Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
- Chenglong Deng, Key Laboratory of Brain Functional Genomics (MOE & STCSM), Institute of Cognitive Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
- Teng Leng Ooi, College of Optometry, The Ohio State University, Columbus, Ohio 43210, USA
- Zijiang J He, Key Laboratory of Brain Functional Genomics (MOE & STCSM), Institute of Cognitive Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China; Department of Psychological and Brain Sciences, University of Louisville, Louisville, Kentucky 40292, USA; CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China
34
Morard MD, Besson D, Laroche D, Naaïm A, Gremeaux V, Casillas JM. Fixed-distance walk tests at comfortable and fast speed: Potential tools for the functional assessment of coronary patients? Ann Phys Rehabil Med 2016; 60:13-19. [PMID: 27915207] [DOI: 10.1016/j.rehab.2016.11.001]
Abstract
OBJECTIVES There is ambiguity concerning the walk tests available for the functional assessment of coronary patients, particularly regarding walking speed. This study explores the psychometric properties of fixed-distance walking tests at comfortable and fast velocity in stabilized patients at the end of a cardiac rehabilitation program. METHODS At a three-day interval, 58 coronary patients (mean age 64.85±6.03 years; 50 men) performed three walk tests: the first two at a comfortable speed in random order (the 6-minute walk test, 6MWT, and the 400-metre comfortable walk test, 400mCWT) and the third at a brisk speed (the 200-metre fast walk test, 200mFWT). A modified Bruce treadmill test was performed at the end of the second session. The main monitored parameters were heart rate, walking velocity, and VO2. RESULTS Tolerance of the three tests was satisfactory. The reliability of the main parameters was good (intraclass correlation coefficient > 0.8). The VO2 for the 6MWT and 400mCWT did not differ significantly (P=0.33), and both were lower than the first ventilatory threshold determined by the stress test (P<0.001): 16.2±3.0 vs. 16.5±2.6 vs. 20.7±5.1 mL·min-1·kg-1, respectively. The VO2 of the 200mFWT (20.2±3.7) did not differ from the first ventilatory threshold. CONCLUSIONS The 400mCWT and 200mFWT are feasible, well tolerated, and reliable. They explore two levels of effort intensity (lower than, and not different from, the first ventilatory threshold, respectively). The 400mCWT is a possible alternative to the 6MWT; combined with the 200mFWT, it should allow better measurement of physical capacity and better customization of exercise training.
Affiliation(s)
- Marie-Doriane Morard, CIC INSERM 1432, Plateforme d'Investigation Technologique, CHU de Dijon, Dijon, France; Cardiac Rehabilitation Department, University Hospital of Dijon, Dijon, France
- Delphine Besson, CIC INSERM 1432, Plateforme d'Investigation Technologique, CHU de Dijon, Dijon, France; Cardiac Rehabilitation Department, University Hospital of Dijon, Dijon, France
- Davy Laroche, CIC INSERM 1432, Plateforme d'Investigation Technologique, CHU de Dijon, Dijon, France
- Alexandre Naaïm, CIC INSERM 1432, Plateforme d'Investigation Technologique, CHU de Dijon, Dijon, France
- Vincent Gremeaux, CIC INSERM 1432, Plateforme d'Investigation Technologique, CHU de Dijon, Dijon, France; INSERM U1093, Cognition, Action, Plasticité Sensori-motrice, Dijon, France; Cardiac Rehabilitation Department, University Hospital of Dijon, Dijon, France
- Jean-Marie Casillas, CIC INSERM 1432, Plateforme d'Investigation Technologique, CHU de Dijon, Dijon, France; INSERM U1093, Cognition, Action, Plasticité Sensori-motrice, Dijon, France; Cardiac Rehabilitation Department, University Hospital of Dijon, Dijon, France
35
James KR, Caird JK. The Effects of Optic Flow, Proprioception, and Texture on Novice Locomotion in Virtual Environments. ACTA ACUST UNITED AC 2016. [DOI: 10.1177/154193129503902110]
Abstract
The ability of a user to move to different locations within a virtual environment (VE) is a fundamental action that subserves exploration and manipulation. By empirical analogy, the perceptual information used to locomote to a target within a virtual environment is compared with the perceptual information used to walk to a location in the real world. An experiment is reported in which participants moved as accurately as possible to a location within a VE where a target object was presented. The amount of visual feedback available to participants was manipulated across three conditions: static viewing of the target and virtual environment before locomotion, disappearance of the target object as movement toward it was initiated, and locomotion to the target while both object and environment remained present. In addition, the virtual environments were composed of either textured or plain polygonal surfaces. Error measures indicated that users locomote within VEs with less accuracy than people who walk blindfolded in the real world. Texture had its largest effect on movement accuracy when optic flow was not available, that is, for static estimates of distance. Discussion centers on the relative contributions of visual, cognitive, and proprioceptive information to the accuracy of VE user movement.
Affiliation(s)
- K. R. James, Department of Psychology, University of Calgary, Calgary, Alberta
- J. K. Caird, Department of Psychology, University of Calgary, Calgary, Alberta
36
Zhou L, Ooi TL, He ZJ. Intrinsic spatial knowledge about terrestrial ecology favors the tall for judging distance. Sci Adv 2016; 2:e1501070. [PMID: 27602402] [PMCID: PMC5007070] [DOI: 10.1126/sciadv.1501070]
Abstract
Our sense of vision reliably directs and guides our everyday actions, such as reaching and walking. This ability is especially fascinating because the optical images of natural scenes that project into our eyes are insufficient to adequately form a perceptual space. It has been proposed that the brain makes up for this inadequacy by using its intrinsic spatial knowledge. However, it is unclear what constitutes intrinsic spatial knowledge and how it is acquired. We investigated this question and found evidence of an ecological basis, which uses the statistical spatial relationship between the observer and the terrestrial environment, namely, the ground surface. We found that in dark and reduced-cue environments, where intrinsic knowledge makes a greater contribution, perceived target location is more accurate when referenced to the ground than to the ceiling. Furthermore, taller observers localized the target more accurately. Superior performance by taller observers was also found in the full-cue environment, even when we compensated for observers' heights by having taller observers sit on a chair and shorter observers stand on a box. This finding dovetails with the prediction of the ecological hypothesis for intrinsic spatial knowledge. It suggests that an individual's accumulated lifetime experience of being tall, and his or her constant interactions with ground-based objects, not only shapes intrinsic spatial knowledge but also confers an advantage in spatial ability in the intermediate distance range.
Affiliation(s)
- Liu Zhou, Key Laboratory of Brain Functional Genomics (Ministry of Education and Science and Technology Commission of Shanghai Municipality), Institute of Cognitive Neurosciences, School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
- Teng Leng Ooi, College of Optometry, Ohio State University, Columbus, OH 43210, USA
- Zijiang J. He, Key Laboratory of Brain Functional Genomics (Ministry of Education and Science and Technology Commission of Shanghai Municipality), Institute of Cognitive Neurosciences, School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China; Department of Psychological and Brain Sciences, University of Louisville, Louisville, KY 40292, USA
37
Lappi O. Eye movements in the wild: Oculomotor control, gaze behavior & frames of reference. Neurosci Biobehav Rev 2016; 69:49-68. [PMID: 27461913] [DOI: 10.1016/j.neubiorev.2016.06.006]
Abstract
Understanding the brain's capacity to encode complex visual information from a scene and to transform it into a coherent perception of 3D space and into well-coordinated motor commands is among the outstanding questions in the study of integrative brain function. Eye-movement methodologies have allowed us to begin addressing these questions in increasingly naturalistic tasks, where eye and body movements are ubiquitous and where, therefore, the applicability of most traditional neuroscience methods is restricted. This review explores foundational issues in (1) how oculomotor and motor control in lab experiments extrapolates to more complex settings and (2) how real-world gaze behavior in turn decomposes into more elementary eye-movement patterns. We review the received typology of oculomotor patterns in laboratory tasks and how they map (or fail to map) onto naturalistic gaze behavior. We discuss the multiple coordinate systems needed to represent visual gaze strategies, how the choice of reference frame affects the description of eye movements, and the related but conceptually distinct issue of coordinate transformations between internal representations within the brain.
Affiliation(s)
- Otto Lappi, Cognitive Science, Institute of Behavioural Sciences, PO Box 9, 00014 University of Helsinki, Finland
38
He ZJ, Wu B, Ooi TL, Yarbrough G, Wu J. Judging Egocentric Distance on the Ground: Occlusion and Surface Integration. Perception 2004; 33:789-806. [PMID: 15460507] [DOI: 10.1068/p5256a]
Abstract
On the basis of the finding that a common and homogeneous ground surface is vital for accurate egocentric distance judgments (Sinai et al., 1998, Nature, 395, 497-500), we propose a sequential-surface-integration-process (SSIP) hypothesis to elucidate how the visual system constructs a representation of the ground surface in the intermediate distance range. According to the SSIP hypothesis, a near ground-surface representation is formed from near depth cues and is used as an anchor for integrating more distant surfaces, with texture-gradient information as the depth cue. The SSIP hypothesis explains the finding that egocentric distance is underestimated when a texture boundary exists on the ground surface that commonly supports the observer and target. We tested the prediction that the fidelity of the visually represented ground-surface reference frame depends on how the visual system selects the surface information for integration. Specifically, if information is selected along a direct route between observer and target where the ground surface is disrupted by an occluding object, the ground surface will be represented inaccurately. In experiments 1-3 we used a perceptual task and two different visually directed tasks to show that this leads to egocentric distance underestimation. Judgment is accurate, however, when the observer selects the continuous ground information bypassing the occluding object (indirect route), as found in experiments 4 and 5 with a visually directed task. Altogether, our findings support the SSIP hypothesis and reveal, surprisingly, that phenomenal visual space is not unique but depends on how optic information is selected.
Affiliation(s)
- Zijiang J He, Department of Psychological and Brain Sciences, University of Louisville, Louisville, KY 40292, USA
39
Durgin FH, Gigone K. Enhanced Optic Flow Speed Discrimination While Walking: Contextual Tuning of Visual Coding. Perception 2007; 36:1465-1475. [PMID: 18265829] [DOI: 10.1068/p5845]
Abstract
We tested the hypothesis that long-term adaptation to the normal contingencies between walking and its multisensory consequences (including optic flow) leads to enhanced discrimination of appropriate visual speeds during self-motion. In experiments 1 (task 1) and 2 a two-interval forced-choice procedure was used to compare the perceived speed of a simulated visual flow field viewed while walking with the perceived speed of a flow field viewed while standing. Both experiments demonstrated subtractive reductions in apparent speed. In experiments 1 and 3 discrimination thresholds were measured for optic flow speed while walking and while standing. Consistent with the optimal-coding hypothesis, speed discrimination for visual speeds near walking speed was enhanced during walking. Reduced sensitivity was found for slower visual speeds. The multisensory context of walking alters the coding of optic flow in a way that enhances speed discrimination in the expected range of flow speeds.
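Speed-discrimination thresholds of the kind reported above are typically estimated with an adaptive procedure. The sketch below is a generic 1-up-2-down staircase run against a simulated observer; it is an illustration of the general method, not the authors' actual procedure, and `BASE`, `TRUE_JND`, and the logistic observer model are assumptions of mine.

```python
import math
import random
import statistics

random.seed(1)

BASE = 1.5       # standard optic-flow speed in m/s (hypothetical)
TRUE_JND = 0.15  # simulated observer's true sensitivity parameter (m/s)

def observer_says_faster(delta):
    """Simulated observer: probability of correctly judging the comparison
    interval as faster grows with the speed increment delta (logistic)."""
    p = 1.0 / (1.0 + math.exp(-delta / (TRUE_JND / 2)))
    return random.random() < p

def staircase(n_reversals=12, start=0.6, step=0.05):
    """1-up-2-down staircase: two correct responses lower the increment,
    one error raises it; converges near the ~70.7%-correct point."""
    delta, correct_in_row, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if observer_says_faster(delta):
            correct_in_row += 1
            if correct_in_row == 2:          # two correct -> harder trial
                correct_in_row = 0
                if direction == +1:          # was going up: reversal
                    reversals.append(delta)
                direction = -1
                delta = max(step, delta - step)
        else:                                # one error -> easier trial
            correct_in_row = 0
            if direction == -1:              # was going down: reversal
                reversals.append(delta)
            direction = +1
            delta += step
    return statistics.mean(reversals[2:])    # discard early reversals

threshold = staircase()
```

A within-subjects design would run such a staircase once while the observer walks and once while standing, then compare the two threshold estimates.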
Affiliation(s)
- Frank H Durgin, Department of Psychology, Swarthmore College, 500 College Avenue, Swarthmore, PA 19081, USA
- Krista Gigone, Department of Psychology, Swarthmore College, 500 College Avenue, Swarthmore, PA 19081, USA
40
Ooi TL, Wu B, He ZJ. Perceptual Space in the Dark Affected by the Intrinsic Bias of the Visual System. Perception 2006; 35:605-624. [PMID: 16836053] [DOI: 10.1068/p5492]
Abstract
Correct judgment of egocentric/absolute distance in the intermediate distance range requires that both the angular declination below the horizon and ground-surface information be represented accurately. This requirement can be met in a lit environment but not in the dark, where the ground surface is invisible and hence cannot be represented accurately. We previously showed that a target in the dark is judged to lie at the intersection of the projection line from the eye to the target, which defines the angular declination below the horizon, and an implicit surface. The implicit surface can be approximated as a slanted surface whose far end is slanted toward the frontoparallel plane. We hypothesize that this implicit slant surface reflects the intrinsic bias of the visual system and helps to define perceptual space. Accordingly, we conducted two experiments in the dark to further elucidate the characteristics of the implicit slant surface. In the first experiment, we measured the egocentric location of a dimly lit target on, or above, the ground using the blind-walking-gesturing paradigm. Our results reveal that the judged target locations could be fitted by a line (surface), indicating an intrinsic bias with a geographical slant of about 12.4°. In the second experiment, with an exocentric/relative-distance task, we measured the judged ratio of aspect ratio of a fluorescent L-shaped target. Using trigonometric analysis, we found that the judged ratio of aspect ratio can be accounted for by assuming that the L-shaped target was perceived to lie on an implicit slanted surface with an average geographical slant of 14.4°. That data from two different tasks can be fitted by implicit slanted surfaces suggests that the intrinsic bias has a role in determining perceived space in the dark. The possible contribution of the intrinsic bias to representing the ground surface, and its impact on space perception in a lit environment, are also discussed.
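The intersection rule described in this abstract can be written out with elementary geometry. The following is an illustrative reconstruction under my own assumptions (eye height h, angular declination δ, implicit surface rising from the observer's feet at slant β), not the authors' published derivation:

```latex
% Height of the line of sight at horizontal distance d:
%   y_{\mathrm{sight}}(d) = h - d\tan\delta
% Height of the implicit surface rising from the feet at slant \beta:
%   y_{\mathrm{surf}}(d) = d\tan\beta
% The target is judged at their intersection:
\[
h - d\tan\delta = d\tan\beta
\quad\Longrightarrow\quad
d = \frac{h}{\tan\delta + \tan\beta}.
\]
% With \beta \approx 12.4^\circ\text{--}14.4^\circ this gives
% d < h/\tan\delta, i.e., a shorter judged distance than for a
% veridical target on the flat ground.
```
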
Affiliation(s)
- Teng Leng Ooi, Department of Basic Sciences, Pennsylvania College of Optometry, 8360 Old York Road, Elkins Park, PA 19027, USA
41
Wu J, He ZJ, Ooi TL. Visually Perceived Eye Level and Horizontal Midline of the Body Trunk Influenced by Optic Flow. Perception 2005; 34:1045-1060. [PMID: 16245484] [DOI: 10.1068/p5416]
Abstract
The eye level and the horizontal midline of the body trunk can serve, respectively, as references for judging vertical and horizontal egocentric directions. We investigated whether the optic-flow pattern, the dynamic motion information generated when one moves through the visual world, can be used by the visual system to determine and calibrate these two references. Using a virtual-reality setup to generate the optic-flow pattern, we showed that the judged elevation of the eye level and the judged azimuth of the horizontal midline of the body trunk are biased toward the position of the focus of expansion (FOE) of the optic-flow pattern. Furthermore, for the vertical reference, prolonged viewing of an optic-flow pattern with a lowered FOE causes not only a lowered judged eye level after removal of the optic-flow pattern, but also an overestimation of distance in the dark. This is equivalent to a reduction in the judged angular declination of the object after adaptation, indicating that optic-flow information also plays a role in calibrating the extraretinal signals used to establish the vertical reference.
Affiliation(s)
- Jun Wu, Department of Psychological and Brain Sciences, University of Louisville, Louisville, KY 40292, USA
42
Paquet N, Rainville C, Lajoie Y, Tremblay F. Reproducibility of Distance and Direction Errors Associated with Forward, Backward, and Sideway Walking in the Context of Blind Navigation. Perception 2007; 36:525-536. [PMID: 17564199] [DOI: 10.1068/p5532]
Abstract
The ability to navigate without vision towards a previously seen target has been extensively studied, but its reliability over time has yet to be established. Our aims were to determine the distance and direction errors made during blind navigation across four walking directions involving three gait patterns (stepping forward, stepping sideway, and stepping backward), and to establish the test–retest reproducibility of these errors. Twenty young healthy adults participated in two testing sessions separated by 7 days. They were shown targets located 8 m ahead, 8 m behind, and 8 m to their right and left. With vision occluded by opaque goggles, they walked forward (target ahead), backward (target behind), or sideway (right and left targets) until they perceived themselves to be on the target. Subjects were not given feedback about their performance. Walked distance, angular deviation, and body rotation were measured. The mean estimated distance error was similar across the four walking directions and ranged from 16 to 80 cm with respect to the 8 m target. In contrast, direction errors were significantly larger during sideway navigation (walking in the frontal plane: leftward, 10° ± 15° deviation; rightward, 18° ± 13°) than during forward and backward navigation (walking in the sagittal plane). In general, distance and direction errors were only moderately reproducible between the two sessions [intraclass correlation coefficients (ICCs) ranging from 0.682 to 0.705]. Among the four directions, rightward navigation showed the best reproducibility, with ICCs ranging from 0.607 to 0.726, and backward navigation the worst, with ICCs ranging from 0.094 to 0.554. These findings indicate that errors associated with blind navigation in different walking directions and with different gait patterns are only moderately to poorly reproducible on repeated testing, especially for walking backward. The biomechanical constraints and increased cognitive load imposed by changing the walking pattern to backward stepping may underlie the poor reproducibility in this direction.
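The test–retest ICCs reported above can be illustrated with a minimal computation. The sketch below implements the standard two-way random-effects, absolute-agreement, single-measure ICC(2,1) from ANOVA mean squares; the data values are hypothetical, not the study's.

```python
def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measure,
    computed from the classic ANOVA mean squares."""
    n = len(data)        # subjects (rows)
    k = len(data[0])     # sessions (columns), e.g. test and retest
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    sse = sum((data[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical distance errors (m) in session 1 vs. session 2.
perfect = [[1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [4.0, 4.0]]  # identical -> ICC = 1
noisy   = [[1.0, 1.3], [2.0, 1.8], [3.0, 3.4], [4.0, 3.7]]  # retest noise -> ICC < 1
```

Values around 0.6 to 0.7, as reported for most directions above, indicate moderate agreement between sessions; the 0.094 found for backward navigation indicates almost no session-to-session consistency.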
Affiliation(s)
- Nicole Paquet, School of Rehabilitation Sciences, University of Ottawa, 451 Smyth Road, Ottawa, Ontario K1H 8M5, Canada
43
Abstract
Perception informs people about the opportunities for action and their associated costs. To this end, explicit awareness of spatial layout varies not only with relevant optical and ocular-motor variables, but also as a function of the costs associated with performing intended actions. Although explicit awareness is mutable in this respect, visually guided actions directed at the immediate environment are not. When the metabolic costs associated with walking an extent increase—perhaps because one is wearing a heavy backpack—hills appear steeper and distances to targets appear greater. When one is standing on a high balcony, the apparent distance to the ground is correlated with one's fear of falling. Perceiving spatial layout combines the geometry of the world with behavioral goals and the costs associated with achieving these goals.
44
Legge GE, Gage R, Baek Y, Bochsler TM. Indoor Spatial Updating with Reduced Visual Information. PLoS One 2016; 11:e0150708. [PMID: 26943674] [PMCID: PMC4778963] [DOI: 10.1371/journal.pone.0150708]
Abstract
Purpose Spatial updating refers to the ability to keep track of position and orientation while moving through an environment. People with impaired vision may be less accurate in spatial updating with adverse consequences for indoor navigation. In this study, we asked how artificial restrictions on visual acuity and field size affect spatial updating, and also judgments of the size of rooms. Methods Normally sighted young adults were tested with artificial restriction of acuity in Mild Blur (Snellen 20/135) and Severe Blur (Snellen 20/900) conditions, and a Narrow Field (8°) condition. The subjects estimated the dimensions of seven rectangular rooms with and without these visual restrictions. They were also guided along three-segment paths in the rooms. At the end of each path, they were asked to estimate the distance and direction to the starting location. In Experiment 1, the subjects walked along the path. In Experiment 2, they were pushed in a wheelchair to determine if reduced proprioceptive input would result in poorer spatial updating. Results With unrestricted vision, mean Weber fractions for room-size estimates were near 20%. Severe Blur but not Mild Blur yielded larger errors in room-size judgments. The Narrow Field was associated with increased error, but less than with Severe Blur. There was no effect of visual restriction on estimates of distance back to the starting location, and only Severe Blur yielded larger errors in the direction estimates. Contrary to expectation, the wheelchair subjects did not exhibit poorer updating performance than the walking subjects, nor did they show greater dependence on visual condition. Discussion If our results generalize to people with low vision, severe deficits in acuity or field will adversely affect the ability to judge the size of indoor spaces, but updating of position and orientation may be less affected by visual impairment.
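The Weber fractions for room-size estimates reported above can be made concrete with a minimal calculation; the estimates below are hypothetical numbers, not the study's data.

```python
def weber_fraction(estimates, actual):
    """Mean absolute error of size estimates, as a fraction of actual size."""
    return sum(abs(e - actual) for e in estimates) / (len(estimates) * actual)

# Hypothetical width estimates (metres) for a room 5 m wide.
estimates = [4.2, 5.5, 4.6, 5.9, 4.0]
fraction = weber_fraction(estimates, 5.0)  # -> 0.144, i.e. ~14% average error
```

A fraction near 0.2, as in the unrestricted-vision condition above, means room-size judgments were off by about 20% of the true dimension on average.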
Affiliation(s)
- Gordon E. Legge, Department of Psychology, University of Minnesota, Twin Cities, Minnesota, United States of America
- Rachel Gage, Department of Psychology, University of Minnesota, Twin Cities, Minnesota, United States of America
- Yihwa Baek, Department of Psychology, University of Minnesota, Twin Cities, Minnesota, United States of America
- Tiana M. Bochsler, Department of Psychology, University of Minnesota, Twin Cities, Minnesota, United States of America
45
Pulling out all the stops to make the distance: Effects of effort and optical information in distance perception responses made by rope pulling. Atten Percept Psychophys 2015; 78:685-699. [DOI: 10.3758/s13414-015-1035-x]
46
Philbeck JW, Witt JK. Action-specific influences on perception and postperceptual processes: Present controversies and future directions. Psychol Bull 2015; 141:1120-1144. [PMID: 26501227] [PMCID: PMC4621785] [DOI: 10.1037/a0039738]
Abstract
The action-specific perception account holds that people perceive the environment in terms of their ability to act in it. In this view, for example, decreased ability to climb a hill because of fatigue makes the hill visually appear to be steeper. Though influential, this account has not been universally accepted, and in fact a heated controversy has emerged. The opposing view holds that action capability has little or no influence on perception. Heretofore, the debate has been quite polarized, with efforts largely being focused on supporting one view and dismantling the other. We argue here that polarized debate can impede scientific progress and that the search for similarities between 2 sides of a debate can sharpen the theoretical focus of both sides and illuminate important avenues for future research. In this article, we present a synthetic review of this debate, drawing from the literatures of both approaches, to clarify both the surprising similarities and the core differences between them. We critically evaluate existing evidence, discuss possible mechanisms of action-specific effects, and make recommendations for future research. A primary focus of future work will involve not only the development of methods that guard against action-specific postperceptual effects but also development of concrete, well-constrained underlying mechanisms. The criteria for what constitutes acceptable control of postperceptual effects and what constitutes an appropriately specific mechanism vary between approaches, and bridging this gap is a central challenge for future research.
47
Geuss MN, Stefanucci JK, Creem-Regehr SH, Thompson WB, Mohler BJ. Effect of Display Technology on Perceived Scale of Space. Hum Factors 2015; 57:1235-1247. [PMID: 26060237] [DOI: 10.1177/0018720815590300]
Abstract
OBJECTIVE Our goal was to evaluate the degree to which display technologies influence the perception of size in an image. BACKGROUND Research suggests that factors such as whether an image is displayed stereoscopically, whether a user's viewpoint is tracked, and the field of view of a given display can affect users' perception of scale in the displayed image. METHOD Participants directly estimated the size of a gap by matching the distance between their hands to the gap width, and judged their ability to pass unimpeded through the gap, in one of five common implementations of three display technologies (two head-mounted displays [HMDs] and a back-projection screen). RESULTS Both measures of gap width were similar for the two HMD conditions and for back projection with stereo and tracking. For the displays without tracking, the stereo and monocular conditions differed from each other, with monocular viewing yielding underestimation of size. CONCLUSIONS Display technologies capable of stereoscopic display and tracking of the user's viewpoint are beneficial, as perceived size does not differ from real-world estimates. Evaluations of different display technologies are necessary as display conditions vary and the availability of display technologies continues to grow. APPLICATIONS The findings are relevant to those using display technologies for research, commercial, and training purposes whenever the displayed image must be perceived at its intended scale.
Affiliation(s)
- Michael N Geuss
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Jeanine K Stefanucci
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany; University of Utah, Salt Lake City, Utah
- Sarah H Creem-Regehr
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany; University of Utah, Salt Lake City, Utah
- Betty J Mohler
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
|
48
|
Creem-Regehr SH, Kunz BR. Perception and action. WILEY INTERDISCIPLINARY REVIEWS. COGNITIVE SCIENCE 2015; 1:800-810. [PMID: 26271778 DOI: 10.1002/wcs.82] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
The phrase perception and action is used widely but in diverse ways in the context of the relationship between perceptual and motor processes. This review describes and integrates five perspectives on perception and action that rely on both neurophysiological and behavioral levels of analysis. The two-visual-systems view proposes dissociable but interactive systems for the conscious processing of objects and space and for the visual control of action. The integrative view proposes tightly calibrated but flexible systems for perception and motor control in spatial representation. The embodied view posits that action underlies perception, involving common-coding or motor-simulation systems, and examines the relationship between action observation, imitation, and the understanding of intention. The ecological view emphasizes environmental information and affordances in perception. The functional view defines the relationship between perception, action planning, and semantics in goal-directed actions. Although some of these views differ in significant ways, their shared emphasis on the importance of action in perception serves as a useful unifying framework.
Affiliation(s)
- Benjamin R Kunz
- Department of Psychology, University of Utah, Salt Lake City, UT 84112, USA
|
49
|
Ooi TL, He ZJ. Space perception of strabismic observers in the real world environment. Invest Ophthalmol Vis Sci 2015; 56:1761-8. [PMID: 25698702 PMCID: PMC4358738 DOI: 10.1167/iovs.14-15741] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2014] [Accepted: 02/06/2015] [Indexed: 11/24/2022] Open
Abstract
PURPOSE Space perception beyond the near distance range (>2 m) is important for target localization and for directing and guiding a variety of daily activities, including driving and walking. However, it is unclear whether the absolute (egocentric) localization of a single target in the intermediate distance range requires binocular vision, and if so, whether the subnormal stereopsis associated with strabismus impairs one's ability to localize the target. METHODS We investigated this by measuring the perceived absolute location of a target by observers with normal binocular vision (n = 8; mean age, 24.5 years) and observers with strabismus (n = 8; mean age, 24.9 years) under monocular and binocular conditions. The observers used the blind walking-gesturing task to indicate the judged location of a target placed at various viewing distances (2.73-6.93 m) and heights (0, 30, and 90 cm) above the floor. Near stereopsis was assessed with the Randot Stereotest. RESULTS Both groups of observers accurately judged the absolute distance of the target on the ground (height = 0 cm) under either monocular or binocular viewing. However, when the target was suspended in midair, the normal observers accurately judged target location with binocular viewing, but not with monocular viewing (mean slant angle, 0.8° ± 0.5° vs. 7.4° ± 1.4°; P < 0.001, where a slant angle of 0° represents accurate localization). In contrast, the strabismic observers, who had poorer stereoacuity, made larger errors in target localization under both viewing conditions, though with smaller errors during binocular viewing (mean slant angle, 2.7° ± 0.4° vs. 9.2° ± 1.3°; P < 0.0025). Further analysis revealed that the localization error (slant angle) correlated positively with stereo threshold during binocular viewing (r² = 0.479, P < 0.005), but not during monocular viewing (r² = 0.0002, P = 0.963).
CONCLUSIONS Monocular depth information is sufficient for localizing a single target on the ground, but binocular depth information is required when the target is suspended in midair. Because the absolute binocular disparity information of a single target is weak beyond 2 m, we suggest that the visual system localizes the midair target using the relative binocular disparity between the target and the visible ground surface. Consequently, strabismic observers with residual stereopsis localize such a target more accurately than their counterparts without stereo ability.
Affiliation(s)
- Teng Leng Ooi
- The Ohio State University, Columbus, Ohio, United States
- Zijiang J. He
- University of Louisville, Louisville, Kentucky, United States
|
50
|
Yamamoto N, Meléndez JA, Menzies DT. Homing by path integration when a locomotion trajectory crosses itself. Perception 2015; 43:1049-60. [PMID: 25509682 DOI: 10.1068/p7624] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
Path integration is a process by which navigators derive their current position and orientation by integrating self-motion signals along a locomotion trajectory. It has been suggested that path integration becomes disproportionately erroneous when the trajectory crosses itself. However, this previous finding may have been confounded by the length of the traveled path and the amount of turning experienced along it, two factors known to affect path integration performance. The present study investigated whether the crossover of a locomotion trajectory truly increases path integration errors. In an experiment, blindfolded human navigators were guided along four paths that varied in length and turns, only one of which contained a crossover, and then attempted to walk directly back to the beginning of each path. Errors on the path containing the crossover were not always larger than those observed on the other paths, and the errors were attributable solely to the effects of longer path lengths or greater degrees of turning. These results demonstrate that a path crossover does not always cause significant disruption of path integration. Implications of the present findings for models of path integration are discussed.
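The core computation the abstract describes, integrating self-motion signals (turns and translations) into a homing response, can be sketched as dead reckoning over a sequence of path segments. This is a minimal illustrative model only, not the authors' experimental analysis; the function names and the (turn, distance) encoding of a walked path are assumptions for the example.

```python
import math

def path_integrate(segments):
    """Dead-reckon a position from a sequence of (turn_deg, distance) steps.

    Each step first rotates the navigator's heading by turn_deg
    (counterclockwise, relative to the current heading), then moves it
    forward by distance. Returns the (x, y) position relative to the
    start; walking along the vector from this position back to the
    origin is the ideal homing response.
    """
    heading = 0.0  # radians; 0 = initial facing direction
    x = y = 0.0
    for turn_deg, distance in segments:
        heading += math.radians(turn_deg)
        x += distance * math.cos(heading)
        y += distance * math.sin(heading)
    return x, y

def homing_distance(segments):
    """Straight-line distance from the path's end back to its start."""
    x, y = path_integrate(segments)
    return math.hypot(x, y)
```

For instance, a closed square path such as `[(0, 10), (90, 10), (90, 10), (90, 10)]` yields a homing distance of (numerically) zero, while an L-shaped outbound path leaves a nonzero homing vector. Note that this ideal integrator is error-free regardless of whether the trajectory crosses itself; the study's question is whether human performance degrades beyond what path length and accumulated turning alone predict.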
|