51
Philbeck JW, Woods AJ, Kontra C, Zdenkova P. A comparison of blindpulling and blindwalking as measures of perceived absolute distance. Behav Res Methods 2010; 42:148-60. [PMID: 20160295] [PMCID: PMC2883722] [DOI: 10.3758/brm.42.1.148]
Abstract
Blindwalking has become a common measure of perceived absolute distance and location, but it requires a relatively large testing space and cannot be used with people for whom walking is difficult or impossible. In the present article, we describe an alternative response type that is closely matched to blindwalking in several important respects but is less resource intensive. In the blindpulling technique, participants view a target, then close their eyes and pull a length of tape or rope between the hands to indicate the remembered target distance. As with blindwalking, this response requires integration of cyclical, bilateral limb movements over time. Blindpulling and blindwalking responses are tightly linked across a range of viewing conditions, and blindpulling is accurate when prior exposure to visually guided pulling is provided. Thus, blindpulling shows promise as a measure of perceived distance that may be used in nonambulatory populations and when the space available for testing is limited.
Affiliation(s)
- John W Philbeck
- Department of Psychology, George Washington University, 2125 G Street N.W., Washington, DC 20052, USA.
52
Rietdyk S, Drifmeyer JE. The Rough-Terrain Problem: Accurate Foot Targeting as a Function of Visual Information Regarding Target Location. J Mot Behav 2009; 42:37-48. [DOI: 10.1080/00222890903303309]
53
Saracini C, Franke R, Blümel E, Belardinelli MO. Comparing distance perception in different virtual environments. Cogn Process 2009; 10 Suppl 2:S294-6. [DOI: 10.1007/s10339-009-0314-7]
54
55
Abstract
Distance perception is among the most pervasive mental phenomena and one of the oldest research topics in behavioural science. However, we do not understand well its most pervasive finding: large individual differences. There are also large individual differences in acrophobia (fear of heights), which is commonly assumed to consist of an abnormal fear of normally perceived stimuli. Evolved navigation theory (ENT) instead suggests that acrophobia consists of a more normal fear of abnormally perceived stimuli, and that individual differences in distance perception produce major components of acrophobia. In the present study, acrophobia tested over a broad range predicted large individual differences in distance estimation of surfaces that could produce falls. Fear of heights correlated positively with distance estimates of a vertical surface, even among non-acrophobic individuals at no risk of falling and without knowledge of being tested for acrophobia. Acrophobia score also predicted the magnitude of the descent illusion, which is thought to reflect the risk of falling. These data hold important implications for environmental navigation, clinical aetiology, and the evolution of visual systems.
Affiliation(s)
- Russell E Jackson
- Department of Psychology, California State University, San Marcos, CA 92096, USA.
56
Pothier S, Philbeck J, Chichka D, Gajewski DA. Tachistoscopic exposure and masking of real three-dimensional scenes. Behav Res Methods 2009; 41:107-112. [PMID: 19182129] [PMCID: PMC2883717] [DOI: 10.3758/brm.41.1.107]
Abstract
Although there are many well-known forms of visual cues specifying absolute and relative distance, little is known about how visual space perception develops at small temporal scales. How much time does the visual system require to extract the information in the various absolute and relative distance cues? In this article, we describe a system that may be used to address this issue by presenting brief exposures of real, three-dimensional scenes, followed by a masking stimulus. The system is composed of an electronic shutter (a liquid crystal smart window) for exposing the stimulus scene, and a liquid crystal projector coupled with an electromechanical shutter for presenting the masking stimulus. This system can be used in both full- and reduced-cue viewing conditions, under monocular and binocular viewing, and at distances limited only by the testing space. We describe a configuration that may be used for studying the microgenesis of visual space perception in the context of visually directed walking.
Affiliation(s)
- Stephen Pothier
- Department of Mechanical and Aerospace Engineering, George Washington University, 801 22nd St. N.W., 20052, Washington, DC.
- John Philbeck
- Department of Mechanical and Aerospace Engineering, George Washington University, 801 22nd St. N.W., 20052, Washington, DC
- David Chichka
- Department of Mechanical and Aerospace Engineering, George Washington University, 801 22nd St. N.W., 20052, Washington, DC
- Daniel A Gajewski
- Department of Mechanical and Aerospace Engineering, George Washington University, 801 22nd St. N.W., 20052, Washington, DC
57
Armbrüster C, Wolter M, Kuhlen T, Spijkers W, Fimm B. Depth perception in virtual reality: distance estimations in peri- and extrapersonal space. Cyberpsychol Behav 2008; 11:9-15. [PMID: 18275307] [DOI: 10.1089/cpb.2007.9935]
Abstract
The present study investigated depth perception in virtual environments. Twenty-three participants verbally estimated ten distances between 40 cm and 500 cm in three different virtual environments in two conditions: (1) only one target was presented or (2) ten targets were presented at the same time. Additionally, the presence of a metric aid was varied. A questionnaire assessed subjective ratings about physical complaints (e.g., headache), the experience in the virtual world (e.g., presence), and the experiment itself (self-evaluation of the estimations). Results show that participants underestimated the virtual distances but were able to perceive the distances in the correct metric order, even when only very simple virtual environments were presented. Furthermore, interindividual differences and intraindividual stabilities were found among participants, and neither the three different virtual environments nor the metric aid improved depth estimations. Estimation performance was better in peripersonal than in extrapersonal space. In contrast, subjective ratings indicated a preferred space: a closed room with visible floor, ceiling, and walls.
Affiliation(s)
- C Armbrüster
- Department of Computer Science, University of Applied Science, Sankt Augustin, Germany.
58
Arthur JC, Philbeck JW, Chichka D. Spatial memory enhances the precision of angular self-motion updating. Exp Brain Res 2007; 183:557-68. [PMID: 17684736] [DOI: 10.1007/s00221-007-1075-0]
Abstract
Humans are typically able to keep track of brief changes in their head and body orientation, even when visual and auditory cues are temporarily unavailable. Determining the magnitude of one's displacement from a known location is one form of self-motion updating. Most research on self-motion updating during body rotations has focused on the role of a restricted set of sensory signals (primarily vestibular) available during self-motion. However, humans can and do internally represent spatial aspects of the environment, and little is known about how remembered spatial frameworks may impact angular self-motion updating. Here, we describe an experiment addressing this issue. Participants estimated the magnitude of passive, non-visual body rotations (40°-130°), using non-visual manual pointing. Prior to each rotation, participants were either allowed full vision of the testing environment, or remained blindfolded. Within-subject response precision was dramatically enhanced when the body rotations were preceded by a visual preview of the surrounding environment; constant (signed) and absolute (unsigned) error were much less affected. These results are informative for future perceptual, cognitive, and neuropsychological studies, and demonstrate the powerful role of stored spatial representations for improving the precision of angular self-motion updating.
Affiliation(s)
- Joeanna C Arthur
- Department of Psychology, The George Washington University, 2125 G. Street, NW, Washington, DC 20052, USA.
59
The linear perspective information in ground surface representation and distance judgment. Percept Psychophys 2007; 69:654-72. [PMID: 17929690] [DOI: 10.3758/bf03193769]
60
Richardson AR, Waller D. Interaction with an immersive virtual environment corrects users' distance estimates. Hum Factors 2007; 49:507-17. [PMID: 17552313] [DOI: 10.1518/001872007x200139]
Abstract
OBJECTIVE: Two experiments examined whether prior interaction within an immersive virtual environment (VE) enabled people to improve the accuracy of their distance judgments, and whether an improved ability to estimate distance generalized to other means of estimating distances. BACKGROUND: Prior literature has consistently found that users of immersive VEs underestimate distances by approximately 50%. METHOD: In each of the two experiments, 16 participants viewed objects in an immersive VE and estimated the distance to them by means of blindfolded walking tasks before and after interacting with the VE. RESULTS: The interaction task significantly corrected users' underestimation bias to nearly veridical. Differences between pre- and post-interaction mean distance estimation accuracy were large (d = 4.63) and significant (p < .001), and they generalized across response tasks. CONCLUSION: This finding limits the generality of the underestimation effect in VEs and suggests that distance underestimation in VEs may not be a roadblock to the development of VE applications. APPLICATION: Potential or actual applications of this research include the improvement of VE systems requiring accurate spatial awareness.
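For reference, the effect size d = 4.63 quoted above is a standardized mean difference. A minimal sketch of how such a value is computed, using invented accuracy ratios rather than the study's data:

```python
import statistics

def cohens_d(pre, post):
    """Standardized mean difference with a pooled SD (Cohen's d).
    The samples below are made-up judged/actual distance ratios,
    not the study's data."""
    n1, n2 = len(pre), len(post)
    v1, v2 = statistics.variance(pre), statistics.variance(post)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(post) - statistics.mean(pre)) / pooled_sd

# ~0.5 pre-interaction (the classic ~50% underestimation in immersive
# VEs), near 1.0 (veridical) after interaction:
pre = [0.48, 0.52, 0.50, 0.55, 0.47, 0.51]
post = [0.98, 1.02, 0.99, 1.04, 0.97, 1.01]
d = cohens_d(pre, post)   # a very large effect, as in the abstract
```

With means that differ by half the response range and small within-group variability, d values far above conventional "large" benchmarks (0.8) arise naturally.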
61
Abstract
The sequential-surface-integration-process (SSIP) hypothesis was proposed to elucidate how the visual system constructs the ground-surface representation in the intermediate distance range (He et al., 2004, Perception, 33, 789-806). According to the hypothesis, the SSIP constructs an accurate representation of the near ground surface by using reliable near depth cues. The near ground representation then serves as a template for integrating the adjacent surface patch by using the texture gradient information as the predominant depth cue. By sequentially integrating the surface patches from near to far, the visual system obtains the global ground representation. A critical prediction of the SSIP hypothesis is that, when an abrupt texture-gradient change exists between the near and far ground surfaces, the SSIP can no longer accurately represent the far surface. Consequently, the representation of the far surface will be slanted upward toward the frontoparallel plane (owing to the intrinsic bias of the visual system), and the egocentric distance of a target on the far surface will be underestimated. Our previous findings in the real 3-D environment have shown that observers underestimated the target distance across a texture boundary. Here, we used the virtual-reality system to first test distance judgments with a distance-matching task. We created the texture boundary by having virtual grass- and cobblestone-textured patterns abutting on a flat (horizontal) ground surface in experiment 1, and by placing a brick wall to interrupt the continuous texture gradient of a flat grass surface in experiment 2. In both instances, observers underestimated the target distance across the texture boundary, compared to the homogeneous-texture ground surface (control). Second, we tested the proposal that the far surface beyond the texture boundary is perceived as slanted upward. For this, we used a virtual checkerboard-textured ground surface that was interrupted by a texture boundary. We found that not only was the target distance beyond the texture boundary underestimated relative to the homogeneous-texture condition, but the far surface beyond the texture boundary was also perceived as relatively slanted upward (experiment 3). Altogether, our results confirm the predictions of the SSIP hypothesis.
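A toy rendering of the sequential-integration idea described above (our illustration, not the authors' model): because each patch's representation builds on the previous patch, a slant error injected at a texture boundary propagates to every farther patch.

```python
def integrate_ground(relative_slants_deg, boundary_errors_deg):
    """Sequentially integrate ground patches from near to far.
    Each patch's represented slant builds on the previous patch
    (the 'template'), so an error at one boundary tilts every
    patch beyond it. Toy sketch of the SSIP hypothesis only."""
    represented, acc = [], 0.0
    for rel, err in zip(relative_slants_deg, boundary_errors_deg):
        acc += rel + err
        represented.append(acc)
    return represented

# Flat ground (relative slant 0 everywhere); a texture boundary before
# patch 2 injects an 8-degree error, so every patch beyond it is
# represented as slanted upward, shortening perceived distances there.
flat = integrate_ground([0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 8.0, 0.0])
```

The cumulative accumulator is the point: the boundary error is applied once, yet all farther patches inherit the upward tilt, matching the hypothesis's prediction for targets beyond the boundary.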
Affiliation(s)
- Bing Wu
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, KY 40292, USA
- Zijiang J He
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, KY 40292, USA
- Teng Leng Ooi
- Department of Basic Sciences, Pennsylvania College of Optometry, Elkins Park, PA 19027, USA
62
Wang RF, Crowell JA, Simons DJ, Irwin DE, Kramer AF, Ambinder MS, Thomas LE, Gosney JL, Levinthal BR, Hsieh BB. Spatial updating relies on an egocentric representation of space: effects of the number of objects. Psychon Bull Rev 2006; 13:281-6. [PMID: 16892995] [DOI: 10.3758/bf03193844]
Abstract
Models of spatial updating attempt to explain how representations of spatial relationships between the actor and objects in the environment change as the actor moves. In allocentric models, object locations are encoded in an external reference frame, and only the actor's position and orientation in that reference frame need to be updated. Thus, spatial updating should be independent of the number of objects in the environment (set size). In egocentric updating models, object locations are encoded relative to the actor, so the location of each object relative to the actor must be updated as the actor moves. Thus, spatial updating efficiency should depend on set size. We examined which model better accounts for human spatial updating by having people reconstruct the locations of varying numbers of virtual objects either from the original study position or from a changed viewing position. Consistent with the egocentric updating model, object localization following a viewpoint change was affected by the number of objects in the environment.
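The computational contrast between the two models can be sketched in code (an illustrative 2-D toy, not the authors' implementation): the egocentric scheme touches every object on each move, while the allocentric scheme touches only the actor's pose.

```python
import math

def rotate(v, theta):
    """Rotate a 2-D vector by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def egocentric_update(object_vectors, turn, step):
    """Egocentric model: every object's actor-relative vector must be
    recomputed after the actor steps by `step` (in the old body frame)
    and turns by `turn` radians -- cost grows with set size."""
    updated = []
    for v in object_vectors:
        shifted = (v[0] - step[0], v[1] - step[1])
        updated.append(rotate(shifted, -turn))
    return updated

def allocentric_update(actor_pose, turn, step):
    """Allocentric model: object coordinates live in a world frame, so
    only the actor's pose (x, y, heading) is updated -- constant cost."""
    x, y, heading = actor_pose
    dx, dy = rotate(step, heading)  # body-frame step mapped to world axes
    return (x + dx, y + dy, heading + turn)
```

An object one unit ahead ends up one unit to the actor's right after a 90° left turn under the egocentric scheme; under the allocentric scheme the same turn is a single pose change regardless of how many objects exist, which is exactly the set-size prediction the study tests.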
Affiliation(s)
- Ranxiao Frances Wang
- Department of Psychology and Beckman Institute, 603 E. Daniel St., Room 533, University of Illinois, Champaign, IL 61820, USA.
63
Aznar-Casanova JA, Matsushima EH, Ribeiro-Filho NP, Da Silva JA. One-dimensional and multi-dimensional studies of the exocentric distance estimates in frontoparallel plane, virtual space, and outdoor open field. Span J Psychol 2006; 9:273-84. [PMID: 17120706] [DOI: 10.1017/s113874160000617x]
Abstract
The aim of this study is twofold: first, to determine how visual space, as assessed by exocentric distance estimates, is related to physical space; and second, to determine the structure of visual space so assessed. Visual space was measured in three environments: (a) points located in a 2-D frontoparallel plane, covering a range of distances of 20 cm; (b) stakes placed in a 3-D virtual space (range = 330 mm); and (c) stakes in a 3-D outdoor open field (range = 45 m). Observers made matching judgments of the distances between all possible pairs of 16 stimuli (arranged in a regular 4 x 4 matrix). Two parameters of Stevens' power law quantified the distortion of visual space: its exponent and its coefficient of determination (R2). The results ranked the magnitude of the distortions found in each experimental environment, and also provided information about the efficacy of the available visual cues to spatial layout. Furthermore, our data agree with previous findings of systematic perceptual errors, such that the farther the stimuli, the larger the distortion of the area subtended by perceived distances between stimuli. Additionally, we measured the magnitude of distortion of visual space relative to physical space by a parameter of multidimensional scaling analyses, the RMSE. From these results, the magnitude of such distortions can be ranked, and the utility or efficacy of the available visual cues informing about the spatial layout can be inferred.
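The power-law analysis the abstract describes (judged = k * physical^n, with the exponent and R2 as distortion indices) reduces to ordinary least squares in log-log coordinates; a minimal sketch with toy data, not the study's:

```python
import math

def fit_stevens(physical, judged):
    """Least-squares fit of judged = k * physical**n in log-log space.
    Returns (k, n, r_squared). Toy illustration of the analysis, not
    the authors' code or data."""
    xs = [math.log(p) for p in physical]
    ys = [math.log(j) for j in judged]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    n = sxy / sxx                 # power-law exponent
    k = math.exp(my - n * mx)     # scaling constant
    r2 = sxy ** 2 / (sxx * syy)   # coefficient of determination
    return k, n, r2

# Exact power-law data recovers the generating parameters:
physical = [1.0, 2.0, 4.0, 8.0]
judged = [2.0 * p ** 0.8 for p in physical]
k, n, r2 = fit_stevens(physical, judged)
```

An exponent below 1 with high R2 indicates compressive but orderly judgments; a low R2 signals distortion beyond what any power function captures, which is why the abstract treats both parameters as distortion measures.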
Affiliation(s)
- J Antonio Aznar-Casanova
- Department of Basic Psychology, Faculty of Psychology, University of Barcelona, Passeig Vall d'Hebron, 171, 08035 Barcelona, Spain.
64
Riecke BE, Cunningham DW, Bülthoff HH. Spatial updating in virtual reality: the sufficiency of visual information. Psychol Res 2006; 71:298-313. [PMID: 17024431] [DOI: 10.1007/s00426-006-0085-z]
Abstract
Robust and effortless spatial orientation critically relies on "automatic and obligatory spatial updating", a largely automatized and reflex-like process that transforms our mental egocentric representation of the immediate surroundings during ego-motions. A rapid pointing paradigm was used to assess automatic/obligatory spatial updating after visually displayed upright rotations with or without concomitant physical rotations using a motion platform. Visual stimuli displaying a natural, subject-known scene proved sufficient for enabling automatic and obligatory spatial updating, irrespective of concurrent physical motions. This challenges the prevailing notion that visual cues alone are insufficient for enabling such spatial updating of rotations, and that vestibular/proprioceptive cues are both required and sufficient. Displaying optic flow devoid of landmarks during the motion and pointing phase was insufficient for enabling automatic spatial updating, but could not be entirely ignored either. Interestingly, additional physical motion cues hardly improved performance, and were insufficient for affording automatic spatial updating. The results are discussed in the context of the mental transformation hypothesis and the sensorimotor interference hypothesis, which associates difficulties in imagined perspective switches to interference between the sensorimotor and cognitive (to-be-imagined) perspective.
Affiliation(s)
- Bernhard E Riecke
- Max Planck Institute for Biological Cybernetics, Spemannstrasse 38, 72076 Tübingen, Germany.
65
Ooi TL, He ZJ. Elucidating the ground-based mechanisms underlying space perception in the intermediate distance range. Cogn Process 2006. [DOI: 10.1007/s10339-006-0073-7]
66
Lappin JS, Shelton AL, Rieser JJ. Environmental context influences visually perceived distance. Percept Psychophys 2006; 68:571-81. [PMID: 16933422] [DOI: 10.3758/bf03208759]
Abstract
What properties determine visually perceived space? We discovered that the perceived relative distances of familiar objects in natural settings depended in unexpected ways on the surrounding visual field. Observers bisected egocentric distances in a lobby, in a hallway, and on an open lawn. Three key findings were the following: (1) Perceived midpoints were too far from the observer, which is the opposite of the common foreshortening effect. (2) This antiforeshortening constant error depended on the environmental setting: it was greatest in the lobby and hall but nonsignificant on the lawn. (3) Context also affected distance discrimination; variability was greater in the hall than in the lobby or on the lawn. A second experiment replicated these findings, using a method of constant stimuli. Evidently, both the accuracy and the precision of perceived distance depend on subtle properties of the surrounding environment.
Affiliation(s)
- Joseph S Lappin
- Vanderbilt Vision Research Center, Department of Psychology, Vanderbilt University, 301 Wilson Hall, Nashville, TN 37203, USA.
67
Kudoh N. Dissociation between visual perception of allocentric distance and visually directed walking of its extent. Perception 2006; 34:1399-416. [PMID: 16355744] [DOI: 10.1068/p5444]
Abstract
Walking without vision to previously viewed targets was compared with visual perception of allocentric distance in two experiments. Experimental evidence had shown that physically equal distances in a sagittal plane on the ground were perceptually underestimated as compared with those in a frontoparallel plane, even under full-cue conditions. In spite of this perceptual anisotropy of space, Loomis et al (1992, Journal of Experimental Psychology: Human Perception and Performance, 18, 906-921) found that subjects could match both types of distances in a blind-walking task. In experiment 1 of the present study, subjects were required to reproduce the extent of allocentric distance between two targets by either walking towards the targets, or by walking in a direction incompatible with the locations of the targets. The latter condition required subjects to derive an accurate allocentric distance from information based on the perceived locations of the two targets. The walked distance in the two conditions was almost identical whether the two targets were presented in depth (depth-presentation condition) or in the frontoparallel plane (width-presentation condition). The results of a perceptual-matching task showed that the depth distances had to be much greater than the width distances in order to be judged to be equal in length (depth compression). In experiment 2, subjects were required to reproduce the extent of allocentric distance from the viewing point by blindly walking in a direction other than toward the targets. The walked distance in the depth-presentation condition was shorter than that in the width-presentation condition. This anisotropy in motor responses, however, was mainly caused by apparent overestimation of length oriented in width, not by depth compression. In addition, the walked distances were much better scaled than those in experiment 1. These results suggest that the perceptual and motor systems share a common representation of the location of targets, whereas a dissociation in allocentric distance exists between the two systems in full-cue conditions.
Affiliation(s)
- Nobuo Kudoh
- Department of Psychology, Faculty of Humanities, Niigata University, Ikarashi, Niigata 950-2181, Japan.
68
Allen GL, Rashotte MA. Training metric accuracy in distance estimation skill: pictures versus words. Appl Cogn Psychol 2006. [DOI: 10.1002/acp.1174]
69
Ni R, Braunstein M, Andersen G. Distance perception from motion parallax and ground contact. Vis Cogn 2005. [DOI: 10.1080/13506280444000724]
70
Tcheang L, Gilson SJ, Glennerster A. Systematic distortions of perceptual stability investigated using immersive virtual reality. Vision Res 2005; 45:2177-89. [PMID: 15845248] [PMCID: PMC2833395] [DOI: 10.1016/j.visres.2005.02.006]
Abstract
Using an immersive virtual reality system, we measured the ability of observers to detect the rotation of an object when its movement was yoked to the observer's own translation. Most subjects had a large bias such that a static object appeared to rotate away from them as they moved. Thresholds for detecting target rotation were similar to those for an equivalent speed discrimination task carried out by static observers, suggesting that visual discrimination is the predominant limiting factor in detecting target rotation. Adding a stable visual reference frame almost eliminated the bias. Varying the viewing distance of the target had little effect, consistent with observers underestimating distance walked. However, accuracy of walking to a briefly presented visual target was high and not consistent with an underestimation of distance walked. We discuss implications for theories of a task-independent representation of visual space.
Affiliation(s)
- Lili Tcheang
- University Laboratory of Physiology, Parks Road, Oxford, OX1 3PT
- Stuart J. Gilson
- University Laboratory of Physiology, Parks Road, Oxford, OX1 3PT
71
Creem-Regehr SH, Willemsen P, Gooch AA, Thompson WB. The influence of restricted viewing conditions on egocentric distance perception: implications for real and virtual indoor environments. Perception 2005; 34:191-204. [PMID: 15832569] [DOI: 10.1068/p5144]
Abstract
We carried out three experiments to examine the influence of field of view and binocular viewing restrictions on absolute distance perception in real-world indoor environments. Few of the classical visual cues provide direct information for accurate absolute distance judgments to points in the environment beyond about 2 m from the viewer. Nevertheless, in previous work it has been found that visually directed walking tasks reveal accurate distance estimations in full-cue real-world environments to distances up to 20 m. In contrast, the same tasks in virtual environments produced with head-mounted displays (HMDs) show large compression of distance. Field of view and binocular viewing are common limitations in research with HMDs, and have rarely been studied under full pictorial-cue conditions in the context of distance perception in the real world. Experiment 1 showed that the view of one's body and feet on the floor was not necessary for accurate distance perception. In experiment 2 we manipulated the horizontal and the vertical field of view along with head rotation and found that a restricted field of view did not affect the accuracy of distance estimations when head movement was allowed. Experiment 3 showed that performance with monocular viewing was equal to that with binocular viewing. These results have implications for the information needed to scale egocentric distance in the real world and reduce the support for the hypothesis that a limited field of view or imperfections in binocular image presentation are the cause of the underestimation seen with HMDs.
72
Abstract
People frequently analyze the actions of other people for the purpose of action coordination. To understand whether such self-relative action perception differs from other-relative action perception, the authors had observers either compare their own walking speed with that of a point-light walker or compare the walking speeds of 2 point-light walkers. In Experiment 1, observers walked, bicycled, or stood while performing a gait-speed discrimination task. Walking observers demonstrated the poorest sensitivity to walking speed, suggesting that perception and performance of the same action alters visual-motion processes. Experiments 2-6 demonstrated that the processes used during self-relative and other-relative action perception differ significantly in their dependence on observers' previous motor experience, current motor effort, and potential for action coordination. These results suggest that the visual analysis of human motion during traditional laboratory studies can differ substantially from the visual analysis of human movement under more realistic conditions.
Affiliation(s)
- Alissa Jacobs
- Department of Psychology, Rutgers, The State University of New Jersey, Newark, NJ 07102, USA
73
Richardson AR, Waller D. The effect of feedback training on distance estimation in virtual environments. Appl Cogn Psychol 2005. [DOI: 10.1002/acp.1140]
74
Abstract
PURPOSE Geometrical analysis of monocular visual information specifying distance shows that a low vision telescope compresses optically specified distances by a factor about equal to its magnification. Using a group of eight visually healthy adults, we investigated the initial perceptual effect of putting on a 2x Galilean telescope and the adaptation produced by wearing the telescope. METHODS Viewing was monocular, and the environment was only visible through the telescope. Because the telescope reduced the field of view to 13 degrees, we also tested a different group of eight visually normal adults who wore a simple monocular tube that restricted the field of view to 13 degrees. We measured perceived distance in a corridor using a visually directed open-loop walking task with distances ranging from 4 to 8 m. For both groups, monocular distance perception was measured before putting on the viewing device (baseline), immediately after putting on the viewing device (preadaptation), after wearing the viewing device during a 30-minute period of visual-motor activities (postadaptation), and immediately after taking off the viewing device (aftereffect). RESULTS Comparing preadaptation with baseline measurements, the viewing devices produced a 15.4% initial compression of perceived distance on average. Comparing aftereffect with baseline measurements, the adaptation period produced a negative aftereffect that was 56.5% of the initial compression, thus showing substantial adaptation. The initial compression and the adaptation were highly significant effects, but neither effect was significantly different for the telescope group and the tube group. CONCLUSION We conclude that free head movements in a structured environment can largely overcome the optically specified compression of distance produced by the 2x magnification of a low vision telescope, but there remains a significant initial compression of perceived distance that is produced by the restricted field of view. This compression can be substantially reduced by a short period of interaction with the environment.
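A compact sketch of the optical geometry the abstract's first sentence invokes (the notation is mine, not the authors'): a target of physical size s at distance d subtends a visual angle of roughly s/d, and an M-x telescope multiplies that angle by M, which is exactly the angle s would subtend at distance d/M.

```latex
% Angular-size geometry under M-x magnification (illustrative notation):
%   s = physical target size, d = physical distance, M = magnification.
\theta \approx \frac{s}{d}
\qquad
\theta' = M\theta = \frac{s}{d/M}
\quad\Longrightarrow\quad
d_{\text{specified}} \approx \frac{d}{M}
```

For a 2x telescope this geometry specifies a 50% compression; the observed 15.4% initial compression is far smaller, consistent with the authors' conclusion that free head movements in a structured environment largely overcome the optically specified compression.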
Affiliation(s)
- Dina Shah
- State University of New York, State College of Optometry, New York, New York 10036, USA

75
Philbeck JW, O'Leary S, Lew ALB. Large errors, but no depth compression, in walked indications of exocentric extent. Percept Psychophys 2004; 66:377-91. [PMID: 15283063 DOI: 10.3758/bf03194886] [Citation(s) in RCA: 15] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Observers can sight a target 20 m away or more and then walk to it accurately without vision. In contrast to this good performance, this article shows that walked indications of the exocentric separation of two locations exceed the required values by over 70% when vision is obscured. Significantly, these large errors are coupled with a robust lack of depth foreshortening, even under conditions in which visual matches and verbal estimates of extent exhibit strong evidence of depth compression. This article presents evidence that the overshooting errors are due largely to recalibration of locomotor control produced by prolonged exposure to nonvisual walking. The robust lack of depth foreshortening, meanwhile, could reflect a corresponding isotropy in the spatial representation controlling the walking response. More research is needed to confirm this interpretation, however.
Affiliation(s)
- John W Philbeck
- Department of Psychology, George Washington University, Washington, DC 20052, USA.

76
Wu B, Ooi TL, He ZJ. Perceiving distance accurately by a directional process of integrating ground information. Nature 2004; 428:73-7. [PMID: 14999282 DOI: 10.1038/nature02350] [Citation(s) in RCA: 140] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2003] [Accepted: 01/20/2004] [Indexed: 11/10/2022]
Abstract
By itself, the absolute distance of an object cannot be accurately judged beyond 2-3 m (refs 1-3). Yet, when it is viewed with reference to a flat terrain, humans accurately judge the absolute distance of the object up to 20 m, an ability that is important for various actions. Here we provide evidence that this is accomplished by integrating local patches of ground information into a global surface reference frame. We first show that restricting an observer's visual field of view to the local ground area around the target leads to distance underestimation, indicating that a relatively wide expanse of the ground surface is required for accurate distance judgement. Second, as proof of surface integration, we show that even with the restricted view, the observer can accurately judge absolute distance by scanning local patches of the ground surface, bit by bit, from near to far, but not in the reverse direction. This finding also reveals that the surface integration process uses the near-ground-surface information as a foundation for surface representation, and extrapolation to the far ground surface around the target for accurate absolute distance computation.
Affiliation(s)
- Bing Wu
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, Kentucky 40292, USA

77
Abstract
When one moves, the spatial relationship between oneself and the entire world changes. Spatial updating refers to the cognitive process that computes these relationships as one moves. In two experiments, we tested whether spatial updating occurs automatically for multiple environments simultaneously. Participants turned relative to either a room or the surrounding campus buildings and then pointed to targets in both the environment in which they turned (updated environment) and the other environment (nonupdated environment). The participants automatically updated the room targets when they moved relative to the campus, but they did not update the campus targets when they moved relative to the room. Thus, automatic spatial updating depends on the nature of the environment. Implications for theories of spatial learning and the structure of human spatial representations are discussed.
Affiliation(s)
- Ranxiao Frances Wang
- Department of Psychology and Beckman Institute, University of Illinois, Champaign, Illinois 61820, USA.

78
Yang Z, Purves D. A statistical explanation of visual space. Nat Neurosci 2003; 6:632-40. [PMID: 12754512 DOI: 10.1038/nn1059] [Citation(s) in RCA: 81] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2003] [Accepted: 03/25/2003] [Indexed: 11/10/2022]
Abstract
The subjective visual space perceived by humans does not reflect a simple transformation of objective physical space; rather, perceived space has an idiosyncratic relationship with the real world. To date, there is no consensus about either the genesis of perceived visual space or the implications of its peculiar characteristics for visually guided behavior. Here we used laser range scanning to measure the actual distances from the image plane of all unoccluded points in a series of natural scenes. We then asked whether the differences between real and apparent distances could be explained by the statistical relationship of scene geometry and the observer. We were able to predict perceived distances in a variety of circumstances from the probability distribution of physical distances. This finding lends support to the idea that the characteristics of human visual space are determined probabilistically.
Affiliation(s)
- Zhiyong Yang
- Department of Neurobiology, Box 3209, Duke University Medical Center, Durham, North Carolina 27710, USA.

79
Spatial Representations and Spatial Updating. PSYCHOLOGY OF LEARNING AND MOTIVATION 2003. [DOI: 10.1016/s0079-7421(03)01004-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register]
80
Hecht H, Kaiser MK, Savelsbergh GJP, van der Kamp J. The impact of spatiotemporal sampling on time-to-contact judgments. Percept Psychophys 2002; 64:650-66. [PMID: 12132765 DOI: 10.3758/bf03194733] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
When motion in the frontoparallel plane is temporally sampled, it is often perceived to be slower than its continuous counterpart. This finding stands in contrast to humans' ability to extrapolate and anticipate constant-velocity motion. We investigated whether this sampling bias generalizes to motion in the sagittal plane (i.e., objects approaching the observer). We employed a paradigm in which observers judged the arrival time of an oncoming object. We found detrimental effects of time sampling on both perceived time to contact and time to passage. Observers systematically overestimated the time it would take a frontally approaching object to intersect their eye plane. To rule out artifacts inherent in computer simulation, we replicated the experiment, using real objects. The bias persisted and proved to be robust across a large range of temporal and spatial variations. Energy and pooling mechanisms are discussed in an attempt to understand the effect.
Affiliation(s)
- Heiko Hecht
- MIT Man-Vehicle Lab, Cambridge, Massachusetts 02139, USA.

81

82
Abstract
A biological system is often more efficient when it takes advantage of the regularities in its environment. Like other terrestrial creatures, our spatial sense relies on the regularities associated with the ground surface. A simple, but important, ecological fact is that the field of view of the ground surface extends upwards from near (feet) to infinity (horizon). It forms the basis of a trigonometric relationship wherein the further an object on the ground is, the higher in the field of view it looks, with an object at infinity being seen at the horizon. Here, we provide support for the hypothesis that the visual system uses the angular declination below the horizon for distance judgement. Using a visually directed action task, we found that when the angular declination was increased by binocularly viewing through base-up prisms, the observer underestimated distance. After adapting to the same prisms, however, the observer overestimated distance on prism removal. Most significantly, we show that the distance overestimation as an after-effect of prism adaptation was due to a lowered perceived eye level, which reduced the object's angular declination below the horizon.
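A compact sketch of the trigonometric relationship this abstract invokes (the notation is mine, not the authors'): for an eye height h and a target seen at angular declination α below the horizon, the ground distance is d = h/tan α. Base-up prisms that enlarge α therefore specify a shorter distance, while a lowered perceived eye level after adaptation shrinks the effective α and inflates d.

```latex
% Distance from angular declination below the horizon (illustrative notation):
%   h = eye height, \alpha = angular declination of the target below the horizon.
d = \frac{h}{\tan\alpha}
% Base-up prisms: \alpha increases, so d decreases (underestimation).
% After adaptation, perceived eye level is lowered, reducing the effective
% \alpha on prism removal, so d increases (overestimation).
```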
Affiliation(s)
- T L Ooi
- Department of Biomedical Sciences, Southern College of Optometry, Memphis, Tennessee 38104, USA.

83
Meng JC, Sedgwick HA. Distance perception mediated through nested contact relations among surfaces. Percept Psychophys 2001; 63:1-15. [PMID: 11304007 DOI: 10.3758/bf03200497] [Citation(s) in RCA: 60] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
In complex natural scenes, objects at different spatial locations can usually be related to each other through nested contact relations among adjoining surfaces. Our research asks how well human observers, under monocular static viewing conditions, are able to utilize this information in distance perception. We present computer-generated naturalistic scenes of a cube resting on a platform, which is in turn resting on the ground. Observers adjust the location of a marker on the ground to equal the perceived distance of the cube. We find that (1) perceived distance of the cube varies appropriately as the perceived location of contact between the platform and the ground varies; (2) variability increases systematically as the relating surfaces move apart; and (3) certain local edge alignments allow precise propagation of distance information. These results demonstrate considerable efficiency in the mediation of distance perception through nested contact relations among surfaces.
Affiliation(s)
- J C Meng
- State University of New York, New York, USA

84
Philbeck JW, Behrmann M, Black SE, Ebert P. Intact spatial updating during locomotion after right posterior parietal lesions. Neuropsychologia 2000; 38:950-63. [PMID: 10775706 DOI: 10.1016/s0028-3932(99)00156-6] [Citation(s) in RCA: 18] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
Abstract
One function of the posterior parietal cortex (PPC) is to monitor and integrate sensory signals relating to the current pointing direction of the eyes. We investigated the possibility that the human PPC also contributes to spatial updating during larger-scale behaviors. Two groups of patients with brain injuries either including or excluding the right hemisphere PPC and a group of healthy subjects performed a visually-directed walking task, in which the subject views a target and then attempts to walk to it without vision. All groups walked without vision accurately and precisely to remembered targets up to 6 m away; the patient groups also performed similarly to the healthy controls when indicating egocentric distances using non-motoric responses. These results indicate that the right PPC is not critically involved in monitoring and integrating non-visual self-motion signals, at least along linear paths. In addition, visual perception of egocentric distance in multi-cue environments is immune to injury of a variety of brain areas.
Affiliation(s)
- J W Philbeck
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA 15213, USA.

85
Hecht H, van Doorn A, Koenderink JJ. Compression of visual space in natural scenes and in their photographic counterparts. Percept Psychophys 1999; 61:1269-86. [PMID: 10572457 DOI: 10.3758/bf03206179] [Citation(s) in RCA: 34] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Classical theories of space perception posit continuous distortions of subjective space. These stand in contrast to the quantitatively and qualitatively different distortions experienced in space that is represented pictorially. We challenge several aspects of these theories. Comparing real-world objects with depictions of the same objects, we investigated to what extent distortions are introduced by the photographic medium. Corners of irregularly shaped buildings had to be judged in terms of the vertical dihedral angles subtended by two adjacent walls. Across all conditions, a robust effect of viewing distance was found: Building corners appear to flatten out with distance. Moreover, depictions of corners produce remarkably similar results and should not receive a different theoretical treatment than do real-world scenes. The flattening of vertical angles cannot be explained by a linear distortion of the entire visual space. We suggest that, for natural scenes, compression of space is local and dependent on contextual information.
Affiliation(s)
- H Hecht
- Universität Bielefeld, Germany.

86
Loomis JM, Philbeck JW. Is the anisotropy of perceived 3-D shape invariant across scale? Percept Psychophys 1999; 61:397-402. [PMID: 10334089 DOI: 10.3758/bf03211961] [Citation(s) in RCA: 67] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
A number of studies have resulted in the finding of a 3-D perceptual anisotropy, whereby spatial intervals oriented in depth are perceived to be smaller than physically equal intervals in the frontoparallel plane. In this experiment, we examined whether this anisotropy is scale invariant. The stimuli were L shapes created by two rods placed flat on a level grassy field, with one rod defining a frontoparallel interval, and the other, a depth interval. Observers monocularly and binocularly viewed L shapes at two scales such that they were projectively equivalent under monocular viewing. Observers judged the aspect ratio (depth/width) of each shape. Judged aspect ratio indicated a perceptual anisotropy that was invariant with scale for monocular viewing, but not for binocular viewing. When perspective is kept constant, monocular viewing results in perceptual anisotropy that is invariant across these two scales and presumably across still larger scales. This scale invariance indicates that the perception of shape under these conditions is determined independently of the perception of size.
Affiliation(s)
- J M Loomis
- Department of Psychology, University of California, Santa Barbara 93106-9660, USA.

87

88
Abstract
Mathematically, three-dimensional space can be represented differently by the cartesian, polar, and other coordinate systems. However, in physical sciences, the choice of representation system is restricted by the need to simplify a machine's computation while enhancing its efficiency. Does the brain, for the same reasons, 'select' the most cost-efficient way to represent the three-dimensional location of objects? As we frequently interact with objects on the common ground surface, it might be beneficial for the visual system to code an object's location using a ground-surface-based reference frame. More precisely, the brain could use a quasi-two-dimensional coordinate system (x(s), y(s)) with respect to the ground surface (s), rather than a strictly three-dimensional coordinate system (x, y, z), thus reducing coding redundancy and simplifying computations. Here we provide support for this view by studying human psychophysical performance in perceiving absolute distance and in visually directed action tasks. For example, when an object was seen on a continuous, homogeneous texture ground surface, the observer judged the distance to the object accurately. However, when similar surface information was unavailable, for example, when the object was seen across a gap in the ground, or across distinct texture regions, distance judgement was impaired.
Affiliation(s)
- M J Sinai
- Department of Psychology, University of Louisville, Kentucky 40292, USA

89
Amorim MA, Loomis JM, Fukusima SS. Reproduction of object shape is more accurate without the continued availability of visual information. Perception 1998; 27:69-86. [PMID: 9692089 DOI: 10.1068/p270069] [Citation(s) in RCA: 19] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
Abstract
An unfamiliar configuration lying in depth and viewed from a distance is typically seen as foreshortened. The hypothesis motivating this research was that a change in an observer's viewpoint even when the configuration is no longer visible induces an imaginal updating of the internal representation and thus reduces the degree of foreshortening. In experiment 1, observers attempted to reproduce configurations defined by three small glowing balls on a table 2 m distant under conditions of darkness following 'viewpoint change' instructions. In one condition, observers reproduced the continuously visible configuration using three other glowing balls on a nearer table while imagining standing at the distant table. In the other condition, observers viewed the configuration, it was then removed, and they walked in darkness to the far table and reproduced the configuration. Even though the observers received no additional information about the stimulus configuration in walking to the table, they were more accurate (less foreshortening) than in the other condition. In experiment 2, observers reproduced distant configurations on a nearer table more accurately when doing so from memory than when doing so while viewing the distant stimulus configuration. In experiment 3, observers performed both the real and imagined perspective change after memorizing the remote configuration. The results of the three experiments indicate that the continued visual presence of the target configuration impedes imaginary perspective-change performance and that an actual change in viewpoint does not increase reproduction accuracy substantially over that obtained with an imagined change in viewpoint.
Affiliation(s)
- M A Amorim
- Laboratoire de Physiologie de la Perception et de l'Action, Collège de France-CNRS, Paris, France.

90

91
Philbeck JW, Loomis JM, Beall AC. Visually perceived location is an invariant in the control of action. Percept Psychophys 1997; 59:601-12. [PMID: 9158334 DOI: 10.3758/bf03211868] [Citation(s) in RCA: 76] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
We provide experimental evidence that perceived location is an invariant in the control of action, by showing that different actions are directed toward a single visually specified location in space (corresponding to the putative perceived location) and that this single location, although specified by a fixed physical target, varies with the availability of information about the distance of that target. Observers in two conditions varying in the availability of egocentric distance cues viewed targets at 1.5, 3.1, or 6.0 m and then attempted to walk to the target with eyes closed using one of three paths; the path was not specified until after vision was occluded. The observers stopped at about the same location regardless of the path taken, providing evidence that action was being controlled by some invariant, ostensibly visually perceived location. That it was indeed perceived location was indicated by the manipulation of information about target distance: the trajectories in the full-cues condition converged near the physical target locations, whereas those in the reduced-cues condition converged at locations consistent with the usual perceptual errors found when distance cues are impoverished.
Affiliation(s)
- J W Philbeck
- Department of Psychology, University of California, Santa Barbara 93106-9660, USA.

92