1. Zhou L, He ZJ, Ooi TL. Perception of distance during self-motion depends on the brain's internal model of the terrain. PLoS One 2025; 20:e0316524. PMID: 40063893; PMCID: PMC11893116; DOI: 10.1371/journal.pone.0316524.
Abstract
The body's geometrical relationship with the terrain is important for depth perception in human and non-human terrestrial animals. Static human observers in the dark employ the brain's internal model of the terrain, the intrinsic bias, to represent the ground as an allocentric reference frame for coding distance. However, it is unknown whether the same ground-based coding process operates when observers walk in a cue-impoverished environment with a visible ground surface. We explored this by measuring human observers' perceived locations of dimly lit targets after a short walk in the dark from the home-base location. We found that the intrinsic bias remained at the home-base location rather than the destination after walking, causing distance underestimation, consistent with its allocentric nature. We then measured the perceived distances of dimly lit targets from the destination when there were visual depth cues on the floor. We found that the judged locations of targets on the floor traced a slanted surface shifted toward the home-base location, again indicating distance underestimation. This suggests that, in dynamically translating observers, the brain integrates the allocentric intrinsic bias with visual depth cues to construct an allocentric ground reference frame. More broadly, our findings underscore the dynamic interaction between the internal model of the ground and external depth cues.
Affiliation(s)
- Liu Zhou
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, Kentucky, United States of America
- Zijiang J. He
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, Kentucky, United States of America
- Teng Leng Ooi
- College of Optometry, The Ohio State University, Columbus, Ohio, United States of America
2. Chen Y, He ZJ, Ooi TL. Factors Affecting Stimulus Duration Threshold for Depth Discrimination of Asynchronous Targets in the Intermediate Distance Range. Invest Ophthalmol Vis Sci 2024; 65:36. PMID: 39446355; PMCID: PMC11512565; DOI: 10.1167/iovs.65.12.36.
Abstract
Purpose: Binocular depth discrimination in the near distance range (< 2 m) improves with stimulus duration. However, whether the same response pattern holds in the intermediate distance range (approximately 2-25 m) remains unknown, because the spatial coding mechanisms are thought to be different.
Methods: We used a two-interval forced-choice procedure to measure absolute depth discrimination of paired asynchronous targets (disparities of 3, 6, or 16 arc min). The paired targets (0.2 degrees) were located over distance and height ranges of 4.5 to 7.0 m and 0.15 to 0.7 m, respectively. Experiment 1 estimated duration thresholds for binocular depth discrimination at varying target durations (40-1610 ms) in the presence of a 2 × 6 array of parallel texture elements spanning 1.5 × 5.83 m on the floor. The texture elements provided a visible background in the light-tight room (9 × 3 m). Experiment 2 used a similar setup to control for viewing conditions: binocular versus monocular, and with versus without the texture background. Experiment 3 compared binocular depth discrimination between brief (40, 80, and 125 ms) and continuous texture-background presentation.
Results: Stimulus duration threshold for depth discrimination decreased with increasing disparity in Experiment 1. Experiment 2 revealed that depth discrimination with the texture background was near chance level under monocular viewing, and binocular performance degraded without the texture background. Experiment 3 showed that continuous texture-background presentation enhances binocular depth discrimination.
Conclusions: Absolute depth discrimination improves with target duration, binocular viewing, and a texture background. Performance further improved with longer background duration, underscoring the role of ground surface representation in spatial coding.
Affiliation(s)
- Yiya Chen
- College of Optometry, The Ohio State University, Columbus, Ohio, United States
- Zijiang J. He
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, Kentucky, United States
- Teng Leng Ooi
- College of Optometry, The Ohio State University, Columbus, Ohio, United States
3. Zhou L, Wei W, Ooi TL, He ZJ. An allocentric human odometer for perceiving distances on the ground plane. eLife 2024; 12:RP88095. PMID: 39023517; PMCID: PMC11257686; DOI: 10.7554/elife.88095.
Abstract
We reliably judge the locations of static objects when we walk, despite the retinal images of these objects moving with every step we take. Here, we showed that our brains solve this optical illusion by adopting an allocentric spatial reference frame. We measured perceived target location after the observer walked a short distance from the home base. Supporting the allocentric coding scheme, we found that the intrinsic bias, which acts as a spatial reference frame for perceiving the location of a dimly lit target in the dark, remained grounded at the home base rather than traveling along with the observer. The path-integration mechanism responsible for this can utilize both active and passive (vestibular) translational motion signals, but only along the horizontal direction. This asymmetric path-integration finding in human visual space perception is reminiscent of the asymmetric spatial memory finding in desert ants, pointing to nature's wondrous and logically simple design for terrestrial creatures.
Affiliation(s)
- Liu Zhou
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, United States
- Wei Wei
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, United States
- College of Optometry, The Ohio State University, Columbus, United States
- Teng Leng Ooi
- College of Optometry, The Ohio State University, Columbus, United States
- Zijiang J He
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, United States
4. Zhou L, Wei W, Ooi TL, He ZJ. An allocentric human odometer for perceiving distances on the ground plane. bioRxiv [preprint] 2024:2023.03.22.533725. PMID: 38645085; PMCID: PMC11030244; DOI: 10.1101/2023.03.22.533725.
Abstract
We reliably judge the locations of static objects when we walk, despite the retinal images of these objects moving with every step we take. Here, we showed that our brains solve this optical illusion by adopting an allocentric spatial reference frame. We measured perceived target location after the observer walked a short distance from the home base. Supporting the allocentric coding scheme, we found that the intrinsic bias [1, 2], which acts as a spatial reference frame for perceiving the location of a dimly lit target in the dark, remained grounded at the home base rather than traveling along with the observer. The path-integration mechanism responsible for this can utilize both active and passive (vestibular) translational motion signals, but only along the horizontal direction. This anisotropic path-integration finding in human visual space perception is reminiscent of the anisotropic spatial memory finding in desert ants [3], pointing to nature's wondrous and logically simple design for terrestrial creatures.
5. Kemp JT, Cesanek E, Domini F. Perceiving depth from texture and disparity cues: Evidence for a non-probabilistic account of cue integration. J Vis 2023; 23:13. PMID: 37486299; PMCID: PMC10382782; DOI: 10.1167/jov.23.7.13.
Abstract
Bayesian inference theories have been extensively used to model how the brain derives three-dimensional (3D) information from ambiguous visual input. In particular, the maximum likelihood estimation (MLE) model combines estimates from multiple depth cues according to their relative reliability to produce the most probable 3D interpretation. Here, we tested an alternative theory of cue integration, termed the intrinsic constraint (IC) theory, which postulates that the visual system derives the most stable, not the most probable, interpretation of the visual input amid variations in viewing conditions. The vector sum model provides a normative approach for achieving this goal, where individual cue estimates are components of a multidimensional vector whose norm determines the combined estimate. Individual cue estimates need not be accurate; they are related to distal 3D properties through a deterministic mapping. In three experiments, we show that the IC theory can more adeptly account for 3D cue integration than MLE models. In Experiment 1, we show systematic biases in the perception of depth from texture and depth from binocular disparity. Critically, we demonstrate that the vector sum model predicts an increase in perceived depth when these cues are combined. In Experiment 2, we illustrate the IC theory's radical reinterpretation of the just noticeable difference (JND) and test the related vector sum model prediction regarding the classic finding of smaller JNDs for combined-cue versus single-cue stimuli. In Experiment 3, we confirm the vector sum prediction that biases found in cue integration experiments cannot be attributed to flatness cues, as the MLE model predicts.
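The contrast between the two combination rules can be made concrete with a small numerical sketch. This is only an illustration, not the paper's implementation: the function names, the equal-reliability assumption, and the cue values are ours.

```python
import numpy as np

def mle_combined(estimates, sigmas):
    """Reliability-weighted average: the standard MLE cue-combination rule.
    Weights are inverse variances, so the combined estimate always lies
    between the single-cue estimates."""
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    return float(np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w))

def vector_sum_combined(estimates):
    """Vector-sum rule from the intrinsic constraint (IC) account: each cue
    estimate is one component of a multidimensional vector, and the combined
    estimate is that vector's norm, so it exceeds either component alone."""
    return float(np.linalg.norm(np.asarray(estimates, dtype=float)))

# Hypothetical single-cue depth estimates (arbitrary units), equal noise:
texture, disparity = 3.0, 4.0
mle = mle_combined([texture, disparity], sigmas=[1.0, 1.0])  # between the cues
ic = vector_sum_combined([texture, disparity])               # larger than either cue
```

The sketch shows why the two accounts make opposite predictions for Experiment 1: under MLE the combined-cue estimate is an average and cannot exceed the larger single-cue estimate, whereas the vector-sum norm predicts the increase in perceived depth reported above.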
Affiliation(s)
- Jovan T Kemp
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA
- Evan Cesanek
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Fulvio Domini
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA
- Italian Institute of Technology, Rovereto, Italy
6. Cardelli L, Tullo MG, Galati G, Sulpizio V. Effect of optic flow on spatial updating: insight from an immersive virtual reality study. Exp Brain Res 2023; 241:865-874. PMID: 36781456; DOI: 10.1007/s00221-023-06567-z.
Abstract
Self-motion information is required to keep track of where we are with respect to our environment (spatial updating). Visual signals such as optic flow provide relevant information about self-motion, especially in the absence of vestibular and/or proprioceptive cues generated by physical movement. However, the role of optic flow in spatial updating is still debated. A virtual reality system based on a head-mounted display was used to allow participants to experience a self-motion sensation within a naturalistic environment in the absence of physical movement. We asked participants to keep track of the spatial position of a target during simulated self-motion while manipulating the availability of optic flow coming from the lower part of the environment (the ground plane). In each trial, the ground could be a green lawn (optic flow ON) or covered in snow (optic flow OFF). We observed that the lack of optic flow on the ground had a detrimental effect on spatial updating. Furthermore, we explored the interaction between optic flow availability and different characteristics of self-motion: increasing self-motion speed had a detrimental effect on spatial updating, especially in the absence of optic flow, while self-motion direction (leftward, forward, rightward) and path (translational and curvilinear) had no statistically significant effect. Overall, we demonstrated that, in the absence of some idiothetic cues, the optic flow provided by the ground plays a dominant role in self-motion estimation and, hence, in the ability to update the spatial relationships between one's own position and the positions of surrounding objects.
Affiliation(s)
- Lisa Cardelli
- Brain Imaging Laboratory, Department of Psychology, Sapienza University, Via Dei Marsi 78, 00185, Rome, Italy
- Maria Giulia Tullo
- Brain Imaging Laboratory, Department of Psychology, Sapienza University, Via Dei Marsi 78, 00185, Rome, Italy
- Department of Translational and Precision Medicine, Sapienza University, Rome, Italy
- Gaspare Galati
- Brain Imaging Laboratory, Department of Psychology, Sapienza University, Via Dei Marsi 78, 00185, Rome, Italy
- Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Valentina Sulpizio
- Brain Imaging Laboratory, Department of Psychology, Sapienza University, Via Dei Marsi 78, 00185, Rome, Italy
- Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
7. Dong B, Chen A, Gu Z, Sun Y, Zhang X, Tian X. Methods for measuring egocentric distance perception in visual modality. Front Psychol 2023; 13:1061917. PMID: 36710778; PMCID: PMC9874321; DOI: 10.3389/fpsyg.2022.1061917.
Abstract
Egocentric distance perception, the perceived distance from an observer to an object, has drawn wide attention from researchers in the field of spatial perception because of its significance in daily life. Over the years, researchers have searched for optimal ways to measure perceived distance, and these methodological contributions constitute a critical aspect of the field. This paper summarizes the methodological findings and divides the measurement methods for egocentric distance perception into three categories according to the type of behavior involved. The first is the Perceptional Method, including successive equal-appearing intervals of distance judgment, verbal report, and the perceptual distance matching task. The second is the Directed Action Method, including blind walking, blind-walking gesturing, blindfolded throwing, and blind rope pulling. The third is the Indirect Action Method, including triangulation-by-pointing and triangulation-by-walking. For each method, we summarize the procedure, core logic, scope of application, advantages, and disadvantages. Finally, we discuss future directions for research on egocentric distance perception.
Affiliation(s)
- Bo Dong
- Department of Psychology, Suzhou University of Science and Technology, Suzhou, China
- Airui Chen
- Department of Psychology, Suzhou University of Science and Technology, Suzhou, China
- Zhengyin Gu
- Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Yuan Sun
- School of Education, Suzhou University of Science and Technology, Suzhou, China
- Xiuling Zhang
- School of Psychology, Northeast Normal University, Changchun, China
- Xiaoming Tian
- Department of Psychology, Suzhou University of Science and Technology, Suzhou, China
8. Chen S, Li Y, Pan JS. Monocular Perception of Equidistance: The Effects of Viewing Experience and Motion-generated Information. Optom Vis Sci 2022; 99:470-478. PMID: 35149634; DOI: 10.1097/opx.0000000000001878.
Abstract
SIGNIFICANCE: Using static depth information, normal observers monocularly perceived equidistance with high accuracy. With dynamic depth information and/or monocular viewing experience, they also perceived it with high precision. Therefore, monocular patients, who are adapted to monocular viewing, should be able to perceive equidistance and perform related tasks.
PURPOSE: This study investigated whether normal observers could accurately and precisely perceive equidistance with one eye in different viewing environments, with various optical information and monocular viewing experience.
METHODS: Sixteen normally sighted observers monocularly perceived the distance (5 to 30 m) between a target and the self and replicated it either in hallways that contained ample static monocular depth information but a limited field of view, or on a lawn that contained less depth information but a large field of view. Participants remained stationary or walked 5 m before performing the task, as a manipulation of the availability of dynamic depth information. Eight observers wore eye patches for 3 hours before the experiment to gain monocular viewing experience, whereas the others did not. Both accuracy and precision were measured.
RESULTS: As long as static monocular depth information was available, equidistance perception was effectively accurate, despite minute underestimation. Precision was improved by prior monocular walking and/or experience with monocularity. Accuracy and precision were not affected by the viewing environments.
CONCLUSIONS: Using static and dynamic monocular depth information and/or with monocular experience, normal observers judged equidistance with reliable accuracy and precision. This implies that patients with monocular vision, who are better adapted than the participants of this study, should also be able to perceive equidistance and perform distance-dependent tasks in natural viewing environments.
Affiliation(s)
- Shenying Chen
- Department of Psychology, Sun Yat-sen University, Guangzhou, China
- Yusi Li
- Department of Psychology, Sun Yat-sen University, Guangzhou, China
9. Foley JM. Visually directed action. J Vis 2021; 21:25. PMID: 34019620; PMCID: PMC8142698; DOI: 10.1167/jov.21.5.25.
Abstract
When people throw or walk to targets in front of them without visual feedback, they often respond short. With feedback, responses rapidly become approximately accurate. To understand this, an experiment is performed with four stages. 1) The errors in blind walking and blind throwing are measured in a virtual environment in light and dark cue conditions. 2) Error feedback is introduced and the resulting learning measured. 3) Transfer to the other response is then measured. 4) Finally, responses to the perceived distances of the targets are measured. There is large initial under-responding. Feedback rapidly makes responses almost accurate. Throw training transfers completely to walking. Walk training produces a small effect on throwing. Under instructions to respond to perceived distances, under-responding recurs. The phenomena are well described by a model in which the relation between target distance and response distance is determined by a sequence of a perceptual, a cognitive, and a motor transform. Walk learning is primarily motor; throw learning is cognitive.
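The three-transform sequence described in the model can be sketched as a composition of functions. This is only an illustrative sketch under stated assumptions: the power-function form and every parameter value here are ours, not Foley's fitted model.

```python
# Sketch of the three-stage account above: response distance is a perceptual,
# then a cognitive, then a motor transform of target distance.

def power_transform(d, gain, exponent):
    """Generic transform gain * d**exponent; compressive when exponent < 1.
    (The power-function form is an assumption for illustration.)"""
    return gain * d ** exponent

def response_distance(target_d, perc=(1.0, 0.9), cog=(1.0, 0.95), motor=(0.9, 1.0)):
    """Compose the three transforms: motor(cognitive(perceptual(d))).
    All parameter pairs (gain, exponent) are hypothetical values."""
    d = power_transform(target_d, *perc)   # perceptual transform
    d = power_transform(d, *cog)           # cognitive transform
    return power_transform(d, *motor)      # motor transform

# With compressive perceptual/cognitive stages and a motor gain below 1,
# responses fall short of the target, matching the initial under-responding.
short = response_distance(10.0) < 10.0
```

On this sketch, feedback learning that is "primarily motor" for walking would adjust the motor parameters, while "cognitive" throw learning would adjust the cognitive ones, which is why throw training can transfer to walking.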
Affiliation(s)
- John M Foley
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, CA, USA
10. Li H, Mavros P, Krukar J, Hölscher C. The effect of navigation method and visual display on distance perception in a large-scale virtual building. Cogn Process 2021; 22:239-259. PMID: 33564939; PMCID: PMC8179918; DOI: 10.1007/s10339-020-01011-4.
Abstract
Immersive virtual reality (VR) technology has become a popular method for fundamental and applied spatial cognition research. One challenge researchers face is emulating walking in a large-scale virtual space although the user is in fact in a small physical space. To address this, a variety of movement interfaces in VR have been proposed, from traditional joysticks to teleportation and omnidirectional treadmills. These movement methods tap into different mental processes of spatial learning during navigation, but their impacts on distance perception remain unclear. In this paper, we investigated the role of visual display, proprioception, and optic flow on distance perception in a large-scale building by manipulating four different movement methods. Eighty participants either walked in a real building, or moved through its virtual replica using one of three movement methods: VR-treadmill, VR-touchpad, and VR-teleportation. Results revealed that, first, visual display played a major role in both perceived and traversed distance estimates but did not impact environmental distance estimates. Second, proprioception and optic flow did not impact the overall accuracy of distance perception, but having only an intermittent optic flow (in the VR-teleportation movement method) impaired the precision of traversed distance estimates. In conclusion, movement method plays a significant role in distance perception but does not impact the configurational knowledge learned in a large-scale real and virtual building, and the VR-touchpad movement method provides an effective interface for navigation in VR.
Affiliation(s)
- Hengshan Li
- Future Cities Laboratory, Singapore-ETH Centre, 1 CREATE Way, CREATE Tower, 138602, Singapore, Singapore
- Panagiotis Mavros
- Future Cities Laboratory, Singapore-ETH Centre, 1 CREATE Way, CREATE Tower, 138602, Singapore, Singapore
- Jakub Krukar
- Institute for Geoinformatics, University of Muenster, Münster, Germany
- Christoph Hölscher
- Future Cities Laboratory, Singapore-ETH Centre, 1 CREATE Way, CREATE Tower, 138602, Singapore, Singapore
- Department of Humanities, Social and Political Sciences, ETH Zürich, Zurich, Switzerland
11. Zlatkute G, de la Bastida VCS, Vishwanath D. Unimpaired perception of relative depth from perspective cues in strabismus. R Soc Open Sci 2020; 7:200955. PMID: 33489262; PMCID: PMC7813253; DOI: 10.1098/rsos.200955.
Abstract
Strabismus is a relatively common ophthalmological condition in which the coordination of the eye muscles to binocularly fixate a single point in space is impaired. This leads to deficits in vision and particularly in three-dimensional (3D) space perception. The exact nature of the deficits in 3D perception is poorly understood, as much of our understanding has relied on anecdotal reports or conjecture. Here, we investigated, for the first time, the perception of relative depth, comparing strabismic and typically developed binocular observers. Specifically, we assessed the susceptibility to the depth cue of perspective convergence as well as the capacity to use this cue to make accurate judgements of relative depth. Susceptibility was measured by examining a 3D bias in making two-dimensional (2D) interval-equidistance judgements, and accuracy was measured by examining 3D interval-equidistance judgements. We tested both monocular and binocular viewing of images of perspective scenes under two different psychophysical methods: two-alternative forced choice (2AFC) and the method of adjustment. The biasing effect of perspective information on the 2D judgements (3D cue susceptibility) was highly significant and comparable for both subject groups in both psychophysical tasks (all ps < 0.001), with no statistically significant difference found between the two groups. Both groups showed an underestimation in the 3D task, with no significant difference between the groups' judgements in the 2AFC task, but a small statistically significant difference (ratio difference of approx. 10%, p = 0.016) in the method of adjustment task. A small but significant effect of viewing condition (monocular versus binocular) was revealed only in the non-strabismic group (ratio difference of approx. 6%, p = 0.002). Our results show that both the automatic susceptibility to, and the accuracy in the use of, the perspective convergence cue in strabismus are largely comparable to those found in typically developed binocular vision, with implications for the nature of the encoding of depth in the human visual system.
Affiliation(s)
- Giedre Zlatkute
- School of Psychology and Neuroscience, University of St Andrews, St Mary's Quad, St Andrews, Fife KY16 9JP, UK
- Dhanraj Vishwanath
- School of Psychology and Neuroscience, University of St Andrews, St Mary's Quad, St Andrews, Fife KY16 9JP, UK
12. Clément G, Bukley A, Loureiro N, Lindblad L, Sousa D, Zandvilet A. Horizontal and Vertical Distance Perception in Altered Gravity. Sci Rep 2020; 10:5471. PMID: 32214172; PMCID: PMC7096486; DOI: 10.1038/s41598-020-62405-0.
Abstract
The perception of the horizontal and vertical distances of a visual target to an observer was investigated in parabolic flight during alternating short periods of normal gravity (1 g), microgravity (0 g), and hypergravity (1.8 g). The methods used for obtaining absolute judgments of egocentric distance included verbal reports and visually directed motion toward a memorized visual target by pulling on a rope with the arms (blind pulling). The results showed that, for all gravity levels, the verbal reports of distance judgments were accurate for targets located between 0.6 and 6.0 m. During blind pulling, subjects underestimated horizontal distances as distances increased, and this underestimation decreased in 0 g. Vertical distances for up targets were overestimated and vertical distances for down targets were underestimated in both 1 g and 1.8 g. This vertical asymmetry was absent in 0 g. The results of the present study confirm that blind pulling and verbal reports are independently influenced by gravity. The changes in distance judgments during blind pulling in 0 g compared to 1 g support the view that, during an action-based task, subjects base their perception of distance on the estimated motor effort of navigating to the perceived object.
Affiliation(s)
- Angie Bukley
- International Space University Org., Inc., Webster, Massachusetts, USA
- Nuno Loureiro
- Champalimaud Research, Champalimaud Centre for the Unknown, Lisbon, Portugal
- André Zandvilet
- European Space Research and Technology Center, Noordwijk, The Netherlands
13. Distance perception warped by social relations: Social interaction information compresses distance. Acta Psychol (Amst) 2020; 202:102948. PMID: 31751830; DOI: 10.1016/j.actpsy.2019.102948.
Abstract
Though distance perception provides the fundamental input from which a visual structure of the world is constructed, it has been suggested that distance perception is in turn constrained by this constructed structure. Instead of focusing on physically defined structure, this study investigates whether and how social relations, especially the quality of social interaction (how individuals interact) rather than its content (what type of social interaction), influence distance perception. The quality of social interaction was framed as an actor's intent and the incurred outcome regarding another individual, whether helpful or harmful. Through visual animations, intent was operationalized as an agent's (i.e., actor's) intentional or unintentional act having an influence on another agent (i.e., affectee). Two experiments were conducted. In Experiment 1, the act was helpful, resulting in small or great beneficial consequences to the affectee. In Experiment 2, the act was harmful and resulted in small or great losses to the affectee. We found that when the help or harm had a large effect on others (the great-benefits or great-losses conditions), distance was perceived as shorter than when the help or harm was minor, and the actor's intent did not affect distance perception. This suggests that, regardless of the type of social interaction, distance perception is mainly influenced by the outcome of an act, not by the actor's intent. It implies that the perceived quality of social interaction creates a social constraint on distance perception. These findings are consistent with the idea that the intent and outcome of an action are assessed differently, and they help us understand how social relations penetrate the perceptual system.
14. Blouin J, Saradjian AH, Pialasse JP, Manson GA, Mouchnino L, Simoneau M. Two Neural Circuits to Point Towards Home Position After Passive Body Displacements. Front Neural Circuits 2019; 13:70. PMID: 31736717; PMCID: PMC6831616; DOI: 10.3389/fncir.2019.00070.
Abstract
A challenge in motor control research is to understand the mechanisms underlying the transformation of sensory information into arm motor commands. Here, we investigated these transformation mechanisms for movements whose targets were defined by information derived from body rotations in the dark (i.e., idiothetic information). Immediately after being rotated, participants reproduced the amplitude of their perceived rotation using their arm (Experiment 1). The cortical activation during movement planning was analyzed using electroencephalography and source analyses. Task-related activities were found in regions of interest (ROIs) located in the prefrontal cortex (PFC), dorsal premotor cortex, dorsal region of the anterior cingulate cortex (ACC), and the sensorimotor cortex. Importantly, regions critical for the cognitive encoding of space did not show significant task-related activities. These results suggest that arm movements were planned using a sensorimotor type of spatial representation. However, when an 8 s delay was introduced between the body rotation and the arm movement (Experiment 2), we found that areas involved in the cognitive encoding of space [e.g., ventral premotor cortex (vPM), rostral ACC, inferior and superior posterior parietal cortex (PPC)] showed task-related activities. Overall, our results suggest that the use of a cognitive type of representation for planning arm movement after body motion is necessary when relevant spatial information must be stored before triggering the movement.
Affiliation(s)
- Jean Blouin
- Aix-Marseille Univ, CNRS, Laboratoire de Neurosciences Cognitives, Marseille, France
- Anahid H Saradjian
- Aix-Marseille Univ, CNRS, Laboratoire de Neurosciences Cognitives, Marseille, France
- Gerome A Manson
- Aix-Marseille Univ, CNRS, Laboratoire de Neurosciences Cognitives, Marseille, France
- Centre for Motor Control, University of Toronto, Toronto, ON, Canada
- Laurence Mouchnino
- Aix-Marseille Univ, CNRS, Laboratoire de Neurosciences Cognitives, Marseille, France
- Martin Simoneau
- Faculté de Médecine, Département de Kinésiologie, Université Laval, Québec, QC, Canada
- Centre Interdisciplinaire de Recherche en Réadaptation et Intégration Sociale (CIRRIS), Québec, QC, Canada
15
Caramenti M, Pretto P, Lafortuna CL, Bresciani JP, Dubois A. Influence of the Size of the Field of View on Visual Perception While Running in a Treadmill-Mediated Virtual Environment. Front Psychol 2019; 10:2344. [PMID: 31681123 PMCID: PMC6812648 DOI: 10.3389/fpsyg.2019.02344] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2019] [Accepted: 10/01/2019] [Indexed: 11/13/2022] Open
Abstract
We investigated how the size of the horizontal field of view (FoV) affects visual speed perception in individuals running on a treadmill. Twelve moderately trained to trained participants ran on a treadmill at two different speeds (8 and 12 km/h) in front of a moving virtual scene. Different masks were used to manipulate the visible visual field, masking either the central or the peripheral area of the virtual scene or showing the full visual field. We asked participants to match the visual speed of the scene to their actual running speed. For each trial, participants indicated whether the scene was moving faster or slower than they were running. Visual speed was adjusted according to the responses using a staircase method until the Point of Subjective Equality was reached, that is, until visual and running speed were perceived as matching. For both speeds and all FoV conditions, participants underestimated visual speed relative to the actual running speed. However, this underestimation was significant only when the peripheral FoV was masked. These results confirm that the size of the FoV must be taken into account in the design of treadmill-mediated virtual environments (VEs).
Affiliation(s)
- Martina Caramenti
- Department of Neurosciences and Movement Sciences, University of Fribourg, Fribourg, Switzerland
- Istituto di Bioimmagini e Fisiologia Molecolare, Consiglio Nazionale delle Ricerche, Segrate, Italy
- HumanTech Institute, University of Applied Sciences and Arts Western Switzerland, Fribourg, Switzerland
- Claudio L Lafortuna
- Istituto di Fisiologia Clinica, Consiglio Nazionale delle Ricerche, Milan, Italy
- Jean-Pierre Bresciani
- Department of Neurosciences and Movement Sciences, University of Fribourg, Fribourg, Switzerland
- LPNC, University Grenoble Alpes, Grenoble, France
- Amandine Dubois
- Department of Neurosciences and Movement Sciences, University of Fribourg, Fribourg, Switzerland
- Université de Lorraine, 2LPN-CEMA Group (Cognition-EMotion-Action), EA 7489, Metz, France
16
Tuhkanen S, Pekkanen J, Rinkkala P, Mole C, Wilkie RM, Lappi O. Humans Use Predictive Gaze Strategies to Target Waypoints for Steering. Sci Rep 2019; 9:8344. [PMID: 31171850 PMCID: PMC6554351 DOI: 10.1038/s41598-019-44723-0] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2018] [Accepted: 05/15/2019] [Indexed: 12/22/2022] Open
Abstract
A major unresolved question in understanding visually guided locomotion in humans is whether actions are driven solely by the immediately available optical information (model-free online control mechanisms), or whether internal models have a role in anticipating the future path. We designed two experiments to investigate this issue, measuring spontaneous gaze behaviour while steering, and predictive gaze behaviour when future path information was withheld. In Experiment 1 participants (N = 15) steered along a winding path with rich optic flow: gaze patterns were consistent with tracking waypoints on the future path 1–3 s ahead. In Experiment 2, participants (N = 12) followed a path presented only in the form of visual waypoints located on an otherwise featureless ground plane. New waypoints appeared periodically every 0.75 s and predictably 2 s ahead, except that in 25% of cases the waypoint at the expected location was not displayed. In these cases, there were always other visible waypoints for the participant to fixate, yet participants continued to make saccades to the empty, but predictable, waypoint locations (in line with internal models of the future path guiding gaze fixations). This behaviour is not predicted by existing model-free online steering control models, and strongly points to the need for models of steering control to include mechanisms for predictive gaze control that support anticipatory path following behaviours.
Affiliation(s)
- Samuel Tuhkanen
- Cognitive Science, Department of Digital Humanities & Helsinki Centre for Digital Humanities (Heldig), University of Helsinki, Helsinki, Finland
- TRUlab, University of Helsinki, Helsinki, Finland
- Jami Pekkanen
- Cognitive Science, Department of Digital Humanities & Helsinki Centre for Digital Humanities (Heldig), University of Helsinki, Helsinki, Finland
- TRUlab, University of Helsinki, Helsinki, Finland
- Paavo Rinkkala
- Cognitive Science, Department of Digital Humanities & Helsinki Centre for Digital Humanities (Heldig), University of Helsinki, Helsinki, Finland
- TRUlab, University of Helsinki, Helsinki, Finland
- Callum Mole
- School of Psychology, University of Leeds, Leeds, UK
- Otto Lappi
- Cognitive Science, Department of Digital Humanities & Helsinki Centre for Digital Humanities (Heldig), University of Helsinki, Helsinki, Finland
- TRUlab, University of Helsinki, Helsinki, Finland
17
Tan DS, Yao CY, Ruiz C, Hua KL. Single-Image Depth Inference Using Generative Adversarial Networks. Sensors (Basel, Switzerland) 2019; 19:E1708. [PMID: 30974774 PMCID: PMC6480060 DOI: 10.3390/s19071708] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/16/2019] [Revised: 04/01/2019] [Accepted: 04/08/2019] [Indexed: 11/17/2022]
Abstract
Depth has been a valuable piece of information for perception tasks such as robot grasping, obstacle avoidance, and navigation, which are essential tasks for developing smart homes and smart cities. However, not all applications have the luxury of using depth sensors or multiple cameras to obtain depth information. In this paper, we tackle the problem of estimating the per-pixel depths from a single image. Inspired by recent work on generative neural network models, we formulate the task of depth estimation as a generative task where we synthesize an image of the depth map from a single Red, Green, and Blue (RGB) input image. We propose a novel generative adversarial network that has an encoder-decoder type generator with residual transposed convolution blocks trained with an adversarial loss. Quantitative and qualitative experimental results demonstrate the effectiveness of our approach over several existing depth-estimation methods.
Affiliation(s)
- Daniel Stanley Tan
- Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, Taipei 10607, Taiwan.
- Chih-Yuan Yao
- Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, Taipei 10607, Taiwan
- Conrado Ruiz
- Software Technology Department, De La Salle University, Manila 1004, Philippines
- Kai-Lung Hua
- Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, Taipei 10607, Taiwan
- Center for Cyber-Physical System Innovation, National Taiwan University of Science and Technology, Taipei 10607, Taiwan
18
Caramenti M, Lafortuna CL, Mugellini E, Abou Khaled O, Bresciani JP, Dubois A. Matching optical flow to motor speed in virtual reality while running on a treadmill. PLoS One 2018; 13:e0195781. [PMID: 29641564 PMCID: PMC5895071 DOI: 10.1371/journal.pone.0195781] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2017] [Accepted: 03/29/2018] [Indexed: 11/19/2022] Open
Abstract
We investigated how visual and kinaesthetic/efferent information is integrated for speed perception in running. Twelve moderately trained to trained subjects ran on a treadmill at three different speeds (8, 10, 12 km/h) in front of a moving virtual scene. They were asked to match the visual speed of the scene to their running speed, i.e., the treadmill's speed. For each trial, participants indicated whether the scene was moving slower or faster than they were running. Visual speed was adjusted according to their response using a staircase procedure until the Point of Subjective Equality (PSE) was reached, i.e., until visual and running speed were perceived as equivalent. For all three running speeds, participants systematically underestimated the visual speed relative to their actual running speed. Indeed, the speed of the visual scene had to exceed the actual running speed in order to be perceived as equivalent to the treadmill speed. The underestimation of visual speed was speed-dependent, and the percentage of underestimation relative to running speed ranged from 15% at 8 km/h to 31% at 12 km/h. We suggest that this fact should be taken into consideration to improve the design of attractive treadmill-mediated virtual environments that enhance engagement in physical activity for healthier lifestyles and disease prevention and care.
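The adaptive adjustment procedure this abstract describes can be sketched as a simple 1-up/1-down staircase converging on the Point of Subjective Equality. The observer model, starting level, step sizes, and stopping rule below are illustrative assumptions, not the authors' exact protocol.

```python
def run_staircase(respond, start=10.0, step=1.0, min_step=0.25, reversals_needed=8):
    """Estimate the PSE. respond(visual_speed) -> True if the scene is
    judged faster than the running speed, False otherwise."""
    level = start
    direction = 0
    reversals = []
    while len(reversals) < reversals_needed:
        faster = respond(level)
        new_dir = -1 if faster else +1   # judged faster -> decrease visual speed
        if direction and new_dir != direction:
            reversals.append(level)              # record each reversal level
            step = max(min_step, step / 2)       # shrink step after a reversal
        direction = new_dir
        level += new_dir * step
    return sum(reversals) / len(reversals)       # PSE: mean of reversal levels

# Toy deterministic observer whose PSE is 14 km/h while running at 12 km/h,
# i.e., visual speed must exceed running speed to feel matched (underestimation).
pse = run_staircase(lambda v: v > 14.0)
```

With this toy observer the staircase oscillates around 14 km/h, so the returned PSE lies above the simulated 12 km/h running speed, mirroring the reported underestimation.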
Affiliation(s)
- Martina Caramenti
- Department of Neuroscience and Movement Science, University of Fribourg, Fribourg, Switzerland
- Istituto di Bioimmagini e Fisiologia Molecolare, Consiglio Nazionale delle Ricerche, Segrate, Milano, Italy
- HumanTech Institute, University of Applied Sciences and Arts Western Switzerland, Fribourg, Switzerland
- Claudio L. Lafortuna
- Istituto di Bioimmagini e Fisiologia Molecolare, Consiglio Nazionale delle Ricerche, Segrate, Milano, Italy
- Elena Mugellini
- HumanTech Institute, University of Applied Sciences and Arts Western Switzerland, Fribourg, Switzerland
- Omar Abou Khaled
- HumanTech Institute, University of Applied Sciences and Arts Western Switzerland, Fribourg, Switzerland
- Jean-Pierre Bresciani
- Department of Neuroscience and Movement Science, University of Fribourg, Fribourg, Switzerland
- Amandine Dubois
- Department of Neuroscience and Movement Science, University of Fribourg, Fribourg, Switzerland
19
Warren WH, Rothman DB, Schnapp BH, Ericson JD. Wormholes in virtual space: From cognitive maps to cognitive graphs. Cognition 2017; 166:152-163. [PMID: 28577445 DOI: 10.1016/j.cognition.2017.05.020] [Citation(s) in RCA: 70] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2016] [Revised: 05/10/2017] [Accepted: 05/14/2017] [Indexed: 02/02/2023]
Abstract
Humans and other animals build up spatial knowledge of the environment on the basis of visual information and path integration. We compare three hypotheses about the geometry of this knowledge of navigation space: (a) 'cognitive map' with metric Euclidean structure and a consistent coordinate system, (b) 'topological graph' or network of paths between places, and (c) 'labelled graph' incorporating local metric information about path lengths and junction angles. In two experiments, participants walked in a non-Euclidean environment, a virtual hedge maze containing two 'wormholes' that visually rotated and teleported them between locations. During training, they learned the metric locations of eight target objects, each visible individually, from a 'home' location. During testing, shorter wormhole routes to a target were preferred, and novel shortcuts were directional, contrary to the topological hypothesis. Shortcuts were strongly biased by the wormholes, with mean constant errors of 37° and 41° (45° expected), revealing violations of the metric postulates in spatial knowledge. In addition, shortcuts to targets near wormholes shifted relative to flanking targets, revealing 'rips' (86% of cases), 'folds' (91%), and ordinal reversals (66%) in spatial knowledge. Moreover, participants were completely unaware of these geometric inconsistencies, reflecting a surprising insensitivity to Euclidean structure. The probability of the shortcut data under the Euclidean map model and labelled graph model indicated decisive support for the latter (BF_GM > 100). We conclude that knowledge of navigation space is best characterized by a labelled graph, in which local metric information is approximate, geometrically inconsistent, and not embedded in a common coordinate system. This class of 'cognitive graph' models supports route finding, novel detours, and rough shortcuts, and has the potential to unify a range of data on spatial navigation.
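The 'labelled graph' representation the abstract argues for can be illustrated with a minimal data structure: places as nodes, paths as edges carrying approximate local metric labels (length and bearing), with no global coordinate system. This is a hypothetical sketch for illustration, not the authors' model; the class name and labels are invented.

```python
class CognitiveGraph:
    """Places linked by locally labelled paths; labels need not be
    mutually consistent, so no Euclidean embedding is implied."""

    def __init__(self):
        self.edges = {}  # place -> list of (neighbor, length, bearing_deg)

    def add_path(self, a, b, length, bearing_deg):
        # Store local labels in both directions; reverse bearing is flipped 180°.
        self.edges.setdefault(a, []).append((b, length, bearing_deg))
        self.edges.setdefault(b, []).append((a, length, (bearing_deg + 180) % 360))

    def route_length(self, places):
        # Sum edge labels along a known route; supports route finding and
        # rough shortcuts even when labels violate the metric postulates.
        total = 0.0
        for a, b in zip(places, places[1:]):
            total += next(l for n, l, _ in self.edges[a] if n == b)
        return total

g = CognitiveGraph()
g.add_path("home", "A", 10.0, 0.0)
g.add_path("A", "B", 10.0, 90.0)
# A 'wormhole' edge can carry a label impossible in a Euclidean triangle:
g.add_path("B", "home", 2.0, 225.0)
```

Comparing `g.route_length(["home", "A", "B"])` with the wormhole edge `g.route_length(["home", "B"])` shows how such a graph can prefer a geometrically inconsistent shortcut, in the spirit of the wormhole results.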
Affiliation(s)
- William H Warren
- Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Box 1821, 190 Thayer St., Providence, RI 02912, USA.
- Daniel B Rothman
- Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Box 1821, 190 Thayer St., Providence, RI 02912, USA
- Benjamin H Schnapp
- Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Box 1821, 190 Thayer St., Providence, RI 02912, USA
- Jonathan D Ericson
- Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Box 1821, 190 Thayer St., Providence, RI 02912, USA
20
What a car does to your perception: Distance evaluations differ from within and outside of a car. Psychon Bull Rev 2017; 23:781-8. [PMID: 26428670 DOI: 10.3758/s13423-015-0954-9] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Almost a century ago it was first suggested that cars can be interpreted as tools, but consequences of this assumption were never tested. Research on hand-held tools that are used to manipulate objects in the environment suggests that perception of near space is extended by using tools. Literature on environment perception finds perception of far space to be modulated by the observer's potential to act in the environment. Here we argue that a car increases the action potential and modulates perception of far space in a way similar to how hand-held tools modulate perception of near space. Five distances (4 to 20 meters) were estimated by pedestrians and drivers before and after driving/walking. Drivers underestimated all distances by a larger percentage than did pedestrians. Underestimation was even stronger after driving. We conclude that cars modulate the perception of far distances because they modulate the driver's perception, like a tool typically does, and change the perceived action potential.
21
Netzel R, Hlawatsch M, Burch M, Balakrishnan S, Schmauder H, Weiskopf D. An Evaluation of Visual Search Support in Maps. IEEE Transactions on Visualization and Computer Graphics 2017; 23:421-430. [PMID: 27875158 DOI: 10.1109/tvcg.2016.2598898] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Visual search can be time-consuming, especially if the scene contains a large number of possibly relevant objects. An instance of this problem is present when using geographic or schematic maps with many different elements representing cities, streets, sights, and the like. Unless the map is well-known to the reader, the full map or at least large parts of it must be scanned to find the elements of interest. In this paper, we present a controlled eye-tracking study (30 participants) to compare four variants of map annotation with labels: within-image annotations, grid reference annotation, directional annotation, and miniature annotation. Within-image annotation places labels directly within the map without any further search support. Grid reference annotation corresponds to the traditional approach known from atlases. Directional annotation utilizes a label in combination with an arrow pointing in the direction of the label within the map. Miniature annotation shows a miniature grid to guide the reader to the area of the map in which the label is located. The study results show that within-image annotation is outperformed by all other annotation approaches. Best task completion times are achieved with miniature annotation. The analysis of eye-movement data reveals that participants applied significantly different visual task solution strategies for the different visual annotations.
22
Zhou L, Ooi TL, He ZJ. Intrinsic spatial knowledge about terrestrial ecology favors the tall for judging distance. Science Advances 2016; 2:e1501070. [PMID: 27602402 PMCID: PMC5007070 DOI: 10.1126/sciadv.1501070] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/10/2015] [Accepted: 08/02/2016] [Indexed: 06/06/2023]
Abstract
Our sense of vision reliably directs and guides our everyday actions, such as reaching and walking. This ability is especially fascinating because the optical images of natural scenes that project into our eyes are insufficient to adequately form a perceptual space. It has been proposed that the brain makes up for this inadequacy by using its intrinsic spatial knowledge. However, it is unclear what constitutes intrinsic spatial knowledge and how it is acquired. We investigated this question and showed evidence of an ecological basis, which uses the statistical spatial relationship between the observer and the terrestrial environment, namely, the ground surface. We found that in dark and reduced-cue environments where intrinsic knowledge has a greater contribution, perceived target location is more accurate when referenced to the ground than to the ceiling. Furthermore, taller observers more accurately localized the target. Superior performance was also observed in the full-cue environment, even when we compensated for the observers' heights by having taller observers sit on a chair and shorter observers stand on a box. Although fascinating, this finding dovetails with the prediction of the ecological hypothesis for intrinsic spatial knowledge. It suggests that an individual's accumulated lifetime experiences of being tall and his or her constant interactions with ground-based objects not only determine intrinsic spatial knowledge but also endow him or her with an advantage in spatial ability in the intermediate distance range.
Affiliation(s)
- Liu Zhou
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Science and Technology Commission of Shanghai Municipality), Institute of Cognitive Neurosciences, School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
- Teng Leng Ooi
- College of Optometry, Ohio State University, Columbus, OH 43210, USA
- Zijiang J. He
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Science and Technology Commission of Shanghai Municipality), Institute of Cognitive Neurosciences, School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, KY 40292, USA
23
Clément G, Loureiro N, Sousa D, Zandvliet A. Perception of Egocentric Distance during Gravitational Changes in Parabolic Flight. PLoS One 2016; 11:e0159422. [PMID: 27463106 PMCID: PMC4963113 DOI: 10.1371/journal.pone.0159422] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2016] [Accepted: 07/01/2016] [Indexed: 12/05/2022] Open
Abstract
We explored the effect of gravity on the perceived representation of the absolute distance of objects to the observers within the range of 1.5 to 6 m. Experiments were performed on board the CNES Airbus Zero-G during parabolic flights eliciting repeated exposures to short periods of microgravity (0 g), hypergravity (1.8 g), and normal gravity (1 g). Two methods for obtaining estimates of perceived egocentric distance were used: verbal reports and visually directed motion toward a memorized visual target. For the latter method, because normal walking is not possible in 0 g, blindfolded subjects translated toward the visual target by pulling on a rope with their arms. The results showed that distance estimates using both verbal reports and blind pulling were significantly different between normal gravity, microgravity, and hypergravity. Compared to the 1 g measurements, the estimates of perceived distance using blind pulling were shorter for all distances in 1.8 g, whereas in 0 g they were longer for distances up to 4 m and shorter for distances beyond. These findings suggest that gravity plays a role in both the sensorimotor system and the perceptual/cognitive system for estimating egocentric distance.
Affiliation(s)
- Nuno Loureiro
- Champalimaud Neuroscience Programme, Champalimaud Centre for the Unknown, Lisbon, Portugal
- Duarte Sousa
- International Space University, Strasbourg, France
- Andre Zandvliet
- European Space Research and Technology Center, European Space Agency, Noordwijk, The Netherlands
24
Lappi O. Eye movements in the wild: Oculomotor control, gaze behavior & frames of reference. Neurosci Biobehav Rev 2016; 69:49-68. [PMID: 27461913 DOI: 10.1016/j.neubiorev.2016.06.006] [Citation(s) in RCA: 34] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2015] [Revised: 05/14/2016] [Accepted: 06/08/2016] [Indexed: 11/19/2022]
Abstract
Understanding the brain's capacity to encode complex visual information from a scene and to transform it into a coherent perception of 3D space and into well-coordinated motor commands are among the outstanding questions in the study of integrative brain function. Eye movement methodologies have allowed us to begin addressing these questions in increasingly naturalistic tasks, where eye and body movements are ubiquitous and, therefore, the applicability of most traditional neuroscience methods restricted. This review explores foundational issues in (1) how oculomotor and motor control in lab experiments extrapolates into more complex settings and (2) how real-world gaze behavior in turn decomposes into more elementary eye movement patterns. We review the received typology of oculomotor patterns in laboratory tasks, and how they map onto naturalistic gaze behavior (or not). We discuss the multiple coordinate systems needed to represent visual gaze strategies, how the choice of reference frame affects the description of eye movements, and the related but conceptually distinct issue of coordinate transformations between internal representations within the brain.
Affiliation(s)
- Otto Lappi
- Cognitive Science, Institute of Behavioural Sciences, PO BOX 9, 00014 University of Helsinki, Finland.
25
He ZJ, Wu B, Ooi TL, Yarbrough G, Wu J. Judging Egocentric Distance on the Ground: Occlusion and Surface Integration. Perception 2016; 33:789-806. [PMID: 15460507 DOI: 10.1068/p5256a] [Citation(s) in RCA: 73] [Impact Index Per Article: 8.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
On the basis of the finding that a common and homogeneous ground surface is vital for accurate egocentric distance judgments (Sinai et al., 1998, Nature 395, 497–500), we propose a sequential-surface-integration-process (SSIP) hypothesis to elucidate how the visual system constructs a representation of the ground-surface in the intermediate distance range. According to the SSIP hypothesis, a near ground-surface representation is formed from near depth cues, and is utilized as an anchor to integrate the more distant surfaces by using texture-gradient information as the depth cue. The SSIP hypothesis provides an explanation for the finding that egocentric distance judgment is underestimated when a texture boundary exists on the ground surface that commonly supports the observer and target. We tested the prediction that the fidelity of the visually represented ground-surface reference frame depends on how the visual system selects the surface information for integration. Specifically, if information is selected along a direct route between the observer and target where the ground surface is disrupted by an occluding object, the ground surface will be inaccurately represented. In experiments 1–3 we used a perceptual task and two different visually directed tasks to show that this leads to egocentric distance underestimation. Judgment is accurate, however, when the observer selects the continuous ground information bypassing the occluding object (indirect route), as found in experiments 4 and 5 with a visually directed task. Altogether, our findings provide support for the SSIP hypothesis and reveal, surprisingly, that the phenomenal visual space is not unique but depends on how optic information is selected.
Affiliation(s)
- Zijiang J He
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, KY 40292, USA.
26
Ooi TL, Wu B, He ZJ. Perceptual Space in the Dark Affected by the Intrinsic Bias of the Visual System. Perception 2016; 35:605-24. [PMID: 16836053 DOI: 10.1068/p5492] [Citation(s) in RCA: 50] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
Correct judgment of egocentric/absolute distance in the intermediate distance range requires both the angular declination below the horizon and ground-surface information being represented accurately. This requirement can be met in the light environment but not in the dark, where the ground surface is invisible and hence cannot be represented accurately. We previously showed that a target in the dark is judged at the intersection of the projection line from the eye to the target that defines the angular declination below the horizon and an implicit surface. The implicit surface can be approximated as a slant surface with its far end slanted toward the frontoparallel plane. We hypothesize that the implicit slant surface reflects the intrinsic bias of the visual system and helps to define the perceptual space. Accordingly, we conducted two experiments in the dark to further elucidate the characteristics of the implicit slant surface. In the first experiment we measured the egocentric location of a dimly lit target on, or above, the ground, using the blind-walking-gesturing paradigm. Our results reveal that the judged target locations could be fitted by a line (surface), which indicates an intrinsic bias with a geographical slant of about 12.4°. In the second experiment, with an exocentric/relative-distance task, we measured the judged ratio of aspect ratio of a fluorescent L-shaped target. Using trigonometric analysis, we found that the judged ratio of aspect ratio can be accounted for by assuming that the L-shaped target was perceived on an implicit slant surface with an average geographical slant of 14.4°. That the data from the two experiments with different tasks can be fitted by implicit slant surfaces suggests that the intrinsic bias has a role in determining perceived space in the dark. The possible contribution of the intrinsic bias to representing the ground surface and its impact on space perception in the light environment are also discussed.
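The geometry described in this abstract lends itself to a small worked calculation: the perceived target location is taken as the intersection of the line of sight from the eye (at the measured angular declination) with an implicit surface slanted upward at the intrinsic-bias angle. The eye height, the choice of anchoring the surface at the observer's feet, and the specific slant value are illustrative assumptions consistent with the reported ~12.4° fit, not the authors' exact model.

```python
import math

def perceived_distance(eye_height, true_distance, bias_slant_deg=12.4):
    """Horizontal distance to where the line of sight meets an implicit
    surface rising from the observer's feet at bias_slant_deg.

    Assumes a target on flat ground, so the angular declination below
    the horizon satisfies tan(declination) = eye_height / true_distance.
    """
    tan_decl = eye_height / true_distance
    tan_bias = math.tan(math.radians(bias_slant_deg))
    # Eye at (0, h); sight line y = h - x*tan_decl meets the bias surface
    # y = x*tan_bias at x = h / (tan_decl + tan_bias).
    return eye_height / (tan_decl + tan_bias)

d_far = perceived_distance(eye_height=1.6, true_distance=6.0)
d_near = perceived_distance(eye_height=1.6, true_distance=1.0)
```

Because the implicit surface rises toward the frontoparallel plane, the intersection falls short of the true target location, and proportionally more so for far targets, which matches the distance underestimation the intrinsic-bias account predicts.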
Affiliation(s)
- Teng Leng Ooi
- Department of Basic Sciences, Pennsylvania College of Optometry, 8360 Old York Road, Elkins Park, PA 19027, USA.
27
Wu J, He ZJ, Ooi TL. Visually Perceived Eye Level and Horizontal Midline of the Body Trunk Influenced by Optic Flow. Perception 2016; 34:1045-60. [PMID: 16245484 DOI: 10.1068/p5416] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Abstract
The eye level and the horizontal midline of the body trunk can serve, respectively, as references for judging the vertical and horizontal egocentric directions. We investigated whether the optic-flow pattern, which is the dynamic motion information generated when one moves in the visual world, can be used by the visual system to determine and calibrate these two references. Using a virtual-reality setup to generate the optic-flow pattern, we showed that judged elevation of the eye level and the azimuth of the horizontal midline of the body trunk are biased toward the positional placement of the focus of expansion (FOE) of the optic-flow pattern. Furthermore, for the vertical reference, prolonged viewing of an optic-flow pattern with lowered FOE not only causes a lowered judged eye level after removal of the optic-flow pattern, but also an overestimation of distance in the dark. This is equivalent to a reduction in the judged angular declination of the object after adaptation, indicating that the optic-flow information also plays a role in calibrating the extraretinal signals used to establish the vertical reference.
Affiliation(s)
- Jun Wu
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, KY 40292, USA
28
Abstract
Distance perception seems to be an incredible achievement if it is construed as being based solely on static retinal images. Information provided by such images is sparse at best. On the other hand, when the perceptual context is taken to be one in which people are acting in natural environments, the informational bases for distance perception become abundant. There are, however, surprising consequences of studying people in action. Nonvisual factors, such as people's goals and physiological states, also influence their distance perceptions. Although the informational specification of distance becomes redundant when people are active, paradoxically, many distance-related actions sidestep the need to perceive distance at all.
29
Saulton A, Longo MR, Wong HY, Bülthoff HH, de la Rosa S. The role of visual similarity and memory in body model distortions. Acta Psychol (Amst) 2016; 164:103-11. [PMID: 26783695 DOI: 10.1016/j.actpsy.2015.12.013] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2015] [Revised: 11/23/2015] [Accepted: 12/31/2015] [Indexed: 12/27/2022] Open
Abstract
Several studies have shown that the perception of one's own hand size is distorted in proprioceptive localization tasks. It has been suggested that those distortions mirror somatosensory anisotropies. Recent research suggests that non-corporeal items also show some spatial distortions. In order to investigate the psychological processes underlying the localization task, we investigated the influences of visual similarity and memory on distortions observed on corporeal and non-corporeal items. In experiment 1, participants indicated the location of landmarks on: their own hand, a rubber hand (rated as most similar to the real hand), and a rake (rated as least similar to the real hand). Results show no significant differences between rake and rubber hand distortions but both items were significantly less distorted than the hand. Experiments 2 and 3 explored the role of memory in spatial distance judgments of the hand, the rake and the rubber hand. Spatial representations of items measured in experiments 2 and 3 were also distorted but showed the tendency to be smaller than in localization tasks. While memory and visual similarity seem to contribute to explain qualitative similarities in distortions between the hand and non-corporeal items, those factors cannot explain the larger magnitude observed in hand distortions.
30
Pulling out all the stops to make the distance: Effects of effort and optical information in distance perception responses made by rope pulling. Atten Percept Psychophys 2015; 78:685-99. [DOI: 10.3758/s13414-015-1035-x]
31
Gestalt-like constraints produce veridical (Euclidean) percepts of 3D indoor scenes. Vision Res 2015; 126:264-277. [PMID: 26525845] [DOI: 10.1016/j.visres.2015.09.011]
Abstract
This study, strongly influenced by Gestalt ideas, extends our prior work on the role of a priori constraints in the veridical perception of 3D shapes to the perception of 3D scenes. Our experiments tested how human subjects perceive the layout of a naturally-illuminated indoor scene that contains common symmetrical 3D objects standing on a horizontal floor. In one task, the subject was asked to draw a top view of a scene that was viewed either monocularly or binocularly. The top views the subjects reconstructed were configured accurately except for their overall size. These size errors varied from trial to trial and were shown most likely to result from a response bias. There was little, if any, evidence of systematic distortions of the subjects' perceived visual space, the kind of distortions that have been reported in numerous experiments run under very unnatural conditions. Having shown this, we used Foley's (Vision Research, 12 (1972), 323-332) isosceles right-triangle experiment to test the intrinsic geometry of visual space directly. This was done with natural viewing, with the impoverished viewing conditions Foley had used, and with a number of intermediate viewing conditions. Our subjects produced very accurate triangles when the viewing conditions were natural, but their performance deteriorated systematically as the viewing conditions were progressively impoverished: their perception of visual space became more compressed as the natural visual environment was degraded. We then developed a computational model that emulated the most salient features of our psychophysical results. We conclude that human observers see 3D scenes veridically when they view natural 3D objects within natural 3D environments.
32
Geuss MN, Stefanucci JK, Creem-Regehr SH, Thompson WB, Mohler BJ. Effect of Display Technology on Perceived Scale of Space. Hum Factors 2015; 57:1235-1247. [PMID: 26060237] [DOI: 10.1177/0018720815590300]
Abstract
OBJECTIVE Our goal was to evaluate the degree to which display technologies influence the perception of size in an image. BACKGROUND Research suggests that factors such as whether an image is displayed stereoscopically, whether a user's viewpoint is tracked, and the field of view of a given display can affect users' perception of scale in the displayed image. METHOD Participants directly estimated the size of a gap by matching the distance between their hands to the gap width and judged their ability to pass unimpeded through the gap in one of five common implementations of three display technologies (two head-mounted displays [HMD] and a back-projection screen). RESULTS Both measures of gap width were similar for the two HMD conditions and the back projection with stereo and tracking. For the displays without tracking, stereo and monocular conditions differed from each other, with monocular viewing showing underestimation of size. CONCLUSIONS Display technologies that are capable of stereoscopic display and tracking of the user's viewpoint are beneficial as perceived size does not differ from real-world estimates. Evaluations of different display technologies are necessary as display conditions vary and the availability of different display technologies continues to grow. APPLICATIONS The findings are important to those using display technologies for research, commercial, and training purposes when it is important for the displayed image to be perceived at an intended scale.
Affiliation(s)
- Michael N Geuss
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Jeanine K Stefanucci
- University of Utah, Salt Lake City, Utah
- Sarah H Creem-Regehr
- University of Utah, Salt Lake City, Utah
- Betty J Mohler
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
33
Naceri A, Moscatelli A, Chellali R. Depth discrimination of constant angular size stimuli in action space: role of accommodation and convergence cues. Front Hum Neurosci 2015; 9:511. [PMID: 26441608] [PMCID: PMC4584972] [DOI: 10.3389/fnhum.2015.00511]
Abstract
In our daily life experience, the angular size of an object correlates with its distance from the observer, provided that the physical size of the object remains constant. In this work, we investigated depth perception in action space (i.e., beyond arm's reach) while keeping the angular size of the target object constant. This was achieved by increasing the physical size of the target object as its distance from the observer increased. To the best of our knowledge, this is the first time that such a protocol has been tested in action space, for distances to the observer ranging from 1.4 to 2.4 m. We replicated the task in virtual and real environments and found that performance differed significantly between the two. In the real environment, all participants perceived the depth of the target object precisely, whereas in virtual reality (VR) the responses were significantly less precise, although still above chance level in 16 of the 20 observers. The difference in the discriminability of the stimuli was likely due to different contributions of the convergence and accommodation cues in the two environments. The Weber fractions estimated in our study were compared to those reported in previous studies in peripersonal and action space.
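As a quick aside on the measure used above: a Weber fraction expresses the just-noticeable difference (JND) in depth as a proportion of the reference magnitude. A minimal sketch, with purely illustrative numbers rather than the study's data:

```python
def weber_fraction(jnd: float, reference: float) -> float:
    """Just-noticeable difference expressed as a proportion of the reference."""
    if reference <= 0:
        raise ValueError("reference must be positive")
    return jnd / reference

# Illustrative: a 0.3 m depth JND at a 2.0 m reference distance
wf = weber_fraction(0.3, 2.0)  # 0.15
```

Expressing discriminability this way lets performance be compared across conditions (real vs. VR) independently of the particular reference distance tested.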
Affiliation(s)
- Abdeldjallil Naceri
- Department of Cognitive Neuroscience, Cognitive Interaction Technology Center of Excellence (CITEC), Bielefeld University, Bielefeld, Germany
- Alessandro Moscatelli
- Department of Cognitive Neuroscience, Cognitive Interaction Technology Center of Excellence (CITEC), Bielefeld University, Bielefeld, Germany
- Ryad Chellali
- Nanjing Robotics Institute, College of Electrical Engineering and Control Science, Nanjing Tech University, Nanjing, China
34
Abstract
Two experiments including 24 (M age=29 yr., SD=9; 6 men) and 25 participants (M age=27 yr., SD=9; 8 men), respectively, examined how arm movement extent affects the perception of visual locations. Linear arm movements were performed on a horizontal plane from a start position until an auditory signal occurred. Subsequently, the position of a visual target located along the movement path was judged. The target was judged as further away with an increase in movement extent. The results indicated that motor-related signals are taken into account in visual perception of locations. There were no indications, though, that changes of location perception prompted subsequent changes of action planning, which demonstrates the short-term nature of action-induced plasticity of space perception under the present conditions.
Affiliation(s)
- Wilfried Kunde
- Department of Psychology, University of Würzburg, Germany
35
Creem-Regehr SH, Kunz BR. Perception and action. Wiley Interdiscip Rev Cogn Sci 2015; 1:800-810. [PMID: 26271778] [DOI: 10.1002/wcs.82]
Abstract
The phrase perception and action is used widely but in diverse ways in the context of the relationship between perceptual and motor processes. This review describes and integrates five perspectives on perception and action which rely on both neurophysiological and behavioral levels of analysis. The two visual systems view proposes dissociable but interactive systems for conscious processing of objects/space and the visual control of action. The integrative view proposes tightly calibrated but flexible systems for perception and motor control in spatial representation. The embodied view posits that action underlies perception, involving common coding or motor simulation systems, and examines the relationship between action observation, imitation, and the understanding of intention. The ecological view emphasizes environmental information and affordances in perception. The functional view defines the relationship between perception, action planning, and semantics in goal-directed actions. Although some of these views/approaches differ in significant ways, their shared emphasis on the importance of action in perception serves as a useful unifying framework.
Affiliation(s)
- Benjamin R Kunz
- Department of Psychology, University of Utah, Salt Lake City, UT 84112, USA
36
Erkelens CJ. The extent of visual space inferred from perspective angles. Iperception 2015; 6:5-14. [PMID: 26034567] [PMCID: PMC4441024] [DOI: 10.1068/i0673]
Abstract
Retinal images are perspective projections of the visual environment. Perspective projections alone do not explain why we perceive perspective in 3-D space. Analysis of the underlying spatial transformations shows that visual space is a perspective transformation of physical space if parallel lines in physical space vanish at a finite distance in visual space. Perspective angles, i.e., the angles perceived between lines that are parallel in physical space, were estimated for the rails of a straight railway track. Perspective angles were also estimated from pictures taken from the same point of view. Perspective angles between rails ranged from 27% to 83% of their angular size in the retinal image. Perspective angles prescribe the distance of the vanishing points of visual space. All computed distances were shorter than 6 m. The shallow depth of a hypothetical space inferred from perspective angles does not match the depth of visual space as it is perceived. The incongruity between the perceived shape of a railway line on the one hand and the experienced ratio between its width and length on the other is huge, yet apparently so unobtrusive that it has remained unnoticed. The incompatibility between perspective angles and perceived distances casts doubt on evidence for a curved visual space that has been presented in the literature, which was obtained by combining judgments of distances and angles with physical positions.
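The link between a perceived convergence angle and the implied vanishing-point distance follows from elementary trigonometry: rails a gauge g apart that appear to converge at angle theta would meet at a distance of (g/2)/tan(theta/2). A minimal sketch, assuming the standard 1.435 m rail gauge and an illustrative 15-degree perspective angle (neither value is taken from the paper):

```python
import math

def vanishing_distance(gauge_m: float, perspective_angle_deg: float) -> float:
    """Distance at which rails separated by gauge_m would meet,
    given the perceived convergence angle between them."""
    half_angle = math.radians(perspective_angle_deg) / 2.0
    return (gauge_m / 2.0) / math.tan(half_angle)

# Standard rail gauge, illustrative 15-degree perceived convergence angle
d = vanishing_distance(1.435, 15.0)  # about 5.45 m
```

Perceived angles of this order imply vanishing points only a few metres away, in line with the sub-6 m distances the study computes.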
Affiliation(s)
- Casper J Erkelens
- Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
37
Creem-Regehr SH, Stefanucci JK, Thompson WB. Perceiving Absolute Scale in Virtual Environments: How Theory and Application Have Mutually Informed the Role of Body-Based Perception. Psychol Learn Motiv 2015. [DOI: 10.1016/bs.plm.2014.09.006]
38
Larrue F, Sauzeon H, Wallet G, Foloppe D, Cazalets JR, Gross C, N'Kaoua B. Influence of body-centered information on the transfer of spatial learning from a virtual to a real environment. J Cogn Psychol 2014. [DOI: 10.1080/20445911.2014.965714]
39
Chen CF, Lin CC, Huang KC. Effects of spacing between items and view direction on errors in the perceived height of a rotated 3-D figure. Percept Mot Skills 2014; 119:215-27. [PMID: 25153751] [DOI: 10.2466/24.27.pms.119c12z6]
Abstract
This study investigated errors in the perceived height of virtual cones presented on a screen. 80 students (50 women, 30 men; M age = 18.8 yr., SD = 1.2) participated in the study. They judged the height of virtual cones under several conditions: (a) different spacing between the items in the array (2, 4, and 6 cm); (b) different viewing directions, bottom-up or top-down; and (c) cones presented at different forward-rotated angles (15, 30, and 45°). Results indicate that fewer errors in the perceived heights of virtual cones were made when the spacing between items was 2 cm, the judgment was made in a bottom-up view, and the rotation angle was 15°. These results may have implications for graphics-based interface design, such as interior design, driver navigation systems, geological models, and flight-simulation systems.
Affiliation(s)
- Chen-Fu Chen
- Department of Product Design, Ming Chuan University, Taiwan
40
Moscatelli A, Naceri A, Ernst MO. Path integration in tactile perception of shapes. Behav Brain Res 2014; 274:355-64. [PMID: 25151621] [DOI: 10.1016/j.bbr.2014.08.025]
Abstract
Whenever we move the hand across a surface, tactile signals provide information about the relative velocity between the skin and the surface. If the system were able to integrate the tactile velocity information over time, cutaneous touch could provide an estimate of the relative displacement between the hand and the surface. Here, we asked whether humans are able to form a reliable representation of a motion path from tactile cues only, integrating motion information over time. To address this issue, we conducted three experiments using tactile motion and asked participants (1) to estimate the length of a simulated triangle, (2) to reproduce the shape of a simulated triangular path, and (3) to estimate the angle between two line segments. Participants were able to accurately indicate the length of the path, whereas the perceived direction was affected by an inward direction bias. The response pattern was thus qualitatively similar to those reported in classical path integration studies involving locomotion. However, we explain the directional biases as the result of a tactile motion aftereffect.
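The integration step described above amounts to a running sum of velocity samples over time. A minimal sketch, where the sampling rate and slip velocities are illustrative assumptions rather than the experiment's stimulus values:

```python
def integrate_path(velocities, dt):
    """Path integration: cumulative 2-D displacement from velocity samples."""
    x = y = 0.0
    path = []
    for vx, vy in velocities:
        x += vx * dt
        y += vy * dt
        path.append((x, y))
    return path

# One leg of a triangular path: 2 cm/s along x for 1 s, sampled at 100 Hz
leg = integrate_path([(0.02, 0.0)] * 100, dt=0.01)  # ends near (0.02, 0.0)
```

A systematic error in the sensed direction at each sample, as with a motion aftereffect, would accumulate into exactly the kind of inward path bias the study reports.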
Affiliation(s)
- Alessandro Moscatelli
- Cognitive Neuroscience Department, Bielefeld University, 33615 Bielefeld, Germany; Cognitive Interaction Technology-Center of Excellence, Bielefeld University, 33615 Bielefeld, Germany.
- Abdeldjallil Naceri
- Cognitive Neuroscience Department, Bielefeld University, 33615 Bielefeld, Germany; Cognitive Interaction Technology-Center of Excellence, Bielefeld University, 33615 Bielefeld, Germany
- Marc O Ernst
- Cognitive Neuroscience Department, Bielefeld University, 33615 Bielefeld, Germany; Cognitive Interaction Technology-Center of Excellence, Bielefeld University, 33615 Bielefeld, Germany
41
Abstract
The overestimation of geographical slant is one of the most sizable visual illusions. However, in some cases estimates of close-by slopes within the range of the observer's personal space have been found to be rather accurate. We propose that the seemingly diverse findings can be reconciled when taking the viewing distance of the observer into account. The latter involves the distance of the observer from the slope (personal space, action space, and vista space) and also the eye-point relative to the slope. We separated these factors and compared outdoor judgments to those collected with a three-dimensional (3D) model of natural terrain, which was within arm's reach of the observer. Slope was overestimated in the outdoors at viewing distances between 2 m and 138 m. The 3D model reproduced the errors in monocular viewing; however, performance was accurate with stereoscopic viewing. We conclude that accurate slant perception breaks down as soon as the situation exits personal space, be it physically or be it by closing one eye.
42
Going for distance and going for speed: Effort and optical variables shape information for distance perception from observation to response. Atten Percept Psychophys 2014; 76:1015-35. [DOI: 10.3758/s13414-014-0629-z]
43
Dopaminergic contributions to distance estimation in Parkinson's disease: A sensory-perceptual deficit? Neuropsychologia 2013; 51:1426-34. [DOI: 10.1016/j.neuropsychologia.2013.04.015]
44
Nefs HT, van Bilsen A, Pont SC, de Ridder H, Wijntjes MWA, van Doorn AJ. Perception of length to width relations of city squares. Iperception 2013; 4:111-21. [PMID: 23755356] [PMCID: PMC3677331] [DOI: 10.1068/i0553]
Abstract
In this paper, we focus on how people perceive the aspect ratio of city squares. Earlier research has focused on distance perception but not so much on the perceived aspect ratio of the surrounding space. Furthermore, those studies have focused on “open” spaces rather than urban areas enclosed by walls, houses and filled with people, cars, etc. In two experiments, we therefore measured, using a direct and an indirect method, the perceived aspect ratio of five city squares in the historic city center of Delft, the Netherlands. We also evaluated whether the perceived aspect ratio of city squares was affected by the position of the observer on the square. In the first experiment, participants were asked to set the aspect ratio of a small rectangle such that it matched the perceived aspect ratio of the city square. In the second experiment, participants were asked to estimate the length and width of the city square separately. In the first experiment, we found that the perceived aspect ratio was in general lower than the physical aspect ratio. However, in the second experiment, we found that the calculated ratios were close to veridical except for the most elongated city square. We conclude therefore that the outcome depends on how the measurements are performed. Furthermore, although indirect measurements are nearly veridical, the perceived aspect ratio is an underestimation of the physical aspect ratio when measured in a direct way. Moreover, the perceived aspect ratio also depends on the location of the observer. These results may be beneficial to the design of large open urban environments, and in particular to rectangular city squares.
Affiliation(s)
- Harold T Nefs
- Faculty of Electrical Engineering, Mathematics, and Computer Science, Perceptual Intelligence Lab/Interactive Intelligence Group, Delft University of Technology, Mekelweg 4, 2628 CD Delft, The Netherlands
45
Zhang J, Braunstein ML, Andersen GJ. Changes in angular size and speed affect the judged height of objects moving over a ground surface. Perception 2013; 42:34-44. [PMID: 23678615] [DOI: 10.1068/p7336]
Abstract
Kersten et al. (1997, Perception, 26, 171-192) showed that the perceived path of an object moving over a ground surface can be manipulated by changing the path of its shadow. Using a scene similar to Kersten et al.'s "ball-in-a-box" scene, we investigated the effect of angular size and angular speed in determining the perceived height of a moving sphere when optical contact (the position at which the object contacted the ground in the image) indicated that the sphere was receding in depth. In four experiments we examined both the effects of changes in size and speed and the effects of constant levels of size and speed. Increases in angular size or speed during a motion sequence resulted in judgments of increased height above the ground plane. The angular size at the end of the motion sequence was also important in determining judged height, with greater height judged for larger final sizes.
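The manipulation rests on the basic projective relation that angular size shrinks with distance, so an object whose optical contact recedes while its angular size stays constant or grows must be judged as rising off the ground. A minimal sketch of that relation, with illustrative sizes and distances not taken from the paper:

```python
import math

def angular_size_deg(physical_size: float, distance: float) -> float:
    """Visual angle (degrees) subtended by an object at a given distance."""
    return math.degrees(2.0 * math.atan(physical_size / (2.0 * distance)))

# A 1 m sphere at 10 m subtends about 5.7 deg; at 20 m, only about 2.9 deg
near = angular_size_deg(1.0, 10.0)
far = angular_size_deg(1.0, 20.0)
```

If the image's ground-contact position signals a doubling of distance but the angular size does not halve accordingly, the discrepancy is resolved perceptually by lifting the object above the ground plane.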
Affiliation(s)
- Junjun Zhang
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA 92697-5100, USA
46
Dissociations between vision for perception and vision for action depend on the relative availability of egocentric and allocentric information. Atten Percept Psychophys 2013; 75:1206-14. [PMID: 23670269] [DOI: 10.3758/s13414-013-0476-3]
Abstract
In three experiments, we scrutinized the dissociation between perception and action, as reflected by the contributions of egocentric and allocentric information. In Experiment 1, participants stood at the base of a large-scale one-tailed version of a Müller-Lyer illusion (with a hoop) and either threw a beanbag to the endpoint of the shaft or verbally estimated the egocentric distance to that location. The results confirmed an effect of the illusion on verbal estimates, but not on throwing, providing evidence for a dissociation between perception and action. In Experiment 2, participants observed a two-tailed version of the Müller-Lyer illusion from a distance of 1.5 m and performed the same tasks as in Experiment 1, yet neither the typical illusion effects nor a dissociation became apparent. Experiment 3 was a replication of Experiment 1, with the difference that participants stood at a distance of 1.5 m from the base of the one-tailed illusion. The results indicated an illusion effect on both the verbal estimate task and the throwing task; hence, there was no dissociation between perception and action. The presence (Exp. 1) and absence (Exp. 3) of a dissociation between perception and action may indicate that dissociations are a function of the relative availability of egocentric and allocentric information. When distance estimates are purely egocentric, dissociations between perception and action occur. However, when egocentric distance estimates have a (complementary) exocentric component, the use of allocentric information is promoted, and dissociations between perception and action are reduced or absent.
47
Zhang H, Zhang K, Wang RF. The role of static scene information on locomotion distance estimation. J Cogn Psychol 2013. [DOI: 10.1080/20445911.2012.744314]
48
Park E, Kim KJ, del Pobil AP. An Examination of Psychological Factors Affecting Drivers’ Perceptions and Attitudes Toward Car Navigation Systems. IT Convergence and Security 2012, 2013. [DOI: 10.1007/978-94-007-5860-5_66]
49
Towards a Successful Mobile Map Service: An Empirical Examination of Technology Acceptance Model. Networked Digital Technologies 2012. [DOI: 10.1007/978-3-642-30507-8_36]
50
Visual influence on path integration in darkness indicates a multimodal representation of large-scale space. Proc Natl Acad Sci U S A 2011; 108:1152-7. [PMID: 21199934] [DOI: 10.1073/pnas.1011843108]
Abstract
Our ability to return to the start of a route recently performed in darkness is thought to reflect path integration of motion-related information. Here we provide evidence that motion-related interoceptive representations (proprioceptive, vestibular, and motor efference copy) combine with visual representations to form a single multimodal representation guiding navigation. We used immersive virtual reality to decouple visual input from motion-related interoception by manipulating the rotation or translation gain of the visual projection. First, participants walked an outbound path with both visual and interoceptive input, and returned to the start in darkness, demonstrating the influences of both visual and interoceptive information in a virtual reality environment. Next, participants adapted to visual rotation gains in the virtual environment, and then performed the path integration task entirely in darkness. Our findings were accurately predicted by a quantitative model in which visual and interoceptive inputs combine into a single multimodal representation guiding navigation, and are incompatible with a model of separate visual and interoceptive influences on action (in which path integration in darkness must rely solely on interoceptive representations). Overall, our findings suggest that a combined multimodal representation guides large-scale navigation, consistent with a role for visual imagery or a cognitive map.
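The single-multimodal-representation account can be caricatured as a weighted average of the gain-distorted visual rotation and the interoceptively sensed physical rotation. The gain value and equal cue weighting below are illustrative assumptions, not the paper's fitted model parameters:

```python
def multimodal_turn_estimate(physical_turn_deg: float,
                             visual_gain: float,
                             w_visual: float = 0.5) -> float:
    """Perceived turn when the display rotates by visual_gain times the
    physical rotation: a single representation averages the two cues."""
    visual = visual_gain * physical_turn_deg    # what the display showed
    interoceptive = physical_turn_deg           # vestibular/proprioceptive/motor
    return w_visual * visual + (1.0 - w_visual) * interoceptive

# With a 0.7 rotation gain and equal weighting, a 90 deg physical turn
# is represented as 76.5 deg
est = multimodal_turn_estimate(90.0, visual_gain=0.7)
```

On this account, path integration performed later in darkness inherits the adapted multimodal estimate; a separate-influences model would instead predict purely interoceptive, unadapted performance once vision is removed.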