1
Aizenman AM, Gegenfurtner KR, Goettker A. Oculomotor routines for perceptual judgments. J Vis 2024; 24(5):3. PMID: 38709511. PMCID: PMC11078167. DOI: 10.1167/jov.24.5.3.
Abstract
In everyday life we frequently make simple visual judgments about object properties, for example, how big or wide a certain object is. Our goal was to test whether there are task-specific oculomotor routines that support perceptual judgments, similar to the well-established exploratory routines for haptic perception. In a first study, observers viewed scenes with two objects presented in a photorealistic virtual reality environment and judged which of the two objects was taller or wider while gaze was tracked. All tasks were performed with the same set of virtual objects in the same scenes, allowing us to compare the spatial characteristics of exploratory gaze behavior and quantify oculomotor routines for each task. Width judgments showed fixations around the center of the objects with a larger horizontal spread. In contrast, for height judgments, gaze was shifted toward the top of the objects with a larger vertical spread. These results suggest specific gaze strategies that are presumably used for perceptual judgments. To test the causal link between oculomotor behavior and perception, in a second study observers either gazed freely at the object or, in a gaze-contingent setup, were forced to fixate specific positions on it. Discrimination performance was similar between the free-gaze and gaze-contingent conditions for both width and height judgments. These results suggest that although gaze is adapted to different tasks, performance seems to be based on a perceptual strategy that is independent of potential cues provided by the oculomotor system.
Affiliation(s)
- Avi M Aizenman
- Psychology Department, Giessen University, Giessen, Germany
- http://aviaizenman.com/
- Karl R Gegenfurtner
- Psychology Department, Giessen University, Giessen, Germany
- https://www.allpsych.uni-giessen.de/karl/
- Alexander Goettker
- Psychology Department, Giessen University, Giessen, Germany
- https://alexgoettker.com/
2
Chan EY, Maglio SJ. Seeing far: Abstract construal and visual distance judgments. Psychon Bull Rev 2023; 30:2196-2202. PMID: 37166704. DOI: 10.3758/s13423-023-02300-7.
Abstract
Abstract construal underlies mental travel. As a result, the human mind associates abstraction with psychological distance, whereby prompting abstract construal begets the inference of psychological distance in time, social distance, hypotheticality, and physical space. That final dimension is the only one that can be appraised visually, so would abstract construal affect judgments of perceived visual distance? Two experiments provide evidence that abstract construal causes targets in the visual field to be judged as physically farther away. Further, this heightened sense of distance gives rise to related inferences about those visual targets (size and weight). These results deepen and broaden Construal Level Theory, with practical implications for how people reason about the physical properties of objects, including but not limited to their physical distance.
Affiliation(s)
- Eugene Y Chan
- Toronto Metropolitan University, Toronto, ON, Canada
- Sam J Maglio
- University of Toronto at Scarborough, Toronto, ON, Canada
3
Familiarity with an Object’s Size Influences the Perceived Size of Its Image. Vision (Basel) 2022; 6(1):14. PMID: 35324599. PMCID: PMC8955019. DOI: 10.3390/vision6010014.
Abstract
It is known that judgments of an object’s distance are influenced by familiar size: a soccer ball looks farther away than a tennis ball if their images are equally large on the retina. Here we investigate whether familiar size also influences judgments about the size of images of objects presented side by side on a computer screen. Sixty-three participants indicated which of two images appeared larger on the screen in a two-alternative forced-choice discrimination task. The objects were either two different types of balls, two different types of coins, or a ball and a grey disk. We found that the type of ball biased the comparison between their image sizes: the size of the image of the soccer ball was overestimated by about 5% (assimilation). The bias in the comparison between the two balls was equal to the sum of the biases in the comparisons with the grey disk. The bias for the coins was smaller and in the opposite direction (contrast). The average precision of the size comparison was 3.5%, irrespective of the type of object. We conclude that knowing a depicted object’s real size can influence the perceived size of its image, but that the perceived size is not always attracted toward the familiar size.
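The reported numbers fit a standard signal-detection picture: a 5% assimilation bias shifts the point of subjective equality of a psychometric function whose spread corresponds to the 3.5% precision. A minimal sketch of such a toy cumulative-Gaussian observer (our illustration, not the authors' analysis):

```python
import math

def p_judge_soccer_larger(size_ratio, bias_pct=5.0, sigma_pct=3.5):
    """Toy 2AFC observer: probability of judging the soccer ball's image
    larger than the comparison image, modeled as a cumulative Gaussian over
    the log image-size ratio. bias_pct shifts the point of subjective
    equality (the ~5% assimilation reported); sigma_pct is the ~3.5%
    discrimination precision reported."""
    z = (100.0 * math.log(size_ratio) + bias_pct) / sigma_pct
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Physically equal images (ratio 1.0) are judged "soccer larger" well above chance:
p_equal = p_judge_soccer_larger(1.0)               # ~0.92
# Shrinking the soccer ball's image by ~5% restores subjective equality:
p_shrunk = p_judge_soccer_larger(math.exp(-0.05))  # ~0.50
```

The sketch shows why a 5% bias is easily detectable given 3.5% precision: at physical equality the toy observer already chooses the soccer ball on more than 90% of trials.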
4
Aguado B, López-Moliner J. Gravity and Known Size Calibrate Visual Information to Time Parabolic Trajectories. Front Hum Neurosci 2021; 15:642025. PMID: 34497497. PMCID: PMC8420811. DOI: 10.3389/fnhum.2021.642025.
Abstract
Catching a ball in parabolic flight is a complex task in which the time and area of interception are strongly coupled, making interception possible only for a short period. Although this makes the estimation of time-to-contact (TTC) from visual information very useful for parabolic trajectories, previous attempts to explain our precision in interceptive tasks have circumvented the need to estimate TTC to guide action. Obtaining TTC from optical variables alone in parabolic trajectories would imply very complex transformations from 2D retinal images to a 3D layout. Building on previous work, we propose, and show using simulations, that exploiting prior distributions of gravity and known physical size makes these transformations much simpler, enabling predictive capacities from minimal early visual information. Optical information is inherently ambiguous, and this is where prior information comes into play: it can help to interpret and calibrate visual information to yield meaningful predictions of the remaining TTC. The objectives of this work are (1) to describe the primary sources of information available to the observer in parabolic trajectories; (2) to unveil how prior information can be used to disambiguate the sources of visual information within a Bayesian encoding-decoding framework; (3) to show that such predictions can be robust against complex dynamic environments; and (4) to indicate future lines of research scrutinizing the role of prior knowledge in calibrating visual information and prediction for action control.
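The core disambiguation logic can be sketched with two lines of geometry: a known physical size turns a retinal angle into a distance, and a gravity prior turns a height into a remaining flight time. This is only an illustrative fragment with made-up numbers, not the authors' Bayesian encoding-decoding model:

```python
import math

G = 9.81  # gravity prior (m/s^2)

def distance_from_known_size(physical_size_m, visual_angle_rad):
    """A known physical size disambiguates the retinal image:
    for small angles, distance ~= size / angle."""
    return physical_size_m / visual_angle_rad

def ttc_from_apex(height_m):
    """At the apex of a parabola the vertical velocity is zero, so the
    gravity prior alone predicts the remaining fall time:
    h = g * t**2 / 2  ->  t = sqrt(2 * h / g)."""
    return math.sqrt(2.0 * height_m / G)

# A 22 cm ball subtending 0.01 rad is ~22 m away; seen at its apex, 4 m
# above the interception height, the remaining TTC is predictable from
# minimal early visual information, without waiting for optical expansion:
d = distance_from_known_size(0.22, 0.01)  # 22.0 m
t = ttc_from_apex(4.0)                    # ~0.9 s
```

Without the size and gravity priors, the same retinal angle is consistent with infinitely many distance/size pairs, which is the ambiguity the abstract refers to.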
Affiliation(s)
- Borja Aguado
- Vision and Control of Action (VISCA) Group, Department of Cognition, Development and Psychology of Education, Institut de Neurociències, Universitat de Barcelona, Barcelona, Spain
- Joan López-Moliner
- Vision and Control of Action (VISCA) Group, Department of Cognition, Development and Psychology of Education, Institut de Neurociències, Universitat de Barcelona, Barcelona, Spain
5
Maltz MV, Stubbs KM, Quinlan DJ, Rzepka AM, Martin JR, Culham JC. Familiar size affects the perceived size and distance of real objects even with binocular vision. J Vis 2021; 21(10):21. PMID: 34581767. PMCID: PMC8479574. DOI: 10.1167/jov.21.10.21.
Abstract
Although the familiar size of real-world objects affects size and distance perception, evidence is mixed about whether this is the case when oculomotor cues are available. We examined the familiar size effect (FSE) on both size and distance perception for real objects under two viewing conditions with full or restricted oculomotor cues (binocular viewing, which provides vergence and accommodation cues, and monocular viewing through a 1-mm pinhole, which removes those cues). Familiar objects (a playing die versus a Rubik's cube) were manufactured in their typical (1.6-cm die and 5.7-cm Rubik's cube) and reverse (5.7-cm die and 1.6-cm Rubik's cube) sizes and shown in isolation at two distances (25 cm versus 91 cm). Small near and large far objects subtended equal retinal angles. Participants provided manual estimates of perceived size and distance. For every combination of size and distance, Rubik's cubes were perceived as larger and farther than the dice, even during binocular viewing at near distances (<1 meter), when oculomotor cues are particularly strong. For size perception but not distance perception, the FSE was significantly stronger under monocular pinhole viewing than binocular viewing. These results suggest that (1) familiar size affects the accuracy of perception, not just the speed; (2) the effect occurs even when oculomotor cues are available; and (3) size and distance perception are not perfectly yoked.
Affiliation(s)
- Margaret V Maltz
- Department of Psychology, University of Western Ontario, London, Ontario, Canada; Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada
- Kevin M Stubbs
- Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada; BrainsCAN, University of Western Ontario, London, Ontario, Canada
- Derek J Quinlan
- Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada; BrainsCAN, University of Western Ontario, London, Ontario, Canada; Department of Psychology, Huron University College, London, Ontario, Canada
- Anna M Rzepka
- Neuroscience Program, University of Western Ontario, London, Ontario, Canada
- Jocelyn R Martin
- Department of Psychology, University of Western Ontario, London, Ontario, Canada
- Jody C Culham
- Department of Psychology, University of Western Ontario, London, Ontario, Canada; Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada; Neuroscience Program, University of Western Ontario, London, Ontario, Canada
- http://www.culhamlab.com/
6
Wang K, Jiang Z, Huang S, Qian J. Increasing perceptual separateness affects working memory for depth: re-allocation of attention from boundaries to the fixated center. J Vis 2021; 21(7):8. PMID: 34264289. PMCID: PMC8288055. DOI: 10.1167/jov.21.7.8.
Abstract
For decades, working memory (WM) has been an active research topic in cognitive psychology. However, most studies on WM have presented visual stimuli on a two-dimensional plane, rarely involving depth perception. Several previous studies have investigated how depth information is stored in WM and found that WM for depth is even more limited in capacity, with poorer memory performance, than visual WM. In the present study, we used a change detection task to investigate whether dissociating memory items by different visual features, thereby increasing their perceptual separateness, can improve WM performance for depth. Memory items presented at various depth planes were bound with different colors (Experiments 1 and 3) or sizes (Experiment 2). Memory performance for the depth locations of visual stimuli with homogeneous and heterogeneous appearances was tested and compared. The results showed a consistent pattern: although separating items by feature value did not affect overall memory performance, the manipulation significantly improved memory for the middle depth locations but impaired it for the boundary locations when observers fixated the center of the whole depth volume. The memory benefits of feature separation can be attributed to enhanced individuation of memory items, which facilitates a more balanced allocation of attention and memory resources.
Affiliation(s)
- Kaiyue Wang
- Department of Psychology, Sun Yat-Sen University, Guangzhou, China
- Zhuyuan Jiang
- Department of Psychology, Sun Yat-Sen University, Guangzhou, China
- Suqi Huang
- Department of Psychology, Sun Yat-Sen University, Guangzhou, China
- Jiehui Qian
- Department of Psychology, Sun Yat-Sen University, Guangzhou, China
7
Abstract
Although spatial attention has been found to alter the subjective appearance of visual stimuli along several perceptual dimensions, no research has explored whether exogenous spatial attention can affect depth perception, a fundamental dimension of perception that allows us to interact effectively with the environment. Here, we used an experimental paradigm adapted from Gobell and Carrasco (Psychological Science, 16[8], 644-651, 2005) to investigate this question. A peripheral cue preceding two line stimuli directed exogenous attention to the location of one of the two lines. The two lines were separated by a given relative disparity, and participants judged the perceived depth of the two lines while attention was manipulated. We found that a farther stereoscopic depth at the attended location was perceived as equally distant as a nearer depth at the unattended location. No such effect was found in a control experiment that employed a postcue paradigm, suggesting that our findings cannot be attributed to response bias. Therefore, our study shows that exogenous spatial attention shortens perceived depth. The apparent change in stereoscopic depth may be regulated by a mechanism involving direct enhancement of neurons tuned to disparity, or modulated by an attentional effect on apparent contrast. This finding shows that attention can change not only visual appearance but also the perceived spatial relation between an object and an observer.
8
Karimpur H, Eftekharifar S, Troje NF, Fiehler K. Spatial coding for memory-guided reaching in visual and pictorial spaces. J Vis 2020; 20(4):1. PMID: 32271893. PMCID: PMC7405696. DOI: 10.1167/jov.20.4.1.
Abstract
An essential difference between pictorial space, displayed in paintings, photographs, or on computer screens, and the visual space experienced in the real world is that the observer has a defined location, and thus valid information about the distance and direction of objects, in the latter but not in the former. Egocentric information should therefore be more reliable in visual space, whereas allocentric information should be more reliable in pictorial space. The majority of previous studies relied on pictorial representations (images on a computer screen), leaving it unclear whether the same coding mechanisms apply in visual space. Using a memory-guided reaching task in virtual reality, we investigated allocentric coding in both visual space (on a table in virtual reality) and pictorial space (on a monitor standing on that table in virtual reality). Our results suggest that the brain uses allocentric information to represent objects in both pictorial and visual space. Contrary to our hypothesis, the influence of allocentric cues was stronger in visual space than in pictorial space, even after controlling for retinal stimulus size, confounding allocentric cues, and differences in presentation depth. We discuss possible reasons for stronger allocentric coding in visual than in pictorial space.
Affiliation(s)
- Harun Karimpur
- Experimental Psychology, Justus Liebig University, Giessen, Germany
- Center for Mind, Brain, and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany
- Nikolaus F. Troje
- Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
- Centre for Vision Research and Department of Biology, York University, Toronto, ON, Canada
- Katja Fiehler
- Experimental Psychology, Justus Liebig University, Giessen, Germany
- Center for Mind, Brain, and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany
9
Schubert RS, Jung ML, Helmert JR, Velichkovsky BM, Pannasch S. Size matters: How reaching and vergence movements are influenced by the familiar size of stereoscopically presented objects. PLoS One 2019; 14:e0225311. PMID: 31747431. PMCID: PMC6867642. DOI: 10.1371/journal.pone.0225311.
Abstract
Knowledge of the usual size of objects (familiar size) is known to be taken into account for distance perception. Its influence on action programming is less clear and had not yet been tested with regard to vergence eye movements. In two experiments, we stereoscopically presented everyday objects, such as a credit card or a package of paper tissues, and varied the distance specified by binocular disparity independently of the distance specified by familiar size. Participants had to fixate the shown object and subsequently reach toward it, either with open or with closed eyes. When binocular disparity and familiar size were in conflict, reaching movements revealed a combination of the two depth cues with individually different weights. The influence of familiar size was larger when no visual feedback was available during the reaching movement. Vergence movements closely followed binocular disparity and were largely unaffected by familiar size. In sum, the results suggest that, in this experimental setting, familiar size is taken into account for programming and executing reaching movements, whereas vergence movements are primarily based on binocular disparity.
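The reported cue combination with individually different weights can be sketched as a simple weighted average; the distances and weights below are hypothetical, chosen only to illustrate the pattern of results, not values from the study:

```python
def combined_distance(d_disparity_m, d_familiar_m, w_disparity):
    """Weighted combination of the distance specified by binocular disparity
    and the distance implied by familiar size (the two weights sum to 1)."""
    return w_disparity * d_disparity_m + (1.0 - w_disparity) * d_familiar_m

# Cue conflict: disparity specifies 40 cm, familiar size implies 50 cm.
reach_feedback    = combined_distance(0.40, 0.50, 0.8)  # 0.42 m: disparity dominates
reach_no_feedback = combined_distance(0.40, 0.50, 0.5)  # 0.45 m: familiar size gains weight
vergence          = combined_distance(0.40, 0.50, 1.0)  # 0.40 m: vergence follows disparity
```

Setting the disparity weight to 1.0 reproduces the vergence result, while lowering it models the larger influence of familiar size on reaching without visual feedback.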
Affiliation(s)
| | - Maarten L. Jung
- Faculty of Psychology, Technische Universität Dresden, Dresden, Germany
| | - Jens R. Helmert
- Faculty of Psychology, Technische Universität Dresden, Dresden, Germany
| | - Boris M. Velichkovsky
- Faculty of Psychology, Technische Universität Dresden, Dresden, Germany
- National Research Center “Kurchatov Institute”, Moscow, Russian Federation
- Moscow Institute for Physics and Technology, Moscow, Russian Federation
- Russian State University for the Humanities, Moscow, Russian Federation
| | | |
Collapse
|
10
Feldstein IT. Impending Collision Judgment from an Egocentric Perspective in Real and Virtual Environments: A Review. Perception 2019; 48:769-795. DOI: 10.1177/0301006619861892.
Abstract
The human egocentric perception of approaching objects and the related perceptual processes have interested researchers for several decades. This article reviews the numerous studies that investigated what happens when an object approaches an observer (or the other way around), with the goal of singling out the factors that influence the perceptual process. A taxonomy of metrics is followed by a breakdown of different experimental measurement methods. Thereafter, potential factors affecting the judgment of approaching objects are compiled and discussed, divided into human factors (e.g., gender, age, and driving experience), compositional factors (e.g., approach velocity, spatial distance, and observation time), and technical factors (e.g., field of view, stereoscopy, and display contrast). Experimental findings are collated, juxtaposed, and critically discussed. Because virtual-reality devices have taken a tremendous developmental leap forward in the past few years, they have gained ground in experimental research. Special attention is therefore also given to the perception of approaching objects in virtual environments, contrasted with perception in reality.
Affiliation(s)
- Ilja T. Feldstein
- Harvard Medical School, Department of Ophthalmology, Boston, MA, USA; Technical University of Munich, Department of Mechanical Engineering, Garching, Germany
11
Klinghammer M, Schütz I, Blohm G, Fiehler K. Allocentric information is used for memory-guided reaching in depth: A virtual reality study. Vision Res 2016; 129:13-24. PMID: 27789230. DOI: 10.1016/j.visres.2016.10.004.
Abstract
Previous research has demonstrated that humans use allocentric information when reaching to remembered visual targets, but most studies have been limited to 2D space. Here, we study allocentric coding of memorized reach targets in 3D virtual reality. In particular, we investigated the use of allocentric information for memory-guided reaching in depth and the role of binocular and monocular (object size) depth cues for coding object locations in 3D space. To this end, we presented a scene with objects on a table, located at different distances from the observer, which served as reach targets or allocentric cues. After free visual exploration of this scene and a short delay, the scene reappeared with one object missing (the reach target). In addition, the remaining objects were shifted horizontally or in depth. When objects were shifted in depth, we also independently manipulated object size by either magnifying or reducing it. After the scene vanished, participants reached to the remembered target location on the blank table. Reaching endpoints deviated systematically in the direction of the object shifts, similar to our previous results with 2D presentations. This deviation was stronger for object shifts in depth than in the horizontal plane and was independent of observer-target distance. Reaching endpoints also varied systematically with changes in object size. Our results suggest that allocentric information is used for coding targets for memory-guided reaching in depth, with retinal disparity and vergence as well as object size providing important binocular and monocular depth cues.
Affiliation(s)
- Mathias Klinghammer
- Justus-Liebig-University, Experimental Psychology, Otto-Behaghel-Str. 10F, 35394 Giessen, Germany
- Immo Schütz
- TU Chemnitz, Institut für Physik, Reichenhainer Str. 70, 09126 Chemnitz, Germany
- Gunnar Blohm
- Queen's University, Centre for Neuroscience Studies, 18 Stuart Street, Kingston, Ontario K7L 3N6, Canada
- Katja Fiehler
- Justus-Liebig-University, Experimental Psychology, Otto-Behaghel-Str. 10F, 35394 Giessen, Germany
12
Schot WD, Brenner E, Sousa R, Smeets JBJ. Are people adapted to their own glasses? Perception 2013; 41:991-993. PMID: 23362676. DOI: 10.1068/p7261.
Abstract
Negative lenses, either in the form of glasses or contact lenses, can correct nearsightedness. Unlike contact lenses, however, glasses do not only correct vision but also induce optical distortions. In the scientific literature, it has often been assumed that people who wear corrective glasses instantaneously account for these distortions when they put their glasses on. We tested this assumption and found that, when people switched between their contact lenses and their glasses, they made the errors that one would predict from the optics. This shows that people are not immediately adapted to their own glasses when they put them on.
Affiliation(s)
- Willemijn D Schot
- Research Institute MOVE, Faculty of Human Movement Sciences, VU University Amsterdam, van der Boechorststraat 9, 1081 BT Amsterdam, The Netherlands
13
Abstract
For isolated objects in complete darkness, retinal image size contributes to distance judgments even if the true object size is unknown. Here we show that the same is true under more natural conditions. On a wide beach we positioned a red cube at 10–20 m distance and then asked subjects to walk to it while blindfolded. Subjects never had a close view of the cube and were unaware that on separate trials cubes with sides of 15 cm and 20 cm were positioned at the same locations. On average, subjects walked 1 m further after seeing the 15 cm cube than after seeing the 20 cm cube.
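The geometry behind this result is that, for an assumed object size, the distance implied by the retinal angle scales inversely with that angle. A minimal sketch (the assumed 17 cm size is our hypothetical choice, not a value from the study):

```python
def implied_distance(assumed_size_m, true_size_m, true_distance_m):
    """Distance implied by retinal image size when the observer assumes a
    fixed object size: theta = true_size / true_distance (small-angle
    approximation), so implied distance = assumed_size / theta."""
    theta = true_size_m / true_distance_m
    return assumed_size_m / theta

# If observers assumed a ~17 cm cube, then at a true distance of 15 m the
# 15 cm cube would imply a farther location than the 20 cm cube:
d_small_cube = implied_distance(0.17, 0.15, 15.0)  # 17.0 m
d_large_cube = implied_distance(0.17, 0.20, 15.0)  # 12.75 m
# The observed ~1 m difference in walked distance is much smaller than this
# prediction, suggesting retinal size was only one of several cues in this
# natural setting.
```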
Affiliation(s)
- Rita Sousa
- Research Institute MOVE, Faculty of Human Movement Sciences, VU University, van der Boechorststraat 9, NL 1081 BT Amsterdam, The Netherlands
- Jeroen B J Smeets
- Research Institute MOVE, Faculty of Human Movement Sciences, VU University, van der Boechorststraat 9, NL 1081 BT Amsterdam, The Netherlands
- Eli Brenner
- Research Institute MOVE, Faculty of Human Movement Sciences, VU University, van der Boechorststraat 9, NL 1081 BT Amsterdam, The Netherlands