1
Warren PA, Bell G, Li Y. Investigating distortions in perceptual stability during different self-movements using virtual reality. Perception 2022; 51:3010066221116480. [PMID: 35946126] [PMCID: PMC9478599] [DOI: 10.1177/03010066221116480]
Abstract
Using immersive virtual reality (the HTC Vive Head Mounted Display), we measured both bias and sensitivity when making judgements about the scene stability of a target object during both active (self-propelled) and passive (experimenter-propelled) observer movements. This was repeated in the same group of 16 participants for three different observer-target movement conditions in which the instability of a target was yoked to the movement of the observer. We found that, in all movement conditions, the target needed to move with (in the same direction as) the participant to be perceived as scene-stable. Consistent with the presence of additional available information (efference copy) about self-movement during active conditions, biases were smaller and sensitivities to instability were higher in active relative to passive conditions. However, the presence of efference copy was clearly not sufficient to eliminate the bias completely, and we suggest that the presence of additional visual information about self-movement is also critical. We found some (albeit limited) evidence for correlation between appropriate metrics across different movement conditions. These results extend previous findings, providing evidence for consistency of biases across different movement types, suggestive of common processing underpinning perceptual stability judgements.
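The bias and sensitivity measures referred to above are typically obtained by fitting a psychometric function to the judgement data. Below is a minimal sketch of that idea, assuming a cumulative-Gaussian fit to the proportion of "unstable" responses at each level of target-observer coupling; the stimulus levels and response proportions are invented for illustration and are not the paper's data:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical data: gain of target movement relative to observer movement (x)
# and the proportion of trials on which the target was judged unstable (p).
x = np.array([-0.4, -0.2, -0.1, 0.0, 0.1, 0.2, 0.4])
p = np.array([0.02, 0.10, 0.25, 0.45, 0.70, 0.90, 0.99])

def psychometric(x, mu, sigma):
    # Cumulative Gaussian: mu is the bias (the gain at which the target looks
    # scene-stable); 1/sigma indexes sensitivity to instability.
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, x, p, p0=[0.0, 0.2])
print(f"bias (PSE) = {mu:.3f}, sensitivity = {1.0 / sigma:.2f}")
```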
Affiliation(s)
- Paul A. Warren: Virtual Reality Research (VR2) Facility, Division of Neuroscience and Experimental Psychology, University of Manchester, Manchester, UK
- Graham Bell: Virtual Reality Research (VR2) Facility, Division of Neuroscience and Experimental Psychology, University of Manchester, Manchester, UK
- Yu Li: Virtual Reality Research (VR2) Facility, Division of Neuroscience and Experimental Psychology, University of Manchester, Manchester, UK
2
Aguado B, López-Moliner J. Gravity and Known Size Calibrate Visual Information to Time Parabolic Trajectories. Front Hum Neurosci 2021; 15:642025. [PMID: 34497497] [PMCID: PMC8420811] [DOI: 10.3389/fnhum.2021.642025]
Abstract
Catching a ball in parabolic flight is a complex task in which the time and area of interception are strongly coupled, making interception possible only for a short period. Although this makes the estimation of time-to-contact (TTC) from visual information in parabolic trajectories very useful, previous attempts to explain our precision in interceptive tasks circumvent the need to estimate TTC to guide action. Obtaining TTC from optical variables alone in parabolic trajectories would imply very complex transformations from 2D retinal images to a 3D layout. Based on previous work, we propose, and show using simulations, that exploiting prior distributions of gravity and known physical size makes these transformations much simpler, enabling predictive capacities from minimal early visual information. Optical information is inherently ambiguous, so it is necessary to explain how meaningful predictions can nonetheless be generated. This is where prior information comes into play: it can be used to interpret and calibrate visual information to yield meaningful predictions of the remaining TTC. The objectives of this work are: (1) to describe the primary sources of information available to the observer in parabolic trajectories; (2) to unveil how prior information can be used to disambiguate the sources of visual information within a Bayesian encoding-decoding framework; (3) to show that such predictions might be robust against complex dynamic environments; and (4) to indicate future lines of research to scrutinize the role of prior knowledge in calibrating visual information and prediction for action control.
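The abstract does not spell out the geometry, but the calibration idea can be illustrated with a toy computation: a known physical size converts retinal angles into metric distances, and known gravity then predicts the remaining flight time. The measurement values below (angular size, gaze elevation, sampling interval) are hypothetical, and this is a sketch of the general logic rather than the authors' model:

```python
import numpy as np

G = 9.81          # gravitational acceleration (m/s^2), assumed known to the observer
BALL_SIZE = 0.22  # physical diameter of the ball (m), assumed known

def distance_from_angle(theta):
    # Known physical size calibrates the retinal angular size into metric distance.
    return BALL_SIZE / (2.0 * np.tan(theta / 2.0))

# Hypothetical retinal measurements at two instants, dt seconds apart:
theta1, elev1 = np.deg2rad(1.00), np.deg2rad(20.0)   # angular size, gaze elevation
theta2, elev2 = np.deg2rad(1.05), np.deg2rad(21.5)
dt = 0.1

d1, d2 = distance_from_angle(theta1), distance_from_angle(theta2)
h1, h2 = d1 * np.sin(elev1), d2 * np.sin(elev2)      # ball height above eye level
vz = (h2 - h1) / dt                                  # vertical velocity estimate

# Gravity then predicts the remaining flight time until the ball falls back
# to eye level (positive root of h2 + vz*t - 0.5*G*t**2 = 0).
t_remaining = (vz + np.sqrt(vz**2 + 2.0 * G * h2)) / G
print(f"distance {d2:.1f} m, remaining flight time {t_remaining:.2f} s")
```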
Affiliation(s)
- Borja Aguado: Vision and Control of Action (VISCA) Group, Department of Cognition, Development and Psychology of Education, Institut de Neurociències, Universitat de Barcelona, Barcelona, Spain
- Joan López-Moliner: Vision and Control of Action (VISCA) Group, Department of Cognition, Development and Psychology of Education, Institut de Neurociències, Universitat de Barcelona, Barcelona, Spain
3
Scarfe P, Glennerster A. Combining cues to judge distance and direction in an immersive virtual reality environment. J Vis 2021; 21:10. [PMID: 33900366] [PMCID: PMC8083085] [DOI: 10.1167/jov.21.4.10]
Abstract
When we move, the visual direction of objects in the environment can change substantially. Compared with our understanding of depth perception, the problem the visual system faces in computing this change is relatively poorly understood. Here, we tested the extent to which participants' judgments of visual direction could be predicted by standard cue combination rules. Participants were tested in virtual reality using a head-mounted display. In a simulated room, they judged the position of an object at one location, before walking to another location in the room and judging, in a second interval, whether an object was at the expected visual direction of the first. By manipulating the scale of the room across intervals, which was subjectively invisible to observers, we put two classes of cue into conflict, one that depends only on visual information and one that uses proprioceptive information to scale any reconstruction of the scene. We find that the sensitivity to changes in one class of cue while keeping the other constant provides a good prediction of performance when both cues vary, consistent with the standard cue combination framework. Nevertheless, by comparing judgments of visual direction with those of distance, we show that judgments of visual direction and distance are mutually inconsistent. We discuss why there is no need for any contradiction between these two conclusions.
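The "standard cue combination rules" invoked here predict two-cue performance from single-cue sensitivities by inverse-variance weighting. A minimal sketch with made-up threshold values (not the paper's data):

```python
# Hypothetical single-cue thresholds (same units): a purely visual cue and a
# cue that uses proprioception to scale the reconstruction of the scene.
sigma_visual, sigma_proprio = 3.0, 4.5

# Inverse-variance (maximum-likelihood) combination.
w_visual = sigma_visual**-2 / (sigma_visual**-2 + sigma_proprio**-2)
w_proprio = 1.0 - w_visual
sigma_combined = (sigma_visual**-2 + sigma_proprio**-2) ** -0.5

# The same weights predict the percept when the two cues conflict.
visual_estimate, proprio_estimate = 0.0, 6.0          # hypothetical conflict
predicted = w_visual * visual_estimate + w_proprio * proprio_estimate

print(f"weights {w_visual:.2f}/{w_proprio:.2f}, "
      f"combined threshold {sigma_combined:.2f}, predicted percept {predicted:.2f}")
```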
4
Evans L, Champion RA, Rushton SK, Montaldi D, Warren PA. Detection of scene-relative object movement and optic flow parsing across the adult lifespan. J Vis 2020; 20:12. [PMID: 32945848] [PMCID: PMC7509779] [DOI: 10.1167/jov.20.9.12]
Abstract
Moving around safely relies critically on our ability to detect object movement. This is made difficult because retinal motion can arise from object movement or from our own movement. Here we investigate the ability to detect scene-relative object movement using a neural mechanism called optic flow parsing, which acts to subtract retinal motion caused by self-movement. Because older observers exhibit marked changes in visual motion processing, we consider performance across a broad age range (N = 30, range: 20–76 years). In Experiment 1 we measured thresholds for reliably discriminating the scene-relative movement direction of a probe presented among three-dimensional objects moving onscreen to simulate observer movement. Performance in this task did not correlate with age, suggesting that the ability to detect scene-relative object movement from retinal information is preserved in ageing. In Experiment 2 we investigated changes in the underlying optic flow parsing mechanism that supports this ability, using a well-established task that measures the magnitude of globally subtracted optic flow. We found strong evidence for a positive correlation between age and global flow subtraction. These data suggest that the ability to identify object movement during self-movement from visual information is preserved in ageing, but that there are changes in the flow parsing mechanism that underpins this ability. We suggest that these changes reflect compensatory processing required to counteract other impairments in the ageing visual system.
Affiliation(s)
- Lucy Evans: Division of Neuroscience & Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
- Rebecca A Champion: Division of Neuroscience & Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
- Daniela Montaldi: Division of Neuroscience & Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
- Paul A Warren: Division of Neuroscience & Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
5
Kozhemiako N, Nunes AS, Samal A, Rana KD, Calabro FJ, Hämäläinen MS, Khan S, Vaina LM. Neural activity underlying the detection of an object movement by an observer during forward self-motion: Dynamic decoding and temporal evolution of directional cortical connectivity. Prog Neurobiol 2020; 195:101824. [PMID: 32446882] [DOI: 10.1016/j.pneurobio.2020.101824]
Abstract
Relatively little is known about how the human brain identifies the movement of objects while the observer is also moving in the environment. This is, ecologically, one of the most fundamental motion processing problems, critical for survival. To study this problem, we used a task involving nine textured spheres moving in depth: eight simulated the observer's forward motion while the ninth, the target, moved independently at a different speed towards or away from the observer. Capitalizing on the high temporal resolution of magnetoencephalography (MEG), we trained a Support Vector Classifier (SVC) on the sensor-level data to identify correct and incorrect responses. Using the same MEG data, we addressed the dynamics of cortical processes involved in the detection of the independently moving object and investigated whether we could obtain confirmatory evidence for the brain activity patterns used by the classifier. Our findings indicate that response correctness could be reliably predicted by the SVC, with the highest accuracy during the blank period after motion and preceding the response. The spatial distribution of the areas critical for correct prediction was similar, but not exclusive, to the areas underlying the evoked activity. Importantly, the SVC identified frontal areas, not otherwise detected with evoked activity, that seem to be important for successful performance in the task. Dynamic connectivity further supported the involvement of frontal and occipital-temporal areas during the task periods. This is the first study to dynamically map cortical areas using a fully data-driven approach in order to investigate the neural mechanisms involved in the detection of moving objects during the observer's self-motion.
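As a rough illustration of the sensor-level decoding described above (time-resolved classification of correct versus incorrect trials with a Support Vector Classifier and cross-validation), the sketch below uses random stand-in data and placeholder dimensions; it is not the authors' pipeline:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 200, 306, 50                 # placeholder dimensions
meg = rng.standard_normal((n_trials, n_sensors, n_times))   # stand-in for MEG epochs
correct = rng.integers(0, 2, n_trials)                      # 1 = correct response

# Decode response correctness separately at each time point.
accuracy = np.zeros(n_times)
for t in range(n_times):
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    accuracy[t] = cross_val_score(clf, meg[:, :, t], correct, cv=5).mean()

print("peak decoding accuracy:", accuracy.max())
```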
Affiliation(s)
- N Kozhemiako: Department of Biomedical Physiology and Kinesiology, Simon Fraser University, Burnaby, BC, Canada; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- A S Nunes: Department of Biomedical Physiology and Kinesiology, Simon Fraser University, Burnaby, BC, Canada; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- A Samal: Departments of Biomedical Engineering, Neurology and the Graduate Program for Neuroscience, Boston University, Boston, MA, USA
- K D Rana: Departments of Biomedical Engineering, Neurology and the Graduate Program for Neuroscience, Boston University, Boston, MA, USA; National Institute of Mental Health, Bethesda, MD, USA
- F J Calabro: Department of Psychiatry and Biomedical Engineering, University of Pittsburgh, PA, USA
- M S Hämäläinen: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA
- S Khan: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA
- L M Vaina: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Departments of Biomedical Engineering, Neurology and the Graduate Program for Neuroscience, Boston University, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
6
Rushton SK, Chen R, Li L. Ability to identify scene-relative object movement is not limited by, or yoked to, ability to perceive heading. J Vis 2018; 18:11. [PMID: 30029224] [DOI: 10.1167/18.6.11]
Abstract
During locomotion, humans can judge where they are heading relative to the scene and the movement of objects within the scene. Both judgments rely on identifying global components of optic flow. What is the relationship between the perception of heading and the identification of object movement during self-movement? Do they rely on a shared mechanism? One way to address these questions is to compare performance on the two tasks. We designed stimuli that allowed direct comparison of the precision of heading and object movement judgments. Across a series of experiments, we found that precision was typically higher when judging scene-relative object movement than when judging heading. We also found that manipulations of the content of the visual scene can change the relative precision of the two judgments. These results demonstrate that the ability to judge scene-relative object movement during self-movement is not limited by, or yoked to, the ability to judge the direction of self-movement.
Affiliation(s)
- Simon K Rushton: School of Psychology, Cardiff University, Cardiff, Wales, UK
- Rongrong Chen: Department of Psychology, The University of Hong Kong, Hong Kong SAR
- Li Li: Department of Psychology, The University of Hong Kong, Hong Kong SAR; Neural Science Program, NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, PRC
7
The Primary Role of Flow Processing in the Identification of Scene-Relative Object Movement. J Neurosci 2017; 38:1737-1743. [PMID: 29229707] [PMCID: PMC5815455] [DOI: 10.1523/jneurosci.3530-16.2017]
Abstract
Retinal image motion could be due to the movement of the observer through space or to the movement of an object relative to the scene. Optic flow, form, and change-of-position cues all provide information that could be used to separate retinal motion due to object movement from retinal motion due to observer movement. In Experiment 1, we used a minimal display to examine the contribution of optic flow and form cues. Human participants indicated the direction of movement of a probe object presented against a background of radially moving pairs of dots. By independently controlling the orientation of each dot pair, we were able to put flow cues to self-movement direction (the point from which all the motion radiated) and form cues to self-movement direction (the point toward which all the dot pairs were oriented) in conflict. We found that only flow cues influenced perceived probe movement. In Experiment 2, we switched to a rich stereo display composed of 3D objects to examine the contribution of flow and position cues. We moved the scene objects to simulate a lateral translation and counter-rotation of gaze. By changing the polarity of the scene objects (from light to dark and vice versa) between frames, we placed flow cues to self-movement direction in opposition to change-of-position cues. We found that, again, flow cues dominated the perceived probe movement relative to the scene. Together, these experiments indicate that the neural network that processes optic flow has a primary role in the identification of scene-relative object movement. SIGNIFICANCE STATEMENT Motion of an object in the retinal image indicates relative movement between the observer and the object, but it does not indicate its cause: movement of an object in the scene, movement of the observer, or both. To isolate retinal motion due to movement of a scene object, the brain must parse out the retinal motion due to movement of the eye (“flow parsing”). Optic flow, form, and position cues all have potential roles in this process. We pitted the cues against each other and assessed their influence. We found that flow parsing relies on optic flow alone. These results indicate the primary role of the neural network that processes optic flow in the identification of scene-relative object movement.
8
Rogers C, Rushton SK, Warren PA. Peripheral Visual Cues Contribute to the Perception of Object Movement During Self-Movement. Iperception 2017; 8:2041669517736072. [PMID: 29201335] [PMCID: PMC5700793] [DOI: 10.1177/2041669517736072]
Abstract
Safe movement through the environment requires us to monitor our surroundings for moving objects or people. However, identification of moving objects in the scene is complicated by self-movement, which adds motion across the retina. To identify world-relative object movement, the brain thus has to ‘compensate for’ or ‘parse out’ the components of retinal motion that are due to self-movement. We have previously demonstrated that retinal cues arising from central vision contribute to solving this problem. Here, we investigate the contribution of peripheral vision, commonly thought to provide strong cues to self-movement. Stationary participants viewed a large field of view display, with radial flow patterns presented in the periphery, and judged the trajectory of a centrally presented probe. Across two experiments, we demonstrate and quantify the contribution of peripheral optic flow to flow parsing during forward and backward movement.
Affiliation(s)
- Paul A Warren: Division of Neuroscience and Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, UK
9
Niehorster DC, Li L, Lappe M. The Accuracy and Precision of Position and Orientation Tracking in the HTC Vive Virtual Reality System for Scientific Research. Iperception 2017; 8:2041669517708205. [PMID: 28567271] [PMCID: PMC5439658] [DOI: 10.1177/2041669517708205]
Abstract
The advent of inexpensive consumer virtual reality equipment enables many more researchers to study perception with naturally moving observers. One such system, the HTC Vive, offers a large field-of-view, high-resolution head mounted display together with a room-scale tracking system for less than a thousand U.S. dollars. If the position and orientation tracking of this system is of sufficient accuracy and precision, it could be suitable for much research that is currently done with far more expensive systems. Here we present a quantitative test of the HTC Vive’s position and orientation tracking as well as its end-to-end system latency. We report that while the precision of the Vive’s tracking measurements is high and its system latency (22 ms) is low, its position and orientation measurements are provided in a coordinate system that is tilted with respect to the physical ground plane. Because large changes in offset were found whenever tracking was briefly lost, it cannot be corrected for with a one-time calibration procedure. We conclude that the varying offset between the virtual and the physical tracking space makes the HTC Vive at present unsuitable for scientific experiments that require accurate visual stimulation of self-motion through a virtual world. It may however be suited for other experiments that do not have this requirement.
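One way to quantify the kind of tilt reported above is to record tracker positions while the device is moved over the physical floor and fit a plane to the samples; the angle between the fitted normal and the tracker's vertical axis is the tilt. The sketch below uses simulated samples and is not the authors' procedure:

```python
import numpy as np

# Simulated samples from a tracked device slid across a flat physical floor,
# reported in a coordinate frame tilted by about 1 degree (plus ~1 mm noise).
rng = np.random.default_rng(1)
floor_xz = rng.uniform(-2.0, 2.0, size=(500, 2))
tilt = np.deg2rad(1.0)
points = np.column_stack([
    floor_xz[:, 0],
    floor_xz[:, 1] * np.sin(tilt) + rng.normal(0.0, 0.001, 500),
    floor_xz[:, 1] * np.cos(tilt),
])

# Fit a plane by SVD: the normal is the direction of smallest variance.
centered = points - points.mean(axis=0)
normal = np.linalg.svd(centered, full_matrices=False)[2][-1]

tilt_estimate = np.degrees(np.arccos(abs(normal @ np.array([0.0, 1.0, 0.0]))))
print(f"estimated tilt of the tracking space: {tilt_estimate:.2f} deg")
```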
Affiliation(s)
- Li Li: Neural Science Program, New York University Shanghai, China; Department of Psychology, The University of Hong Kong, Hong Kong
- Markus Lappe: Institute for Psychology, University of Muenster, Germany; Otto-Creutzfeldt Center for Cognitive and Behavioural Neuroscience, University of Muenster, Germany
10
Niehorster DC, Li L. Accuracy and Tuning of Flow Parsing for Visual Perception of Object Motion During Self-Motion. Iperception 2017; 8:2041669517708206. [PMID: 28567272] [PMCID: PMC5439648] [DOI: 10.1177/2041669517708206]
Abstract
How do we perceive object motion during self-motion using visual information alone? Previous studies have reported that the visual system can use optic flow to identify and globally subtract the retinal motion component resulting from self-motion to recover scene-relative object motion, a process called flow parsing. In this article, we developed a retinal motion nulling method to directly measure and quantify the magnitude of flow parsing (i.e., flow parsing gain) in various scenarios, to examine the accuracy and tuning of flow parsing for the visual perception of object motion during self-motion. We found that flow parsing gains were below unity for all displays in all experiments, and that increasing self-motion and object motion speed did not alter flow parsing gain. We conclude that visual information alone is not sufficient for the accurate perception of scene-relative motion during self-motion. Although flow parsing performs global subtraction, its accuracy also depends on local motion information in the retinal vicinity of the moving object. Furthermore, the flow parsing gain was constant across common self-motion or object motion speeds. These results can be used to inform and validate computational models of flow parsing.
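A brief illustration of the gain measure defined above: if simulated self-motion produces a known local flow velocity at the probe's location, the flow parsing gain is the fraction of that velocity the visual system discounts, estimated here from a hypothetical nulling setting (numbers invented for illustration):

```python
# Hypothetical values illustrating the flow parsing gain measure.
flow_speed = 4.0      # deg/s: local flow at the probe location due to simulated self-motion
nulling_speed = 3.1   # deg/s: retinal motion needed to null the self-motion-induced component

# Gain = fraction of the self-motion component that the visual system subtracts;
# a gain of 1.0 would correspond to complete (accurate) flow parsing.
flow_parsing_gain = nulling_speed / flow_speed
print(f"flow parsing gain = {flow_parsing_gain:.2f}")
```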
Affiliation(s)
- Li Li: Department of Psychology, The University of Hong Kong, Pokfulam, Hong Kong; Neural Science Program, NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, China
11
Affiliation(s)
- Andrew Glennerster: Department of Psychology, School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
12
Pelah A, Barbur J, Thurrell A, Hock HS. The coupling of vision with locomotion in cortical blindness. Vision Res 2014; 110:286-94. [PMID: 24832646] [DOI: 10.1016/j.visres.2014.04.015]
Abstract
Maintaining or modifying the speed and direction of locomotion requires the coupling of the locomotion with the retinal optic flow that it generates. It is shown that this essential behavioral capability, which requires on-line neural control, is preserved in the cortically blind hemifield of a hemianope. In experiments, optic flow stimuli were presented to either the normal or blind hemifield while the patient was walking on a treadmill. Little difference was found between the hemifields with respect to the coupling (i.e. co-dependency) of optic flow detection with locomotion. Even in the cortically blind hemifield, faster walking resulted in the perceptual slowing of detected optic flow, and self-selected locomotion speeds demonstrated behavioral discrimination between different optic flow speeds. The results indicate that the processing of optic flow, and thereby on-line visuo-locomotor coupling, can take place along neural pathways that function without processing in Area V1, and thus in the absence of conscious intervention. These and earlier findings suggest that optic flow and object motion are processed in parallel along with correlated non-visual locomotion signals. Extrastriate interactions may be responsible for discounting the optical effects of locomotion on the perceived direction of object motion, and maintaining visually guided self-motion.
Affiliation(s)
- Adar Pelah: Department of Electronics, University of York, York YO10 5DD, UK
- John Barbur: School of Health Sciences, City University London, London EC1V 0HB, UK
- Adrian Thurrell: Girton College, University of Cambridge, Cambridge CB3 0JG, UK
- Howard S Hock: Department of Psychology, The Center for Complex Systems and Brain Science, Florida Atlantic University, Boca Raton, FL 33486, USA
13
Joint representation of depth from motion parallax and binocular disparity cues in macaque area MT. J Neurosci 2013; 33:14061-74, 14074a. [PMID: 23986242] [DOI: 10.1523/jneurosci.0251-13.2013]
Abstract
Perception of depth is based on a variety of cues, with binocular disparity and motion parallax generally providing more precise depth information than pictorial cues. Much is known about how neurons in visual cortex represent depth from binocular disparity or motion parallax, but little is known about the joint neural representation of these depth cues. We recently described neurons in the middle temporal (MT) area that signal depth sign (near vs far) from motion parallax; here, we examine whether and how these neurons also signal depth from binocular disparity. We find that most MT neurons in rhesus monkeys (Macaca mulatta) are selective for depth sign based on both disparity and motion parallax cues. However, the depth-sign preferences (near or far) are not always aligned: 56% of MT neurons have matched depth-sign preferences ("congruent" cells) whereas the remaining 44% of neurons prefer near depth from motion parallax and far depth from disparity, or vice versa ("opposite" cells). For congruent cells, depth-sign selectivity increases when disparity cues are added to motion parallax, but this enhancement does not occur for opposite cells. This suggests that congruent cells might contribute to perceptual integration of depth cues. We also found that neurons are clustered in MT according to their depth tuning based on motion parallax, similar to the known clustering of MT neurons for binocular disparity. Together, these findings suggest that area MT is involved in constructing a representation of 3D scene structure that takes advantage of multiple depth cues available to mobile observers.
14
Visual and non-visual contributions to the perception of object motion during self-motion. PLoS One 2013; 8:e55446. [PMID: 23408983] [PMCID: PMC3567075] [DOI: 10.1371/journal.pone.0055446]
Abstract
Many locomotor tasks involve interactions with moving objects. When observer (i.e., self-)motion is accompanied by object motion, the optic flow field includes a component due to self-motion and a component due to object motion. For moving observers to perceive the movement of other objects relative to the stationary environment, the visual system could recover the object-motion component - that is, it could factor out the influence of self-motion. In principle, this could be achieved using visual self-motion information, non-visual self-motion information, or a combination of both. In this study, we report evidence that visual information about the speed (experiment 1) and direction (experiment 2) of self-motion plays a role in recovering the object-motion component even when non-visual self-motion information is also available. However, the magnitude of the effect was less than one would expect if subjects relied entirely on visual self-motion information. Taken together with previous studies, we conclude that when self-motion is real and actively generated, both visual and non-visual self-motion information contribute to the perception of object motion. We also consider the possible role of this process in visually guided interception and avoidance of moving objects.
15
Hibbard PB, Goutcher R, O'Kane LM, Scarfe P. Misperception of aspect ratio in binocularly viewed surfaces. Vision Res 2012; 70:34-43. [PMID: 22925917] [DOI: 10.1016/j.visres.2012.08.003]
Abstract
The horizontal-vertical illusion, in which extent in the vertical dimension is overestimated relative to the horizontal, has been explained in terms of the statistical relationship between the lengths of lines in the world and the lengths of their projections onto the retina (Howe & Purves, 2002). The current study shows that this illusion affects the apparent aspect ratio of shapes, and investigates how it interacts with binocular cues to surface slant. One way in which statistical information could give rise to the horizontal-vertical illusion would be through prior assumptions about the distribution of slant. This prior would then be expected to interact with retinal cues to slant. We determined the aspect ratio of stereoscopically viewed ellipses that appeared circular. We show that observers' judgements of aspect ratio were affected by surface slant, but that the largest vertical:horizontal image aspect ratio still perceived as a surface with a circular profile was always found for surfaces close to fronto-parallel. This is not consistent with a Bayesian model in which the horizontal-vertical illusion arises from a non-uniform prior probability distribution for slant. Rather, we suggest that assumptions about the slant of surfaces affect apparent aspect ratio in a manner that is more heuristic, and partially dissociated from apparent slant.
Affiliation(s)
- Paul B Hibbard: School of Psychology, University of St. Andrews, St. Mary's Quad, St. Andrews, Fife, UK
16
Jain A, Backus BT. Experience affects the use of ego-motion signals during 3D shape perception. J Vis 2010; 10(14):30. [PMID: 21191132] [DOI: 10.1167/10.14.30]
Abstract
Experience has long-term effects on perceptual appearance (Q. Haijiang, J. A. Saunders, R. W. Stone, & B. T. Backus, 2006). We asked whether experience affects the appearance of structure-from-motion stimuli when the optic flow is caused by observer ego-motion. Optic flow is an ambiguous depth cue: a rotating object and its oppositely rotating, depth-inverted dual generate similar flow. However, the visual system exploits ego-motion signals to prefer the percept of an object that is stationary over one that rotates (M. Wexler, F. Panerai, I. Lamouret, & J. Droulez, 2001). We replicated this finding and asked whether this preference for stationarity, the "stationarity prior," is modulated by experience. During training, two groups of observers were exposed to objects with identical flow, but that were either stationary or moving as determined by other cues. The training caused identical test stimuli to be seen preferentially as stationary or moving by the two groups, respectively. We then asked whether different priors can exist independently at different locations in the visual field. Observers were trained to see objects either as stationary or as moving at two different locations. Observers' stationarity bias at the two respective locations was modulated in the directions consistent with training. Thus, the utilization of extraretinal ego-motion signals for disambiguating optic flow signals can be updated as the result of experience, consistent with the updating of a Bayesian prior for stationarity.
Affiliation(s)
- Anshul Jain: SUNY Eye Institute and Graduate Center for Vision Research, SUNY College of Optometry, New York, NY 10036, USA
17
Svarverud E, Gilson SJ, Glennerster A. Cue combination for 3D location judgements. J Vis 2010; 10:5.1-13. [PMID: 20143898] [DOI: 10.1167/10.1.5]
Abstract
Cue combination rules have often been applied to the perception of surface shape but not to judgements of object location. Here, we used immersive virtual reality to explore the relationship between different cues to distance. Participants viewed a virtual scene and judged the change in distance of an object presented in two intervals, where the scene changed in size between intervals (by a factor of between 0.25 and 4). We measured thresholds for detecting a change in object distance when there were only 'physical' (stereo and motion parallax) or 'texture-based' cues (independent of the scale of the scene) and used these to predict biases in a distance matching task. Under a range of conditions, in which the viewing distance and position of the target relative to other objects was varied, the ratio of 'physical' to 'texture-based' thresholds was a good predictor of biases in the distance matching task. The cue combination approach, which successfully accounts for our data, relies on quite different principles from those underlying traditional models of 3D reconstruction.
Affiliation(s)
- Ellen Svarverud: School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
18
Warren PA, Rushton SK. Perception of scene-relative object movement: Optic flow parsing and the contribution of monocular depth cues. Vision Res 2009; 49:1406-19. [PMID: 19480063] [DOI: 10.1016/j.visres.2009.01.016]
Abstract
We have recently suggested that the brain uses its sensitivity to optic flow in order to parse retinal motion into components arising due to self and object movement (e.g. Rushton, S. K., & Warren, P. A. (2005). Moving observers, 3D relative motion and the detection of object movement. Current Biology, 15, R542-R543). Here, we explore whether stereo disparity is necessary for flow parsing or whether other sources of depth information, which could theoretically constrain flow-field interpretation, are sufficient. Stationary observers viewed large field of view stimuli containing textured cubes, moving in a manner that was consistent with a complex observer movement through a stationary scene. Observers made speeded responses to report the perceived direction of movement of a probe object presented at different depths in the scene. Across conditions we varied the presence or absence of different binocular and monocular cues to depth order. In line with previous studies, results consistent with flow parsing (in terms of both perceived direction and response time) were found in the condition in which motion parallax and stereoscopic disparity were present. Observers were poorer at judging object movement when depth order was specified by parallax alone. However, as more monocular depth cues were added to the stimulus the results approached those found when the scene contained stereoscopic cues. We conclude that both monocular and binocular static depth information contribute to flow parsing. These findings are discussed in the context of potential architectures for a model of the flow parsing mechanism.
Affiliation(s)
- Paul A Warren: School of Psychology and Communications Research Centre, Cardiff University, Cardiff CF10 3AT, Wales, UK
19
Optic flow processing for the assessment of object movement during ego movement. Curr Biol 2009; 19:1555-60. [PMID: 19699091] [DOI: 10.1016/j.cub.2009.07.057]
Abstract
The vast majority of research on optic flow (retinal motion arising from observer movement) has focused on its use in heading recovery and the guidance of locomotion. Here we demonstrate that optic flow processing has an important role in the detection and estimation of scene-relative object movement during self movement. To do this, the brain identifies and globally discounts (i.e., subtracts) optic flow patterns across the visual scene, a process called flow parsing. Remaining motion can then be attributed to other objects in the scene. In two experiments, stationary observers viewed radial expansion flow fields and a moving probe at various onscreen locations. Consistent with global discounting, perceived probe motion had a significant component toward the center of the display, and the magnitude of this component increased with probe eccentricity. The contribution of local motion processing to this effect was small compared to that of global processing (Experiment 1). Furthermore, global discounting was clearly implicated because these effects persisted even when all the flow in the hemifield containing the probe was removed (Experiment 2). Global processing of optic flow information is thus shown to play a fundamental role in the recovery of object movement during ego movement.
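The global-discounting account described above makes a simple quantitative prediction for a radial expansion field: the flow at a probe's location grows with eccentricity, so the component of perceived probe motion toward the display centre should grow with eccentricity too. A minimal sketch with an illustrative expansion rate (not the paper's stimulus parameters):

```python
import numpy as np

expansion_rate = 0.5   # 1/s: flow speed grows linearly with distance from the display centre

def local_flow(position):
    # Flow vector of a uniform radial expansion at a screen position (deg from centre).
    return expansion_rate * np.asarray(position, dtype=float)

def perceived_probe_motion(retinal_motion, position):
    # Global discounting: subtract the flow attributable to self-movement.
    return np.asarray(retinal_motion, dtype=float) - local_flow(position)

# A probe moving straight upward on the screen, shown at two eccentricities:
for ecc in (5.0, 15.0):
    percept = perceived_probe_motion([0.0, 2.0], [ecc, 0.0])
    print(f"eccentricity {ecc:4.0f} deg -> perceived velocity {percept}, "
          f"component toward centre {-percept[0]:.1f} deg/s")
```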
20
Umemura H, Watanabe H. Interpretation of optic flows synchronized with observer's hand movements. Vision Res 2009; 49:834-42. [DOI: 10.1016/j.visres.2009.02.020]
21
Gilson SJ, Fitzgibbon AW, Glennerster A. Spatial calibration of an optical see-through head-mounted display. J Neurosci Methods 2008; 173:140-6. [PMID: 18599125] [DOI: 10.1016/j.jneumeth.2008.05.015]
Abstract
We present here a method for calibrating an optical see-through head-mounted display (HMD) using techniques usually applied to camera calibration (photogrammetry). Using a camera placed inside the HMD to take pictures simultaneously of a tracked object and features in the HMD display, we could exploit established camera calibration techniques to recover both the intrinsic and extrinsic properties of the HMD (width, height, focal length, optic centre and principal ray of the display). Our method gives low re-projection errors and, unlike existing methods, involves no time-consuming and error-prone human measurements, nor any prior estimates about the HMD geometry.
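The photogrammetric idea described above can be sketched with a standard camera-calibration routine: given correspondences between known 3D reference points and their imaged positions, the intrinsic parameters (focal length, optic centre) are recovered by minimising re-projection error. The synthetic camera, grid, and poses below are stand-ins for illustration, not the authors' calibration data or code:

```python
import numpy as np
import cv2

image_size = (1280, 1024)
K_true = np.array([[1100.0, 0.0, 640.0],    # synthetic "true" intrinsics used only
                   [0.0, 1100.0, 512.0],    # to generate the correspondences below
                   [0.0, 0.0, 1.0]])

# Planar grid of reference points (e.g. a tracked calibration target), in metres.
grid = np.array([[x, y, 0.0] for y in range(6) for x in range(9)], np.float32) * 0.03

obj_points, img_points = [], []
for rx, tz in [(0.0, 0.5), (0.15, 0.6), (-0.1, 0.55), (0.2, 0.7)]:   # a few poses
    rvec = np.array([rx, 0.05, 0.0])
    tvec = np.array([-0.1, -0.08, tz])
    projected, _ = cv2.projectPoints(grid, rvec, tvec, K_true, None)
    obj_points.append(grid)
    img_points.append(projected.astype(np.float32))

# Recover the intrinsics from the correspondences by minimising re-projection error.
err, K_est, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("re-projection error (px):", err)
print("focal length and optic centre:", K_est[0, 0], K_est[0, 2], K_est[1, 2])
```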
Affiliation(s)
- Stuart J Gilson: Department of Physiology, Anatomy and Genetics, Parks Road, Oxford OX1 3PT, United Kingdom
22
How soccer players head the ball: a test of Optic Acceleration Cancellation theory with virtual reality. Vision Res 2008; 48:1479-87. [PMID: 18472123] [DOI: 10.1016/j.visres.2008.03.016]
Abstract
We measured the movements of soccer players heading a football in a fully immersive virtual reality environment. In mid-flight the ball's trajectory was altered from its normal quasi-parabolic path to a linear one, producing a jump in the rate of change of the angle of elevation of gaze (alpha) from player to ball. One reaction time later the players adjusted their speed so that the rate of change of alpha increased when the disturbance had reduced it, and decreased when the disturbance had increased it. Since the result of the players' movements was to regain a value of the rate of change close to that before the disturbance, the data suggest that the players have an expectation of, and memory for, the pattern that the rate of change of alpha will follow during the flight. The results support the general claim that players intercepting balls use servo control strategies, and are consistent with the particular claim of Optic Acceleration Cancellation theory that the servo strategy is to allow alpha to increase at a steadily decreasing rate.
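For readers unfamiliar with the control variable, the sketch below computes the gaze elevation angle alpha and its rate of change for an illustrative parabolic flight arriving at eye height where the player stands; the launch parameters are invented, and the code only illustrates the pattern that Optic Acceleration Cancellation says the player tries to maintain:

```python
import numpy as np

g, eye_height = 9.81, 1.6
v0, launch = 20.0, np.deg2rad(55.0)       # illustrative launch speed and angle
vy, vx = v0 * np.sin(launch), v0 * np.cos(launch)

t_catch = (vy + np.sqrt(vy**2 - 2.0 * g * eye_height)) / g   # ball back at eye height
start_dist = vx * t_catch                                    # player waits at that point

t = np.linspace(0.2 * t_catch, 0.98 * t_catch, 200)
height = vy * t - 0.5 * g * t**2
dist = start_dist - vx * t

alpha = np.arctan2(height - eye_height, dist)   # elevation of gaze from player to ball
alpha_rate = np.gradient(alpha, t)              # the quantity the perturbation disturbs

# For a player already at the interception point, alpha keeps rising at a steadily
# decreasing rate; the theory says players move so as to restore this pattern.
print("alpha rising:", bool(np.all(alpha_rate > 0)),
      "| rate decreasing:", bool(np.all(np.diff(alpha_rate) < 0)))
```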
23
Wu B, He ZJ, Ooi TL. Inaccurate representation of the ground surface beyond a texture boundary. Perception 2007; 36:703-21. [PMID: 17624117] [PMCID: PMC4000708] [DOI: 10.1068/p5693]
Abstract
The sequential-surface-integration-process (SSIP) hypothesis was proposed to elucidate how the visual system constructs the ground-surface representation in the intermediate distance range (He et al, 2004 Perception 33 789-806). According to the hypothesis, the SSIP constructs an accurate representation of the near ground surface by using reliable near depth cues. The near ground representation then serves as a template for integrating the adjacent surface patch by using the texture gradient information as the predominant depth cue. By sequentially integrating the surface patches from near to far, the visual system obtains the global ground representation. A critical prediction of the SSIP hypothesis is that, when an abrupt texture-gradient change exists between the near and far ground surfaces, the SSIP can no longer accurately represent the far surface. Consequently, the representation of the far surface will be slanted upward toward the frontoparallel plane (owing to the intrinsic bias of the visual system), and the egocentric distance of a target on the far surface will be underestimated. Our previous findings in the real 3-D environment have shown that observers underestimated the target distance across a texture boundary. Here, we used the virtual-reality system to first test distance judgments with a distance-matching task. We created the texture boundary by having virtual grass- and cobblestone-textured patterns abutting on a flat (horizontal) ground surface in experiment 1, and by placing a brick wall to interrupt the continuous texture gradient of a flat grass surface in experiment 2. In both instances, observers underestimated the target distance across the texture boundary, compared to the homogeneous-texture ground surface (control). Second, we tested the proposal that the far surface beyond the texture boundary is perceived as slanted upward. For this, we used a virtual checkerboard-textured ground surface that was interrupted by a texture boundary. We found that not only was the target distance beyond the texture boundary underestimated relative to the homogeneous-texture condition, but the far surface beyond the texture boundary was also perceived as relatively slanted upward (experiment 3). Altogether, our results confirm the predictions of the SSIP hypothesis.
Affiliation(s)
- Bing Wu: Department of Psychological and Brain Sciences, University of Louisville, Louisville, KY 40292, USA
- Zijiang J He: Department of Psychological and Brain Sciences, University of Louisville, Louisville, KY 40292, USA
- Teng Leng Ooi: Department of Basic Sciences, Pennsylvania College of Optometry, Elkins Park, PA 19027, USA
24
Gilson SJ, Fitzgibbon AW, Glennerster A. Quantitative analysis of accuracy of an inertial/acoustic 6DOF tracking system in motion. J Neurosci Methods 2006; 154:175-82. [PMID: 16448700] [PMCID: PMC2816816] [DOI: 10.1016/j.jneumeth.2005.12.013]
Abstract
An increasing number of neuroscience experiments are using virtual reality to provide a more immersive and less artificial experimental environment. This is particularly useful to navigation and three-dimensional scene perception experiments. Such experiments require accurate real-time tracking of the observer's head in order to render the virtual scene. Here, we present data on the accuracy of a commonly used six degrees of freedom tracker (Intersense IS900) when it is moved in ways typical of virtual reality applications. We compared the reported location of the tracker with its location computed by an optical tracking method. When the tracker was stationary, the root mean square error in spatial accuracy was 0.64 mm. However, we found that errors increased over ten-fold (up to 17 mm) when the tracker moved at speeds common in virtual reality applications. We demonstrate that the errors we report here are predominantly due to inaccuracies of the IS900 system rather than the optical tracking against which it was compared.
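For reference, the headline figure quoted above is a root-mean-square (RMS) positional error: the RMS of the distances between positions reported by the tracker under test and those recovered by the optical reference, assuming the two streams are time-aligned and expressed in the same coordinate frame. A sketch with stand-in data:

```python
import numpy as np

# Stand-in position streams (N x 3, metres): tracker under test vs. optical reference.
rng = np.random.default_rng(2)
reference = rng.uniform(-1.0, 1.0, size=(1000, 3))
tracker = reference + rng.normal(0.0, 0.0006, size=reference.shape)   # ~0.6 mm noise

errors = np.linalg.norm(tracker - reference, axis=1)   # per-sample 3D distance
rms_error_mm = 1000.0 * np.sqrt(np.mean(errors**2))
print(f"RMS positional error: {rms_error_mm:.2f} mm")
```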
25
Glennerster A, Tcheang L, Gilson SJ, Fitzgibbon AW, Parker AJ. Humans ignore motion and stereo cues in favor of a fictional stable world. Curr Biol 2006; 16:428-32. [PMID: 16488879] [PMCID: PMC2833396] [DOI: 10.1016/j.cub.2006.01.019]
Abstract
As we move through the world, our eyes acquire a sequence of images. The information from this sequence is sufficient to determine the structure of a three-dimensional scene, up to a scale factor determined by the distance that the eyes have moved. Previous evidence shows that the human visual system accounts for the distance the observer has walked and the separation of the eyes when judging the scale, shape, and distance of objects. However, in an immersive virtual-reality environment, observers failed to notice when a scene expanded or contracted, despite having consistent information about scale from both distance walked and binocular vision. This failure led to large errors in judging the size of objects. The pattern of errors cannot be explained by assuming a visual reconstruction of the scene with an incorrect estimate of interocular separation or distance walked. Instead, it is consistent with a Bayesian model of cue integration in which the efficacy of motion and disparity cues is greater at near viewing distances. Our results imply that observers are more willing to adjust their estimate of interocular separation or distance walked than to accept that the scene has changed in size.
Affiliation(s)
- Andrew Glennerster: Department of Physiology, Anatomy and Genetics, Sherrington Building, University of Oxford, Parks Road, Oxford OX1 3PT, United Kingdom
26
Scarfe P, Hibbard PB. Disparity-defined objects moving in depth do not elicit three-dimensional shape constancy. Vision Res 2005; 46:1599-610. [PMID: 16364392] [DOI: 10.1016/j.visres.2005.11.002]
Abstract
Observers generally fail to recover three-dimensional shape accurately from binocular disparity. Typically, depth is overestimated at near distances and underestimated at far distances [Johnston, E. B. (1991). Systematic distortions of shape from stereopsis. Vision Research, 31, 1351-1360]. A simple prediction from this is that disparity-defined objects should appear to expand in depth when moving towards the observer, and compress in depth when moving away. However, additional information is provided when an object moves from which 3D Euclidean shape can be recovered, be this through the addition of structure from motion information [Richards, W. (1985). Structure from stereo and motion. Journal of the Optical Society of America A, 2, 343-349], or the use of non-generic strategies [Todd, J. T., & Norman, J. F. (2003). The visual perception of 3-D shape from multiple cues: Are observers capable of perceiving metric structure? Perception and Psychophysics, 65, 31-47]. Here, we investigated shape constancy for objects moving in depth. We found that to be perceived as constant in shape, objects needed to contract in depth when moving toward the observer, and expand in depth when moving away, countering the effects of incorrect distance scaling (Johnston, 1991). This is a striking example of the failure of shape constancy, but one that is predicted if observers neither accurately estimate object distance in order to recover Euclidean shape, nor are able to base their responses on a simpler processing strategy.
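The incorrect-distance-scaling account referred to above (Johnston, 1991) can be made concrete with the standard small-angle relation between relative disparity, depth, and viewing distance. The distance estimates below are illustrative only; the sketch simply shows how compressed distance estimates produce depth overestimation at near distances and underestimation at far ones:

```python
# Small-angle relation: relative disparity = iod * depth / distance**2, so
# recovering depth requires scaling the measured disparity by an estimate of
# the viewing distance.
iod = 0.065        # interocular separation (m)
true_depth = 0.10  # physical front-to-back extent of the object (m)

def perceived_depth(true_distance, estimated_distance):
    disparity = iod * true_depth / true_distance**2       # what the eyes measure
    return disparity * estimated_distance**2 / iod        # what misscaled recovery yields

# Illustrative compressed distance estimates: near overestimated, far underestimated.
for d_true, d_est in [(0.5, 0.65), (2.0, 1.4)]:
    print(f"viewing distance {d_true} m -> perceived depth "
          f"{100 * perceived_depth(d_true, d_est):.1f} cm (physical {100 * true_depth:.0f} cm)")
```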
Affiliation(s)
- P Scarfe: School of Psychology, University of St. Andrews, St. Andrews, Fife KY16 9JP, UK