1. The role of eye movements in perceiving vehicle speed and time-to-arrival at the roadside. Sci Rep 2021;11:23312. PMID: 34857779; PMCID: PMC8640052; DOI: 10.1038/s41598-021-02412-x.
Abstract
To avoid collisions, pedestrians depend on their ability to perceive and interpret the visual motion of other road users. Eye movements influence motion perception, yet pedestrians' gaze behavior has been little investigated. In the present study, we ask whether observers sample visual information differently when making two types of judgements based on the same virtual road-crossing scenario, and to what extent spontaneous gaze behavior affects those judgements. Participants performed in succession a speed and a time-to-arrival two-interval discrimination task on the same simple traffic scenario: a car approaching at a constant speed (varying from 10 to 90 km/h) on a single-lane road. On average, observers were able to discriminate vehicle speeds of around 18 km/h and times-to-arrival of 0.7 s. In both tasks, observers placed their gaze close to the center of the vehicle's front plane while pursuing the vehicle. Other areas of the visual scene were sampled infrequently. No differences were found in average gaze behavior between the two tasks, and a pattern classifier (a support vector machine) trained on trial-level gaze patterns failed to reliably classify the task from the spontaneous eye movements it elicited. Saccadic gaze behavior could predict time-to-arrival discrimination performance, demonstrating the relevance of gaze behavior for perceptual sensitivity in road-crossing.
2. Velocity perception in a moving observer. Vision Res 2017;138:12-17. PMID: 28687325; DOI: 10.1016/j.visres.2017.06.001.
Abstract
Previous research has shown that when a moving stimulus is presented to a moving observer, the perceived speed of the stimulus is affected by vestibular self-motion signals (Hogendoorn, Verstraten, MacDougall, & Alais, 2017. Vision Research 130, 22-30). This interaction was interpreted as a weighted sum of visual and vestibular motion signals. Here, we test the predictions of this interpretation in two experiments. In Experiment 1, moving observers carried out a visual speed discrimination task in order to establish points of subjective equality (PSE) between stimuli presented in the same or opposite direction as self-motion. We observed robust effects of self-motion on perceived speed, with self-motion in the same direction as visual motion increasing perceived speed and vice versa. These effects were well described by a limited-width integration window. In Experiment 2, the same observers carried out another speed discrimination task in order to establish discrimination thresholds. According to the Weber-Fechner law, these thresholds are expected to increase or decrease along with perceived speed. However, no effect of self-motion on discrimination thresholds was observed. This pattern of results suggests a limit on speed discrimination performance early in the visual system, with visuo-vestibular integration occurring in later downstream areas. These results are consistent with previous work on heading perception.
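PSEs and discrimination thresholds like those reported in this entry are typically estimated by fitting a psychometric function to the choice proportions. A minimal sketch with made-up data and a cumulative-Gaussian fit (an illustration of the general method, not the authors' analysis code):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, pse, sigma):
    # Cumulative Gaussian: probability of judging the test stimulus faster
    return norm.cdf(x, loc=pse, scale=sigma)

# Hypothetical responses: test speeds (deg/s) vs. proportion "test faster"
speeds = np.array([4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0])
p_faster = np.array([0.02, 0.10, 0.30, 0.55, 0.80, 0.95, 0.99])

(pse, sigma), _ = curve_fit(psychometric, speeds, p_faster, p0=[7.0, 1.0])

# PSE: test speed perceived as equal to the standard.
# Threshold: one sigma of the fitted function (the 50%-to-84% distance).
print(f"PSE = {pse:.2f} deg/s, threshold = {sigma:.2f} deg/s")
```

A shift of the fitted PSE between same-direction and opposite-direction conditions quantifies the self-motion bias; a change in sigma would indicate an effect on discrimination thresholds.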
3. Agaoglu MN, Herzog MH, Öğmen H. The effective reference frame in perceptual judgments of motion direction. Vision Res 2015;107:101-12. PMID: 25536467; DOI: 10.1016/j.visres.2014.12.009.
Affiliation(s)
- Mehmet N Agaoglu
- Department of Electrical and Computer Engineering, University of Houston, N308 Engineering Building 1, Houston, TX 77204-4005, USA; Center for Neuro-Engineering and Cognitive Science, University of Houston, Houston, TX 77204-4005, USA
- Michael H Herzog
- Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Haluk Öğmen
- Department of Electrical and Computer Engineering, University of Houston, N308 Engineering Building 1, Houston, TX 77204-4005, USA; Center for Neuro-Engineering and Cognitive Science, University of Houston, Houston, TX 77204-4005, USA
4. Birkeland A, Turkay C, Viola I. Perceptually Uniform Motion Space. IEEE Transactions on Visualization and Computer Graphics 2014;20:1542-1554. PMID: 26355333; DOI: 10.1109/tvcg.2014.2322363.
Abstract
Flow data is often visualized by animated particles inserted into a flow field. The velocity of a particle on the screen is typically linearly scaled by the velocities in the data, but the perception of velocity magnitude in animated particles is not necessarily linear. We present a study of how four parameters affect relative motion perception: speed multiplier, direction, contrast type, and global velocity scale. In addition, we investigated whether multiple motion cues and point distribution affect speed estimation. Several studies were conducted to investigate the impact of each parameter. In the initial results, we noticed trends in scale and multiplier. Using the trends for the significant parameters, we designed a compensation model that adjusts the particle speed to compensate for the effect of the parameters. We then performed a second study to investigate the performance of the compensation model, from which we detected a constant estimation error that we adjusted for in the last study. Finally, we connect our work to established theories in psychophysics by comparing our model to a model based on Stevens' Power Law.
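The compensation idea can be illustrated with Stevens' power law: if perceived speed grows as v^a with a < 1, pre-distorting the on-screen speed makes perception scale linearly with the data. A toy sketch, with a made-up exponent rather than any value fitted in the study:

```python
def perceived_speed(v, a=0.8, k=1.0):
    # Stevens' power law: perceived magnitude grows as k * v**a.
    return k * v ** a

def compensated_display_speed(v_data, a=0.8):
    # Pre-distort the on-screen speed so perceived speed is
    # proportional to the data velocity: (v**(1/a))**a == v.
    return v_data ** (1.0 / a)

# Doubling the data velocity now (approximately) doubles perceived speed.
print(perceived_speed(compensated_display_speed(2.0)))
```

The study's actual compensation model is derived empirically from the observed trends in scale and multiplier; the power-law form above is only the psychophysical baseline it is compared against.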
5. Nawrot M, Ratzlaff M, Leonard Z, Stroyan K. Modeling depth from motion parallax with the motion/pursuit ratio. Front Psychol 2014;5:1103. PMID: 25339926; PMCID: PMC4186274; DOI: 10.3389/fpsyg.2014.01103.
Abstract
The perception of unambiguous scaled depth from motion parallax relies on both retinal image motion and an extra-retinal pursuit eye movement signal. The motion/pursuit ratio represents a dynamic geometric model linking these two proximal cues to the ratio of depth to viewing distance. An important step in understanding the visual mechanisms serving the perception of depth from motion parallax is to determine the relationship between these stimulus parameters and empirically determined perceived depth magnitude. Observers compared perceived depth magnitude of dynamic motion parallax stimuli to static binocular disparity comparison stimuli at three different viewing distances, in both head-moving and head-stationary conditions. A stereo-viewing system provided ocular separation for stereo stimuli and monocular viewing of parallax stimuli. For each motion parallax stimulus, a point of subjective equality (PSE) was estimated for the amount of binocular disparity that generates the equivalent magnitude of perceived depth from motion parallax. Similar to previous results, perceived depth from motion parallax had significant foreshortening. Head-moving conditions produced even greater foreshortening due to the differences in the compensatory eye movement signal. An empirical version of the motion/pursuit law, termed the empirical motion/pursuit ratio, which models perceived depth magnitude from these stimulus parameters, is proposed.
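In its simplest first-order form, the motion/pursuit ratio states that depth relative to viewing distance is approximated by retinal image motion divided by the pursuit rate (d/f ≈ dθ/dα); the empirical version proposed in the paper refines this with fitted parameters. A sketch of the first-order ratio only:

```python
def relative_depth(retinal_motion_deg_s, pursuit_deg_s):
    # First-order motion/pursuit ratio: d/f ~ dtheta/dalpha, the
    # retinal image motion rate divided by the pursuit rate.
    return retinal_motion_deg_s / pursuit_deg_s

# 0.5 deg/s of retinal motion during a 5 deg/s pursuit implies a
# depth of roughly 10% of the viewing distance.
print(relative_depth(0.5, 5.0))
```

The foreshortening reported in the abstract corresponds to perceived depth falling short of this geometric prediction, which is what the empirical motion/pursuit ratio is introduced to capture.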
Affiliation(s)
- Mark Nawrot
- Department of Psychology, Center for Visual and Cognitive Neuroscience, North Dakota State University, Fargo, ND, USA
- Michael Ratzlaff
- Department of Psychology, Center for Visual and Cognitive Neuroscience, North Dakota State University, Fargo, ND, USA
- Zachary Leonard
- Department of Psychology, Center for Visual and Cognitive Neuroscience, North Dakota State University, Fargo, ND, USA
- Keith Stroyan
- Math Department, University of Iowa, Iowa City, IA, USA
6. Braun DI, Schütz AC, Gegenfurtner KR. Localization of speed differences of context stimuli during fixation and smooth pursuit eye movements. Vision Res 2010;50:2740-9. DOI: 10.1016/j.visres.2010.07.028.
7. Champion RA, Freeman TCA. Discrimination contours for the perception of head-centered velocity. J Vis 2010;10(6):14. PMID: 20884563; DOI: 10.1167/10.6.14.
Abstract
There is little direct psychophysical evidence that the visual system contains mechanisms tuned to head-centered velocity when observers make a smooth pursuit eye movement. Much of the evidence is implicit, relying on measurements of bias (e.g., matching and nulling). We therefore measured discrimination contours in a space dimensioned by pursuit target motion and relative motion between target and background. Within this space, lines of constant head-centered motion are parallel to the main negative diagonal, so judgments dominated by mechanisms that combine the individual components should produce contours with a similar orientation. Conversely, contours oriented parallel to the cardinal axes of the space indicate judgments based on individual components. The results provided evidence for mechanisms tuned to head-centered velocity: discrimination ellipses were significantly oriented away from the cardinal axes, toward the main negative diagonal. However, ellipse orientation was considerably less steep than predicted by a pure combination of components. This suggests that observers used a mixture of two strategies across trials, one based on individual components and another based on their sum. We provide a model that simulates this type of behavior and is able to reproduce the ellipse orientations we found.
8. Chukoskie L, Movshon JA. Modulation of visual signals in macaque MT and MST neurons during pursuit eye movement. J Neurophysiol 2009;102:3225-33. PMID: 19776359; DOI: 10.1152/jn.90692.2008.
Abstract
Retinal image motion is produced with each eye movement, yet we usually do not perceive this self-produced "reafferent" motion, nor are motion judgments much impaired when the eyes move. To understand the neural mechanisms involved in processing reafferent motion and distinguishing it from the motion of objects in the world, we studied the visual responses of single cells in middle temporal (MT) and medial superior temporal (MST) areas during steady fixation and smooth-pursuit eye movements in awake, behaving macaques. We measured neuronal responses to random-dot patterns moving at different speeds in a stimulus window that moved with the pursuit target and the eyes. This allowed us to control retinal image motion at all eye velocities. We found the expected high proportion of cells selective for the direction of visual motion. Pursuit tracking changed both response amplitude and preferred retinal speed for some cells. The changes in preferred speed were on average weakly but systematically related to the speed of pursuit for area MST cells, as would be expected if the shifts in speed selectivity were compensating for reafferent input. In area MT, speed tuning did not change systematically during pursuit. Many cells in both areas also changed response amplitude during pursuit; the most common form of modulation was response suppression when pursuit was opposite in direction to the cell's preferred direction. These results suggest that some cells in area MST encode retinal image motion veridically during eye movements, whereas others in both MT and MST contribute to the suppression of visual responses to reafferent motion.
Affiliation(s)
- Leanne Chukoskie
- Center for Neural Science, New York University, New York, NY 10003, USA
9. Ruiz-Ruiz M, Martinez-Trujillo JC. Human updating of visual motion direction during head rotations. J Neurophysiol 2008;99:2558-76. PMID: 18337365; DOI: 10.1152/jn.00931.2007.
Abstract
Previous studies have demonstrated that human subjects update the location of visual targets for saccades after head and body movements and in the absence of visual feedback. This phenomenon is known as spatial updating. Here we investigated whether a similar mechanism exists for the perception of motion direction. We recorded eye positions in three dimensions and behavioral responses in seven subjects during a motion task in two different conditions: when the subject's head remained stationary and when subjects rotated their heads around an anteroposterior axis (head tilt). We demonstrated that (1) after head tilt, subjects updated the direction of saccades made in the perceived stimulus direction (direction-of-motion updating); (2) the amount of updating varied across subjects and stimulus directions; (3) the amount of motion direction updating was highly correlated with the amount of spatial updating during a memory-guided saccade task; (4) subjects updated the stimulus direction during a two-alternative forced-choice direction discrimination task in the absence of saccadic eye movements (perceptual updating); (5) perceptual updating was more accurate than motion direction updating involving saccades; and (6) subjects updated motion direction similarly during active and passive head rotation. These results demonstrate the existence of an updating mechanism for the perception of motion direction in the human brain that operates during active and passive head rotations and resembles that of spatial updating. Such a mechanism operates during different tasks involving different motor and perceptual skills (saccades and motion direction discrimination) with different degrees of accuracy.
Affiliation(s)
- Mario Ruiz-Ruiz
- Cognitive Neurophysiology Laboratory, Department of Physiology, McGill University, Montreal, Quebec, Canada
10. Beintema JA, van den Berg AV. Pursuit affects precision of perceived heading for small viewing apertures. Vision Res 2001;41:2375-91. PMID: 11459594; DOI: 10.1016/s0042-6989(01)00077-3.
Abstract
We investigated the interaction between extra-retinal rotation signals and retinal motion signals in heading perception during pursuit eye movement. For a limited viewing aperture, the variability in perceived heading strongly depends on the pattern of motion directions. Heading towards a point outside the aperture generates nearly parallel aperture flow. This results in lower precision of perceived heading than heading that renders the radial pattern of flow visible. We ask whether the precision is limited by the pattern of flow visible on the retina or by that on the screen. During fixation, the two patterns are identical. They are decoupled during pursuit, since pursuit changes radial flow within the aperture on the screen into nearly parallel flow on the retina, and vice versa. The extra-retinal signal is known to reduce systematic errors in the direction of pursuit, thus compensating for the rotational flow during pursuit. We now ask whether the extra-retinal signal also affects the precision of heading percepts. It might if, at the spatial integration stage, the rotational flow has already been subtracted out. A compensation beyond the integration stage, however, cannot undo the change in retinal motion directions, so an effect of pursuit on precision cannot be avoided. We measured the variable and systematic errors in perceived heading during fixation and pursuit for a frontal plane approach, while varying duration, dot lifetime and aperture size. We found that precision is affected by pursuit as much as predicted from the pattern of retinal flow, while compensation is significantly greater than zero. This means that the interaction between the extra-retinal signal and visual motion signals takes place after spatial integration of local motion signals. Furthermore, compensation increased significantly with longer duration (0.5-3.0 s), but not with larger aperture size (10-50 degrees). A larger aperture size did, however, increase the eccentricity of perceived heading.
Affiliation(s)
- J A Beintema
- Department of Zoology and Neurobiology, Ruhr University Bochum, 44780, Bochum, Germany.
11.
Abstract
The aim of this study was to test the hypothesis that an extra-retinal signal combines with retinal velocity in a linear manner, as described by existing models, to determine perceived velocity. To do so, we utilized a method that allowed the determination of the relative contributions of the retinal-velocity and extra-retinal signals to the perception of stimulus velocity. We determined the velocity (speed and direction) of a stimulus viewed with stationary eyes that was perceptually the same as the velocity of the stimulus viewed with moving eyes. Eye movements were governed by the tracking (or pursuit) of a separate pursuit target. The velocity-matching data could not be fit by a model that linearly combines a retinal-velocity signal and an extra-retinal signal. A model that was successful in explaining the data was one that takes the difference between two simple saturating non-linear functions, g and f, each symmetric about the origin, with one having an interaction term. The function g has two arguments: retinal velocity, R, and eye velocity, E. The only argument to f is retinal velocity, R. Each argument has a scaling parameter. A comparison of goodness of fits between models demonstrated that the success of the model lies in the interaction term, i.e. the modification of the compensating eye-velocity signal by the retinal velocity prior to combination.
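The abstract specifies only the structure of the successful model: perceived velocity is the difference g(R, E) - f(R), with an interaction between retinal velocity R and eye velocity E inside g. A sketch of that structure, using tanh as a stand-in saturating function and arbitrary scaling parameters (the paper's actual functional forms and fitted values are not given here):

```python
import math

def sat(x):
    # Stand-in odd, saturating nonlinearity, symmetric about the origin.
    return math.tanh(x)

def perceived_velocity(R, E, a=0.05, b=0.05, c=0.05, k=0.01):
    # g combines scaled retinal velocity R and eye velocity E, plus an
    # interaction term (k * R * E) through which retinal velocity
    # modifies the eye-velocity signal before combination.
    g = sat(a * R + b * E + k * R * E)
    # f depends on retinal velocity alone.
    f = sat(c * R)
    return g - f
```

With stationary eyes (E = 0) and matched scaling (a == c), the two terms cancel and the model contributes no bias; a nonzero eye velocity shifts perceived velocity asymmetrically because of the interaction term.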
Affiliation(s)
- K A Turano
- Johns Hopkins University School of Medicine, Wilmer Eye Institute, Baltimore, MD, USA.
12.
Abstract
Eye movements add a constant displacement to the visual scene, altering the retinal-image velocity. Therefore, in order to recover real-world motion, eye-movement effects must be compensated for. If full compensation occurs, the perceived speed of a moving object should be the same regardless of whether the eye is stationary or moving. Using a pursue-fixate procedure in a perceptual matching paradigm, we found that eye movements systematically bias the perceived speed of the distal stimulus, indicating a lack of compensation. Speed judgments depended on the interaction between the distal stimulus size and the eye velocity relative to the distal stimulus motion. When the eyes and distal stimulus moved in the same direction, speed judgments of the distal stimulus approximately matched its retinal-image motion. When the eyes and distal stimulus moved in opposite directions, speed judgments depended on the stimulus size: for small sizes, perceived speed was typically overestimated; for large sizes, it was underestimated. Results are explained in terms of retinal-extraretinal interactions and correlate with recent neurophysiological findings.
Affiliation(s)
- K A Turano
- Johns Hopkins University School of Medicine, Wilmer Eye Institute, Baltimore, MD, USA.
13.
Abstract
Visual detection and discrimination thresholds are often measured using adaptive staircases, and most studies use transformed (or weighted) up/down methods with fixed step sizes--in the spirit of Wetherill and Levitt (Br J Mathemat Statist Psychol 1965;18:1-10) or Kaernbach (Percept Psychophys 1991;49:227-229)--instead of changing step size at each trial in accordance with best-placement rules--in the spirit of Watson and Pelli (Percept Psychophys 1983;33:113-120). It is generally assumed that a fixed-step-size (FSS) staircase converges on the stimulus level at which a correct response occurs with the probabilities derived by Wetherill and Levitt or Kaernbach, but this has never been proved rigorously. This work used simulation techniques to determine the asymptotic and small-sample convergence of FSS staircases as a function of such parameters as the up/down rule, the size of the steps up or down, the starting stimulus level, and the spread of the psychometric function. The results showed that the asymptotic convergence of FSS staircases depends much more on the sizes of the steps than on the up/down rule. Yet, if the size delta+ of a step up differs from the size delta- of a step down such that the ratio delta-/delta+ is held at a specific value that changes with the up/down rule, then the convergence percent-correct is unaffected by the absolute sizes of the steps. For use with the popular one-, two-, three- and four-down/one-up rules, these ratios must respectively be set at 0.2845, 0.5488, 0.7393 and 0.8415, rendering staircases that converge on the 77.85%-, 80.35%-, 83.15%- and 85.84%-correct points. Wetherill and Levitt's transformed up/down rules--which require delta-/delta+ = 1--and the general version of Kaernbach's weighted up/down rule--which allows any delta-/delta+ ratio--fail to reach their presumed targets.
The small-sample study showed that, even with the optimal settings, short FSS staircases (up to 20 reversals in length) are subject to some bias, and their precision is less than reasonable; their characteristics improve when the size delta+ of a step up is larger than half the spread of the psychometric function. Practical recommendations are given for the design of efficient and trustworthy FSS staircases.
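The staircase procedure itself is straightforward to implement. A sketch of a three-down/one-up fixed-step-size track using the delta-/delta+ = 0.7393 ratio reported above (the step size and starting level are arbitrary choices, not values from the study):

```python
class FSSStaircase:
    """Three-down/one-up fixed-step-size staircase. With
    delta-/delta+ = 0.7393 it converges on roughly the 83.15%-correct
    point, per the ratios reported in the abstract above."""

    def __init__(self, start=10.0, delta_up=2.0, n_down=3, ratio=0.7393):
        self.level = start
        self.delta_up = delta_up
        self.delta_down = ratio * delta_up
        self.n_down = n_down
        self._streak = 0  # consecutive correct responses since last step

    def update(self, correct):
        # Step up after any error; step down only after n_down
        # consecutive correct responses.
        if correct:
            self._streak += 1
            if self._streak == self.n_down:
                self.level -= self.delta_down
                self._streak = 0
        else:
            self.level += self.delta_up
            self._streak = 0
        return self.level
```

Driving such a staircase with a simulated observer whose psychometric function is known is essentially the simulation approach the study uses to check where the track actually converges.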
Affiliation(s)
- M A García-Pérez
- Departamento de Metodología, Facultad de Psicología, Universidad Complutense, Madrid, Spain.