1
Freeman TCA, Powell G. Perceived speed at low luminance: Lights out for the Bayesian observer? Vision Res 2022; 201:108124. PMID: 36193604. DOI: 10.1016/j.visres.2022.108124.
Abstract
To account for perceptual bias, Bayesian models use the precision of early sensory measurements to weight the influence of prior expectations. As precision decreases, prior expectations start to dominate. Important examples come from motion perception, where the slow-motion prior has been used to explain a variety of motion illusions in vision, hearing, and touch, many of which correlate appropriately with threshold measures of underlying precision. However, the Bayesian account seems defeated by the finding that moving objects appear faster in the dark, because most motion thresholds are worse at low luminance. Here we show this is not the case for speed discrimination. Our results show that performance improves at low light levels by virtue of a perceived contrast cue that is more salient in the dark. With this cue removed, discrimination becomes independent of luminance. However, we found perceived speed still increased in the dark for the same observers, and by the same amount. A possible interpretation is that motion processing is therefore not Bayesian, because our findings challenge a key assumption these models make, namely that the accuracy of early sensory measurements is independent of basic stimulus properties like luminance. However, a final experiment restored Bayesian behaviour by adding external noise, making discrimination worse and slowing perceived speed down. Our findings therefore suggest that motion is processed in a Bayesian fashion but based on noisy sensory measurements that also vary in accuracy.
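The weighting described here can be made explicit with the textbook Gaussian formulation of the slow-speed prior; the following is a generic sketch of that standard model, not necessarily the specific observer model fitted in this paper. With a likelihood centred on the sensory measurement $\hat{v}$ (variance $\sigma_L^2$) and a prior centred on zero speed (variance $\sigma_P^2$), the maximum-a-posteriori estimate is

$$ v_{\mathrm{MAP}} = \frac{\sigma_P^{2}}{\sigma_P^{2} + \sigma_L^{2}}\,\hat{v}, $$

so the percept is pulled toward zero, i.e., slows down, as measurement noise $\sigma_L^{2}$ grows relative to the prior variance. The twist reported above is that darkness speeds the percept up without changing discrimination (once the contrast cue is removed), which in this formulation points to a luminance-dependent change in the measurement $\hat{v}$ itself, that is, in its accuracy rather than its precision.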
Affiliation(s)
- Tom C A Freeman, School of Psychology, Cardiff University, Tower Building, 70 Park Place, Cardiff CF10 3AT, United Kingdom
- Georgie Powell, School of Psychology, Cardiff University, Tower Building, 70 Park Place, Cardiff CF10 3AT, United Kingdom
2
The "speed" of acuity in scotopic vs. photopic vision. Graefes Arch Clin Exp Ophthalmol 2020; 258:2791-2798. PMID: 32803325. PMCID: PMC7677280. DOI: 10.1007/s00417-020-04867-6.
Abstract
PURPOSE: The effect of the duration of optotype presentation on visual acuity has been studied extensively under photopic conditions. However, systematic data on the duration dependence of acuity under mesopic and scotopic conditions are scarce, despite being highly relevant for many visual tasks, including night driving, and for clinical diagnostic applications. The present study aims to address this void.
METHODS: We measured Landolt C acuity under photopic (90 cd/m²), mesopic (0.7 cd/m²), and scotopic (0.009 cd/m²) conditions for several optotype presentation durations ranging from 0.1 to 10 s using the Freiburg Acuity and Contrast Test. Two age groups were tested (young, 18-29 years, and older, 61-74 years).
RESULTS: As expected, under all luminance conditions, better acuity values were found for longer presentation durations. Photopic acuity in young participants decreased by about 0.25 log units from 0.1 to 10 s; mesopic vision mimicked the photopic behavior. Scotopic acuity depended more strongly on presentation duration (difference > 0.78 log units) than photopic acuity. There was no consistent pattern of correlation between luminance conditions across participants. Younger and older participants showed qualitatively similar results, despite higher variability among the latter and differences in absolute acuity: the photopic acuity difference (0.1 vs. 10 s) for the older participants was 0.19 log units, and the scotopic difference was > 0.62 log units.
CONCLUSION: Scotopic acuity is more susceptible to changes in stimulus duration than photopic acuity, with considerable interindividual variability. The latter may reflect differences in aging and sub-clinical pathophysiological processes and might have consequences for visual performance during nocturnal activities such as driving at night. Acuity testing with briefly presented scotopic stimuli might increase the usefulness of acuity assessment for tracking the health state of the visual system.
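For scale, and assuming the acuity differences above are expressed in logMAR (0.1 log units per standard chart line), the factors involved are

$$ 10^{0.25} \approx 1.8 \qquad\text{and}\qquad 10^{0.78} \approx 6, $$

so the photopic duration effect amounts to roughly two to three chart lines, whereas the scotopic effect corresponds to about eight lines or more.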
3
Feldstein IT, Dyszak GN. Road crossing decisions in real and virtual environments: A comparative study on simulator validity. Accid Anal Prev 2020; 137:105356. PMID: 32059135. DOI: 10.1016/j.aap.2019.105356.
Abstract
Virtual reality (VR) is a valuable tool for assessing human perception and behavior in a risk-free environment. Investigators should, however, ensure that the virtual environment they use is validated with respect to the experiment's research question, since behavior in virtual environments has been shown to differ from behavior in real environments. This article presents the street-crossing decisions of 30 participants who faced an approaching vehicle and had to decide at what moment it was no longer safe to cross, applying the step-back method. The participants performed the task in a real environment and in a highly immersive VR setup involving a head-mounted display (HMD). The results indicate significant differences between the two settings in the participants' behavior. The time-to-contact of approaching vehicles was significantly lower for crossing decisions in the virtual environment than in the real one. Additionally, participants based their crossing decisions in the real environment on the temporal distance of the approaching vehicle (i.e., its time-to-contact), whereas crossing decisions in the virtual environment seemed to depend on the vehicle's spatial distance, neglecting its velocity. Furthermore, a deeper analysis suggests that crossing decisions were not affected by factors such as the participant's gender or the order in which they encountered the real and virtual environments.
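The temporal-versus-spatial distinction can be made concrete with the usual first-order definition of time-to-contact for a vehicle at distance $d$ closing at constant speed $v$; the numbers are purely illustrative:

$$ \mathrm{TTC} = \frac{d}{v}, \qquad \frac{40\ \mathrm{m}}{10\ \mathrm{m/s}} = 4\ \mathrm{s}, \qquad \frac{40\ \mathrm{m}}{20\ \mathrm{m/s}} = 2\ \mathrm{s}. $$

A criterion based on spatial distance alone treats these two situations as equally safe, even though the faster vehicle leaves only half the time to cross.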
Affiliation(s)
- Ilja T Feldstein, Harvard Medical School, Dept. of Ophthalmology, Schepens Eye Research Institute, Boston, MA 02114, USA
- Georg N Dyszak, Technical University of Munich, Dept. of Mechanical Engineering, Chair of Ergonomics, 85748 Garching, Germany
4
Feldstein IT. Impending Collision Judgment from an Egocentric Perspective in Real and Virtual Environments: A Review. Perception 2019; 48:769-795. DOI: 10.1177/0301006619861892.
Abstract
The human egocentric perception of approaching objects and the related perceptual processes have interested researchers for several decades. This article reviews studies that investigated the perception of an object approaching an observer (or of an observer approaching an object), with the goal of singling out factors that influence the perceptual process. A taxonomy of metrics is followed by a breakdown of different experimental measurement methods. Thereafter, potential factors affecting the judgment of approaching objects are compiled and discussed, divided into human factors (e.g., gender, age, and driving experience), compositional factors (e.g., approach velocity, spatial distance, and observation time), and technical factors (e.g., field of view, stereoscopy, and display contrast). Experimental findings are collated, juxtaposed, and critically discussed. Virtual-reality devices have taken a tremendous developmental leap forward in the past few years and have gained ground in experimental research. Special attention is therefore given to the perception of approaching objects in virtual environments, contrasted with perception in real environments.
Affiliation(s)
- Ilja T. Feldstein, Harvard Medical School, Department of Ophthalmology, Boston, MA, USA; Technical University of Munich, Department of Mechanical Engineering, Garching, Germany
5
Burton E, Wattam-Bell J, Rubin GS, Atkinson J, Braddick O, Nardini M. Cortical processing of global form, motion and biological motion under low light levels. Vision Res 2016; 121:39-49. PMID: 26878697. DOI: 10.1016/j.visres.2016.01.008.
Abstract
Advances in potential treatments for rod and cone dystrophies have increased the need to understand the contributions of rods and cones to higher-level cortical vision. We measured form, motion and biological motion coherence thresholds and EEG steady-state visual evoked potential (SSVEP) responses under light conditions ranging from photopic to scotopic. Low light increased thresholds for all three kinds of stimuli; however, global form thresholds were relatively more impaired than those for global motion or biological motion. SSVEP responses to coherent global form and motion were reduced in low light, and motion responses showed a shift in topography from the midline to more lateral locations. Contrast sensitivity measures confirmed that basic visual processing was also affected by low light. However, comparison with contrast sensitivity function (CSF) reductions achieved by optical blur indicated that these low-level losses were insufficient to explain the pattern of results, although the temporal properties of the rod system may also play a role. Overall, mid-level processing in extra-striate areas is differentially affected by light level, in ways that cannot be explained in terms of low-level spatiotemporal sensitivity. The topographical shift in scotopic motion SSVEP responses may reflect either changes to inhibitory feedback mechanisms between V1 and extra-striate regions or a reduction of input to the visual cortex. These results provide insight into how higher-level cortical vision is normally organised in the absence of cone input, and provide a basis for comparison with patients with cone dystrophies, before and after treatments aiming to restore cone function.
Affiliation(s)
- Eliza Burton, Institute of Ophthalmology, University College London, London, UK
- John Wattam-Bell, Division of Psychology and Language Sciences, University College London, London, UK
- Gary S Rubin, Institute of Ophthalmology, University College London, London, UK
- Janette Atkinson, Division of Psychology and Language Sciences, University College London, London, UK; Experimental Psychology, University of Oxford, UK
- Marko Nardini, Department of Psychology, Durham University, Durham, UK; Institute of Ophthalmology, University College London, London, UK
6
Schütz AC, Billino J, Bodrogi P, Polin D, Khanh TQ, Gegenfurtner KR. Robust Underestimation of Speed During Driving: A Field Study. Perception 2015; 44:1356-1370. PMID: 26562855. DOI: 10.1177/0301006615599137.
Abstract
Traffic reports consistently identify speeding as a substantial source of accidents. Adequate driving speeds require reliable speed estimation; however, there is still a lack of understanding of how speed perception is biased during driving. We therefore ran three experiments measuring speed estimation under controlled driving and lighting conditions. In the first experiment, participants had to produce target speeds as drivers or judge driven speed as passengers; measurements were performed in daylight and at night. In the second experiment, participants were required to produce target speeds at dusk, under rapidly changing lighting conditions. In the third experiment, we let two cars approach and pass each other; drivers were instructed to produce target speeds as well as to judge the speed of the oncoming vehicle, with measurements performed in daylight and at night, with full or dipped headlights. We found that passengers underestimated driven speed by about 20% and that drivers exceeded the instructed speed by roughly the same amount. Interestingly, the underestimation of speed extended to oncoming cars. All of these effects were independent of lighting conditions. The consistent underestimation of speed could lead to potentially fatal situations in which drivers go faster than intended and judge oncoming traffic to be approaching more slowly than it actually is.
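On a simple inverse-bias reading, offered here as an illustration rather than as the authors' model, the two findings fit together: if perceived speed is a factor $k \approx 0.8$ of physical speed, then producing a target speed $v^{*}$ requires driving

$$ v = \frac{v^{*}}{k} = \frac{v^{*}}{0.8} = 1.25\,v^{*}, $$

so a driver aiming for 100 km/h would settle near 120-125 km/h, in line with the roughly 20% production overshoot reported above.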
Affiliation(s)
- Alexander C Schütz, Abteilung Allgemeine Psychologie, Justus-Liebig-Universität Gießen, Germany
- Jutta Billino, Abteilung Allgemeine Psychologie, Justus-Liebig-Universität Gießen, Germany
- Peter Bodrogi, Fachgebiet Lichttechnik, Technische Universität Darmstadt
- Dmitrij Polin, Fachgebiet Lichttechnik, Technische Universität Darmstadt
- Tran Q Khanh, Fachgebiet Lichttechnik, Technische Universität Darmstadt
7
Takeuchi T, Tuladhar A, Yoshimoto S. The effect of retinal illuminance on visual motion priming. Vision Res 2011; 51:1137-1145. PMID: 21396394. DOI: 10.1016/j.visres.2011.03.002.
Abstract
The perceived direction of a directionally ambiguous stimulus is influenced by the direction of a preceding priming stimulus. Previous studies have shown that a brief priming stimulus induces positive motion priming, in which a subsequent directionally ambiguous stimulus is perceived to move in the same direction as the primer, whereas a longer priming stimulus induces negative priming, in which the ambiguous stimulus is perceived to move in the direction opposite to the primer. The purpose of this study was to elucidate the underlying mechanism of motion priming by examining how the retinal illuminance and velocity of the primer influence the perception of priming. Subjects judged the perceived direction of 180-deg phase-shifted (and thus directionally ambiguous) sine-wave gratings displayed immediately after the offset of a priming stimulus. We found that the perception of motion priming was greatly modulated by the retinal illuminance and velocity of the primer. Under low retinal illuminance, positive priming nearly disappeared even when the effective luminance contrast was equated across conditions. Positive priming was prominent when the velocity of the primer was low, while only negative priming was observed when the velocity was high. These results suggest that positive motion priming is induced by a higher-order mechanism that tracks prominent features of the visual stimulus, whereas a directionally selective motion mechanism induces negative motion priming.
Affiliation(s)
- Tatsuto Takeuchi, Department of Psychology, Japan Women's University, Tama-ku Nishiikuta 1-1-1, Kawasaki, Kanagawa 214-8565, Japan
8
Takeuchi T, De Valois KK. Visual motion mechanisms under low retinal illuminance revealed by motion reversal. Vision Res 2009; 49:801-809. PMID: 19250946. DOI: 10.1016/j.visres.2009.02.011.
Abstract
The aim of this study is to determine what kinds of motion mechanisms operate at low luminance levels. We used a motion reversal phenomenon in which the perceived direction of motion is reversed when a blank inter-stimulus interval (ISI) frame is inserted between two image frames of similar mean luminance. At low luminance levels, we found that motion reversal was perceived when the moving pattern was presented in the retinal periphery, but no motion reversal was observed when the stimulus was presented in the central retina. When a large stimulus that covers both central and peripheral visual fields was presented, motion reversal did not occur. We conclude that as retinal illuminance decreases, the relative contribution of a feature-tracking mechanism in the central retina becomes larger, while motion perception in the peripheral retina continues to depend on a biphasic, first-order motion mechanism. When both central and peripheral visual fields are stimulated simultaneously, the motion mechanism that dominates in the central retina determines the perceived direction of motion at low luminance levels.
Affiliation(s)
- Tatsuto Takeuchi, NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Morinosato-Wakamiya 3-1, Atsugi, Kanagawa 243-0198, Japan
9
Sheliga BM, Chen KJ, FitzGibbon EJ, Miles FA. The initial ocular following responses elicited by apparent-motion stimuli: reversal by inter-stimulus intervals. Vision Res 2006; 46:979-992. PMID: 16242168. PMCID: PMC2430525. DOI: 10.1016/j.visres.2005.09.001.
Abstract
Transient apparent-motion stimuli, consisting of single 1/4-wavelength steps applied to square-wave gratings lacking the fundamental ("missing fundamental stimulus") and to sinusoidal gratings, were used to elicit ocular following responses (OFRs) in humans. As previously reported [Sheliga, B. M., Chen, K. J., FitzGibbon, E. J., & Miles, F. A. (2005). Initial ocular following in humans: a response to first-order motion energy. Vision Research, in press], the earliest OFRs were strongly dependent on the motion of the major Fourier component, consistent with early spatio-temporal filtering prior to motion detection, as in the well-known energy model of motion analysis. Introducing inter-stimulus intervals (ISIs) of 10-200 ms, during which the screen was gray with the same mean luminance, reversed the initial direction of the OFR, the peak reversed responses (with ISIs of 20-40 ms) being substantially greater than the non-reversed responses (with an ISI of 0 ms). When the mean luminance was reduced to scotopic levels, reversals now occurred only with ISIs ≥ 60 ms and the peak reversed responses (with ISIs of 60-100 ms) were substantially smaller than the non-reversed responses (with an ISI of 0 ms). These findings are consistent with the idea that initial OFRs are mediated by first-order motion-energy-sensing mechanisms that receive a visual input whose temporal impulse response function is strongly biphasic in photopic conditions and almost monophasic in scotopic conditions.
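The reversal logic rests on the shape of the early temporal impulse response: while the lingering internal response to the pre-step pattern is in its negative (rebound) lobe, the 1/4-wavelength step is effectively paired with a contrast-inverted trace of that pattern, which is the reverse-phi condition under which first-order motion-energy detectors signal the opposite direction. The sketch below uses generic difference-of-gammas filters with illustrative parameters (not the filters estimated in this study) to show that a fast, strongly biphasic, photopic-like response turns negative soon after its peak, whereas a slower, weakly biphasic, scotopic-like response does so much later and more shallowly, consistent with reversals that require longer ISIs and are weaker in the dark.

```python
# Illustrative sketch only: difference-of-gammas impulse responses with generic
# parameters, not the filters estimated in this study.
import numpy as np
from math import factorial

def gamma_density(t, n, tau):
    """Unit-area gamma density: the impulse response of n cascaded low-pass
    stages, each with time constant tau (t and tau in seconds)."""
    return t ** (n - 1) * np.exp(-t / tau) / (tau ** n * factorial(n - 1))

def impulse_response(t, tau_fast, tau_slow, inhibition):
    """Biphasic temporal impulse response: an excitatory gamma minus a delayed,
    weighted inhibitory gamma; larger 'inhibition' deepens the negative lobe."""
    return gamma_density(t, 9, tau_fast) - inhibition * gamma_density(t, 10, tau_slow)

t = np.arange(1, 400) / 1000.0                            # 1-399 ms, in seconds
responses = {
    "photopic-like (fast, strongly biphasic)": impulse_response(t, 0.0045, 0.0055, 0.85),
    "scotopic-like (slow, weakly biphasic)":   impulse_response(t, 0.0090, 0.0110, 0.35),
}

for label, h in responses.items():
    peak_ms = 1000 * t[np.argmax(h)]
    negative = np.flatnonzero(h < 0)
    lobe_onset_ms = 1000 * t[negative[0]]                 # first moment the response goes negative
    lobe_depth = -h.min() / h.max()                       # negative lobe depth relative to the peak
    print(f"{label}: peak at {peak_ms:.0f} ms, negative lobe from {lobe_onset_ms:.0f} ms, "
          f"relative depth {lobe_depth:.2f}")
```

Running the script prints, for each regime, the peak latency, the time at which the response first goes negative, and the depth of the negative lobe relative to the peak.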
Affiliation(s)
- B M Sheliga, Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD 20892, USA
10
Abstract
We examined the effect of average luminance level on texture segregation by motion. We determined the minimum presentation duration required for subjects to detect a target defined by motion direction against a moving background. The average luminance level and retinal position of the target were systematically varied. We found that the minimum presentation duration needed for texture segregation depends significantly on the average luminance level and on retinal position. The minimum presentation duration increased as the mean luminance decreased. At a very low (presumably scotopic) luminance level, the motion-defined target was never detected rapidly. Under scotopic conditions, the minimum presentation duration was shorter in the periphery than in a near foveal region when the task was simple detection of the target. When the task included identifying the shape of the target patch, however, the target presented near the fovea was identified faster at all luminance levels. These results suggest that the performance of texture segregation is constrained by the spatiotemporal characteristics of the early visual system.
Affiliation(s)
- Tatsuto Takeuchi, NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Atsugi, Kanagawa 243-0198, Japan
11
Abstract
The use of driving simulation for vehicle design and driver perception studies is expanding rapidly, largely because simulation saves engineering time and costs and can be used for studies of road and traffic safety. How applicable driving simulation is to the real world is unclear, however, because analyses of perceptual criteria carried out in driving simulation experiments are controversial. On the one hand, recent data suggest that, in driving simulators with a large field of view, longitudinal speed can be estimated correctly from visual information. On the other hand, recent psychophysical studies have revealed an unexpectedly important contribution of vestibular cues to distance perception and steering, prompting a re-evaluation of the role of visuo-vestibular interaction in driving simulation studies.
Affiliation(s)
- Andras Kemeny, Laboratoire de Physiologie de la Perception et de l'Action, CNRS-Collège de France, 11 Place M. Berthelot, 75005 Paris, France
12
Lankheet MJM, van Doorn AJ, van de Grind WA. Spatio-temporal tuning of motion coherence detection at different luminance levels. Vision Res 2002; 42:65-73. PMID: 11804632. DOI: 10.1016/s0042-6989(01)00265-6.
Abstract
We studied the effects of dark adaptation on spatial and temporal tuning for motion coherence detection. We compared tuning for step size and delay for moving random pixel arrays (RPAs) at two adaptation levels, one light adapted (50 cd/m²) and the other relatively dark adapted (0.05 cd/m²). To study coherence detection rather than contrast detection, the RPAs were scaled for equal contrast detection at each luminance level, and a signal-to-noise ratio paradigm was used in which the RPA is always at a fixed, supra-threshold contrast level. The noise consists of a spatio-temporally incoherent RPA added to the moving RPA on a pixel-by-pixel basis. Spatial and temporal limits for coherence detection were measured using a single-step pattern-lifetime stimulus, in which each pattern makes a single coherent step and is then refreshed, so that the stimulus contains coherent motion at only a single combination of step size and delay. The main effect of dark adaptation is a large shift in step size, slightly less than the adjustment of spatial scale required for maintaining equal contrast sensitivity. A similar change of preferred step size also occurs for scaled stimuli at a light-adapted level, indicating that the spatial effect is not directly linked to dark adaptation but is more generally related to changes in the available low-level spatial information. Dark adaptation shifts temporal tuning by about a factor of 2: long delays are more effective at low luminance levels, whereas short delays no longer support motion coherence detection. Luminance-invariant velocity tuning curves, as reported previously [Lankheet, M.J.M., van Doorn, A.J., Bouman, M.A., & van de Grind, W.A. (2000). Motion coherence detection as a function of luminance in human central vision. Vision Research, 40, 3599-3611], result from the recruitment of different sets of motion detectors and an adjustment of their temporal properties.
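The closing point about luminance-invariant velocity tuning follows from the definition of a correlation-type detector's preferred velocity, stated here as a general identity rather than a result specific to this study:

$$ v_{\mathrm{pref}} = \frac{\Delta x}{\Delta t}, $$

so if dark adaptation scales both the preferred step size $\Delta x$ and the effective delay $\Delta t$ by similar factors, the preferred velocity, and hence the velocity tuning curve, can remain unchanged.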
Affiliation(s)
- M J M Lankheet, Comparative Physiology, Helmholtz Institute, Utrecht University, Padualaan 8, 3584 CH Utrecht, The Netherlands
13
Takeuchi T, De Valois KK, Motoyoshi I. Light adaptation in motion direction judgments. J Opt Soc Am A Opt Image Sci Vis 2001; 18:755-764. PMID: 11318325. DOI: 10.1364/josaa.18.000755.
Abstract
We examined the time course of light adaptation in the visual motion system. Subjects judged the direction of a two-frame apparent-motion display, with the two frames separated by a 50-ms interstimulus interval of the same mean luminance. The phase of the first frame was randomly determined on each trial. The grating presented in the second frame was phase shifted either leftward or rightward by π/2 with respect to the grating in the first frame. At some variable point during the first frame, the mean luminance of the pattern increased or decreased by 1-3 log units. Mean luminance levels varied from scotopic or low mesopic to photopic levels. We found that the perceived direction of motion depended jointly on the luminance level of the first-frame grating and the time at which the shift in average luminance occurred. When the average luminance increased from scotopic or mesopic to photopic levels at least 0.5 s before the offset of the first frame, motion in the 3π/2 direction was perceived. When the average luminance decreased to low mesopic or scotopic levels, motion in the π/2 direction was perceived if the change occurred 1.0 s or more before first-frame offset, depending on the size of the luminance step. Thus light adaptation in the visual motion system is essentially complete within 1 s. This suggests a rapid change in the shape (biphasic or monophasic) of the temporal impulse response functions that feed into a first-order motion mechanism.
Affiliation(s)
- T Takeuchi, Department of Psychology, University of California at Berkeley, Berkeley, CA 94720, USA