1
Purokayastha S, Roberts M, Carrasco M. Do microsaccades vary with discriminability around the visual field? bioRxiv 2024:2024.01.11.575288. PMID: 38260406; PMCID: PMC10802594; DOI: 10.1101/2024.01.11.575288
Abstract
Microsaccades, tiny fixational eye movements, improve discriminability in high-acuity tasks in the foveola. To investigate whether they help compensate for low discriminability at the perifovea, we examined microsaccade characteristics relative to the adult visual performance field, which is characterized by two perceptual asymmetries: the Horizontal-Vertical Anisotropy (better discrimination along the horizontal than the vertical meridian) and the Vertical Meridian Asymmetry (better discrimination along the lower than the upper vertical meridian). We investigated whether and to what extent microsaccade directionality varies when stimuli are at isoeccentric locations along the cardinals under conditions of heterogeneous discriminability (Experiment 1) and homogeneous discriminability, equated by adjusting stimulus contrast (Experiment 2). Participants performed a two-alternative forced-choice orientation discrimination task. In both experiments, performance was better on trials without microsaccades between ready-signal onset and stimulus offset than on trials with microsaccades. Across the trial sequence, the microsaccade rate and directional pattern were similar across locations. Our results indicate that microsaccades were similar regardless of stimulus discriminability and target location, except during the response period (once the stimuli were no longer present and the target location was no longer uncertain), when microsaccades were biased toward the target location. Thus, this study reveals that microsaccades do not flexibly adapt as a function of varying discriminability in a basic visual task around the visual field.
Affiliation(s)
- Mariel Roberts
- Department of Psychology, New York University, New York, USA
- Marisa Carrasco
- Department of Psychology, New York University, New York, USA
- Center for Neural Science, New York University, New York, USA
2
Smithers SP, Shao Y, Altham J, Bex PJ. Large depth differences between target and flankers can increase crowding: Evidence from a multi-depth plane display. eLife 2023; 12:e85143. PMID: 37665324; PMCID: PMC10476968; DOI: 10.7554/eLife.85143
Abstract
Crowding occurs when the presence of nearby features causes highly visible objects to become unrecognizable. Although crowding has implications for many everyday tasks, and a large body of research reflects its importance, surprisingly little is known about how depth affects crowding. Most available studies show that stereoscopic disparity reduces crowding, indicating that crowding may be relatively unimportant in three-dimensional environments. However, most previous studies tested only small stereoscopic differences in depth, in which disparity, defocus blur, and accommodation are inconsistent with the real world. Using a novel multi-depth plane display, this study investigated how large (0.54-2.25 diopters), real differences in target-flanker depth, representative of those experienced between many objects in the real world, affect crowding. Our findings show that large differences in target-flanker depth increased crowding in the majority of observers, contrary to previous work showing reduced crowding in the presence of small depth differences. Furthermore, when the target was at fixation depth, crowding was generally more pronounced when the flankers were behind the target as opposed to in front of it. However, when the flankers were at fixation depth, crowding was generally more pronounced when the target was behind the flankers. These findings suggest that crowding from clutter outside the limits of binocular fusion can still have a significant impact on object recognition and visual perception in the peripheral field.
Affiliation(s)
- Samuel P Smithers
- Department of Psychology, Northeastern University, Boston, United States
- Yulong Shao
- Department of Psychology, Northeastern University, Boston, United States
- James Altham
- Department of Psychology, Northeastern University, Boston, United States
- Peter J Bex
- Department of Psychology, Northeastern University, Boston, United States
3
Kurzawski JW, Burchell A, Thapa D, Winawer J, Majaj NJ, Pelli DG. The Bouma law accounts for crowding in 50 observers. J Vis 2023; 23:6. PMID: 37540179; PMCID: PMC10408772; DOI: 10.1167/jov.23.8.6
Abstract
Crowding is the failure to recognize an object due to surrounding clutter. Our visual crowding survey measured 13 crowding distances (or "critical spacings") twice in each of 50 observers. The survey includes three eccentricities (0, 5, and 10 deg), four cardinal meridians, two orientations (radial and tangential), and two fonts (Sloan and Pelli). The survey also tested foveal acuity, twice. Remarkably, fitting a two-parameter model (the well-known Bouma law, in which crowding distance grows linearly with eccentricity) explains 82% of the variance for all 13 × 50 measured log crowding distances, cross-validated. An enhanced Bouma law, with factors for meridian, crowding orientation, target kind, and observer, explains 94% of the variance, again cross-validated. These additional factors reveal several asymmetries, consistent with previous reports, which can be expressed as crowding-distance ratios: 0.62 horizontal:vertical, 0.79 lower:upper, 0.78 right:left, 0.55 tangential:radial, and 0.78 Sloan-font:Pelli-font. Across our observers, peripheral crowding is independent of foveal crowding and acuity. Evaluation of the Bouma factor, b (the slope of the Bouma law), as a biomarker of visual health would be easier if there were a way to compare results across crowding studies that use different methods. We define a standardized Bouma factor b' that corrects for differences from Bouma's 25 choice alternatives, 75% threshold criterion, and linearly symmetric flanker placement. For radial crowding on the right meridian, the standardized Bouma factor b' is 0.24 for this study, 0.35 for Bouma (1970), and 0.30 for the geometric mean across five representative modern studies, including this one, showing good agreement across labs, including Bouma's. Simulations, confirmed by data, show that peeking can skew estimates of crowding (e.g., greatly decreasing the mean or doubling the SD of log b). Using gaze tracking to prevent peeking, individual differences are robust, as evidenced by the much larger 0.08 SD of log b across observers than the mere 0.03 test-retest SD of log b measured in half an hour. The ease of measurement of crowding enhances its promise as a biomarker for dyslexia and visual health.
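The linear Bouma law summarized in this abstract can be sketched in a few lines. The slope and intercept below are illustrative placeholders, not the values fitted in the study; only the crowding-distance ratios are taken from the abstract.

```python
def crowding_distance(eccentricity_deg, bouma_factor=0.3, intercept_deg=0.15):
    """Bouma law: crowding distance (deg) grows linearly with eccentricity (deg).
    bouma_factor (the slope b) and intercept_deg are illustrative placeholders."""
    return bouma_factor * eccentricity_deg + intercept_deg

# Asymmetries reported in the abstract, expressed as crowding-distance ratios:
ratios = {
    "horizontal:vertical": 0.62,
    "lower:upper": 0.79,
    "right:left": 0.78,
    "tangential:radial": 0.55,
    "Sloan:Pelli": 0.78,
}

# Example: radial crowding distance at 10 deg eccentricity, and the smaller
# tangential distance implied by the 0.55 tangential:radial ratio.
radial = crowding_distance(10)
tangential = radial * ratios["tangential:radial"]
```

This is only a sketch of the two-parameter form; the paper's enhanced model multiplies in further factors for meridian, orientation, target kind, and observer.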
Affiliation(s)
- Jan W Kurzawski
- Department of Psychology, New York University, New York, NY, USA
- Augustin Burchell
- Cognitive Science & Computer Science, Swarthmore College, Swarthmore, PA, USA
- Darshan Thapa
- Center for Neural Science, New York University, New York, NY, USA
- Jonathan Winawer
- Department of Psychology, New York University, New York, NY, USA
- Center for Neural Science, New York University, New York, NY, USA
- Najib J Majaj
- Center for Neural Science, New York University, New York, NY, USA
- Denis G Pelli
- Department of Psychology, New York University, New York, NY, USA
- Center for Neural Science, New York University, New York, NY, USA
4
Hout MC, Papesh MH, Masadeh S, Sandin H, Walenchok SC, Post P, Madrid J, White B, Pinto JDG, Welsh J, Goode D, Skulsky R, Rodriguez MC. The Oddity Detection in Diverse Scenes (ODDS) database: Validated real-world scenes for studying anomaly detection. Behav Res Methods 2023; 55:583-599. PMID: 35353316; PMCID: PMC8966608; DOI: 10.3758/s13428-022-01816-5
Abstract
Many applied screening tasks (e.g., medical image or baggage screening) involve challenging searches that standard laboratory search tasks rarely match. For example, whereas laboratory search frequently requires observers to look for precisely defined targets among isolated, non-overlapping images randomly arrayed on clean backgrounds, medical images present unspecified targets in noisy, yet spatially regular scenes. Those unspecified targets are typically oddities, elements that do not belong. To develop a closer laboratory analogue to this, we created a database of scenes containing subtle, ill-specified "oddity" targets. These scenes have similar perceptual densities and spatial regularities to those found in expert search tasks, and each includes 16 variants of the unedited scene wherein an oddity (a subtle deformation of the scene) is hidden. In Experiment 1, eight volunteers searched thousands of scene variants for an oddity. Regardless of their search accuracy, they were then shown the highlighted anomaly and rated its subtlety. Subtlety ratings reliably predicted search performance (accuracy and response times) and did so better than image statistics. In Experiment 2, we conducted a conceptual replication in which a larger group of naïve searchers scanned subsets of the scene variants. Prior subtlety ratings reliably predicted search outcomes. Whereas medical image targets are difficult for naïve searchers to detect, our database contains thousands of interior and exterior scenes that vary in difficulty, but are nevertheless searchable by novices. In this way, the stimuli will be useful for studying visual search as it typically occurs in expert domains: ill-specified search for anomalies in noisy displays.
Affiliation(s)
- Michael C Hout
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA.
- National Science Foundation, Alexandria, VA, USA.
- Megan H Papesh
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Saleem Masadeh
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Hailey Sandin
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Phillip Post
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Jessica Madrid
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Bryan White
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Julian Welsh
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Dre Goode
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Rebecca Skulsky
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Mariana Cazares Rodriguez
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
5
Shah N, Dakin SC, Mulholland PJ, Racheva K, Matlach J, Anderson RS. The Effect of Induced Intraocular Stray Light on Recognition Thresholds for Pseudo-High-Pass Filtered Letters. Transl Vis Sci Technol 2022; 11:4. PMID: 35511149; PMCID: PMC9078078; DOI: 10.1167/tvst.11.5.4
Abstract
Purpose: The Moorfields Acuity Chart (MAC), comprising pseudo-high-pass filtered "vanishing optotype" (VO) letters, is more sensitive to functional visual loss in age-related macular degeneration (AMD) than conventional letter charts. The degree to which MAC acuity is affected by optical factors such as cataract is currently unknown. This is important to know when determining whether an individual's vision loss owes more to neural or to optical factors. Here we estimate recognition acuity for VOs and conventional letters with simulated lens aging, achieved using different levels of induced intraocular light scatter.
Methods: Recognition thresholds were determined for two experienced and one naive participant with conventional and VO letters. Stimuli were presented either foveally or at 10 degrees in the horizontal temporal retina, under varying degrees of intraocular light scatter induced by white resin opacity-containing filters (WOFs, grades 1 to 5).
Results: Foveal acuity only became significantly different from baseline (no filter) at WOF grade 5 with conventional letters and at WOF grades 4 and 5 with VOs. In the periphery, no statistical difference was found at any stray-light level for either conventional letters or VOs.
Conclusions: Recognition acuity measured with conventional letters and VOs is robust to the effects of simulated lens opacification, and thus the MAC's higher sensitivity to neural damage should not simultaneously be confounded by such optical factors.
Translational Relevance: The MAC may be better able to differentiate between neural and optical deficits of visual performance, making it more suitable for the assessment of patients with AMD, who may display both types of functional visual loss.
Affiliation(s)
- Nilpa Shah
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Steven C Dakin
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- School of Optometry & Vision Science, University of Auckland, Auckland, New Zealand
- Pádraig J Mulholland
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Centre for Optometry and Vision Science, School of Biomedical Sciences, University of Ulster at Coleraine, N Ireland, UK
- Kalina Racheva
- Institute of Neurobiology, Bulgarian Academy of Sciences, Sofia, Bulgaria
- Juliane Matlach
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Department of Ophthalmology, University Medical Centre, Johannes Gutenberg University Mainz, Germany
- Roger S Anderson
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Centre for Optometry and Vision Science, School of Biomedical Sciences, University of Ulster at Coleraine, N Ireland, UK
6
Rideaux R, West RK, Wallis TSA, Bex PJ, Mattingley JB, Harrison WJ. Spatial structure, phase, and the contrast of natural images. J Vis 2022; 22:4. PMID: 35006237; PMCID: PMC8762697; DOI: 10.1167/jov.22.1.4
Abstract
The sensitivity of the human visual system is thought to be shaped by environmental statistics. A major endeavor in vision science, therefore, is to uncover the image statistics that predict perceptual and cognitive function. When searching for targets in natural images, for example, it has recently been proposed that target detection is inversely related to the spatial similarity of the target to its local background. We tested this hypothesis by measuring observers' sensitivity to targets that were blended with natural image backgrounds. Targets were designed to have a spatial structure that was either similar or dissimilar to the background. Contrary to masking from similarity, we found that observers were most sensitive to targets that were most similar to their backgrounds. We hypothesized that a coincidence of phase alignment between target and background results in a local contrast signal that facilitates detection when target-background similarity is high. We confirmed this prediction in a second experiment. Indeed, we show that, by solely manipulating the phase of a target relative to its background, the target can be rendered easily visible or undetectable. Our study thus reveals that, in addition to its structural similarity, the phase of the target relative to the background must be considered when predicting detection sensitivity in natural images.
Affiliation(s)
- Reuben Rideaux
- Queensland Brain Institute, University of Queensland, St. Lucia, Queensland, Australia
- Rebecca K West
- School of Psychology, University of Queensland, St. Lucia, Queensland, Australia
- Thomas S A Wallis
- Institut für Psychologie & Centre for Cognitive Science, Technische Universität Darmstadt, Darmstadt, Germany
- Peter J Bex
- Department of Psychology, Northeastern University, Boston, MA, USA
- Jason B Mattingley
- Queensland Brain Institute, University of Queensland, St. Lucia, Queensland, Australia
- School of Psychology, University of Queensland, St. Lucia, Queensland, Australia
- William J Harrison
- Queensland Brain Institute, University of Queensland, St. Lucia, Queensland, Australia
- School of Psychology, University of Queensland, St. Lucia, Queensland, Australia
7
Purokayastha S, Roberts M, Carrasco M. Voluntary attention improves performance similarly around the visual field. Atten Percept Psychophys 2021; 83:2784-2794. PMID: 34036535; PMCID: PMC8514247; DOI: 10.3758/s13414-021-02316-y
Abstract
Performance as a function of polar angle at isoeccentric locations across the visual field is known as a performance field (PF) and is characterized by two asymmetries: the HVA (horizontal-vertical anisotropy) and VMA (vertical meridian asymmetry). Exogenous (involuntary) spatial attention does not affect the shape of the PF, improving performance similarly across polar angle. Here we investigated whether endogenous (voluntary) spatial attention, a flexible mechanism, can attenuate these perceptual asymmetries. Twenty participants performed an orientation discrimination task while their endogenous attention was either directed to the target location or distributed across all possible locations. The effects of attention were assessed either using the same stimulus contrast across locations or equating difficulty across locations using individually titrated contrast thresholds. In both experiments, endogenous attention similarly improved performance at all locations, maintaining the canonical PF shape. Thus, despite its voluntary nature, like exogenous attention, endogenous attention cannot alleviate perceptual asymmetries at isoeccentric locations.
Affiliation(s)
- Mariel Roberts
- Department of Psychology, New York University, New York, NY, USA
- Marisa Carrasco
- Department of Psychology, New York University, New York, NY, USA
- Center for Neural Science, New York University, 6 Washington Place, Room 970, New York, NY, 10003, USA
8
Kosovicheva A, Bex PJ. Gravitational effects of scene information in object localization. Sci Rep 2021; 11:11520. PMID: 34075169; PMCID: PMC8169838; DOI: 10.1038/s41598-021-91006-8
Abstract
We effortlessly interact with objects in our environment, but how do we know where something is? An object's apparent position does not simply correspond to its retinotopic location but is influenced by its surrounding context. In the natural environment, this context is highly complex, and little is known about how visual information in a scene influences the apparent location of the objects within it. We measured the influence of local image statistics (luminance, edges, object boundaries, and saliency) on the reported location of a brief target superimposed on images of natural scenes. For each image statistic, we calculated the difference between the image value at the physical center of the target and the value at its reported center, using observers' cursor responses, and averaged the resulting values across all trials. To isolate image-specific effects, difference scores were compared to a randomly-permuted null distribution that accounted for any response biases. The observed difference scores indicated that responses were significantly biased toward darker regions, luminance edges, object boundaries, and areas of high saliency, with relatively low shared variance among these measures. In addition, we show that the same image statistics were associated with observers' saccade errors, despite large differences in response time, and that some effects persisted when high-level scene processing was disrupted by 180° rotations and color negatives of the originals. Together, these results provide evidence for landmark effects within natural images, in which feature location reports are pulled toward low- and high-level informative content in the scene.
Affiliation(s)
- Anna Kosovicheva
- Department of Psychology, University of Toronto Mississauga, 3359 Mississauga Road, Mississauga, ON L5L 1C6, Canada
- Department of Psychology, Northeastern University, 125 Nightingale Hall, 360 Huntington Ave., Boston, MA 02115, USA
- Peter J. Bex
- Department of Psychology, Northeastern University, 125 Nightingale Hall, 360 Huntington Ave., Boston, MA 02115, USA
9
Ringer RV, Coy AM, Larson AM, Loschky LC. Investigating Visual Crowding of Objects in Complex Real-World Scenes. Iperception 2021; 12:2041669521994150. PMID: 35145614; PMCID: PMC8822316; DOI: 10.1177/2041669521994150
Abstract
Visual crowding, the impairment of object recognition in peripheral vision due to flanking objects, has generally been studied using simple stimuli on blank backgrounds. While crowding is widely assumed to occur in natural scenes, it has not been shown rigorously yet. Given that scene contexts can facilitate object recognition, crowding effects may be dampened in real-world scenes. Therefore, this study investigated crowding using objects in computer-generated real-world scenes. In two experiments, target objects were presented with four flanker objects placed uniformly around the target. Previous research indicates that crowding occurs when the distance between the target and flanker is approximately less than half the retinal eccentricity of the target. In each image, the spacing between the target and flanker objects was varied considerably above or below the standard (0.5) threshold to either suppress or facilitate the crowding effect. Experiment 1 cued the target location and then briefly flashed the scene image before participants could move their eyes. Participants then selected the target object’s category from a 15-alternative forced choice response set (including all objects shown in the scene). Experiment 2 used eye tracking to ensure participants were centrally fixating at the beginning of each trial and showed the image for the duration of the participant’s fixation. Both experiments found object recognition accuracy decreased with smaller spacing between targets and flanker objects. Thus, this study rigorously shows crowding of objects in semantically consistent real-world scenes.
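The spacing rule used above to place flankers can be written as a one-line predicate. This is only a sketch of the rule of thumb cited in the abstract; the 0.5 critical ratio is the standard threshold the study manipulated around, and the function name is illustrative.

```python
def is_crowded(spacing_deg, eccentricity_deg, critical_ratio=0.5):
    """Crowding is expected when the target-flanker spacing (deg) falls below
    roughly half the target's retinal eccentricity (the 0.5 rule of thumb)."""
    return spacing_deg < critical_ratio * eccentricity_deg

# Example: at 8 deg eccentricity, flankers 2 deg away fall inside the crowding
# zone, while flankers 6 deg away fall outside it.
```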
Affiliation(s)
- Ryan V Ringer
- Department of Psychology, Wichita State University, Wichita, Kansas, United States
- Allison M Coy
- Department of Psychological Sciences, Kansas State University, Manhattan, Kansas, United States
- Adam M Larson
- Department of Psychology, University of Findlay, Findlay, Ohio, United States
- Lester C Loschky
- Department of Psychological Sciences, Kansas State University, Manhattan, Kansas, United States
10
Barbot A, Xue S, Carrasco M. Asymmetries in visual acuity around the visual field. J Vis 2021; 21:2. PMID: 33393963; PMCID: PMC7794272; DOI: 10.1167/jov.21.1.2
Abstract
Human vision is heterogeneous around the visual field. At a fixed eccentricity, performance is better along the horizontal than the vertical meridian and along the lower than the upper vertical meridian. These asymmetric patterns, termed performance fields, have been found in numerous visual tasks, including those mediated by contrast sensitivity and spatial resolution. However, it is unknown whether spatial resolution asymmetries are confined to the cardinal meridians or whether and how far they extend into the upper and lower hemifields. Here, we measured visual acuity at isoeccentric peripheral locations (10 deg eccentricity), every 15° of polar angle. On each trial, observers judged the orientation (± 45°) of one of four equidistant, suprathreshold grating stimuli varying in spatial frequency (SF). On each block, we measured performance as a function of stimulus SF at 4 of 24 isoeccentric locations. We estimated the 75%-correct SF threshold, SF cutoff point (i.e., chance-level), and slope of the psychometric function for each location. We found higher SF estimates (i.e., better acuity) for the horizontal than the vertical meridian and for the lower than the upper vertical meridian. These asymmetries were most pronounced at the cardinal meridians and decreased gradually as the angular distance from the vertical meridian increased. This gradual change in acuity with polar angle reflected a shift of the psychometric function without changes in slope. The same pattern was found under binocular and monocular viewing conditions. These findings advance our understanding of visual processing around the visual field and help constrain models of visual perception.
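The 75%-correct threshold estimation described above can be sketched with a standard psychometric function. The logistic form, parameter names, and values below are illustrative assumptions, not the study's actual fitting procedure; the point is only that, for a two-choice task with a 50% guess rate and no lapses, performance at the threshold parameter is exactly 75% correct.

```python
import math

def psychometric(x, threshold, slope, guess_rate=0.5, lapse_rate=0.0):
    """Proportion correct vs. stimulus level (e.g., log spatial frequency) for a
    two-alternative task. With guess_rate=0.5 and lapse_rate=0, performance at
    x == threshold is exactly 75% correct, the criterion used in the abstract."""
    p = 1.0 / (1.0 + math.exp(-slope * (x - threshold)))
    return guess_rate + (1.0 - guess_rate - lapse_rate) * p
```

On this parameterization, the shift-without-slope-change reported in the abstract corresponds to varying `threshold` with polar angle while holding `slope` fixed.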
Affiliation(s)
- Antoine Barbot
- Department of Psychology, New York University, New York, NY, USA
- Center for Neural Science, New York University, New York, NY, USA
- Spinoza Centre for Neuroimaging, Amsterdam, Netherlands
- Shutian Xue
- Department of Psychology, New York University, New York, NY, USA
- Marisa Carrasco
- Department of Psychology, New York University, New York, NY, USA
- Center for Neural Science, New York University, New York, NY, USA
11
Abstract
Detection of target objects in the surrounding environment is a common visual task. There is a vast psychophysical and modeling literature concerning the detection of targets in artificial and natural backgrounds. Most studies involve detection of additive targets or of some form of image distortion. Although much has been learned from these studies, the targets that most often occur under natural conditions are neither additive nor distorting; rather, they are opaque targets that occlude the backgrounds behind them. Here, we describe our efforts to measure and model detection of occluding targets in natural backgrounds. To systematically vary the properties of the backgrounds, we used the constrained sampling approach of Sebastian, Abrams, and Geisler (2017). Specifically, millions of calibrated gray-scale natural-image patches were sorted into a 3D histogram along the dimensions of luminance, contrast, and phase-invariant similarity to the target. Eccentricity psychometric functions (accuracy as a function of retinal eccentricity) were measured for four different occluding targets and 15 different combinations of background luminance, contrast, and similarity, with a different randomly sampled background on each trial. The complex pattern of results was consistent across the three subjects, and was largely explained by a principled model observer (with only a single efficiency parameter) that combines three image cues (pattern, silhouette, and edge) and four well-known properties of the human visual system (optical blur, blurring and downsampling by the ganglion cells, divisive normalization, intrinsic position uncertainty). The model also explains the thresholds for additive foveal targets in natural backgrounds reported in Sebastian et al. (2017).
12
Abstract
Visual crowding-the deleterious influence of nearby objects on object recognition-is considered to be a major bottleneck for object recognition in cluttered environments. Although crowding has been studied for decades with static and artificial stimuli, it is still unclear how crowding operates when viewing natural dynamic scenes in real-life situations. For example, driving is a frequent and potentially fatal real-life situation where crowding may play a critical role. In order to investigate the role of crowding in this kind of situation, we presented observers with naturalistic driving videos and recorded their eye movements while they performed a simulated driving task. We found that the saccade localization on pedestrians was impacted by visual clutter, in a manner consistent with the diagnostic criteria of crowding (Bouma's rule of thumb, flanker similarity tuning, and the radial-tangential anisotropy). In order to further confirm that altered saccadic localization is a behavioral consequence of crowding, we also showed that crowding occurs in the recognition of cluttered pedestrians in a more conventional crowding paradigm. We asked participants to discriminate the gender of pedestrians in static video frames and found that the altered saccadic localization correlated with the degree of crowding of the saccade targets. Taken together, our results provide strong evidence that crowding impacts both recognition and goal-directed actions in natural driving situations.
13
Marma V, Bulatov A, Bulatova N. Dependence of the filled-space illusion on the size and location of contextual distractors. Acta Neurobiol Exp (Wars) 2020. DOI: 10.21307/ane-2020-014
14
Wallis TS, Funke CM, Ecker AS, Gatys LA, Wichmann FA, Bethge M. Image content is more important than Bouma's Law for scene metamers. eLife 2019; 8:e42512. [PMID: 31038458 PMCID: PMC6491040 DOI: 10.7554/elife.42512]
Abstract
We subjectively perceive our visual field with high fidelity, yet peripheral distortions can go unnoticed and peripheral objects can be difficult to identify (crowding). Prior work showed that humans could not discriminate images synthesised to match the responses of a mid-level ventral visual stream model when information was averaged in receptive fields with a scaling of about half their retinal eccentricity. This result implicated ventral visual area V2, approximated ‘Bouma’s Law’ of crowding, and has subsequently been interpreted as a link between crowding zones, receptive field scaling, and our perceptual experience. However, this experiment never assessed natural images. We find that humans can easily discriminate real and model-generated images at V2 scaling, requiring scales at least as small as V1 receptive fields to generate metamers. We speculate that explaining why scenes look as they do may require incorporating segmentation and global organisational constraints in addition to local pooling.

As you read this digest, your eyes move to follow the lines of text. But now try to hold your eyes in one position, while reading the text on either side and below: it soon becomes clear that peripheral vision is not as good as we tend to assume. It is not possible to read text far away from the center of your line of vision, but you can see ‘something’ out of the corner of your eye. You can see that there is text there, even if you cannot read it, and you can see where your screen or page ends. So how does the brain generate peripheral vision, and why does it differ from what you see when you look straight ahead? One idea is that the visual system averages information over areas of the peripheral visual field. This gives rise to texture-like patterns, as opposed to images made up of fine details. Imagine looking at an expanse of foliage, gravel or fur, for example. Your eyes cannot make out the individual leaves, pebbles or hairs. Instead, you perceive an overall pattern in the form of a texture. Our peripheral vision may also consist of such textures, created when the brain averages information over areas of space. Wallis, Funke et al. have now tested this idea using an existing computer model that averages visual input in this way. By giving the model a series of photographs to process, Wallis, Funke et al. obtained images that should in theory simulate peripheral vision. If the model mimics the mechanisms that generate peripheral vision, then healthy volunteers should be unable to distinguish the processed images from the original photographs. But in fact, the participants could easily discriminate the two sets of images. This suggests that the visual system does not solely use textures to represent information in the peripheral visual field. Wallis, Funke et al. propose that other factors, such as how the visual system separates and groups objects, may instead determine what we see in our peripheral vision. This knowledge could ultimately benefit patients with eye diseases such as macular degeneration, a condition that causes loss of vision in the center of the visual field and forces patients to rely on their peripheral vision.
Affiliation(s)
- Thomas S. A. Wallis
- Werner Reichardt Center for Integrative Neuroscience, Eberhard Karls Universität Tübingen, Tübingen, Germany; Bernstein Center for Computational Neuroscience, Berlin, Germany
- Christina M Funke
- Werner Reichardt Center for Integrative Neuroscience, Eberhard Karls Universität Tübingen, Tübingen, Germany; Bernstein Center for Computational Neuroscience, Berlin, Germany
- Alexander S Ecker
- Werner Reichardt Center for Integrative Neuroscience, Eberhard Karls Universität Tübingen, Tübingen, Germany; Bernstein Center for Computational Neuroscience, Berlin, Germany; Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, United States; Institute for Theoretical Physics, Eberhard Karls Universität Tübingen, Tübingen, Germany
- Leon A Gatys
- Werner Reichardt Center for Integrative Neuroscience, Eberhard Karls Universität Tübingen, Tübingen, Germany
- Felix A Wichmann
- Neural Information Processing Group, Faculty of Science, Eberhard Karls Universität Tübingen, Tübingen, Germany
- Matthias Bethge
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, United States; Institute for Theoretical Physics, Eberhard Karls Universität Tübingen, Tübingen, Germany; Max Planck Institute for Biological Cybernetics, Tübingen, Germany
15
Qiu C, Lee KR, Jung JH, Goldstein R, Peli E. Motion Parallax Improves Object Recognition in the Presence of Clutter in Simulated Prosthetic Vision. Transl Vis Sci Technol 2018; 7:29. [PMID: 30386681 PMCID: PMC6205682 DOI: 10.1167/tvst.7.5.29]
Abstract
Purpose: Efficacy of current visual prostheses in object recognition is limited. Among various limitations to be addressed, such as low resolution and low dynamic range, here we focus on reducing the impact of background clutter on object recognition. We have proposed the use of motion parallax via head-mounted camera lateral scanning and computationally stabilizing the object of interest (OI) to support neural background decluttering. Simulations in head-mounted displays (HMD), mimicking the proposed effect, were used to test object recognition in normally sighted subjects.
Methods: Images (24° field of view) were captured from multiple viewpoints and presented at a low resolution (20 × 20). All viewpoints were centered on the OI. Experimental conditions (2 × 3) included clutter (with or without) × head scanning (single viewpoint, 9 coherent viewpoints corresponding to subjects' head positions, and 9 randomly associated viewpoints). Subjects used lateral head movements to view OIs in the HMD. Each object was displayed only once for each subject.
Results: The median recognition rate without clutter was 40% for all head scanning conditions. Performance with synthetic background clutter dropped to 10% in the static condition, but it was improved to 20% with the coherent and random head scanning (corrected P = 0.005 and P = 0.049, respectively).
Conclusions: Background decluttering using motion parallax cues, but not the coherent multiple views of the OI, improved object recognition in low-resolution images. The improvement did not fully eliminate the impact of background.
Translational Relevance: Motion parallax is an effective but incomplete decluttering solution for object recognition with visual prostheses.
Affiliation(s)
- Cheng Qiu
- The Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, USA; Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- Kassandra R Lee
- The Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Jae-Hyun Jung
- The Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Robert Goldstein
- The Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Eli Peli
- The Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
16
Critical resolution: A superior measure of crowding. Vision Res 2018; 153:13-23. [PMID: 30240717 PMCID: PMC6294650 DOI: 10.1016/j.visres.2018.08.005]
Abstract
Visual object recognition is essential for adaptive interactions with the environment. It is fundamentally limited by crowding, a breakdown of object recognition in clutter. The spatial extent over which crowding occurs is proportional to the eccentricity of the target object, but nevertheless varies substantially depending on various stimulus factors (e.g. viewing time, contrast). However, a lack of studies jointly manipulating such factors precludes predictions of crowding in more heterogeneous scenes, such as the majority of real-life situations. To establish how such co-occurring variations affect crowding, we manipulated combinations of 1) flanker contrast and backward masking, 2) flanker contrast and presentation duration, and 3) flanker preview and pop-out while measuring participants’ ability to correctly report the orientation of a target stimulus. In all three experiments, combining two manipulations consistently modulated the spatial extent of crowding in a way that could not be predicted from an additive combination. However, a simple transformation of the measurement scale completely abolished these interactions and all effects became additive. Precise quantitative predictions of the magnitude of crowding when combining multiple manipulations are thus possible when it is expressed in terms of what we label the ‘critical resolution’. Critical resolution is proportional to the inverse of the smallest flanker-free area surrounding the target object necessary for its unimpaired identification. It offers a more parsimonious description of crowding than the traditionally used critical spacing and may thus constitute a measure of fundamental importance for understanding object recognition.
17
Meese TS, Baker DH, Summers RJ. Perception of global image contrast involves transparent spatial filtering and the integration and suppression of local contrasts (not RMS contrast). R Soc Open Sci 2017; 4:170285. [PMID: 28989735 PMCID: PMC5627075 DOI: 10.1098/rsos.170285]
Abstract
When adjusting the contrast setting on a television set, we experience a perceptual change in the global image contrast. But how is that statistic computed? We addressed this using a contrast-matching task for checkerboard configurations of micro-patterns in which the contrasts and spatial spreads of two interdigitated components were controlled independently. When the patterns differed greatly in contrast, the higher contrast determined the perceived global contrast. Crucially, however, low contrast additions of one pattern to intermediate contrasts of the other caused a paradoxical reduction in the perceived global contrast. None of the following metrics/models predicted this: max, linear sum, average, energy, root mean squared (RMS), Legge and Foley. However, a nonlinear gain control model, derived from contrast detection and discrimination experiments, incorporating wide-field summation and suppression, did predict the results with no free parameters, but only when spatial filtering was removed. We conclude that our model describes fundamental processes in human contrast vision (the pattern of results was the same for expert and naive observers), but that above threshold-when contrast pedestals are clearly visible-vision's spatial filtering characteristics become transparent, tending towards those of a delta function prior to spatial summation. The global contrast statistic from our model is as easily derived as the RMS contrast of an image, and since it more closely relates to human perception, we suggest it be used as an image contrast metric in practical applications.
Affiliation(s)
- Tim S. Meese
- School of Life and Health Sciences, Aston University, Birmingham B4 7ET, UK
- Daniel H. Baker
- School of Life and Health Sciences, Aston University, Birmingham B4 7ET, UK; Department of Psychology, University of York, York YO10 5DD, UK
- Robert J. Summers
- School of Life and Health Sciences, Aston University, Birmingham B4 7ET, UK
18
Constrained sampling experiments reveal principles of detection in natural scenes. Proc Natl Acad Sci U S A 2017; 114:E5731-E5740. [PMID: 28652323 DOI: 10.1073/pnas.1619487114]
Abstract
A fundamental everyday visual task is to detect target objects within a background scene. Using relatively simple stimuli, vision science has identified several major factors that affect detection thresholds, including the luminance of the background, the contrast of the background, the spatial similarity of the background to the target, and uncertainty due to random variations in the properties of the background and in the amplitude of the target. Here we use an experimental approach based on constrained sampling from multidimensional histograms of natural stimuli, together with a theoretical analysis based on signal detection theory, to discover how these factors affect detection in natural scenes. We sorted a large collection of natural image backgrounds into multidimensional histograms, where each bin corresponds to a particular luminance, contrast, and similarity. Detection thresholds were measured for a subset of bins spanning the space, where a natural background was randomly sampled from a bin on each trial. In low-uncertainty conditions, both the background bin and the amplitude of the target were fixed, and, in high-uncertainty conditions, they varied randomly on each trial. We found that thresholds increase approximately linearly along all three dimensions and that detection accuracy is unaffected by background bin and target amplitude uncertainty. The results are predicted from first principles by a normalized matched-template detector, where the dynamic normalizing gain factor follows directly from the statistical properties of the natural backgrounds. The results provide an explanation for classic laws of psychophysics and their underlying neural mechanisms.
19
Maiello G, Walker L, Bex PJ, Vera-Diaz FA. Blur perception throughout the visual field in myopia and emmetropia. J Vis 2017; 17:3. [PMID: 28476060 PMCID: PMC5425112 DOI: 10.1167/17.5.3]
Abstract
We evaluated the ability of emmetropic and myopic observers to detect and discriminate blur across the retina under monocular or binocular viewing conditions. We recruited 39 young (23-30 years) healthy adults (n = 19 myopes) with best-corrected visual acuity 0.0 LogMAR (20/20) or better in each eye and no binocular or accommodative dysfunction. Monocular and binocular blur discrimination thresholds were measured as a function of pedestal blur using naturalistic stimuli with an adaptive 4AFC procedure. Stimuli were presented in a 46° diameter window at 40 cm. Gaussian blur pedestals were confined to an annulus at either 0°, 4°, 8°, or 12° eccentricity, with a blur increment applied to only one quadrant of the image. The adaptive procedure efficiently estimated a dipper-shaped blur discrimination threshold function with two parameters: intrinsic blur and blur sensitivity. The amount of intrinsic blur increased for retinal eccentricities beyond 4° (p < 0.001) and was lower in binocular than monocular conditions (p < 0.001), but was similar across refractive groups (p = 0.47). Blur sensitivity decreased with retinal eccentricity (p < 0.001) and was highest for binocular viewing, but only for central vision (p < 0.05). Myopes showed worse blur sensitivity than emmetropes monocularly (p < 0.05) but not binocularly (p = 0.66). As expected, blur perception worsens in the visual periphery and binocular summation is most evident in central vision. Furthermore, myopes exhibit a monocular impairment in blur sensitivity that improves under binocular conditions. Implications for the development of myopia are discussed.
Affiliation(s)
- Guido Maiello
- University College London Institute of Ophthalmology, London, UK; Northeastern University, Boston, MA, USA
- Lenna Walker
- New England College of Optometry, Boston, MA, USA
20
Variations in crowding, saccadic precision, and spatial localization reveal the shared topology of spatial vision. Proc Natl Acad Sci U S A 2017; 114:E3573-E3582. [PMID: 28396415 DOI: 10.1073/pnas.1615504114]
Abstract
Visual sensitivity varies across the visual field in several characteristic ways. For example, sensitivity declines sharply in peripheral (vs. foveal) vision and is typically worse in the upper (vs. lower) visual field. These variations can affect processes ranging from acuity and crowding (the deleterious effect of clutter on object recognition) to the precision of saccadic eye movements. Here we examine whether these variations can be attributed to a common source within the visual system. We first compared the size of crowding zones with the precision of saccades using an oriented clock target and two adjacent flanker elements. We report that both saccade precision and crowded-target reports vary idiosyncratically across the visual field with a strong correlation across tasks for all participants. Nevertheless, both group-level and trial-by-trial analyses reveal dissociations that exclude a common representation for the two processes. We therefore compared crowding with two measures of spatial localization: Landolt-C gap resolution and three-dot bisection. Here we observe similar idiosyncratic variations with strong interparticipant correlations across tasks despite considerably finer precision. Hierarchical regression analyses further show that variations in spatial precision account for much of the variation in crowding, including the correlation between crowding and saccades. Altogether, we demonstrate that crowding, spatial localization, and saccadic precision show clear dissociations, indicative of independent spatial representations, whilst nonetheless sharing idiosyncratic variations in spatial topology. We propose that these topological idiosyncrasies are established early in the visual system and inherited throughout later stages to affect a range of higher-level representations.
21
Harrison WJ, Bex PJ. Visual crowding is a combination of an increase of positional uncertainty, source confusion, and featural averaging. Sci Rep 2017; 7:45551. [PMID: 28378781 PMCID: PMC5381224 DOI: 10.1038/srep45551]
Abstract
Although we perceive a richly detailed visual world, our ability to identify individual objects is severely limited in clutter, particularly in peripheral vision. Models of such “crowding” have generally been driven by the phenomenological misidentifications of crowded targets: using stimuli that do not easily combine to form a unique symbol (e.g. letters or objects), observers typically confuse the source of objects and report either the target or a distractor, but when continuous features are used (e.g. orientated gratings or line positions) observers report a feature somewhere between the target and distractor. To reconcile these accounts, we develop a hybrid method of adjustment that allows detailed analysis of these multiple error categories. Observers reported the orientation of a target, under several distractor conditions, by adjusting an identical foveal target. We apply new modelling to quantify whether perceptual reports show evidence of positional uncertainty, source confusion, and featural averaging on a trial-by-trial basis. Our results show that observers make a large proportion of source-confusion errors. However, our study also reveals the distribution of perceptual reports that underlie performance in this crowding task more generally: aggregate errors cannot be neatly labelled because they are heterogeneous and their structure depends on target-distractor distance.
Affiliation(s)
- William J Harrison
- Department of Psychology, University of Cambridge, Cambridge, UK; Queensland Brain Institute, The University of Queensland, Brisbane, Australia
- Peter J Bex
- Department of Psychology, Northeastern University, Boston, USA
22
Wallis TSA, Dorr M, Bex PJ. Sensitivity to gaze-contingent contrast increments in naturalistic movies: An exploratory report and model comparison. J Vis 2015; 15:3. [PMID: 26057546 DOI: 10.1167/15.8.3]
Abstract
Sensitivity to luminance contrast is a prerequisite for all but the simplest visual systems. To examine contrast increment detection performance in a way that approximates the natural environmental input of the human visual system, we presented contrast increments gaze-contingently within naturalistic video freely viewed by observers. A band-limited contrast increment was applied to a local region of the video relative to the observer's current gaze point, and the observer made a forced-choice response to the location of the target (≈25,000 trials across five observers). We present exploratory analyses showing that performance improved as a function of the magnitude of the increment and depended on the direction of eye movements relative to the target location, the timing of eye movements relative to target presentation, and the spatiotemporal image structure at the target location. Contrast discrimination performance can be modeled by assuming that the underlying contrast response is an accelerating nonlinearity (arising from a nonlinear transducer or gain control). We implemented one such model and examined the posterior over model parameters, estimated using Markov-chain Monte Carlo methods. The parameters were poorly constrained by our data; parameters constrained using strong priors taken from previous research showed poor cross-validated prediction performance. Atheoretical logistic regression models were better constrained and provided similar prediction performance to the nonlinear transducer model. Finally, we explored the properties of an extended logistic regression that incorporates both eye movement and image content features. Models of contrast transduction may be better constrained by incorporating data from both artificial and natural contrast perception settings.
23
Bulatov A, Bulatova N, Loginovich Y, Surkys T. Illusion of extent evoked by closed two-dimensional shapes. Biol Cybern 2015; 109:163-178. [PMID: 25359505 DOI: 10.1007/s00422-014-0633-3]
Abstract
In the present study, we have tested the applicability of the computational model of centroid extraction to account for the data collected in experiments with stimuli comprising closed two-dimensional shapes. The outlined or uniformly filled pie-shaped circular sectors (contextual distractors) were arranged according to the Brentano pattern, and three different stimulus parameters (either the radius, the central angle, or the tilt angle of the sectors) were used as independent variables in different series of experiments. It was demonstrated that the model calculations adequately predict the variations of illusion magnitude shown by all the subjects for all independent variables and that there is no significant difference between the data obtained for stimuli with the outlined and uniformly filled distractors. A good correspondence between the computational and experimental data provides convincing evidence in support of the "centroid" explanation of illusions of extent of the Müller-Lyer type.
Affiliation(s)
- Aleksandr Bulatov
- Laboratory of Visual Neurophysiology, Lithuanian University of Health Sciences, Mickevičiaus 9, 44307, Kaunas, Lithuania
24
Maiello G, Chessa M, Solari F, Bex PJ. Simulated disparity and peripheral blur interact during binocular fusion. J Vis 2014; 14:13. [PMID: 25034260 DOI: 10.1167/14.8.13]
Abstract
We have developed a low-cost, practical gaze-contingent display in which natural images are presented to the observer with dioptric blur and stereoscopic disparity that are dependent on the three-dimensional structure of natural scenes. Our system simulates a distribution of retinal blur and depth similar to that experienced in real-world viewing conditions by emmetropic observers. We implemented the system using light-field photographs taken with a plenoptic camera which supports digital refocusing anywhere in the images. We coupled this capability with an eye-tracking system and stereoscopic rendering. With this display, we examine how the time course of binocular fusion depends on depth cues from blur and stereoscopic disparity in naturalistic images. Our results show that disparity and peripheral blur interact to modify eye-movement behavior and facilitate binocular fusion, and the greatest benefit was gained by observers who struggled most to achieve fusion. Even though plenoptic images do not replicate an individual’s aberrations, the results demonstrate that a naturalistic distribution of depth-dependent blur may improve 3-D virtual reality, and that interruptions of this pattern (e.g., with intraocular lenses) which flatten the distribution of retinal blur may adversely affect binocular fusion.
Affiliation(s)
- Guido Maiello
- Department of Ophthalmology, Harvard Medical School, Boston, MA, USA; Department of Informatics, Bioengineering, Robotics and System Engineering, University of Genoa, Genoa, Italy; UCL Institute of Ophthalmology, University College London, London, UK
- Manuela Chessa
- Department of Informatics, Bioengineering, Robotics and System Engineering, University of Genoa, Genoa, Italy
- Fabio Solari
- Department of Informatics, Bioengineering, Robotics and System Engineering, University of Genoa, Genoa, Italy
- Peter J Bex
- Department of Ophthalmology, Harvard Medical School, Boston, MA, USA