1
Shi Y, Eskew RT. Asymmetries between achromatic increments and decrements: Perceptual scales and discrimination thresholds. J Vis 2024; 24:10. [PMID: 38607638] [PMCID: PMC11019583] [DOI: 10.1167/jov.24.4.10]
Abstract
The perceptual response to achromatic incremental (A+) and decremental (A-) visual stimuli is known to be asymmetrical, due most likely to differences between ON and OFF channels. In the current study, we further investigated this asymmetry psychophysically. In Experiment 1, maximum likelihood difference scaling (MLDS) was used to estimate separately observers' perceptual scales for A+ and A-. In Experiment 2, observers performed two spatial alternative forced choice (2SAFC) pedestal discrimination on multiple pedestal contrast levels, using all combinations of A+ and A- pedestals and tests. Both experiments showed the well-known asymmetry. The perceptual scale curves of A+ follow a modified Naka-Rushton equation, whereas those of A- follow a cubic function. Correspondingly, the discrimination thresholds for the A+ pedestal increased monotonically with pedestal contrast, whereas the thresholds of the A- pedestal first increased as the pedestal contrast increased, then decreased as the contrast became higher. We propose a model that links the results of the two experiments, in which the pedestal discrimination threshold is inversely related to the derivative of the perceptual scale curve. Our findings generally agree with Whittle's previous findings (Whittle, 1986, 1992), which also included strong asymmetry between A+ and A-. We suggest that the perception of achromatic balanced incremental and decremental (bipolar) stimuli, such as gratings or flicker, might be dominated by one polarity due to this asymmetry under some conditions.
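The proposed link between the two experiments can be sketched numerically: the pedestal discrimination threshold is modeled as inversely related to the local slope of the perceptual scale. A minimal sketch for the A+ (Naka-Rushton) case follows; the parameter values (rmax, c50, n, and the internal-difference constant) are illustrative assumptions, not the authors' fitted values.

```python
import numpy as np

def naka_rushton(c, rmax=1.0, c50=0.3, n=2.0):
    """Naka-Rushton transducer: compressive response that saturates at high contrast."""
    return rmax * c**n / (c**n + c50**n)

# Perceptual scale for increments (A+) on a fine contrast grid
c = np.linspace(0.01, 1.0, 500)
scale = naka_rushton(c)

# Model sketch: discrimination threshold is inversely related to the
# derivative of the perceptual scale (fixed internal JND / local slope)
slope = np.gradient(scale, c)
threshold = 0.01 / slope  # 0.01 = assumed constant internal difference

# Where the scale saturates, the slope is shallow and thresholds are high
assert threshold[-1] > threshold[0]
```

With a cubic scale substituted for `naka_rushton`, the same inverse-derivative rule yields the non-monotonic threshold curve described for A- pedestals.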
Affiliation(s)
- Yangyi Shi
- Department of Psychology, Northeastern University, Boston, MA, USA
- yangyishi.com
- Rhea T Eskew
- Department of Psychology, Northeastern University, Boston, MA, USA
- https://web.northeastern.edu/visionlab/
2
Shooner C, Mullen KT. Linking perceived to physical contrast: Comparing results from discrimination and difference-scaling experiments. J Vis 2022; 22:13. [PMID: 35061001] [PMCID: PMC8787651] [DOI: 10.1167/jov.22.1.13]
Abstract
Psychophysical approaches that allow us to estimate how perceived stimulus intensity is linked to physical intensity are important tools for studying nonlinear transformations of visual signals within different visual pathways. Here, we investigated how stimulus contrast is encoded in achromatic and chromatic pathways using simple grating stimuli. We compared two experimental approaches to this question: contrast discrimination (increment detection thresholds measured on contrast pedestals) and the maximum likelihood difference scaling (MLDS) approach introduced by Maloney and Yang (2003). The results of both experiments are expressed using simple models that include a transducer function mapping physical contrast to an internal signal the observer uses in making judgments, and an estimate of the variability of this representation (internal "noise"). We found that the transducers derived from both experiments have a similar form, but occupy different ranges of physical contrast in different stimulus conditions, reflecting differences in contrast sensitivity. This is consistent with past discrimination results, and in the difference-scaling case provides new evidence supporting the idea that suprathreshold chromatic and achromatic contrast are processed similarly, once differences in contrast sensitivity are taken into account. Model estimates of internal noise were higher in the difference-scaling experiment than the discrimination experiment, a finding we attribute to a difference in task complexity. Finally, we fit an alternative version of the MLDS model in which internal noise increased with response level. This alternative was no better at predicting holdout data in a cross-validation analysis than the original constant-variance model.
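The MLDS decision model of Maloney and Yang (2003) referenced above can be sketched with a simulated observer: on each triad trial, the observer judges which of two stimulus intervals spans the larger perceptual difference, corrupted by constant-variance internal noise. The power-law transducer and noise level below are assumptions for illustration, not this study's fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)

def transducer(c, p=0.5):
    """Assumed power-law mapping from physical contrast to internal signal."""
    return c ** p

def mlds_trial(a, b, c, sigma=0.05):
    """One difference-scaling judgment: does the upper interval (b, c) look
    larger than the lower interval (a, b)? Constant-variance noise sigma."""
    d = (transducer(c) - transducer(b)) - (transducer(b) - transducer(a))
    return d + rng.normal(0.0, sigma) > 0

# With a compressive transducer, equal physical intervals look unequal:
# the lower interval (0.1, 0.3) appears larger than the upper (0.3, 0.5)
responses = [mlds_trial(0.1, 0.3, 0.5) for _ in range(2000)]
prop_upper_larger = np.mean(responses)
```

Fitting MLDS inverts this model, recovering scale values that make the simulated response probabilities most likely.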
Affiliation(s)
- Christopher Shooner
- McGill Vision Research, Department of Ophthalmology & Visual Sciences, McGill University, Montreal, Quebec, Canada
- Kathy T Mullen
- McGill Vision Research, Department of Ophthalmology & Visual Sciences, McGill University, Montreal, Quebec, Canada
3
DiMattina C. Luminance texture boundaries and luminance step boundaries are segmented using different mechanisms. Vision Res 2022; 190:107968. [PMID: 34794083] [PMCID: PMC8712411] [DOI: 10.1016/j.visres.2021.107968]
Abstract
In natural scenes, two adjacent surfaces may differ in mean luminance without any sharp change in luminance at their boundary, but rather due to different relative proportions of light and dark regions within each surface. We refer to such boundaries as luminance texture boundaries (LTBs), and in this study we investigate whether LTBs are segmented using different mechanisms than luminance step boundaries (LSBs). We develop a novel method to generate luminance texture boundaries from natural uniform textures, and using these natural LTB stimuli in a boundary segmentation task, we find that observers are much more sensitive to identical luminance differences which are defined by textures (LTBs) than by uniform luminance steps (LSBs), consistent with the possibility of different mechanisms. In a second and third set of experiments, we characterize observer performance segmenting natural LTBs in the presence of masking LSBs which observers are instructed to ignore. We show that there is very little effect of masking LSBs on LTB segmentation performance. Furthermore, any masking effects we find are far less than those observed in a control experiment where both the masker and target are LSBs, and far less than those predicted by a model assuming identical mechanisms. Finally, we perform a fourth set of boundary segmentation experiments using artificial LTB stimuli comprised of differing proportions of white and black dots on opposite sides of the boundary. We find that these stimuli are also highly robust to masking by supra-threshold LSBs, consistent with our results using natural stimuli, and with our earlier studies using similar stimuli. Taken as a whole, these results suggest that the visual system contains mechanisms well suited to detecting surface boundaries that are robust to interference from luminance differences arising from luminance steps like those formed by cast shadows.
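The artificial LTB stimuli from the fourth set of experiments can be sketched as follows: differing proportions of white and black dots on the two sides of the boundary produce a mean-luminance difference with no sharp step at the midline. Image size and dot proportions here are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_ltb(size=64, p_left=0.7, p_right=0.3):
    """Artificial luminance texture boundary: white (1) and black (0) dots
    in differing proportions on each side of a vertical midline, so mean
    luminance differs across the boundary without a luminance step edge."""
    left = (rng.random((size, size // 2)) < p_left).astype(float)
    right = (rng.random((size, size // 2)) < p_right).astype(float)
    return np.hstack([left, right])

img = make_ltb()
# The two halves differ in mean luminance, carried entirely by dot density
assert img[:, :32].mean() > img[:, 32:].mean()
```

Adding a uniform offset to one half of such an image would produce the superimposed masking LSB used in the segmentation experiments.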
Affiliation(s)
- Christopher DiMattina
- Computational Perception Laboratory, Department of Psychology, Florida Gulf Coast University, Fort Myers, FL 33965-6565, USA
4
Cheeseman JR, Ferwerda JA, Maile FJ, Fleming RW. Scaling and discriminability of perceived gloss. J Opt Soc Am A Opt Image Sci Vis 2021; 38:203-210. [PMID: 33690530] [DOI: 10.1364/josaa.409454]
Abstract
While much attention has been given to understanding biases in gloss perception (e.g., changes in perceived reflectance as a function of lighting, shape, viewpoint, and other factors), here we investigated sensitivity to changes in surface reflectance. We tested how visual sensitivity to differences in specular reflectance varies as a function of the magnitude of specular reflectance. Stimuli consisted of renderings of glossy objects under natural illumination. Using maximum likelihood difference scaling (MLDS), we created a perceptual scaling of the specular reflectance parameter of the Ward reflectance model. Then, using the method of constant stimuli and a standard 2AFC procedure, we obtained psychometric functions for gloss discrimination across a range of reflectance values derived from the perceptual scale. Both methods demonstrate that discriminability is significantly diminished at high levels of specular reflectance, thus indicating that gloss sensitivity depends on the magnitude of change in the image produced by different reflectance values. Taken together, these experiments also suggest that internal sensory noise remains constant for suprathreshold and near-threshold intervals of specular reflectance, which supports the use of MLDS as a highly efficient method for evaluating gloss sensitivity.
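The relationship between the perceptual scale and 2AFC discriminability can be sketched as follows. A compressive scale with constant internal noise predicts exactly the pattern reported: the same physical reflectance step is harder to discriminate at high specular reflectance. The square-root scale and noise value below are illustrative assumptions, not the fitted MLDS scale from the paper.

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def perceived_gloss(r):
    """Assumed compressive perceptual scale for specular reflectance r
    (illustrative square-root form)."""
    return math.sqrt(r)

def p_correct_2afc(r, dr, sigma=0.02):
    """2AFC proportion correct for discriminating reflectance r vs r + dr,
    with constant internal noise sigma on the perceptual scale."""
    return phi((perceived_gloss(r + dr) - perceived_gloss(r)) / sigma)

# The same physical step dr is less discriminable at high reflectance
low = p_correct_2afc(0.05, 0.02)
high = p_correct_2afc(0.60, 0.02)
assert low > high
```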
5
Abstract
In studying visual perception, we seek to develop models of processing that accurately predict perceptual judgments. Much of this work is focused on judgments of discrimination, and there is a large literature concerning models of visual discrimination. There are, however, non-threshold visual judgments, such as judgments of the magnitude of differences between visual stimuli, that provide a means to bridge the gap between threshold and appearance. We describe two such models of suprathreshold judgments, maximum likelihood difference scaling and maximum likelihood conjoint measurement, and review recent literature that has exploited them.
Affiliation(s)
- Laurence T Maloney
- Department of Psychology, New York University, New York, New York 10003, USA
- Kenneth Knoblauch
- Université Lyon, Université Claude Bernard Lyon 1, INSERM, Stem Cell and Brain Research Institute U1208, 69500 Bron, France
- National Centre for Optics, Vision and Eye Care, Faculty of Health and Social Sciences, University of South-Eastern Norway, 3616 Kongsberg, Norway
6
Abstract
A central question in psychophysical research is how perceptual differences between stimuli translate into physical differences and vice versa. Characterizing such a psychophysical scale would reveal how a stimulus is converted into a perceptual event, particularly under changes in viewing conditions (e.g., illumination). Various methods exist to derive perceptual scales, but in practice, scale estimation is often bypassed by assessing appearance matches. Matches, however, only reflect the underlying perceptual scales but do not reveal them directly. Two recently developed methods, MLDS (Maximum Likelihood Difference Scaling) and MLCM (Maximum Likelihood Conjoint Measurement), promise to reliably estimate perceptual scales. Here we compared both methods in their ability to estimate perceptual scales across context changes in the domain of lightness perception. In simulations, we adopted a lightness constant, a contrast, and a luminance-based observer model to generate differential patterns of perceptual scales. MLCM correctly recovered all models. MLDS correctly recovered only the lightness constant observer model. We also empirically probed both methods with two types of stimuli: (a) variegated checkerboards that support lightness constancy and (b) center-surround stimuli that do not support lightness constancy. Consistent with the simulations, MLDS and MLCM provided similar scale estimates in the first case and divergent estimates in the second. In addition, scales from MLCM, and not from MLDS, accurately predicted asymmetric matches for both types of stimuli. Taking experimental and simulation results together, MLCM seems more apt to provide a valid estimate of the perceptual scales underlying judgments of lightness across viewing conditions.
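Two of the three simulated observer models can be sketched as follows (the contrast observer is omitted for brevity; noise level and stimulus values are illustrative assumptions). Each MLCM-style trial presents two (reflectance, illumination) pairs and asks which looks lighter; the lightness-constant observer discounts illumination, while the luminance-based observer does not.

```python
import numpy as np

rng = np.random.default_rng(2)

def lightness_constant(reflectance, illumination):
    """Observer model that discounts illumination entirely."""
    return reflectance

def luminance_based(reflectance, illumination):
    """Observer model driven by raw luminance = reflectance * illumination."""
    return reflectance * illumination

def mlcm_trial(model, s1, s2, sigma=0.05):
    """Conjoint judgment: does stimulus s1 look lighter than s2?
    Each stimulus is a (reflectance, illumination) pair."""
    d = model(*s1) - model(*s2) + rng.normal(0.0, sigma)
    return d > 0

# Same reflectance under bright vs dim illumination: the lightness-constant
# observer responds at chance, the luminance-based observer does not
s_dim, s_bright = (0.5, 0.4), (0.5, 1.0)
p_const = np.mean([mlcm_trial(lightness_constant, s_bright, s_dim) for _ in range(2000)])
p_lum = np.mean([mlcm_trial(luminance_based, s_bright, s_dim) for _ in range(2000)])
assert abs(p_const - 0.5) < 0.05 and p_lum > 0.9
```

Because MLCM scales each dimension jointly, it can separate these response patterns, which is the property the simulations above exploit.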
7
Nonlinear transduction of emotional facial expression. Vision Res 2020; 170:1-11. [PMID: 32217366] [DOI: 10.1016/j.visres.2020.03.004]
Abstract
To create neural representations of external stimuli, the brain performs a number of processing steps that transform its inputs. For fundamental attributes, such as stimulus contrast, this involves one or more nonlinearities that are believed to optimise the neural code to represent features of the natural environment. Here we ask if the same is also true of more complex stimulus dimensions, such as emotional facial expression. We report the results of three experiments combining morphed facial stimuli with electrophysiological and psychophysical methods to measure the function mapping emotional expression intensity to internal response. The results converge on a nonlinearity that accelerates over weak expressions, and then becomes shallower for stronger expressions, similar to the situation for lower level stimulus properties. We further demonstrate that the nonlinearity is not attributable to the morphing procedure used in stimulus generation.
8
Internal noise in contrast discrimination propagates forwards from early visual cortex. Neuroimage 2019; 191:503-517. [PMID: 30822470] [DOI: 10.1016/j.neuroimage.2019.02.049]
Abstract
Human contrast discrimination performance is limited by transduction nonlinearities and variability of the neural representation (noise). Whereas the nonlinearities have been well-characterised, there is less agreement about the specifics of internal noise. Psychophysical models assume that it impacts late in sensory processing, whereas neuroimaging and intracranial electrophysiology studies suggest that the noise is much earlier. We investigated whether perceptually-relevant internal noise arises in early visual areas or later decision making areas. We recorded EEG and MEG during a two-interval-forced-choice contrast discrimination task and used multivariate pattern analysis to decode target/non-target and selected/non-selected intervals from evoked responses. We found that perceptual decisions could be decoded from both EEG and MEG signals, even when the stimuli in both intervals were physically identical. Above-chance decision classification started <100 ms after stimulus onset, suggesting that neural noise affects sensory signals early in the visual pathway. Classification accuracy increased over time, peaking at >500 ms. Applying multivariate analysis to separate anatomically-defined brain regions in MEG source space, we found that occipital regions were informative early on but then information spreads forwards across parietal and frontal regions. This is consistent with neural noise affecting sensory processing at multiple stages of perceptual decision making. We suggest how early sensory noise might be resolved with Birdsall's linearisation, in which a dominant noise source obscures subsequent nonlinearities, to allow the visual system to preserve the wide dynamic range of early areas whilst still benefitting from contrast-invariance at later stages. A preprint of this work is available at: https://doi.org/10.1101/364612.
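The multivariate decoding step can be illustrated with a minimal nearest-class-mean classifier on simulated sensor patterns; the trial counts, effect size, and classifier are illustrative stand-ins for the study's actual MVPA pipeline, not its implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated evoked responses: n_trials x n_sensors per class, where the two
# classes (e.g., selected vs non-selected interval) differ by a weak pattern
n_trials, n_sensors = 200, 32
pattern = rng.normal(0.0, 1.0, n_sensors)
X0 = rng.normal(0.0, 1.0, (n_trials, n_sensors))
X1 = rng.normal(0.0, 1.0, (n_trials, n_sensors)) + 0.5 * pattern

def decode(Xa, Xb):
    """Nearest-class-mean decoding with a train/test split:
    train on the first half of each class, test on the second half."""
    ma, mb = Xa[:100].mean(0), Xb[:100].mean(0)
    test = np.vstack([Xa[100:], Xb[100:]])
    labels = np.array([0] * 100 + [1] * 100)
    pred = (np.linalg.norm(test - mb, axis=1)
            < np.linalg.norm(test - ma, axis=1)).astype(int)
    return (pred == labels).mean()

acc = decode(X0, X1)
assert acc > 0.5  # above-chance decoding of the class pattern
```

Applying such a decoder at successive time points after stimulus onset yields the time course of classification accuracy described above.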
9
Neri P. The empirical characteristics of human pattern vision defy theoretically-driven expectations. PLoS Comput Biol 2018; 14:e1006585. [PMID: 30513091] [PMCID: PMC6294397] [DOI: 10.1371/journal.pcbi.1006585]
Abstract
Contrast is the most fundamental property of images. Consequently, any comprehensive model of biological vision must incorporate this attribute and provide a veritable description of its impact on visual perception. Current theoretical and computational models predict that vision should modify its characteristics at low contrast: for example, it should become broader (more lowpass) to protect from noise, as often demonstrated by individual neurons. We find that the opposite is true for human discrimination of elementary image elements: vision becomes sharper, not broader, as contrast approaches threshold levels. Furthermore, it suffers from increased internal variability at low contrast and it transitions from a surprisingly linear regime at high contrast to a markedly nonlinear processing mode in the low-contrast range. These characteristics are hard-wired in that they happen on a single trial without memory or expectation. Overall, the empirical results urge caution when attempting to interpret human vision from the standpoint of optimality and related theoretical constructs. Direct measurements of this phenomenon indicate that the actual constraints derive from intrinsic architectural features, such as the co-existence of complex-cell-like and simple-cell-like components. Small circuits built around these elements can indeed account for the empirical results, but do not appear to operate in a manner that conforms to optimality even approximately. More generally, our results provide a compelling demonstration of how far we still are from securing an adequate computational account of the most basic operations carried out by human vision.
Affiliation(s)
- Peter Neri
- Laboratoire des Systèmes Perceptifs, Département d’études cognitives, École normale supérieure, PSL University, CNRS, 75005 Paris, France