1. White DN, Burge J. How distinct sources of nuisance variability in natural images and scenes limit human stereopsis. PLoS Comput Biol 2025;21:e1012945. PMID: 40233309; PMCID: PMC12080933; DOI: 10.1371/journal.pcbi.1012945
Abstract
Stimulus variability, a form of nuisance variability, is a primary source of perceptual uncertainty in everyday natural tasks. How do different properties of natural images and scenes contribute to this uncertainty? Using binocular disparity as a model system, we report a systematic investigation of how various forms of natural stimulus variability impact performance in a stereo-depth discrimination task. With stimuli sampled from a stereo-image database of real-world scenes having pixel-by-pixel ground-truth distance data, three human observers completed two closely related double-pass psychophysical experiments. In the two experiments, each human observer responded twice to ten thousand unique trials, in which twenty thousand unique stimuli were presented. New analytical methods reveal, from these data, the specific and nearly dissociable effects of two distinct sources of natural stimulus variability (variation in luminance-contrast patterns and variation in local-depth structure) on discrimination performance, as well as the relative importance of stimulus-driven variability and internal noise in determining performance limits. Between-observer analyses show that both stimulus-driven sources of uncertainty are responsible for a large proportion of total variance, have strikingly similar effects on different people, and, surprisingly, make stimulus-by-stimulus responses more predictable (not less). The consistency across observers raises the intriguing prospect that image-computable models can make reasonably accurate performance predictions in natural viewing. Overall, the findings provide a rich picture of the stimulus factors that contribute to human perceptual performance in natural scenes. The approach should have broad application to other animal models and to other sensory-perceptual tasks with natural or naturalistic stimuli.
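The double-pass logic can be summarized in a few lines of analysis code. The sketch below is a generic illustration, not the authors' analysis pipeline: it assumes binary responses from two passes over the same trials (hypothetical arrays resp1, resp2, truth) and reports percent correct alongside percent agreement; agreement in excess of what accuracy alone predicts is the usual signature of repeatable, stimulus-driven variability as opposed to internal noise.

```python
import numpy as np

def double_pass_summary(pass1, pass2, correct_answer):
    """Percent correct and between-pass agreement for binary double-pass data."""
    pass1, pass2, correct_answer = map(np.asarray, (pass1, pass2, correct_answer))
    pct_correct = 0.5 * (np.mean(pass1 == correct_answer) + np.mean(pass2 == correct_answer))
    pct_agree = np.mean(pass1 == pass2)
    return pct_correct, pct_agree

# Simulated observer whose decision variable mixes a repeatable (stimulus-driven)
# component with private internal noise, roughly the partition studied in the paper.
rng = np.random.default_rng(0)
n_trials = 10_000
shared = rng.normal(0.5, 1.0, n_trials)               # stimulus-driven component
resp1 = (shared + rng.normal(0.0, 1.0, n_trials)) > 0
resp2 = (shared + rng.normal(0.0, 1.0, n_trials)) > 0
truth = np.ones(n_trials, dtype=bool)
print(double_pass_summary(resp1, resp2, truth))
```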
Affiliation(s)
- David N. White: Neuroscience Graduate Group, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America; Department of Electrical Engineering & Computer Science, York University, Toronto, Ontario, Canada
- Johannes Burge: Neuroscience Graduate Group, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America; Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America; Department of Bioengineering, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
2. Talwar S, Mazade R, Bentley-Ford M, Yu J, Pilli N, Kane MA, Ethier CR, Pardue MT. Modulation of All-Trans Retinoic Acid by Light and Dopamine in the Murine Eye. Invest Ophthalmol Vis Sci 2025;66:37. PMID: 40100201; PMCID: PMC11927300; DOI: 10.1167/iovs.66.3.37
Abstract
Purpose: Ambient light exposure is linked to myopia development in children and affects myopia susceptibility in animal models. Currently, it is unclear which signals mediate the effects of light on myopia. All-trans retinoic acid (atRA) and dopamine (DA) oppositely influence experimental myopia and may be involved in the retinoscleral signaling cascade underlying myopic eye growth. However, how ocular atRA responds to different lighting and whether atRA and DA interact remains unknown.
Methods: Dark-adapted C57BL/6J mice (29-31 days old) were exposed to dim (1 lux), mid (59 lux), or bright (12,000 lux) ambient lighting for 5 to 60 minutes. Some mice were also systemically administered the DA precursor, LDOPA, or atRA before light exposure. After exposure, the retina and the back of the eye (BOE) were collected and analyzed for levels of atRA, DA, and the DA metabolite, DOPAC.
Results: DA turnover (DOPAC/DA ratio) in the retina increased in magnitude after only 5 minutes of exposure to higher ambient luminance, but was minimal in the BOE. In contrast, atRA levels in the retina and BOE significantly decreased with higher ambient luminance and longer duration exposure. Intriguingly, LDOPA-treated mice had a transient reduction in retinal atRA compared with saline-treated mice, whereas atRA treatment had no effect on ocular DA.
Conclusions: Ocular atRA was affected by the duration of exposure to different ambient lighting, and retinal atRA levels decreased with increased DA. Overall, these data suggest specific interactions between ambient lighting, atRA, and DA that could have implications for the retinoscleral signaling cascade underlying myopic eye growth.
Affiliation(s)
- Sarah Talwar: Department of Ophthalmology, Emory University School of Medicine, Atlanta, Georgia, United States; Center for Visual and Neurocognitive Rehabilitation, Atlanta VA Health Care System, Atlanta, Georgia, United States
- Reece Mazade: Department of Ophthalmology, Emory University School of Medicine, Atlanta, Georgia, United States; Center for Visual and Neurocognitive Rehabilitation, Atlanta VA Health Care System, Atlanta, Georgia, United States
- Melissa Bentley-Ford: Department of Ophthalmology, Emory University School of Medicine, Atlanta, Georgia, United States; Center for Visual and Neurocognitive Rehabilitation, Atlanta VA Health Care System, Atlanta, Georgia, United States
- Jianshi Yu: Department of Pharmaceutical Sciences, University of Maryland School of Pharmacy, Baltimore, Maryland, United States
- Nageswara Pilli: Department of Pharmaceutical Sciences, University of Maryland School of Pharmacy, Baltimore, Maryland, United States
- Maureen A. Kane: Department of Pharmaceutical Sciences, University of Maryland School of Pharmacy, Baltimore, Maryland, United States
- C. Ross Ethier: Department of Ophthalmology, Emory University School of Medicine, Atlanta, Georgia, United States; Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, Georgia, United States
- Machelle T. Pardue: Department of Ophthalmology, Emory University School of Medicine, Atlanta, Georgia, United States; Center for Visual and Neurocognitive Rehabilitation, Atlanta VA Health Care System, Atlanta, Georgia, United States; Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, Georgia, United States
3. Patterson SS, Cai Y, Yang Q, Merigan WH, Williams DR. Asymmetric Activation of Retinal ON and OFF Pathways by AOSLO Raster-Scanned Visual Stimuli. bioRxiv [Preprint] 2024:2024.12.17.628952. PMID: 39763934; PMCID: PMC11702774; DOI: 10.1101/2024.12.17.628952
Abstract
Adaptive optics scanning light ophthalmoscopy (AOSLO) enables high-resolution retinal imaging, eye tracking, and stimulus delivery in the living eye. AOSLO-mediated visual stimuli are created by temporally modulating the excitation light as it scans across the retina. As a result, each location within the field of view receives a brief flash of light during each scanner cycle (every 33-40 ms). Here we used in vivo calcium imaging with AOSLO to investigate the impact of this intermittent stimulation on the retinal ON and OFF pathways. Raster-scanned backgrounds exaggerated existing ON-OFF pathway asymmetries leading to high baseline activity in ON cells and increased response rectification in OFF cells.
Affiliation(s)
- Sara S Patterson: Flaum Eye Institute, University of Rochester Medical Center, Rochester, NY, 14642; Del Monte Institute for Neuroscience, University of Rochester Medical Center, NY, 14642
- Yongyi Cai: Institute of Optics, University of Rochester, Rochester, NY, 14627
- Qiang Yang: Center for Visual Science, University of Rochester, Rochester, NY, 14627
- William H Merigan: Flaum Eye Institute, University of Rochester Medical Center, Rochester, NY, 14642; Center for Visual Science, University of Rochester, Rochester, NY, 14627
- David R Williams: Flaum Eye Institute, University of Rochester Medical Center, Rochester, NY, 14642; Institute of Optics, University of Rochester, Rochester, NY, 14627; Center for Visual Science, University of Rochester, Rochester, NY, 14627
4. Talwar S, Mazade R, Bentley-Ford M, Yu J, Pilli N, Kane MA, Ethier CR, Pardue MT. Modulation of all-trans retinoic acid by light and dopamine in the murine eye. bioRxiv [Preprint] 2024:2024.12.06.627245. PMID: 39713473; PMCID: PMC11661107; DOI: 10.1101/2024.12.06.627245
Abstract
Purpose: Ambient light exposure is linked to myopia development in children and affects myopia susceptibility in animal models. Currently, it is unclear which signals mediate the effects of light on myopia. All-trans retinoic acid (atRA) and dopamine (DA) oppositely influence experimental myopia and may be involved in the retino-scleral signaling cascade underlying myopic eye growth. However, how ocular atRA responds to different lighting and whether atRA and DA interact remains unknown.
Methods: Dark-adapted C57BL/6J mice (29-31 days old) were exposed to Dim (1 lux), Mid (59 lux), or Bright (12,000 lux) ambient lighting for 5-60 minutes. Some mice were also systemically administered the DA precursor, LDOPA, or atRA prior to light exposure. After exposure, the retina and the back-of-the-eye (BOE) were collected and analyzed for levels of atRA, DA, and the DA metabolite, DOPAC.
Results: DA turnover (DOPAC/DA ratio) in the retina increased in magnitude after only five minutes of exposure to higher ambient luminance but was minimal in the BOE. In contrast, atRA levels in the retina and BOE significantly decreased with higher ambient luminance and longer duration exposure. Intriguingly, LDOPA-treated mice had a transient reduction in retinal atRA compared to saline-treated mice, whereas atRA treatment had no effect on ocular DA.
Conclusions: Ocular atRA was affected by the duration of exposure to different ambient lighting and retinal atRA levels decreased with increased DA. Overall, these data suggest specific interactions between ambient lighting, atRA, and DA that could have implications for the retino-scleral signaling cascade underlying myopic eye growth.
5. Domdei N, Sauer Y, Hecox B, Neugebauer A, Wahl S. Assessing visual performance during intense luminance changes in virtual reality. Heliyon 2024;10:e40349. PMID: 39650180; PMCID: PMC11625149; DOI: 10.1016/j.heliyon.2024.e40349
Abstract
During indoor-outdoor transitions, humans encounter luminance changes beyond the functional range of the photoreceptors, leaving the individual at risk of overlooking harmful low-contrast objects until adaptation processes re-enable optimal vision. To study human visual performance during intense luminance changes, we propose a virtual-reality-based simulation platform. After linearization of the headset's luminance output, detection times were recorded for ten participants. The small (FWHM = 0.6°) low-contrast stimuli appeared randomly in one of four corners (±10°) after luminance changes of three magnitudes within 1 or 3 s. After luminance decreases, detection times were significantly shorter for conditions with simulated self-tinting lenses than for lenses with fixed transmission rates. In cases of luminance increases, all detection times were similar. In conclusion, the proposed virtual reality simulation platform allows for studying vision during or after steep luminance changes and helps to design technical aids like self-tinting lenses.
Affiliation(s)
- Niklas Domdei: Carl Zeiss Vision International GmbH, Aalen, Germany
- Yannick Sauer: Carl Zeiss Vision International GmbH, Aalen, Germany; Institute for Ophthalmic Research, Eberhard Karls University Tübingen, Tübingen, Germany
- Alexander Neugebauer: Institute for Ophthalmic Research, Eberhard Karls University Tübingen, Tübingen, Germany
- Siegfried Wahl: Carl Zeiss Vision International GmbH, Aalen, Germany; Institute for Ophthalmic Research, Eberhard Karls University Tübingen, Tübingen, Germany
6. Chen Q, Ingram NT, Baudin J, Angueyra JM, Sinha R, Rieke F. Predictably manipulating photoreceptor light responses to reveal their role in downstream visual responses. eLife 2024;13:RP93795. PMID: 39498955; PMCID: PMC11537484; DOI: 10.7554/elife.93795
Abstract
Computation in neural circuits relies on the judicious use of nonlinear circuit components. In many cases, multiple nonlinear components work collectively to control circuit outputs. Separating the contributions of these different components is difficult, and this limits our understanding of the mechanistic basis of many important computations. Here, we introduce a tool that permits the design of light stimuli that predictably alter rod and cone phototransduction currents, including stimuli that compensate for nonlinear properties such as light adaptation. This tool, based on well-established models for the rod and cone phototransduction cascade, permits the separation of nonlinearities in phototransduction from those in downstream circuits. This will allow, for example, direct tests of how adaptation in rod and cone phototransduction affects downstream visual signals and perception.
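The idea of designing stimuli that compensate for a known front-end nonlinearity can be illustrated with a toy static model: if the stimulus-to-current transformation were a known saturating curve, the compensating stimulus is just its inverse applied to the desired response. The sketch below uses a hypothetical Naka-Rushton-style saturation with made-up parameters; the tool described in the abstract inverts full dynamical models of the rod and cone phototransduction cascade, not a static curve.

```python
import numpy as np

def forward_nonlinearity(stim, r_max=1.0, half_sat=50.0):
    """Toy saturating stimulus-to-response curve (Naka-Rushton form)."""
    stim = np.asarray(stim, dtype=float)
    return r_max * stim / (stim + half_sat)

def compensating_stimulus(desired_resp, r_max=1.0, half_sat=50.0):
    """Invert the toy nonlinearity: the stimulus that yields the desired response."""
    desired_resp = np.asarray(desired_resp, dtype=float)
    return half_sat * desired_resp / (r_max - desired_resp)

# Design a stimulus whose (toy) photoreceptor response is a clean 4 Hz sinusoid.
t = np.linspace(0.0, 1.0, 200)
target_response = 0.4 + 0.2 * np.sin(2 * np.pi * 4 * t)
stim = compensating_stimulus(target_response)
assert np.allclose(forward_nonlinearity(stim), target_response)
```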
Affiliation(s)
- Qiang Chen: Department of Physiology and Biophysics, University of Washington, Seattle, United States
- Norianne T Ingram: Department of Physiology and Biophysics, University of Washington, Seattle, United States
- Jacob Baudin: Department of Physiology and Biophysics, University of Washington, Seattle, United States
- Juan M Angueyra: Department of Physiology and Biophysics, University of Washington, Seattle, United States
- Raunak Sinha: Department of Physiology and Biophysics, University of Washington, Seattle, United States
- Fred Rieke: Department of Physiology and Biophysics, University of Washington, Seattle, United States
7. Saha A, Bucci T, Baudin J, Sinha R. Regional tuning of photoreceptor adaptation in the primate retina. Nat Commun 2024;15:8821. PMID: 39394185; PMCID: PMC11470117; DOI: 10.1038/s41467-024-53061-3
Abstract
Adaptation in cone photoreceptors allows our visual system to effectively operate over an enormous range of light intensities. However, little is known about the properties of cone adaptation in the specialized region of the primate central retina called the fovea, which is densely packed with cones and mediates high-acuity central vision. Here we show that macaque foveal cones exhibit weaker and slower luminance adaptation compared to cones in the peripheral retina. We find that this difference in adaptive properties between foveal and peripheral cones is due to differences in the magnitude of a hyperpolarization-activated current, Ih. This Ih current regulates the strength and time course of luminance adaptation in peripheral cones where it is more prominent than in foveal cones. A weaker and slower adaptation in foveal cones helps maintain a higher sensitivity for a longer duration which may be well-suited for maximizing the collection of high-acuity information at the fovea during gaze fixation between rapid eye movements.
Affiliation(s)
- Aindrila Saha: Department of Neuroscience, University of Wisconsin, Madison, WI, USA; McPherson Eye Research Institute, University of Wisconsin, Madison, WI, USA
- Theodore Bucci: Department of Neuroscience, University of Wisconsin, Madison, WI, USA; McPherson Eye Research Institute, University of Wisconsin, Madison, WI, USA
- Jacob Baudin: Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA
- Raunak Sinha: Department of Neuroscience, University of Wisconsin, Madison, WI, USA; McPherson Eye Research Institute, University of Wisconsin, Madison, WI, USA; Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, WI, USA
8. Gür B, Ramirez L, Cornean J, Thurn F, Molina-Obando S, Ramos-Traslosheros G, Silies M. Neural pathways and computations that achieve stable contrast processing tuned to natural scenes. Nat Commun 2024;15:8580. PMID: 39362859; PMCID: PMC11450186; DOI: 10.1038/s41467-024-52724-5
Abstract
Natural scenes are highly dynamic, challenging the reliability of visual processing. Yet, humans and many animals perform accurate visual behaviors, whereas computer vision devices struggle with rapidly changing background luminance. How does animal vision achieve this? Here, we reveal the algorithms and mechanisms of rapid luminance gain control in Drosophila, resulting in stable visual processing. We identify specific transmedullary neurons as the site of luminance gain control, which pass this property to direction-selective cells. The circuitry further involves wide-field neurons, matching computational predictions that local spatial pooling drives optimal contrast processing in natural scenes when light conditions change rapidly. Experiments and theory argue that a spatially pooled luminance signal achieves luminance gain control via divisive normalization. This process relies on shunting inhibition using the glutamate-gated chloride channel GluClα. Our work describes how the fly robustly processes visual information in dynamically changing natural scenes, a common challenge of all visual systems.
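Divisive normalization by a spatially pooled luminance signal is simple to state computationally. The sketch below is a generic, hypothetical version (a uniform spatial pool of arbitrary size applied to a static image), not the fly-circuit model from the paper, but it shows why the operation makes the contrast code nearly invariant to sudden changes in background luminance.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def luminance_gain_control(image, pool_size=15, epsilon=1e-6):
    """Contrast signal divisively normalized by a spatially pooled luminance estimate."""
    image = np.asarray(image, dtype=float)
    pooled_luminance = uniform_filter(image, size=pool_size)   # wide-field spatial pool
    return (image - pooled_luminance) / (pooled_luminance + epsilon)

# The same scene under 10x the illumination yields (nearly) the same contrast code.
rng = np.random.default_rng(1)
scene = rng.uniform(0.2, 1.0, (64, 64))
assert np.allclose(luminance_gain_control(scene),
                   luminance_gain_control(10.0 * scene), atol=1e-5)
```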
Affiliation(s)
- Burak Gür: Institute of Developmental Biology and Neurobiology, Johannes-Gutenberg University Mainz, Mainz, Germany; The Friedrich Miescher Institute for Biomedical Research (FMI), Basel, Switzerland
- Luisa Ramirez: Institute of Developmental Biology and Neurobiology, Johannes-Gutenberg University Mainz, Mainz, Germany
- Jacqueline Cornean: Institute of Developmental Biology and Neurobiology, Johannes-Gutenberg University Mainz, Mainz, Germany
- Freya Thurn: Institute of Developmental Biology and Neurobiology, Johannes-Gutenberg University Mainz, Mainz, Germany
- Sebastian Molina-Obando: Institute of Developmental Biology and Neurobiology, Johannes-Gutenberg University Mainz, Mainz, Germany
- Giordano Ramos-Traslosheros: Institute of Developmental Biology and Neurobiology, Johannes-Gutenberg University Mainz, Mainz, Germany; Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Marion Silies: Institute of Developmental Biology and Neurobiology, Johannes-Gutenberg University Mainz, Mainz, Germany
9. Brooks CJ, Fielding J, White OB, Badcock DR, McKendrick AM. Exploring the Phenotype and Possible Mechanisms of Palinopsia in Visual Snow Syndrome. Invest Ophthalmol Vis Sci 2024;65:23. PMID: 39412817; PMCID: PMC11488523; DOI: 10.1167/iovs.65.12.23
Abstract
Purpose: Palinopsia (persistent afterimages and/or trailing) is a common but poorly understood symptom of the neurological condition visual snow syndrome. This study aimed to collect a phenotypical description of palinopsia in visual snow syndrome and probe for abnormalities in temporal visual processing, hypothesizing that palinopsia could arise from increased visibility of normal afterimage signals or prolonged visible persistence.
Methods: Thirty controls and 31 participants with visual snow syndrome (18 with migraine) took part. Participants completed a palinopsia symptom questionnaire. Contrast detection thresholds were measured before and after brief exposure to a spatial grating because deficient contrast adaptation could increase afterimage visibility. Temporal integration and segregation were assessed using missing-element and odd-element tasks, respectively, because prolonged persistence would promote integration at wide temporal offsets. To distinguish the effects of visual snow syndrome from comorbid migraine, 25 people with migraine alone participated in an additional experiment.
Results: Palinopsia was common in visual snow syndrome, typically presenting as unformed images that were frequently noticed. Contrary to our hypotheses, we found neither reduced contrast adaptation (F(3.22, 190.21) = 0.71, P = 0.56) nor significantly prolonged temporal integration thresholds (F(1, 59) = 2.35, P = 0.13) in visual snow syndrome. Instead, participants with visual snow syndrome could segregate stimuli in closer succession than controls (F(1, 59) = 4.62, P = 0.04, ηp² = 0.073) regardless of co-occurring migraine (F(2, 53) = 1.22, P = 0.30). In contrast, individuals with migraine alone exhibited impaired integration (F(2, 53) = 4.44, P = 0.017, ηp² = 0.14).
Conclusions: Although neither deficient contrast adaptation nor prolonged visible persistence explains palinopsia, temporal resolution of spatial cues is enhanced and potentially more flexible in visual snow syndrome.
Affiliation(s)
- Cassandra J. Brooks: Department of Optometry and Vision Sciences, University of Melbourne, Parkville, Australia
- Joanne Fielding: Department of Neurosciences, Central Clinical School, Monash University, Melbourne, Australia
- Owen B. White: Department of Neurosciences, Central Clinical School, Monash University, Melbourne, Australia
- David R. Badcock: School of Psychological Science, The University of Western Australia, Crawley, Australia
- Allison M. McKendrick: Department of Optometry and Vision Sciences, University of Melbourne, Parkville, Australia; Lions Eye Institute, Nedlands, Australia; School of Allied Health, The University of Western Australia, Crawley, Australia
10. Chen Q, Ingram NT, Baudin J, Angueyra JM, Sinha R, Rieke F. Predictably manipulating photoreceptor light responses to reveal their role in downstream visual responses. bioRxiv [Preprint] 2024:2023.10.20.563304. PMID: 37961603; PMCID: PMC10634684; DOI: 10.1101/2023.10.20.563304
Abstract
Computation in neural circuits relies on the judicious use of nonlinear circuit components. In many cases, multiple nonlinear components work collectively to control circuit outputs. Separating the contributions of these different components is difficult, and this limits our understanding of the mechanistic basis of many important computations. Here, we introduce a tool that permits the design of light stimuli that predictably alter rod and cone phototransduction currents, including stimuli that compensate for nonlinear properties such as light adaptation. This tool, based on well-established models for the rod and cone phototransduction cascade, permits the separation of nonlinearities in phototransduction from those in downstream circuits. This will allow, for example, direct tests of how adaptation in rod and cone phototransduction affects downstream visual signals and perception.
Affiliation(s)
- Qiang Chen: Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195
- Norianne T. Ingram: Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195
- Jacob Baudin: Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195
- Fred Rieke: Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195
11. Brook L, Kreichman O, Masarwa S, Gilaie-Dotan S. Higher-contrast images are better remembered during naturalistic encoding. Sci Rep 2024;14:13445. PMID: 38862623; PMCID: PMC11166978; DOI: 10.1038/s41598-024-63953-5
Abstract
It is unclear whether memory for images of poorer visibility (such as low contrast or small size) will be lower due to weak signals elicited in early visual processing stages, or perhaps better since their processing may entail top-down processes (such as effort and attention) associated with deeper encoding. We have recently shown that during naturalistic encoding (free viewing without task-related modulations), for image sizes between 3° and 24°, bigger images stimulating more visual system processing resources at early processing stages are better remembered. Similar to size, higher contrast leads to higher activity in early visual processing. Therefore, here we hypothesized that during naturalistic encoding, at critical visibility ranges, higher contrast images will lead to a higher signal-to-noise ratio and better signal quality flowing downstream and will thus be better remembered. Indeed, we found that during naturalistic encoding, higher-contrast images were remembered better than lower-contrast ones (~ 15% higher accuracy, ~ 1.58 times better) for images in the 7.5-60 RMS contrast range. Although image contrast and size modulate early visual processing very differently, our results further substantiate that at poor visibility ranges, during naturalistic non-instructed visual behavior, physical image dimensions (contributing to image visibility) impact image memory.
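For readers unfamiliar with the contrast measure, the sketch below shows one common definition of RMS contrast (the standard deviation of pixel intensities) together with a helper that rescales an image to a requested RMS contrast; the intensity convention and exact normalization used in the paper may differ.

```python
import numpy as np

def rms_contrast(image):
    """RMS contrast as the standard deviation of pixel intensities (one common convention)."""
    return np.asarray(image, dtype=float).std()

def scale_to_rms_contrast(image, target_rms):
    """Rescale an image about its mean so that it has the requested RMS contrast."""
    image = np.asarray(image, dtype=float)
    mean = image.mean()
    return mean + (image - mean) * (target_rms / image.std())

rng = np.random.default_rng(2)
img = rng.uniform(0, 255, (128, 128))
low = scale_to_rms_contrast(img, 7.5)    # low end of the range studied in the paper
high = scale_to_rms_contrast(img, 60.0)  # high end of that range
print(rms_contrast(low), rms_contrast(high))
```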
Affiliation(s)
- Limor Brook: School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan, Israel; The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Olga Kreichman: School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan, Israel; The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Shaimaa Masarwa: School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan, Israel; The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Sharon Gilaie-Dotan: School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan, Israel; The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel; UCL Institute of Cognitive Neuroscience, London, UK
12. Chen SCY, Chen Y, Geisler WS, Seidemann E. Neural correlates of perceptual similarity masking in primate V1. eLife 2024;12:RP89570. PMID: 38592269; PMCID: PMC11003749; DOI: 10.7554/elife.89570
Abstract
Visual detection is a fundamental natural task. Detection becomes more challenging as the similarity between the target and the background in which it is embedded increases, a phenomenon termed 'similarity masking'. To test the hypothesis that V1 contributes to similarity masking, we used voltage sensitive dye imaging (VSDI) to measure V1 population responses while macaque monkeys performed a detection task under varying levels of target-background similarity. Paradoxically, we find that during an initial transient phase, V1 responses to the target are enhanced, rather than suppressed, by target-background similarity. This effect reverses in the second phase of the response, so that in this phase V1 signals are positively correlated with the behavioral effect of similarity. Finally, we show that a simple model with delayed divisive normalization can qualitatively account for our findings. Overall, our results support the hypothesis that a nonlinear gain control mechanism in V1 contributes to perceptual similarity masking.
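The delayed divisive normalization model named in the abstract can be made concrete with a toy simulation. The sketch below uses an arbitrary leaky-integrator pool and made-up parameters, purely to show the qualitative point: before the delayed pool builds up, the response simply scales with the feedforward drive, whereas the late response is compressed by normalization.

```python
import numpy as np

def delayed_divisive_normalization(drive, delay_steps=5, tau=10.0, sigma=0.1):
    """Divide a feedforward drive by a delayed, low-pass-filtered copy of itself."""
    drive = np.asarray(drive, dtype=float)
    pool = np.zeros_like(drive)
    for t in range(1, len(drive)):
        delayed_input = drive[max(t - delay_steps, 0)]
        pool[t] = pool[t - 1] + (delayed_input - pool[t - 1]) / tau   # leaky integration
    return drive / (sigma + pool)

# Step inputs of two strengths (a stand-in for low vs high target-background drive):
# the early response roughly doubles with the drive, but the late, normalized
# response increases far less, illustrating the two response phases.
t = np.arange(100)
weak = np.where(t > 10, 1.0, 0.0)
strong = np.where(t > 10, 2.0, 0.0)
r_weak = delayed_divisive_normalization(weak)
r_strong = delayed_divisive_normalization(strong)
print(r_strong[15] / r_weak[15], r_strong[-1] / r_weak[-1])
```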
Affiliation(s)
- Spencer Chin-Yu Chen: Center for Perceptual Systems, University of Texas at Austin, Austin, United States; Department of Psychology, University of Texas at Austin, Austin, United States; Center for Theoretical and Computational Neuroscience, Austin, United States; Department of Neuroscience, University of Texas at Austin, Austin, United States; Department of Neurosurgery, Rutgers University, New Brunswick, United States
- Yuzhi Chen: Center for Perceptual Systems, University of Texas at Austin, Austin, United States; Department of Psychology, University of Texas at Austin, Austin, United States; Center for Theoretical and Computational Neuroscience, Austin, United States; Department of Neuroscience, University of Texas at Austin, Austin, United States
- Wilson S Geisler: Center for Perceptual Systems, University of Texas at Austin, Austin, United States; Department of Psychology, University of Texas at Austin, Austin, United States; Center for Theoretical and Computational Neuroscience, Austin, United States
- Eyal Seidemann: Center for Perceptual Systems, University of Texas at Austin, Austin, United States; Department of Psychology, University of Texas at Austin, Austin, United States; Center for Theoretical and Computational Neuroscience, Austin, United States; Department of Neuroscience, University of Texas at Austin, Austin, United States
13. A-Izzeddin EJ, Mattingley JB, Harrison WJ. The influence of natural image statistics on upright orientation judgements. Cognition 2024;242:105631. PMID: 37820487; DOI: 10.1016/j.cognition.2023.105631
Abstract
Humans have well-documented priors for many features present in nature that guide visual perception. Despite being putatively grounded in the statistical regularities of the environment, scene priors are frequently violated due to the inherent variability of visual features from one scene to the next. However, these repeated violations do not appreciably challenge visuo-cognitive function, necessitating the broad use of priors in conjunction with context-specific information. We investigated the trade-off between participants' internal expectations formed from both longer-term priors and those formed from immediate contextual information using a perceptual inference task and naturalistic stimuli. Notably, our task required participants to make perceptual inferences about naturalistic images using their own internal criteria, rather than making comparative judgements. Nonetheless, we show that observers' performance is well approximated by a model that makes inferences using a prior for low-level image statistics, aggregated over many images. We further show that the dependence on this prior is rapidly re-weighted against contextual information, even when misleading. Our results therefore provide insight into how apparent high-level interpretations of scene appearances follow from the most basic of perceptual processes, which are grounded in the statistics of natural images.
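In its simplest Gaussian form, re-weighting a long-term prior against immediate contextual evidence is a precision-weighted average. The snippet below is that textbook computation with hypothetical numbers, not the authors' image-statistics model, but it captures how a reliable context can rapidly dominate the prior.

```python
def combine_prior_and_context(prior_mean, prior_var, context_mean, context_var):
    """Posterior mean and variance for a Gaussian prior combined with Gaussian evidence."""
    w_prior, w_context = 1.0 / prior_var, 1.0 / context_var
    post_var = 1.0 / (w_prior + w_context)
    post_mean = post_var * (w_prior * prior_mean + w_context * context_mean)
    return post_mean, post_var

# A long-term prior centered on 0 deg (upright) against a misleading but reliable
# context suggesting 30 deg: the estimate is pulled most of the way to the context.
print(combine_prior_and_context(0.0, 20.0, 30.0, 5.0))   # (24.0, 4.0)
```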
Affiliation(s)
- Emily J A-Izzeddin: Queensland Brain Institute, Building 79, University of Queensland, St Lucia, QLD 4072, Australia
- Jason B Mattingley: Queensland Brain Institute, Building 79, University of Queensland, St Lucia, QLD 4072, Australia; School of Psychology, Building 24A, University of Queensland, St Lucia, QLD 4072, Australia
- William J Harrison: Queensland Brain Institute, Building 79, University of Queensland, St Lucia, QLD 4072, Australia; School of Psychology, Building 24A, University of Queensland, St Lucia, QLD 4072, Australia
14. Chen SC, Chen Y, Geisler WS, Seidemann E. Neural correlates of perceptual similarity masking in primate V1. bioRxiv [Preprint] 2023:2023.07.06.547970. PMID: 37503133; PMCID: PMC10369882; DOI: 10.1101/2023.07.06.547970
Abstract
Visual detection is a fundamental natural task. Detection becomes more challenging as the similarity between the target and the background in which it is embedded increases, a phenomenon termed "similarity masking". To test the hypothesis that V1 contributes to similarity masking, we used voltage sensitive dye imaging (VSDI) to measure V1 population responses while macaque monkeys performed a detection task under varying levels of target-background similarity. Paradoxically, we find that during an initial transient phase, V1 responses to the target are enhanced, rather than suppressed, by target-background similarity. This effect reverses in the second phase of the response, so that in this phase V1 signals are positively correlated with the behavioral effect of similarity. Finally, we show that a simple model with delayed divisive normalization can qualitatively account for our findings. Overall, our results support the hypothesis that a nonlinear gain control mechanism in V1 contributes to perceptual similarity masking.
Affiliation(s)
- Spencer C Chen: Center for Perceptual Systems, University of Texas at Austin; Center for Theoretical and Computational Neuroscience, University of Texas at Austin; Department of Psychology, University of Texas at Austin; Department of Neuroscience, University of Texas at Austin; Department of Neurosurgery, Rutgers University
- Yuzhi Chen: Center for Perceptual Systems, University of Texas at Austin; Center for Theoretical and Computational Neuroscience, University of Texas at Austin; Department of Psychology, University of Texas at Austin; Department of Neuroscience, University of Texas at Austin
- Wilson S Geisler: Center for Perceptual Systems, University of Texas at Austin; Center for Theoretical and Computational Neuroscience, University of Texas at Austin; Department of Psychology, University of Texas at Austin
- Eyal Seidemann: Center for Perceptual Systems, University of Texas at Austin; Center for Theoretical and Computational Neuroscience, University of Texas at Austin; Department of Psychology, University of Texas at Austin; Department of Neuroscience, University of Texas at Austin
15. Oluk C, Geisler WS. Effects of target-amplitude and background-contrast uncertainty predicted by a normalized template-matching observer. J Vis 2023;23:8. PMID: 37878319; PMCID: PMC10619701; DOI: 10.1167/jov.23.12.8
Abstract
When detecting targets under natural conditions, the visual system almost always faces multiple, simultaneous dimensions of extrinsic uncertainty. This study focused on the simultaneous uncertainty about target amplitude and background contrast. These dimensions have a large effect on detection and vary greatly in natural scenes. We measured human performance for detecting a sine-wave target in white noise and natural-scene backgrounds for two levels of prior probability of the target being present. We derived and tested the ideal observer for white-noise backgrounds (a special case of a template-matching observer that dynamically moves its criterion with the background contrast; the DTM observer), along with two simpler models with a fixed criterion: the template-matching (TM) observer and the normalized template-matching (NTM) observer, which normalizes the template response by background contrast. Simulations show that, when the target prior is low, the performance of the NTM observer is near optimal and the TM observer is near chance, suggesting that manipulating the target prior is valuable for distinguishing among models. Surprisingly, we found that the NTM and DTM observers better explain human performance than the TM observer for both target priors in both background types. We argue that the visual system most likely exploits contrast normalization, rather than dynamic criterion adjustment, to deal with simultaneous background contrast and target amplitude uncertainty. Finally, our findings show that the data collected under high levels of uncertainty have a rich structure capable of discriminating between models, providing an alternative approach for studying high dimensions of uncertainty.
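The difference between the TM and NTM observers amounts to one division. The sketch below, using a toy one-dimensional 'sine-wave' template and made-up contrast levels, shows why normalizing the template response by an estimate of background contrast lets a single fixed criterion behave sensibly when background contrast varies from trial to trial.

```python
import numpy as np

def tm_and_ntm_responses(stimulus, template):
    """Raw (TM) and contrast-normalized (NTM) template responses for one stimulus."""
    stimulus = np.asarray(stimulus, dtype=float)
    template = np.asarray(template, dtype=float)
    tm = float(np.dot(stimulus.ravel(), template.ravel()))
    ntm = tm / (stimulus.std() + 1e-9)   # divide by estimated background contrast
    return tm, ntm

# Target-absent trials at two background contrasts: the spread of TM responses
# grows with contrast (so a fixed criterion misbehaves), while the spread of NTM
# responses stays roughly constant across contrast levels.
rng = np.random.default_rng(3)
template = np.sin(np.linspace(0.0, 4.0 * np.pi, 64))
for contrast in (0.2, 1.0):
    responses = np.array([tm_and_ntm_responses(rng.normal(0.0, contrast, 64), template)
                          for _ in range(2000)])
    print(f"contrast {contrast}: TM sd = {responses[:, 0].std():.2f}, "
          f"NTM sd = {responses[:, 1].std():.2f}")
```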
Affiliation(s)
- Can Oluk: Center for Perceptual Systems and Department of Psychology, University of Texas at Austin, Austin, TX, USA; Laboratoire des systèmes perceptifs, Département d'études cognitives, École normale supérieure, Université PSL, Paris, France
- Wilson S Geisler: Center for Perceptual Systems and Department of Psychology, University of Texas at Austin, Austin, TX, USA
16. Mano O, Choi M, Tanaka R, Creamer MS, Matos NCB, Shomar JW, Badwan BA, Clandinin TR, Clark DA. Long-timescale anti-directional rotation in Drosophila optomotor behavior. eLife 2023;12:e86076. PMID: 37751469; PMCID: PMC10522332; DOI: 10.7554/elife.86076
Abstract
Locomotor movements cause visual images to be displaced across the eye, a retinal slip that is counteracted by stabilizing reflexes in many animals. In insects, optomotor turning causes the animal to turn in the direction of rotating visual stimuli, thereby reducing retinal slip and stabilizing trajectories through the world. This behavior has formed the basis for extensive dissections of motion vision. Here, we report that under certain stimulus conditions, two Drosophila species, including the widely studied Drosophila melanogaster, can suppress and even reverse the optomotor turning response over several seconds. Such 'anti-directional turning' is most strongly evoked by long-lasting, high-contrast, slow-moving visual stimuli that are distinct from those that promote syn-directional optomotor turning. Anti-directional turning, like the syn-directional optomotor response, requires the local motion detecting neurons T4 and T5. A subset of lobula plate tangential cells, CH cells, show involvement in these responses. Imaging from a variety of direction-selective cells in the lobula plate shows no evidence of dynamics that match the behavior, suggesting that the observed inversion in turning direction emerges downstream of the lobula plate. Further, anti-directional turning declines with age and exposure to light. These results show that Drosophila optomotor turning behaviors contain rich, stimulus-dependent dynamics that are inconsistent with simple reflexive stabilization responses.
Affiliation(s)
- Omer Mano: Department of Molecular, Cellular, and Developmental Biology, Yale University, New Haven, United States
- Minseung Choi: Department of Neurobiology, Stanford University, Stanford, United States
- Ryosuke Tanaka: Interdepartmental Neuroscience Program, Yale University, New Haven, United States
- Matthew S Creamer: Interdepartmental Neuroscience Program, Yale University, New Haven, United States
- Natalia CB Matos: Interdepartmental Neuroscience Program, Yale University, New Haven, United States
- Joseph W Shomar: Department of Physics, Yale University, New Haven, United States
- Bara A Badwan: Department of Chemical Engineering, Yale University, New Haven, United States
- Damon A Clark: Department of Molecular, Cellular, and Developmental Biology, Yale University, New Haven, United States; Interdepartmental Neuroscience Program, Yale University, New Haven, United States; Department of Physics, Yale University, New Haven, United States; Department of Neuroscience, Yale University, New Haven, United States
17. Yin Z, Kaiser MAA, Camara LO, Camarena M, Parsa M, Jacob A, Schwartz G, Jaiswal A. IRIS: Integrated Retinal Functionality in Image Sensors. Front Neurosci 2023;17:1241691. PMID: 37719155; PMCID: PMC10502419; DOI: 10.3389/fnins.2023.1241691
Abstract
Neuromorphic image sensors draw inspiration from the biological retina to implement visual computations in electronic hardware. Gain control in phototransduction and temporal differentiation at the first retinal synapse inspired the first generation of neuromorphic sensors, but processing in downstream retinal circuits, much of which has been discovered in the past decade, has not been implemented in image sensor technology. We present a technology-circuit co-design solution that implements two motion computations, object motion sensitivity and looming detection, at the retina's output, which could have wide applications for vision-based decision-making in dynamic environments. Our simulations on the Globalfoundries 22 nm technology node show that the proposed retina-inspired circuits can be fabricated on image sensing platforms in existing semiconductor foundries by taking advantage of recent advances in semiconductor chip stacking technology. Integrated Retinal Functionality in Image Sensors (IRIS) technology could drive advances in machine vision applications that demand energy-efficient and low-bandwidth real-time decision-making.
Affiliation(s)
- Zihan Yin: Information Sciences Institute, University of Southern California, Los Angeles, CA, United States
- Md Abdullah-Al Kaiser: Information Sciences Institute, University of Southern California, Los Angeles, CA, United States
- Mark Camarena: Information Sciences Institute, University of Southern California, Los Angeles, CA, United States
- Maryam Parsa: Electrical and Computer Engineering, George Mason University, Fairfax, VA, United States
- Ajey Jacob: Information Sciences Institute, University of Southern California, Los Angeles, CA, United States
- Gregory Schwartz: Department of Ophthalmology, Northwestern University, Evanston, IL, United States
- Akhilesh Jaiswal: Information Sciences Institute, University of Southern California, Los Angeles, CA, United States
18. Cai LT, Krishna VS, Hladnik TC, Guilbeault NC, Vijayakumar C, Arunachalam M, Juntti SA, Arrenberg AB, Thiele TR, Cooper EA. Spatiotemporal visual statistics of aquatic environments in the natural habitats of zebrafish. Sci Rep 2023;13:12028. PMID: 37491571; PMCID: PMC10368656; DOI: 10.1038/s41598-023-36099-z
Abstract
Animal sensory systems are tightly adapted to the demands of their environment. In the visual domain, research has shown that many species have circuits and systems that exploit statistical regularities in natural visual signals. The zebrafish is a popular model animal in visual neuroscience, but relatively little quantitative data is available about the visual properties of the aquatic habitats where zebrafish reside, as compared to terrestrial environments. Improving our understanding of the visual demands of the aquatic habitats of zebrafish can enhance the insights about sensory neuroscience yielded by this model system. We analyzed a video dataset of zebrafish habitats captured by a stationary camera and compared this dataset to videos of terrestrial scenes in the same geographic area. Our analysis of the spatiotemporal structure in these videos suggests that zebrafish habitats are characterized by low visual contrast and strong motion when compared to terrestrial environments. Similar to terrestrial environments, zebrafish habitats tended to be dominated by dark contrasts, particularly in the lower visual field. We discuss how these properties of the visual environment can inform the study of zebrafish visual behavior and neural processing and, by extension, can inform our understanding of the vertebrate brain.
Affiliation(s)
- Lanya T Cai: Herbert Wertheim School of Optometry & Vision Science, University of California, Berkeley, CA, USA
- Venkatesh S Krishna: Department of Biological Sciences, University of Toronto, Scarborough, ON, Canada
- Tim C Hladnik: Werner Reichardt Centre for Integrative Neuroscience, Institute of Neurobiology, University of Tübingen, Tübingen, Germany; Graduate Training Centre for Neuroscience, University of Tübingen, Tübingen, Germany
- Nicholas C Guilbeault: Department of Biological Sciences, University of Toronto, Scarborough, ON, Canada; Department of Cell and Systems Biology, University of Toronto, Toronto, Canada
- Chinnian Vijayakumar: Department of Zoology, St. Andrew's College, Gorakhpur, Uttar Pradesh, India
- Muthukumarasamy Arunachalam: Department of Zoology, School of Biological Sciences, Central University of Kerala, Kasaragod, Kerala, India; Centre for Inland Fishes and Conservation, St. Andrew's College, Gorakhpur, Uttar Pradesh, India
- Scott A Juntti: Department of Biology, University of Maryland, College Park, MD, USA
- Aristides B Arrenberg: Werner Reichardt Centre for Integrative Neuroscience, Institute of Neurobiology, University of Tübingen, Tübingen, Germany
- Tod R Thiele: Department of Biological Sciences, University of Toronto, Scarborough, ON, Canada; Department of Cell and Systems Biology, University of Toronto, Toronto, Canada
- Emily A Cooper: Herbert Wertheim School of Optometry & Vision Science, University of California, Berkeley, CA, USA; Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA
19. Theobald J. Insect vision: Contrast perception under fluctuating light. Curr Biol 2023;33:R710-R712. PMID: 37433269; DOI: 10.1016/j.cub.2023.05.023
Abstract
Natural light levels vary tremendously, both over the day and from minute to minute, creating a formidable challenge for animals that rely on vision to survive. New work in fruit flies demonstrates the neural mechanisms that produce luminance-invariant perceptions of visual contrast.
Affiliation(s)
- Jamie Theobald: Florida International University, Department of Biological Sciences, Miami, FL 33199, USA
20. Chaib S, Lind O, Kelber A. Fast visual adaptation to dim light in a cavity-nesting bird. Proc Biol Sci 2023;290:20230596. PMID: 37161333; PMCID: PMC10170191; DOI: 10.1098/rspb.2023.0596
Abstract
Many birds move fast into dark nest cavities, forcing the visual system to adapt to low light intensities. Their visual system takes between 15 and 60 min for complete dark adaptation, but little is known about the visual performance of birds during the first seconds in low light intensities. In a forced two-choice behavioural experiment, we studied how well budgerigars can discriminate stimuli of different luminance directly after entering a darker environment. The birds made their choices within about 1 s and did not wait to adapt their visual system to the low light intensities. When moving from a bright facility into an environment with 0.5 log unit lower illuminance, the budgerigars detected targets with a luminance of 0.825 cd m⁻² on a black background. When moving into an environment with 1.7 or 3.5 log units lower illuminance, they detected targets with luminances between 0.106 and 0.136 cd m⁻². In tests with two simultaneously displayed targets, the birds discriminated similar luminance differences between the targets (Weber fraction of 0.41-0.54) in all light levels. Our results support the notion that partial adaptation of bird eyes to the lower illumination occurring within 1 s allows them to safely detect and feed their chicks.
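The Weber fraction reported here is a simple ratio of the luminance difference to the reference luminance. The snippet below computes it for a hypothetical pair of target luminances chosen only so that the result falls in the reported 0.41-0.54 range.

```python
def weber_fraction(delta_luminance, reference_luminance):
    """Weber fraction: discriminable luminance difference over the reference luminance."""
    return delta_luminance / reference_luminance

# Hypothetical pair of simultaneously displayed targets (cd/m^2).
l_dim, l_bright = 0.10, 0.15
print(weber_fraction(l_bright - l_dim, l_dim))   # 0.5
```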
Affiliation(s)
- Sandra Chaib: Lund Vision Group, Department of Biology, Lund University, 223 62 Lund, Sweden
- Olle Lind: Lund Vision Group, Department of Biology, Lund University, 223 62 Lund, Sweden
- Almut Kelber: Lund Vision Group, Department of Biology, Lund University, 223 62 Lund, Sweden
21. Mano O, Choi M, Tanaka R, Creamer MS, Matos NC, Shomar J, Badwan BA, Clandinin TR, Clark DA. Long timescale anti-directional rotation in Drosophila optomotor behavior. bioRxiv [Preprint] 2023:2023.01.06.523055. PMID: 36711627; PMCID: PMC9882005; DOI: 10.1101/2023.01.06.523055
Abstract
Locomotor movements cause visual images to be displaced across the eye, a retinal slip that is counteracted by stabilizing reflexes in many animals. In insects, optomotor turning causes the animal to turn in the direction of rotating visual stimuli, thereby reducing retinal slip and stabilizing trajectories through the world. This behavior has formed the basis for extensive dissections of motion vision. Here, we report that under certain stimulus conditions, two Drosophila species, including the widely studied D. melanogaster, can suppress and even reverse the optomotor turning response over several seconds. Such "anti-directional turning" is most strongly evoked by long-lasting, high-contrast, slow-moving visual stimuli that are distinct from those that promote syn-directional optomotor turning. Anti-directional turning, like the syn-directional optomotor response, requires the local motion detecting neurons T4 and T5. A subset of lobula plate tangential cells, CH cells, show involvement in these responses. Imaging from a variety of direction-selective cells in the lobula plate shows no evidence of dynamics that match the behavior, suggesting that the observed inversion in turning direction emerges downstream of the lobula plate. Further, anti-directional turning declines with age and exposure to light. These results show that Drosophila optomotor turning behaviors contain rich, stimulus-dependent dynamics that are inconsistent with simple reflexive stabilization responses.
Affiliation(s)
- Omer Mano: Department of Molecular, Cellular, and Developmental Biology, Yale University, New Haven, CT 06511, USA
- Minseung Choi: Department of Neurobiology, Stanford University, Stanford, CA 94305, USA
- Ryosuke Tanaka: Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA
- Matthew S. Creamer: Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA
- Natalia C.B. Matos: Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA
- Joseph Shomar: Department of Physics, Yale University, New Haven, CT 06511, USA
- Bara A. Badwan: Department of Chemical Engineering, Yale University, New Haven, CT 06511, USA
- Damon A. Clark: Department of Molecular, Cellular, and Developmental Biology, Yale University, New Haven, CT 06511, USA; Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA; Department of Physics, Yale University, New Haven, CT 06511, USA; Department of Neuroscience, Yale University, New Haven, CT 06511, USA
22. Zapata-Valencia SI, Tobon-Maya H, Garcia-Sucerquia J. Image enhancement and field of view enlargement in digital lensless holographic microscopy by multi-shot imaging. J Opt Soc Am A Opt Image Sci Vis 2023;40:C150-C156. PMID: 37132985; DOI: 10.1364/josaa.482496
Abstract
A method is presented to improve the quality of reconstructed images while enlarging the field of view (FOV) in digital lensless holographic microscopy (DLHM). Multiple DLHM holograms are recorded with a stationary sample placed at different positions within its plane. The different sample positions must produce a set of DLHM holograms that each share an overlapping area with a fixed reference DLHM hologram. The relative displacement among the multiple DLHM holograms is computed by means of a normalized cross-correlation. The computed displacement is then used to produce a new DLHM hologram from the coordinated, displacement-compensated addition of the multi-shot DLHM holograms. The composed DLHM hologram carries enhanced information about the sample in a larger format, leading to a reconstructed image with improved quality and a larger FOV. The feasibility of the method is illustrated and validated with results obtained from imaging a calibration test target and a biological specimen.
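The registration-and-addition pipeline can be sketched with FFT-based cross-correlation. The code below is a simplified stand-in for the paper's method (integer-pixel shifts, circular boundary handling, and mean-subtracted rather than fully normalized cross-correlation), meant only to show how the displacement between holograms is estimated and compensated before the coordinated addition.

```python
import numpy as np

def estimate_shift(reference, moving):
    """Return the (row, col) shift that aligns `moving` onto `reference`,
    taken from the peak of their FFT-based cross-correlation after mean removal."""
    ref = reference - reference.mean()
    mov = moving - moving.mean()
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

def compose_holograms(holograms, reference_index=0):
    """Shift-compensate a set of holograms against one fixed reference and add them."""
    reference = holograms[reference_index]
    composite = np.zeros_like(reference, dtype=float)
    for holo in holograms:
        dy, dx = estimate_shift(reference, holo)
        composite += np.roll(holo, shift=(dy, dx), axis=(0, 1))
    return composite

# Quick check with a synthetic frame and a known displacement.
rng = np.random.default_rng(4)
h0 = rng.random((128, 128))
h1 = np.roll(h0, shift=(7, -12), axis=(0, 1))
print(estimate_shift(h0, h1))   # (-7, 12): the roll that maps h1 back onto h0
```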
23. Raghavan RT, Kelly JG, Hasse JM, Levy PG, Hawken MJ, Movshon JA. Contrast and Luminance Gain Control in the Macaque's Lateral Geniculate Nucleus. eNeuro 2023;10:ENEURO.0515-22.2023. PMID: 36858825; PMCID: PMC10035770; DOI: 10.1523/eneuro.0515-22.2023
Abstract
There is substantial variation in the mean and variance of light levels (luminance and contrast) in natural visual scenes. Retinal ganglion cells maintain their sensitivity despite this variation using two adaptive mechanisms, which control how responses depend on luminance and on contrast. However, the nature of each mechanism and their interactions downstream of the retina are unknown. We recorded neurons in the magnocellular and parvocellular layers of the lateral geniculate nucleus (LGN) in anesthetized adult male macaques and characterized how their responses adapt to changes in contrast and luminance. As contrast increases, neurons in the magnocellular layers maintain sensitivity to high temporal frequency stimuli but attenuate sensitivity to low-temporal frequency stimuli. Neurons in the parvocellular layers do not adapt to changes in contrast. As luminance increases, both magnocellular and parvocellular cells increase their sensitivity to high-temporal frequency stimuli. Adaptation to luminance is independent of adaptation to contrast, as previously reported for LGN neurons in the cat. Our results are similar to those previously reported for macaque retinal ganglion cells, suggesting that adaptation to luminance and contrast result from two independent mechanisms that are retinal in origin.
Collapse
Affiliation(s)
- R T Raghavan
- Center for Neural Science, New York University, New York, New York 10003
| | - Jenna G Kelly
- Center for Neural Science, New York University, New York, New York 10003
| | - J Michael Hasse
- Center for Neural Science, New York University, New York, New York 10003
| | - Paul G Levy
- Center for Neural Science, New York University, New York, New York 10003
| | - Michael J Hawken
- Center for Neural Science, New York University, New York, New York 10003
| | - J Anthony Movshon
- Center for Neural Science, New York University, New York, New York 10003
| |
Collapse
|
24
|
Conti D, Mora T. Nonequilibrium dynamics of adaptation in sensory systems. Phys Rev E 2022; 106:054404. [PMID: 36559478 DOI: 10.1103/physreve.106.054404] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2020] [Accepted: 10/11/2022] [Indexed: 06/17/2023]
Abstract
Adaptation is used by biological sensory systems to respond to a wide range of environmental signals, by adapting their response properties to the statistics of the stimulus in order to maximize information transmission. We derive rules of optimal adaptation to changes in the mean and variance of a continuous stimulus in terms of Bayesian filters and map them onto stochastic equations that couple the state of the environment to an internal variable controlling the response function. We calculate numerical and exact results for the speed and accuracy of adaptation and its impact on information transmission. We find that, in the regime of efficient adaptation, the speed of adaptation scales sublinearly with the rate of change of the environment. Finally, we exploit the mathematical equivalence between adaptation and stochastic thermodynamics to quantitatively relate adaptation to the irreversibility of the adaptation time course, defined by the rate of entropy production. Our results suggest a means to empirically quantify adaptation in a model-free and nonparametric way.
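For readers unfamiliar with the thermodynamic quantity invoked here, a standard way to quantify the irreversibility of a trajectory in stochastic thermodynamics is the entropy production rate, written (in units of the Boltzmann constant) as the long-time average log-ratio of forward to time-reversed path probabilities; the notation below is generic textbook background, not taken from the paper.

    \[
    \dot{S} \;=\; \lim_{T \to \infty} \frac{1}{T}
    \left\langle \ln \frac{\mathcal{P}[x(t)]}{\tilde{\mathcal{P}}[\tilde{x}(t)]} \right\rangle ,
    \qquad \tilde{x}(t) = x(T - t),
    \]

where \(\mathcal{P}\) and \(\tilde{\mathcal{P}}\) are the path probabilities of the forward and time-reversed dynamics, and the average is taken over forward trajectories.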
Collapse
Affiliation(s)
- Daniele Conti
- Laboratoire de Physique, École Normale Supérieure, CNRS, PSL Université, Sorbonne Université, Université de Paris, 75005 Paris, France
| | - Thierry Mora
- Laboratoire de Physique, École Normale Supérieure, CNRS, PSL Université, Sorbonne Université, Université de Paris, 75005 Paris, France
| |
Collapse
|
25
|
Fitzpatrick MJ, Kerschensteiner D. Homeostatic plasticity in the retina. Prog Retin Eye Res 2022; 94:101131. [PMID: 36244950 DOI: 10.1016/j.preteyeres.2022.101131] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2022] [Revised: 09/25/2022] [Accepted: 09/28/2022] [Indexed: 02/07/2023]
Abstract
Vision begins in the retina, whose intricate neural circuits extract salient features of the environment from the light entering our eyes. Neurodegenerative diseases of the retina (e.g., inherited retinal degenerations, age-related macular degeneration, and glaucoma) impair vision and cause blindness in a growing number of people worldwide. Increasing evidence indicates that homeostatic plasticity (i.e., the drive of a neural system to stabilize its function) can, in principle, preserve retinal function in the face of major perturbations, including neurodegeneration. Here, we review the circumstances and events that trigger homeostatic plasticity in the retina during development, sensory experience, and disease. We discuss the diverse mechanisms that cooperate to compensate and the set points and outcomes that homeostatic retinal plasticity stabilizes. Finally, we summarize the opportunities and challenges for unlocking the therapeutic potential of homeostatic plasticity. Homeostatic plasticity is fundamental to understanding retinal development and function and could be an important tool in the fight to preserve and restore vision.
Collapse
|
26
|
Karamanlis D, Schreyer HM, Gollisch T. Retinal Encoding of Natural Scenes. Annu Rev Vis Sci 2022; 8.
Abstract
An ultimate goal in retina science is to understand how the neural circuit of the retina processes natural visual scenes. Yet most studies in laboratories have long been performed with simple, artificial visual stimuli such as full-field illumination, spots of light, or gratings. The underlying assumption is that the features of the retina thus identified carry over to the more complex scenario of natural scenes. As the application of corresponding natural settings is becoming more commonplace in experimental investigations, this assumption is being put to the test and opportunities arise to discover processing features that are triggered by specific aspects of natural scenes. Here, we review how natural stimuli have been used to probe, refine, and complement knowledge accumulated under simplified stimuli, and we discuss challenges and opportunities along the way toward a comprehensive understanding of the encoding of natural scenes. Expected final online publication date for the Annual Review of Vision Science, Volume 8 is September 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
Collapse
Affiliation(s)
- Dimokratis Karamanlis
- Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany.,Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany.,International Max Planck Research School for Neurosciences, Göttingen, Germany
| | - Helene Marianne Schreyer
- Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany.,Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
| | - Tim Gollisch
- Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany.,Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany.,Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, Göttingen, Germany
| |
Collapse
|
27
|
Jarvis J, Triantaphillidou S, Gupta G. Contrast discrimination in images of natural scenes. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION 2022; 39:B50-B64. [PMID: 36215527 DOI: 10.1364/josaa.447390] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/04/2021] [Accepted: 03/30/2022] [Indexed: 06/16/2023]
Abstract
Contrast discrimination determines the threshold contrast required to distinguish between two suprathreshold visual stimuli. It is typically measured using sine-wave gratings. We first present a modification to Barten's semi-mechanistic contrast discrimination model to account for spatial frequency effects and demonstrate how the model can successfully predict visual thresholds obtained from published classical contrast discrimination studies. Contrast discrimination functions are then measured from images of natural scenes, using a psychophysical paradigm based on that employed in our previous study of contrast detection sensitivity. The proposed discrimination model modification is shown to successfully predict discrimination thresholds for structurally very different types of natural image stimuli. A comparison of results shows that, for normal contrast levels in natural scene viewing, contextual contrast detection and discrimination are approximately the same and almost independent of spatial frequency within the range of 1-20 c/deg. At higher frequencies, both sensitivities decrease in magnitude due to optical limitations of the eye. The results are discussed in relation to current image quality models.
Collapse
|
28
|
Schlegelmilch K, Wertz AE. Visual segmentation of complex naturalistic structures in an infant eye-tracking search task. PLoS One 2022; 17:e0266158. [PMID: 35363809 PMCID: PMC8975119 DOI: 10.1371/journal.pone.0266158] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2021] [Accepted: 03/15/2022] [Indexed: 11/24/2022] Open
Abstract
An infant’s everyday visual environment is composed of a complex array of entities, some of which are well integrated into their surroundings. Although infants are already sensitive to some categories in their first year of life, it is not clear which visual information supports their detection of meaningful elements within naturalistic scenes. Here we investigated the impact of image characteristics on 8-month-olds’ search performance using a gaze contingent eye-tracking search task. Infants had to detect a target patch on a background image. The stimuli consisted of images taken from three categories: vegetation, non-living natural elements (e.g., stones), and manmade artifacts, for which we also assessed target background differences in lower- and higher-level visual properties. Our results showed that larger target-background differences in the statistical properties scaling invariance and entropy, and also stimulus backgrounds including low pictorial depth, predicted better detection performance. Furthermore, category membership only affected search performance if supported by luminance contrast. Data from an adult comparison group also indicated that infants’ search performance relied more on lower-order visual properties than adults. Taken together, these results suggest that infants use a combination of property- and category-related information to parse complex visual stimuli.
Collapse
Affiliation(s)
- Karola Schlegelmilch
- Max Planck Research Group Naturalistic Social Cognition, Max Planck Institute for Human Development, Berlin, Germany
| | - Annie E. Wertz
- Max Planck Research Group Naturalistic Social Cognition, Max Planck Institute for Human Development, Berlin, Germany
| |
Collapse
|
29
|
Abstract
The THINGS database is a freely available stimulus set that has the potential to facilitate the generation of theory that bridges multiple areas within cognitive neuroscience. The database consists of 26,107 high quality digital photos that are sorted into 1,854 concepts. While a valuable resource, relatively few technical details relevant to the design of studies in cognitive neuroscience have been described. We present an analysis of two key low-level properties of THINGS images, luminance and luminance contrast. These image statistics are known to influence common physiological and neural correlates of perceptual and cognitive processes. In general, we found that the distributions of luminance and contrast are in close agreement with the statistics of natural images reported previously. However, we found that image concepts are separable in their luminance and contrast: we show that luminance and contrast alone are sufficient to classify images into their concepts with above chance accuracy. We describe how these factors may confound studies using the THINGS images, and suggest simple controls that can be implemented a priori or post-hoc. We discuss the importance of using such natural images as stimuli in psychological research.
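The claim that luminance and contrast alone support above-chance concept classification can be illustrated with a toy sketch like the one below (assuming NumPy and scikit-learn). The feature definitions, mean luminance and RMS contrast, and all variable names are illustrative assumptions, not the authors' exact analysis pipeline.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def lum_contrast_features(images):
        # Two low-level features per image: mean luminance and RMS contrast.
        feats = []
        for img in images:              # each img: 2D grayscale array in [0, 1]
            mean_lum = img.mean()
            rms_contrast = img.std() / (mean_lum + 1e-8)
            feats.append([mean_lum, rms_contrast])
        return np.array(feats)

    def concept_decoding_accuracy(images, labels):
        # images: list of 2D arrays; labels: integer concept index per image.
        X = lum_contrast_features(images)
        clf = LogisticRegression(max_iter=1000)
        # 5-fold cross-validated accuracy; compare against 1 / n_concepts chance.
        return cross_val_score(clf, X, labels, cv=5).mean()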
Collapse
Affiliation(s)
- William J Harrison
- Queensland Brain Institute and School of Psychology, The University of Queensland
| |
Collapse
|
30
|
Ketkar MD, Gür B, Molina-Obando S, Ioannidou M, Martelli C, Silies M. First-order visual interneurons distribute distinct contrast and luminance information across ON and OFF pathways to achieve stable behavior. eLife 2022; 11:74937. [PMID: 35263247 PMCID: PMC8967382 DOI: 10.7554/elife.74937] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2021] [Accepted: 03/03/2022] [Indexed: 11/26/2022] Open
Abstract
The accurate processing of contrast is the basis for all visually guided behaviors. Visual scenes with rapidly changing illumination challenge contrast computation because photoreceptor adaptation is not fast enough to compensate for such changes. Yet, human perception of contrast is stable even when the visual environment is quickly changing, suggesting rapid post receptor luminance gain control. Similarly, in the fruit fly Drosophila, such gain control leads to luminance invariant behavior for moving OFF stimuli. Here, we show that behavioral responses to moving ON stimuli also utilize a luminance gain, and that ON-motion guided behavior depends on inputs from three first-order interneurons L1, L2, and L3. Each of these neurons encodes contrast and luminance differently and distributes information asymmetrically across both ON and OFF contrast-selective pathways. Behavioral responses to both ON and OFF stimuli rely on a luminance-based correction provided by L1 and L3, wherein L1 supports contrast computation linearly, and L3 non-linearly amplifies dim stimuli. Therefore, L1, L2, and L3 are not specific inputs to ON and OFF pathways but the lamina serves as a separate processing layer that distributes distinct luminance and contrast information across ON and OFF pathways to support behavior in varying conditions.
Collapse
Affiliation(s)
- Madhura D Ketkar
- Institute of Developmental Biology and Neurobiology, Johannes Gutenberg University of Mainz, Mainz, Germany
| | - Burak Gür
- Institute of Developmental Biology and Neurobiology, Johannes Gutenberg University of Mainz, Mainz, Germany
| | - Sebastian Molina-Obando
- Institute of Developmental Biology and Neurobiology, Johannes Gutenberg University of Mainz, Mainz, Germany
| | - Maria Ioannidou
- Institute of Developmental Biology and Neurobiology, Johannes Gutenberg University of Mainz, Mainz, Germany
| | - Carlotta Martelli
- Institute of Developmental Biology and Neurobiology, Johannes Gutenberg University of Mainz, Mainz, Germany
| | - Marion Silies
- Institute of Developmental Biology and Neurobiology, Johannes Gutenberg University of Mainz, Mainz, Germany
| |
Collapse
|
31
|
Angueyra JM, Baudin J, Schwartz GW, Rieke F. Predicting and Manipulating Cone Responses to Naturalistic Inputs. J Neurosci 2022; 42:1254-1274. [PMID: 34949692 PMCID: PMC8883858 DOI: 10.1523/jneurosci.0793-21.2021] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2021] [Revised: 11/06/2021] [Accepted: 12/03/2021] [Indexed: 11/21/2022] Open
Abstract
Primates explore their visual environment by making frequent saccades, discrete and ballistic eye movements that direct the fovea to specific regions of interest. Saccades produce large and rapid changes in input. The magnitude of these changes and the limited signaling range of visual neurons mean that effective encoding requires rapid adaptation. Here, we explore how macaque cone photoreceptors maintain sensitivity under these conditions. Adaptation makes cone responses to naturalistic stimuli highly nonlinear and dependent on stimulus history. Such responses cannot be explained by linear or linear-nonlinear models but are well explained by a biophysical model of phototransduction based on well-established biochemical interactions. The resulting model can predict cone responses to a broad range of stimuli and enables the design of stimuli that elicit specific (e.g., linear) cone photocurrents. These advances will provide a foundation for investigating the contributions of cone phototransduction and post-transduction processing to visual function. SIGNIFICANCE STATEMENT: We know a great deal about adaptational mechanisms that adjust sensitivity to slow changes in visual inputs such as the rising or setting sun. We know much less about the rapid adaptational mechanisms that are essential for maintaining sensitivity as gaze shifts around a single visual scene. We characterize how phototransduction in cone photoreceptors adapts to rapid changes in input similar to those encountered during natural vision. We incorporate these measurements into a quantitative model that can predict cone responses across a broad range of stimuli. This model not only shows how cone phototransduction aids the encoding of natural inputs but also provides a tool to identify the role of the cone responses in shaping those of downstream visual neurons.
Collapse
Affiliation(s)
- Juan M Angueyra
- Department of Physiology and Biophysics, University of Washington, Seattle, Washington 98195
- National Eye Institute, National Institutes of Health, Bethesda, Maryland 20892
| | - Jacob Baudin
- Department of Physiology and Biophysics, University of Washington, Seattle, Washington 98195
| | - Gregory W Schwartz
- Department of Physiology and Biophysics, University of Washington, Seattle, Washington 98195
- Departments of Ophthalmology and Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, Illinois 60511
| | - Fred Rieke
- Department of Physiology and Biophysics, University of Washington, Seattle, Washington 98195
| |
Collapse
|
32
|
Thurman SM, Cohen Hoffing RA, Madison A, Ries AJ, Gordon SM, Touryan J. "Blue Sky Effect": Contextual Influences on Pupil Size During Naturalistic Visual Search. Front Psychol 2022; 12:748539. [PMID: 34992563 PMCID: PMC8725886 DOI: 10.3389/fpsyg.2021.748539] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2021] [Accepted: 11/16/2021] [Indexed: 01/28/2023] Open
Abstract
Pupil size is influenced by cognitive and non-cognitive factors. One of the strongest modulators of pupil size is scene luminance, which complicates studies of cognitive pupillometry in environments with complex patterns of visual stimulation. To help understand how dynamic visual scene statistics influence pupil size during an active visual search task in a visually rich 3D virtual environment (VE), we analyzed the correlation between pupil size and intensity changes of image pixels in the red, green, and blue (RGB) channels within a large window (~14 degrees) surrounding the gaze position over time. Overall, blue and green channels had a stronger influence on pupil size than the red channel. The correlation maps were not consistent with the hypothesis of a foveal bias for luminance, instead revealing a significant contextual effect, whereby pixels above the gaze point in the green/blue channels had a disproportionate impact on pupil size. We hypothesized this differential sensitivity of pupil responsiveness to blue light from above as a “blue sky effect,” and confirmed this finding with a follow-on experiment with a controlled laboratory task. Pupillary constrictions were significantly stronger when blue was presented above fixation (paired with luminance-matched gray on bottom) compared to below fixation. This effect was specific for the blue color channel and this stimulus orientation. These results highlight the differential sensitivity of pupillary responses to scene statistics in studies or applications that involve complex visual environments and suggest blue light as a predominant factor influencing pupil size.
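A minimal sketch of the kind of analysis described, assuming NumPy: for each color channel, correlate the pupil-size time series with the mean pixel intensity inside a gaze-centered window over time. The window size, array shapes, and names are illustrative assumptions, not the authors' code.

    import numpy as np

    def gaze_window_mean(frames, gaze_xy, half_win=100):
        # Mean intensity per RGB channel within a square window around gaze,
        # one value per frame; frames: (T, H, W, 3), gaze_xy: (T, 2) in pixels.
        T, H, W, _ = frames.shape
        means = np.zeros((T, 3))
        for t in range(T):
            x, y = gaze_xy[t].astype(int)
            x0, x1 = max(x - half_win, 0), min(x + half_win, W)
            y0, y1 = max(y - half_win, 0), min(y + half_win, H)
            means[t] = frames[t, y0:y1, x0:x1, :].reshape(-1, 3).mean(axis=0)
        return means

    def pupil_channel_correlations(pupil, frames, gaze_xy):
        # Pearson correlation between pupil size and each channel's windowed mean.
        means = gaze_window_mean(frames, gaze_xy)
        return [np.corrcoef(pupil, means[:, c])[0, 1] for c in range(3)]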
Collapse
Affiliation(s)
- Steven M Thurman
- US DEVCOM Army Research Laboratory, Human Research and Engineering Directorate, US Army Research Laboratory, Aberdeen Proving Ground, MD, United States
| | - Russell A Cohen Hoffing
- US DEVCOM Army Research Laboratory, Human Research and Engineering Directorate, US Army Research Laboratory, Aberdeen Proving Ground, MD, United States
| | - Anna Madison
- US DEVCOM Army Research Laboratory, Human Research and Engineering Directorate, US Army Research Laboratory, Aberdeen Proving Ground, MD, United States
| | - Anthony J Ries
- US DEVCOM Army Research Laboratory, Human Research and Engineering Directorate, US Army Research Laboratory, Aberdeen Proving Ground, MD, United States
| | | | - Jonathan Touryan
- US DEVCOM Army Research Laboratory, Human Research and Engineering Directorate, US Army Research Laboratory, Aberdeen Proving Ground, MD, United States
| |
Collapse
|
33
|
Yedutenko M, Howlett MHC, Kamermans M. Enhancing the dark side: asymmetric gain of cone photoreceptors underpins their discrimination of visual scenes based on skewness. J Physiol 2021; 600:123-142. [PMID: 34783026 PMCID: PMC9300210 DOI: 10.1113/jp282152] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2021] [Accepted: 11/11/2021] [Indexed: 11/08/2022] Open
Abstract
Psychophysical data indicate that humans can discriminate visual scenes based on their skewness, i.e. the ratio of dark and bright patches within a visual scene. It has also been shown that at a phenomenological level this skew discrimination is described by the so-called blackshot mechanism, which accentuates strong negative contrasts within a scene. Here, we present a set of observations suggesting that the underlying computation might start as early as the cone phototransduction cascade, whose gain is higher for strong negative contrasts than for strong positive contrasts. We recorded from goldfish cone photoreceptors and found that the asymmetry in the phototransduction gain leads to responses with larger amplitudes when using negatively rather than positively skewed light stimuli. This asymmetry in amplitude was present in the cone photocurrent, voltage response and synaptic output. Given that the properties of the phototransduction cascade are universal across vertebrates, it is possible that the mechanism shown here gives rise to a general ability to discriminate between scenes based only on their skewness, which psychophysical studies have shown humans can do. Thus, our data suggest the importance of non-linearity of the early photoreceptor for perception. Additionally, we found that stimulus skewness leads to a subtle change in photoreceptor kinetics. For negatively skewed stimuli, the impulse response functions of the cone peak later than for positively skewed stimuli. However, stimulus skewness does not affect the overall integration time of the cone. KEY POINTS: Humans can discriminate visual scenes based on skewness, i.e. the relative prevalence of bright and dark patches within a scene. Here, we show that negatively skewed time-series stimuli induce larger responses in goldfish cone photoreceptors than comparable positively skewed stimuli. This response asymmetry originates from within the phototransduction cascade, where gain is higher for strong negative contrasts (dark patches) than for strong positive contrasts (bright patches). Unlike the implicit assumption often contained within models of downstream visual neurons, our data show that cone photoreceptors do not simply relay linearly filtered versions of visual stimuli to downstream circuitry, but that they also emphasize specific stimulus features. Given that the phototransduction cascade properties among vertebrate retinas are mostly universal, our data imply that the skew discrimination by human subjects reported in psychophysical studies might stem from the asymmetric gain function of the phototransduction cascade.
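To make the stimulus manipulation concrete, here is a small sketch (assuming NumPy and SciPy; the distribution and parameter values are arbitrary illustrative choices) that generates positively and negatively skewed intensity modulations with matched mean and variance and verifies their skewness, the quantity the abstract refers to.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def skewed_stimulus(n=10000, skew_sign=+1, mean=0.5, sd=0.1):
        # Draw from a lognormal (positively skewed), optionally mirror it
        # around its mean to flip the sign of skewness, then rescale to the
        # target mean and standard deviation.
        x = rng.lognormal(mean=0.0, sigma=0.7, size=n)
        if skew_sign < 0:
            x = 2 * x.mean() - x          # mirroring flips the skewness sign
        x = (x - x.mean()) / x.std()      # zero mean, unit variance
        return mean + sd * x              # matched mean and variance

    pos = skewed_stimulus(skew_sign=+1)
    neg = skewed_stimulus(skew_sign=-1)
    print(stats.skew(pos), stats.skew(neg))  # similar magnitude, opposite sign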
Collapse
Affiliation(s)
- Matthew Yedutenko
- Retinal Signal Processing Laboratory, Netherlands Institute for Neuroscience, Amsterdam, The Netherlands
| | - Marcus H C Howlett
- Retinal Signal Processing Laboratory, Netherlands Institute for Neuroscience, Amsterdam, The Netherlands
| | - Maarten Kamermans
- Retinal Signal Processing Laboratory, Netherlands Institute for Neuroscience, Amsterdam, The Netherlands.,Department of Biomedical Physics and Biomedical Optics, Amsterdam University Medical Center, University of Amsterdam, Amsterdam, The Netherlands
| |
Collapse
|
34
|
Hübner C, Schütz AC. Rapid visual adaptation persists across saccades. iScience 2021; 24:102986. [PMID: 34485868 PMCID: PMC8403744 DOI: 10.1016/j.isci.2021.102986] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2021] [Revised: 05/28/2021] [Accepted: 07/09/2021] [Indexed: 11/26/2022] Open
Abstract
Neurons in the visual cortex quickly adapt to constant input, which should lead to perceptual fading within a few tens of milliseconds. However, perceptual fading is rarely observed in everyday perception, possibly because eye movements refresh retinal input. Recently, it has been suggested that amplitudes of large saccadic eye movements are scaled to maximally decorrelate presaccadic and postsaccadic inputs and thus to annul perceptual fading. However, this argument builds on the assumption that adaptation within naturally brief fixation durations is strong enough to survive any visually disruptive saccade and affect perception. We tested this assumption by measuring the effect of luminance adaptation on postsaccadic contrast perception. We found that postsaccadic contrast perception was affected by presaccadic luminance adaptation during brief periods of fixation. This adaptation effect emerges within 100 milliseconds and persists over seconds. These results indicate that adaptation during natural fixation periods can affect perception even after visually disruptive saccades.
Collapse
Affiliation(s)
- Carolin Hübner
- Allgemeine und Biologische Psychologie, Philipps-Universität Marburg, 35037 Marburg, Germany.,Institut für Psychologie, Humboldt-Universität zu Berlin, 12489 Berlin, Germany
| | - Alexander C Schütz
- Allgemeine und Biologische Psychologie, Philipps-Universität Marburg, 35037 Marburg, Germany.,Center for Mind, Brain and Behavior, Philipps-Universität Marburg, 35037 Marburg, Germany
| |
Collapse
|
35
|
Abstract
Our sense of sight relies on photoreceptors, which transduce photons into the nervous system's electrochemical interpretation of the visual world. These precious photoreceptors can be disrupted by disease, injury, and aging. Once photoreceptors start to die, but before blindness occurs, the remaining retinal circuitry can withstand, mask, or exacerbate the photoreceptor deficit and potentially be receptive to newfound therapies for vision restoration. To maximize the retina's receptivity to therapy, one must understand the conditions that influence the state of the remaining retina. In this review, we provide an overview of the retina's structure and function in health and disease. We analyze a collection of observations on photoreceptor disruption and generate a predictive model to identify parameters that influence the retina's response. Finally, we speculate on whether the retina, with its remarkable capacity to function over light levels spanning nine orders of magnitude, uses these same adaptational mechanisms to withstand and perhaps mask photoreceptor loss.
Collapse
Affiliation(s)
- Joo Yeun Lee
- Department of Ophthalmology, University of California, San Francisco, California 94143, USA
| | - Rachel A Care
- Department of Ophthalmology, University of California, San Francisco, California 94143, USA
| | - Luca Della Santina
- Department of Ophthalmology, University of California, San Francisco, California 94143, USA
- Bakar Computational Health Sciences Institute, University of California, San Francisco, California 94143, USA
| | - Felice A Dunn
- Department of Ophthalmology, University of California, San Francisco, California 94143, USA
| |
Collapse
|
36
|
Qiu Y, Zhao Z, Klindt D, Kautzky M, Szatko KP, Schaeffel F, Rifai K, Franke K, Busse L, Euler T. Natural environment statistics in the upper and lower visual field are reflected in mouse retinal specializations. Curr Biol 2021; 31:3233-3247.e6. [PMID: 34107304 DOI: 10.1016/j.cub.2021.05.017] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2021] [Revised: 04/06/2021] [Accepted: 05/11/2021] [Indexed: 12/29/2022]
Abstract
Pressures for survival make sensory circuits adapted to a species' natural habitat and its behavioral challenges. Thus, to advance our understanding of the visual system, it is essential to consider an animal's specific visual environment by capturing natural scenes, characterizing their statistical regularities, and using them to probe visual computations. Mice, a prominent visual system model, have salient visual specializations, being dichromatic with enhanced sensitivity to green and UV in the dorsal and ventral retina, respectively. However, the characteristics of their visual environment that likely have driven these adaptations are rarely considered. Here, we built a UV-green-sensitive camera to record footage from mouse habitats. This footage is publicly available as a resource for mouse vision research. We found chromatic contrast to greatly diverge in the upper, but not the lower, visual field. Moreover, training a convolutional autoencoder on upper, but not lower, visual field scenes was sufficient for the emergence of color-opponent filters, suggesting that this environmental difference might have driven superior chromatic opponency in the ventral mouse retina, supporting color discrimination in the upper visual field. Furthermore, the upper visual field was biased toward dark UV contrasts, paralleled by more light-offset-sensitive ganglion cells in the ventral retina. Finally, footage recorded at twilight suggests that UV promotes aerial predator detection. Our findings support that natural scene statistics shaped early visual processing in evolution.
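As background for the autoencoder analysis mentioned above, a heavily simplified sketch follows (assuming PyTorch). The two input channels stand for the UV and green chromatic channels, and the architecture, the hypothetical `upper_crops` tensor of upper-visual-field patches, and the training settings are illustrative assumptions, not the authors' model.

    import torch
    import torch.nn as nn

    class TinyConvAutoencoder(nn.Module):
        def __init__(self, in_channels=2):   # 2 chromatic channels: UV, green
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(in_channels, 16, kernel_size=5, stride=2, padding=2),
                nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2),
                nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1),
                nn.ReLU(),
                nn.ConvTranspose2d(16, in_channels, kernel_size=4, stride=2, padding=1),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    def train(model, upper_crops, epochs=10, lr=1e-3):
        # upper_crops: (N, 2, H, W) tensor of UV/green patches (hypothetical).
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(model(upper_crops), upper_crops)
            loss.backward()
            opt.step()
        # After training, first-layer convolution weights can be inspected
        # for UV-green (color-opponent) structure.
        return model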
Collapse
Affiliation(s)
- Yongrong Qiu
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Centre for Integrative Neuroscience (CIN), University of Tübingen, 72076 Tübingen, Germany; Graduate Training Centre of Neuroscience (GTC), International Max Planck Research School, University of Tübingen, 72076 Tübingen, Germany
| | - Zhijian Zhao
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Centre for Integrative Neuroscience (CIN), University of Tübingen, 72076 Tübingen, Germany
| | - David Klindt
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Centre for Integrative Neuroscience (CIN), University of Tübingen, 72076 Tübingen, Germany; Graduate Training Centre of Neuroscience (GTC), International Max Planck Research School, University of Tübingen, 72076 Tübingen, Germany
| | - Magdalena Kautzky
- Division of Neurobiology, Faculty of Biology, LMU Munich, 82152 Planegg-Martinsried, Germany; Graduate School of Systemic Neurosciences (GSN), LMU Munich, 82152 Planegg-Martinsried, Germany
| | - Klaudia P Szatko
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Centre for Integrative Neuroscience (CIN), University of Tübingen, 72076 Tübingen, Germany; Graduate Training Centre of Neuroscience (GTC), International Max Planck Research School, University of Tübingen, 72076 Tübingen, Germany; Bernstein Centre for Computational Neuroscience, 72076 Tübingen, Germany
| | - Frank Schaeffel
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany
| | - Katharina Rifai
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Carl Zeiss Vision International GmbH, 73430 Aalen, Germany
| | - Katrin Franke
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Centre for Integrative Neuroscience (CIN), University of Tübingen, 72076 Tübingen, Germany; Bernstein Centre for Computational Neuroscience, 72076 Tübingen, Germany
| | - Laura Busse
- Division of Neurobiology, Faculty of Biology, LMU Munich, 82152 Planegg-Martinsried, Germany; Bernstein Centre for Computational Neuroscience, 82152 Planegg-Martinsried, Germany.
| | - Thomas Euler
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Centre for Integrative Neuroscience (CIN), University of Tübingen, 72076 Tübingen, Germany; Bernstein Centre for Computational Neuroscience, 72076 Tübingen, Germany.
| |
Collapse
|
37
|
Tang R, Chen W, Wang Y. Different roles of subcortical inputs in V1 responses to luminance and contrast. Eur J Neurosci 2021; 53:3710-3726. [PMID: 33848389 DOI: 10.1111/ejn.15233] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2020] [Revised: 04/02/2021] [Accepted: 04/09/2021] [Indexed: 01/01/2023]
Abstract
Cells in the primary visual cortex (V1) generally respond weakly to large uniform luminance stimuli, and only a subset of V1 cells is thought to encode uniform luminance information. In natural scenes, local luminance is an important feature for defining an object; it varies and coexists with local spatial contrast. However, the strategies used by V1 cells to encode local mean luminance in the presence of spatial contrast stimuli remain largely unclear. Here, using extracellular recordings in anesthetized cats, we investigated the responses of V1 cells, comparing them with those of retinal ganglion (RG) cells and lateral geniculate nucleus (LGN) cells, to simultaneous and rapid changes in luminance and spatial contrast. Almost all V1 cells exhibited strong, monotonically increasing luminance tuning when exposed to high spatial contrast. Thus, V1 cells encode the luminance carried by spatial contrast stimuli with a monotonically increasing response function. Moreover, high contrast decreased the luminance tuning of OFF cells but increased that of ON cells in the RG and LGN. The luminance and contrast tunings of LGN ON cells were highly separable, as in V1 cells, whereas those of LGN OFF cells were only weakly separable. These asymmetrical effects of spatial contrast on the ON/OFF channels might underlie the robust luminance tuning of V1 cells when exposed to spatial contrast stimuli.
Collapse
Affiliation(s)
- Rendong Tang
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China.,University of Chinese Academy of Sciences, Beijing, China
| | - Wenzhen Chen
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China.,University of Chinese Academy of Sciences, Beijing, China
| | - Yi Wang
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China.,University of Chinese Academy of Sciences, Beijing, China
| |
Collapse
|
38
|
A Fast Preprocessing Method for Micro-Expression Spotting via Perceptual Detection of Frozen Frames. J Imaging 2021; 7:jimaging7040068. [PMID: 34460518 PMCID: PMC8321339 DOI: 10.3390/jimaging7040068] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2021] [Revised: 03/23/2021] [Accepted: 03/30/2021] [Indexed: 11/17/2022] Open
Abstract
This paper presents a preliminary study of a fast preprocessing method for facial micro-expression (ME) spotting in video sequences. The rationale is to detect frames containing frozen expressions as a quick warning for the presence of MEs; such frames can precede MEs, follow them, or both, depending on the ME type and the subject's reaction. To that end, inspired by the Adelson-Bergen motion energy model and the instinctive nature of preattentive vision, global visual perception-based features were employed to detect frozen frames. Preliminary results on both controlled and uncontrolled videos confirmed that the proposed method correctly detects frozen frames and those revealing the presence of nearby MEs, independently of ME kind and facial region. This property can help speed up and simplify the ME spotting process, especially during long video acquisitions.
Collapse
|
39
|
Image luminance changes contrast sensitivity in visual cortex. Cell Rep 2021; 34:108692. [PMID: 33535047 PMCID: PMC7886026 DOI: 10.1016/j.celrep.2021.108692] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2020] [Revised: 11/16/2020] [Accepted: 01/04/2021] [Indexed: 12/21/2022] Open
Abstract
Accurate measures of contrast sensitivity are important for evaluating visual disease progression and for navigation safety. Previous measures suggested that cortical contrast sensitivity was constant across widely different luminance ranges experienced indoors and outdoors. Against this notion, here, we show that luminance range changes contrast sensitivity in both cat and human cortex, and the changes are different for dark and light stimuli. As luminance range increases, contrast sensitivity increases more within cortical pathways signaling lights than those signaling darks. Conversely, when the luminance range is constant, light-dark differences in contrast sensitivity remain relatively constant even if background luminance changes. We show that a Naka-Rushton function modified to include luminance range and light-dark polarity accurately replicates both the statistics of light-dark features in natural scenes and the cortical responses to multiple combinations of contrast and luminance. We conclude that differences in light-dark contrast increase with luminance range and are largest in bright environments.
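For reference, the unmodified Naka-Rushton contrast-response function that the authors extend is the standard saturating form below; the paper's modification adds dependencies on luminance range and on light/dark polarity, whose exact parameterization is not given in the abstract.

    \[
    R(C) \;=\; R_{\max}\,\frac{C^{\,n}}{C^{\,n} + C_{50}^{\,n}},
    \]

where \(R_{\max}\) is the saturating response, \(C_{50}\) is the semi-saturation contrast, and \(n\) controls the steepness of the curve.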
Collapse
|
40
|
Tobon H, Trujillo C, Garcia-Sucerquia J. Preprocessing in digital lensless holographic microscopy for intensity reconstructions with enhanced contrast. APPLIED OPTICS 2021; 60:A215-A221. [PMID: 33690372 DOI: 10.1364/ao.404297] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/31/2020] [Accepted: 10/22/2020] [Indexed: 06/12/2023]
Abstract
In this work, a numerical method to enhance the contrast of intensity hologram reconstructions in digital lensless holographic microscopy (DLHM) is presented. The method manipulates the in-line hologram and reference images through mathematical operations between them; additionally, a sharpening operation, expressed as a function of the parameters of the recording setup, is applied to these images. The preprocessing of the recorded images produces a modified in-line hologram and a reference wave image from which an intensity reconstruction is achieved whose contrast is improved by 25% with respect to the conventional reconstruction procedure. The method is illustrated with intensity reconstructions of a hologram of a monolayer of polystyrene spheres 1.09 µm in diameter. Finally, the preprocessing method is validated with a modeled hologram, successfully applied to holograms of a section of the head of a Drosophila melanogaster fly, and its results are contrasted with those obtained via bright-field microscopy.
Collapse
|
41
|
Cropper SJ, McCauley A, Gwinn OS, Bartlett M, Nicholls MER. Flowers in the Attic: Lateralization of the detection of meaning in visual noise. J Vis 2020; 20:11. [PMID: 33027510 PMCID: PMC7545083 DOI: 10.1167/jov.20.10.11] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2019] [Accepted: 09/03/2020] [Indexed: 12/03/2022] Open
Abstract
The brain is a slave to sense; we see and hear things that are not there and engage in ongoing correction of these illusory experiences, commonly termed pareidolia. The current study investigates whether the predisposition to see meaning in noise is lateralized to one hemisphere or the other and how this predisposition to visual false-alarms is related to personality. Stimuli consisted of images of faces or flowers embedded in pink (1/f) noise generated through a novel process and presented in a divided-field paradigm. Right-handed undergraduates participated in a forced-choice signal-detection task where they determined whether a face or flower signal was present in a single-interval trial. Experiment 1 involved an equal ratio of signal-to-noise trials; Experiment 2 provided more potential for illusory perception with 25% signal and 75% noise trials. There was no asymmetry in the ability to discriminate signal from noise trials (measured using d') for either faces or flowers, although the response criterion (c) suggested a stronger predisposition to visual false alarms in the right visual field, and this was negatively correlated with the unusual experiences dimension of schizotypy. Counter to expectations, changing the signal-image to noise-image proportion in Experiment 2 did not change the number of false alarms for either faces or flowers, although a stronger bias was seen to the right visual field; sensitivity remained the same in both hemifields, but there was a moderate positive correlation between cognitive disorganization and the bias (c) for "flower" judgements. Overall, these results were consistent with the task being mediated by a rapid evidence-accumulation process, of the kind described by a diffusion decision model, lateralized to the left hemisphere.
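The sensitivity and criterion measures reported here (d' and c) follow standard equal-variance signal detection theory; a minimal sketch, assuming SciPy, computes them from hit and false-alarm rates (the function and variable names are illustrative).

    from scipy.stats import norm

    def dprime_criterion(hit_rate, fa_rate):
        # Equal-variance SDT: d' = z(H) - z(F), c = -(z(H) + z(F)) / 2,
        # where z is the inverse of the standard normal CDF.
        zh, zf = norm.ppf(hit_rate), norm.ppf(fa_rate)
        return zh - zf, -(zh + zf) / 2

    # Example: 80% hits, 30% false alarms.
    d, c = dprime_criterion(0.80, 0.30)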
Collapse
Affiliation(s)
- Simon J Cropper
- Melbourne School of Psychological Sciences, University of Melbourne, Melbourne, Australia
| | - Ashlan McCauley
- School of Psychology, Flinders University, Adelaide, Australia
| | - O Scott Gwinn
- School of Psychology, Flinders University, Adelaide, Australia
| | - Megan Bartlett
- School of Psychology, Flinders University, Adelaide, Australia
| | | |
Collapse
|
42
|
Evaluation of non-traditional visualization methods to detect surface attachment of biofilms. Colloids Surf B Biointerfaces 2020; 196:111320. [PMID: 32956995 DOI: 10.1016/j.colsurfb.2020.111320] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2020] [Revised: 08/08/2020] [Accepted: 08/10/2020] [Indexed: 11/24/2022]
Abstract
In food safety and food quality, biofilm research is of great importance for mitigating food-borne pathogens in food processing environments. To supplement the traditional staining techniques for biofilm characterization, we introduce several non-traditional imaging methods for detecting biofilm attachment to the solid-liquid and air-liquid interfaces. For strains of Pseudomonas aeruginosa (the positive control), Acinetobacter baumanii, Listeria monocytogenes and Salmonella enterica, the traditional crystal violet assay showed evidence of biofilm attachment to the well plate base as well as inferred the presence of an air-liquid biofilm attached on the upper well walls where the meniscus was present. However, air-liquid biofilms and solid-surface-attached biofilms were not detected for all of these strains using the non-traditional imaging methods. For L. monocytogenes, we were unable to detect biofilms at a particle-laden, air-liquid interface as evidenced through microscopy, which contradicts the meniscus staining test and suggests that the coffee-ring effect may lead to false positives when using meniscus staining. Furthermore, when L. monocytogenes was cultivated in a pendant droplet in air, only microbial sediment at the droplet apex was observed without any apparent bacterial colonization of the droplet surface. All other strains showed clear evidence of air-liquid biofilms at the air-liquid interface of a pendant droplet. To non-invasively detect if and when air-liquid pellicles form in a well plate, we also present a novel in situ reflection assay that demonstrates the capacity to do this quantitatively.
Collapse
|
43
|
Mazade R, Jin J, Pons C, Alonso JM. Functional Specialization of ON and OFF Cortical Pathways for Global-Slow and Local-Fast Vision. Cell Rep 2020; 27:2881-2894.e5. [PMID: 31167135 DOI: 10.1016/j.celrep.2019.05.007] [Citation(s) in RCA: 32] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2018] [Revised: 03/07/2019] [Accepted: 04/30/2019] [Indexed: 12/20/2022] Open
Abstract
Visual information is processed in the cortex by ON and OFF pathways that respond to light and dark stimuli. Responses to darks are stronger, faster, and driven by a larger number of cortical neurons than responses to lights. Here, we demonstrate that these light-dark cortical asymmetries reflect a functional specialization of ON and OFF pathways for different stimulus properties. We show that large long-lasting stimuli drive stronger cortical responses when they are light, whereas small fast stimuli drive stronger cortical responses when they are dark. Moreover, we show that these light-dark asymmetries are preserved under a wide variety of luminance conditions that range from photopic to low mesopic light. Our results suggest that ON and OFF pathways extract different spatiotemporal information from visual scenes, making OFF local-fast signals better suited to maximize visual acuity and ON global-slow signals better suited to guide the eye movements needed for retinal image stabilization.
Collapse
Affiliation(s)
- Reece Mazade
- Department of Biological and Visual Sciences, SUNY College of Optometry, New York, NY 10036, USA
| | - Jianzhong Jin
- Department of Biological and Visual Sciences, SUNY College of Optometry, New York, NY 10036, USA
| | - Carmen Pons
- Department of Biological and Visual Sciences, SUNY College of Optometry, New York, NY 10036, USA
| | - Jose-Manuel Alonso
- Department of Biological and Visual Sciences, SUNY College of Optometry, New York, NY 10036, USA.
| |
Collapse
|
44
|
Chen Z, Su Y, Wang Y, Wang Q, Qu H, Wu Y. MARVisT: Authoring Glyph-Based Visualization in Mobile Augmented Reality. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:2645-2658. [PMID: 30640614 DOI: 10.1109/tvcg.2019.2892415] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Recent advances in mobile augmented reality (AR) techniques have shed new light on personal visualization, owing to their advantages of fitting visualization within personal routines, situating visualization in a real-world context, and arousing users' interest. However, enabling non-experts to create data visualizations in mobile AR environments is challenging given the lack of tools that allow in-situ design while supporting the binding of data to AR content. Most existing AR authoring tools require working on personal computers or manually creating each virtual object and modifying its visual attributes. We systematically study this issue by identifying the specific requirements of an AR glyph-based visualization authoring tool and distilling four design considerations. Following these design considerations, we design and implement MARVisT, a mobile authoring tool that leverages information from reality to assist non-experts in addressing relationships between data and virtual glyphs, real objects and virtual glyphs, and real objects and data. With MARVisT, users without visualization expertise can bind data to real-world objects to create expressive AR glyph-based visualizations rapidly and effortlessly, reshaping the representation of the real world with data. We use several examples to demonstrate the expressiveness of MARVisT. A user study with non-experts is also conducted to evaluate the authoring experience of MARVisT.
Collapse
|
45
|
Puckett AM, Schira MM, Isherwood ZJ, Victor JD, Roberts JA, Breakspear M. Manipulating the structure of natural scenes using wavelets to study the functional architecture of perceptual hierarchies in the brain. Neuroimage 2020; 221:117173. [PMID: 32682991 PMCID: PMC8239382 DOI: 10.1016/j.neuroimage.2020.117173] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2019] [Revised: 05/11/2020] [Accepted: 07/14/2020] [Indexed: 01/08/2023] Open
Abstract
Functional neuroimaging experiments that employ naturalistic stimuli (natural scenes, films, spoken narratives) provide insights into cognitive function "in the wild". Natural stimuli typically possess crowded, spectrally dense, dynamic, and multimodal properties within a rich multiscale structure. However, when using natural stimuli, various challenges exist for creating parametric manipulations with tight experimental control. Here, we revisit the typical spectral composition and statistical dependences of natural scenes, which distinguish them from abstract stimuli. We then demonstrate how to selectively degrade subtle statistical dependences within specific spatial scales using the wavelet transform. Such manipulations leave basic features of the stimuli, such as luminance and contrast, intact. Using functional neuroimaging of human participants viewing degraded natural images, we demonstrate that cortical responses at different levels of the visual hierarchy are differentially sensitive to subtle statistical dependences in natural images. This demonstration supports the notion that perceptual systems in the brain are optimally tuned to the complex statistical properties of the natural world. The code to undertake these stimulus manipulations, and their natural extension to dynamic natural scenes (films), is freely available.
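A minimal sketch of the kind of scale-selective manipulation described, assuming PyWavelets (pywt) and NumPy: decompose an image with a 2D discrete wavelet transform, shuffle the detail coefficients at one chosen scale to degrade statistical dependences there while leaving their marginal distribution intact, and reconstruct. The wavelet family, decomposition level, and shuffling operation are illustrative assumptions, not the authors' exact procedure (their code is released separately).

    import numpy as np
    import pywt

    rng = np.random.default_rng(0)

    def degrade_scale(image, level_to_degrade=2, wavelet="db4", levels=4):
        # Multilevel 2D DWT: coeffs[0] is the approximation, coeffs[1:] are
        # (horizontal, vertical, diagonal) detail tuples from coarse to fine.
        coeffs = pywt.wavedec2(image, wavelet, level=levels)
        new_coeffs = [coeffs[0]]
        for i, details in enumerate(coeffs[1:], start=1):
            if i == level_to_degrade:
                # Randomly permute coefficients within each detail band,
                # destroying spatial dependences at this scale only.
                details = tuple(rng.permutation(d.ravel()).reshape(d.shape)
                                for d in details)
            new_coeffs.append(details)
        return pywt.waverec2(new_coeffs, wavelet)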
Collapse
Affiliation(s)
- Alexander M Puckett
- School of Psychology, The University of Queensland, Brisbane QLD 4072, Australia; Queensland Brain Institute, The University of Queensland, Brisbane QLD 4072, Australia.
| | - Mark M Schira
- School of Psychology, University of Wollongong, Wollongong NSW 2522, Australia
| | - Zoey J Isherwood
- School of Psychology, University of Nevada, Reno NV 89557, United States
| | - Jonathan D Victor
- Feil Family Brain and Mind Research Institute and Department of Neurology, Weill Cornell Medical College, New York NY 10065, United States
| | - James A Roberts
- Brain Modelling Group, QIMR Berghofer Medical Research Institute, Brisbane QLD 4006, Australia
| | - Michael Breakspear
- Brain and Mind PRC, University of Newcastle, Newcastle NSW 2308, Australia
| |
Collapse
|
46
|
Zhang JY, Ren JJ, Li LJ, Gu J, Zhang DD. THz imaging technique for nondestructive analysis of debonding defects in ceramic matrix composites based on multiple echoes and feature fusion. OPTICS EXPRESS 2020; 28:19901-19915. [PMID: 32680060 DOI: 10.1364/oe.394177] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/31/2020] [Accepted: 06/04/2020] [Indexed: 06/11/2023]
Abstract
We propose a THz nondestructive analysis method based on multiple echoes and feature fusion. Conventionally, it is difficult to identify debonding defects in the glue layer (II) because the adhesive layer is thin. To this end, a THz propagation model is established, and a quantitative method for determining the thickness of debonding defects from multiple echoes is presented; the measurement error for a preset defect thickness of 500 µm was 4%. Further, to determine the area of debonding defects, a feature fusion imaging algorithm is proposed to realize lateral recognition of defects, and quantitative analysis is used to improve defect recognition.
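Although the abstract does not spell out the formula, thickness estimation from successive echoes in THz time-of-flight measurements conventionally rests on the round-trip optical path relation below; it is included here only as standard background, with \(n\) the refractive index of the layer and \(\Delta t\) the delay between successive echoes.

    \[
    d \;=\; \frac{c\,\Delta t}{2\,n},
    \]

where \(c\) is the speed of light in vacuum.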
Collapse
|
47
|
Vinke LN, Yazdanbakhsh A. Lightness induction enhancements and limitations at low frequency modulations across a variety of stimulus contexts. PeerJ 2020; 8:e8918. [PMID: 32351782 PMCID: PMC7183748 DOI: 10.7717/peerj.8918] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2019] [Accepted: 03/16/2020] [Indexed: 11/20/2022] Open
Abstract
Lightness illusions are often studied under static viewing conditions with figures varying in geometric design, containing different types of perceptual grouping and figure-ground cues. A few studies have explored the perception of lightness induction while modulating lightness illusions continuously in time, where changes in perceived lightness are often linked to the temporal modulation frequency, up to around 2–4 Hz. These findings support the concept of a cut-off frequency for lightness induction. However, another critical change (enhancement) in the magnitude of perceived lightness during slower temporal modulation conditions has not been addressed in previous temporal modulation studies. Moreover, it remains unclear whether this critical change applies to a variety of lightness illusion stimuli, and the degree to which different stimulus configurations can demonstrate enhanced lightness induction in low modulation frequencies. Therefore, we measured lightness induction strength by having participants cancel out any perceived modulation in lightness detected over time within a central target region, while the surrounding context, which ultimately drives the lightness illusion, was viewed in a static state or modulated continuously in time over a low frequency range (0.25–2 Hz). In general, lightness induction decreased as temporal modulation frequency was increased, with the strongest perceived lightness induction occurring at lower modulation frequencies for visual illusions with strong grouping and figure-ground cues. When compared to static viewing conditions, we found that slow continuous surround modulation induces a strong and significant increase in perceived lightness for multiple types of lightness induction stimuli. Stimuli with perceptually ambiguous grouping and figure-ground cues showed weaker effects of slow modulation lightness enhancement. Our results demonstrate that, in addition to the existence of a cut-off frequency, an additional critical temporal modulation frequency of lightness induction exists (0.25–0.5 Hz), which instead maximally enhances lightness induction and seems to be contingent upon the prevalence of figure-ground and grouping organization.
Collapse
Affiliation(s)
- Louis Nicholas Vinke
- Graduate Program for Neuroscience, Boston University, Boston, MA, USA
- Center for Systems Neuroscience (CSN), Boston University, Boston, MA, USA
| | - Arash Yazdanbakhsh
- Graduate Program for Neuroscience, Boston University, Boston, MA, USA
- Center for Systems Neuroscience (CSN), Boston University, Boston, MA, USA
- Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA
| |
Collapse
|
48
|
Abstract
Our visual system is tasked with transforming variations in light within our environment into a coherent percept, typically described using properties such as luminance and contrast. Models of vision often downplay the importance of luminance in shaping cortical responses, instead prioritizing representations that do not covary with overall luminance (i.e., contrast), and yet visuocortical response properties that may reflect luminance encoding remain poorly understood. In this study, we examined whether well-established visuocortical response properties may also reflect luminance encoding, challenging the idea that luminance information itself plays no significant role in supporting visual perception. To do so, we measured functional activity in human visual cortex when presenting stimuli varying in contrast and mean luminance, and found that luminance response functions are strongly contrast dependent between 50 and 250 cd/m², confirmed with a subsequent experiment. High-contrast stimuli produced linearly increasing responses as luminance increased logarithmically for all early visual areas, whereas low-contrast stimuli produced either flat (V1) or assorted positive linear (V2 and V3) response profiles. These results reveal that the mean luminance information of a visual signal persists within visuocortical representations, potentially reflecting an inherent imbalance of excitatory and inhibitory components that can be either contrast dependent (V1 and V2) or contrast invariant (V3). The role of luminance should be considered when the aim is to drive potent visually evoked responses and when activity is compared across studies. More broadly, overall luminance should be weighed heavily as a core feature of the visual system and should play a significant role in cortical models of vision. NEW & NOTEWORTHY: This neuroimaging study investigates the influence of overall luminance on population activity in human visual cortex. We discovered that the response to a particular stimulus contrast level is reliant, in part, on the mean luminance of a signal, revealing that the mean luminance information of our environment is represented within the visual cortex. The results challenge a long-standing misconception about the role of luminance information in the processing of visual information at the cortical level.
Collapse
Affiliation(s)
- Louis N Vinke
- Graduate Program for Neuroscience, Boston University, Boston, Massachusetts; Center for Systems Neuroscience, Boston University, Boston, Massachusetts
| | - Sam Ling
- Psychological and Brain Sciences, Boston University, Boston, Massachusetts; Center for Systems Neuroscience, Boston University, Boston, Massachusetts
| |
Collapse
|
49
|
Ketkar MD, Sporar K, Gür B, Ramos-Traslosheros G, Seifert M, Silies M. Luminance Information Is Required for the Accurate Estimation of Contrast in Rapidly Changing Visual Contexts. Curr Biol 2020; 30:657-669.e4. [PMID: 32008904 DOI: 10.1016/j.cub.2019.12.038] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2019] [Revised: 11/01/2019] [Accepted: 12/11/2019] [Indexed: 11/28/2022]
Abstract
Visual perception scales with changes in the visual stimulus, or contrast, irrespective of background illumination. However, visual perception is challenged when adaptation is not fast enough to deal with sudden declines in overall illumination, for example, when gaze follows a moving object from bright sunlight into a shaded area. Here, we show that the visual system of the fly employs a solution by propagating a corrective luminance-sensitive signal. We use in vivo 2-photon imaging and behavioral analyses to demonstrate that distinct OFF-pathway inputs encode contrast and luminance. Predictions of contrast-sensitive neuronal responses show that contrast information alone cannot explain behavioral responses in sudden dim light. The luminance-sensitive pathway via the L3 neuron is required for visual processing in such rapidly changing light conditions, ensuring contrast constancy when pure contrast sensitivity underestimates a stimulus. Thus, retaining a peripheral feature, luminance, in visual processing is required for robust behavioral responses.
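As a toy illustration of the failure mode described in this abstract, the sketch below shows how a contrast estimate normalized by a slowly adapting (stale) luminance underestimates a stimulus immediately after a sudden drop in background illumination, and how a luminance-sensitive gain term can restore the estimate. This is not the circuit model from the paper; the functions, the multiplicative correction, and the numerical values are illustrative assumptions.

```python
# Toy numerical illustration (not the circuit model from the paper): a contrast
# pathway whose gain adapts slowly underestimates stimuli right after a sudden
# drop in background luminance; a luminance-sensitive signal can correct this.

def true_contrast(stimulus, local_background):
    """Weber contrast relative to the current local background."""
    return (stimulus - local_background) / local_background

def stale_contrast(stimulus, local_background, adapted_luminance):
    """Contrast estimate normalized by the slowly adapting (stale) luminance."""
    return (stimulus - local_background) / adapted_luminance

def luminance_corrected(stimulus, local_background, adapted_luminance):
    """Hypothetical correction: rescale by a gain supplied by a luminance pathway."""
    gain = adapted_luminance / local_background
    return stale_contrast(stimulus, local_background, adapted_luminance) * gain

adapted, shade, dark_object = 100.0, 10.0, 5.0   # bright adaptation, then sudden shade
print(true_contrast(dark_object, shade))                      # -0.5  (ground truth)
print(stale_contrast(dark_object, shade, adapted))            # -0.05 (10x underestimate)
print(luminance_corrected(dark_object, shade, adapted))       # -0.5  (restored)
```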
Collapse
Affiliation(s)
- Madhura D Ketkar
- Institute of Developmental Biology and Neurobiology, Johannes Gutenberg-Universität Mainz, Hanns-Dieter-Hüsch-Weg 15, Mainz 55128, Germany; European Neuroscience Institute Göttingen, a Joint Initiative of the University Medical Center Göttingen and the Max Planck Society, Grisebachstr. 5, Göttingen 37077, Germany; International Max Planck Research School and Göttingen Graduate School for Neurosciences, Biophysics, and Molecular Biosciences (GGNB) at the University of Göttingen, Justus-von-Liebig-Weg 11, Göttingen 37077, Germany
| | - Katja Sporar
- Institute of Developmental Biology and Neurobiology, Johannes Gutenberg-Universität Mainz, Hanns-Dieter-Hüsch-Weg 15, Mainz 55128, Germany; European Neuroscience Institute Göttingen, a Joint Initiative of the University Medical Center Göttingen and the Max Planck Society, Grisebachstr. 5, Göttingen 37077, Germany; International Max Planck Research School and Göttingen Graduate School for Neurosciences, Biophysics, and Molecular Biosciences (GGNB) at the University of Göttingen, Justus-von-Liebig-Weg 11, Göttingen 37077, Germany
| | - Burak Gür
- Institute of Developmental Biology and Neurobiology, Johannes Gutenberg-Universität Mainz, Hanns-Dieter-Hüsch-Weg 15, Mainz 55128, Germany; European Neuroscience Institute Göttingen, a Joint Initiative of the University Medical Center Göttingen and the Max Planck Society, Grisebachstr. 5, Göttingen 37077, Germany; International Max Planck Research School and Göttingen Graduate School for Neurosciences, Biophysics, and Molecular Biosciences (GGNB) at the University of Göttingen, Justus-von-Liebig-Weg 11, Göttingen 37077, Germany
| | - Giordano Ramos-Traslosheros
- Institute of Developmental Biology and Neurobiology, Johannes Gutenberg-Universität Mainz, Hanns-Dieter-Hüsch-Weg 15, Mainz 55128, Germany; European Neuroscience Institute Göttingen, a Joint Initiative of the University Medical Center Göttingen and the Max Planck Society, Grisebachstr. 5, Göttingen 37077, Germany; International Max Planck Research School and Göttingen Graduate School for Neurosciences, Biophysics, and Molecular Biosciences (GGNB) at the University of Göttingen, Justus-von-Liebig-Weg 11, Göttingen 37077, Germany
| | - Marvin Seifert
- European Neuroscience Institute Göttingen, a Joint Initiative of the University Medical Center Göttingen and the Max Planck Society, Grisebachstr. 5, Göttingen 37077, Germany
| | - Marion Silies
- Institute of Developmental Biology and Neurobiology, Johannes Gutenberg-Universität Mainz, Hanns-Dieter-Hüsch-Weg 15, Mainz 55128, Germany; European Neuroscience Institute Göttingen, a Joint Initiative of the University Medical Center Göttingen and the Max Planck Society, Grisebachstr. 5, Göttingen 37077, Germany.
| |
Collapse
|
50
|
Image statistics of the environment surrounding freely behaving hoverflies. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2019; 205:373-385. [PMID: 30937518 PMCID: PMC6579776 DOI: 10.1007/s00359-019-01329-1] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2018] [Revised: 02/12/2019] [Accepted: 03/14/2019] [Indexed: 12/04/2022]
Abstract
Natural scenes are not as random as they might appear, but are constrained in both space and time. The two-dimensional spatial constraints can be described by quantifying the image statistics of photographs. Human observers perceive images with naturalistic image statistics as more pleasant to view, and both fly and vertebrate peripheral and higher-order visual neurons are tuned to naturalistic image statistics. However, what is natural for a given animal depends on its behavior, and even with a broad understanding of image statistics, we know less about the scenes relevant for particular behaviors. To address this, we investigated the image statistics of the environment surrounding Episyrphus balteatus hoverflies, whose males hover in sun shafts created by surrounding trees, which produce a rich, dense background texture and intricate shadow patterns on the ground. We quantified the image statistics of photographs of the ground and the surrounding panorama, as the ventral and lateral visual fields are particularly important for visual flight control, and found differences in spatial statistics between photographs taken where the hoverflies were hovering and where they were flying. Our results can, in the future, be used to create more naturalistic stimuli for experimenter-controlled experiments in the laboratory.
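For readers unfamiliar with what "quantifying the image statistics of photographs" can involve, the sketch below computes one commonly reported natural-image statistic: the slope of the rotationally averaged amplitude spectrum (natural scenes typically show roughly 1/f amplitude spectra, i.e., a log-log slope near -1). The function name and implementation details are assumptions for illustration and are not the specific measures used in this study.

```python
# Minimal sketch (illustrative, not the paper's analysis): estimate the slope of
# the rotationally averaged amplitude spectrum of a grayscale image.
import numpy as np

def amplitude_spectrum_slope(image):
    """Fit log-amplitude vs. log-spatial-frequency and return the slope."""
    f = np.fft.fftshift(np.fft.fft2(image - image.mean()))   # remove DC, center spectrum
    amp = np.abs(f)
    h, w = image.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)       # radial frequency bins
    counts = np.bincount(r.ravel())
    radial = np.bincount(r.ravel(), weights=amp.ravel()) / np.maximum(counts, 1)
    freqs = np.arange(1, min(h, w) // 2)                     # skip DC, stay below Nyquist
    slope, _ = np.polyfit(np.log(freqs), np.log(radial[freqs]), 1)
    return slope

# Example with white noise (a natural photograph would give a slope near -1):
rng = np.random.default_rng(0)
print(amplitude_spectrum_slope(rng.random((256, 256))))      # white noise: slope near 0
```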
Collapse
|