1
Höfling L, Szatko KP, Behrens C, Deng Y, Qiu Y, Klindt DA, Jessen Z, Schwartz GW, Bethge M, Berens P, Franke K, Ecker AS, Euler T. A chromatic feature detector in the retina signals visual context changes. eLife 2024; 13:e86860. [PMID: 39365730] [PMCID: PMC11452179] [DOI: 10.7554/elife.86860]
Abstract
The retina transforms patterns of light into visual feature representations supporting behaviour. These representations are distributed across various types of retinal ganglion cells (RGCs), whose spatial and temporal tuning properties have been studied extensively in many model organisms, including the mouse. However, it has been difficult to link the potentially nonlinear retinal transformations of natural visual inputs to specific ethological purposes. Here, we discover a nonlinear selectivity to chromatic contrast in an RGC type that allows the detection of changes in visual context. We trained a convolutional neural network (CNN) model on large-scale functional recordings of RGC responses to natural mouse movies, and then used this model to search in silico for stimuli that maximally excite distinct types of RGCs. This procedure predicted centre colour opponency in transient suppressed-by-contrast (tSbC) RGCs, a cell type whose function is being debated. We confirmed experimentally that these cells indeed responded very selectively to Green-OFF, UV-ON contrasts. This type of chromatic contrast was characteristic of transitions from ground to sky in the visual scene, as might be elicited by head or eye movements across the horizon. Because tSbC cells performed best among all RGC types at reliably detecting these transitions, we suggest a role for this RGC type in providing contextual information (i.e. sky or ground) necessary for the selection of appropriate behavioural responses to other stimuli, such as looming objects. Our work showcases how a combination of experiments with natural stimuli and computational modelling allows discovering novel types of stimulus selectivity and identifying their potential ethological relevance.
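The in-silico search described above, finding the stimulus that maximally excites a model neuron ("maximally exciting input", MEI), can be illustrated with a toy sketch. This is not the paper's CNN pipeline: the quadratic response function and the preferred chromatic-contrast vector below are illustrative assumptions, standing in for a trained model whose MEI turns out to be Green-OFF/UV-ON.

```python
# Toy MEI search: gradient ascent on a stimulus to maximise a model
# neuron's response. The "model" is a stand-in quadratic, NOT the
# paper's CNN; the preferred contrast vector is assumed for illustration.
preferred = [-1.0, 1.0]  # [green contrast (OFF), UV contrast (ON)]

def response(stim):
    """Toy neuron: response peaks when stim equals its preferred contrast."""
    return -sum((s - p) ** 2 for s, p in zip(stim, preferred))

def response_grad(stim):
    """Analytic gradient of the toy response w.r.t. the stimulus."""
    return [-2.0 * (s - p) for s, p in zip(stim, preferred)]

def find_mei(steps=200, lr=0.1):
    stim = [0.0, 0.0]  # start from a grey (zero-contrast) stimulus
    for _ in range(steps):
        grad = response_grad(stim)
        stim = [s + lr * g for s, g in zip(stim, grad)]  # ascend response
    return stim

mei = find_mei()  # converges to the neuron's preferred chromatic contrast
```

With a differentiable model, the same loop recovers whatever stimulus the neuron prefers; in the study, applying it per RGC type is what surfaced the centre colour opponency of tSbC cells.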
Affiliation(s)
- Larissa Höfling
  - Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
  - Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Klaudia P Szatko
  - Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
  - Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Christian Behrens
  - Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Yuyao Deng
  - Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
  - Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Yongrong Qiu
  - Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
  - Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Zachary Jessen
  - Feinberg School of Medicine, Department of Ophthalmology, Northwestern University, Chicago, United States
- Gregory W Schwartz
  - Feinberg School of Medicine, Department of Ophthalmology, Northwestern University, Chicago, United States
- Matthias Bethge
  - Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
  - Tübingen AI Center, University of Tübingen, Tübingen, Germany
- Philipp Berens
  - Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
  - Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
  - Tübingen AI Center, University of Tübingen, Tübingen, Germany
  - Hertie Institute for AI in Brain Health, Tübingen, Germany
- Katrin Franke
  - Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Alexander S Ecker
  - Institute of Computer Science and Campus Institute Data Science, University of Göttingen, Göttingen, Germany
  - Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
- Thomas Euler
  - Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
  - Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
2
Pan F. Defocused Image Changes Signaling of Ganglion Cells in the Mouse Retina. Cells 2019; 8:cells8070640. [PMID: 31247948] [PMCID: PMC6678497] [DOI: 10.3390/cells8070640]
Abstract
Myopia is a substantial public health problem worldwide. Although it is known that defocused images alter eye growth and refraction, their effects on retinal ganglion cell (RGC) signaling that lead to either emmetropization or refractive errors have remained elusive. This study aimed to determine whether defocused images affect the signaling of RGCs in the mouse retina. ON and OFF alpha RGCs and ON-OFF RGCs were recorded from adult C57BL/6J wild-type mice. A monochromatic green organic light-emitting display presented images generated by PsychoPy, and the defocused images were projected onto the retina under a microscope. Dark-adapted mouse RGCs were recorded while defocused images of different powers were projected onto the retina. Compared with focused images, defocused images elicited a significantly decreased probability of spikes. More than half of OFF transient RGCs and ON sustained RGCs responded differently to the magnitude of plus and minus optical defocus (although the remaining RGCs we tested exhibited similar responses to both types of defocus). The ON and OFF units of ON-OFF RGCs also differed in their probability of spikes in response to defocused images and spatial frequency images. After application of a gap junction blocker, the probability of spikes of RGCs decreased in the presence of an optically defocused image, and the RGCs also showed increased background noise. Therefore, defocused images changed the signaling of some ON and OFF alpha RGCs and ON-OFF RGCs in the mouse retina. This process may be the first step in the induction of myopia development, and gap junctions appear to play a key role in it.
Affiliation(s)
- Feng Pan
  - School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
3
García García M, Pusti D, Wahl S, Ohlendorf A. A global approach to describe retinal defocus patterns. PLoS One 2019; 14:e0213574. [PMID: 30939130] [PMCID: PMC6445412] [DOI: 10.1371/journal.pone.0213574]
Abstract
The popularity of myopia treatments based on the peripheral defocus theory has risen. So far, little evidence has emerged on the questions of which of these treatments are effective and why. To establish a framework that enables clinicians and researchers to account for the possible interactions of different defocus patterns across the retina, different peripheral refractive error (PRX) profiles of subjects and different designs of optical treatments were evaluated. Dioptric defocus patterns at the retinal level were obtained by merging the matrices of dioptric defocus maps of the visual field for different scenarios with individual peripheral refractive errors and different optical designs of multifocal contact lenses. The resulting matrices were statistically compared using a non-parametric test with familywise error algorithms and multi-comparison tests. The results show that asymmetric peripheral refractive error profiles (temporally or nasally positively skewed) appear to be less prone to change under the defocus imposed by multifocal contact lenses than symmetric profiles (relative peripheral myopia or hyperopia).
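The map-merging step described above can be sketched minimally, assuming (as a simplification) that dioptric contributions add elementwise across same-shaped maps of the visual field: the scene's dioptric demand, the subject's PRX profile, and a multifocal-lens add map combine into one retinal defocus matrix. All values below are illustrative, not data from the study.

```python
# Toy 2x2 "visual field" maps, in dioptres (D); purely illustrative.
scene_demand = [[0.0, 0.5], [0.5, 1.0]]    # dioptric demand of the scene
prx_profile  = [[0.0, -0.5], [-0.5, -1.0]] # relative peripheral refraction
lens_add     = [[2.0, 2.0], [2.0, 2.0]]    # +2.00 D multifocal add, uniform

def merge_maps(*maps):
    """Elementwise sum of same-shaped dioptric maps (rows x columns)."""
    rows, cols = len(maps[0]), len(maps[0][0])
    return [[sum(m[r][c] for m in maps) for c in range(cols)]
            for r in range(rows)]

retinal_defocus = merge_maps(scene_demand, prx_profile, lens_add)
# Each cell now holds the net defocus at that retinal location.
```

The study then compares such merged matrices statistically (non-parametric tests with familywise error control), which this sketch does not attempt.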
Affiliation(s)
- Miguel García García
  - Institute for Ophthalmic Research, Eberhard Karls University Tuebingen, Tuebingen, Baden-Wuerttemberg, Germany
  - Carl Zeiss Vision International GmbH, Aalen, Baden-Wuerttemberg, Germany
- Dibyendu Pusti
  - Laboratorio de Óptica, Universidad de Murcia, Murcia, Spain
- Siegfried Wahl
  - Institute for Ophthalmic Research, Eberhard Karls University Tuebingen, Tuebingen, Baden-Wuerttemberg, Germany
  - Carl Zeiss Vision International GmbH, Aalen, Baden-Wuerttemberg, Germany
- Arne Ohlendorf
  - Institute for Ophthalmic Research, Eberhard Karls University Tuebingen, Tuebingen, Baden-Wuerttemberg, Germany
  - Carl Zeiss Vision International GmbH, Aalen, Baden-Wuerttemberg, Germany
4
Haessig G, Berthelon X, Ieng SH, Benosman R. A Spiking Neural Network Model of Depth from Defocus for Event-based Neuromorphic Vision. Sci Rep 2019; 9:3744. [PMID: 30842458] [PMCID: PMC6403400] [DOI: 10.1038/s41598-019-40064-0]
Abstract
Depth from defocus is an important mechanism that enables vision systems to perceive depth. While machine vision has developed several algorithms to estimate depth from the amount of defocus present at the focal plane, existing techniques are slow, energy-demanding, and rely mainly on numerous acquisitions and massive amounts of filtering operations on the pixels' absolute luminance values. Recent advances in neuromorphic engineering offer an alternative, using event-based silicon retinas and neural processing devices inspired by the organizing principles of the brain. In this paper, we present a low-power, compact, and computationally inexpensive setup to estimate depth in a 3D scene in real time at high rates, one that can be directly implemented with massively parallel, compact, low-latency, and low-power neuromorphic engineering devices. Exploiting the high temporal resolution of the event-based silicon retina, we are able to extract depth at 100 Hz for a power budget lower than 200 mW (10 mW for the camera, 90 mW for the liquid lens, and ~100 mW for the computation). We validate the model with experimental results, highlighting features that are consistent with both computational neuroscience and recent findings in retinal physiology. We demonstrate its efficiency with a prototype of a neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological depth-from-defocus experiments reported in the literature.
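The core focal-sweep idea above can be illustrated with a toy sketch: as a liquid lens sweeps its focal distance, each pixel is sharpest (and an event camera most active there) when the sweep crosses that pixel's true depth, so reading out the sweep position of peak activity recovers depth. The Gaussian activity model and all numbers below are assumptions for illustration, not the paper's spiking pipeline.

```python
import math

# Focal sweep from 0.5 m to 3.0 m in 0.1 m steps (26 settings).
sweep = [0.5 + 0.1 * i for i in range(26)]
true_depths = [1.0, 2.0, 2.5]  # three hypothetical pixels, metres

def event_rate(focus, depth, sigma=0.2):
    """Toy per-pixel event rate: peaks when the focus matches the depth."""
    return math.exp(-((focus - depth) / sigma) ** 2)

def estimate_depth(depth):
    """Sweep the lens, record activity, and read out the peak position."""
    rates = [event_rate(f, depth) for f in sweep]
    return sweep[rates.index(max(rates))]

estimates = [estimate_depth(d) for d in true_depths]
# Each estimate lands on the sweep setting nearest the pixel's true depth.
```

Resolution in this scheme is set by the sweep's step size and rate; the event camera's microsecond timing is what lets the real system localise the activity peak finely enough to run at 100 Hz.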
Affiliation(s)
- Germain Haessig
  - Sorbonne Université, INSERM, CNRS, Institut de la Vision, 17 rue Moreau, 75012, Paris, France
- Xavier Berthelon
  - Sorbonne Université, INSERM, CNRS, Institut de la Vision, 17 rue Moreau, 75012, Paris, France
- Sio-Hoi Ieng
  - Sorbonne Université, INSERM, CNRS, Institut de la Vision, 17 rue Moreau, 75012, Paris, France
- Ryad Benosman
  - Sorbonne Université, INSERM, CNRS, Institut de la Vision, 17 rue Moreau, 75012, Paris, France
  - University of Pittsburgh Medical Center, Biomedical Science Tower 3, Fifth Avenue, Pittsburgh, PA, USA
  - Carnegie Mellon University, Robotics Institute, 5000 Forbes Avenue, Pittsburgh, PA, 15213-3890, USA