1. Findik H, Kaim M, Uzun F, Kanat A, Keleş ON, Aydin MD. Exploring a Novel Hypothesis: Could the Eye Function as a Radar or Ultrasound Device in Depth and Distance Perception? Neurophysiological Insights. Life (Basel) 2025;15:536. PMID: 40283091; PMCID: PMC12028447; DOI: 10.3390/life15040536.
Abstract
Background: Recent advances in ocular physiology suggest that the eyes may function similarly to radar antennae or ultrasound probes, with the occipital cortex acting as a detector, challenging the traditional view of binocular vision as the primary mechanism for depth and distance perception. Methods: We conducted a comprehensive analysis of the neuroanatomical and histological architecture of the neuro-optico-cortical systems in a male wild rabbit model. The objective was to identify potential structural and functional similarities between the retino-optical system and radar/ultrasound effector-detector systems. Results: Histological examination revealed notable similarities between retinal morphology and radar/ultrasound systems. The outermost retinal layer resembled an acoustic lens, with the underlying layers functioning as acoustic matching layers. The ganglion cell layer exhibited characteristics akin to the piezoelectric elements of transducers. Conclusions: Our findings support the hypothesis that the retinal apparatus functions similarly to radar antennae or ultrasound probes. On this account, light-stimulated retinal and occipital cortical cells emit electromagnetic waves through the retina; these waves are reflected by objects and processed in the occipital cortex to provide information on distance, shape, and depth. Such a mechanism may complement binocular vision and enhance depth and distance perception in the visual system. These results open new avenues for research in visual neuroscience and could have implications for understanding various visual phenomena and disorders.
Affiliation(s)
- Hüseyin Findik: Department of Ophthalmology, School of Medicine, Recep Tayyip Erdogan University, 53100 Rize, Turkey
- Muhammet Kaim: Department of Ophthalmology, School of Medicine, Recep Tayyip Erdogan University, 53100 Rize, Turkey
- Feyzahan Uzun: Department of Ophthalmology, School of Medicine, Recep Tayyip Erdogan University, 53100 Rize, Turkey
- Ayhan Kanat: Department of Neurosurgery, School of Medicine, Recep Tayyip Erdogan University, 53100 Rize, Turkey
- Osman Nuri Keleş: Department of Histology, School of Medicine, Ataturk University, 25030 Erzurum, Turkey
- Mehmet Dumlu Aydin: Department of Neurosurgery, School of Medicine, Ataturk University, 25030 Erzurum, Turkey
2. Wang XM, Troje NF. Relating visual and pictorial space: Integration of binocular disparity and motion parallax. J Vis 2024;24(13):7. PMID: 39652056; PMCID: PMC11640909; DOI: 10.1167/jov.24.13.7.
Abstract
Traditionally, perceptual spaces are defined by the medium through which the visual environment is conveyed (e.g., in a physical environment, through a picture, or on a screen). This approach overlooks the distinct contributions of different types of visual information, such as binocular disparity and motion parallax, that transform different visual environments to yield different perceptual spaces. The current study proposes a new approach to describe different perceptual spaces based on different visual information. A geometrical model was developed to delineate the transformations imposed by binocular disparity and motion parallax, including (a) a relief depth scaling along the observer's line of sight and (b) pictorial distortions that rotate the entire perceptual space, as well as the invariant properties after these transformations, including distance, three-dimensional shape, and allocentric direction. The model was fitted to the behavioral results from two experiments, wherein the participants rotated a human figure to point at different targets in virtual reality. The pointer was displayed on a virtual frame that could differentially manipulate the availability of binocular disparity and motion parallax. The model fitted the behavioral results well, and model comparisons validated the relief scaling in the form of depth expansion and the pictorial distortions in the form of an isotropic rotation. Fitted parameters showed that binocular disparity renders distance invariant but also introduces relief depth expansion to three-dimensional objects, whereas motion parallax keeps allocentric direction invariant. We discuss the implications of the mediating effects of binocular disparity and motion parallax when connecting different perceptual spaces.
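To make the two transformations named in the abstract concrete, the sketch below applies a relief depth scaling along the line of sight followed by a rotation of the whole point set. It is a minimal illustration under simplifying assumptions, not the paper's fitted model; the function names, the scaling factor, and the choice of rotation axis are invented for the example.

```python
import numpy as np

def relief_scale(points, view_dir, k):
    """Scale each point's depth component along the observer's line of
    sight by a factor k (relief depth expansion for k > 1)."""
    view_dir = view_dir / np.linalg.norm(view_dir)
    depth = points @ view_dir                       # signed depth along the line of sight
    lateral = points - np.outer(depth, view_dir)    # component orthogonal to the line of sight
    return lateral + np.outer(k * depth, view_dir)

def rotate_space(points, axis, angle):
    """Rotate the whole point set about a fixed axis (Rodrigues' formula),
    standing in for a rotation-like pictorial distortion."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    R = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    return points @ R.T

# Illustrative use: expand depth by 1.3 along +z, then rotate 5 degrees about the vertical axis.
pts = np.array([[0.1, 0.0, 2.0],
                [0.0, 0.2, 2.5]])
out = rotate_space(relief_scale(pts, np.array([0.0, 0.0, 1.0]), 1.3),
                   np.array([0.0, 1.0, 0.0]), np.deg2rad(5))
print(out)
```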
Affiliation(s)
- Xiaoye Michael Wang: Faculty of Kinesiology & Physical Education, University of Toronto, Toronto, Ontario, Canada
- Nikolaus F Troje: BioMotionLab, Centre for Vision Research and Department of Biology, York University, Toronto, Ontario, Canada
3. Yildiz GY, Skarbez R, Sperandio I, Chen SJ, Mulder IJ, Chouinard PA. Linear perspective cues have a greater effect on the perceptual rescaling of distant stimuli than textures in the virtual environment. Atten Percept Psychophys 2024;86:653-665. PMID: 38182938; DOI: 10.3758/s13414-023-02834-x.
Abstract
The presence of pictorial depth cues in virtual environments is important for minimising distortions driven by unnatural viewing conditions (e.g., vergence-accommodation conflict). Our aim was to determine how different pictorial depth cues affect size constancy in virtual environments under binocular and monocular viewing conditions. We systematically removed linear perspective cues and textures of a hallway in a virtual environment. The experiment was performed using the method of constant stimuli. The task required participants to compare the size of 'far' (10 m) and 'near' (5 m) circles displayed inside a virtual environment with one or both or none of the pictorial depth cues. Participants performed the experiment under binocular and monocular viewing conditions while wearing a virtual reality headset. ANOVA revealed that size constancy was greater for both the far and the near circles in the virtual environment with pictorial depth cues compared to the one without cues. However, the effect of linear perspective cues was stronger than textures, especially for the far circle. We found no difference between the binocular and monocular viewing conditions across the different virtual environments. We conclude that linear perspective cues exert a stronger effect than textures on the perceptual rescaling of far stimuli placed in the virtual environment, and that this effect does not vary between binocular and monocular viewing conditions.
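The method of constant stimuli mentioned in the abstract is typically analysed by fitting a psychometric function to the proportion of "larger" responses and reading off the point of subjective equality (PSE). The sketch below shows that generic analysis; it is not the authors' code, and the comparison sizes and response proportions are made-up numbers for illustration.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit

# Comparison circle sizes (normalised so the reference = 1.0) and the
# proportion of "comparison looked larger" responses: illustrative values only.
sizes = np.array([0.80, 0.90, 1.00, 1.10, 1.20, 1.30])
p_larger = np.array([0.05, 0.20, 0.45, 0.70, 0.90, 0.97])

def psychometric(x, pse, sigma):
    """Cumulative Gaussian: probability of judging the comparison larger."""
    return norm.cdf(x, loc=pse, scale=sigma)

(pse, sigma), _ = curve_fit(psychometric, sizes, p_larger, p0=[1.0, 0.1])
print(f"PSE = {pse:.3f}, sigma = {sigma:.3f}")
# A PSE near the reference's physical size indicates good size constancy;
# systematic shifts quantify over- or under-scaling of the distant stimulus.
```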
Affiliation(s)
- Gizem Y Yildiz: Department of Psychology, Counselling, and Therapy, La Trobe University, George Singer Building, Room 460, 75 Kingsbury Drive, Bundoora, Victoria, 3086, Australia; Institute of Neuroscience and Medicine (INM-3), Forschungszentrum Jülich GmbH, Jülich, Germany
- Richard Skarbez: Department of Computer Science and Information Technology, La Trobe University, Melbourne, VIC, Australia
- Irene Sperandio: Department of Psychology and Cognitive Science, University of Trento, Rovereto, TN, Italy
- Sandra J Chen: Department of Psychology, Counselling, and Therapy, La Trobe University, George Singer Building, Room 460, 75 Kingsbury Drive, Bundoora, Victoria, 3086, Australia
- Indiana J Mulder: Department of Psychology, Counselling, and Therapy, La Trobe University, George Singer Building, Room 460, 75 Kingsbury Drive, Bundoora, Victoria, 3086, Australia
- Philippe A Chouinard: Department of Psychology, Counselling, and Therapy, La Trobe University, George Singer Building, Room 460, 75 Kingsbury Drive, Bundoora, Victoria, 3086, Australia
4. Linton P, Morgan MJ, Read JCA, Vishwanath D, Creem-Regehr SH, Domini F. New Approaches to 3D Vision. Philos Trans R Soc Lond B Biol Sci 2023;378:20210443. PMID: 36511413; PMCID: PMC9745878; DOI: 10.1098/rstb.2021.0443.
Abstract
New approaches to 3D vision are enabling new advances in artificial intelligence and autonomous vehicles, a better understanding of how animals navigate the 3D world, and new insights into human perception in virtual and augmented reality. Whilst traditional approaches to 3D vision in computer vision (SLAM: simultaneous localization and mapping), animal navigation (cognitive maps), and human vision (optimal cue integration) start from the assumption that the aim of 3D vision is to provide an accurate 3D model of the world, the new approaches to 3D vision explored in this issue challenge this assumption. Instead, they investigate the possibility that computer vision, animal navigation, and human vision can rely on partial or distorted models or no model at all. This issue also highlights the implications for artificial intelligence, autonomous vehicles, human perception in virtual and augmented reality, and the treatment of visual disorders, all of which are explored by individual articles. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
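For readers unfamiliar with the "optimal cue integration" baseline the abstract contrasts against, the sketch below implements the standard reliability-weighted (maximum-likelihood) combination of two depth cues. The cue values and uncertainties are illustrative assumptions, not results from any of the issue's articles.

```python
import numpy as np

def mle_combination(estimates, sigmas):
    """Reliability-weighted average of single-cue estimates and the
    standard deviation of the combined estimate."""
    weights = 1.0 / np.square(sigmas)
    weights = weights / weights.sum()
    combined = float(np.dot(weights, estimates))
    combined_sigma = float(np.sqrt(1.0 / np.sum(1.0 / np.square(sigmas))))
    return combined, combined_sigma

# Illustrative use: disparity says 1.00 m (sd 0.05 m), motion parallax says 1.10 m (sd 0.10 m).
print(mle_combination(np.array([1.00, 1.10]), np.array([0.05, 0.10])))
```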
Affiliation(s)
- Paul Linton: Presidential Scholars in Society and Neuroscience, Center for Science and Society, Columbia University, New York, NY 10027, USA; Italian Academy for Advanced Studies in America, Columbia University, New York, NY 10027, USA; Visual Inference Lab, Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Michael J. Morgan: Department of Optometry and Visual Sciences, City, University of London, Northampton Square, London EC1V 0HB, UK
- Jenny C. A. Read: Biosciences Institute, Newcastle University, Newcastle upon Tyne, Tyne & Wear NE2 4HH, UK
- Dhanraj Vishwanath: School of Psychology and Neuroscience, University of St Andrews, St Andrews, Fife KY16 9JP, UK
- Fulvio Domini: Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI 02912-9067, USA