1
Davidov A, Razumnikova O, Bakaev M. Nature in the Heart and Mind of the Beholder: Psycho-Emotional and EEG Differences in Perception of Virtual Nature Due to Gender. Vision (Basel) 2023; 7:30. [PMID: 37092463] [PMCID: PMC10123600] [DOI: 10.3390/vision7020030]
Abstract
Natural environment experiences in virtual reality (VR) can be a feasible option for people unable to connect with real nature. Existing research mostly focuses on the health and emotional advantages of "virtual nature" therapy, but studies of its neuropsychological effects related to visual perception are rare. In our experiment, 20 subjects watched nature-related video content in VR headsets (3D condition) and on a computer screen (2D condition). In addition to the gender factor, we considered the individual Environmental Identity Index (EID) and collected self-assessments of emotional state on the Valence, Arousal, and Dominance components in each experimental condition. Besides the psychometric data, we also registered brainwave activity (EEG) and analyzed it across seven frequency bands. For EID, which was considerably higher in women, we found a significant positive correlation with Valence (i.e., a beneficial effect of the natural stimuli on psycho-emotional status). At the same time, the analysis of the EEG data suggests a considerable impact of the VR immersion itself, with a stronger relaxation-related alpha effect in the 3D vs. the 2D condition in men. The most pronounced and novel effect of the gender factor was found in the relation between EID and EEG power in the high-frequency bands: these variables correlated positively in women (0.64 < Rs < 0.74) but negatively in men (-0.72 < Rs < -0.66). Our results imply individually different and gender-dependent effects of natural stimuli in VR. Correspondingly, video and VR content development should take this into account and aim to provide an experience tailored to user characteristics.
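The headline statistic in this entry is a per-gender Spearman rank correlation (Rs) between the EID trait score and EEG band power. Below is a minimal sketch of that kind of analysis in Python with SciPy; the subgroup size, score ranges, and coupling strengths are invented placeholders, not the study's data.

```python
# Per-gender Spearman correlation between a trait score (EID) and EEG
# band power -- a sketch with hypothetical data, not the study's values.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 10  # illustrative subgroup size, not the paper's

# Fake EID scores and high-frequency band powers with opposite coupling
# per gender, mimicking the sign pattern reported in the abstract.
eid_women = rng.uniform(3, 5, n)
power_women = eid_women + rng.normal(0, 0.3, n)   # positive coupling
eid_men = rng.uniform(3, 5, n)
power_men = -eid_men + rng.normal(0, 0.3, n)      # negative coupling

for label, eid, power in [("women", eid_women, power_women),
                          ("men", eid_men, power_men)]:
    rs, p = spearmanr(eid, power)
    print(f"{label}: Rs = {rs:+.2f}, p = {p:.3f}")
```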
Affiliation(s)
- Maxim Bakaev
- Department of Data Collection and Processing Systems, Novosibirsk State Technical University, 630073 Novosibirsk, Russia (O.R.)
2
Linton P. V1 as an egocentric cognitive map. Neurosci Conscious 2021; 2021:niab017. [PMID: 34532068] [PMCID: PMC8439394] [DOI: 10.1093/nc/niab017]
Abstract
We typically distinguish between V1 as an egocentric perceptual map and the hippocampus as an allocentric cognitive map. In this article, we argue that V1 also functions as a post-perceptual egocentric cognitive map. We argue that three well-documented functions of V1, namely (i) the estimation of distance, (ii) the estimation of size, and (iii) multisensory integration, are better understood as post-perceptual cognitive inferences. This argument has two important implications. First, we argue that V1 must function as the neural correlate of the visual perception/cognition distinction and suggest how this can be accommodated by V1's laminar structure. Second, we use this insight to propose a low-level account of visual consciousness, in contrast to mid-level accounts (recurrent processing theory; integrated information theory) and higher-level accounts (higher-order thought; global workspace theory). Detection thresholds have traditionally been used to rule out such an approach, but we explain why it is a mistake to equate visibility (and therefore the presence/absence of visual experience) with detection thresholds.
Affiliation(s)
- Paul Linton
- Centre for Applied Vision Research, City, University of London, Northampton Square, London EC1V 0HB, UK
3
Second-order cues to figure motion enable object detection during prey capture by praying mantises. Proc Natl Acad Sci U S A 2019; 116:27018-27027. [PMID: 31818943] [DOI: 10.1073/pnas.1912310116]
Abstract
Detecting motion is essential for animals to perform a wide variety of functions. To do so, animals could exploit both first-order motion cues, such as luminance correlation over time, and second-order cues, computed by correlating higher-order visual statistics. Since first-order motion cues are typically sufficient for motion detection, it is unclear why sensitivity to second-order motion has evolved in animals, including insects. Here, we investigate the role of second-order motion in prey capture by praying mantises. We show that prey detection uses second-order motion cues to detect figure motion. We further present a model of prey detection based on second-order motion sensitivity, resulting from a layer of position detectors feeding into a second layer of elementary-motion detectors. Mantis stereopsis, in contrast, does not require figure motion and is explained by a simpler model that uses only the first layer in both eyes. Second-order motion cues thus enable prey motion to be detected even when the prey perfectly matches the average background luminance, independent of the elementary motion of any of its parts. Subsequent to prey detection, processes such as stereopsis could work to determine the distance to the prey. We thus demonstrate how second-order motion mechanisms enable ecologically relevant behavior, such as detecting camouflaged targets, and supply input to other visual functions, including stereopsis and target tracking.
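The model sketched in this abstract is a two-layer circuit: position detectors whose outputs respond to a contrast-defined figure, feeding elementary motion detectors. A toy version under one plausible reading is below; the full-wave rectification, Reichardt-style delay-and-correlate stage, and stimulus are illustrative assumptions, not the authors' implementation.

```python
# Two-layer sketch: rectified-change "position" layer feeding a
# Reichardt-style elementary-motion-detector layer. Illustrative only.
import numpy as np

def position_layer(stim):
    # stim: (time, space) luminance. Full-wave-rectified frame-to-frame
    # change responds to a flickering figure even at matched mean luminance.
    return np.abs(np.diff(stim, axis=0))

def emd_layer(pos, delay=1):
    # Correlate each unit's delayed output with its right neighbor's
    # current output, and vice versa; positive sum = rightward motion.
    right = pos[:-delay, :-1] * pos[delay:, 1:]
    left = pos[:-delay, 1:] * pos[delay:, :-1]
    return (right - left).sum()

# Hypothetical stimulus: a contrast-defined (flickering) bar drifting
# rightward over a background with the same mean luminance.
T, X = 60, 40
rng = np.random.default_rng(1)
stim = 0.5 * np.ones((T, X))
for t in range(T):
    x = 5 + t // 3                                      # bar drifts right
    stim[t, x:x + 4] = 0.5 + 0.4 * rng.choice([-1, 1])  # flicker, mean 0.5

print("net motion signal:", emd_layer(position_layer(stim)))  # > 0 expected
```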
4
Ihlefeld A, Alamatsaz N, Shapley RM. Population rate-coding predicts correctly that human sound localization depends on sound intensity. eLife 2019; 8:e47027. [PMID: 31633481] [PMCID: PMC6802950] [DOI: 10.7554/elife.47027]
Abstract
Human sound localization is an important computation performed by the brain. Models of sound localization commonly assume that sound lateralization from interaural time differences is level invariant. Here we observe that two prevalent theories of sound localization make opposing predictions. The labelled-line model encodes location through tuned representations of spatial location and predicts that perceived direction is level invariant. In contrast, the hemispheric-difference model encodes location through spike-rate and predicts that perceived direction becomes medially biased at low sound levels. Here, behavioral experiments find that softer sounds are perceived closer to midline than louder sounds, favoring rate-coding models of human sound localization. Analogously, visual depth perception, which is based on interocular disparity, depends on the contrast of the target. The similar results in hearing and vision suggest that the brain may use a canonical computation of location: encoding perceived location through population spike rate relative to baseline.

Being able to localize sounds helps us make sense of the world around us. The brain works out sound direction by comparing the times of when sound reaches the left versus the right ear. This cue is known as interaural time difference, or ITD for short. But how exactly the brain decodes this information is still unknown. The brain contains nerve cells that each show maximum activity in response to one particular ITD. One idea is that these nerve cells are arranged in the brain like a map from left to right, and that the brain then uses this map to estimate sound direction. This is known as the Jeffress model, after the scientist who first proposed it. There is some evidence that birds and alligators actually use a system like this to localize sounds, but no such map of nerve cells has yet been identified in mammals. An alternative possibility is that the brain compares activity across groups of ITD-sensitive nerve cells. One of the oldest and simplest ways to measure this is to compare nerve activity in the left and right hemispheres of the brain. This readout is known as the hemispheric difference model.

By analyzing data from published studies, Ihlefeld, Alamatsaz, and Shapley discovered that these two models make opposing predictions about the effects of volume. The Jeffress model predicts that the volume of a sound will not affect a person's ability to localize it. By contrast, the hemispheric difference model predicts that very soft sounds will lead to systematic errors, so that for the same ITD, softer sounds are perceived closer towards the front than louder sounds.

To investigate this further, Ihlefeld, Alamatsaz, and Shapley asked healthy volunteers to localize sounds of different volumes. The volunteers tended to mis-localize quieter sounds, believing them to be closer to the body's midline than they actually were, which is inconsistent with the predictions of the Jeffress model.

These new findings also reveal key parallels to processing in the visual system. Visual areas of the brain estimate how far away an object is by comparing the input that reaches the two eyes. But these estimates are also systematically less accurate for low-contrast stimuli than for high-contrast ones, just as sound localization is less accurate for softer sounds than for louder ones. The idea that the brain uses the same basic strategy to localize both sights and sounds generates a number of predictions for future studies to test.
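The digest's hemispheric-difference model reduces to a simple readout: each hemisphere's population rate grows with contralateral-leading ITD and scales with sound level, and perceived laterality is the difference between the two rates. The toy sketch below shows why that readout predicts a medial bias for soft sounds; all tuning shapes and parameters are invented for illustration, not fitted to the paper's data.

```python
# Toy hemispheric-difference readout: for a fixed ITD, lowering the
# level-dependent gain shrinks the rate difference, i.e., the percept
# moves toward the midline. Parameters are illustrative, not fitted.
import numpy as np

def rate(itd_us, level_gain, sign):
    # Sigmoidal rate vs. ITD; sign selects which hemifield the pool prefers.
    baseline = 5.0
    drive = 1.0 / (1.0 + np.exp(-sign * itd_us / 200.0))
    return baseline + level_gain * drive

def perceived_laterality(itd_us, level_gain):
    r_a = rate(itd_us, level_gain, +1)  # pool preferring one hemifield
    r_b = rate(itd_us, level_gain, -1)  # pool preferring the other
    return r_a - r_b                    # 0 = midline (baselines cancel)

itd = 300.0  # same physical ITD in microseconds for both conditions
for gain, name in [(40.0, "loud"), (8.0, "soft")]:
    print(f"ITD {itd:.0f} us, {name}: laterality index "
          f"{perceived_laterality(itd, gain):+.2f}")
```

For the same 300 µs ITD, the soft condition yields a much smaller laterality index, which is exactly the medial bias the behavioral data favored over the level-invariant labelled-line prediction.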
Affiliation(s)
- Antje Ihlefeld
- New Jersey Institute of Technology, Newark, United States
- Nima Alamatsaz
- New Jersey Institute of Technology, Newark, United States
- Rutgers University, Newark, United States
5
Wahl S, Dragneva D, Rifai K. Digitalization versus immersion: performance and subjective evaluation of 3D perception with emulated accommodation and parallax in digital microsurgery. J Biomed Opt 2019; 24:1-7. [PMID: 31617336] [PMCID: PMC7000911] [DOI: 10.1117/1.jbo.24.10.106501]
Abstract
Microsurgery is a visually challenging task in which many depth cues are already altered; digitalization of surgical systems disrupts two further depth cues, namely focus and parallax. Whereas in purely optical surgical systems accommodation and eye movements induce the expected focus and parallax changes, these cues become statically fixed through digitalization. Our study evaluates the impact of static focus and parallax on performance and subjective 3D perception. Subjects reported decreased depth realism under static parallax and focus. Thus, surgeons' depth perception is further affected by the digitalization of microsurgery, increasing the potential for artificial stereo-induced fatigue.
Affiliation(s)
- Siegfried Wahl
- University Eye Clinics, Institute for Ophthalmic Research, Tuebingen, Germany
- Carl Zeiss Vision International GmbH, Aalen, Germany
- Denitsa Dragneva
- University Eye Clinics, Institute for Ophthalmic Research, Tuebingen, Germany
- Katharina Rifai
- University Eye Clinics, Institute for Ophthalmic Research, Tuebingen, Germany
- Carl Zeiss Vision International GmbH, Aalen, Germany
6
Lin D, Liu Z, Cao Q, Wu X, Liu J, Chen J, Lin Z, Li X, Zhang L, Long E, Zhang X, Wang J, Wu D, Zhao X, Yu T, Li J, Zhou X, Wang L, Lin H, Chen W, Liu Y. Clinical characteristics of young adult cataract patients: a 10-year retrospective study of the Zhongshan Ophthalmic Center. BMJ Open 2018; 8:e020234. [PMID: 30037862] [PMCID: PMC6059294] [DOI: 10.1136/bmjopen-2017-020234]
Abstract
AIM: To investigate the characteristics of young adult cataract (YAC) patients over a 10-year period.
METHODS: This observational study included YAC patients aged 18-49 years who underwent surgery for the first time at the Zhongshan Ophthalmic Center in China between January 2005 and December 2014, and compared them with patients with childhood cataract (CC) from the same period.
RESULTS: Over the 10-year period, 515 YAC patients and 2421 inpatients with CC were enrolled. Among the YAC patients, 76.76% (109/142) of unilateral cases had a corrected distance visual acuity (CDVA) better than 20/40 in the healthy eye, whereas only 20.38% (76/373) of bilateral cases had a CDVA better than 20/40 in the eye with better visual acuity. Compared with the CC group, the YAC group had a higher proportion of rural patients (40.40% vs 31.60%, p=0.001). Furthermore, the prevalence of other ocular abnormalities was higher in YAC patients than in patients with CC (29.71% vs 17.47%, p<0.001).
CONCLUSIONS: A large proportion of patients from rural areas and a high prevalence of complicated ocular abnormalities may be the most salient characteristics of YAC patients. Strengthened counselling, cataract screening, and health education for young adults are required, especially in rural areas.
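The between-group comparisons reported here (e.g., rural patients: 40.40% vs 31.60%, p=0.001) are standard two-proportion tests. The sketch below shows how such a p-value can be approximated with a chi-square test on a 2x2 table; the counts are back-calculated from the reported percentages and group sizes, so the result will only roughly match the published figure.

```python
# Chi-square test on a 2x2 table of rural vs non-rural patients in the
# YAC and CC groups. Counts are back-calculated from the abstract's
# percentages (40.40% of 515; 31.60% of 2421) and rounded.
import numpy as np
from scipy.stats import chi2_contingency

yac_rural, yac_total = round(0.4040 * 515), 515    # ~208 of 515
cc_rural, cc_total = round(0.3160 * 2421), 2421    # ~765 of 2421

table = np.array([[yac_rural, yac_total - yac_rural],
                  [cc_rural, cc_total - cc_rural]])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
```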
Affiliation(s)
- Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Zhenzhen Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Qianzhong Cao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Jinchao Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Jingjing Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Zhuoling Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Xiaoyan Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Li Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Erping Long
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Xiayin Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Jinghui Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Dongxuan Wu
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
- Xutu Zhao
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
- Tongyong Yu
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
- Jing Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Xiaojia Zhou
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Lisha Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Weirong Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Yizhi Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China