1. Cai Y, Sun M, Lu Y, Wang Y, Zhang J, Goto S. A Comparative Study on the Effects of Viewing Real and Virtual Reality Classical Chinese Gardens. HERD: Health Environments Research & Design Journal 2025:19375867251330834. PMID: 40232286; DOI: 10.1177/19375867251330834.
Abstract
Purpose: This study evaluates the relaxation effects of viewing a Chinese classical garden and a Chinese-style public park in real and virtual reality (VR) environments, focusing on their psychological and physiological impacts. Background: The experiment examined two landscapes: the culturally rich and elaborately designed Humble Administrator's Garden and the Hefeng Pavilion. Methods: Twenty-eight participants completed four sessions, each a 5-minute viewing of the Humble Administrator's Garden or the Hefeng Pavilion under Condition A (real) or Condition B (VR). Each session was evaluated using the Profile of Mood States (POMS) questionnaire, a Semantic Differential Scale, a supplemental questionnaire, and eye-tracking technology. Results: In Condition A, POMS scores, Semantic Differential Scale ratings, supplemental questionnaire results, and eye movement patterns indicated that viewing the Humble Administrator's Garden produced greater relaxation than the Hefeng Pavilion. Similar findings in Condition B reinforced the Humble Administrator's Garden's relaxing effect. However, there was a noticeable disparity between Conditions A and B, with the real settings offering more pronounced benefits. Conclusions: The culturally rich Humble Administrator's Garden improved mood significantly more than the Hefeng Pavilion, whether viewed in real or VR environments. Although VR can offer an immersive experience, it may not fully capture the sensory richness and therapeutic benefits of actual garden environments; real settings delivered more substantial relaxation effects than VR settings.
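The within-subject contrast behind the POMS results can be sketched as a paired comparison. The sketch below uses invented Total Mood Disturbance (TMD) scores and SciPy's `ttest_rel`; it is not the study's data or analysis code.

```python
# Toy sketch (invented scores) of a within-subject POMS comparison:
# each participant's TMD after viewing each site, compared with a
# paired t-test (scipy's ttest_rel handles the pairing).
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(4)
n = 28                                     # participants, as in the study
# Invented TMD scores: lower (better mood) after the Humble Administrator's Garden
tmd_garden = rng.normal(10, 4, n)
tmd_pavilion = tmd_garden + rng.normal(3, 3, n)

t, p = ttest_rel(tmd_garden, tmd_pavilion)
print(f"t({n - 1}) = {t:.2f}, p = {p:.4f}")
```

With paired data, `ttest_rel` tests the mean of the per-participant differences, which is the appropriate model for repeated viewing sessions.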
Affiliation(s)
- Yanning Cai
- Department of Environmental Sciences, Nagasaki University, Nagasaki, Japan
- Minkai Sun
- Department of Architecture and Urban Planning, Suzhou University of Science and Technology, Suzhou, China
- Yudie Lu
- Department of Architecture and Urban Planning, Suzhou University of Science and Technology, Suzhou, China
- Yuqin Wang
- Department of Architecture and Urban Planning, Suzhou University of Science and Technology, Suzhou, China
- Jian Zhang
- Department of Architecture and Urban Planning, Suzhou University of Science and Technology, Suzhou, China
- Seiko Goto
- Department of Environmental Sciences, Nagasaki University, Nagasaki, Japan
2. Sun N, Jiang Y. Eye movements and user emotional experience: a study in interface design. Front Psychol 2025; 16:1455177. PMID: 40177033; PMCID: PMC11961869; DOI: 10.3389/fpsyg.2025.1455177.
Abstract
This study explores, through an eye-tracking experiment, the correlation between eye movement metrics and user emotional experience metrics while users operate an interface in a task-oriented manner. Fifty-four participants were recruited, divided into two groups, and asked to complete the same task using two different sets of interfaces, which had been shown before the experiment to differ in the emotional experience they elicit. The participants' eye movement data were recorded as they worked, and correlation analyses were performed using biserial correlation tests. The results show that different interface designs affect the three dimensions of user emotional experience (pleasure, arousal, and dominance; PAD) and also change eye movement patterns as users complete tasks. Interface designs that elicit higher pleasure lead to longer fixation durations. Designs that elicit higher arousal lead to more fixations and higher peak saccade velocity. Designs that elicit higher dominance lead to longer fixation durations, fewer fixations, and fewer saccades. The study identifies eye movement metrics related to user emotional experience in interface design that differ from those in other fields, providing a new perspective for the scientific validation of emotional experience in interface design. Designers can apply the findings to optimize specific interface designs and improve the user's emotional experience.
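A biserial correlation test of the kind described can be sketched as follows. The condition labels and fixation durations are invented, and SciPy's `pointbiserialr` (the point-biserial variant) stands in for whichever biserial test the authors used.

```python
# Hypothetical sketch: point-biserial correlation between a binary
# interface condition and a continuous eye-movement metric.
import numpy as np
from scipy.stats import pointbiserialr

rng = np.random.default_rng(0)

# 0 = interface set A, 1 = interface set B (27 participants per group)
group = np.repeat([0, 1], 27)

# Invented mean fixation durations (ms): set B slightly longer,
# mimicking the "higher pleasure -> longer fixations" finding.
fixation_ms = np.where(group == 0,
                       rng.normal(220, 30, size=54),
                       rng.normal(260, 30, size=54))

r, p = pointbiserialr(group, fixation_ms)
print(f"r_pb = {r:.2f}, p = {p:.4f}")
```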
Affiliation(s)
- Yufei Jiang
- Department of Industrial Design, School of Art, Jiangsu University, Zhenjiang, Jiangsu, China
3. Yu Y, Lu Q, Wu X, Wang Z, Zhang C, Wu X, Yan C. Detecting five-pattern personality traits using eye movement features for observing emotional faces. Front Psychol 2024; 15:1397340. PMID: 39380759; PMCID: PMC11459401; DOI: 10.3389/fpsyg.2024.1397340.
Abstract
The five-pattern personality traits rooted in the theory of traditional Chinese medicine (TCM) have promising prospects for clinical application. However, they are currently assessed using a self-report scale, which has certain limitations. Eye-tracking technology, with its non-intrusive, objective, and culturally neutral characteristics, has become a powerful tool for revealing individual cognitive and emotional processes, making it a promising approach to personality assessment. In this study, participants observed five emotional faces (angry, happy, calm, sad, and fearful) selected from the Chinese Facial Affective Picture System. Using machine learning algorithms, we evaluated the feasibility of automatically identifying the different five-pattern personality traits from participants' eye movement patterns. Across the five supervised learning algorithms analysed, Lasso feature selection combined with logistic regression achieved the highest prediction accuracy for most traits (TYa, SYa, SYi, TYi). This study develops a framework for predicting five-pattern personality traits from eye movement behavior, offering a novel approach to personality assessment in TCM.
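The best-performing recipe the authors report, Lasso feature selection followed by logistic regression, can be sketched with scikit-learn. The features and labels below are synthetic stand-ins, not the study's eye-movement data.

```python
# Minimal sketch (synthetic data) of Lasso-based feature selection
# feeding a logistic regression classifier, evaluated by cross-validation.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso, LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for eye-movement features (fixation counts, saccade stats, ...)
X, y = make_classification(n_samples=120, n_features=30, n_informative=5,
                           random_state=0)

clf = make_pipeline(
    StandardScaler(),
    SelectFromModel(Lasso(alpha=0.01)),   # keep features with nonzero weights
    LogisticRegression(max_iter=1000),
)
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"5-fold accuracy: {acc:.2f}")
```

Wrapping the selector in a pipeline ensures the Lasso is refit inside each cross-validation fold, avoiding selection leakage into the test folds.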
Affiliation(s)
- Ying Yu
- School of Life Sciences, Beijing University of Chinese Medicine, Beijing, China
- Qingya Lu
- School of Life Sciences, Beijing University of Chinese Medicine, Beijing, China
- Xinyue Wu
- School of Life Sciences, Beijing University of Chinese Medicine, Beijing, China
- Zefeng Wang
- College of Information Engineering, Huzhou University, Huzhou, China
- Chenggang Zhang
- School of Life Sciences, Beijing University of Chinese Medicine, Beijing, China
- Xuanmei Wu
- School of Life Sciences, Beijing University of Chinese Medicine, Beijing, China
- Cong Yan
- School of Life Sciences, Beijing University of Chinese Medicine, Beijing, China
4. Kasneci E, Kasneci G, Trautwein U, Appel T, Tibus M, Jaeggi SM, Gerjets P. Do your eye movements reveal your performance on an IQ test? A study linking eye movements and socio-demographic information to fluid intelligence. PLoS One 2022; 17:e0264316. PMID: 35349582; PMCID: PMC8963570; DOI: 10.1371/journal.pone.0264316.
Abstract
Understanding the factors contributing to individual differences in fluid intelligence is one of the central challenges of psychology. A vast body of research has evolved from the theoretical framework put forward by Cattell, who developed the Culture-Fair IQ Test (CFT 20-R) to assess fluid intelligence. In this work, we extend and complement the current state of research by analysing the differential and combined relationship between eye-movement patterns and socio-demographic information and the ability of a participant to correctly solve a CFT item. Our work shows that a participant's eye movements while solving a CFT item contain discriminative information and can be used to predict whether the participant will succeed in solving the test item. Moreover, the information related to eye movements complements the information provided by socio-demographic data when it comes to success prediction. In combination, both types of information yield a significantly higher predictive performance than each information type individually. To better understand the contributions of eye-movement and socio-demographic features to predicting a participant's success in solving a CFT item, we employ state-of-the-art explainability techniques and show that, along with socio-demographic variables, eye-movement data, especially the number of saccades and the mean pupil diameter, significantly increase the discriminating power. The eye-movement features are likely indicative of processing efficiency and invested mental effort. Beyond the specific contribution to research on how eye movements can serve as a means to uncover mechanisms underlying cognitive processes, the findings presented in this work pave the way for further in-depth investigations of factors predicting individual differences in fluid intelligence.
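The paper's central comparison, each feature set alone versus both combined, can be illustrated on invented data. The classifier choice, feature counts, and effect sizes below are assumptions for illustration, not the authors' setup.

```python
# Illustrative sketch (invented data): comparing predictive performance of
# socio-demographic features, eye-movement features, and their combination.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 300
success = rng.integers(0, 2, n)                            # solved the item?
socio = rng.normal(size=(n, 3)) + 0.3 * success[:, None]   # weaker signal
eye = rng.normal(size=(n, 4)) + 0.5 * success[:, None]     # stronger signal

def auc_score(X):
    """Mean cross-validated ROC-AUC for predicting item success from X."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    return cross_val_score(clf, X, success, cv=5, scoring="roc_auc").mean()

for name, X in [("socio", socio), ("eye", eye),
                ("combined", np.hstack([socio, eye]))]:
    print(name, round(auc_score(X), 2))
```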
Affiliation(s)
- Enkelejda Kasneci
- Human-Computer Interaction, Department of Computer Science, University of Tübingen, Tübingen, Germany
- Gjergji Kasneci
- Data Science and Analytics, Department of Computer Science, University of Tübingen, Tübingen, Germany
- Ulrich Trautwein
- Hector Research Institute of Education Sciences and Psychology, University of Tübingen, Tübingen, Germany
- Tobias Appel
- Hector Research Institute of Education Sciences and Psychology, University of Tübingen, Tübingen, Germany
- Maike Tibus
- Hector Research Institute of Education Sciences and Psychology, University of Tübingen, Tübingen, Germany
- Susanne M. Jaeggi
- School of Education, University of California, Irvine, CA, United States of America
- Peter Gerjets
- Leibniz-Institut für Wissensmedien, Tübingen, Germany
5. Holm SK, Häikiö T, Olli K, Kaakinen JK. Eye Movements during Dynamic Scene Viewing are Affected by Visual Attention Skills and Events of the Scene: Evidence from First-Person Shooter Gameplay Videos. J Eye Mov Res 2021; 14. PMID: 34745442; PMCID: PMC8566014; DOI: 10.16910/jemr.14.2.3.
Abstract
The role of individual differences during dynamic scene viewing was explored. Participants (N = 38) watched a gameplay video of a first-person shooter (FPS) videogame while their eye movements were recorded. In addition, the participants' skills in three visual attention tasks (attentional blink, visual search, and multiple object tracking) were assessed. The results showed that individual differences in the visual attention tasks were associated with the eye movement patterns observed during viewing of the gameplay video. The differences appeared in four eye movement measures: number of fixations, fixation durations, saccade amplitudes, and fixation distances from the center of the screen. The individual differences were evident during specific events of the video as well as across the video as a whole. The results highlight that an unedited, fast-paced, and cluttered dynamic scene can bring out individual differences in dynamic scene viewing.
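One of the four measures, fixation distance from the center of the screen, reduces to simple geometry on the fixation coordinates. The coordinates and screen size below are invented.

```python
# Toy sketch: fixation distance from the screen centre, computed from
# (x, y) fixation coordinates in pixels.
import numpy as np

screen_w, screen_h = 1920, 1080
center = np.array([screen_w / 2, screen_h / 2])

# Invented fixation coordinates for one viewer of the gameplay video
fixations = np.array([[960, 540], [1000, 500], [700, 800], [1200, 300]])

dists = np.linalg.norm(fixations - center, axis=1)
print(f"mean distance from centre: {dists.mean():.1f} px")  # ~190.9 px
```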
6. Tenenbaum EJ, Major S, Carpenter KLH, Howard J, Murias M, Dawson G. Distance from Typical Scan Path When Viewing Complex Stimuli in Children with Autism Spectrum Disorder and its Association with Behavior. J Autism Dev Disord 2021; 51:3492-3505. PMID: 33387244; PMCID: PMC9903808; DOI: 10.1007/s10803-020-04812-w.
Abstract
Eye-tracking is often used to study attention in children with autism spectrum disorder (ASD). Previous research has identified multiple atypical patterns of attention in children with ASD based on areas-of-interest analyses. Fewer studies have investigated gaze path, a measure that depends on the dynamic content of the stimulus presented. Here, rather than looking at proportions of looking time to areas of interest, we calculated mean fixation positions frame-by-frame in a group of typically developing children (36 to 72 months) and determined the distance from those typical fixations for 155 children with ASD (27-95 months). Findings revealed that distance from the typical scan path among the children with ASD was associated with lower communication abilities and greater ASD symptomatology.
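The frame-by-frame distance measure can be sketched as follows. Group sizes roughly match the paper, but the coordinates are simulated and the averaging details are assumptions.

```python
# Hedged sketch of a distance-from-typical-scan-path score: average the
# typically developing (TD) group's fixation positions per video frame,
# then score a child by the mean Euclidean distance of their fixations
# from that typical point across frames. Coordinates are invented,
# in normalised screen units.
import numpy as np

rng = np.random.default_rng(2)
n_frames, n_td = 100, 36

td_fix = rng.normal(loc=[0.5, 0.5], scale=0.1, size=(n_td, n_frames, 2))
typical = td_fix.mean(axis=0)          # (n_frames, 2) typical scan path

child_fix = rng.normal(loc=[0.5, 0.5], scale=0.2, size=(n_frames, 2))
distance = np.linalg.norm(child_fix - typical, axis=1).mean()
print(f"mean distance from typical scan path: {distance:.3f}")
```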
Affiliation(s)
- Elena J Tenenbaum
- Duke Center for Autism and Brain Development, Duke University, Durham, NC, 27705, USA.
- Department of Psychiatry and Behavioral Sciences, Duke University Medical Center, Durham, NC, 27705, USA.
- Duke Institute for Brain Sciences, Duke University, Durham, NC, 27708, USA.
- Samantha Major
- Duke Center for Autism and Brain Development, Duke University, Durham, NC, 27705, USA
- Kimberly L H Carpenter
- Duke Center for Autism and Brain Development, Duke University, Durham, NC, 27705, USA
- Department of Psychiatry and Behavioral Sciences, Duke University Medical Center, Durham, NC, 27705, USA
- Duke Institute for Brain Sciences, Duke University, Durham, NC, 27708, USA
- Jill Howard
- Duke Center for Autism and Brain Development, Duke University, Durham, NC, 27705, USA
- Michael Murias
- Duke Center for Autism and Brain Development, Duke University, Durham, NC, 27705, USA
- Medical Social Sciences, Northwestern University, Chicago, IL, 60622, USA
- Geraldine Dawson
- Duke Center for Autism and Brain Development, Duke University, Durham, NC, 27705, USA
- Department of Psychiatry and Behavioral Sciences, Duke University Medical Center, Durham, NC, 27705, USA
- Duke Institute for Brain Sciences, Duke University, Durham, NC, 27708, USA
7. TüEyeQ, a rich IQ test performance data set with eye movement, educational and socio-demographic information. Sci Data 2021; 8:154. PMID: 34135342; PMCID: PMC8208979; DOI: 10.1038/s41597-021-00938-3.
Abstract
We present the TüEyeQ data set, to the best of our knowledge the most comprehensive data set generated on a culture-fair intelligence test (CFT 20-R), i.e., an IQ test consisting of 56 single tasks, taken by 315 individuals aged between 18 and 30 years. In addition to socio-demographic and educational information, the data set also includes the eye movements of the individuals while taking the IQ test. Along with distributional information, we highlight the potential for predictive analysis on the TüEyeQ data set and report the most important covariates for predicting the performance of a participant on a given task, along with their influence on the prediction.

Measurement(s): intelligence • eye movement • Socioeconomic Factors
Technology Type(s): culture fair intelligence test (CFT-R) • eye tracking device • Questionnaire
Sample Characteristic - Organism: Homo sapiens
Machine-accessible metadata file describing the reported data: 10.6084/m9.figshare.14173244
8. Mengoudi K, Ravi D, Yong KXX, Primativo S, Pavisic IM, Brotherhood E, Lu K, Schott JM, Crutch SJ, Alexander DC. Augmenting Dementia Cognitive Assessment With Instruction-Less Eye-Tracking Tests. IEEE J Biomed Health Inform 2020; 24:3066-3075. PMID: 32749977; DOI: 10.1109/jbhi.2020.3004686.
Abstract
Eye-tracking technology is an innovative tool that holds promise for enhancing dementia screening. In this work, we introduce a novel way of extracting salient features directly from the raw eye-tracking data of a mixed sample of dementia patients during a novel instruction-less cognitive test. Our approach is based on self-supervised representation learning: by first training a deep neural network to solve a pretext task using well-defined available labels (e.g. recognising distinct cognitive activities in healthy individuals), the network encodes high-level semantic information that is useful for solving other problems of interest (e.g. dementia classification). Inspired by previous work in explainable AI, we use the Layer-wise Relevance Propagation (LRP) technique to describe our network's decisions in differentiating between the distinct cognitive activities. The extent to which eye-tracking features of dementia patients deviate from healthy behaviour is then explored, followed by a comparison between self-supervised and handcrafted representations on discriminating between participants with and without dementia. Our findings not only reveal novel self-supervised learning features that are more sensitive than handcrafted features in detecting performance differences between participants with and without dementia across a variety of tasks, but also validate that instruction-less eye-tracking tests can detect oculomotor biomarkers of dementia-related cognitive dysfunction. This work highlights the contribution of self-supervised representation learning in biomedical applications where the small number of patients, the non-homogeneous presentations of the disease, and the complexity of the setting pose a challenge for state-of-the-art feature extraction methods.
9. Eisma YB, de Winter J. How Do People Perform an Inspection Time Task? An Examination of Visual Illusions, Task Experience, and Blinking. J Cogn 2020; 3:34. PMID: 33043244; PMCID: PMC7528665; DOI: 10.5334/joc.123.
Abstract
In the inspection time (IT) paradigm, participants view two lines of unequal length (called the Pi-figure) for a short exposure time, and then judge which of the two lines was longer. Early research has interpreted IT as a simple index of mental speed, which does not involve motor activity. However, more recent studies have associated IT with higher-level cognitive mechanisms, including focused attention, task experience, and the strategic use of visual illusions. The extent to which these factors affect IT is still a source of debate. We used an eye-tracker to capture participants' (N = 147) visual attention while performing IT trials. Results showed that blinking was time-dependent, with participants blinking less when the Pi-figure was visible as compared to before and after. Blinking during the presentation of the Pi-figure correlated negatively with response accuracy. Also, participants who reported seeing a brightness illusion had a higher response accuracy than those who did not. The first experiment was repeated with new participants (N = 159), enhanced task instructions, and the inclusion of practice trials. Results showed substantially improved response accuracy compared to the first experiment, and no significant difference in response accuracy between those who did and did not report illusions. IT response accuracy correlated modestly (r = 0.18) with performance on a short Raven's advanced progressive matrices task. In conclusion, performance at the IT task is affected by task familiarity and involves motor activity in the form of blinking. Visual illusions may be an epiphenomenon of understanding the IT task.
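Blink counting from eye-tracker output is often done by counting runs of missing pupil samples; this is a common convention assumed here for illustration, not a detail taken from the paper.

```python
# Illustrative sketch: blinks frequently appear in eye-tracker output as
# runs of missing (NaN) pupil samples; counting run onsets within the
# stimulus window gives a per-trial blink count. The stream is invented.
import numpy as np

pupil = np.array([3.1, 3.2, np.nan, np.nan, 3.0,
                  3.1, np.nan, np.nan, np.nan, 3.2])  # pupil diameter (mm)

missing = np.isnan(pupil).astype(int)
# A blink starts wherever a missing run begins (0 -> 1 transition)
blink_onsets = np.diff(np.concatenate([[0], missing])) == 1
print("blink count:", int(blink_onsets.sum()))  # two missing runs -> 2
```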
10. Wilson P, Papageorgiou KA, Cooper C. Speed of saccadic responses and intelligence: An exponential-Gaussian analysis. Personality and Individual Differences 2020. DOI: 10.1016/j.paid.2020.109860.
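An exponential-Gaussian (ex-Gaussian) decomposition of saccadic latencies, the analysis named in the title, can be sketched with SciPy's `exponnorm`, which is the same distribution under the parameterisation K = tau / sigma. The latencies below are simulated, not the study's data.

```python
# Hedged sketch: fit an ex-Gaussian to simulated saccadic latencies and
# recover mu (Gaussian mean), sigma (Gaussian SD), tau (exponential mean).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Simulated latencies (ms): Gaussian (mu=200, sigma=20) + exponential tail (tau=60)
latencies = rng.normal(200, 20, 500) + rng.exponential(60, 500)

K, loc, scale = stats.exponnorm.fit(latencies)
mu, sigma, tau = loc, scale, K * scale
print(f"mu={mu:.0f} ms, sigma={sigma:.0f} ms, tau={tau:.0f} ms")
```

The ex-Gaussian mean is mu + tau, so the fit can be sanity-checked against the sample mean of the latencies.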
11.
Abstract
To address the problem of objectively obtaining the threshold of a user's cognitive load in a virtual reality interactive system, a method for quantifying user cognitive load based on an eye movement experiment is proposed. Eye movement data were collected during the virtual reality interaction process using an eye tracker. Taking the number of fixation points, the average fixation duration, the average saccade length, and the number of fixation points at the first mouse click as independent variables, and the number of backward-looking events and the user cognitive load value as dependent variables, a cognitive load evaluation model was established based on a probabilistic neural network. The model was validated using eye movement data and subjective cognitive load data. The results show that the absolute error and relative mean square error ranged from 6.52% to 16.01% and from 6.64% to 23.21%, respectively, indicating that the model is feasible.
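A probabilistic neural network (PNN) is essentially a Parzen-window (Gaussian-kernel) classifier. The minimal sketch below, with invented eye-movement features and load labels, shows the idea rather than the study's trained model.

```python
# Minimal PNN sketch: classify each test row by the class whose training
# points give the largest summed Gaussian kernel response.
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    """Parzen-window class scores with an isotropic Gaussian kernel."""
    classes = np.unique(y_train)
    scores = []
    for c in classes:
        Xc = X_train[y_train == c]
        # squared distances, shape (n_test, n_c)
        d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
        scores.append(np.exp(-d2 / (2 * sigma ** 2)).mean(axis=1))
    return classes[np.argmax(scores, axis=0)]

# Invented features: [n_fixations, mean fixation duration (s), saccade length]
X_train = np.array([[10, 0.20, 1.0], [12, 0.25, 1.1],   # low load
                    [30, 0.60, 3.0], [28, 0.55, 2.8]])  # high load
y_train = np.array([0, 0, 1, 1])
X_test = np.array([[11, 0.22, 1.05], [29, 0.58, 2.9]])
print(pnn_predict(X_train, y_train, X_test))  # -> [0 1]
```

The kernel width `sigma` plays the role of the PNN's smoothing parameter; in practice it would be tuned on held-out data.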