1
Gundler C, Temmen M, Gulberti A, Pötter-Nerger M, Ückert F. Improving Eye-Tracking Data Quality: A Framework for Reproducible Evaluation of Detection Algorithms. Sensors (Basel) 2024; 24:2688. PMID: 38732794. PMCID: PMC11085612. DOI: 10.3390/s24092688.
Abstract
High-quality eye-tracking data are crucial in behavioral sciences and medicine. Even with a solid understanding of the literature, selecting the most suitable algorithm for a specific research project poses a challenge. Empowering applied researchers to choose the best-fitting detector for their research needs is the primary contribution of this paper. We developed a framework to systematically assess and compare the effectiveness of 13 state-of-the-art algorithms through a unified application interface. Hence, we more than double the number of algorithms that are currently usable within a single software package and allow researchers to identify the best-suited algorithm for a given scientific setup. Our framework validation on retrospective data underscores its suitability for algorithm selection. Through a detailed and reproducible step-by-step workflow, we hope to contribute towards significantly improved data quality in scientific experiments.
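The unified application interface itself is not described in the abstract. Purely as an illustration of the idea, a common Python interface that makes event detectors interchangeable within one evaluation loop might look like the sketch below; every name is an assumption, and the dispersion rule is a toy stand-in, not one of the paper's 13 algorithms.

```python
from abc import ABC, abstractmethod

class EventDetector(ABC):
    """Hypothetical common interface: every detector labels each gaze sample."""
    @abstractmethod
    def detect(self, x, y):
        """Return one label per sample, e.g. 'fix' or 'sac'."""

class DispersionDetector(EventDetector):
    """Toy I-DT-style detector: low spatial dispersion over a short window
    is labelled as fixation, everything else as saccade."""
    def __init__(self, window=3, threshold=1.0):
        self.window, self.threshold = window, threshold

    def detect(self, x, y):
        labels = []
        for i in range(len(x)):
            lo = max(0, i - self.window + 1)
            disp = (max(x[lo:i + 1]) - min(x[lo:i + 1])
                    + max(y[lo:i + 1]) - min(y[lo:i + 1]))
            labels.append("fix" if disp <= self.threshold else "sac")
        return labels

def agreement(detector, x, y, reference):
    """Share of samples on which a detector matches a reference labelling;
    the kind of score a comparison framework could rank detectors by."""
    pred = detector.detect(x, y)
    return sum(p == r for p, r in zip(pred, reference)) / len(reference)
```

With such an interface, identifying the best-suited algorithm for a given setup reduces to running every registered detector over the same annotated recordings and ranking the scores.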
Affiliation(s)
- Christopher Gundler: Institute for Applied Medical Informatics, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
- Alessandro Gulberti: Department of Neurology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
- Monika Pötter-Nerger: Department of Neurology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
- Frank Ückert: Institute for Applied Medical Informatics, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
2
Bonteanu G, Bonteanu P, Cracan A, Bozomitu RG. Implementation of a High-Accuracy Neural Network-Based Pupil Detection System for Real-Time and Real-World Applications. Sensors (Basel) 2024; 24:2548. PMID: 38676165. PMCID: PMC11054914. DOI: 10.3390/s24082548.
Abstract
In this paper, we present the implementation of a new pupil detection system based on artificial intelligence techniques and suitable for real-time, real-world applications. The proposed AI-based pupil detection system uses a classifier implemented with slim neural networks, whose classes are defined according to the possible positions of the pupil within the eye image. To reduce the complexity of the neural network, a new parallel architecture is used in which two independent classifiers deliver the pupil-center coordinates. The training, testing, and validation of the proposed system were performed using almost 40,000 eye images with a resolution of 320 × 240 pixels, drawn from 20 different databases for a high degree of generality. The experimental results show a detection rate of 96.29% at five pixels, with a standard deviation of 3.38 pixels across all eye images from all databases, and a processing speed of 100 frames/s. These results indicate both high accuracy and high processing speed, and they allow the proposed solution to be used in real-time applications under variable and non-uniform lighting conditions, in fields such as assistive technology for communicating with neuromotor-disabled patients via eye typing, computer gaming, and driver cognitive-state monitoring for traffic safety in the automotive industry.
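The parallel architecture above, with two independent classifiers whose classes are candidate pupil positions, can be sketched in miniature as follows. Each "classifier" here is a trivial darkest-row/darkest-column rule standing in for the trained slim networks, and all names are assumptions, not the authors' code.

```python
def column_class(image):
    """Stand-in for the x classifier: classes are column indices; we pick
    the darkest column, since the pupil is the darkest region of an eye image."""
    n_cols = len(image[0])
    sums = [sum(row[c] for row in image) for c in range(n_cols)]
    return sums.index(min(sums))

def row_class(image):
    """Stand-in for the y classifier: classes are row indices."""
    sums = [sum(row) for row in image]
    return sums.index(min(sums))

def pupil_center(image):
    """The two classifiers are independent, so a real implementation can run
    them in parallel; their class labels combine into the (x, y) estimate."""
    return column_class(image), row_class(image)
```

The design point is that two small n-class networks (one per coordinate) are far cheaper than one network over all n × n joint positions.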
Affiliation(s)
- Gabriel Bonteanu: Fundamentals of Electronics Department, “Gheorghe Asachi” Technical University of Iasi, 700050 Iasi, Romania
- Petronela Bonteanu: Telecommunications and IT Department, “Gheorghe Asachi” Technical University of Iasi, 700050 Iasi, Romania
- Arcadie Cracan: Fundamentals of Electronics Department, “Gheorghe Asachi” Technical University of Iasi, 700050 Iasi, Romania
- Radu Gabriel Bozomitu: Telecommunications and IT Department, “Gheorghe Asachi” Technical University of Iasi, 700050 Iasi, Romania
3
Zuo Y, Qi J, Fan Z, Wang Z, Xu H, Wang S, Zhang N, Hu J. The influence of target layout and target graphic type on searching performance based on eye-tracking technology. Front Psychol 2023; 14:1052488. PMID: 36844297. PMCID: PMC9947834. DOI: 10.3389/fpsyg.2023.1052488.
Abstract
With the development of various intelligent technologies, interactive interfaces are becoming ever more widespread, and research on them is increasing accordingly. The purpose of this study was to use eye-tracking technology to explore the influence of icon layout location, icon graphic type, and icon layout method on users' searching performance in interactive interfaces. Participants were asked to perform a search task for a given target (a facet icon or a linear icon) on each image; each trial thus consisted of one search task on one image, and each participant completed 36 trials in total. Searching time, fixation duration, and fixation count were collected to evaluate searching performance. Results showed that for familiar icons, whether the graphic type was facet or linear did not affect the user's experience, but when other factors of the interface changed, facet icons provided the more stable experience. Compared with the rectangular layout, the circular layout provided a more stable experience when the location of icons in the interface changed; however, icons in the top half of the interface were easier to find than those in the bottom half, regardless of layout. These results can inform the layout and icon design of interactive interfaces and facilitate their optimization.
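The three collected measures can be derived from a per-trial fixation log. A minimal sketch, assuming each fixation is recorded as a (start_ms, end_ms, area_of_interest) tuple and that searching time is taken as the onset of the first fixation on the target; these conventions are assumptions, not the authors' pipeline:

```python
def search_metrics(fixations, target):
    """Compute searching time, fixation count, and mean fixation duration
    for one trial from (start_ms, end_ms, aoi) tuples."""
    durations = [end - start for start, end, _ in fixations]
    on_target = [start for start, _, aoi in fixations if aoi == target]
    return {
        "search_time_ms": on_target[0] if on_target else None,  # None: target never fixated
        "fixation_count": len(fixations),
        "mean_fixation_ms": sum(durations) / len(durations),
    }
```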
Affiliation(s)
- Yaxue Zuo: School of Design, Shanghai Jiao Tong University, Shanghai, China
- Jin Qi: School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Zhijun Fan: School of Mechanical Engineering, Shandong University, Jinan, China
- Zhenya Wang: School of Mechanical Engineering, Shandong University, Jinan, China
- Huiyun Xu: School of Mechanical Engineering, Shandong University, Jinan, China
- Shurui Wang: School of Mechanical Engineering, Shandong University, Jinan, China
- Nieqiang Zhang: School of Mechanical Engineering, Shandong University, Jinan, China
- Jie Hu (corresponding author): School of Design, Shanghai Jiao Tong University, Shanghai, China
4
Li H, Yang Z. Vertical Nystagmus Recognition Based on Deep Learning. Sensors (Basel) 2023; 23:1592. PMID: 36772631. PMCID: PMC9920786. DOI: 10.3390/s23031592.
Abstract
Vertical nystagmus is a common neuro-ophthalmic sign in vestibular medicine. It reflects not only the functional state of the vertical semicircular canals but also the effect of the otoliths, and medical experts can take nystagmus symptoms as a key factor in determining the cause of dizziness. Traditional assessment, i.e., visual observation by medical experts, may be subjectively biased and requires sufficient experience to yield an accurate diagnosis. With advances in science and technology, nystagmus detection can be realized using artificial intelligence. In this paper, a vertical nystagmus recognition method based on deep learning is proposed, composed mainly of a dilated convolution layer module, a depthwise separable convolution module, a convolutional attention module, and a BiLSTM-GRU module. The average recognition accuracy of the proposed method is 91%; on the same training dataset and test set, its recognition accuracy for vertical nystagmus was 2% higher than that of the other methods compared.
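Of the listed modules, the dilated convolution is the easiest to make concrete. The sketch below shows the underlying operation in one dimension in plain Python; the paper presumably uses 2-D layers in a deep-learning framework, so this is only the mechanism, not the authors' implementation.

```python
def dilated_conv1d(signal, kernel, dilation=1):
    """Valid-padding 1-D dilated convolution: kernel taps are spaced
    `dilation` samples apart, widening the receptive field without
    adding parameters."""
    span = (len(kernel) - 1) * dilation + 1  # receptive field in samples
    return [
        sum(kernel[k] * signal[i + k * dilation] for k in range(len(kernel)))
        for i in range(len(signal) - span + 1)
    ]
```

With dilation = 1 this is an ordinary convolution; stacking layers with growing dilation lets a short kernel cover the long eye-movement traces that nystagmus recognition needs.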
5
Jacob G, Katti H, Cherian T, Das J, Zhivago KA, Arun SP. A naturalistic environment to study visual cognition in unrestrained monkeys. eLife 2021; 10:e63816. PMID: 34821553. PMCID: PMC8676323. DOI: 10.7554/eLife.63816.
Abstract
Macaque monkeys are widely used to study vision. In the traditional approach, monkeys are brought into a lab to perform visual tasks while they are restrained to obtain stable eye tracking and neural recordings. Here, we describe a novel environment to study visual cognition in a more natural setting as well as other natural and social behaviors. We designed a naturalistic environment with an integrated touchscreen workstation that enables high-quality eye tracking in unrestrained monkeys. We used this environment to train monkeys on a challenging same-different task. We also show that this environment can reveal interesting novel social behaviors. As proof of concept, we show that two naive monkeys were able to learn this complex task through a combination of socially observing trained monkeys and solo trial-and-error. We propose that such naturalistic environments can be used to rigorously study visual cognition as well as other natural and social behaviors in freely moving monkeys.
Affiliation(s)
- Georgin Jacob: Centre for Neuroscience and Department of Electrical Communication Engineering, Indian Institute of Science, Bangalore, India
- Harish Katti: Centre for Neuroscience, Indian Institute of Science, Bangalore, India
- Thomas Cherian: Centre for Neuroscience, Indian Institute of Science, Bangalore, India
- Jhilik Das: Centre for Neuroscience, Indian Institute of Science, Bangalore, India
- K A Zhivago: Centre for Neuroscience, Indian Institute of Science, Bangalore, India
- S P Arun: Centre for Neuroscience, Indian Institute of Science, Bangalore, India
6
Abstract
Between the cornea and the posterior pole of the eye, there is a transepithelial potential capable of being registered through an electrooculogram (EOG). It is questionable whether electrooculographic responses are similar in both eyes despite ocular dominance in human beings. We studied the effect of different electrooculographic stimulation parameters, in terms of directionality, linear and angular velocity, contrast, and state of adaptation to light/dark, that may induce possible interocular differences in visual function. The study was carried out with electroencephalography-type surface electrodes placed in the medial, lateral, superior, and inferior positions of both human eyes to record the eye movements. We found a greater amplitude of the EOG response in the left eye than in the right eye for light bars moving from right to left (p < 0.01; t-test). The EOG response amplitude was similar in both eyes for light bars moving in vertical directions, but greater than for horizontal or rotational stimuli. We conclude that vertical stimuli should be used for EOG functional evaluation of eye movements, since horizontal stimuli generate significant interocular differences.
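The left/right comparison reported here (p < 0.01, t-test) rests on a paired t statistic over matched per-trial amplitudes. A minimal sketch of that computation, with invented data in the check below, purely for illustration:

```python
from math import sqrt

def paired_t(left, right):
    """Paired t statistic for matched EOG amplitudes from the two eyes:
    t = mean(d) / (sd(d) / sqrt(n)) for the per-trial differences d."""
    d = [a - b for a, b in zip(left, right)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance
    return mean / sqrt(var / n)
```

The resulting t is compared against the Student distribution with n - 1 degrees of freedom to obtain the p-value.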
7
Low-Complexity Pupil Tracking for Sunglasses-Wearing Faces for Glasses-Free 3D HUDs. Applied Sciences (Basel) 2021. DOI: 10.3390/app11104366.
Abstract
This study proposes a pupil-tracking method applicable to drivers both with and without sunglasses on, which has greater compatibility with augmented reality (AR) three-dimensional (3D) head-up displays (HUDs). Performing real-time pupil localization and tracking is complicated by drivers wearing facial accessories such as masks, caps, or sunglasses. The proposed method fulfills two key requirements: low complexity and algorithm performance. Our system assesses both bare and sunglasses-wearing faces by first classifying images according to these modes and then assigning the appropriate eye tracker. For bare faces with unobstructed eyes, we applied our previous regression-algorithm-based method that uses scale-invariant feature transform features. For eyes occluded by sunglasses, we propose an eye position estimation method: our eye tracker uses nonoccluded face area tracking and a supervised regression-based pupil position estimation method to locate pupil centers. Experiments showed that the proposed method achieved high accuracy and speed, with a precision error of <10 mm in <5 ms for bare and sunglasses-wearing faces for both a 2.5 GHz CPU and a commercial 2.0 GHz CPU vehicle-embedded system. Coupled with its performance, the low CPU consumption (10%) demonstrated by the proposed algorithm highlights its promise for implementation in AR 3D HUD systems.
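The two-mode design described above (classify each image first, then hand it to the matching tracker) amounts to a simple dispatch. In the sketch below, the classifier stub, the landmark fractions, and all names are assumptions rather than the published method:

```python
def classify_mode(frame):
    """Stub for the bare/sunglasses image classifier."""
    return "sunglasses" if frame.get("eyes_occluded") else "bare"

def bare_face_tracker(frame):
    """Bare faces: pupils are localised directly on the visible eyes
    (the paper uses SIFT features with a regression algorithm)."""
    return frame["pupils"]

def sunglasses_tracker(frame):
    """Occluded eyes: estimate pupil positions from the non-occluded face
    area; the fractions below are invented placeholder priors."""
    x, y, w, h = frame["face_box"]
    return ((x + 0.3 * w, y + 0.35 * h), (x + 0.7 * w, y + 0.35 * h))

TRACKERS = {"bare": bare_face_tracker, "sunglasses": sunglasses_tracker}

def track_pupils(frame):
    """Route each frame to the tracker matching its classified mode."""
    return TRACKERS[classify_mode(frame)](frame)
```

Keeping the two trackers separate lets the cheap bare-face path run unchanged while the costlier occluded-eye estimator is invoked only when needed, which is consistent with the low CPU consumption the study reports.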
8
Carr DB, Grover P. The Role of Eye Tracking Technology in Assessing Older Driver Safety. Geriatrics (Basel) 2020; 5:E36. PMID: 32517336. PMCID: PMC7345272. DOI: 10.3390/geriatrics5020036.
Abstract
A growing body of literature is focused on the use of eye tracking (ET) technology to understand the association between objective visual parameters and higher-order brain processes such as cognition. One of the settings where this principle has found practical utility is in the area of driving safety. Methods: We reviewed the literature to identify the changes in ET parameters with older adults and neurodegenerative disease. Results: This narrative review provides a brief overview of oculomotor system anatomy and physiology, defines common eye movements and tracking variables that are typically studied, explains the most common methods of eye-tracking measurement during driving in simulation and in naturalistic settings, and examines the association of impairment in ET parameters with advanced age and neurodegenerative disease. Conclusion: ET technology is becoming less expensive, more portable, easier to use, and readily applicable in a variety of clinical settings. Older adults, and especially those with neurodegenerative disease, may have impairments in visual-search parameters, placing them at risk for motor vehicle crashes. Advanced driver assessment systems are becoming more ubiquitous in newer cars and may significantly reduce crashes related to impaired visual search, distraction, and/or fatigue.
Collapse
Affiliation(s)
- David B. Carr
- Department of Medicine and Neurology, Washington University School of Medicine, St Louis, MO 63110, USA
| | - Prateek Grover
- Department of Neurology, Washington University School of Medicine, St Louis, MO 63110, USA;
| |
9
Exploring Visual Perceptions of Spatial Information for Wayfinding in Virtual Reality Environments. Applied Sciences (Basel) 2020. DOI: 10.3390/app10103461.
Abstract
Human cognitive processes in wayfinding may differ depending on the time taken to take in visual information from the environment. This study investigated users' wayfinding processes with eye-tracking experiments simulating a complex cultural space, analyzing visual movements during perception and cognition through visual-perception responses. The experimental set-up comprised several paths through COEX Mall, Seoul (from the entrance of the shopping mall Starfield to the Star Hall Library to the COEX Exhibition Hall), using visual stimuli created in virtual reality (four stimuli, 60 s of stimulation in total). The participants were 24 undergraduate or graduate students with an average age of 24.8 years. Participants' visual-perception processes were analyzed in terms of the clarity and recognition of spatial information and the activation of gaze fixation on it: "conscious gaze perspective" data were extracted as more than 50 consecutive 200 ms continuous gaze fixations, and "visual understanding perspective" data as more than 300 ms of continuous gaze fixation. The results show that methods for analyzing gaze data may vary in the processing, analysis, and scope of the data depending on the purpose of the virtual-reality experiment. They also demonstrate the importance of the purpose statement given to the subject during the experiment and the potential of a technical approach to interpreting spatial information.
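The two analysis perspectives reduce to duration-thresholding of the fixation stream. A minimal sketch, assuming each fixation carries its duration in milliseconds; the 200 ms and 300 ms cut-offs come from the abstract, everything else is an assumption:

```python
def split_perspectives(fixations, conscious_ms=200, understanding_ms=300):
    """Partition fixations into the two analysis sets by duration:
    'conscious gaze perspective' above 200 ms and 'visual understanding
    perspective' above 300 ms (the second set is a subset of the first)."""
    conscious = [f for f in fixations if f["dur_ms"] > conscious_ms]
    understanding = [f for f in fixations if f["dur_ms"] > understanding_ms]
    return conscious, understanding
```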
10
Gaze and Eye Tracking: Techniques and Applications in ADAS. Sensors (Basel) 2019; 19:5540. PMID: 31847432. PMCID: PMC6960643. DOI: 10.3390/s19245540.
Abstract
Tracking drivers' eyes and gazes is a topic of great interest in research on advanced driver-assistance systems (ADAS), and a matter of serious discussion in the road-safety research community, as visual distraction is considered among the major causes of road accidents. In this paper, techniques for eye and gaze tracking are first comprehensively reviewed by major category, and the advantages and limitations of each category are explained with respect to its requirements and practical uses. The applications of eye- and gaze-tracking systems in ADAS are then discussed: the process of acquiring a driver's eye and gaze data, the algorithms used to process these data, and how such data can be used in ADAS to reduce the losses associated with road accidents caused by the driver's visual distraction. A discussion of the required features of current and future eye and gaze trackers is also presented.