1
Liu J, Chi J, Yang Z. A review on personal calibration issues for video-oculographic-based gaze tracking. Front Psychol 2024; 15:1309047. PMID: 38572211; PMCID: PMC10987702; DOI: 10.3389/fpsyg.2024.1309047.
Abstract
Personal calibration is the process of obtaining personal gaze-related information by having the user focus on calibration benchmarks when first using a gaze tracking system. It not only provides the conditions for gaze estimation but also improves gaze tracking performance. Existing eye-tracking products often require users to complete an explicit personal calibration before tracking and interacting based on their gaze. This calibration mode has certain limitations, and a significant gap remains between theoretical personal calibration methods and their practical use. This paper therefore reviews the issues of personal calibration for video-oculographic-based gaze tracking. The personal calibration information used in typical gaze tracking methods is first summarized, and the main settings of existing personal calibration processes are then analyzed. Several personal calibration modes are subsequently discussed and compared. The performance of typical personal calibration methods for 2D and 3D gaze tracking is quantitatively compared through simulation experiments, highlighting the characteristics of different personal calibration settings. On this basis, we discuss several key issues in designing personal calibration. To the best of our knowledge, this is the first review of personal calibration issues for video-oculographic-based gaze tracking. It aims to provide a comprehensive overview of the research status of personal calibration, identify its main directions for further study, and offer guidance for seeking personal calibration modes that support natural human-computer interaction, thereby promoting the widespread application of eye-movement interaction.
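As a concrete illustration of what such a calibration produces, the sketch below fits a second-order polynomial gaze mapping from nine calibration fixations. The pupil-glint vectors and screen targets are hypothetical values invented for illustration, not data from the review.

```python
# Hypothetical sketch: 9-point personal calibration for a 2D regression-based
# gaze tracker. Pupil-glint vectors (px, py) recorded while the user fixates
# known screen targets are mapped to screen coordinates with a second-order
# polynomial fitted by least squares. All numbers below are illustrative.
import numpy as np

def poly_features(p):
    """Second-order polynomial terms of pupil-glint vectors (px, py)."""
    px, py = p[:, 0], p[:, 1]
    return np.column_stack([np.ones_like(px), px, py, px * py, px**2, py**2])

# Pupil-glint vectors measured at 9 calibration targets (hypothetical).
pupil_glint = np.array([[-12, -8], [0, -8], [12, -8],
                        [-12,  0], [0,  0], [12,  0],
                        [-12,  8], [0,  8], [12,  8]], dtype=float)
# Known on-screen target positions in pixels (3x3 grid, 1920x1080 display).
targets = np.array([[240, 135], [960, 135], [1680, 135],
                    [240, 540], [960, 540], [1680, 540],
                    [240, 945], [960, 945], [1680, 945]], dtype=float)

A = poly_features(pupil_glint)
coeffs, *_ = np.linalg.lstsq(A, targets, rcond=None)   # (6, 2) coefficients

# Estimate the gaze point for a new pupil-glint measurement.
gaze = poly_features(np.array([[5.0, -3.0]])) @ coeffs
print("estimated gaze point (px):", gaze.ravel())
```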
Affiliation(s)
- Jiahui Liu
- School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China
- Beijing Engineering Research Center of Industrial Spectrum Imaging, University of Science and Technology Beijing, Beijing, China
- Shunde Innovation School, University of Science and Technology Beijing, Foshan, China
- Jiannan Chi
- School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China
- Beijing Engineering Research Center of Industrial Spectrum Imaging, University of Science and Technology Beijing, Beijing, China
- Shunde Innovation School, University of Science and Technology Beijing, Foshan, China
- Zuoyun Yang
- School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China
2
Wang L, Wang C, Zhang Y, Gao L. An integrated neural network model for eye-tracking during human-computer interaction. Math Biosci Eng 2023; 20:13974-13988. PMID: 37679119; DOI: 10.3934/mbe.2023622.
Abstract
Improving the efficiency of human-computer interaction is one of the critical goals of intelligent aircraft cockpit research. Gaze-based interaction control can vastly reduce the manual operations required of operators and raise the level of intelligence of human-computer interaction. Eye-tracking is the basis of gaze interaction, so eye-tracking performance directly affects the outcome of gaze interaction. This paper presents an eye-tracking method suitable for human-computer interaction in an aircraft cockpit, which estimates the gaze position of operators on multiple screens from face images. We use a multi-camera system to capture facial images, so that operators are not limited by the angle of head rotation. To improve the accuracy of gaze estimation, we constructed a hybrid network: one branch uses a transformer framework to extract global features of the face images, while the other uses a convolutional neural network to extract local features. The features extracted by the two branches are then fused for eye-tracking. The experimental results show that the proposed method not only solves the problem of limited head movement for operators but also improves the accuracy of gaze estimation. In addition, our method achieves a capture rate of more than 80% for targets of different sizes, outperforming the compared models.
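A minimal sketch of such a two-branch design is given below. It is an assumed architecture written in PyTorch for illustration, not the authors' exact network; all layer sizes, patch sizes, and the class name HybridGazeNet are placeholders.

```python
# Assumed two-branch gaze model: a ViT-style transformer encoder captures
# global face context, a small CNN captures local detail, and the two
# feature vectors are concatenated and regressed to a 2D gaze point.
import torch
import torch.nn as nn

class HybridGazeNet(nn.Module):
    def __init__(self, img=96, patch=16, dim=128):
        super().__init__()
        n_patches = (img // patch) ** 2
        # Global branch: patch embedding + transformer encoder.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        # Local branch: plain convolutional stack.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Fusion head: concatenate both features, regress (x, y).
        self.head = nn.Sequential(nn.Linear(dim + 64, 128), nn.ReLU(),
                                  nn.Linear(128, 2))

    def forward(self, x):
        g = self.patch_embed(x).flatten(2).transpose(1, 2) + self.pos
        g = self.transformer(g).mean(dim=1)      # global feature (B, dim)
        l = self.cnn(x)                          # local feature  (B, 64)
        return self.head(torch.cat([g, l], dim=1))

model = HybridGazeNet()
print(model(torch.randn(2, 3, 96, 96)).shape)    # torch.Size([2, 2])
```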
Affiliation(s)
- Li Wang
- School of Optoelectronic Engineering, Xi'an Technological University, Xi'an 710000, China
- Changyuan Wang
- School of Computer Science, Xi'an Technological University, Xi'an 710000, China
- Yu Zhang
- School of Optoelectronic Engineering, Xi'an Technological University, Xi'an 710000, China
- Lina Gao
- School of Optoelectronic Engineering, Xi'an Technological University, Xi'an 710000, China
3
Mokatren M, Kuflik T, Shimshoni I. 3D Gaze Estimation Using RGB-IR Cameras. Sensors (Basel) 2022; 23:381. PMID: 36616978; PMCID: PMC9823916; DOI: 10.3390/s23010381.
Abstract
In this paper, we present a framework for 3D gaze estimation intended to identify the user's focus of attention in a corneal imaging system. The framework uses a headset that consists of three cameras: a scene camera and two eye cameras, one IR and one RGB. The IR camera continuously and reliably tracks the pupil, while the RGB camera acquires corneal images of the same eye. Deep learning algorithms are trained to detect the pupil in IR and RGB images and to compute a per-user 3D model of the eye in real time. Once the 3D model is built, the 3D gaze direction is computed as the ray starting from the eyeball center and passing through the pupil center to the outside world. The model can also transform the pupil position detected in the IR image into its corresponding position in the RGB image and detect the gaze direction in the corneal image. This technique circumvents the problem of pupil detection in RGB images, which is especially difficult and unreliable when the scene is reflected in the corneal images. In our approach, the auto-calibration process is transparent and unobtrusive: users do not have to be instructed to look at specific objects to calibrate the eye tracker; they need only act and gaze normally. The framework was evaluated in a user study in realistic settings, and the results are promising: it achieved a very low 3D gaze error (2.12°) and very high accuracy in acquiring corneal images (intersection over union, IoU = 0.71). The framework may be used in a variety of real-world mobile scenarios (indoors, indoors near windows, and outdoors) with high accuracy.
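The geometric core of that gaze computation, a ray from the estimated eyeball center through the 3D pupil center, can be sketched in a few lines. The coordinates below are hypothetical values in an assumed scene-camera frame, not data from the study.

```python
# Illustrative sketch: once a per-user 3D eye model gives the eyeball
# center, the gaze direction is the ray from that center through the 3D
# pupil center. Coordinates are hypothetical (scene-camera frame, mm).
import numpy as np

def gaze_direction(eyeball_center, pupil_center):
    """Unit vector from the eyeball center through the pupil center."""
    d = np.asarray(pupil_center, float) - np.asarray(eyeball_center, float)
    return d / np.linalg.norm(d)

def angular_error_deg(d1, d2):
    """Angle between two gaze directions, in degrees."""
    cos = np.clip(np.dot(d1, d2), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

est = gaze_direction([0.0, 0.0, 0.0], [2.1, -1.0, 11.8])   # estimated ray
ref = gaze_direction([0.0, 0.0, 0.0], [2.0, -1.2, 12.0])   # reference ray
print(f"3D gaze error: {angular_error_deg(est, ref):.2f} deg")
```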
4
Multimodal Natural Human–Computer Interfaces for Computer-Aided Design: A Review Paper. Appl Sci (Basel) 2022. DOI: 10.3390/app12136510.
Abstract
Computer-aided design (CAD) systems have advanced to become a critical tool in product design. Nevertheless, they still primarily rely on the traditional mouse and keyboard interface. This limits the naturalness and intuitiveness of the 3D modeling process. Recently, a multimodal human–computer interface (HCI) has been proposed as the next-generation interaction paradigm. Widening the use of a multimodal HCI provides new opportunities for realizing natural interactions in 3D modeling. In this study, we conducted a literature review of a multimodal HCI for CAD to summarize the state-of-the-art research and establish a solid foundation for future research. We explore and categorize the requirements for natural HCIs and discuss paradigms for their implementation in CAD. Following this, factors to evaluate the system performance and user experience of a natural HCI are summarized and analyzed. We conclude by discussing challenges and key research directions for a natural HCI in product design to inspire future studies.
5
Schweizer T, Wyss T, Gilgen-Ammann R. Detecting Soldiers' Fatigue Using Eye-Tracking Glasses: Practical Field Applications and Research Opportunities. Mil Med 2021; 187:e1330-e1337. PMID: 34915554; PMCID: PMC10100772; DOI: 10.1093/milmed/usab509.
Abstract
INTRODUCTION: Objectively determining soldiers' fatigue levels could help prevent injuries or accidents resulting from inattention or decreased alertness. Eye-tracking technologies, such as optical eye tracking (OET) and electrooculography (EOG), are often used to monitor fatigue. Eyeblinks, especially blink frequency and blink duration, are known to be easily observable and valid biomarkers of fatigue. Various eye trackers (i.e., eye-tracking glasses) using either OET or EOG technology are currently available on the market. These wearable eye trackers offer several advantages, including unobtrusive functionality, practicality, and low cost. However, several challenges and limitations must be considered when implementing these technologies in the field to monitor fatigue levels. This review investigates the feasibility of eye tracking in the field, focusing on practical applications in military operational environments.
MATERIALS AND METHODS: This paper summarizes the existing literature on eyeblink dynamics and available wearable eye-tracking technologies, exposes challenges and limitations, and discusses practical recommendations on how to improve the feasibility of eye tracking in the field.
RESULTS: So far, no eye-tracking glasses can be recommended for use in a demanding work environment. First, eyeblink dynamics are influenced by multiple factors; therefore, environments, situations, and individual behavior must be taken into account. Second, the placement of the glasses, sunlight, facial or body movements, vibrations, and sweat can drastically decrease measurement accuracy. The placement of the eye cameras for OET and of the electrodes for EOG must be chosen carefully, the sampling rate must be at least 200 Hz, and software and hardware must be robust to any factors influencing eye tracking.
CONCLUSION: Monitoring the physiological and psychological readiness of soldiers, as well as of other civilian professionals who face higher risks when their attention is impaired, is necessary. However, improvements to eye-tracking hardware, calibration methods, sampling rates, and algorithms are needed in order to accurately monitor fatigue levels in the field.
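As a concrete example of the blink biomarkers discussed here, the sketch below derives blink frequency and mean blink duration from a per-frame eye-closed signal sampled at the 200 Hz rate recommended above. The signal is synthetic and purely illustrative.

```python
# Sketch of two fatigue biomarkers, blink frequency and blink duration,
# computed from a boolean per-frame eye-closed signal such as an eye
# tracker would output. The signal below is synthetic.
import numpy as np

def blink_metrics(eye_closed, fs):
    """eye_closed: boolean array per frame; fs: sampling rate in Hz.
    Returns (blinks per minute, mean blink duration in ms)."""
    x = np.asarray(eye_closed, dtype=int)
    edges = np.diff(np.concatenate(([0], x, [0])))
    starts = np.flatnonzero(edges == 1)    # closed-phase onsets
    ends = np.flatnonzero(edges == -1)     # closed-phase offsets
    durations_ms = (ends - starts) / fs * 1000.0
    minutes = len(x) / fs / 60.0
    mean_dur = durations_ms.mean() if len(starts) else 0.0
    return len(starts) / minutes, mean_dur

fs = 200                                   # minimum rate recommended above
n = fs * 60                                # one minute of samples
closed = np.zeros(n, dtype=bool)
for onset in range(400, n, 4000):          # synthetic blink every 20 s
    closed[onset:onset + 30] = True        # 150 ms closed phase
freq, dur = blink_metrics(closed, fs)
print(f"{freq:.1f} blinks/min, mean duration {dur:.0f} ms")
```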
Affiliation(s)
- Theresa Schweizer
- Monitoring, Swiss Federal Institute of Sport Magglingen (SFISM), Macolin 2532, Switzerland
- Thomas Wyss
- Monitoring, Swiss Federal Institute of Sport Magglingen (SFISM), Macolin 2532, Switzerland
- Rahel Gilgen-Ammann
- Monitoring, Swiss Federal Institute of Sport Magglingen (SFISM), Macolin 2532, Switzerland
6
Narcizo FB, dos Santos FED, Hansen DW. High-Accuracy Gaze Estimation for Interpolation-Based Eye-Tracking Methods. Vision (Basel) 2021; 5:41. PMID: 34564339; PMCID: PMC8482219; DOI: 10.3390/vision5030041.
Abstract
This study investigates the influence of the eye-camera location on the accuracy and precision of interpolation-based eye-tracking methods. Several factors can negatively influence gaze estimation when building a commercial or off-the-shelf eye tracker, including the eye-camera location in uncalibrated setups. Our experiments show that the eye-camera location, combined with the non-coplanarity of the eye plane, deforms the eye-feature distribution when the eye camera is far from the eye's optical axis. This paper proposes geometric transformation methods that reshape the eye-feature distribution based on a virtual alignment of the eye camera with the center of the eye's optical axis. The data analysis uses eye-tracking data from a simulated environment and from an experiment with 83 volunteer participants (55 males and 28 females). We evaluate the improvements achieved with the proposed methods using Gaussian analysis, which defines a range for high-accuracy gaze estimation between -0.5° and 0.5°. Compared to traditional polynomial-based and homography-based gaze estimation methods, the proposed methods increase the number of gaze estimations in the high-accuracy range.
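For reference, the homography-based baseline named here can be sketched as follows. The four eye-to-screen correspondences are invented for illustration; only standard OpenCV calls are used.

```python
# Hedged sketch of homography-based interpolation: four (or more)
# calibration correspondences between eye-feature positions in the
# eye-camera image and screen targets define a homography that maps new
# eye features to gaze points. All values are illustrative.
import cv2
import numpy as np

# Pupil-center positions at four calibration targets (eye-image pixels).
eye_pts = np.array([[210, 140], [420, 150], [430, 320], [205, 310]],
                   dtype=np.float32)
# Corresponding screen targets (pixels, 1920x1080 display).
scr_pts = np.array([[100, 100], [1820, 100], [1820, 980], [100, 980]],
                   dtype=np.float32)

H, _ = cv2.findHomography(eye_pts, scr_pts)

# Map a new pupil-center observation through the homography.
p = np.array([[[315.0, 230.0]]], dtype=np.float32)       # shape (1, 1, 2)
gaze = cv2.perspectiveTransform(p, H)
print("estimated gaze point:", gaze.ravel())
```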
Affiliation(s)
- Fabricio Batista Narcizo
- Eye Information Laboratory, Department of Computer Science, IT University of Copenhagen (ITU), 2300 Copenhagen, Denmark;
- Office of CTO, GN Audio A/S (Jabra), 2750 Ballerup, Denmark
- Dan Witzner Hansen
- Eye Information Laboratory, Department of Computer Science, IT University of Copenhagen (ITU), 2300 Copenhagen, Denmark;
7
Qian K, Arichi T, Price A, Dall'Orso S, Eden J, Noh Y, Rhode K, Burdet E, Neil M, Edwards AD, Hajnal JV. An eye tracking based virtual reality system for use inside magnetic resonance imaging systems. Sci Rep 2021; 11:16301. PMID: 34381099; PMCID: PMC8357830; DOI: 10.1038/s41598-021-95634-y.
Abstract
Patients undergoing Magnetic Resonance Imaging (MRI) often experience anxiety, and sometimes distress, prior to and during scanning. Here a fully MRI-compatible virtual reality (VR) system is described and tested, with the aim of creating a radically different experience. Potential benefits could accrue from the strong sense of immersion VR can create, which could be used to avoid the perception of being enclosed and to provide new modes of diversion and interaction, making even lengthy MRI examinations much less challenging. Most current VR systems rely on head-mounted displays combined with head-motion tracking to achieve and maintain a visceral sense of a tangible virtual world, but this approach encourages physical motion, which is unacceptable and could be physically incompatible with MRI. The proposed VR system instead uses gaze tracking to control and interact with a virtual world. MRI-compatible cameras allow real-time eye tracking, and robust gaze tracking is achieved through an adaptive calibration strategy in which each successive VR interaction initiated by the subject updates the gaze estimation model. A dedicated VR framework has been developed, including a rich virtual world and gaze-controlled game content. To aid in achieving immersive experiences, physical sensations, including noise, vibration, and proprioception associated with patient-table movements, have been made congruent with the presented virtual scene. A live video link allows subject-carer interaction, projecting a supportive presence into the virtual world.
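The adaptive calibration idea, in which each confirmed interaction contributes a fresh correspondence that refits the gaze mapping, can be sketched as below. This is an assumed linear formulation for illustration, not the authors' exact model, and all feature and target values are hypothetical.

```python
# Assumed adaptive calibration sketch: every gaze-contingent interaction
# yields a new (eye feature, known target) pair, and a linear gaze mapping
# is refit by least squares so calibration keeps improving during use.
import numpy as np

class AdaptiveGazeModel:
    def __init__(self):
        self.X, self.Y = [], []        # accumulated features and targets
        self.W = None                  # (3, 2) affine mapping

    def add_interaction(self, eye_feature, target):
        """Update the model with one confirmed interaction."""
        self.X.append([1.0, *eye_feature])   # bias + 2D eye feature
        self.Y.append(list(target))
        A, B = np.array(self.X), np.array(self.Y)
        self.W, *_ = np.linalg.lstsq(A, B, rcond=None)

    def predict(self, eye_feature):
        return np.array([1.0, *eye_feature]) @ self.W

model = AdaptiveGazeModel()
# Three initial interactions (hypothetical eye features -> screen targets).
for feat, tgt in [((0.10, 0.20), (300, 200)),
                  ((0.60, 0.25), (1500, 220)),
                  ((0.35, 0.70), (900, 800))]:
    model.add_interaction(feat, tgt)
print(model.predict((0.40, 0.40)))     # gaze estimate after three updates
```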
Affiliation(s)
- Kun Qian
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King's College London, London, SE1 7EH, UK.
- Tomoki Arichi
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King's College London, London, SE1 7EH, UK
- Department of Bioengineering, Imperial College London, London, SW7 2AZ, UK
- Anthony Price
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King's College London, London, SE1 7EH, UK
- Sofia Dall'Orso
- Department of Electrical Engineering, Chalmers University of Technology, 412 96, Gothenburg, Sweden
- Jonathan Eden
- Department of Bioengineering, Imperial College London, London, SW7 2AZ, UK
- Yohan Noh
- Department of Mechanical and Aerospace Engineering, Brunel University London, London, UB8 3PN, UK
- Kawal Rhode
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King's College London, London, SE1 7EH, UK
- Etienne Burdet
- Department of Bioengineering, Imperial College London, London, SW7 2AZ, UK
- Mark Neil
- Department of Physics, Imperial College London, London, SW7 2AZ, UK
- A David Edwards
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King's College London, London, SE1 7EH, UK
- Joseph V Hajnal
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King's College London, London, SE1 7EH, UK.
8
Experimental Verification of Objective Visual Fatigue Measurement Based on Accurate Pupil Detection of Infrared Eye Image and Multi-Feature Analysis. Sensors (Basel) 2020; 20:4814. PMID: 32858920; PMCID: PMC7506756; DOI: 10.3390/s20174814.
Abstract
As the use of electronic displays increases rapidly, visual fatigue problems are also increasing. Subjective evaluation methods for measuring visual fatigue suffer from individual differences, while objective methods based on bio-signal measurement suffer from motion artifacts. Conventional eye-image-analysis-based visual fatigue measurement methods do not accurately characterize the complex changes in the appearance of the eye. To solve this problem, this paper proposes an objective visual fatigue measurement method based on infrared eye-image analysis. For accurate pupil detection, a convolutional neural network-based semantic segmentation method was used. Three features are calculated from the pupil detection results: (1) pupil accommodation speed, (2) blink frequency, and (3) eye-closed duration. To verify the calculated features, differences in fatigue caused by changes in content color components, such as gamma, color temperature, and brightness, were compared against a reference video. The pupil detection accuracy was confirmed to be 96.63% based on the mean intersection over union. In addition, all three features showed significant differences from the reference group, verifying that the proposed analysis method can be used for the objective measurement of visual fatigue.
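The mean-IoU figure quoted here is a standard segmentation metric; a minimal sketch with synthetic pupil masks is shown below.

```python
# Small sketch of the verification metric above: intersection over union
# (IoU) between a predicted binary pupil mask and its ground-truth mask,
# averaged over frames to give mean IoU. Masks here are synthetic circles.
import numpy as np

def iou(pred, gt):
    """IoU of two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def disk(h, w, cy, cx, r):
    """Boolean disk mask of radius r centered at (cy, cx)."""
    yy, xx = np.mgrid[:h, :w]
    return (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2

gt = disk(120, 160, 60, 80, 20)          # ground-truth pupil mask
pred = disk(120, 160, 62, 82, 19)        # slightly offset prediction
print(f"IoU = {iou(pred, gt):.3f}")      # averaged over frames -> mean IoU
```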
9
Gaze Tracking and Point Estimation Using Low-Cost Head-Mounted Devices. Sensors (Basel) 2020; 20:1917. PMID: 32235523; PMCID: PMC7181118; DOI: 10.3390/s20071917.
Abstract
In this study, a head-mounted device was developed to track the gaze of the eyes and estimate the gaze point on the user's visual plane. To provide a cost-effective vision-tracking solution, the head-mounted device combines a small endoscope camera, infrared light, and a mobile phone; the device is also fabricated via 3D printing to reduce costs. Based on the proposed image pre-processing techniques, the system can efficiently extract and estimate the pupil ellipse from the camera module. A 3D eye model was also developed to effectively locate eye-gaze points from the extracted eye images. In the experiments, the proposed system achieved average accuracy, precision, and recall rates of over 97%, demonstrating its efficiency. This study can be widely applied in the Internet of Things, virtual reality, assistive devices, and human-computer interaction applications.
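Pupil-ellipse extraction of the kind described can be illustrated with OpenCV's contour and ellipse-fitting routines. The IR eye image below is synthetic and the threshold value is a placeholder, not the paper's pipeline.

```python
# Illustrative pupil-ellipse extraction: threshold the dark pupil in an IR
# eye image, take the largest contour, and fit an ellipse with OpenCV.
import cv2
import numpy as np

# Synthetic IR eye image: bright background with a dark elliptical pupil.
img = np.full((240, 320), 180, np.uint8)
cv2.ellipse(img, (160, 120), (30, 22), 15, 0, 360, 20, -1)

# Dark-pupil segmentation: inverse threshold, then find contours.
_, mask = cv2.threshold(img, 60, 255, cv2.THRESH_BINARY_INV)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_NONE)
pupil = max(contours, key=cv2.contourArea)       # largest blob = pupil

(cx, cy), (major, minor), angle = cv2.fitEllipse(pupil)
print(f"pupil center ({cx:.1f}, {cy:.1f}), axes ({major:.1f}, {minor:.1f}), "
      f"angle {angle:.1f} deg")
```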
10
Singh J, Modi N. Use of information modelling techniques to understand research trends in eye gaze estimation methods: An automated review. Heliyon 2019; 5:e03033. PMID: 31890964; PMCID: PMC6928306; DOI: 10.1016/j.heliyon.2019.e03033.
Abstract
Eye gaze tracking has been used to study the influence of visual stimuli on consumer behavior and attentional processes. Eye gaze tracking techniques have made substantial contributions to advertisement design, human-computer interaction, virtual reality, and disease diagnosis. Eye gaze estimation is considered critical for predicting human attention, and hence indispensable for better understanding human activities. In this paper, Latent Semantic Analysis is used to develop an information model for identifying emerging research trends within eye gaze estimation techniques. An exhaustive collection of 423 titles and abstracts of research papers published during 2005-2018 was used. Five major research areas and ten research trends were identified on the basis of this study.
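Latent Semantic Analysis of the sort applied here reduces to TF-IDF weighting followed by a truncated SVD; each latent component groups co-occurring terms into a candidate trend. The sketch below uses four toy abstracts in place of the 423-paper corpus.

```python
# Minimal LSA sketch: TF-IDF vectors of paper abstracts reduced with
# truncated SVD; the strongest terms per component suggest a research
# trend. The toy corpus stands in for the 423 real abstracts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

abstracts = [
    "appearance based gaze estimation with convolutional neural networks",
    "model based gaze estimation using pupil and corneal reflections",
    "eye tracking for advertisement design and consumer attention",
    "gaze interaction for virtual reality head mounted displays",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(abstracts)            # (documents, terms)

lsa = TruncatedSVD(n_components=2, random_state=0)
lsa.fit(X)

terms = tfidf.get_feature_names_out()
for i, comp in enumerate(lsa.components_):
    top = comp.argsort()[::-1][:4]            # strongest terms per component
    print(f"component {i}:", [terms[j] for j in top])
```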
Affiliation(s)
- Jaiteg Singh
- Department of Computer Applications, Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, 140401, India
- Nandini Modi
- Department of Computer Science and Engineering, Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, 140401, India
11
Harrar V, Le Trung W, Malienko A, Khan AZ. A nonvisual eye tracker calibration method for video-based tracking. J Vis 2018; 18:13. PMID: 30208432; DOI: 10.1167/18.9.13.
Abstract
Video-based eye trackers have enabled major advances in our understanding of eye movements through their ease of use and non-invasiveness. One necessity for obtaining accurate eye recordings with video-based trackers is calibration. The aim of the current study was to determine the feasibility and reliability of alternative calibration methods for scenarios in which the standard visual calibration is not possible. Fourteen participants were tested using the EyeLink 1000 Plus video-based eye tracker, and each completed the following four 5-point calibration methods: (1) standard visual-target calibration; (2) described calibration, in which participants were given verbal instructions about where to direct their eyes (without vision of the screen); (3) proprioceptive calibration, in which participants were asked to look at their hidden finger; and (4) replacement calibration, in which the visual calibration was performed by three different people acting as temporary substitutes for the participant. Following calibration, participants performed a simple visually guided saccade task to 16 randomly presented targets on a grid. We found that precision errors were comparable across the alternative calibration methods. In terms of accuracy, compared to the standard calibration, the non-visual calibration methods (described and proprioceptive) led to significantly larger errors, whilst the replacement calibration method had much smaller errors. In conditions where calibration is not possible, for example when testing blind or visually impaired people who are unable to foveate the calibration targets, we suggest that using a single stand-in to perform the calibration is a simple and easy alternative, which should cause only a minimal decrease in accuracy.
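The accuracy and precision quantities compared in this study are typically computed as below: accuracy as the mean angular offset from the true target, precision as the RMS of sample-to-sample dispersion. The gaze samples are synthetic, and degrees of visual angle are assumed as the unit.

```python
# Sketch of the two validation quantities: accuracy (mean angular offset
# between gaze samples and the target) and precision (RMS of successive
# inter-sample distances). Data below are synthetic, in degrees.
import numpy as np

def accuracy_deg(gaze, target):
    """Mean Euclidean offset from the target, in degrees."""
    return np.linalg.norm(gaze - target, axis=1).mean()

def precision_rms_deg(gaze):
    """RMS of successive inter-sample distances, in degrees."""
    steps = np.linalg.norm(np.diff(gaze, axis=0), axis=1)
    return np.sqrt((steps ** 2).mean())

rng = np.random.default_rng(0)
target = np.array([5.0, -2.0])                        # fixation target (deg)
gaze = target + np.array([0.4, 0.1]) + rng.normal(0, 0.05, (200, 2))
print(f"accuracy  = {accuracy_deg(gaze, target):.2f} deg")
print(f"precision = {precision_rms_deg(gaze):.3f} deg RMS")
```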
Affiliation(s)
- Vanessa Harrar
- Vision, Attention, and Action Laboratory (VISATTAC), School of Optometry, University of Montreal, Montreal, Quebec, Canada
- William Le Trung
- Vision, Attention, and Action Laboratory (VISATTAC), School of Optometry, University of Montreal, Montreal, Quebec, Canada
- Anton Malienko
- Vision, Attention, and Action Laboratory (VISATTAC), School of Optometry, University of Montreal, Montreal, Quebec, Canada
- Aarlenne Zein Khan
- Vision, Attention, and Action Laboratory (VISATTAC), School of Optometry, University of Montreal, Montreal, Quebec, Canada
12
Cognolato M, Atzori M, Müller H. Head-mounted eye gaze tracking devices: An overview of modern devices and recent advances. J Rehabil Assist Technol Eng 2018; 5:2055668318773991. PMID: 31191938; PMCID: PMC6453044; DOI: 10.1177/2055668318773991.
Abstract
An increasing number of wearable devices performing eye gaze tracking have been released in recent years. Such devices can create unprecedented opportunities in many applications. However, staying up to date with the continuous advances and gathering the technical features needed to choose the best device for a specific application is not trivial. The last eye gaze tracker overview was written more than 10 years ago, and more recent devices are substantially improved in both hardware and software. Thus, an overview of current eye gaze trackers is needed. This review fills the gap by providing an overview of the current level of advancement of both techniques and devices, leading to an analysis of 20 essential features in six commercially available head-mounted eye gaze trackers. The analyzed characteristics provide a useful overview of the technology currently implemented. The results show that many technical advances have been made in this field since the last survey. Current wearable devices can capture and exploit visual information unobtrusively and in real time, leading to new applications in wearable technologies that can also be used to improve rehabilitation and enable more active living for impaired persons.
Affiliation(s)
- Matteo Cognolato
- Institute of Information Systems, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland
- Rehabilitation Engineering Laboratory, Swiss Federal Institute of Technology of Zurich (ETHZ), Zurich, Switzerland
- Manfredo Atzori
- Institute of Information Systems, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland
- Henning Müller
- Institute of Information Systems, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland
13
Hassoumi A, Peysakhovich V, Hurter C. Uncertainty visualization of gaze estimation to support operator-controlled calibration. J Eye Mov Res 2018; 10(5):6. PMID: 33828671; PMCID: PMC7141080; DOI: 10.16910/jemr.10.5.6.
Abstract
In this paper, we investigate how visualization assets can support the qualitative evaluation of gaze estimation uncertainty. Although eye-tracking data are commonly available, little has been done to visually investigate the uncertainty of recorded gaze information. This paper fills this gap through novel uncertainty computation and visualization. Given a gaze processing pipeline, we estimate the location of the gaze position in the world-camera image. To do so, we developed our own gaze-data processing pipeline, which gives us access to every stage of the data transformation and thus to the uncertainty computation. To validate our gaze estimation pipeline, we designed an experiment with 12 participants and showed that the proposed correction methods reduced the mean angular error by about 1.32 cm, aggregating all 12 participants' results; the mean angular error is 0.25° (SD = 0.15°) after correction of the estimated gaze. Finally, to support the qualitative assessment of these data, we provide a map that encodes the actual uncertainty from the user's point of view.
14
Fuzzy-System-Based Detection of Pupil Center and Corneal Specular Reflection for a Driver-Gaze Tracking System Based on the Symmetrical Characteristics of Face and Facial Feature Points. Symmetry (Basel) 2017. DOI: 10.3390/sym9110267.
15
Abstract
OBJECTIVE: The aim of this study was to develop computational methods for estimating limbus position based on measurements of three-dimensional (3-D) corneoscleral topography, and to ascertain whether the corneoscleral limbus routinely estimated from the frontal image corresponds to that derived from topographical information.
METHODS: Two new computational methods for estimating the limbus position are proposed: one based on approximating the raw anterior eye height data by a series of Zernike polynomials, and one that combines the 3-D corneoscleral topography with the frontal grayscale image acquired with the digital camera built into the profilometer. The proposed methods are contrasted against a previously described image-only-based procedure and against manual image annotation.
RESULTS: The estimates of corneoscleral limbus radius were characterized by high precision. The group average (mean ± standard deviation) of the maximum difference between estimates derived from all considered methods was 0.27 ± 0.14 mm and reached up to 0.55 mm. The four estimation methods led to statistically significant differences (nonparametric ANOVA test, p < 0.05).
CONCLUSION: Precise topographical limbus demarcation is possible either from frontal digital images of the eye or from 3-D topographical information of the corneoscleral region. However, the results demonstrate that the corneoscleral limbus estimated from anterior eye topography does not always correspond to that obtained through image-only-based techniques.
SIGNIFICANCE: The experimental findings show that 3-D topography of the anterior eye, in the absence of a gold standard, has the potential to become a new computational methodology for estimating the corneoscleral limbus.
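The Zernike-fitting step mentioned in METHODS can be sketched as a plain least-squares problem. Only the first six Zernike terms are implemented here, and the height samples are synthetic; the real method fits a longer series to measured topography.

```python
# Hedged sketch: approximate anterior eye height data on the unit disk by
# a least-squares fit to the first six Zernike polynomial terms.
import numpy as np

def zernike_basis(rho, theta):
    """First six Zernike terms (piston, tilts, defocus, astigmatisms)."""
    return np.column_stack([
        np.ones_like(rho),                       # Z(0,0)  piston
        rho * np.cos(theta),                     # Z(1,1)  tilt x
        rho * np.sin(theta),                     # Z(1,-1) tilt y
        2 * rho ** 2 - 1,                        # Z(2,0)  defocus
        rho ** 2 * np.cos(2 * theta),            # Z(2,2)  astigmatism 0/90
        rho ** 2 * np.sin(2 * theta),            # Z(2,-2) astigmatism 45
    ])

# Synthetic height samples on the unit disk (radius normalized).
rng = np.random.default_rng(1)
rho = np.sqrt(rng.uniform(0, 1, 2000))
theta = rng.uniform(0, 2 * np.pi, 2000)
height = (0.8 * (2 * rho ** 2 - 1) + 0.1 * rho * np.cos(theta)
          + rng.normal(0, 0.01, 2000))

A = zernike_basis(rho, theta)
coeffs, *_ = np.linalg.lstsq(A, height, rcond=None)
print("fitted Zernike coefficients:", np.round(coeffs, 3))
```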
16
Estimation of Gaze Detection Accuracy Using the Calibration Information-Based Fuzzy System. Sensors (Basel) 2016; 16:60. PMID: 26742045; PMCID: PMC4732093; DOI: 10.3390/s16010060.
Abstract
Gaze tracking is a camera-vision-based technology for identifying the location where a user is looking. In general, a calibration process is applied at the initial stage of most gaze tracking systems. This process is necessary to account for differences in users' eyeball and cornea sizes, as well as the angle kappa, and to find the relationship between the user's eye and screen coordinates. It is performed on the basis of information about the user's pupil and corneal specular reflection obtained while the user looks at several predetermined positions on a screen. In previous studies, user calibration was performed using various types of markers and marker display methods. However, estimating the accuracy of gaze detection from the results obtained during the calibration process has not yet been studied. Therefore, we propose a method for estimating the accuracy of a final gaze tracking system with a near-infrared (NIR) camera by using a fuzzy system based on the user calibration information. Here, the estimated accuracy reflects the gaze detection accuracy during the testing stage of the gaze tracking system. Experiments were performed using four types of markers and three types of marker display methods. They showed that the proposed method correctly estimated the gaze tracking accuracy regardless of the marker and marker display types applied.
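A toy version of such a fuzzy accuracy estimator is sketched below. The membership functions and rule base are invented for illustration and are not those of the paper; only the general Mamdani-style mechanics (fuzzification, min-implication, max-aggregation, centroid defuzzification) are shown.

```python
# Toy Mamdani-style fuzzy inference: the calibration residual error is
# fuzzified with triangular membership functions, assumed rules map it to
# an expected gaze-error class, and centroid defuzzification yields a
# crisp accuracy estimate.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def estimate_accuracy(residual_deg):
    # Fuzzify the input: residual is "small", "medium", or "large".
    small = tri(residual_deg, -0.5, 0.0, 0.8)
    medium = tri(residual_deg, 0.4, 1.0, 1.8)
    large = tri(residual_deg, 1.2, 2.5, 4.0)
    # Output universe: expected gaze error in degrees.
    y = np.linspace(0.0, 4.0, 401)
    # Rules (min-implication, max-aggregation): small -> good, etc.
    agg = np.maximum.reduce([
        np.minimum(small, tri(y, -0.5, 0.5, 1.2)),   # good accuracy
        np.minimum(medium, tri(y, 0.8, 1.5, 2.4)),   # fair accuracy
        np.minimum(large, tri(y, 2.0, 3.0, 4.5)),    # poor accuracy
    ])
    return (y * agg).sum() / agg.sum()               # centroid defuzzification

print(f"predicted gaze error: {estimate_accuracy(0.6):.2f} deg")
```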
17
Zhao Q, Yuan X, Tu D, Lu J. Multi-Initialized States Referred Work Parameter Calibration for Gaze Tracking Human-Robot Interaction. Int J Adv Robot Syst 2012. DOI: 10.5772/50891.
Abstract
In order to adaptively calibrate the work parameters of an infrared-TV-based eye gaze tracking Human-Robot Interaction (HRI) system, a gaze direction sensing model is provided for detecting the gaze-related identification parameters. Particular attention is paid to situations where the user's head is in a different position relative to the interaction interface. Furthermore, an algorithm for automatically correcting the work parameters of the system is proposed, based on defining certain initial reference system states and analysing the historical information of the interaction between the user and the system. Moreover, considering several application cases and factors, and relying on minimum-error-rate Bayesian decision theory, a mechanism for identifying the system state and adaptively calibrating the parameters is proposed. Finally, experiments conducted with the established system suggest that the proposed mechanism and algorithm can identify the system work state in multiple situations and automatically correct the work parameters to meet the demands of a gaze tracking HRI system.
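Minimum-error-rate Bayesian decision making of the kind invoked here reduces to choosing the state with the largest posterior probability. The sketch below uses invented priors and Gaussian likelihoods of a gaze-residual feature, not the paper's model; the state names are hypothetical.

```python
# Minimal minimum-error-rate Bayesian decision sketch: given class priors
# and per-state Gaussian likelihoods of an observed residual feature,
# pick the system state with the maximum posterior probability.
import math

def gaussian_pdf(x, mu, sigma):
    """Univariate Gaussian density."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical system states with priors and residual statistics (deg).
states = {
    "calibrated":           {"prior": 0.7, "mu": 0.5, "sigma": 0.2},
    "needs_recalibration":  {"prior": 0.3, "mu": 1.8, "sigma": 0.6},
}

def decide(observed_residual):
    """Minimum-error-rate rule: argmax of prior * likelihood."""
    post = {s: p["prior"] * gaussian_pdf(observed_residual, p["mu"], p["sigma"])
            for s, p in states.items()}
    z = sum(post.values())
    return max(post, key=post.get), {s: v / z for s, v in post.items()}

state, posteriors = decide(1.4)
print("decision:", state,
      "posteriors:", {k: round(v, 3) for k, v in posteriors.items()})
```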
Affiliation(s)
- Qijie Zhao
- Shanghai Key Laboratory of Manufacturing Automation and Robotics, School of Mechatronics Engineering and Automation, Shanghai University, China
- Xinming Yuan
- Shanghai Key Laboratory of Manufacturing Automation and Robotics, School of Mechatronics Engineering and Automation, Shanghai University, China
- Dawei Tu
- Shanghai Key Laboratory of Manufacturing Automation and Robotics, School of Mechatronics Engineering and Automation, Shanghai University, China
- Jianxia Lu
- Shanghai Key Laboratory of Manufacturing Automation and Robotics, School of Mechatronics Engineering and Automation, Shanghai University, China