1. Robert FM, Otheguy M, Nourrit V, de Bougrenet de la Tocnaye JL. Potential of a laser pointer contact lens to improve the reliability of video-based eye-trackers in indoor and outdoor conditions. J Eye Mov Res 2024; 17. PMID: 38818405; PMCID: PMC11138218; DOI: 10.16910/jemr.17.1.5.
Abstract
Many video-based eye trackers rely on detecting and tracking ocular features, a task that can be negatively affected by a number of individual or environmental factors. In this context, the aim of this study was to practically evaluate how the use of a scleral contact lens with two integrated near-infrared lasers (denoted CLP) could improve tracking robustness in difficult lighting conditions, particularly outdoors. We assessed the ability of the CLP, mounted on a model eye, to support laser detection and gaze-position estimation with an accuracy better than 1° under four lighting conditions (1 lx, 250 lx, 50 klx, and alternating 1 lx/250 lx). These results were compared with the ability of a commercial eye tracker (Pupil Core) to detect the pupil of human eyes with a confidence score of 0.9 or greater. The CLP provided good results (tracking accuracy and detection rates) in all conditions. In comparison, the Pupil Core performed well in all indoor conditions (99% detection) but failed outdoors (9.85% detection). In conclusion, the CLP shows strong potential to improve the reliability of video-based eye trackers in outdoor conditions by providing an easily trackable feature.
2. Saxena S, Fink LK, Lange EB. Deep learning models for webcam eye tracking in online experiments. Behav Res Methods 2024; 56:3487-3503. PMID: 37608235; PMCID: PMC11133145; DOI: 10.3758/s13428-023-02190-6.
Abstract
Eye tracking is prevalent in scientific and commercial applications. Recent computer vision and deep learning methods enable eye tracking with off-the-shelf webcams and reduce dependence on expensive, restrictive hardware. However, such deep learning methods have not yet been applied and evaluated for remote, online psychological experiments. In this study, we tackle critical challenges faced in remote eye tracking setups and systematically evaluate appearance-based deep learning methods of gaze tracking and blink detection. From their own homes and laptops, 65 participants performed a battery of eye tracking tasks including (i) fixation, (ii) zone classification, (iii) free viewing, (iv) smooth pursuit, and (v) blink detection. Webcam recordings of the participants performing these tasks were processed offline through appearance-based models of gaze and blink detection. The task battery required different eye movements that characterized gaze and blink prediction accuracy over a comprehensive list of measures. We find the best gaze accuracy to be 2.4° and precision of 0.47°, which outperforms previous online eye tracking studies and reduces the gap between laboratory-based and online eye tracking performance. We release the experiment template, recorded data, and analysis code with the motivation to advance affordable, accessible, and scalable eye tracking that has the potential to accelerate research in the fields of psychological science, cognitive neuroscience, user experience design, and human-computer interfaces.
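Accuracy and precision figures like those reported above can be computed from raw gaze samples in a few lines. A minimal sketch, not the authors' released analysis code; the function names are illustrative, and precision is computed here as RMS of sample-to-sample distances (some studies instead use the SD of samples):

```python
import numpy as np

def accuracy_deg(gaze, target):
    """Mean angular offset (deg) between gaze samples and a known target.
    gaze: (N, 2) horizontal/vertical gaze angles; target: (2,) target angles."""
    err = np.linalg.norm(np.asarray(gaze, float) - np.asarray(target, float), axis=1)
    return float(err.mean())

def precision_rms_s2s(gaze):
    """Precision as the root mean square of successive sample-to-sample distances (deg)."""
    d = np.diff(np.asarray(gaze, float), axis=0)
    return float(np.sqrt(np.mean(np.sum(d ** 2, axis=1))))
```

For example, samples sitting a constant 2.4° from the target with no jitter yield an accuracy of 2.4° and a precision of 0°.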
Affiliation(s)
- Shreshth Saxena
- Music Depart., Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany.
- Dept. of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario, Canada.
- Lauren K Fink
- Music Depart., Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Dept. of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario, Canada
- Max Planck - NYU Center for Language Music & Emotion, Frankfurt am Main, Germany
- Elke B Lange
- Music Depart., Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
3. Guo X, Wang Y, Kan Y, Wu M, Ball LJ, Duan H. The HPA and SAM axis mediate the impairment of creativity under stress. Psychophysiology 2024; 61:e14472. PMID: 37968552; DOI: 10.1111/psyp.14472.
Abstract
With the ever-changing social environment, individual creativity faces a severe challenge induced by stress. However, little is known about the mechanisms by which acute stress affects creative cognitive processing. The current research explored the impact of the neuroendocrine response on creativity under stress and its underlying cognitive-flexibility mechanisms. An enzyme-linked immunosorbent assay was employed to assess salivary cortisol, which served as a marker of stress-induced activation of the hypothalamic-pituitary-adrenal (HPA) axis. Eye blink rate (EBR) and pupil diameter were measured as respective indicators of dopamine and noradrenaline released by activation of the sympathetic-adrenal-medullary (SAM) axis. The Wisconsin Card Sorting Test (WCST) measured cognitive flexibility, while the alternative uses task (AUT) and the remote association task (RAT) measured divergent and convergent thinking, respectively. Results showed higher cortisol increments following acute stress induction in the stress group than in the control group. Ocular results showed that the stress manipulation significantly increased EBR and pupil diameter compared to controls, reflecting increased SAM activity. Further analysis revealed that stress-released cortisol impaired the originality component of the AUT, reducing cognitive flexibility as measured by perseverative errors on the WCST. Serial mediation analyses showed that both EBR and pupil diameter were also associated with increased perseverative errors leading to poor originality on the AUT. These findings confirm that physiological arousal under stress can impair divergent thinking through the regulation of different neuroendocrine pathways, in which the deterioration of flexible switching plays an important mediating role.
Affiliation(s)
- Xiaoyu Guo
- Key Laboratory of Modern Teaching Technology, Ministry of Education, Shaanxi Normal University, Xi'an, China
- Key Laboratory of Human Development and Mental Health of Hubei Province, School of Psychology, Central China Normal University, Wuhan, China
- Yifan Wang
- Key Laboratory of Modern Teaching Technology, Ministry of Education, Shaanxi Normal University, Xi'an, China
- Yuecui Kan
- Department of Medical Psychology, Psychological Science and Health Management Center, Harbin Medical University, Harbin, China
- Meilin Wu
- Key Laboratory of Modern Teaching Technology, Ministry of Education, Shaanxi Normal University, Xi'an, China
- Linden J Ball
- School of Psychology & Computer Science, University of Central Lancashire, Preston, UK
- Haijun Duan
- Key Laboratory of Modern Teaching Technology, Ministry of Education, Shaanxi Normal University, Xi'an, China
4. Bogdan PC, Dolcos S, Buetti S, Lleras A, Dolcos F. Investigating the suitability of online eye tracking for psychological research: Evidence from comparisons with in-person data using emotion-attention interaction tasks. Behav Res Methods 2024; 56:2213-2226. PMID: 37340240; DOI: 10.3758/s13428-023-02143-z.
Abstract
The future is bound to bring rapid methodological changes to psychological research. One such promising candidate is the use of webcam-based eye tracking. Earlier research investigating the quality of online eye-tracking data has found increased spatial and temporal error compared to infrared recordings. Our studies expand on this work by investigating how this spatial error impacts researchers' ability to study psychological phenomena. We carried out two studies involving emotion-attention interaction tasks, using four participant samples. In each study, one sample involved typical in-person collection of infrared eye-tracking data, and the other involved online collection of webcam-based data. We had two main findings: First, the online data replicated seven of eight in-person results, although the effect sizes were just 52% [42%, 62%] of the size of those seen in person. Second, explaining the lack of replication in one result, we show that online eye tracking is biased toward recording more gaze points near the center of participants' screens, which can interfere with comparisons if left unchecked. Overall, our results suggest that well-powered online eye-tracking research is highly feasible, although researchers must exercise caution, collecting more participants and potentially adjusting their stimulus designs or analytic procedures.
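A center bias of the kind described here can be screened for before analysis. A hypothetical check, not taken from the paper (the function name and the 50% central-window choice are illustrative assumptions): compare the fraction of gaze points landing in a central screen window against the fraction of screen area that window covers.

```python
import numpy as np

def center_fraction(gaze_px, screen_wh, central_frac=0.5):
    """Fraction of gaze points inside a centered window spanning `central_frac`
    of each screen dimension. Under a spatially uniform distribution this would
    be about central_frac ** 2; a much larger value signals center bias."""
    w, h = screen_wh
    g = np.asarray(gaze_px, float)
    inside = (np.abs(g[:, 0] - w / 2) <= w * central_frac / 2) & \
             (np.abs(g[:, 1] - h / 2) <= h * central_frac / 2)
    return float(inside.mean())
```

With the default window, a uniform gaze distribution gives roughly 0.25; webcam recordings with strong center bias would score well above that.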
Affiliation(s)
- Paul C Bogdan
- Beckman Institute for Advanced Science and Technology, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, IL, USA
- Sanda Dolcos
- Beckman Institute for Advanced Science and Technology, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, IL, USA
- Simona Buetti
- Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, IL, USA
- Alejandro Lleras
- Beckman Institute for Advanced Science and Technology, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, IL, USA
- Florin Dolcos
- Beckman Institute for Advanced Science and Technology, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, IL, USA
- Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, USA
5. Huang Z, Duan X, Zhu G, Zhang S, Wang R, Wang Z. Assessing the data quality of AdHawk MindLink eye-tracking glasses. Behav Res Methods 2024. PMID: 38168041; DOI: 10.3758/s13428-023-02310-2.
Abstract
Most commercially available eye-tracking devices rely on video cameras and image processing algorithms to track gaze. Nonetheless, emerging technologies are entering the field, making high-speed, cameraless eye tracking more accessible. In this study, a series of tests were conducted to compare the data quality of MEMS-based eye-tracking glasses (AdHawk MindLink) with three widely used camera-based eye-tracking devices (EyeLink Portable Duo, Tobii Pro Glasses 2, and SMI Eye Tracking Glasses 2). The data quality measures assessed in these tests were accuracy, precision, data loss, and system latency. The results suggest that, overall, the data quality of the eye-tracking glasses was lower than that of the desktop EyeLink Portable Duo eye tracker. Among the eye-tracking glasses, the accuracy and precision of the MindLink were either higher than or on par with those of the Tobii Pro Glasses 2 and SMI Eye Tracking Glasses 2. The system latency of the MindLink was approximately 9 ms, significantly lower than that of the camera-based eye-tracking devices found in VR goggles. These results suggest that the MindLink eye-tracking glasses show promise for research applications where high sampling rates and low latency are preferred.
Affiliation(s)
- Zehao Huang
- Center for Psychological Sciences, Zhejiang University, 148 Tianmushan Rd., Hangzhou, 310028, China
- Xiaoting Duan
- Center for Psychological Sciences, Zhejiang University, 148 Tianmushan Rd., Hangzhou, 310028, China
- Gancheng Zhu
- Center for Psychological Sciences, Zhejiang University, 148 Tianmushan Rd., Hangzhou, 310028, China
- Shuai Zhang
- Center for Psychological Sciences, Zhejiang University, 148 Tianmushan Rd., Hangzhou, 310028, China
- Rong Wang
- Center for Psychological Sciences, Zhejiang University, 148 Tianmushan Rd., Hangzhou, 310028, China
- Zhiguo Wang
- Center for Psychological Sciences, Zhejiang University, 148 Tianmushan Rd., Hangzhou, 310028, China
6. Velisar A, Shanidze NM. Noise estimation for head-mounted 3D binocular eye tracking using Pupil Core eye-tracking goggles. Behav Res Methods 2024; 56:53-79. PMID: 37369939; PMCID: PMC11062346; DOI: 10.3758/s13428-023-02150-0.
Abstract
Head-mounted, video-based eye tracking is becoming increasingly common and has promise in a range of applications. Here, we provide a practical and systematic assessment of the sources of measurement uncertainty for one such device - the Pupil Core - in three eye-tracking domains: (1) the 2D scene camera image; (2) the physical rotation of the eye relative to the scene camera 3D space; and (3) the external projection of the estimated gaze point location onto the target plane or in relation to world coordinates. We also assess eye camera motion during active tasks relative to the eye and the scene camera, an important consideration as the rigid arrangement of eye and scene camera is essential for proper alignment of the detected gaze. We find that eye camera motion, improper gaze point depth estimation, and erroneous eye models can all lead to added noise that must be considered in the experimental design. Further, while calibration accuracy and precision estimates can help assess data quality in the scene camera image, they may not be reflective of errors and variability in gaze point estimation. These findings support the importance of eye model constancy for comparisons across experimental conditions and suggest additional assessments of data reliability may be warranted for experiments that require the gaze point or measure eye movements relative to the external world.
Affiliation(s)
- Anca Velisar
- The Smith-Kettlewell Eye Research Institute, 2318 Fillmore Street, San Francisco, CA, 94115, USA
- Natela M Shanidze
- The Smith-Kettlewell Eye Research Institute, 2318 Fillmore Street, San Francisco, CA, 94115, USA
7. Lotze A, Love K, Velisar A, Shanidze NM. A low-cost robotic oculomotor simulator for assessing eye tracking accuracy in health and disease. Behav Res Methods 2024; 56:80-92. PMID: 35948762; PMCID: PMC9911554; DOI: 10.3758/s13428-022-01938-w.
Abstract
Eye tracking accuracy is affected in individuals with vision and oculomotor deficits, impeding our ability to answer important scientific and clinical questions about these disorders. In particular, it is difficult to disambiguate decreases in eye movement accuracy from changes in the accuracy of the eye tracking itself. We propose the EyeRobot, a low-cost robotic oculomotor simulator capable of emulating healthy and compromised eye movements to provide ground-truth assessment of eye tracker performance and of how different aspects of oculomotor deficits might affect tracking accuracy. The device can operate with eccentric optical axes or large deviations between the eyes, as well as simulate oculomotor pathologies such as large fixational instabilities. We find that our design can provide accurate eye movements for both central and eccentric viewing conditions, which can be tracked using a head-mounted eye tracker, the Pupil Core. As proof of concept, we examine the effects of eccentric fixation on calibration accuracy and find that the Pupil Core's existing eye-tracking algorithm is robust to large fixation offsets. In addition, we demonstrate that the EyeRobot can simulate realistic eye movements, such as saccades and smooth pursuit, that can be tracked using video-based eye tracking. These tests suggest that the EyeRobot, an easy-to-build and flexible tool, can aid eye tracker validation and future algorithm development in healthy and compromised vision.
Affiliation(s)
- Al Lotze
- Smith-Kettlewell Eye Research Institute, 2318 Fillmore Street, San Francisco, CA, 94115, USA
- Anca Velisar
- Smith-Kettlewell Eye Research Institute, 2318 Fillmore Street, San Francisco, CA, 94115, USA
- Natela M Shanidze
- Smith-Kettlewell Eye Research Institute, 2318 Fillmore Street, San Francisco, CA, 94115, USA
8. Nieboer W, Ghiani A, de Vries R, Brenner E, Mann DL. Eye Tracking to Assess the Functional Consequences of Vision Impairment: A Systematic Review. Optom Vis Sci 2023; 100:861-875. PMID: 38165789; DOI: 10.1097/opx.0000000000002088.
Abstract
BACKGROUND: Eye tracking is a promising method for objectively assessing functional visual capabilities, but its suitability remains unclear when assessing the vision of people with vision impairment. In particular, accurate eye tracking typically relies on a stable and reliable image of the pupil and cornea, which may be compromised by abnormalities associated with vision impairment (e.g., nystagmus, aniridia).
OBJECTIVES: This study aimed to establish the degree to which video-based eye tracking can be used to assess visual function in the presence of vision impairment.
DATA SOURCES: A systematic review was conducted using the PubMed, EMBASE, and Web of Science databases, encompassing literature from inception to July 2022.
STUDY ELIGIBILITY CRITERIA, PARTICIPANTS, AND INTERVENTIONS: Studies included in the review used video-based eye tracking, included individuals with vision impairment, and used screen-based tasks unrelated to practiced skills such as reading or driving.
STUDY APPRAISAL AND SYNTHESIS METHODS: The included studies were assessed for quality using the Strengthening the Reporting of Observational Studies in Epidemiology assessment tool. Data extraction and synthesis were performed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines.
RESULTS: Our analysis revealed that five common tests of visual function were used: (i) fixation stability, (ii) smooth pursuit, (iii) saccades, (iv) free viewing, and (v) visual search. The studies reported considerable success when testing individuals with vision impairment, yielding usable data from 96.5% of participants.
LIMITATIONS: There was an overrepresentation of conditions affecting the optic nerve or macula and an underrepresentation of conditions affecting the anterior segment or peripheral retina.
CONCLUSIONS AND IMPLICATIONS OF KEY FINDINGS: The results offer promise for the use of eye tracking to assess the visual function of a considerable proportion of those with vision impairment. Based on the findings, we outline a framework for how eye tracking can be used to test visual function in the presence of vision impairment.
Affiliation(s)
- Andrea Ghiani
- Department of Human Movement Sciences, Amsterdam Movement Sciences and Institute of Brain and Behaviour Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
- Ralph de Vries
- Medical Library, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
- Eli Brenner
- Department of Human Movement Sciences, Amsterdam Movement Sciences and Institute of Brain and Behaviour Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
- David L Mann
- Department of Human Movement Sciences, Amsterdam Movement Sciences and Institute of Brain and Behaviour Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
9. Schmälzle R, Lim S, Cho HJ, Wu J, Bente G. Examining the exposure-reception-retention link in realistic communication environments via VR and eye-tracking: The VR billboard paradigm. PLoS One 2023; 18:e0291924. PMID: 38033032; PMCID: PMC10688884; DOI: 10.1371/journal.pone.0291924.
Abstract
Exposure is key to message effects. No effects can ensue if a health, political, or commercial message is not noticed. Yet, existing research in communication, advertising, and related disciplines often measures 'opportunities for exposure' at an aggregate level, whereas knowing whether recipients were 'actually exposed' to a message requires a micro-level approach. Micro-level research, on the other hand, focuses on message processing and retention, takes place under highly controlled laboratory conditions with forced message exposure, and largely ignores how recipients attend selectively to messages under more natural conditions. Eye-tracking enables us to assess actual exposure, but its previous applications were restricted to screen-based reading paradigms lacking ecological validity or field studies that suffer from limited experimental control. Our solution is to measure eye-tracking within an immersive VR environment that creates the message delivery and reception context. Specifically, we simulate a car ride down a highway alongside which billboards are placed. The VR headset (HP Omnicept Pro) provides an interactive 3D view of the environment and holds a seamlessly integrated binocular eye tracker that records the drivers' gaze and detects all fixations on the billboards. This allows us to quantify the nexus between exposure and reception rigorously, and to link our measures to subsequent memory, i.e., whether messages were remembered, forgotten, or not even encoded. An empirical study shows that incidental memory for messages differs based on participants' gaze behavior while passing the billboards. The study further shows how an experimental manipulation of attentional demands directly impacts drivers' gaze behavior and memory. We discuss the large potential of this paradigm to quantify exposure and message reception in realistic communication environments and the equally promising applications in new media contexts (e.g., the Metaverse).
Affiliation(s)
- Ralf Schmälzle
- Department of Communication, College of Communication Arts and Sciences, Michigan State University, East Lansing, Michigan, United States of America
- Sue Lim
- Department of Communication, College of Communication Arts and Sciences, Michigan State University, East Lansing, Michigan, United States of America
- Hee Jung Cho
- Department of Communication, College of Communication Arts and Sciences, Michigan State University, East Lansing, Michigan, United States of America
- Juncheng Wu
- Department of Communication, College of Communication Arts and Sciences, Michigan State University, East Lansing, Michigan, United States of America
- Gary Bente
- Department of Communication, College of Communication Arts and Sciences, Michigan State University, East Lansing, Michigan, United States of America
10. Kaduk T, Goeke C, Finger H, König P. Webcam eye tracking close to laboratory standards: Comparing a new webcam-based system and the EyeLink 1000. Behav Res Methods 2023. PMID: 37821751; DOI: 10.3758/s13428-023-02237-8.
Abstract
This paper aims to compare a new webcam-based eye-tracking system, integrated into the Labvanced platform for online experiments, to a "gold standard" lab-based eye tracker (EyeLink 1000, SR Research). Specifically, we simultaneously recorded data with both eye trackers in five different tasks, analyzing their real-time performance. These tasks were a subset of a standardized test battery for eye trackers, including a Large Grid task, Smooth Pursuit eye movements, viewing natural images, and two Head Movements tasks (roll, yaw). The results show that the webcam-based system achieved an overall accuracy of 1.4° and a precision of 1.1° (standard deviation (SD) across subjects), an error roughly 0.5° larger than that of the EyeLink system. Interestingly, both accuracy (1.3°) and precision (0.9°) were slightly better for centrally presented targets, the region of interest in many psychophysical experiments. Remarkably, the correlation of raw gaze samples between the EyeLink and the webcam-based system was about 90% for the Large Grid task and about 80% for Free View and Smooth Pursuit. Overall, these results put the performance of the webcam-based system roughly on par with mobile eye-tracking devices (Ehinger et al. PeerJ, 7, e7086, 2019; Tonsen et al., 2020) and demonstrate substantial improvement compared to existing webcam eye-tracking solutions (Papoutsaki et al., 2017).
Affiliation(s)
- Tobiasz Kaduk
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Research and Development Division, Scicovery GmbH, Paderborn, Germany
- Caspar Goeke
- Research and Development Division, Scicovery GmbH, Paderborn, Germany
- Holger Finger
- Research and Development Division, Scicovery GmbH, Paderborn, Germany
- Peter König
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
11. Faraji Y, van Rijn JW, van Nispen RMA, van Rens GHMB, Melis-Dankers BJM, Koopman J, van Rijn LJ. A toolkit for wide-screen dynamic area of interest measurements using the Pupil Labs Core Eye Tracker. Behav Res Methods 2023; 55:3820-3830. PMID: 36253600; PMCID: PMC10616213; DOI: 10.3758/s13428-022-01991-5.
Abstract
Eye tracking measurements taken while watching a wide-field screen are challenging to perform. Commercially available remote eye trackers typically do not measure more than 35 degrees in eccentricity. Analysis software was developed using Pupil Core eye-tracking data to analyze viewing behavior under circumstances as natural as possible, on a 1.55-m-wide screen allowing free head movements. Additionally, dynamic area of interest (AOI) analyses were performed on data of participants viewing traffic scenes. A toolkit was created including software for simple allocation of dynamic AOIs (semi-automatically and manually), measurement of parameters such as dwell times and time to first entry, and overlaying gaze and AOIs on video. Participants (n = 11) were asked to look at 13 dynamic AOIs in traffic scenes from appearance to disappearance in order to validate the setup and software. Different AOI margins were explored for the included objects. The median ratio between total appearance time and dwell time was about 90% for most objects when appropriate margins were chosen. This validated open-source toolkit is readily available for researchers who want to perform dynamic AOI analyses with the Pupil Core eye tracker, especially when measurements are desired on a wide screen, in various fields such as psychology, transportation, and low vision research.
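The dwell-time ratio described above (gaze time inside an AOI relative to the AOI's on-screen time) can be sketched as follows. This is a hypothetical re-implementation, not the released toolkit code; the per-sample bounding-box representation and the `margin` handling are assumptions:

```python
import numpy as np

def dwell_ratio(gaze_xy, aoi_boxes, timestamps, margin=0.0):
    """Dwell time inside a dynamic AOI divided by its total appearance time.
    gaze_xy: (N, 2) gaze points; timestamps: (N,) sample times in seconds;
    aoi_boxes: (N, 4) per-sample (x_min, y_min, x_max, y_max), with NaN rows
    while the object is off screen. `margin` expands the box on all sides."""
    g = np.asarray(gaze_xy, float)
    b = np.asarray(aoi_boxes, float)
    t = np.asarray(timestamps, float)
    dt = np.diff(t, append=t[-1])          # duration attributed to each sample
    visible = ~np.isnan(b).any(axis=1)
    inside = np.zeros(len(g), dtype=bool)
    v = visible
    inside[v] = ((g[v, 0] >= b[v, 0] - margin) & (g[v, 0] <= b[v, 2] + margin) &
                 (g[v, 1] >= b[v, 1] - margin) & (g[v, 1] <= b[v, 3] + margin))
    appearance = dt[visible].sum()
    return float(dt[inside].sum() / appearance) if appearance > 0 else float("nan")
```

Widening `margin` trades false misses for false hits, which is why the paper explores different AOI margins per object.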
Affiliation(s)
- Yasmin Faraji
- Amsterdam UMC location Vrije Universiteit Amsterdam, Ophthalmology, Amsterdam, The Netherlands
- Amsterdam Public Health, Quality of Care, Societal Participation & Health, Mental Health, Aging and Later Life, Amsterdam, The Netherlands
- Joris W van Rijn
- Amsterdam UMC location Vrije Universiteit Amsterdam, Ophthalmology, Amsterdam, The Netherlands
- Ruth M A van Nispen
- Amsterdam UMC location Vrije Universiteit Amsterdam, Ophthalmology, Amsterdam, The Netherlands
- Amsterdam Public Health, Quality of Care, Societal Participation & Health, Mental Health, Aging and Later Life, Amsterdam, The Netherlands
- Ger H M B van Rens
- Amsterdam UMC location Vrije Universiteit Amsterdam, Ophthalmology, Amsterdam, The Netherlands
- Amsterdam Public Health, Quality of Care, Societal Participation & Health, Mental Health, Aging and Later Life, Amsterdam, The Netherlands
- Bart J M Melis-Dankers
- Royal Dutch Visio, Centre of Expertise for Blind and Partially Sighted People, Huizen, The Netherlands
- Jan Koopman
- Royal Dutch Visio, Centre of Expertise for Blind and Partially Sighted People, Huizen, The Netherlands
- Laurentius J van Rijn
- Amsterdam UMC location Vrije Universiteit Amsterdam, Ophthalmology, Amsterdam, The Netherlands
- Department of Ophthalmology, Onze Lieve Vrouwe Gasthuis, Amsterdam, The Netherlands
- Amsterdam Neuroscience, Systems & Network Neurosciences, Amsterdam, The Netherlands
12. Slim MS, Hartsuiker RJ. Moving visual world experiments online? A web-based replication of Dijkgraaf, Hartsuiker, and Duyck (2017) using PCIbex and WebGazer.js. Behav Res Methods 2023; 55:3786-3804. PMID: 36323996; DOI: 10.3758/s13428-022-01989-z.
Abstract
The visual world paradigm is one of the most influential paradigms used to study real-time language processing. The present study tested whether visual world studies can be moved online, using PCIbex software (Zehr & Schwarz, 2018) and the WebGazer.js algorithm (Papoutsaki et al., 2016) to collect eye-movement data. Experiment 1 was a fixation task in which the participants looked at a fixation cross in multiple positions on the computer screen. Experiment 2 was a web-based replication of a visual world experiment by Dijkgraaf et al. (2017). First, both experiments revealed that the spatial accuracy of the data allowed us to distinguish looks across the four quadrants of the computer screen. This suggests that the spatial resolution of WebGazer.js is fine-grained enough for most visual world experiments (which typically involve a two-by-two quadrant-based set-up of the visual display). Second, both experiments revealed a delay of roughly 300 ms in the time course of the eye movements, possibly caused by the internal processing speed of the browser or WebGazer.js. This delay can be problematic for questions that require a fine-grained temporal resolution and requires further investigation.
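A delay like the roughly 300 ms reported here can be estimated by cross-correlating the webcam-derived gaze trace against a reference signal (for example, stimulus position or a lab tracker's trace). A minimal sketch under the assumption of uniformly sampled, 1-D, mean-centred traces; this is not the authors' procedure:

```python
import numpy as np

def estimate_lag_ms(reference, delayed, fs_hz):
    """Lag (ms) of `delayed` relative to `reference`, taken as the
    cross-correlation peak of the mean-centred signals; positive values
    mean `delayed` lags behind `reference`."""
    a = np.asarray(delayed, float) - np.mean(delayed)
    v = np.asarray(reference, float) - np.mean(reference)
    corr = np.correlate(a, v, mode="full")
    lag_samples = int(np.argmax(corr)) - (len(v) - 1)
    return 1000.0 * lag_samples / fs_hz
```

The resolution of this estimate is one sample period, so a high-rate reference signal helps when the delay of interest is only a few hundred milliseconds.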
Affiliation(s)
- Mieke Sarah Slim
- Department of Experimental Psychology, Ghent University, Henri Dunantlaan 2, 9000, Ghent, Belgium.
- Robert J Hartsuiker
- Department of Experimental Psychology, Ghent University, Henri Dunantlaan 2, 9000, Ghent, Belgium
13
Akerman M, Choudhary S, Liebmann JM, Cioffi GA, Chen RWS, Thakoor KA. Extracting decision-making features from the unstructured eye movements of clinicians on glaucoma OCT reports and developing AI models to classify expertise. Front Med (Lausanne) 2023; 10:1251183. [PMID: 37841006] [PMCID: PMC10571140] [DOI: 10.3389/fmed.2023.1251183]
Abstract
This study aimed to investigate the eye movement patterns of ophthalmologists with varying expertise levels during the assessment of optical coherence tomography (OCT) reports for glaucoma detection. Objectives included evaluating eye gaze metrics and patterns as a function of ophthalmic education, deriving novel features from eye-tracking, and developing binary classification models for disease detection and expertise differentiation. Thirteen ophthalmology residents, fellows, and clinicians specializing in glaucoma participated in the study. Junior residents had less than 1 year of experience, while senior residents had 2-3 years of experience. The expert group consisted of fellows and faculty with over 3 to 30+ years of experience. Each participant was presented with a set of 20 Topcon OCT reports (10 healthy and 10 glaucomatous) and was asked to determine the presence or absence of glaucoma and rate their confidence of diagnosis. The eye movements of each participant were recorded as they diagnosed the reports using a Pupil Labs Core eye tracker. Expert ophthalmologists exhibited more refined and focused eye fixations, particularly on specific regions of the OCT reports, such as the retinal nerve fiber layer (RNFL) probability map and circumpapillary RNFL b-scan. The binary classification models developed using the derived features demonstrated high accuracy up to 94.0% in differentiating between expert and novice clinicians. The derived features and trained binary classification models hold promise for improving the accuracy of glaucoma detection and distinguishing between expert and novice ophthalmologists. These findings have implications for enhancing ophthalmic education and for the development of effective diagnostic tools.
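One step the study describes, turning raw fixations into per-region features (e.g., dwell time on the RNFL probability map) for a downstream expertise classifier, can be pictured with a small sketch. The AOI boxes, pixel units, and function names below are invented for illustration; the study's actual feature set is richer.

```python
# Hypothetical sketch of AOI dwell-time feature extraction from
# fixations on an OCT report. Region bounds are invented.

AOIS = {
    "rnfl_prob_map": (0, 0, 200, 200),   # (x0, y0, x1, y1), invented bounds
    "rnfl_bscan": (0, 200, 200, 300),
}


def in_box(x, y, box):
    x0, y0, x1, y1 = box
    return x0 <= x < x1 and y0 <= y < y1


def dwell_features(fixations):
    """Sum fixation durations (ms) inside each AOI.

    `fixations` is a list of (x, y, duration_ms) tuples.
    """
    totals = {name: 0.0 for name in AOIS}
    for x, y, dur in fixations:
        for name, box in AOIS.items():
            if in_box(x, y, box):
                totals[name] += dur
    return totals
```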
Affiliation(s)
- Michelle Akerman
- Department of Biomedical Engineering, Columbia University, New York, NY, United States
- Sanmati Choudhary
- Department of Computer Science, Columbia University, New York, NY, United States
- Jeffrey M. Liebmann
- Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Irving Medical Center, New York, NY, United States
- George A. Cioffi
- Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Irving Medical Center, New York, NY, United States
- Royce W. S. Chen
- Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Irving Medical Center, New York, NY, United States
- Kaveri A. Thakoor
- Department of Biomedical Engineering, Columbia University, New York, NY, United States
- Department of Computer Science, Columbia University, New York, NY, United States
- Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Irving Medical Center, New York, NY, United States
14
Zafar A, Martin Calderon C, Yeboah AM, Dalton K, Irving E, Niechwiej-Szwedo E. Investigation of Camera-Free Eye-Tracking Glasses Compared to a Video-Based System. Sensors (Basel) 2023; 23:7753. [PMID: 37765810] [PMCID: PMC10535734] [DOI: 10.3390/s23187753]
Abstract
Technological advances in eye-tracking have resulted in lightweight, portable solutions that are capable of capturing eye movements beyond laboratory settings. Eye-tracking devices have typically relied on heavier, video-based systems to detect pupil and corneal reflections. Advances in mobile eye-tracking technology could facilitate research and its application in ecological settings, allowing more traditional laboratory methods to be modified and transferred to real-world scenarios. One recent technology, the AdHawk MindLink, introduced a novel camera-free system embedded in typical eyeglass frames. This paper evaluates the AdHawk MindLink by comparing its eye-tracking recordings with a research "gold standard", the EyeLink II. By concurrently capturing data from both eyes, we compare the capability of each eye tracker to quantify metrics from fixation, saccade, and smooth pursuit tasks (typical elements in eye movement research) across a sample of 13 adults. The MindLink system was capable of capturing fixation stability within a radius of less than 0.5°, estimating horizontal saccade amplitudes with an accuracy of 0.04° ± 2.3°, vertical saccade amplitudes with an accuracy of 0.32° ± 2.3°, and smooth pursuit speeds with an accuracy of 0.5 to 3°/s, depending on the pursuit speed. While the performance of the MindLink system in measuring fixation stability, saccade amplitude, and smooth pursuit eye movements was slightly inferior to the video-based system, the MindLink provides sufficient gaze-tracking capabilities for dynamic settings and experiments.
Affiliation(s)
- Abdullah Zafar
- Department of Kinesiology & Health Sciences, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Claudia Martin Calderon
- Department of Kinesiology & Health Sciences, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Anne Marie Yeboah
- School of Optometry & Vision Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Kristine Dalton
- School of Optometry & Vision Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Elizabeth Irving
- School of Optometry & Vision Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Ewa Niechwiej-Szwedo
- Department of Kinesiology & Health Sciences, University of Waterloo, Waterloo, ON N2L 3G1, Canada
15
Onkhar V, Dodou D, de Winter JCF. Evaluating the Tobii Pro Glasses 2 and 3 in static and dynamic conditions. Behav Res Methods 2023. [PMID: 37550466] [DOI: 10.3758/s13428-023-02173-7]
Abstract
Over the past few decades, there have been significant developments in eye-tracking technology, particularly in the domain of mobile, head-mounted devices. Nevertheless, questions remain regarding the accuracy of these eye-trackers during static and dynamic tasks. In light of this, we evaluated the performance of two widely used devices: Tobii Pro Glasses 2 and Tobii Pro Glasses 3. A total of 36 participants engaged in tasks under three dynamicity conditions. In the "seated with a chinrest" trial, only the eyes could be moved; in the "seated without a chinrest" trial, both the head and the eyes were free to move; and during the walking trial, participants walked along a straight path. During the seated trials, participants' gaze was directed towards dots on a wall by means of audio instructions, whereas in the walking trial, participants maintained their gaze on a bullseye while walking towards it. Eye-tracker accuracy was determined using computer vision techniques to identify the target within the scene camera image. The findings showed that Tobii 3 outperformed Tobii 2 in terms of accuracy during the walking trials. Moreover, the results suggest that employing a chinrest in the case of head-mounted eye-trackers is counterproductive, as it necessitates larger eye eccentricities for target fixation, thereby compromising accuracy compared to not using a chinrest, which allows for head movement. Lastly, it was found that participants who reported higher workload demonstrated poorer eye-tracking accuracy. The current findings may be useful in the design of experiments that involve head-mounted eye-trackers.
Affiliation(s)
- V Onkhar
- Department of Cognitive Robotics, Delft University of Technology, Delft, The Netherlands
- D Dodou
- Department of Biomechanical Engineering, Delft University of Technology, Delft, The Netherlands
- J C F de Winter
- Department of Cognitive Robotics, Delft University of Technology, Delft, The Netherlands
16
Adhanom IB, MacNeilage P, Folmer E. Eye Tracking in Virtual Reality: a Broad Review of Applications and Challenges. Virtual Reality 2023; 27:1481-1505. [PMID: 37621305] [PMCID: PMC10449001] [DOI: 10.1007/s10055-022-00738-z]
Abstract
Eye tracking is becoming increasingly available in head-mounted virtual reality displays, with several headsets featuring integrated eye trackers already commercially available. The applications of eye tracking in virtual reality are highly diversified and span multiple disciplines. As a result, the number of peer-reviewed publications that study eye tracking applications has surged in recent years. We performed a broad review, comprehensively searching academic literature databases, to assess the extent of published research dealing with applications of eye tracking in virtual reality and to highlight challenges, limitations, and areas for future research.
Affiliation(s)
- Paul MacNeilage
- University of Nevada Reno, 1664 N Virginia St, Reno, NV 89557, USA
- Eelke Folmer
- University of Nevada Reno, 1664 N Virginia St, Reno, NV 89557, USA
17
Robert FM, Abiven B, Sinou M, Heggarty K, Adam L, Nourrit V, de Bougrenet de la Tocnaye JL. Contact lens embedded holographic pointer. Sci Rep 2023; 13:6919. [PMID: 37106122] [PMCID: PMC10140282] [DOI: 10.1038/s41598-023-33420-8]
Abstract
In this paper we present an infrared laser pointer, consisting of a vertical-cavity surface-emitting laser (VCSEL) and a diffractive optical element (DOE), encapsulated into a scleral contact lens (SCL). The VCSEL is powered remotely by inductive coupling from a primary antenna embedded into an eyewear frame. The DOE is used either to collimate the laser beam or to project a pattern image at a chosen distance in front of the eye. We detail the different SCL constitutive blocks and how they are manufactured and assembled. We particularly emphasize the various technological challenges related to their encapsulation in the reduced volume of the SCL, while keeping the pupil free. Finally, we describe how the laser pointer operates, what its performance is (e.g., collimation, image formation), and how it can be used effectively in various application fields such as visual assistance and augmented reality.
Affiliation(s)
- François-Maël Robert
- Département Optique, IMT Atlantique, Technopôle Brest-Iroise, 655 Avenue du Technopôle, CS 83818 - 29238, Brest Cedex 3, France
- Bernard Abiven
- Département Optique, IMT Atlantique, Technopôle Brest-Iroise, 655 Avenue du Technopôle, CS 83818 - 29238, Brest Cedex 3, France
- Maïna Sinou
- Département Optique, IMT Atlantique, Technopôle Brest-Iroise, 655 Avenue du Technopôle, CS 83818 - 29238, Brest Cedex 3, France
- Kevin Heggarty
- Département Optique, IMT Atlantique, Technopôle Brest-Iroise, 655 Avenue du Technopôle, CS 83818 - 29238, Brest Cedex 3, France
- Laure Adam
- LCS, 14 Place Gardin, 14000, Caen, France
- Vincent Nourrit
- Département Optique, IMT Atlantique, Technopôle Brest-Iroise, 655 Avenue du Technopôle, CS 83818 - 29238, Brest Cedex 3, France
18
Lohr D, Aziz S, Friedman L, Komogortsev OV. GazeBaseVR, a large-scale, longitudinal, binocular eye-tracking dataset collected in virtual reality. Sci Data 2023; 10:177. [PMID: 36997558] [PMCID: PMC10060927] [DOI: 10.1038/s41597-023-02075-5]
Abstract
We present GazeBaseVR, a large-scale, longitudinal, binocular eye-tracking (ET) dataset collected at 250 Hz with an ET-enabled virtual-reality (VR) headset. GazeBaseVR comprises 5,020 binocular recordings from a diverse population of 407 college-aged participants. Participants were recorded up to six times each over a 26-month period, each time performing a series of five different ET tasks: (1) a vergence task, (2) a horizontal smooth pursuit task, (3) a video-viewing task, (4) a self-paced reading task, and (5) a random oblique saccade task. Many of these participants have also been recorded for two previously published datasets with different ET devices, and 11 participants were recorded before and after COVID-19 infection and recovery. GazeBaseVR is suitable for a wide range of research on ET data in VR devices, especially eye movement biometrics due to its large population and longitudinal nature. In addition to ET data, additional participant details are provided to enable further research on topics such as fairness.
19
Adhikari K. Application of selected neuroscientific methods in consumer sensory analysis: A review. J Food Sci 2023; 88:53-64. [PMID: 36915966] [DOI: 10.1111/1750-3841.16526]
Abstract
Neuromarketing or consumer neuroscience is a relatively new market research subdiscipline that has gained popularity among consumer behavior scientists in the past two decades or so. It combines neurobiology with behavioral psychology to understand consumer behavior, more specifically decisions related to choices/preferences and purchase. The purpose of this review is to explore the potential of using neuroscientific methods in consumer sensory science research. This is by no means an exhaustive review, given the countless articles on neuromarketing and consumer neuroscience in the literature. The author has tried to show the applicability of neuroscientific methods in consumer sensory sciences, specifically electroencephalography and eye tracking, which could potentially "complement" sensory methodologies to gain better consumer insight. Both techniques are relatively inexpensive, portable, and minimally invasive, and are already being used by some sensory scientists. They could be incorporated with ease into the research portfolio of consumer sensory researchers who would like to use them to study consumer affect. It is recommended that researchers use proper experimental designs that account for confounding variables as much as possible. The two methods mentioned above have proven relatively reliable and repeatable. Lastly, these methods also require ethical oversight because of the involvement of human subjects.
Affiliation(s)
- Koushik Adhikari
- Department of Food Science and Technology, University of Georgia, Griffin, Georgia, USA
20
Holmqvist K, Örbom SL, Hooge ITC, Niehorster DC, Alexander RG, Andersson R, Benjamins JS, Blignaut P, Brouwer AM, Chuang LL, Dalrymple KA, Drieghe D, Dunn MJ, Ettinger U, Fiedler S, Foulsham T, van der Geest JN, Hansen DW, Hutton SB, Kasneci E, Kingstone A, Knox PC, Kok EM, Lee H, Lee JY, Leppänen JM, Macknik S, Majaranta P, Martinez-Conde S, Nuthmann A, Nyström M, Orquin JL, Otero-Millan J, Park SY, Popelka S, Proudlock F, Renkewitz F, Roorda A, Schulte-Mecklenbeck M, Sharif B, Shic F, Shovman M, Thomas MG, Venrooij W, Zemblys R, Hessels RS. Eye tracking: empirical foundations for a minimal reporting guideline. Behav Res Methods 2023; 55:364-416. [PMID: 35384605] [PMCID: PMC9535040] [DOI: 10.3758/s13428-021-01762-8]
Abstract
In this paper, we present a review of how the various aspects of any study using an eye tracker (such as the instrument, methodology, environment, participant, etc.) affect the quality of the recorded eye-tracking data and the obtained eye-movement and gaze measures. We take this review to represent the empirical foundation for reporting guidelines of any study involving an eye tracker. We compare this empirical foundation to five existing reporting guidelines and to a database of 207 published eye-tracking studies. We find that reporting guidelines vary substantially and do not match actual reporting practices. We end by deriving a minimal, flexible reporting guideline based on empirical research (Section "An empirically based minimal reporting guideline").
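Two of the data-quality measures such a guideline asks authors to report, accuracy (mean angular offset from a known target) and RMS sample-to-sample precision, can be sketched as follows. This is a simplified illustration assuming gaze samples are already expressed as angular positions in degrees; it is not code from the paper.

```python
# Simplified sketch of two eye-tracking data-quality measures:
# accuracy and RMS-S2S precision, on gaze samples in degrees.
import math


def accuracy_deg(gaze, target):
    """Mean Euclidean angular offset (deg) of gaze samples from a target."""
    tx, ty = target
    offsets = [math.hypot(x - tx, y - ty) for x, y in gaze]
    return sum(offsets) / len(offsets)


def rms_s2s_deg(gaze):
    """Root-mean-square of sample-to-sample displacements (deg)."""
    d2 = [
        (x2 - x1) ** 2 + (y2 - y1) ** 2
        for (x1, y1), (x2, y2) in zip(gaze, gaze[1:])
    ]
    return math.sqrt(sum(d2) / len(d2))
```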
Affiliation(s)
- Kenneth Holmqvist
- Department of Psychology, Nicolaus Copernicus University, Torun, Poland.
- Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa.
- Department of Psychology, Regensburg University, Regensburg, Germany.
- Saga Lee Örbom
- Department of Psychology, Regensburg University, Regensburg, Germany
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Diederick C Niehorster
- Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
- Robert G Alexander
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Jeroen S Benjamins
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Social, Health and Organizational Psychology, Utrecht University, Utrecht, The Netherlands
- Pieter Blignaut
- Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa
- Lewis L Chuang
- Department of Ergonomics, Leibniz Institute for Working Environments and Human Factors, Dortmund, Germany
- Institute of Informatics, LMU Munich, Munich, Germany
- Denis Drieghe
- School of Psychology, University of Southampton, Southampton, UK
- Matt J Dunn
- School of Optometry and Vision Sciences, Cardiff University, Cardiff, UK
- Susann Fiedler
- Vienna University of Economics and Business, Vienna, Austria
- Tom Foulsham
- Department of Psychology, University of Essex, Essex, UK
- Dan Witzner Hansen
- Machine Learning Group, Department of Computer Science, IT University of Copenhagen, Copenhagen, Denmark
- Enkelejda Kasneci
- Human-Computer Interaction, University of Tübingen, Tübingen, Germany
- Paul C Knox
- Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
- Ellen M Kok
- Department of Education and Pedagogy, Division Education, Faculty of Social and Behavioral Sciences, Utrecht University, Utrecht, The Netherlands
- Department of Online Learning and Instruction, Faculty of Educational Sciences, Open University of the Netherlands, Heerlen, The Netherlands
- Helena Lee
- University of Southampton, Southampton, UK
- Joy Yeonjoo Lee
- School of Health Professions Education, Faculty of Health, Medicine, and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Jukka M Leppänen
- Department of Psychology and Speech-Language Pathology, University of Turku, Turku, Finland
- Stephen Macknik
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Päivi Majaranta
- TAUCHI Research Center, Computing Sciences, Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland
- Susana Martinez-Conde
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Antje Nuthmann
- Institute of Psychology, University of Kiel, Kiel, Germany
- Marcus Nyström
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Jacob L Orquin
- Department of Management, Aarhus University, Aarhus, Denmark
- Center for Research in Marketing and Consumer Psychology, Reykjavik University, Reykjavik, Iceland
- Jorge Otero-Millan
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Soon Young Park
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Medical University of Vienna, Vienna, Austria
- Stanislav Popelka
- Department of Geoinformatics, Palacký University Olomouc, Olomouc, Czech Republic
- Frank Proudlock
- The University of Leicester Ulverscroft Eye Unit, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, UK
- Frank Renkewitz
- Department of Psychology, University of Erfurt, Erfurt, Germany
- Austin Roorda
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Bonita Sharif
- School of Computing, University of Nebraska-Lincoln, Lincoln, Nebraska, USA
- Frederick Shic
- Center for Child Health, Behavior and Development, Seattle Children's Research Institute, Seattle, WA, USA
- Department of General Pediatrics, University of Washington School of Medicine, Seattle, WA, USA
- Mark Shovman
- Eyeviation Systems, Herzliya, Israel
- Department of Industrial Design, Bezalel Academy of Arts and Design, Jerusalem, Israel
- Mervyn G Thomas
- The University of Leicester Ulverscroft Eye Unit, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, UK
- Ward Venrooij
- Electrical Engineering, Mathematics and Computer Science (EEMCS), University of Twente, Enschede, The Netherlands
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
21
Berlijn AM, Hildebrandt LK, Gamer M. Idiosyncratic viewing patterns of social scenes reflect individual preferences. J Vis 2022; 22:10. [PMID: 36583910] [PMCID: PMC9807181] [DOI: 10.1167/jov.22.13.10]
Abstract
In general, humans preferentially look at conspecifics in naturalistic images. However, such group-based effects might conceal systematic individual differences concerning the preference for social information. Here, we investigated to what degree fixations on social features occur consistently within observers and whether this preference generalizes to other measures of social prioritization in the laboratory as well as the real world. Participants carried out a free viewing task and a relevance taps task that required them to actively select image regions crucial for understanding a given scene, and they were asked to freely take photographs outside the laboratory that were later classified regarding their social content. We observed stable individual differences in the fixation and active selection of human heads and faces that were correlated across tasks and partly predicted the social content of self-taken photographs. Such a relationship was not observed for human bodies, indicating that different social elements need to be dissociated. These findings suggest that idiosyncrasies in the visual exploration and interpretation of social features exist and predict real-world behavior. Future studies should further characterize these preferences and elucidate how they shape perception and interpretation of social contexts in healthy participants and patients with mental disorders that affect social functioning.
Affiliation(s)
- Adam M. Berlijn
- Department of Experimental Psychology, Heinrich-Heine-University Düsseldorf, Düsseldorf, Germany
- Institute of Clinical Neuroscience and Medical Psychology, Medical Faculty, University Hospital Düsseldorf, Heinrich-Heine University Düsseldorf, Düsseldorf, Germany
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany
- Department of Psychology, Julius-Maximilians-University Würzburg, Würzburg, Germany
- Lea K. Hildebrandt
- Department of Psychology, Julius-Maximilians-University Würzburg, Würzburg, Germany
- Matthias Gamer
- Department of Psychology, Julius-Maximilians-University Würzburg, Würzburg, Germany
22
Tanwear A, Liang X, Paz E, Bohnert T, Ghannam R, Ferreira R, Heidari H. Spintronic Eyeblink Gesture Sensor With Wearable Interface System. IEEE Trans Biomed Circuits Syst 2022; 16:779-792. [PMID: 35830413] [DOI: 10.1109/tbcas.2022.3190689]
Abstract
This work presents an eyeblink system that detects magnets placed on the eyelid via integrated magnetic sensors and an analogue circuit on an eyewear frame (without a glass lens). The eyelid magnets were detected using tunnelling magnetoresistance (TMR) bridge sensors with a sensitivity of 14 mV/V/Oe, positioned centre-right and centre-left of the eyewear frame. Each eye side has a single TMR sensor wired to a single circuit, where the signal was filtered (<0.5 Hz and >30 Hz) and amplified to detect the weak magnetic field produced by the 3 mm diameter, 0.5 mm thick N42 neodymium magnets attached to a medical tape strip, for the adult-age demographic. Each eyeblink command was initiated by a trigger command (right eyeblink) followed by the appropriate command: right, left, or both eyeblinks. The eyeblink gesture system showed repeatability, enabling blink classification based on an analogue signal amplitude threshold. As a result, the signal can be scaled and classified, as well as integrated with a Bluetooth module in real time. This will enable end-users to connect to various other Bluetooth-enabled devices for wireless assistive technologies. The eyeblink system was tested by 14 participants via a stimuli-based game. Within an average time of 185 seconds, the system demonstrated a group mean accuracy of 72% for 40 commands. Moreover, the maximum information transfer rate (ITR) of the participants was 35.95 bits per minute.
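The amplitude-threshold classification the abstract mentions can be pictured with a toy decision function: a blink registers on a side when that side's (filtered) TMR signal amplitude exceeds a threshold. The threshold value, units, and function name below are invented for illustration and are not taken from the paper.

```python
# Toy sketch of amplitude-threshold blink classification for one
# signal frame. Threshold and units are invented.

def classify_blink(left_amp, right_amp, threshold=0.5):
    """Return 'left', 'right', 'both', or None for one signal frame."""
    left = abs(left_amp) > threshold
    right = abs(right_amp) > threshold
    if left and right:
        return "both"
    if left:
        return "left"
    if right:
        return "right"
    return None
```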
23
Wagner P, Ho A, Kim J. Estimating 3D spatiotemporal point of regard: a device evaluation. J Opt Soc Am A Opt Image Sci Vis 2022; 39:1343-1351. [PMID: 36215577] [DOI: 10.1364/josaa.457663]
Abstract
This paper presents and evaluates a system and method that record spatiotemporal scene information and the location of the center of visual attention, i.e., the spatiotemporal point of regard (PoR), in ecological environments. A primary research application of the proposed system and method is enhancing current 2D visual attention models. Current eye-tracking approaches collapse a scene's depth structures to a 2D image, omitting visual cues that trigger important functions of the human visual system (e.g., accommodation and vergence). We combined head-mounted eye-tracking with a miniature time-of-flight camera to produce a system that could be used to estimate the spatiotemporal location of the PoR (the point of highest visual attention) within 3D scene layouts. Maintaining calibration accuracy is a primary challenge for gaze mapping; hence, we measured accuracy repeatedly by matching the PoR to fixated targets arranged within a range of working distances in depth. Accuracy was estimated as the deviation of the estimated PoR from known locations of scene targets. We found that estimates of 3D PoR had an overall accuracy of approximately 2° omnidirectional mean average error (OMAE), with variation over a 1 h recording maintained within 3.6° OMAE. This method can be used to determine accommodation and vergence cues of the human visual system continuously within habitual environments, including everyday applications (e.g., use of hand-held devices).
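The omnidirectional angular error behind a metric like OMAE can be sketched as the angle between estimated and true gaze directions, averaged over samples. A minimal sketch under the assumption that both directions are expressed as 3D vectors from the eye; the names are illustrative, not taken from the paper.

```python
# Sketch: angular error between 3D direction vectors and its mean.
import math


def angular_error_deg(v_est, v_true):
    """Angle (deg) between an estimated and a true direction vector."""
    dot = sum(a * b for a, b in zip(v_est, v_true))
    n1 = math.sqrt(sum(a * a for a in v_est))
    n2 = math.sqrt(sum(b * b for b in v_true))
    cos = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp rounding error
    return math.degrees(math.acos(cos))


def omae_deg(pairs):
    """Mean angular error over (estimated, true) direction pairs."""
    errs = [angular_error_deg(e, t) for e, t in pairs]
    return sum(errs) / len(errs)
```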
24
Hsu WY, Cheng YW, Tsai CB. An Effective Algorithm to Analyze the Optokinetic Nystagmus Waveforms from a Low-Cost Eye Tracker. Healthcare (Basel) 2022; 10:1281. [PMID: 35885808] [PMCID: PMC9320438] [DOI: 10.3390/healthcare10071281]
Abstract
Objective: Most neurological diseases are accompanied by changes in the oculomotor nerve. Analysis of different types of eye movements helps provide important information in ophthalmology, neurology, and psychology. At present, many scholars use optokinetic nystagmus (OKN) to study the physiological phenomena of eye movement. OKN is an involuntary eye movement induced by a large moving surrounding visual field. It consists of a slow pursuing eye movement, called the "slow phase" (SP), and a fast re-fixating saccadic eye movement, called the "fast phase" (FP). Non-invasive video-oculography has been used increasingly in eye movement research. However, research-grade eye trackers are often expensive and less accessible to most researchers. Using a low-cost eye tracker to quantitatively measure OKN eye movements would facilitate the broader application of eye movement research. Methods & Results: We design an analytical algorithm to quantitatively measure OKN eye movements on a low-cost eye tracker. Using simple conditional filtering, accurate FP positions can be obtained quickly. The high FP recognition rate greatly helps the subsequent calculation of eye-movement analysis parameters, such as the mean slow phase velocity (MSPV), which can serve as a reference index for patients with strabismus and other eye diseases. Conclusions: Experimental results indicate that the proposed method achieves faster and better results than other approaches and provides an effective algorithm to calculate and analyze the FP positions of OKN waveforms.
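The general idea of separating fast phases from slow phases with a velocity criterion, and then averaging slow-phase velocity into MSPV, can be sketched as follows. The sampling rate and threshold are illustrative assumptions; the paper's conditional-filtering rules for FP detection are more elaborate.

```python
# Sketch: velocity-threshold fast-phase rejection and MSPV on an
# OKN position trace. Sampling rate and threshold are illustrative.

def mspv(positions, fs=60.0, fp_threshold=100.0):
    """Mean slow-phase velocity (deg/s) of an OKN position trace (deg).

    Samples whose instantaneous speed exceeds `fp_threshold` (deg/s)
    are treated as fast-phase (saccadic) samples and excluded.
    """
    velocities = [
        (p2 - p1) * fs  # deg/sample -> deg/s
        for p1, p2 in zip(positions, positions[1:])
    ]
    slow = [abs(v) for v in velocities if abs(v) < fp_threshold]
    return sum(slow) / len(slow) if slow else 0.0
```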
Collapse
Affiliation(s)
- Wei-Yen Hsu: Department of Information Management, National Chung Cheng University, Chiayi 621, Taiwan; Center for Innovative Research on Aging Society, National Chung Cheng University, Chiayi 621, Taiwan; Advanced Institute of Manufacturing with High-Tech Innovations, National Chung Cheng University, Chiayi 621, Taiwan
- Ya-Wen Cheng: Department of Information Management, National Chung Cheng University, Chiayi 621, Taiwan
- Chong-Bin Tsai: Department of Ophthalmology, Ditmanson Medical Foundation Chiayi Christian Hospital, Chiayi 600, Taiwan; Department of Optometry, College of Medical and Health Science, Asia University, Chiayi 600, Taiwan (corresponding author; Tel.: +886-5-2765041 #8503)
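The mean slow phase velocity (MSPV) metric mentioned in the abstract of this entry can be sketched as a simple velocity-threshold filter. This is a generic illustration under assumed parameters (uniformly sampled positions in degrees, a hypothetical 100 deg/s fast-phase cutoff), not the conditional-filtering algorithm proposed by Hsu et al.:

```python
def mean_slow_phase_velocity(positions, fs, fp_threshold=100.0):
    """Estimate mean slow-phase velocity (deg/s) of an OKN position trace.

    positions: gaze positions in degrees, uniformly sampled at fs Hz.
    Sample-to-sample velocities whose magnitude exceeds fp_threshold are
    treated as fast-phase (saccadic) samples and excluded; the remaining
    slow-phase velocity magnitudes are averaged.
    """
    velocities = [(b - a) * fs for a, b in zip(positions, positions[1:])]
    slow = [abs(v) for v in velocities if abs(v) < fp_threshold]
    return sum(slow) / len(slow) if slow else 0.0
```

On a sawtooth-like OKN trace sampled at 1 kHz, the large velocity spikes at the fast-phase resets fall above the threshold and are excluded, so the estimate recovers the slow-phase drift rate.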
|
25
|
Holmqvist K, Örbom SL, Zemblys R. Small head movements increase and colour noise in data from five video-based P-CR eye trackers. Behav Res Methods 2022; 54:845-863. [PMID: 34357538 PMCID: PMC8344338 DOI: 10.3758/s13428-021-01648-9] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/06/2021] [Indexed: 11/08/2022]
Abstract
We empirically investigate the role of small, almost imperceptible balance and breathing movements of the head on the level and colour of noise in data from five commercial video-based P-CR eye trackers. By comparing noise from recordings with completely static artificial eyes to noise from recordings where the artificial eyes are worn by humans, we show that very small head movements increase the level and colouring of the noise in data recorded from all five eye trackers in this study. This increase in noise level is seen not only in the gaze signal, but also in the P and CR signals of the eye trackers that provide these camera-image features. The P and CR signals of the SMI eye trackers correlate strongly during small head movements, but less so or not at all when the head is completely still, indicating that head movements are registered by the P and CR images in the eye camera. By recording with artificial eyes, we can also show that the pupil-size artefact has no major role in increasing and colouring noise. Our findings add to and replicate the observation by Niehorster et al. (2021) that lowpass filters in video-based P-CR eye trackers colour the data. Irrespective of the source, filters or head movements, coloured noise can be confused with oculomotor drift. We also find that using the default head restriction in the EyeLink 1000+, the EyeLink II and the HiSpeed240 results in noisier data compared to less head restriction. Researchers investigating data quality in eye trackers should consider not using the Gen 2 artificial eye from SR Research / EyeLink: data recorded with this artificial eye are much noisier than data recorded with other artificial eyes, on average 2.2-14.5 times worse for the five eye trackers.
Affiliation(s)
- Kenneth Holmqvist: Institute of Psychology, Nicolaus Copernicus University in Torun, Torun, Poland; Department of Psychology, Regensburg University, Regensburg, Germany; Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa
- Saga Lee Örbom: Department of Psychology, Regensburg University, Regensburg, Germany
|
26
|
Sanz Diez P, Bosco A, Fattori P, Wahl S. Horizontal target size perturbations during grasping movements are described by subsequent size perception and saccade amplitude. PLoS One 2022; 17:e0264560. [PMID: 35290373 PMCID: PMC8923441 DOI: 10.1371/journal.pone.0264560] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2021] [Accepted: 02/14/2022] [Indexed: 11/18/2022] Open
Abstract
Perception and action are essential in our day-to-day interactions with the environment. Despite the dual-stream theory of action and perception, it is now accepted that action and perception processes interact with each other. However, little is known about how unpredicted changes of target size during grasping actions affect perception. We assessed whether size perception and saccade amplitude were affected before and after grasping a target that changed its horizontal size during action execution, in the presence or absence of tactile feedback. We tested twenty-one participants in 4 blocks of 30 trials. Blocks were divided into two tactile feedback paradigms: tactile and non-tactile. Trials consisted of 3 sequential phases: pre-grasping size perception, grasping, and post-grasping size perception. During the pre- and post-grasping phases, participants executed a saccade towards a horizontal bar and manually estimated the bar’s size. During the grasping phase, participants were asked to execute a saccade towards the bar and to make a grasping action towards the screen. While they grasped, one of 3 horizontal size perturbation conditions was applied: non-perturbation, shortening, or lengthening. Perturbations occurred in 30% of the trials, with the bar symmetrically shortened or lengthened by 33% of its original size. Participants’ hand and eye positions were assessed by a motion capture system and a mobile eye-tracker, respectively. After grasping, in both the tactile and non-tactile feedback paradigms, size estimation was significantly reduced in the lengthening (p = 0.002) and non-perturbation (p<0.001) conditions, whereas shortening did not induce significant adjustments (p = 0.86). After grasping, saccade amplitude became significantly longer in shortening (p<0.001) and significantly shorter in lengthening (p<0.001). The non-perturbation condition showed no adjustments (p = 0.95). Tactile feedback did not change the collected perceptual responses, but horizontal size perturbations did, suggesting that all relevant target information used in the movement can be extracted from the post-action perception of the target.
Affiliation(s)
- Pablo Sanz Diez (corresponding author): Carl Zeiss Vision International GmbH, Aalen, Germany; Institute for Ophthalmic Research, Eberhard Karls University Tuebingen, Tuebingen, Germany
- Annalisa Bosco (corresponding author): Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy; Alma Mater Research Institute For Human-Centered Artificial Intelligence (Alma Human AI), University of Bologna, Bologna, Italy
- Patrizia Fattori: Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy; Alma Mater Research Institute For Human-Centered Artificial Intelligence (Alma Human AI), University of Bologna, Bologna, Italy
- Siegfried Wahl: Carl Zeiss Vision International GmbH, Aalen, Germany; Institute for Ophthalmic Research, Eberhard Karls University Tuebingen, Tuebingen, Germany
|
27
|
Massin L, Lahuec C, Seguin F, Nourrit V, de Bougrenet de la Tocnaye JL. Multipurpose Bio-Monitored Integrated Circuit in a Contact Lens Eye-Tracker. SENSORS (BASEL, SWITZERLAND) 2022; 22:595. [PMID: 35062555 PMCID: PMC8778089 DOI: 10.3390/s22020595] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/18/2021] [Revised: 12/28/2021] [Accepted: 01/11/2022] [Indexed: 02/01/2023]
Abstract
We present the design, fabrication, and test of a multipurpose integrated circuit (Application-Specific Integrated Circuit, ASIC) in AMS 0.35 µm Complementary Metal Oxide Semiconductor technology. The circuit is embedded in a scleral contact lens together with photodiodes, enabling gaze direction detection when the lens is illuminated and wirelessly powered by an eyewear frame. The gaze direction is determined by means of a centroid computation from the measured photocurrents. The ASIC is simultaneously used to detect specific eye-blink sequences, for instance to validate target designations. Experimental measurements and validation are performed on a scleral contact lens prototype integrating four infrared photodiodes, mounted on a mock-up eyeball and combined with an artificial eyelid. The eye-tracker has an accuracy of 0.2°, i.e., 2.5 times better than current mobile video-based eye-trackers, and is robust with respect to process variations, operating time, and supply voltage. Variations of the computed gaze direction transmitted to the eyewear when the eyelid moves are detected and can be interpreted as commands, based on blink duration or on blink alternation between the two eyes.
Affiliation(s)
- Loïc Massin: Optics Department, Institut Mines-Télécom Atlantique, Technopôle Brest Iroise, CS 83818, CEDEX 03, 29238 Brest, Brittany, France; Laboratoire des Sciences et Techniques de l’Information, de la Communication et de la Connaissance, UMR 6285, 29238 Brest, Brittany, France
- Cyril Lahuec: Optics Department, Institut Mines-Télécom Atlantique, Technopôle Brest Iroise, CS 83818, CEDEX 03, 29238 Brest, Brittany, France; Laboratoire des Sciences et Techniques de l’Information, de la Communication et de la Connaissance, UMR 6285, 29238 Brest, Brittany, France
- Fabrice Seguin: Optics Department, Institut Mines-Télécom Atlantique, Technopôle Brest Iroise, CS 83818, CEDEX 03, 29238 Brest, Brittany, France; Laboratoire des Sciences et Techniques de l’Information, de la Communication et de la Connaissance, UMR 6285, 29238 Brest, Brittany, France
- Vincent Nourrit: Optics Department, Institut Mines-Télécom Atlantique, Technopôle Brest Iroise, CS 83818, CEDEX 03, 29238 Brest, Brittany, France
- Jean-Louis de Bougrenet de la Tocnaye: Optics Department, Institut Mines-Télécom Atlantique, Technopôle Brest Iroise, CS 83818, CEDEX 03, 29238 Brest, Brittany, France
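The centroid computation described in the abstract of this entry can be illustrated with a toy four-photodiode estimate. The function below is a hypothetical sketch (normalized imbalance of opposing photocurrents), not the actual on-chip computation of the ASIC:

```python
def gaze_centroid(i_left, i_right, i_bottom, i_top):
    """Estimate a normalized gaze direction from four photocurrents.

    Returns (x, y), each in [-1, 1]: the photocurrent-weighted centroid
    of the four detectors, so equal currents map to straight-ahead (0, 0).
    """
    total = i_left + i_right + i_bottom + i_top
    if total <= 0:
        raise ValueError("photodiodes receive no illumination")
    x = (i_right - i_left) / total   # horizontal imbalance
    y = (i_top - i_bottom) / total   # vertical imbalance
    return x, y
```

A stronger current on the right-hand photodiode shifts the estimate toward positive x, mirroring how an off-axis illumination spot would weight the detectors.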
|
28
|
Negative affect impedes perceptual filling-in in the uniformity illusion. Conscious Cogn 2021; 98:103258. [PMID: 34965506 DOI: 10.1016/j.concog.2021.103258] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2021] [Revised: 11/01/2021] [Accepted: 12/03/2021] [Indexed: 02/02/2023]
Abstract
The notion of cognitive penetrability, i.e., whether perceptual contents can in principle be influenced by non-perceptual factors, has sparked a significant debate over methodological concerns and the correct interpretation of existing findings. In this study, we combined predictive processing models of visual perception and affective states to investigate influences of affective valence on perceptual filling-in in extrafoveal vision. We tested how experimentally induced affect would influence the probability of perceptual filling-in occurring in the uniformity illusion (N = 50). Negative affect led to reduced occurrence rates and increased onset times of visual uniformity. This effect was selectively observed in illusionary trials, requiring perceptual filling-in, and not in control trials, where uniformity was the veridical percept, ruling out biased motor responses or deliberate judgments as confounding variables. This suggests an influential role of affective status on subsequent perceptual processing, specifically on how much weight is ascribed to priors as opposed to sensory evidence.
|
29
|
An implicit representation of stimulus ambiguity in pupil size. Proc Natl Acad Sci U S A 2021; 118:e2107997118. [PMID: 34819369 DOI: 10.1073/pnas.2107997118] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/07/2021] [Indexed: 11/18/2022] Open
Abstract
To guide behavior, perceptual systems must operate on intrinsically ambiguous sensory input. Observers are usually able to acknowledge the uncertainty of their perception, but in some cases, they critically fail to do so. Here, we show that a physiological correlate of ambiguity can be found in pupil dilation even when the observer is not aware of such ambiguity. We used a well-known auditory ambiguous stimulus, known as the tritone paradox, which can induce the perception of an upward or downward pitch shift within the same individual. In two experiments, behavioral responses showed that listeners could not explicitly access the ambiguity in this stimulus, even though their responses varied from trial to trial. However, pupil dilation was larger for the more ambiguous cases. The ambiguity of the stimulus for each listener was indexed by the entropy of behavioral responses, and this entropy was also a significant predictor of pupil size. In particular, entropy explained additional variation in pupil size independent of the explicit judgment of confidence in the specific situation that we investigated, in which the two measures were decoupled. Our data thus suggest that stimulus ambiguity is implicitly represented in the brain even without explicit awareness of this ambiguity.
|
30
|
Current Challenges Supporting School-Aged Children with Vision Problems: A Rapid Review. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app11209673] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Many children have undetected vision problems or insufficient visual information processing that may be a factor in lower academic outcomes. The aim of this paper is to contribute to a better understanding of the importance of vision screening for school-aged children, and to investigate how eye-tracking (ET) technologies can support this. While there are indications that these technologies can support vision screening, a broad understanding of how to apply them, by whom, and whether it is possible to utilize them at schools is lacking. We review interdisciplinary research on performing vision investigations, and discuss current challenges for technology support. The focus is on exploring the possibilities of ET technologies to better support screening and handling of vision disorders, especially by non-vision experts. The data originate from a literature survey of peer-reviewed journal and conference articles complemented by secondary sources, following a rapid review methodology. We highlight current trends in supportive technologies for vision screening, and identify the involved stakeholders and the research studies that discuss how to develop more supportive ET technologies for vision screening and training by non-experts.
|
31
|
Griffith H, Lohr D, Abdulin E, Komogortsev O. GazeBase, a large-scale, multi-stimulus, longitudinal eye movement dataset. Sci Data 2021; 8:184. [PMID: 34272404 PMCID: PMC8285447 DOI: 10.1038/s41597-021-00959-y] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2020] [Accepted: 05/20/2021] [Indexed: 11/09/2022] Open
Abstract
This manuscript presents GazeBase, a large-scale longitudinal dataset containing 12,334 monocular eye-movement recordings captured from 322 college-aged participants. Participants completed a battery of seven tasks in two contiguous sessions during each round of recording: (1) a fixation task, (2) a horizontal saccade task, (3) a random oblique saccade task, (4) a reading task, (5/6) free viewing of cinematic video, and (7) a gaze-driven gaming task. Nine rounds of recording were conducted over a 37-month period, with participants in each subsequent round recruited exclusively from prior rounds. All data were collected using an EyeLink 1000 eye tracker at a 1,000 Hz sampling rate, with a calibration and validation protocol performed before each task to ensure data quality. Due to its large number of participants and longitudinal nature, GazeBase is well suited for exploring research hypotheses in eye movement biometrics, along with other applications of machine learning to eye movement signal analysis. Classification labels produced by the instrument's real-time parser are provided for a subset of GazeBase, along with pupil area.
Affiliation(s)
- Henry Griffith: Texas State University, Department of Computer Science, San Marcos, TX 78666, USA
- Dillon Lohr: Texas State University, Department of Computer Science, San Marcos, TX 78666, USA
- Evgeny Abdulin: Texas State University, Department of Computer Science, San Marcos, TX 78666, USA
- Oleg Komogortsev: Texas State University, Department of Computer Science, San Marcos, TX 78666, USA
|
32
|
Motoki K, Saito T, Onuma T. Eye-tracking research on sensory and consumer science: A review, pitfalls and future directions. Food Res Int 2021; 145:110389. [PMID: 34112392 DOI: 10.1016/j.foodres.2021.110389] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2020] [Revised: 04/11/2021] [Accepted: 05/03/2021] [Indexed: 10/21/2022]
Abstract
Visual processing is a core cognitive element of sensory and consumer science. Consumers visually attend to food types, packaging, label design, advertisements, supermarket shelves, food menus, and other visible information. During the past decade, sensory and consumer science have used eye tracking to elucidate visual processing by consumers. This review paper summarizes earlier findings in terms of bottom-up (i.e., stimulus-driven) factors such as visual salience and size, and top-down (i.e., goal-driven) factors such as goals, task instructions, task complexity, and emotions. Downstream effects of gaze on choice are also reviewed, as are pitfalls and future directions of eye-tracking research in sensory and consumer science.
Affiliation(s)
- Kosuke Motoki: Department of Food Science and Business, Miyagi University, 2-2-1 Hatatate, Taihaku, Sendai 982-0215, Japan
- Toshiki Saito: Institute of Development, Aging and Cancer, Tohoku University, 4-1 Seiryo-machi, Aoba, Sendai, Japan
- Takuya Onuma: Department of Management and Business, Faculty of Humanity-oriented Science and Engineering, Kindai University, Fukuoka, Japan
|
33
|
Avoiding potential pitfalls in visual search and eye-movement experiments: A tutorial review. Atten Percept Psychophys 2021; 83:2753-2783. [PMID: 34089167 PMCID: PMC8460493 DOI: 10.3758/s13414-021-02326-w] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/03/2021] [Indexed: 12/15/2022]
Abstract
Examining eye-movement behavior during visual search is an increasingly popular approach for gaining insights into the moment-to-moment processing that takes place when we look for targets in our environment. In this tutorial review, we describe a set of pitfalls and considerations that are important for researchers – both experienced and new to the field – when engaging in eye-movement and visual search experiments. We walk the reader through the research cycle of a visual search and eye-movement experiment, from choosing the right predictions, through to data collection, reporting of methodology, analytic approaches, the different dependent variables to analyze, and drawing conclusions from patterns of results. Overall, our hope is that this review can serve as a guide, a talking point, a reflection on the practices and potential problems with the current literature on this topic, and ultimately a first step towards standardizing research practices in the field.
|
34
|
Ocular measures during associative learning predict recall accuracy. Int J Psychophysiol 2021; 166:103-115. [PMID: 34052234 DOI: 10.1016/j.ijpsycho.2021.05.010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2019] [Revised: 05/19/2021] [Accepted: 05/25/2021] [Indexed: 11/20/2022]
Abstract
The ability to form associations between stimuli and commit those associations to memory is a cornerstone of human cognition. Dopamine and noradrenaline are critical neuromodulators implicated in a range of cognitive functions, including learning and memory. Eye blink rate (EBR) and pupil diameter have been shown to index dopaminergic and noradrenergic activity. Here, we examined how these ocular measures relate to accuracy in a paired-associate learning task where participants (N = 73) learned consistent object-location associations over eight trials consisting of pre-trial fixation, encoding, delay, and retrieval epochs. In order to examine how within-subject and between-subject changes in ocular metrics related to accuracy, we mean-centered individual metric values on each trial based on within-person and across-subject means for each epoch. Within-participant variation in EBR was positively related to accuracy in both the encoding and delay epochs: faster EBR within the individual predicted better retrieval. Differences in EBR across participants were negatively related to accuracy in the encoding epoch and in early trials of the pre-trial fixation: faster EBR, relative to other subjects, predicted poorer retrieval. Visual scanning behavior in the pre-trial fixation and delay epochs was also positively related to accuracy in early trials: more scanning predicted better retrieval. We found no relationship between pupil diameter and accuracy. These results provide novel evidence supporting the utility of ocular metrics in illuminating the cognitive and neurobiological mechanisms of paired-associate learning.
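The within- and between-subject mean-centering described in the abstract of this entry can be sketched as follows; a minimal illustration of the decomposition, not the authors' actual analysis code:

```python
def center_within_between(values, subject_ids):
    """Decompose each observation into a between-subject component
    (subject mean minus grand mean) and a within-subject component
    (observation minus subject mean)."""
    grand = sum(values) / len(values)
    groups = {}
    for v, s in zip(values, subject_ids):
        groups.setdefault(s, []).append(v)
    means = {s: sum(vs) / len(vs) for s, vs in groups.items()}
    within = [v - means[s] for v, s in zip(values, subject_ids)]
    between = [means[s] - grand for s in subject_ids]
    return within, between
```

The two components sum (with the grand mean) back to the raw observation, which is what lets within-person and across-subject effects enter a model as separate predictors.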
|
35
|
Visual Neuroscience Methods for Marmosets: Efficient Receptive Field Mapping and Head-Free Eye Tracking. eNeuro 2021; 8:ENEURO.0489-20.2021. [PMID: 33863782 PMCID: PMC8143020 DOI: 10.1523/eneuro.0489-20.2021] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2020] [Revised: 02/18/2021] [Accepted: 03/25/2021] [Indexed: 11/21/2022] Open
Abstract
The marmoset has emerged as a promising primate model system, in particular for visual neuroscience. Many common experimental paradigms rely on head fixation and an extended period of eye fixation during the presentation of salient visual stimuli. Both of these behavioral requirements can be challenging for marmosets. Here, we present two methodological developments, each addressing one of these difficulties. First, we show that it is possible to use a standard eye-tracking system without head fixation to assess visual behavior in the marmoset. Eye-tracking quality from head-free animals is sufficient to obtain precise psychometric functions from a visual acuity task. Second, we introduce a novel method for efficient receptive field (RF) mapping that does not rely on moving stimuli but uses fast flashing annuli and wedges. We present data recorded during head-fixation in areas V1 and V6 and show that RF locations are readily obtained within a short period of recording time. Thus, the methodological advancements presented in this work will contribute to establish the marmoset as a valuable model in neuroscience.
|
36
|
Angelopoulos AN, Martel JNP, Kohli AP, Conradt J, Wetzstein G. Event-Based Near-Eye Gaze Tracking Beyond 10,000 Hz. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2021; 27:2577-2586. [PMID: 33780340 DOI: 10.1109/tvcg.2021.3067784] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
The cameras in modern gaze-tracking systems suffer from fundamental bandwidth and power limitations, constraining data acquisition speed to 300 Hz realistically. This obstructs the use of mobile eye trackers to perform, e.g., low latency predictive rendering, or to study quick and subtle eye motions like microsaccades using head-mounted devices in the wild. Here, we propose a hybrid frame-event-based near-eye gaze tracking system offering update rates beyond 10,000 Hz with an accuracy that matches that of high-end desktop-mounted commercial trackers when evaluated in the same conditions. Our system, previewed in Figure 1, builds on emerging event cameras that simultaneously acquire regularly sampled frames and adaptively sampled events. We develop an online 2D pupil fitting method that updates a parametric model every one or few events. Moreover, we propose a polynomial regressor for estimating the point of gaze from the parametric pupil model in real time. Using the first event-based gaze dataset, we demonstrate that our system achieves accuracies of 0.45°-1.75° for fields of view from 45° to 98°. With this technology, we hope to enable a new generation of ultra-low-latency gaze-contingent rendering and display techniques for virtual and augmented reality.
|
37
|
Pfeiffer C, Scaramuzza D. Human-Piloted Drone Racing: Visual Processing and Control. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3064282] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
|
38
|
Schweizer T, Wyss T, Gilgen-Ammann R. Eyeblink Detection in the Field: A Proof of Concept Study of Two Mobile Optical Eye-Trackers. Mil Med 2021; 187:e404-e409. [PMID: 33564826 PMCID: PMC9244949 DOI: 10.1093/milmed/usab032] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2020] [Revised: 01/19/2021] [Accepted: 01/25/2021] [Indexed: 12/02/2022] Open
Abstract
Introduction: High physical and cognitive strain, high pressure, and sleep deficit are part of daily life for military professionals and civilians working in physiologically demanding environments. As a result, cognitive and physical capacities decline and the risk of illness, injury, or accidents increases. Such unfortunate outcomes could be prevented by tracking real-time physiological information, revealing individuals’ objective fatigue levels. Oculometrics, and especially eyeblinks, have been shown to be promising biomarkers that reflect fatigue development. Head-mounted optical eye-trackers are a common method to monitor these oculometrics. However, studies measuring eyeblink detection in real-life settings have been lacking in the literature. Therefore, this study aims to validate two current mobile optical eye-trackers in an unrestrained military training environment.
Materials and Method: Three male participants (age 20.0 ± 1.0) of the Swiss Armed Forces participated in this study by wearing three optical eye-trackers, two VPS16s (Viewpointsystem GmbH, Vienna, Austria) and one Pupil Core (Pupil Labs GmbH, Berlin, Germany), during four military training events: healthcare education, orienteering, shooting, and military marching. Software outputs were analyzed against a visual inspection (VI) of the video recordings of participants’ eyes via the respective software. Absolute and relative blink numbers were provided. Each blink detected by the software was classified as a “true blink” (TB) when it occurred in the software output and the VI at the same time, as a “false blink” (FB) when it occurred in the software but not in the VI, and as a “missed blink” (MB) when the software failed to detect a blink that occurred in the VI. The FBs were further examined for causes of the incorrect recordings and divided into four categories: “sunlight,” “movements,” “lost pupil,” and “double-counted.” Blink frequency (i.e., blinks per minute) was also analyzed.
Results: Overall, 49.3% and 72.5% of registered eyeblinks were classified as TBs for the VPS16 and Pupil Core, respectively. The VPS16 recorded 50.7% of FBs and accounted for 8.5% of MBs, while the Pupil Core recorded 27.5% of FBs and accounted for 55.5% of MBs. The majority of FBs (45.5% and 73.9% for the VPS16 and Pupil Core, respectively) were erroneously recorded due to participants’ eye movements while looking up, down, or to one side. For blink frequency analysis, systematic biases (± limits of agreement) stood at 23.3 (±43.5) and −4.87 (±14.1) blinks per minute for the VPS16 and Pupil Core, respectively. Significant differences in systematic bias between devices and the respective VIs were found for nearly all activities (P < .05).
Conclusion: Objective physiological monitoring of fatigue is necessary for soldiers as well as civil professionals who are exposed to higher risks when their cognitive or physical capacities weaken. However, the accuracy of optical eye-trackers has not been specified under field conditions, especially not in monitoring fatigue. The significant overestimation by the VPS16 and underestimation by the Pupil Core demonstrate the general difficulty of blink detection in the field.
Affiliation(s)
- Theresa Schweizer: Swiss Federal Institute of Sport Magglingen (SFISM), Magglingen/Macolin 2532, Switzerland
- Thomas Wyss: Swiss Federal Institute of Sport Magglingen (SFISM), Magglingen/Macolin 2532, Switzerland
- Rahel Gilgen-Ammann: Swiss Federal Institute of Sport Magglingen (SFISM), Magglingen/Macolin 2532, Switzerland
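The TB/FB/MB classification described in the abstract of this entry amounts to matching software-detected blink times against visual-inspection reference times. Below is a minimal sketch assuming timestamped blinks and a hypothetical 150 ms matching tolerance; the study matched events via the respective recording software, not this exact procedure:

```python
def classify_blinks(detected, reference, tol=0.15):
    """Greedily match detected blink times (seconds) to visual-inspection
    reference times within +/- tol seconds.

    Returns (tb, fb, mb): true blinks (matched detections), false blinks
    (unmatched detections), and missed blinks (unmatched references).
    """
    ref = sorted(reference)
    used = [False] * len(ref)
    tb = 0
    for t in sorted(detected):
        for i, r in enumerate(ref):
            if not used[i] and abs(t - r) <= tol:
                used[i] = True
                tb += 1
                break
    fb = len(detected) - tb
    mb = len(ref) - tb
    return tb, fb, mb
```

Detection rate and FB rate then follow directly as tb / len(reference) and fb / len(detected).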
|
39
|
Neurogastronomy as a Tool for Evaluating Emotions and Visual Preferences of Selected Food Served in Different Ways. Foods 2021; 10:foods10020354. [PMID: 33562287 PMCID: PMC7914587 DOI: 10.3390/foods10020354] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2021] [Revised: 01/28/2021] [Accepted: 02/04/2021] [Indexed: 01/22/2023] Open
Abstract
The appearance of food creates certain expectations regarding the harmony of taste, delicacy, and overall quality, which subsequently affects not only intake itself but also many other aspects of the behavior of customers of catering facilities. The main goal of this article is to find out what effect the visual presentation of food (waffles) prepared from the same ingredients and served in three different ways, on a stone plate, street-food style, and on a classic white plate, has on consumer preferences. In addition to classic tablet-assisted personal interview (TAPI) tools, biometric methods such as eye tracking and face reading were used to obtain unconscious feedback. During testing, the air quality in the room was checked with an Extech device, as was the influence of the visual design of food on the perception of its smell. At the end of the paper, we point out the importance of using classical feedback collection techniques (TAPI) and of extending them with measurements of subconscious reactions based on monitoring respondents' eye movements and facial expressions, which provides a whole new perspective on the perception of visual design and food serving, as well as more effective targeting and use of corporate resources.
40
A Fast and Effective System for Analysis of Optokinetic Waveforms with a Low-Cost Eye Tracking Device. Healthcare (Basel) 2020; 9:healthcare9010010. PMID: 33374811; PMCID: PMC7824545; DOI: 10.3390/healthcare9010010.
Abstract
Optokinetic nystagmus (OKN) is an involuntary eye movement induced by motion of a large proportion of the visual field. It consists of a "slow phase (SP)", with eye movements in the same direction as the movement of the pattern, and a "fast phase (FP)", with saccadic eye movements in the opposite direction. The study of OKN can reveal valuable information in ophthalmology, neurology, and psychology. However, commercially available high-resolution, research-grade eye trackers are usually expensive. Methods & Results: We developed a novel, fast, and effective system combined with a low-cost eye-tracking device to measure OKN eye movements accurately and quantitatively. Conclusions: The experimental results indicate that the proposed method is fast and yields promising results in comparison with several traditional approaches.
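The SP/FP decomposition described in this abstract can be illustrated with a simple velocity-threshold rule. This is a generic sketch, not the authors' algorithm; the function name, sampling rate, and threshold are assumptions:

```python
import numpy as np

def segment_okn(position_deg, fs=60.0, fp_vel_thresh=100.0):
    """Label each sample of an OKN position trace as slow phase (SP) or
    fast phase (FP) using a simple velocity threshold (illustrative values)."""
    velocity = np.gradient(position_deg) * fs      # deg/s
    is_fast = np.abs(velocity) > fp_vel_thresh     # saccadic fast-phase resets
    return velocity, is_fast

# Synthetic sawtooth OKN: 20 deg/s slow drift with abrupt resets every 0.5 s,
# as if following a pattern moving at 20 deg/s
t = np.arange(0.0, 2.0, 1.0 / 60.0)
pos = 20.0 * (t % 0.5)
vel, fast = segment_okn(pos)
sp_gain = np.mean(vel[~fast]) / 20.0   # slow-phase velocity / stimulus velocity
```

On this clean synthetic trace the slow-phase gain comes out near 1.0; real recordings would additionally need filtering and more robust saccade detection.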
41
Luidolt LR, Wimmer M, Krosl K. Gaze-Dependent Simulation of Light Perception in Virtual Reality. IEEE Trans Vis Comput Graph 2020; 26:3557-3567. PMID: 32941149; DOI: 10.1109/tvcg.2020.3023604.
Abstract
The perception of light is inherently different inside a virtual reality (VR) or augmented reality (AR) simulation when compared to the real world. Conventional head-worn displays (HWDs) are not able to display the same high dynamic range of brightness and color as the human eye can perceive in the real world. To mimic the perception of real-world scenes in virtual scenes, it is crucial to reproduce the effects of incident light on the human visual system. In order to advance virtual simulations towards perceptual realism, we present an eye-tracked VR/AR simulation comprising effects for gaze-dependent temporal eye adaption, perceptual glare, visual acuity reduction, and scotopic color vision. Our simulation is based on medical expert knowledge and medical studies of the healthy human eye. We conducted the first user study comparing the perception of light in a real-world low-light scene to a VR simulation. Our results show that the proposed combination of simulated visual effects is well received by users and also indicate that an individual adaptation is necessary, because perception of light is highly subjective.
42
Massin L, Nourrit V, Lahuec C, Seguin F, Adam L, Daniel E, de Bougrenet de la Tocnaye JL. Development of a new scleral contact lens with encapsulated photodetectors for eye tracking. Opt Express 2020; 28:28635-28647. PMID: 32988130; DOI: 10.1364/oe.399823.
Abstract
Most eye trackers nowadays are video-based, which allows a relatively simple and non-invasive approach but also imposes several constraints in terms of computing power and conditions of use (e.g., lighting, spectacles). We introduce a new eye tracker using a scleral lens equipped with photodiodes and eyewear with active illumination. The direction of gaze is obtained from the weighted average (centroid) of the photocurrents and communicated through an optical link. After discussing the optimal photodiode configuration (number, layout) and associated lighting (collimated, Lambertian), we present prototypes demonstrating high performance (0.11° accuracy when placed on an artificial eye) and wireless optical communication.
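The gaze computation the abstract describes, a photocurrent-weighted centroid, can be sketched in a few lines. The four-photodiode layout, positions, and names below are hypothetical illustrations, not taken from the paper:

```python
import numpy as np

# Hypothetical layout: four photodiodes at known positions on the lens,
# expressed in degrees relative to the lens centre (values illustrative).
PD_POS = np.array([[+2.0, 0.0], [-2.0, 0.0], [0.0, +2.0], [0.0, -2.0]])

def gaze_centroid(photocurrents):
    """Gaze offset as the photocurrent-weighted centroid of photodiode positions."""
    i = np.asarray(photocurrents, dtype=float)
    return (i[:, None] * PD_POS).sum(axis=0) / i.sum()

# Equal currents -> centroid at the lens centre (gaze straight ahead)
print(gaze_centroid([1.0, 1.0, 1.0, 1.0]))   # → [0. 0.]
```

A stronger signal on any one photodiode pulls the centroid, and hence the estimated gaze direction, towards that photodiode.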
43
Haskins AJ, Mentch J, Botch TL, Robertson CE. Active vision in immersive, 360° real-world environments. Sci Rep 2020; 10:14304. PMID: 32868788; PMCID: PMC7459302; DOI: 10.1038/s41598-020-71125-4.
Abstract
How do we construct a sense of place in a real-world environment? Real-world environments are actively explored via saccades, head turns, and body movements. Yet, little is known about how humans process real-world scene information during active viewing conditions. Here, we exploited recent developments in virtual reality (VR) and in-headset eye-tracking to test the impact of active vs. passive viewing conditions on gaze behavior while participants explored novel, real-world, 360° scenes. In one condition, participants actively explored 360° photospheres from a first-person perspective via self-directed motion (saccades and head turns). In another condition, photospheres were passively displayed to participants while they were head-restricted. We found that, relative to passive viewers, active viewers displayed increased attention to semantically meaningful scene regions, suggesting more exploratory, information-seeking gaze behavior. We also observed signatures of exploratory behavior in eye movements, such as quicker, more entropic fixations during active as compared with passive viewing conditions. These results show that active viewing influences every aspect of gaze behavior, from the way we move our eyes to what we choose to attend to. Moreover, these results offer key benchmark measurements of gaze behavior in 360°, naturalistic environments.
Affiliation(s)
- Amanda J Haskins
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA.
- Jeff Mentch
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Thomas L Botch
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA
- Caroline E Robertson
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA
44
Essig P, Leube A, Rifai K, Wahl S. Microsaccadic rate signatures correlate under monocular and binocular stimulation conditions. J Eye Mov Res 2020; 11. PMID: 33828709; PMCID: PMC8008506; DOI: 10.16910/jemr.13.5.3.
Abstract
Microsaccades are involuntary eye movements that occur naturally during fixation. In this study, microsaccades were investigated under monocularly and binocularly stimulated conditions with respect to their directional distribution and rate signature, i.e., the curve describing the frequency modulation of microsaccades over time. For monocular stimulation, the left eye was covered by an infrared filter. In both stimulation conditions, participants fixated a Gabor patch, presented at the center of a monitor with a random orientation of 45° or 135° over a wide range of spatial frequencies. Regarding microsaccadic directions, this study showed that microsaccades were preferentially horizontally oriented in their mean direction, regardless of the spatial characteristics of the grating; this outcome was consistent between both stimulation conditions. Moreover, the microsaccadic rate signature curve correlated between both stimulation conditions, and the curve obtained under binocular stimulation has already been proposed as a tool for estimating visual performance. This study therefore extends the applicability of microsaccades to clinical use, since parameters such as contrast sensitivity are measured monocularly in clinical practice.
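A rate signature of the kind analysed here is, in general, a curve of microsaccade frequency over time; one common way to obtain such a curve is to count detected onsets in a sliding window. This is a generic sketch with illustrative window parameters, not the authors' exact method:

```python
import numpy as np

def rate_signature(onsets_s, t_start, t_end, win=0.05, step=0.01):
    """Microsaccade rate (events/s) over time from a list of onset times,
    using a sliding window (window and step sizes are illustrative)."""
    centers = np.arange(t_start, t_end, step)
    onsets = np.asarray(onsets_s)
    counts = np.array([np.sum(np.abs(onsets - c) <= win / 2) for c in centers])
    return centers, counts / win

# Toy data: 10 onsets spread evenly over 1 s -> mean rate close to 10 events/s
centers, rate = rate_signature(np.linspace(0.05, 0.95, 10), 0.0, 1.0)
```

In practice the curve is averaged across trials and time-locked to stimulus onset, which is what produces the characteristic inhibition-and-rebound shape of the rate signature.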
Affiliation(s)
- Peter Essig
- Institute for Ophthalmic Research, Eberhard Karls University Tuebingen, Germany
45
Shanidze NM, Velisar A. Eye, head, and gaze contributions to smooth pursuit in macular degeneration. J Neurophysiol 2020; 124:134-144. PMID: 32519572; DOI: 10.1152/jn.00001.2020.
Abstract
Macular degeneration (MD) often leads to the loss of the fovea and surrounding central visual field. This type of visual loss is very common and can present particular challenges for oculomotor tasks that may rely on the fovea. For certain tasks, individuals develop a new, eccentric fixational locus. Our previous work has shown that smooth pursuit is impaired in MD. However, retinal lesion size and eccentricity of fixation do not directly account for changes in smooth pursuit gain. Oculomotor limitations due to eccentric eye position in the orbit may be another culprit. Here we test the hypothesis that deficits in smooth pursuit in MD would be reduced under head-unrestrained conditions. To that end, we examined eye, head, and gaze movements in eight individuals with MD and seven age-matched controls in response to a step-ramp pursuit stimulus. We found that, despite variability across participants, both groups made similar smooth pursuit head movements (P = 0.76), while both had significantly higher pursuit gains in the head-restrained condition (P < 0.0001), suggesting that in older populations head movements may lead to a decrease in pursuit gain. Furthermore, we did not find a correlation between eccentricity of fixation and amount of head displacement during the trial (P = 0.25), suggesting that eccentric eye position does not lead to higher reliance on head movements in smooth pursuit. Our finding that individuals with MD have lower pursuit gains despite head movements similar to controls suggests a difference in how MD affects the mechanisms underlying eye versus head movements in smooth pursuit. NEW & NOTEWORTHY This article is the first to examine eye and head movements in observers with macular degeneration. It is the first to show that in older individuals, regardless of central field defect, freedom of head movement may reduce pursuit gain. Despite oculomotor limitations due to eccentric fixation, individuals with macular degeneration do not rely on head movements more than age-matched controls, with both groups adopting a similarly heterogeneous eye and head movement strategy for pursuit.
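For context on the gains discussed above: pursuit gain is conventionally the ratio of gaze velocity to target velocity, and in head-unrestrained conditions gaze velocity is the sum of eye-in-head and head velocities. A minimal sketch with illustrative numbers (not data from the study):

```python
import numpy as np

def pursuit_gain(gaze_vel_deg_s, target_vel_deg_s):
    """Pursuit gain: median gaze velocity divided by target (ramp) velocity."""
    return np.median(gaze_vel_deg_s) / target_vel_deg_s

# Gaze-in-space = eye-in-head + head-in-space, so the same gain can be split
# into eye and head contributions (all numbers illustrative).
target = 10.0                    # deg/s ramp velocity
eye_vel = np.full(100, 7.0)      # eye-in-head velocity samples, deg/s
head_vel = np.full(100, 2.0)     # head velocity samples, deg/s
gaze_vel = eye_vel + head_vel
print(pursuit_gain(gaze_vel, target))   # → 0.9
```

Splitting the gain this way is what allows eye and head contributions to pursuit to be compared across conditions.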
Affiliation(s)
- Natela M Shanidze
- The Smith-Kettlewell Eye Research Institute, San Francisco, California
- Anca Velisar
- The Smith-Kettlewell Eye Research Institute, San Francisco, California
46
Carter BT, Luke SG. Best practices in eye tracking research. Int J Psychophysiol 2020; 155:49-62. PMID: 32504653; DOI: 10.1016/j.ijpsycho.2020.05.010.
Abstract
This guide describes best practices in using eye tracking technology for research in a variety of disciplines. A basic outline of the anatomy and physiology of the eyes and of eye movements is provided, along with a description of the sorts of research questions eye tracking can address. We then explain how eye tracking technology works and what sorts of data it generates, and provide guidance on how to select and use an eye tracker as well as selecting appropriate eye tracking measures. Challenges to the validity of eye tracking studies are described, along with recommendations for overcoming these challenges. We then outline correct reporting standards for eye tracking studies.
47
Adhanom IB, Lee SC, Folmer E, MacNeilage P. GazeMetrics: An Open-Source Tool for Measuring the Data Quality of HMD-based Eye Trackers. Proc Eye Tracking Research & Applications Symposium (ETRA) 2020. PMID: 33791686; DOI: 10.1145/3379156.3391374.
Abstract
As virtual reality (VR) garners more attention for eye tracking research, knowledge of accuracy and precision of head-mounted display (HMD) based eye trackers becomes increasingly necessary. It is tempting to rely on manufacturer-provided information about the accuracy and precision of an eye tracker. However, unless data is collected under ideal conditions, these values seldom align with on-site metrics. Therefore, best practices dictate that accuracy and precision should be measured and reported for each study. To address this issue, we provide a novel open-source suite for rigorously measuring accuracy and precision for use with a variety of HMD-based eye trackers. This tool is customizable without having to alter the source code, but changes to the code allow for further alteration. The outputs are available in real time and easy to interpret, making eye tracking with VR more approachable for all users.
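Accuracy and precision in this sense are conventionally computed as the mean angular offset from a known target and the RMS of sample-to-sample angular differences (RMS-S2S). The sketch below illustrates those standard definitions; it is not GazeMetrics' actual code, and all names are placeholders:

```python
import numpy as np

def angular_error_deg(gaze_dirs, target_dir):
    """Angle in degrees between each unit gaze vector and a unit target vector."""
    cos = np.clip(gaze_dirs @ target_dir, -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def accuracy_precision(gaze_dirs, target_dir):
    """Accuracy: mean angular offset from the target (deg).
    Precision: RMS of sample-to-sample angular differences (RMS-S2S, deg)."""
    err = angular_error_deg(gaze_dirs, target_dir)
    cos_s2s = np.clip(np.sum(gaze_dirs[1:] * gaze_dirs[:-1], axis=1), -1.0, 1.0)
    s2s = np.degrees(np.arccos(cos_s2s))
    return err.mean(), np.sqrt(np.mean(s2s ** 2))

# Ten samples fixating 1 degree to the right of the target, with no noise:
# accuracy ~1 deg, precision ~0 deg (identical successive samples)
g = np.tile([np.sin(np.radians(1.0)), 0.0, np.cos(np.radians(1.0))], (10, 1))
acc, prec = accuracy_precision(g, np.array([0.0, 0.0, 1.0]))
```

Reporting both per study, as the abstract recommends, separates systematic calibration offset (accuracy) from sample noise (precision).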
48
Pfeiffer C, Hollenstein N, Zhang C, Langer N. Neural dynamics of sentiment processing during naturalistic sentence reading. Neuroimage 2020; 218:116934. PMID: 32416227; DOI: 10.1016/j.neuroimage.2020.116934.
Abstract
When we read, our eyes move through the text in a series of fixations and high-velocity saccades to extract visual information. This process allows the brain to obtain meaning, e.g., about sentiment, or the emotional valence, expressed in the written text. How exactly the brain extracts the sentiment of single words during naturalistic reading is largely unknown. This is due to the challenges of naturalistic imaging, which has previously led researchers to employ highly controlled, timed word-by-word presentations of custom reading materials that lack ecological validity. Here, we aimed to assess the electrical neural correlates of word sentiment processing during naturalistic reading of English sentences. We used a publicly available dataset of simultaneous electroencephalography (EEG), eye-tracking recordings, and word-level semantic annotations from 7129 words in 400 sentences (Zurich Cognitive Language Processing Corpus; Hollenstein et al., 2018). We computed fixation-related potentials (FRPs), which are evoked electrical responses time-locked to the onset of fixations. A general linear mixed model analysis of FRPs cleaned from visual- and motor-evoked activity showed a topographical difference between the positive and negative sentiment condition in the 224-304 ms interval after fixation onset in left-central and right-posterior electrode clusters. An additional analysis that included word-, phrase-, and sentence-level sentiment predictors showed the same FRP differences for the word-level sentiment, but no additional FRP differences for phrase- and sentence-level sentiment. Furthermore, decoding analysis that classified word sentiment (positive or negative) from sentiment-matched 40-trial average FRPs showed a 0.60 average accuracy (95% confidence interval: [0.58, 0.61]). Control analyses ruled out that these results were based on differences in eye movements or linguistic features other than word sentiment. 
Our results extend previous research by showing that the emotional valence of lexico-semantic stimuli evokes a fast electrical neural response upon word fixation during naturalistic reading. These results provide an important step towards identifying the neural processes of lexico-semantic processing under ecologically valid conditions and can serve to improve computer algorithms for natural language processing.
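The FRP computation the abstract describes, EEG epochs time-locked to fixation onsets that are baseline-corrected and averaged, can be sketched as follows; function names and epoch parameters are illustrative, not taken from the study:

```python
import numpy as np

def fixation_related_potentials(eeg, onsets_s, fs, tmin=-0.1, tmax=0.5):
    """Average EEG epochs time-locked to fixation onsets (a minimal FRP).

    eeg      : (n_channels, n_samples) array
    onsets_s : fixation-onset times in seconds (from co-registered eye tracking)
    fs       : sampling rate in Hz
    """
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for t in onsets_s:
        i = int(round(t * fs))
        if i - pre >= 0 and i + post <= eeg.shape[1]:
            seg = eeg[:, i - pre:i + post]
            # Subtract the pre-fixation baseline from each channel
            seg = seg - seg[:, :pre].mean(axis=1, keepdims=True)
            epochs.append(seg)
    return np.mean(epochs, axis=0)   # (n_channels, pre + post)

# Toy data: one channel with a 1.0 uV deflection 200 ms after every fixation
fs = 500
eeg = np.zeros((1, fs * 10))
onsets = [1.0, 3.0, 5.0, 7.0]
for t in onsets:
    eeg[0, int((t + 0.2) * fs)] = 1.0
frp = fixation_related_potentials(eeg, onsets, fs)
```

In real co-registration data, ocular and motor artifacts overlap the epochs, which is why the study cleans visual- and motor-evoked activity before analysing the FRPs.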
Affiliation(s)
- Christian Pfeiffer
- Methods of Plasticity Research Laboratory, Department of Psychology, University of Zurich, Switzerland; University Research Priority Program (URPP) Dynamics of Healthy Aging, Zurich, Switzerland.
- Ce Zhang
- Department of Computer Science, ETH, Zurich, Switzerland
- Nicolas Langer
- Methods of Plasticity Research Laboratory, Department of Psychology, University of Zurich, Switzerland; University Research Priority Program (URPP) Dynamics of Healthy Aging, Zurich, Switzerland; Neuroscience Center Zurich (ZNZ), Zurich, Switzerland
49
Degno F, Liversedge SP. Eye Movements and Fixation-Related Potentials in Reading: A Review. Vision (Basel) 2020; 4:E11. PMID: 32028566; PMCID: PMC7157570; DOI: 10.3390/vision4010011.
Abstract
The present review is addressed to researchers in the field of reading and psycholinguistics who are either familiar with or new to co-registration research of eye movements (EMs) and fixation-related potentials (FRPs) in reading. At the outset, we consider a conundrum relating to timing discrepancies between EM and event-related potential (ERP) effects. We then consider the extent to which the co-registration approach might allow us to overcome this and thereby discriminate between formal theoretical and computational accounts of reading. We then describe three phases of co-registration research before evaluating the existing body of such research in reading. The current, ongoing phase of co-registration research is presented in comprehensive tables which provide a detailed summary of the existing findings. The thorough appraisal of the published studies allows us to engage with issues such as the reliability of FRP components as correlates of cognitive processing in reading, the advantages of analysing both data streams (i.e., EMs and FRPs) simultaneously relative to each alone, and the current, limited understanding of the relationship between EM and FRP measures. Finally, we consider future directions, in particular the potential of analytical methods involving deconvolution and of measuring brain oscillatory activity.
Affiliation(s)
- Federica Degno
- School of Psychology, University of Central Lancashire, Marsh Ln, Preston PR1 2HE, UK;
50
Stanley J, Forte JD, Carter O. Rivalry Onset in and around the Fovea: The Role of Visual Field Location and Eye Dominance on Perceptual Dominance Bias. Vision (Basel) 2019; 3:vision3040051. PMID: 31735852; PMCID: PMC6969945; DOI: 10.3390/vision3040051.
Abstract
When dissimilar images are presented to each eye, the images alternate every few seconds in a phenomenon known as binocular rivalry. Recent research has found evidence of a bias towards one image during the initial 'onset' period of rivalry that varies across the peripheral visual field. To determine the role that visual field location plays in and around the fovea at onset, trained observers were presented with small orthogonal achromatic grating patches at various locations across the central 3° of visual space for 1-s and 60-s intervals. Results reveal a stronger bias at onset than during continuous rivalry, as well as evidence of temporal hemifield dominance across observers; however, the nature of the hemifield effects differed between individuals and interacted with overall eye dominance. Despite the use of small grating patches, a high proportion of mixed percept was still reported, with more mixed percept at onset along the vertical midline in general, and in increasing proportions with eccentricity in the lateral hemifields. Results show that even within the foveal range, onset rivalry bias varies across visual space and differs in degree and sensitivity to biases in average dominance over continuous viewing.