1. Chandrika KR, Amudha J. Learner stimulus intent: a framework for eye tracking data collection and feature extraction in computer programming education. Sci Rep 2025;15:11860. PMID: 40195386; PMCID: PMC11976983; DOI: 10.1038/s41598-025-88172-4.
Abstract
Eye tracking technology offers valuable insights into the cognitive processes of learners in computer programming education. This research presents a novel framework, Learner Stimulus Intent, which provides such insights and has significant implications for assessment in computer science education. Its comprehensive data collection, extraction of eye-gaze and semantic features, and effective visualization techniques can be used to evaluate students' understanding and engagement, offering a more nuanced and detailed picture of learning progress than traditional assessment methods. Furthermore, the framework generates four distinct datasets, each offering a unique perspective on learner behavior and cognitive traits. These datasets are outcomes of the framework's application and embody its potential to transform how learning is understood and assessed in computer science education. By using this framework, educators and researchers can gain deeper insight into learners' cognitive processes, such as cognitive workload, the order in which information is processed, confusion, and attention, ultimately enhancing instructional strategies and improving learner outcomes.
Affiliation(s)
- K R Chandrika
- Department of Computer Science and Engineering, Amrita School of Computing, Amrita Vishwa Vidyapeetham, Bengaluru, India.
- J Amudha
- Department of Computer Science and Engineering, Amrita School of Computing, Amrita Vishwa Vidyapeetham, Bengaluru, India.
2. Fabian T. Exploring power-law behavior in human gaze shifts across tasks and populations. Cognition 2025;257:106079. PMID: 39904005; DOI: 10.1016/j.cognition.2025.106079.
Abstract
Visual perception is an integral part of human cognition. Vision comprises sampling information and processing it. Tasks and stimuli influence human sampling behavior, while cognitive and neurological processing mechanisms remain unchanged. Whether these components interact with each other remains controversial. Some theories see the components of visual cognition as separate and their influence on gaze behavior as additive; others see gaze behavior as an emergent structure of visual cognition arising through multiplicative interactions. One way to approach this problem is to examine the magnitude of gaze shifts: demonstrating that gaze shifts behave consistently across tasks would argue for an independent component in human visual behavior. However, studies attempting to describe gaze-shift magnitudes in general deliver contradictory results. In this work, we analyze data from numerous experiments to advance the debate on visual cognition by providing a more comprehensive view of visual behavior. The data show that the magnitude of eye movements (saccades) cannot be described by a consistent distribution across different experiments. However, we also propose a new way of measuring saccade magnitude: relative saccade lengths. We find that a saccade's length relative to the preceding saccade's length consistently follows a power-law distribution. We observe this distribution in all datasets we analyze, regardless of the task, stimulus, age, or native language of the participants. Our results indicate the existence of an independent component utilized by other cognitive processes without interacting with them, suggesting that part of human visual cognition rests on an additive component that does not depend on stimulus features.
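The relative-saccade-length measure lends itself to a compact illustration. The sketch below generates synthetic, heavy-tailed saccade amplitudes (hypothetical toy data, not the paper's datasets), computes each saccade's length relative to its predecessor, and estimates a power-law tail exponent with the standard continuous maximum-likelihood estimator:

```python
import numpy as np

def relative_saccade_lengths(amplitudes):
    """Ratio of each saccade's amplitude to that of the preceding saccade."""
    a = np.asarray(amplitudes, dtype=float)
    return a[1:] / a[:-1]

def powerlaw_mle_exponent(x, xmin=1.0):
    """Continuous maximum-likelihood estimate of the exponent alpha for a
    power-law tail P(x) ~ x**(-alpha), restricted to samples x >= xmin."""
    tail = x[x >= xmin]
    return 1.0 + len(tail) / np.sum(np.log(tail / xmin))

# Hypothetical saccade amplitudes (degrees of visual angle), heavy-tailed toy data
rng = np.random.default_rng(0)
amplitudes = rng.pareto(1.5, size=10_000) + 1.0
ratios = relative_saccade_lengths(amplitudes)
alpha = powerlaw_mle_exponent(ratios, xmin=1.0)
print(f"estimated tail exponent: {alpha:.2f}")
```

A straight line on a log-log histogram of `ratios` would be the visual counterpart of the fitted exponent; with real data one would also test the power-law fit against alternatives (e.g., a lognormal) rather than assume it.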
Affiliation(s)
- Thomas Fabian
- Department of History and Social Sciences, Technical University Darmstadt, Darmstadt, Germany.
3. Niehorster DC, Nyström M, Hessels RS, Andersson R, Benjamins JS, Hansen DW, Hooge ITC. The fundamentals of eye tracking part 4: Tools for conducting an eye tracking study. Behav Res Methods 2025;57:46. PMID: 39762687; PMCID: PMC11703944; DOI: 10.3758/s13428-024-02529-7.
Abstract
Researchers using eye tracking are heavily dependent on software and hardware tools to perform their studies, from recording eye tracking data and visualizing it, to processing and analyzing it. This article provides an overview of available tools for research using eye trackers and discusses considerations to make when choosing which tools to adopt for one's study.
Affiliation(s)
- Diederick C Niehorster
- Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden.
- Marcus Nyström
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, the Netherlands
- Jeroen S Benjamins
- Experimental Psychology, Helmholtz Institute & Social, Health and Organizational Psychology, Utrecht University, Utrecht, the Netherlands
- Dan Witzner Hansen
- Eye Information Laboratory, IT University of Copenhagen, Copenhagen, Denmark
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, the Netherlands
4. Wang C, Wang R, Leng Y, Iramina K, Yang Y, Ge S. An Eye Movement Classification Method Based on Cascade Forest. IEEE J Biomed Health Inform 2024;28:7184-7194. PMID: 39106144; DOI: 10.1109/jbhi.2024.3439568.
Abstract
Eye tracking technology has become increasingly important in scientific research and practical applications. In eye tracking research, analysis of eye movement data is crucial, particularly classifying raw eye movement data into eye movement events. Current classification methods vary considerably in how well they adapt to different participants, and the issues of class imbalance and data scarcity in eye movement classification need to be addressed. In the current study, we introduce a novel eye movement classification method based on cascade forest (EMCCF), which comprises two modules: 1) a feature extraction module that employs a multi-scale time-window method to extract features from raw eye movement data; and 2) a classification module with a layered ensemble architecture that integrates the cascade forest structure with ensemble learning principles, specifically for eye movement classification. EMCCF thus not only enhances the accuracy and efficiency of eye movement classification but also advances the application of ensemble learning techniques in this domain. Experimental results indicated that EMCCF outperformed existing deep learning-based classification models on several metrics and demonstrated robust performance across different datasets and participants.
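As a hedged illustration of the multi-scale time-window idea (the window sizes and the two per-window features, mean sample-to-sample speed and bounding-box dispersion, are assumptions for this sketch, not the EMCCF implementation):

```python
import numpy as np

def multiscale_features(x, y, window_sizes=(3, 7, 15)):
    """For each gaze sample, compute simple features (mean speed, dispersion)
    over several centered time windows. Returns (n_samples, 2 * len(window_sizes))."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    # per-sample displacement magnitude (speed, in position units per sample)
    speed = np.hypot(np.diff(x, prepend=x[0]), np.diff(y, prepend=y[0]))
    feats = []
    for w in window_sizes:
        half = w // 2
        col_speed = np.empty(n)
        col_disp = np.empty(n)
        for i in range(n):
            lo, hi = max(0, i - half), min(n, i + half + 1)
            col_speed[i] = speed[lo:hi].mean()
            # bounding-box dispersion: horizontal extent + vertical extent
            col_disp[i] = (x[lo:hi].max() - x[lo:hi].min()) + (y[lo:hi].max() - y[lo:hi].min())
        feats.append(col_speed)
        feats.append(col_disp)
    return np.column_stack(feats)

# Toy trace: a fixation, a saccade-like jump, then another fixation
x = np.r_[np.full(20, 100.0), np.linspace(100, 300, 5), np.full(20, 300.0)]
y = np.zeros_like(x)
F = multiscale_features(x, y)
print(F.shape)  # (45, 6)
```

Each sample then carries a feature vector describing its local context at several temporal scales, which a downstream classifier (cascade forest, random forest, etc.) could label as fixation, saccade, or another event type.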
5. Roth N, McLaughlin J, Obermayer K, Rolfs M. Gaze Behavior Reveals Expectations of Potential Scene Changes. Psychol Sci 2024;35:1350-1363. PMID: 39570640; DOI: 10.1177/09567976241279198.
Abstract
Even if the scene before our eyes remains static for some time, we might explore it differently compared with how we examine static images, which are commonly used in studies on visual attention. Here we show experimentally that the top-down expectation of changes in natural scenes causes clearly distinguishable gaze behavior for visually identical scenes. We present free-viewing eye-tracking data of 20 healthy adults on a new video dataset of natural scenes, each mapped for its potential for change (PfC) in independent ratings. Observers looking at frozen videos looked significantly more often at the parts of the scene with a high PfC compared with static images, with substantially higher interobserver coherence. This viewing difference peaked right before a potential movement onset. Established concepts like object animacy or salience alone could not explain this finding. Images thus conceal experience-based expectations that affect gaze behavior in the potentially dynamic real world.
Affiliation(s)
- Nicolas Roth
- Cluster of Excellence Science of Intelligence, Technische Universität Berlin
- Institute of Software Engineering and Theoretical Computer Science, Technische Universität Berlin
- Jasper McLaughlin
- Cluster of Excellence Science of Intelligence, Technische Universität Berlin
- Klaus Obermayer
- Cluster of Excellence Science of Intelligence, Technische Universität Berlin
- Institute of Software Engineering and Theoretical Computer Science, Technische Universität Berlin
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Martin Rolfs
- Cluster of Excellence Science of Intelligence, Technische Universität Berlin
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Department of Psychology, Humboldt-Universität zu Berlin
6. Körner HM, Faul F, Nuthmann A. Is a knife the same as a plunger? Comparing the attentional effects of weapons and non-threatening unusual objects in dynamic scenes. Cogn Res Princ Implic 2024;9:66. PMID: 39379777; PMCID: PMC11461415; DOI: 10.1186/s41235-024-00579-1.
Abstract
Observers' memory for a person's appearance can be compromised by the presence of a weapon, a phenomenon known as the weapon-focus effect (WFE). According to the unusual-item hypothesis, attention shifts from the perpetrator to the weapon because a weapon is an unusual object in many contexts. To test this assumption, we monitored participants' eye movements while they watched a mock-crime video. The video was presented with sound and featured a female perpetrator holding either a weapon, a non-threatening unusual object, or a neutral object. Contrary to the predictions of current theories, there were no significant differences in total viewing times for the three objects. For the perpetrator, total viewing time was reduced when she held the non-threatening unusual object, but not when she held the weapon. However, weapon presence led to an attentional shift from the perpetrator's face toward her body. Detailed time-course analyses revealed that the effects of object type were more pronounced during early scene viewing. Thus, our results do not support the idea of extended attentional shifts from the perpetrator toward the unusual objects, but instead suggest more complex attentional effects. Contrary to previous research, memory for the perpetrator's appearance was not affected by object type. Thus, there was no WFE. An additional online experiment using the same videos and methodology produced a WFE, but this effect disappeared when the videos were presented without sound.
Affiliation(s)
- Hannes M Körner
- Institute of Psychology, Kiel University, Neufeldtstr. 4a, 24118, Kiel, Germany.
- Franz Faul
- Institute of Psychology, Kiel University, Neufeldtstr. 4a, 24118, Kiel, Germany
- Antje Nuthmann
- Institute of Psychology, Kiel University, Neufeldtstr. 4a, 24118, Kiel, Germany
7. Griffiths T, Judge S, Souto D. Use of eye-gaze technology feedback by assistive technology professionals: findings from a thematic analysis. Disabil Rehabil Assist Technol 2024;19:2708-2725. PMID: 38592954; DOI: 10.1080/17483107.2024.2338125.
Abstract
Purpose: Eye-gaze technology offers professionals a range of feedback tools, but it is not well understood how these are used to support decision-making or how professionals understand their purpose and function. This paper explores how professionals use a variety of feedback tools and provides commentary on their current use and ideas for future tool development.
Methods and Materials: The study adopted a focus group methodology with two groups of professional participants: those involved in the assessment and provision of eye-gaze technology (n = 6) and those who interact with individuals using eye-gaze technology on an ongoing basis (n = 5). Template analysis was used to provide qualitative insight into the research questions.
Results: Professionals highlighted several issues with existing tools and gave suggestions on how these could be improved. It is generally felt that existing tools highlight the existence of problems but offer little in the way of solutions or suggestions. Some differences of opinion related to professional perspective were highlighted. Questions about automating certain processes were raised by both groups.
Conclusions: Discussion highlighted the need for different levels of feedback for users and professionals. Professionals agreed that current tools are useful for identifying problems but do not offer insight into potential solutions. Some tools are being used to draw inferences about vision and cognition that are not supported by the existing literature. New tools may be needed to better meet professionals' needs, and an increased understanding of how existing tools function may support such development.
Affiliation(s)
- Tom Griffiths
- School of Computing, University of Dundee, Dundee, UK
- Simon Judge
- Barnsley Assistive Technology Team, Barnsley Hospital NHS Foundation Trust, Barnsley, UK
- David Souto
- School of Psychology and Vision Sciences, University of Leicester, Leicester, UK
8. Borovska P, de Haas B. Individual gaze shapes diverging neural representations. Proc Natl Acad Sci U S A 2024;121:e2405602121. PMID: 39213176; PMCID: PMC11388360; DOI: 10.1073/pnas.2405602121.
Abstract
Complex visual stimuli evoke diverse patterns of gaze, but previous research suggests that their neural representations are shared across brains. Here, we used hyperalignment to compare visual responses between observers viewing identical stimuli. We find that individual eye movements enhance cortical visual responses but also lead to representational divergence. Pairwise differences in the spatial distribution of gaze and in semantic salience predict pairwise representational divergence in V1 and inferior temporal cortex, respectively. This suggests that individual gaze sculpts individual visual worlds.
Affiliation(s)
- Petra Borovska
- Department of Experimental Psychology, Justus Liebig University, Giessen 35394, Germany
- Benjamin de Haas
- Department of Experimental Psychology, Justus Liebig University, Giessen 35394, Germany
- Center for Mind, Brain and Behavior, Marburg and Giessen, Darmstadt 35032, Germany
9. Becker M, Troje NF, Schmidt F, Haberkamp A. Moving spiders do not boost visual search in spider fear. Sci Rep 2024;14:19006. PMID: 39152224; PMCID: PMC11329515; DOI: 10.1038/s41598-024-69468-3.
Abstract
Previous research on attention to fear-relevant stimuli has largely focused on static pictures or drawings and thus has not considered the potential effect of natural motion. Here, we aimed to investigate the effect of motion on attentional capture in spider-fearful and non-fearful participants using point-light stimuli and naturalistic videos. Point-light stimuli consist of moving dots representing joints, visualizing biological motion (e.g., of a walking human or cat) without a visible body. Spider-fearful (n = 30) and non-spider-fearful (n = 31) participants completed a visual search task with moving targets (point-light/naturalistic videos) and static distractors (images), static targets and moving distractors, or static targets and static distractors. Participants searched for a specified animal type (snakes, spiders, cats, or doves) as quickly as possible. We replicated previous findings with static stimuli: snakes were detected faster and increased distraction, while spiders only increased distraction. However, contrary to our hypotheses, spider targets did not speed up responses, neither in the control group nor in the spider-fearful group. Interestingly, stimulus-specific effects were toned down, abolished, or even reversed when motion was introduced. We also demonstrated that point-light stimuli were as effective as naturalistic videos, indicating that "pure" motion stimuli may suffice for testing effects of motion in visual search. As we show a substantial modulation of visual search phenomena by biological motion, we advocate that future studies use moving stimuli, matching our dynamic environment, to increase ecological validity.
Affiliation(s)
- Miriam Becker
- Clinical Psychology and Psychotherapy, University of Marburg, Gutenbergstr. 18, 35032, Marburg, Germany.
- Filipp Schmidt
- Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany
- Anke Haberkamp
- Clinical Psychology and Psychotherapy, University of Marburg, Gutenbergstr. 18, 35032, Marburg, Germany
- Justus Liebig University Giessen, Giessen, Germany
10. Berneshawi AR, Seyedmadani K, Goel R, Anderson MR, Tyson TL, Akay YM, Akay M, Leung LSB, Stone LS. Oculometric biomarkers of visuomotor deficits in clinically asymptomatic patients with systemic lupus erythematosus undergoing long-term hydroxychloroquine treatment. Front Ophthalmol 2024;4:1354892. PMID: 39104603; PMCID: PMC11298511; DOI: 10.3389/fopht.2024.1354892.
Abstract
Introduction: This study examines a set of oculomotor measurements, or "oculometric" biomarkers, as potential early indicators of visual and visuomotor deficits due to retinal toxicity in asymptomatic systemic lupus erythematosus (SLE) patients on long-term hydroxychloroquine (HCQ) treatment. The aim is to identify subclinical functional impairments that are otherwise undetectable by standard clinical tests and to link them to structural retinal changes.
Methods: We measured oculomotor responses in a cohort of SLE patients on chronic HCQ therapy using a previously established behavioral task and analysis technique. We also examined the relationship between oculometrics, OCT measures of retinal thickness, and standard clinical perimetry measures of visual function in our patient group using bivariate Pearson correlation and a linear mixed-effects model (LMM).
Results: Significant visual and visuomotor deficits were found in 12 asymptomatic SLE patients on long-term HCQ therapy compared to a cohort of 17 age-matched healthy controls. Notably, six oculometrics were significantly different: median initial pursuit acceleration was 22% lower, steady-state pursuit gain 16% lower, proportion smooth 7% lower, and target-speed responsiveness 31% lower, while catch-up saccade amplitude was 46% larger and fixation error 46% larger. Excluding the two patients with diagnosed mild toxicity, four oculometrics (all but fixation error and proportion smooth) remained significantly impaired compared to controls. Across our population of 12 patients (24 retinae), pursuit latency, initial acceleration, steady-state gain, and fixation error were linearly related to retinal thickness even when age was accounted for, while standard measures of clinical function (Mean Deviation and Pattern Standard Deviation) were not.
Discussion: Our data show that specific oculometrics are sensitive early biomarkers of functional deficits in SLE patients on HCQ that could be harnessed to assist in the early detection of HCQ-induced retinal toxicity and other visual pathologies, potentially providing early diagnostic value beyond standard visual field and OCT evaluations.
Affiliation(s)
- Andrew R. Berneshawi
- Ophthalmology Department, Stanford University School of Medicine, Stanford, CA, United States
- Kimia Seyedmadani
- Research Operations and Integration Laboratory, Johnson Space Center, National Aeronautics and Space Administration, Houston, TX, United States
- Biomedical Engineering Department, University of Houston, Houston, TX, United States
- Rahul Goel
- San Jose State University Foundation, San Jose, CA, United States
- Human Systems Integration Division, Ames Research Center, National Aeronautics and Space Administration, Moffett Field, CA, United States
- Mark R. Anderson
- Human Systems Integration Division, Ames Research Center, National Aeronautics and Space Administration, Moffett Field, CA, United States
- Arctic Slope Regional Corporation (ASRC) Federal Data Solutions, Moffett Field, CA, United States
- Terence L. Tyson
- Human Systems Integration Division, Ames Research Center, National Aeronautics and Space Administration, Moffett Field, CA, United States
- Yasmin M. Akay
- Biomedical Engineering Department, University of Houston, Houston, TX, United States
- Metin Akay
- Biomedical Engineering Department, University of Houston, Houston, TX, United States
- Loh-Shan B. Leung
- Ophthalmology Department, Stanford University School of Medicine, Stanford, CA, United States
- Leland S. Stone
- Human Systems Integration Division, Ames Research Center, National Aeronautics and Space Administration, Moffett Field, CA, United States
11. Harrison WJ, Stead I, Wallis TSA, Bex PJ, Mattingley JB. A computational account of transsaccadic attentional allocation based on visual gain fields. Proc Natl Acad Sci U S A 2024;121:e2316608121. PMID: 38941277; PMCID: PMC11228487; DOI: 10.1073/pnas.2316608121.
Abstract
Coordination of goal-directed behavior depends on the brain's ability to recover the locations of relevant objects in the world. In humans, the visual system encodes the spatial organization of sensory inputs, but neurons in early visual areas map objects according to their retinal positions, rather than where they are in the world. How the brain computes world-referenced spatial information across eye movements has been widely researched and debated. Here, we tested whether shifts of covert attention are sufficiently precise in space and time to track an object's real-world location across eye movements. We found that observers' attentional selectivity is remarkably precise and is barely perturbed by the execution of saccades. Inspired by recent neurophysiological discoveries, we developed an observer model that rapidly estimates the real-world locations of objects and allocates attention within this reference frame. The model recapitulates the human data and provides a parsimonious explanation for previously reported phenomena in which observers allocate attention to task-irrelevant locations across eye movements. Our findings reveal that visual attention operates in real-world coordinates, which can be computed rapidly at the earliest stages of cortical processing.
Affiliation(s)
- William J. Harrison
- Psychology, School of Health, University of the Sunshine Coast, Sippy Downs, QLD 4556, Australia
- Queensland Brain Institute, The University of Queensland, St. Lucia, QLD 4072, Australia
- The School of Psychology, The University of Queensland, St. Lucia, QLD 4072, Australia
- Imogen Stead
- Queensland Brain Institute, The University of Queensland, St. Lucia, QLD 4072, Australia
- Thomas S. A. Wallis
- Centre for Cognitive Science and Institute of Psychology, Technical University of Darmstadt, Darmstadt 64283, Germany
- Center for Mind, Brain and Behavior (CMBB), Universities of Marburg, Giessen, and Darmstadt, Marburg 35032, Germany
- Peter J. Bex
- Department of Psychology, Northeastern University, Boston, MA 02115
- Jason B. Mattingley
- Queensland Brain Institute, The University of Queensland, St. Lucia, QLD 4072, Australia
- The School of Psychology, The University of Queensland, St. Lucia, QLD 4072, Australia
- Canadian Institute for Advanced Research, Toronto, ON M5G 1M1, Canada
12. Mohamed Selim A, Barz M, Bhatti OS, Alam HMT, Sonntag D. A review of machine learning in scanpath analysis for passive gaze-based interaction. Front Artif Intell 2024;7:1391745. PMID: 38903158; PMCID: PMC11188426; DOI: 10.3389/frai.2024.1391745.
Abstract
The scanpath is an important concept in eye tracking. It refers to a person's eye movements over a period of time, commonly represented as a series of alternating fixations and saccades. Machine learning has increasingly been used for the automatic interpretation of scanpaths over the past few years, particularly in research on passive gaze-based interaction, i.e., interfaces that implicitly observe and interpret human eye movements with the goal of improving the interaction. This literature review investigates research on machine learning applications in scanpath analysis for passive gaze-based interaction between 2012 and 2022, starting from 2,425 publications and focusing on 77 of them. We provide insights into research domains and common learning tasks in passive gaze-based interaction and present common machine learning practices, from data collection and preparation to model selection and evaluation. We discuss commonly followed practices and identify gaps and challenges, especially concerning emerging machine learning topics, to guide future research in the field.
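The fixation/saccade representation that scanpath analysis builds on can be made concrete with a minimal dispersion-threshold (I-DT-style) event detector. This is an illustrative sketch, not a method from the review; the threshold and minimum-window values are hypothetical and would normally be tuned to the eye tracker's sampling rate and noise:

```python
def idt_fixations(points, dispersion_threshold=1.0, min_samples=5):
    """Minimal I-DT-style detector: group consecutive gaze samples whose
    bounding-box dispersion (width + height) stays under a threshold into
    fixations. Returns a list of (start_index, end_index_exclusive) pairs;
    gaps between fixations correspond to saccades."""
    fixations = []
    start = 0
    n = len(points)
    while start < n:
        end = start + min_samples
        if end > n:
            break
        window = points[start:end]
        xs = [p[0] for p in window]
        ys = [p[1] for p in window]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= dispersion_threshold:
            # grow the window while dispersion stays under the threshold
            while end < n:
                xs.append(points[end][0])
                ys.append(points[end][1])
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > dispersion_threshold:
                    break
                end += 1
            fixations.append((start, end))
            start = end
        else:
            start += 1
    return fixations

# Two tight gaze clusters separated by a saccade-like jump
pts = [(0.0, 0.0)] * 10 + [(5.0, 5.0)] * 10
print(idt_fixations(pts))  # [(0, 10), (10, 20)]
```

The resulting fixation sequence (with centroids and durations attached) is the kind of scanpath representation that downstream machine learning models typically consume.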
Affiliation(s)
- Abdulrahman Mohamed Selim
- German Research Center for Artificial Intelligence (DFKI), Interactive Machine Learning Department, Saarbrücken, Germany
- Michael Barz
- German Research Center for Artificial Intelligence (DFKI), Interactive Machine Learning Department, Saarbrücken, Germany
- Applied Artificial Intelligence, University of Oldenburg, Oldenburg, Germany
- Omair Shahzad Bhatti
- German Research Center for Artificial Intelligence (DFKI), Interactive Machine Learning Department, Saarbrücken, Germany
- Hasan Md Tusfiqur Alam
- German Research Center for Artificial Intelligence (DFKI), Interactive Machine Learning Department, Saarbrücken, Germany
- Daniel Sonntag
- German Research Center for Artificial Intelligence (DFKI), Interactive Machine Learning Department, Saarbrücken, Germany
- Applied Artificial Intelligence, University of Oldenburg, Oldenburg, Germany
13. Zhang X, He L, Gao Q, Jiang N. Performance of the Action Observation-Based Brain-Computer Interface in Stroke Patients and Gaze Metrics Analysis. IEEE Trans Neural Syst Rehabil Eng 2024;32:1370-1379. PMID: 38512735; DOI: 10.1109/tnsre.2024.3379995.
Abstract
Brain-computer interfaces (BCIs) are anticipated to improve the efficacy of rehabilitation for people with motor disabilities. However, applying BCIs in clinical practice remains a challenge due to the great diversity of patients. In the current study, a novel action observation (AO) based BCI was proposed and tested on stroke patients. Ten non-hemineglect patients and ten hemineglect patients were recruited. Four AO stimuli were designed, each presenting a decomposed action to complete the reach-and-grasp task. EEG data and eye movement data were collected; the eye movement data were used to analyze the reasons for individual differences in BCI performance. Task discriminative component analysis was utilized to perform online target detection. The results showed that the designed AO-based BCI could simultaneously induce steady-state motion visual evoked potentials (SSMVEP) from the occipital region and sensorimotor rhythm from the sensorimotor region in stroke patients. The average online detection accuracy among the four AO stimuli reached 67% within 3 s in the non-hemineglect group, while accuracy reached only 35% in the hemineglect group. Gaze metrics showed that the average total duration of fixations during the stimulus phase in the hemineglect group was only 1.31 s ± 0.532 s, significantly lower than in the non-hemineglect group. The results indicated that hemineglect patients have difficulty gazing at the AO stimulus, resulting in poor detection performance and weak desynchronization in the sensorimotor region. Furthermore, the degree of neglect is inversely proportional to target detection accuracy in hemineglect stroke patients. In addition, gaze metrics associated with cognitive load were significantly correlated with accuracy in non-hemineglect patients, indicating that cognitive load may affect AO-based BCI performance. The current study should expedite the clinical application of AO-based BCIs.
14. Hirata T, Hirata Y, Kawai N. Human estimates of descending objects' motion are more accurate than those of ascending objects regardless of gravity information. J Vis 2024;24:2. PMID: 38436983; PMCID: PMC10913939; DOI: 10.1167/jov.24.3.2.
Abstract
Humans can accurately estimate and track object motion, even if it accelerates. Research shows that humans exhibit superior estimation and tracking performance for descending (falling) than for ascending (rising) objects. Previous studies presented ascending and descending targets along the gravitational and body axes in an upright posture. Thus, it is unclear whether humans rely on congruence between the direction of target motion and gravity, or between the direction of target motion and the longitudinal body axis. Two experiments were conducted to explore these possibilities. In Experiment 1, participants estimated the arrival time at a goal for both upward and downward motion of targets along the longitudinal body axis in the upright (axes of target motion and gravity congruent) and supine (axes incongruent) postures. In Experiment 2, smooth pursuit eye movements were assessed while tracking both targets in the same postures. Arrival time estimation and smooth pursuit eye movement performance were consistently more accurate for downward than for upward target motion, irrespective of posture. These findings suggest that the everyday visual experience of seeing objects move along the observer's leg side may influence the ability to accurately estimate and track a descending object's motion.
Affiliation(s)
- Takashi Hirata
- Department of Cognitive and Psychological Sciences, Nagoya University Graduate School of Informatics, Nagoya, Aichi, Japan
- JSPS Research Fellowships for Young Scientists, Tokyo, Japan
- Yutaka Hirata
- Department of Artificial Intelligence and Robotics, Chubu University College of Science and Engineering, Kasugai, Aichi, Japan
- Academy of Emerging Sciences, Chubu University, Kasugai, Aichi, Japan
- Center for Mathematical Science and Artificial Intelligence, Chubu University, Kasugai, Aichi, Japan
- Nobuyuki Kawai
- Department of Cognitive and Psychological Sciences, Nagoya University Graduate School of Informatics, Nagoya, Aichi, Japan
- Academy of Emerging Sciences, Chubu University, Kasugai, Aichi, Japan
15
Melnyk K, Friedman L, Komogortsev OV. What can entropy metrics tell us about the characteristics of ocular fixation trajectories? PLoS One 2024; 19:e0291823. [PMID: 38166054] [PMCID: PMC10760742] [DOI: 10.1371/journal.pone.0291823] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/10/2022] [Accepted: 09/06/2023] [Indexed: 01/04/2024]
Abstract
In this study, we provide a detailed analysis of entropy measures calculated for fixation eye movement trajectories from three different datasets. We employed six key metrics (Fuzzy, Increment, Sample, Gridded Distribution, Phase, and Spectral Entropies). We calculated these six metrics on three sets of fixations: (1) fixations from the GazeCom dataset, (2) fixations from what we refer to as the "Lund" dataset, and (3) fixations from our own research laboratory (the "OK Lab" dataset). For each entropy measure and each dataset, we closely examined the 36 fixations with the highest entropy and the 36 fixations with the lowest entropy. From this, it was clear that the nature of the information provided by our entropy metrics depended on which dataset was evaluated. These entropy metrics found various types of misclassified fixations in the GazeCom dataset, and two of them also detected fixations with substantial linear drift. For the Lund dataset, the only finding was that low spectral entropy was associated with what we call "bumpy" fixations, i.e., fixations with low-frequency oscillations. For the OK Lab dataset, three entropy metrics found fixations with high-frequency noise, which probably represents ocular microtremor, and one found fixations with linear drift. The between-dataset differences are discussed in terms of the number of fixations in each dataset, the different eye movement stimuli employed, and the method of eye movement classification.
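Of the six metrics, Sample Entropy is representative and compact enough to sketch; a minimal implementation for one coordinate of a fixation trajectory, assuming the common defaults of template length m = 2 and tolerance r = 0.2 × SD (not necessarily the study's exact settings):

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample Entropy of a 1-D signal, e.g. one coordinate of a fixation
    trajectory. Regular, repetitive trajectories score low; noisy,
    unpredictable ones score high."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()          # common heuristic: 20% of the SD
    n = len(x)
    num = n - m                    # same template count for both lengths

    def matches(length):
        t = np.array([x[i:i + length] for i in range(num)])
        c = 0
        for i in range(num - 1):
            # Chebyshev distance to all later templates (no self-matches)
            d = np.max(np.abs(t[i + 1:] - t[i]), axis=1)
            c += int(np.sum(d <= r))
        return c

    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```

A smooth sine-like drift yields a much lower value than white noise of the same length, which is the kind of contrast these metrics exploit to flag unusual fixations.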
Affiliation(s)
- Kateryna Melnyk
- Department of Computer Science, Texas State University, San Marcos, TX, United States of America
- Lee Friedman
- Department of Computer Science, Texas State University, San Marcos, TX, United States of America
- Oleg V. Komogortsev
- Department of Computer Science, Texas State University, San Marcos, TX, United States of America
16
Horvath S, Arunachalam S. Assessing receptive verb knowledge in late talkers and autistic children: advances and cautionary tales. J Neurodev Disord 2023; 15:44. [PMID: 38087233] [PMCID: PMC10717976] [DOI: 10.1186/s11689-023-09512-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 02/21/2023] [Accepted: 11/16/2023] [Indexed: 12/18/2023]
Abstract
PURPOSE Using eye-tracking, we assessed the receptive verb vocabularies of age-matched late talkers and typically developing children (experiment 1) and autistic preschoolers (experiment 2). We evaluated how many verbs participants knew and how quickly they processed the linguistic prompt. Our goal was to explore how these eye-gaze measures can be operationalized to capture verb knowledge in late talkers and autistic children. METHOD Participants previewed two dynamic scenes side-by-side (e.g., "stretching" and "clapping") and were then prompted to find the target verb's referent. Children's eye-gaze behaviors were operationalized using established approaches in the field, with modifications to account for the type of stimuli (dynamic scenes versus static images) and the populations included. Accuracy was calculated as the proportion of time spent looking at the target, and linguistic processing was operationalized as the latency of children's first look to the target. RESULTS In experiment 1, there were no group differences in the proportion of verbs known, but late talkers took longer to demonstrate their knowledge than typically developing children. Latency was predicted by age but not language abilities. In experiment 2, autistic children's accuracy and latency were both predicted by receptive language abilities. CONCLUSION Eye gaze can be used to assess receptive verb vocabulary in a variety of populations, but in operationalizing gaze behavior, we must account for between- and within-group differences. Bootstrapped cluster-permutation analysis is one way to create individualized measures of children's gaze behavior, but more research is warranted using an individual differences approach with this type of analysis.
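The two operationalizations described here, accuracy as proportion of looking time and processing as latency of the first target look, can be sketched from a per-sample gaze record; the function name and the boolean on-target representation are assumptions for illustration:

```python
import numpy as np

def accuracy_and_latency(on_target, sample_rate_hz):
    """Accuracy: proportion of gaze samples on the target scene.
    Latency: time (s) of the first on-target sample after the prompt,
    or None if the child never looked at the target."""
    on_target = np.asarray(on_target, dtype=bool)
    accuracy = float(on_target.mean())
    hits = np.flatnonzero(on_target)
    latency = float(hits[0]) / sample_rate_hz if hits.size else None
    return accuracy, latency
```

With samples [False, False, True, True] at 2 Hz, accuracy is 0.5 and the first look lands at 1.0 s.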
17
Radecke JO, Sprenger A, Stöckler H, Espeter L, Reichhardt MJ, Thomann LS, Erdbrügger T, Buschermöhle Y, Borgwardt S, Schneider TR, Gross J, Wolters CH, Lencer R. Normative tDCS over V5 and FEF reveals practice-induced modulation of extraretinal smooth pursuit mechanisms, but no specific stimulation effect. Sci Rep 2023; 13:21380. [PMID: 38049419] [PMCID: PMC10695990] [DOI: 10.1038/s41598-023-48313-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 09/11/2023] [Accepted: 11/24/2023] [Indexed: 12/06/2023]
Abstract
The neural networks subserving smooth pursuit eye movements (SPEM) provide an ideal model for investigating the interaction of sensory processing and motor control during ongoing movements. To better understand core plasticity aspects of sensorimotor processing for SPEM, normative sham, anodal, or cathodal transcranial direct current stimulation (tDCS) was applied over visual area V5 and the frontal eye fields (FEF) in sixty healthy participants. The same within-subject paradigm was used to assess SPEM modulation by practice. While no specific tDCS effects were revealed, within- and between-session practice effects indicate plasticity of top-down extraretinal mechanisms that mainly affect SPEM in the absence of visual input and during SPEM initiation. To explore the potential of tDCS effects, individual electric field simulations were computed based on calibrated finite element head models, and V5 and FEF were individually localized in position (using functional MRI) and orientation (using combined EEG/MEG). Simulations revealed only limited electric field target intensities induced by the applied normative tDCS montages but indicate the potential efficacy of personalized tDCS for the modulation of SPEM. In sum, the results indicate the potential susceptibility of extraretinal SPEM control to targeted external neuromodulation (e.g., personalized tDCS) and intrinsic learning protocols.
Affiliation(s)
- Jan-Ole Radecke
- Department of Psychiatry and Psychotherapy, University of Lübeck, Ratzeburger Allee 160, 23562, Lübeck, Germany.
- Center of Brain, Behavior and Metabolism (CBBM), University of Lübeck, 23562, Lübeck, Germany.
- Andreas Sprenger
- Center of Brain, Behavior and Metabolism (CBBM), University of Lübeck, 23562, Lübeck, Germany
- Department of Neurology, University of Lübeck, 23562, Lübeck, Germany
- Institute of Psychology II, University of Lübeck, 23562, Lübeck, Germany
- Hannah Stöckler
- Department of Psychiatry and Psychotherapy, University of Lübeck, Ratzeburger Allee 160, 23562, Lübeck, Germany
- Center of Brain, Behavior and Metabolism (CBBM), University of Lübeck, 23562, Lübeck, Germany
- Lisa Espeter
- Department of Psychiatry and Psychotherapy, University of Lübeck, Ratzeburger Allee 160, 23562, Lübeck, Germany
- Center of Brain, Behavior and Metabolism (CBBM), University of Lübeck, 23562, Lübeck, Germany
- Mandy-Josephine Reichhardt
- Center of Brain, Behavior and Metabolism (CBBM), University of Lübeck, 23562, Lübeck, Germany
- Institute of Psychology II, University of Lübeck, 23562, Lübeck, Germany
- Lara S Thomann
- Department of Psychiatry and Psychotherapy, University of Lübeck, Ratzeburger Allee 160, 23562, Lübeck, Germany
- Center of Brain, Behavior and Metabolism (CBBM), University of Lübeck, 23562, Lübeck, Germany
- Tim Erdbrügger
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, 48149, Münster, Germany
- Yvonne Buschermöhle
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, 48149, Münster, Germany
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Münster, 48149, Münster, Germany
- Stefan Borgwardt
- Department of Psychiatry and Psychotherapy, University of Lübeck, Ratzeburger Allee 160, 23562, Lübeck, Germany
- Center of Brain, Behavior and Metabolism (CBBM), University of Lübeck, 23562, Lübeck, Germany
- Till R Schneider
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246, Hamburg, Germany
- Joachim Gross
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, 48149, Münster, Germany
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Münster, 48149, Münster, Germany
- Carsten H Wolters
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, 48149, Münster, Germany
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Münster, 48149, Münster, Germany
- Rebekka Lencer
- Department of Psychiatry and Psychotherapy, University of Lübeck, Ratzeburger Allee 160, 23562, Lübeck, Germany
- Center of Brain, Behavior and Metabolism (CBBM), University of Lübeck, 23562, Lübeck, Germany
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Münster, 48149, Münster, Germany
- Institute for Translational Psychiatry, University of Münster, 48149, Münster, Germany
18
Landová E, Štolhoferová I, Vobrubová B, Polák J, Sedláčková K, Janovcová M, Rádlová S, Frynta D. Attentional, emotional, and behavioral response toward spiders, scorpions, crabs, and snakes provides no evidence for generalized fear between spiders and scorpions. Sci Rep 2023; 13:20972. [PMID: 38017048] [PMCID: PMC10684562] [DOI: 10.1038/s41598-023-48229-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/25/2023] [Accepted: 11/23/2023] [Indexed: 11/30/2023]
Abstract
Spiders are among the animals evoking the highest fear and disgust, and such a complex response might have been formed throughout human evolution. Ironically, most spiders do not present a serious threat, so the evolutionary explanation remains questionable. We suggest that other chelicerates, such as scorpions, have been potentially important in the formation and fixation of the spider-like category. In this eye-tracking study, we focused on the attentional, behavioral, and emotional response to images of spiders, scorpions, snakes, and crabs used as task-irrelevant distractors. Results show that spider-fearful subjects were selectively distracted by images of spiders and crabs. Interestingly, these stimuli were not rated as eliciting high fear, in contrast to the other animals. We hypothesize that spider-fearful participants might have mistaken crabs for spiders based on their shared physical characteristics. In contrast, subjects with no fear of spiders were most distracted by snakes and scorpions, which supports the view that scorpions, like snakes, are prioritized, evolutionarily relevant stimuli. We also found that reaction time increased systematically with increasing subjective fear of spiders only when spiders (and, to some extent, crabs) were used as distractors, but not snakes and scorpions. The maximal pupil response covered not only the attentional and cognitive response but was also tightly correlated with the fear ratings of the picture stimuli. However, participants' fear of spiders did not affect individual reactions to scorpions as measured by the maximal pupil response. We conclude that scorpions are evolutionarily fear-relevant stimuli; however, generalization between scorpions and spiders was not supported in spider-fearful participants. This result might be important for a better understanding of the evolution of spider phobia.
Affiliation(s)
- E Landová
- Department of Zoology, Faculty of Science, Charles University, Prague, Czech Republic.
- National Institute of Mental Health, Klecany, Czech Republic.
- I Štolhoferová
- Department of Zoology, Faculty of Science, Charles University, Prague, Czech Republic
- B Vobrubová
- Department of Zoology, Faculty of Science, Charles University, Prague, Czech Republic
- J Polák
- Department of Zoology, Faculty of Science, Charles University, Prague, Czech Republic
- K Sedláčková
- National Institute of Mental Health, Klecany, Czech Republic
- M Janovcová
- Department of Zoology, Faculty of Science, Charles University, Prague, Czech Republic
- S Rádlová
- National Institute of Mental Health, Klecany, Czech Republic
- D Frynta
- Department of Zoology, Faculty of Science, Charles University, Prague, Czech Republic
19
Roth N, Rolfs M, Hellwich O, Obermayer K. Objects guide human gaze behavior in dynamic real-world scenes. PLoS Comput Biol 2023; 19:e1011512. [PMID: 37883331] [PMCID: PMC10602265] [DOI: 10.1371/journal.pcbi.1011512] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/20/2023] [Accepted: 09/12/2023] [Indexed: 10/28/2023]
Abstract
The complexity of natural scenes makes it challenging to experimentally study the mechanisms behind human gaze behavior when viewing dynamic environments. Historically, eye movements were believed to be driven primarily by space-based attention towards locations with salient features. Increasing evidence suggests, however, that visual attention does not select locations with high saliency but operates on attentional units given by the objects in the scene. We present a new computational framework to investigate the importance of objects for attentional guidance. This framework is designed to simulate realistic scanpaths for dynamic real-world scenes, including saccade timing and smooth pursuit behavior. Individual model components are based on psychophysically uncovered mechanisms of visual attention and saccadic decision-making. All mechanisms are implemented in a modular fashion with a small number of well-interpretable parameters. To systematically analyze the importance of objects in guiding gaze behavior, we implemented five different models within this framework: two purely spatial models, where one is based on low-level saliency and one on high-level saliency, two object-based models, with one incorporating low-level saliency for each object and the other one not using any saliency information, and a mixed model with object-based attention and selection but space-based inhibition of return. We optimized each model's parameters to reproduce the saccade amplitude and fixation duration distributions of human scanpaths using evolutionary algorithms. We compared model performance with respect to spatial and temporal fixation behavior, including the proportion of fixations exploring the background, as well as detecting, inspecting, and returning to objects. A model with object-based attention and inhibition, which uses saliency information to prioritize between objects for saccadic selection, leads to scanpath statistics with the highest similarity to the human data. This demonstrates that scanpath models benefit from object-based attention and selection, suggesting that object-level attentional units play an important role in guiding attentional processing.
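The two distributions the models were optimized to reproduce, saccade amplitudes and fixation durations, can be extracted from a scanpath as in this sketch; the (x, y, duration) fixation representation is an assumption for illustration:

```python
import numpy as np

def scanpath_statistics(fixations):
    """fixations: sequence of (x_deg, y_deg, duration_s) tuples.
    Returns the saccade amplitudes (deg) between consecutive fixations
    and the fixation durations, i.e. the two empirical distributions
    a scanpath model would be fitted against."""
    xy = np.array([[f[0], f[1]] for f in fixations])
    durations = np.array([f[2] for f in fixations])
    amplitudes = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    return amplitudes, durations
```

A fixation at (0, 0) followed by one at (3, 4) yields a single 5-degree saccade amplitude, and the model parameters would be tuned until the simulated amplitude and duration histograms match the human ones.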
Affiliation(s)
- Nicolas Roth
- Cluster of Excellence Science of Intelligence, Technische Universität Berlin, Germany
- Institute of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Germany
- Martin Rolfs
- Cluster of Excellence Science of Intelligence, Technische Universität Berlin, Germany
- Department of Psychology, Humboldt-Universität zu Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Germany
- Olaf Hellwich
- Cluster of Excellence Science of Intelligence, Technische Universität Berlin, Germany
- Institute of Computer Engineering and Microelectronics, Technische Universität Berlin, Germany
- Klaus Obermayer
- Cluster of Excellence Science of Intelligence, Technische Universität Berlin, Germany
- Institute of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Germany
20
Elmadjian C, Gonzales C, Costa RLD, Morimoto CH. Online eye-movement classification with temporal convolutional networks. Behav Res Methods 2023; 55:3602-3620. [PMID: 36220951] [DOI: 10.3758/s13428-022-01978-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Accepted: 09/08/2022] [Indexed: 11/08/2022]
Abstract
The simultaneous classification of the three most basic eye-movement patterns is known as the ternary eye-movement classification problem (3EMCP). Dynamic, interactive real-time applications that must instantly adjust or respond to certain eye behaviors would highly benefit from accurate, robust, fast, and low-latency classification methods. Recent developments based on 1D-CNN-BiLSTM and TCN architectures have proven more accurate and robust than previous solutions, but only for offline applications. In this paper, we propose a TCN classifier for the 3EMCP, adapted to online applications, that does not require look-ahead buffers. We introduce a new lightweight preprocessing technique that allows the TCN to make real-time predictions at about 500 Hz with low latency using commodity hardware. We evaluate the TCN's performance against two other deep neural models, a CNN-LSTM and a CNN-BiLSTM, also adapted to online classification. Furthermore, we compare the performance of the deep neural models against a lightweight real-time Bayesian classifier (I-BDT). Our results, considering two publicly available datasets, show that the proposed TCN model consistently outperforms the other methods for all classes. The results also show that, though it is possible to achieve reasonable accuracy with zero-length look-ahead, the performance of all methods improves with the use of look-ahead information. The codebase, pre-trained models, and datasets are available at https://github.com/elmadjian/OEMC.
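The zero-look-ahead property that makes a TCN suitable for online use comes from causal convolutions, which pad only on the left so the output at time t never depends on future samples; a minimal numpy sketch of that building block (an illustration of the general technique, not the authors' architecture):

```python
import numpy as np

def causal_conv1d(x, kernel):
    """Causal 1-D convolution: left-pad with zeros so the output at
    time t depends only on x[t], x[t-1], ... -- no look-ahead buffer
    is required, which is what makes TCN-style models usable for
    online eye-movement classification."""
    x = np.asarray(x, dtype=float)
    kernel = np.asarray(kernel, dtype=float)
    k = len(kernel)
    padded = np.concatenate([np.zeros(k - 1), x])
    # y[t] = sum_j kernel[j] * x[t - j]
    return np.array([np.dot(padded[t:t + k], kernel[::-1])
                     for t in range(len(x))])
```

With kernel [0, 1] the output is the input delayed by one sample, confirming that no future sample leaks into the present.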
Affiliation(s)
- Carlos Elmadjian
- University of São Paulo, R. do Matão, 1010, 256-A, São Paulo, Brazil.
- Candy Gonzales
- University of São Paulo, R. do Matão, 1010, 256-A, São Paulo, Brazil
- Carlos H Morimoto
- University of São Paulo, R. do Matão, 1010, 209-C, São Paulo, Brazil
21
Pedziwiatr MA, Heer S, Coutrot A, Bex PJ, Mareschal I. Influence of prior knowledge on eye movements to scenes as revealed by hidden Markov models. J Vis 2023; 23:10. [PMID: 37721772] [PMCID: PMC10511023] [DOI: 10.1167/jov.23.10.10] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 01/15/2023] [Accepted: 08/14/2023] [Indexed: 09/19/2023]
Abstract
Human visual experience usually provides ample opportunity to accumulate knowledge about events unfolding in the environment. In typical scene perception experiments, however, participants view images that are unrelated to each other and, therefore, they cannot accumulate knowledge relevant to the upcoming visual input. Consequently, the influence of such knowledge on how this input is processed remains underexplored. Here, we investigated this influence in the context of gaze control. We used sequences of static film frames arranged in a way that allowed us to compare eye movements to identical frames between two groups: a group that accumulated prior knowledge relevant to the situations depicted in these frames and a group that did not. We used a machine learning approach based on hidden Markov models fitted to individual scanpaths to demonstrate that the gaze patterns from the two groups differed systematically and, thereby, showed that recently accumulated prior knowledge contributes to gaze control. Next, we leveraged the interpretability of hidden Markov models to characterize these differences. Additionally, we report two unexpected and interesting caveats of our approach. Overall, our results highlight the importance of recently acquired prior knowledge for oculomotor control and the potential of hidden Markov models as a tool for investigating it.
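Hidden Markov model scanpath analyses of this kind rest on evaluating how well a fitted model explains a gaze sequence; a minimal sketch of the scaled forward algorithm for a discrete-emission HMM (the study fits HMMs to continuous scanpaths, so discrete symbols such as region-of-interest labels are a simplification here):

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm. pi[i]: initial state
    probabilities, A[i, j]: transition probabilities, B[i, k]:
    probability of emitting symbol k in state i."""
    alpha = pi * B[:, obs[0]]
    s = alpha.sum()
    loglik = np.log(s)
    alpha = alpha / s               # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        loglik += np.log(s)
        alpha = alpha / s
    return loglik
```

Comparing such per-scanpath log-likelihoods between models fitted to different viewer groups is one way to quantify systematic differences in gaze patterns.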
Affiliation(s)
- Marek A Pedziwiatr
- School of Biological and Behavioural Sciences, Queen Mary University of London, London, UK
- Sophie Heer
- School of Biological and Behavioural Sciences, Queen Mary University of London, London, UK
- Antoine Coutrot
- Univ Lyon, CNRS, INSA Lyon, UCBL, LIRIS, UMR5205, F-69621 Lyon, France
- Peter J Bex
- Department of Psychology, Northeastern University, Boston, MA, USA
- Isabelle Mareschal
- School of Biological and Behavioural Sciences, Queen Mary University of London, London, UK
22
Ntodie M, Saunders K, Little JA. Accuracy and stability of accommodation and vergence responses during sustained near tasks in uncorrected hyperopes. Sci Rep 2023; 13:14389. [PMID: 37658084] [PMCID: PMC10474059] [DOI: 10.1038/s41598-023-41244-9] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Received: 05/29/2022] [Accepted: 08/23/2023] [Indexed: 09/03/2023]
Abstract
This study investigated the accuracy and stability of accommodative and vergence functions in children with and without hyperopia while engaged in two sustained near tasks. The sustained accommodative and vergence characteristics of participants without refractive correction (n = 92, aged 5-10 years) with and without hyperopia (defined as cycloplegic retinoscopy ≥ + 1.00D and less than + 5.00D) were measured using eccentric infrared photorefraction (PowerRef 3; PlusOptix, Germany). Binocular measures of accommodation and eye position were recorded while participants engaged in two tasks at 25 cm for 15 min each: (1) reading small print on an Amazon Kindle and (2) watching an animated movie on a liquid crystal display screen. A comprehensive visual assessment, including measurement of presenting visual acuity, amplitude of accommodation, and stereoacuity, was conducted. The magnitude of accommodative and vergence responses was not related to refractive error (P > 0.05). However, there were inter-task differences in the accuracy and stability of the accommodative responses across refractive groups (P < 0.05). The relationship between accommodation and vergence was not significant in either task (P > 0.05). However, increased accommodative and vergence instabilities were associated with total accommodative response (P < 0.05). Despite having greater accommodative demand, uncorrected hyperopes accommodated comparably to emmetropic controls. However, uncorrected hyperopes showed increased instabilities in their accommodative and vergence responses, which may adversely impact their visual experience.
Affiliation(s)
- Michael Ntodie
- Department of Optometry and Vision Science, School of Allied Health Sciences, College of Health and Allied Sciences, University of Cape Coast, Cape Coast, Ghana.
- Centre for Optometry and Vision Science, Biomedical Sciences Research Institute, Ulster University, Coleraine, UK.
- Kathryn Saunders
- Centre for Optometry and Vision Science, Biomedical Sciences Research Institute, Ulster University, Coleraine, UK
- Julie-Anne Little
- Centre for Optometry and Vision Science, Biomedical Sciences Research Institute, Ulster University, Coleraine, UK
23
Bruckert A, Christie M, Le Meur O. Where to look at the movies: Analyzing visual attention to understand movie editing. Behav Res Methods 2023; 55:2940-2959. [PMID: 36002630] [DOI: 10.3758/s13428-022-01949-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Accepted: 07/29/2022] [Indexed: 11/08/2022]
Abstract
In the process of making a movie, directors constantly care about where the spectator will look on the screen. Shot composition, framing, camera movements, and editing are tools commonly used to direct attention. In order to provide a quantitative analysis of the relationship between those tools and gaze patterns, we propose a new eye-tracking database containing gaze-pattern information on movie sequences, as well as editing annotations, and we show how state-of-the-art computational saliency techniques behave on this dataset. In this work, we expose strong links between movie editing and spectators' gaze distributions, and we open several leads on how knowledge of editing information could improve human visual attention modeling for cinematic content. The dataset generated and analyzed for this study is available at https://github.com/abruckert/eye_tracking_filmmaking.
24
Pedziwiatr MA, Heer S, Coutrot A, Bex P, Mareschal I. Prior knowledge about events depicted in scenes decreases oculomotor exploration. Cognition 2023; 238:105544. [PMID: 37419068] [DOI: 10.1016/j.cognition.2023.105544] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/01/2022] [Revised: 06/27/2023] [Accepted: 06/28/2023] [Indexed: 07/09/2023]
Abstract
The visual input that the eyes receive usually contains temporally continuous information about unfolding events. Therefore, humans can accumulate knowledge about their current environment. Typical studies on scene perception, however, involve presenting multiple unrelated images and thereby render this accumulation unnecessary. Our study, instead, facilitated it and explored its effects. Specifically, we investigated how recently-accumulated prior knowledge affects gaze behavior. Participants viewed sequences of static film frames that contained several 'context frames' followed by a 'critical frame'. The context frames showed either events from which the situation depicted in the critical frame naturally followed, or events unrelated to this situation. Therefore, participants viewed identical critical frames while possessing prior knowledge that was either relevant or irrelevant to the frames' content. In the former case, participants' gaze behavior was slightly more exploratory, as revealed by seven gaze characteristics we analyzed. This result demonstrates that recently-gained prior knowledge reduces exploratory eye movements.
Affiliation(s)
- Marek A Pedziwiatr
- School of Biological and Behavioural Sciences, Queen Mary University of London, Mile End Road, London E1 4NS, United Kingdom.
- Sophie Heer
- School of Biological and Behavioural Sciences, Queen Mary University of London, Mile End Road, London E1 4NS, United Kingdom
- Antoine Coutrot
- Univ Lyon, CNRS, INSA Lyon, UCBL, LIRIS, UMR5205, F-69621 Lyon, France
- Peter Bex
- Department of Psychology, Northeastern University, 107 Forsyth Street, Boston, MA 02115, United States of America
- Isabelle Mareschal
- School of Biological and Behavioural Sciences, Queen Mary University of London, Mile End Road, London E1 4NS, United Kingdom
25
de la Malla C, Goettker A. The effect of impaired velocity signals on goal-directed eye and hand movements. Sci Rep 2023; 13:13646. [PMID: 37607970] [PMCID: PMC10444871] [DOI: 10.1038/s41598-023-40394-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 01/25/2023] [Accepted: 08/09/2023] [Indexed: 08/24/2023]
Abstract
Information about position and velocity is essential to predict where moving targets will be in the future and to accurately move towards them. But how are the two signals combined over time to complete goal-directed movements? We show that when velocity information is impaired by the use of second-order motion stimuli, saccades directed towards moving targets land at positions where the targets were ~ 100 ms before saccade initiation, but hand movements are accurate. Importantly, the longer latencies of hand movements allow additional time to process the available sensory information. When the moving target is visible for longer before the saccade is initiated, saccades become accurate. In line with that, hand movements with short latencies show higher curvature, indicating corrections based on an update of incoming sensory information. These results suggest that movements are controlled by an independent and evolving combination of sensory information about the target's position and velocity.
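The reported ~100 ms lag can be illustrated with a toy extrapolation: for a target moving at constant speed, a saccade programmed from position information that lags by 100 ms lands short by speed × lag (all numbers and the function name are illustrative, not from the paper):

```python
def landing_position(start_deg, speed_deg_s, t_saccade_s, lag_s=0.1):
    """Toy model of a saccade endpoint when velocity signals are
    impaired: the saccade is programmed from where the target was
    lag_s seconds before initiation, so for a constant-speed target
    it undershoots the current position by speed * lag."""
    return start_deg + speed_deg_s * (t_saccade_s - lag_s)
```

For a target starting at 0 deg and moving at 10 deg/s, a saccade initiated at 0.5 s lands near 4 deg while the target is already at 5 deg, an undershoot of speed × lag = 1 deg.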
Affiliation(s)
- Cristina de la Malla
- Vision and Control of Action Group, Department of Cognition, Development, and Psychology of Education, Institute of Neurosciences, Universitat de Barcelona, Barcelona, Catalonia, Spain.
- Alexander Goettker
- Justus Liebig Universität Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior, University of Marburg and Justus Liebig University, Giessen, Germany
|
26
|
Miller SS, Hutson JP, Strain ML, Smith TJ, Palavamäki M, Loschky LC, Saucier DA. The role of individual differences in resistance to persuasion on memory for political advertisements. Front Psychol 2023; 14:1196209. [PMID: 37621945 PMCID: PMC10445487 DOI: 10.3389/fpsyg.2023.1196209]
Abstract
When people see political advertisements on a polarized issue they take a stance on, what factors influence how they respond to and remember the adverts' contents? Across three studies, we tested competing hypotheses about how individual differences in social vigilantism (i.e., attitude superiority) and need for cognition relate to intentions to resist attitude change and memory for political advertisements concerning abortion. In Experiments 1 and 2, we examined participants' intentions to use resistance strategies to preserve their pre-existing attitudes about abortion, by either engaging against opposing opinions or disengaging from them. In Experiment 3, we examined participants' memory for information about both sides of the controversy presented in political advertisements. Our results suggest higher levels of social vigilantism are related to greater intentions to counterargue and better memory for attitude-incongruent information. These findings extend our understanding of individual differences in how people process and respond to controversial social and political discourse.
Affiliation(s)
- Stuart S Miller
- Department of Psychological Sciences, Kansas State University, Manhattan, KS, United States
- John P Hutson
- Department of Psychological Sciences, Kansas State University, Manhattan, KS, United States
- Megan L Strain
- Psychology, University of Nebraska at Kearney, Kearney, NE, United States
- Tim J Smith
- Department of Psychological Sciences, Birkbeck, University of London, London, United Kingdom
- Lester C Loschky
- Department of Psychological Sciences, Kansas State University, Manhattan, KS, United States
- Donald A Saucier
- Department of Psychological Sciences, Kansas State University, Manhattan, KS, United States
|
27
|
Recker L, Poth CH. Test-retest reliability of eye tracking measures in a computerized Trail Making Test. J Vis 2023; 23:15. [PMID: 37594452 PMCID: PMC10445213 DOI: 10.1167/jov.23.8.15]
Abstract
The Trail Making Test (TMT) is a frequently applied neuropsychological test that evaluates participants' executive functions based on their time to connect a sequence of numbers (TMT-A) or alternating numbers and letters (TMT-B). Test performance is associated with various cognitive functions ranging from visuomotor speed to working memory capabilities. However, although the test can screen for impaired executive functioning in a variety of neuropsychiatric disorders, it provides only little information about which specific cognitive impairments underlie performance detriments. To resolve this lack of specificity, recent cognitive research combined the TMT with eye tracking so that eye movements could help uncover reasons for performance impairments. However, using eye-tracking-based test scores to examine differences between persons, and ultimately apply the scores for diagnostics, presupposes that the reliability of the scores is established. Therefore, we investigated the test-retest reliabilities of scores in an eye-tracking version of the TMT recently introduced by Recker et al. (2022). We examined two healthy samples performing an initial test and then a retest 3 days (n = 31) or 10 to 30 days (n = 34) later. Results reveal that, although reliabilities of classic completion times were overall good, comparable with earlier versions, reliabilities of eye-tracking-based scores ranged from excellent (e.g., durations of fixations) to poor (e.g., number of fixations guiding manual responses). These findings indicate that some eye-tracking measures offer a strong basis for assessing interindividual differences beyond classic behavioral measures when examining processes related to information accumulation processes but are less suitable to diagnose differences in eye-hand coordination.
Affiliation(s)
- Lukas Recker
- Neuro-Cognitive Psychology and Center for Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany
- Christian H Poth
- Neuro-Cognitive Psychology and Center for Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany
|
28
|
Körner HM, Faul F, Nuthmann A. Revisiting the role of attention in the "weapon focus effect": Do weapons draw gaze away from the perpetrator under naturalistic viewing conditions? Atten Percept Psychophys 2023; 85:1868-1887. [PMID: 36725782 PMCID: PMC10545598 DOI: 10.3758/s13414-022-02643-8]
Abstract
The presence of a weapon in a scene has been found to attract observers' attention and to impair their memory of the person holding the weapon. Here, we examined the role of attention in this weapon focus effect (WFE) under different viewing conditions. German participants viewed stimuli in which a man committed a robbery while holding a gun or a cell phone. The stimuli were based on material used in a recent U.S. study reporting large memory effects. Recording eye movements allowed us to test whether observers' attention in the gun condition shifted away from the perpetrator towards the gun, compared with the phone condition. When using videos (Experiment 1), weapon presence did not appear to modulate the viewing time for the perpetrator, whereas the evidence concerning the critical object remained inconclusive. When using slide shows (Experiment 2), the gun attracted more gaze than the phone, replicating previous research. However, the attentional shift towards the weapon did not come at a cost of viewing time on the perpetrator. In both experiments, observers focused their attention predominantly on the depicted people and much less on the gun or phone. The presence of a weapon did not cause participants to recall fewer details about the perpetrator's appearance in either experiment. This null effect was replicated in an online study using the original videos and testing more participants. The results seem at odds with the attention-shift explanation of the WFE. Moreover, the results indicate that the WFE is not a universal phenomenon.
Affiliation(s)
- Hannes M Körner
- Institute of Psychology, Kiel University, Olshausenstr. 62, 24118, Kiel, Germany.
- Franz Faul
- Institute of Psychology, Kiel University, Olshausenstr. 62, 24118, Kiel, Germany
- Antje Nuthmann
- Institute of Psychology, Kiel University, Olshausenstr. 62, 24118, Kiel, Germany
|
29
|
Nentwich M, Leszczynski M, Russ BE, Hirsch L, Markowitz N, Sapru K, Schroeder CE, Mehta AD, Bickel S, Parra LC. Semantic novelty modulates neural responses to visual change across the human brain. Nat Commun 2023; 14:2910. [PMID: 37217478 PMCID: PMC10203305 DOI: 10.1038/s41467-023-38576-5]
Abstract
Our continuous visual experience in daily life is dominated by change. Previous research has focused on visual change due to stimulus motion, eye movements or unfolding events, but not their combined impact across the brain, or their interactions with semantic novelty. We investigate the neural responses to these sources of novelty during film viewing. We analyzed intracranial recordings in humans across 6328 electrodes from 23 individuals. Responses associated with saccades and film cuts were dominant across the entire brain. Film cuts at semantic event boundaries were particularly effective in the temporal and medial temporal lobe. Saccades to visual targets with high visual novelty were also associated with strong neural responses. Specific locations in higher-order association areas showed selectivity to either high or low-novelty saccades. We conclude that neural activity associated with film cuts and eye movements is widespread across the brain and is modulated by semantic novelty.
Affiliation(s)
- Maximilian Nentwich
- Department of Biomedical Engineering, The City College of New York, New York, NY, USA.
- Marcin Leszczynski
- Departments of Psychiatry and Neurology, Columbia University College of Physicians and Surgeons, New York, NY, USA
- Translational Neuroscience Lab Division, Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute, Orangeburg, NY, USA
- Cognitive Science Department, Institute of Philosophy, Jagiellonian University, Kraków, Poland
- Brian E Russ
- Translational Neuroscience Lab Division, Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute, Orangeburg, NY, USA
- Nash Family Department of Neuroscience and Friedman Brain Institute, Icahn School of Medicine, New York, NY, USA
- Department of Psychiatry, New York University at Langone, New York, NY, USA
- Lukas Hirsch
- Department of Biomedical Engineering, The City College of New York, New York, NY, USA
- Noah Markowitz
- The Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY, USA
- Kaustubh Sapru
- Department of Biomedical Engineering, The City College of New York, New York, NY, USA
- Charles E Schroeder
- Departments of Psychiatry and Neurology, Columbia University College of Physicians and Surgeons, New York, NY, USA
- Translational Neuroscience Lab Division, Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute, Orangeburg, NY, USA
- Ashesh D Mehta
- The Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY, USA
- Departments of Neurosurgery and Neurology, Zucker School of Medicine at Hofstra/Northwell, Manhasset, NY, USA
- Stephan Bickel
- Translational Neuroscience Lab Division, Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute, Orangeburg, NY, USA
- The Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY, USA
- Departments of Neurosurgery and Neurology, Zucker School of Medicine at Hofstra/Northwell, Manhasset, NY, USA
- Lucas C Parra
- Department of Biomedical Engineering, The City College of New York, New York, NY, USA
|
30
|
Li C, Du C, Ge S, Tong T. An eye-tracking study on visual perception of vegetation permeability in virtual reality forest exposure. Front Public Health 2023; 11:1089423. [PMID: 36761146 PMCID: PMC9902884 DOI: 10.3389/fpubh.2023.1089423]
Abstract
Previous studies have confirmed the significant effects of single forest stand attributes, such as forest type (FT), understory vegetation cover (UVC), and understory vegetation height (UVH), on visitors' visual perception. However, few studies have clearly determined the relationship between vegetation permeability and visual perception, although the former is formed by the interaction of multiple forest stand attributes (i.e., FT, UVC, UVH). Based on a mixed factor matrix of FT (i.e., coniferous and broadleaf forests), UVC level (i.e., 10, 60, and 100%), and UVH level (0.1, 1, and 3 m), the study creates 18 immersive virtual forest videos with different stand attributes. Virtual reality eye-tracking technology and questionnaires are used to collect visual perception data while participants view the virtual forest videos. The study finds that vegetation permeability, which is formed by the interaction of canopy density (i.e., FT) and understory density (i.e., UVC, UVH), significantly affects participants' visual perception. In terms of visual physiological characteristics, pupil size is significantly negatively correlated with vegetation permeability when participants view the virtual reality forest. In terms of visual psychological characteristics, the understory density formed by the interaction of UVC and UVH has a significant impact on visual attractiveness and perceived safety, with understory density significantly negatively correlated with perceived safety. In addition, the study finds a significant negative correlation between average pupil diameter and perceived safety when participants view virtual reality forests. These findings may benefit the maintenance and management of forest parks and provide insights for similar studies exploring urban public green spaces.
Affiliation(s)
- Chang Li
- School of Architecture and Urban Planning, Suzhou University of Science and Technology, Suzhou, China
- Chunlan Du
- Key Laboratory of New Technology for Construction of Cities in Mountain Area, Ministry of Education, Chongqing University, Chongqing, China
- School of Architecture and Urban Planning, Chongqing University, Chongqing, China
- Shutong Ge
- School of Architecture and Urban Planning, Suzhou University of Science and Technology, Suzhou, China
- Tong Tong
- School of Architecture and Urban Planning, Suzhou University of Science and Technology, Suzhou, China
|
31
|
D’Amelio A, Patania S, Bursic S, Cuculo V, Boccignone G. Using Gaze for Behavioural Biometrics. Sensors (Basel) 2023; 23:1262. [PMID: 36772302 PMCID: PMC9920149 DOI: 10.3390/s23031262]
Abstract
A principled approach to the analysis of eye movements for behavioural biometrics is laid down. The approach is grounded in foraging theory, which provides a sound basis for capturing the uniqueness of individual eye movement behaviour. We propose a composite Ornstein-Uhlenbeck process for quantifying the exploration/exploitation signature characterising foraging eye behaviour. The relevant parameters of the composite model, inferred from eye-tracking data via Bayesian analysis, are shown to yield a suitable feature set for biometric identification; the latter is eventually accomplished via a classical classification technique. A proof of concept of the method is provided by measuring its identification performance on a publicly available dataset. Data and code for reproducing the analyses are made available. Overall, we argue that the approach offers a fresh view on both the analysis of eye-tracking data and prospective applications in this field.
Affiliation(s)
- Alessandro D’Amelio
- PHuSe Lab, Department of Computer Science, University of Milano Statale, Via Celoria 18, 20133 Milan, Italy
- Sabrina Patania
- PHuSe Lab, Department of Computer Science, University of Milano Statale, Via Celoria 18, 20133 Milan, Italy
- Sathya Bursic
- PHuSe Lab, Department of Computer Science, University of Milano Statale, Via Celoria 18, 20133 Milan, Italy
- Department of Psychology, University of Milano-Bicocca, Piazza dell’Ateneo Nuovo 1, 20126 Milan, Italy
- Vittorio Cuculo
- PHuSe Lab, Department of Computer Science, University of Milano Statale, Via Celoria 18, 20133 Milan, Italy
- Giuseppe Boccignone
- PHuSe Lab, Department of Computer Science, University of Milano Statale, Via Celoria 18, 20133 Milan, Italy
|
32
|
Elkin-Frankston S, Horner C, Alzahabi R, Cain MS. Characterizing motion prediction in small autonomous swarms. Appl Ergon 2023; 106:103909. [PMID: 36242872 DOI: 10.1016/j.apergo.2022.103909]
Abstract
The use of robotic swarms has become increasingly common in research, industrial, and military domains for tasks such as collective exploration, coordinated movement, and collective localization. Despite the expanded use of robotic swarms, little is known about how swarms are perceived by human operators. To characterize human-swarm interactions, we evaluate how operators perceive swarm characteristics, including movement patterns, control schemes, and occlusion. In a series of experiments manipulating movement patterns and control schemes, participants tracked swarms on a computer screen until they were occluded from view, at which point participants were instructed to estimate the spatiotemporal dynamics of the occluded swarm by mouse click. In addition to capturing mouse click responses, eye tracking was used to capture participants' eye movements while visually tracking swarms. We observed that manipulating control schemes had minimal impact on the perception of swarms, and that swarms are easier to track when they are visible compared to when they are occluded. Regarding swarm movements, a complex pattern of data emerged. For example, eye tracking indicates that participants more closely track a swarm in an arc pattern compared to sinusoid and linear movement patterns. When evaluating behavioral click-responses, data show that time is underestimated, and that spatial accuracy is reduced in complex patterns. Results suggest that measures of performance may capture different patterns of behavior, underscoring the need for multiple measures to accurately characterize performance. In addition, the lack of generalizable data across different movement patterns highlights the complexity involved in the perception of swarms of objects.
Affiliation(s)
- Seth Elkin-Frankston
- Center for Applied Brain and Cognitive Sciences, Medford, MA, USA; U.S. Army Combat Capabilities Development Command Soldier Center, Natick, MA, USA.
- Carlene Horner
- Department of Psychological & Brain Sciences, University of California Santa Barbara, Santa Barbara, CA, USA
- Reem Alzahabi
- Center for Applied Brain and Cognitive Sciences, Medford, MA, USA
|
33
|
Sasmita K, Swallow KM. Measuring event segmentation: An investigation into the stability of event boundary agreement across groups. Behav Res Methods 2023; 55:428-447. [PMID: 35441362 PMCID: PMC9017965 DOI: 10.3758/s13428-022-01832-5]
Abstract
People spontaneously divide everyday experience into smaller units (event segmentation). To measure event segmentation, studies typically ask participants to explicitly mark the boundaries between events as they watch a movie (segmentation task). Their data may then be used to infer how others are likely to segment the same movie. However, significant variability in performance across individuals could undermine the ability to generalize across groups, especially as more research moves online. To address this concern, we used several widely employed and novel measures to quantify segmentation agreement across different sized groups (n = 2-32) using data collected on different platforms and movie types (in-lab & commercial film vs. online & everyday activities). All measures captured nonrandom and video-specific boundaries, but with notable between-sample variability. Samples of 6-18 participants were required to reliably detect video-driven segmentation behavior within a single sample. As sample size increased, agreement values improved and eventually stabilized at comparable sample sizes for in-lab & commercial film data and online & everyday activities data. Stabilization occurred at smaller sample sizes when measures reflected (1) agreement between two groups versus agreement between an individual and group, and (2) boundary identification between small (fine-grained) rather than large (coarse-grained) events. These analyses inform the tailoring of sample sizes based on the comparison of interest, materials, and data collection platform. In addition to demonstrating the reliability of online and in-lab segmentation performance at moderate sample sizes, this study supports the use of segmentation data to infer when events are likely to be segmented.
Affiliation(s)
- Karen Sasmita
- Department of Psychology, Cornell University, 211 Uris Hall, Ithaca, NY, 14850, USA
- Khena M Swallow
- Department of Psychology, Cornell University, 211 Uris Hall, Ithaca, NY, 14850, USA
|
34
|
Broda MD, de Haas B. Individual fixation tendencies in person viewing generalize from images to videos. Iperception 2022; 13:20416695221128844. [PMID: 36353505 PMCID: PMC9638695 DOI: 10.1177/20416695221128844]
Abstract
Fixation behavior toward persons in static scenes varies considerably between individuals. However, it is unclear whether these differences generalize to dynamic stimuli. Here, we examined individual differences in the distribution of gaze across seven person features (i.e. body and face parts) in static and dynamic scenes. Forty-four participants freely viewed 700 complex static scenes followed by eight director-cut videos (28,925 frames). We determined the presence of person features using hand-delineated pixel masks (images) and Deep Neural Networks (videos). Results replicated highly consistent individual differences in fixation tendencies for all person features in static scenes and revealed that these tendencies generalize to videos. Individual fixation behavior for both images and videos fell into two anticorrelated clusters representing the tendency to fixate faces versus bodies. These results corroborate a low-dimensional space for individual gaze biases toward persons and show they generalize from images to videos.
Affiliation(s)
- Maximilian D. Broda
- Department of Experimental Psychology, Justus Liebig University Giessen, Germany; Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany
- Benjamin de Haas
- Department of Experimental Psychology, Justus Liebig University Giessen, Germany; Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany
|
35
|
Schmidt F, Schürmann L, Haberkamp A. Animal eMotion, or the emotional evaluation of moving animals. Cogn Emot 2022; 36:1132-1148. [PMID: 35749075 DOI: 10.1080/02699931.2022.2087600]
Abstract
Responding adequately to the behaviour of human and non-human animals in our environment has been crucial for our survival. This is also reflected in our exceptional capacity to detect and interpret biological motion signals. However, even though our emotions have specifically emerged as automatic adaptive responses to such vital stimuli, few studies investigated the influence of biological motion on emotional evaluations. Here, we test how the motion of animals affects emotional judgements by contrasting static animal images and videos. We investigated this question (1) in non-fearful observers across many different animals, and (2) in observers afraid of particular animals across four types of animals, including the feared ones. In line with previous studies, we find an idiosyncratic pattern of evoked emotions across different types of animals. These emotions can be explained to different extents by regression models based on relevant predictor variables (e.g. familiarity, dangerousness). Additionally, our findings show a boosting effect of motion on emotional evaluations across all animals, with an additional increase in (negative) emotions for moving feared animals (except snakes). We discuss implications of our results for experimental and clinical research and applications, highlighting the importance of experiments with dynamic and ecologically valid stimuli.
Affiliation(s)
- Filipp Schmidt
- Experimental Psychology, Justus-Liebig-University Giessen, Giessen, Germany
- Lisa Schürmann
- Clinical Psychology and Psychotherapy, Philipps-University Marburg, Marburg, Germany
- Anke Haberkamp
- Clinical Psychology and Psychotherapy, Philipps-University Marburg, Marburg, Germany
|
36
|
Evaluating Eye Movement Event Detection: A Review of the State of the Art. Behav Res Methods 2022:10.3758/s13428-021-01763-7. [PMID: 35715615 DOI: 10.3758/s13428-021-01763-7]
Abstract
Detecting eye movements in raw eye tracking data is a well-established research area by itself, as well as a common pre-processing step before any subsequent analysis. As in any field, however, progress and successful collaboration can only be achieved provided a shared understanding of the pursued goal. This is often formalised via defining metrics that express the quality of an approach to solving the posed problem. Both the big-picture intuition behind the evaluation strategies and seemingly small implementation details influence the resulting measures, making even studies with outwardly similar procedures essentially incomparable, impeding a common understanding. In this review, we systematically describe and analyse evaluation methods and measures employed in the eye movement event detection field to date. While recently developed evaluation strategies tend to quantify the detector's mistakes at the level of whole eye movement events rather than individual gaze samples, they typically do not separate establishing correspondences between true and predicted events from the quantification of the discovered errors. In our analysis we separate these two steps where possible, enabling their almost arbitrary combinations in an evaluation pipeline. We also present the first large-scale empirical analysis of event matching strategies in the literature, examining these various combinations both in practice and theoretically. We examine the particular benefits and downsides of the evaluation methods, providing recommendations towards more intuitive and informative assessment. We implemented the evaluation strategies on which this work focuses in a single publicly available library: https://github.com/r-zemblys/EM-event-detection-evaluation .
|
37
|
Hutson JP, Chandran P, Magliano JP, Smith TJ, Loschky LC. Narrative Comprehension Guides Eye Movements in the Absence of Motion. Cogn Sci 2022; 46:e13131. [PMID: 35579883 DOI: 10.1111/cogs.13131]
Abstract
Viewers' attentional selection while looking at scenes is affected by both top-down and bottom-up factors. However, when watching film, viewers typically attend to the movie similarly irrespective of top-down factors-a phenomenon we call the tyranny of film. A key difference between still pictures and film is that film contains motion, which is a strong attractor of attention and highly predictive of gaze during film viewing. The goal of the present study was to test if the tyranny of film is driven by motion. To do this, we created a slideshow presentation of the opening scene of Touch of Evil. Context condition participants watched the full slideshow. No-context condition participants did not see the opening portion of the scene, which showed someone placing a time bomb into the trunk of a car. In prior research, we showed that despite producing very different understandings of the clip, this manipulation did not affect viewers' attention (i.e., the tyranny of film), as both context and no-context participants were equally likely to fixate on the car with the bomb when the scene was presented as a film. The current study found that when the scene was shown as a slideshow, the context manipulation produced differences in attentional selection (i.e., it attenuated attentional synchrony). We discuss these results in the context of the Scene Perception and Event Comprehension Theory, which specifies the relationship between event comprehension and attentional selection in the context of visual narratives.
Affiliation(s)
- John P Hutson
- Department of Learning Sciences, Georgia State University
- Tim J Smith
- Department of Psychological Sciences, Birkbeck, University of London
|
38
|
BTN: Neuroanatomical aligning between visual object tracking in deep neural network and smooth pursuit in brain. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.02.031]
|
39
|
Hlinka J, Děchtěrenko F, Rydlo J, Androvičová R, Vejmelka M, Jajcay L, Tintěra J, Lukavský J, Horáček J. The intra-session reliability of functional connectivity during naturalistic viewing conditions. Psychophysiology 2022; 59:e14075. [PMID: 35460523 DOI: 10.1111/psyp.14075]
Abstract
Functional connectivity analysis is a common approach to the characterization of brain function. While studies of functional connectivity have predominantly focused on resting-state fMRI, naturalistic paradigms, such as movie watching, are increasingly being used. This ecologically valid, yet relatively unconstrained acquisition state has been shown to improve subject compliance and, potentially, enhance individual differences. However, unlike the reliability of resting-state functional connectivity, the reliability of functional connectivity during naturalistic viewing has not yet been fully established. The current study investigates the intra-session reliability of functional connectivity during naturalistic viewing sessions to extend its understanding. Using fMRI data of 24 subjects measured at rest as well as during six naturalistic viewing conditions, we quantified the split-half reliability of each condition, as well as cross-condition reliabilities. We find that intra-session reliability is relatively high for all conditions. While cross-condition reliabilities are higher for pairings of two naturalistic viewing conditions, split-half reliability is highest for the resting state. Potential sources of variability across the conditions, as well as the strengths and limitations of using intra-session reliability as a measure in naturalistic viewing, are discussed.
Affiliation(s)
- Jaroslav Hlinka
- Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic; National Institute of Mental Health, Klecany, Czech Republic
- Filip Děchtěrenko
- Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic; Institute of Psychology, Czech Academy of Sciences, Prague, Czech Republic
- Jan Rydlo
- National Institute of Mental Health, Klecany, Czech Republic; Institute for Clinical and Experimental Medicine, Prague, Czech Republic
- Martin Vejmelka
- Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic
- Lucia Jajcay
- Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic; National Institute of Mental Health, Klecany, Czech Republic; Faculty of Electrical Engineering, Czech Technical University, Prague, Czech Republic
- Jaroslav Tintěra
- National Institute of Mental Health, Klecany, Czech Republic; Institute for Clinical and Experimental Medicine, Prague, Czech Republic
- Jiří Lukavský
- Institute of Psychology, Czech Academy of Sciences, Prague, Czech Republic
- Jiří Horáček
- National Institute of Mental Health, Klecany, Czech Republic; Faculty of Medicine, Charles University, Prague, Czech Republic
40
Madsen J, Parra LC. Cognitive processing of a common stimulus synchronizes brains, hearts, and eyes. PNAS Nexus 2022; 1:pgac020. [PMID: 36712806 PMCID: PMC9802497 DOI: 10.1093/pnasnexus/pgac020] [Citation(s) in RCA: 29] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/01/2021] [Revised: 01/14/2022] [Accepted: 03/01/2022] [Indexed: 04/21/2023]
Abstract
Neural, physiological, and behavioral signals synchronize between human subjects in a variety of settings. Multiple hypotheses have been proposed to explain this interpersonal synchrony, but there is no clarity under which conditions it arises, for which signals, or whether there is a common underlying mechanism. We hypothesized that cognitive processing of a shared stimulus is the source of synchrony between subjects, measured here as intersubject correlation (ISC). To test this, we presented informative videos to participants in an attentive and distracted condition and subsequently measured information recall. ISC was observed for electro-encephalography, gaze position, pupil size, and heart rate, but not respiration and head movements. The strength of correlation was co-modulated in the different signals, changed with attentional state, and predicted subsequent recall of information presented in the videos. There was robust within-subject coupling between brain, heart, and eyes, but not respiration or head movements. The results suggest that ISC is the result of effective cognitive processing, and thus emerges only for those signals that exhibit a robust brain-body connection. While physiological and behavioral fluctuations may be driven by multiple features of the stimulus, correlation with other individuals is co-modulated by the level of attentional engagement with the stimulus.
Affiliation(s)
- Lucas C Parra
- Department of Biomedical Engineering, City College of New York, 160 Convent Avenue, New York, NY 10031, USA
41
Callahan-Flintoft C, Barentine C, Touryan J, Ries AJ. A Case for Studying Naturalistic Eye and Head Movements in Virtual Environments. Front Psychol 2022; 12:650693. [PMID: 35035362 PMCID: PMC8759101 DOI: 10.3389/fpsyg.2021.650693] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2021] [Accepted: 11/10/2021] [Indexed: 12/03/2022] Open
Abstract
Using head-mounted displays (HMDs) in conjunction with virtual reality (VR), vision researchers are able to capture more naturalistic vision in an experimentally controlled setting. Namely, eye movements can be accurately tracked as they occur in concert with head movements as subjects navigate virtual environments. A benefit of this approach is that, unlike other mobile eye tracking (ET) set-ups in unconstrained settings, the experimenter has precise control over the location and timing of stimulus presentation, making it easier to compare findings between HMD studies and those that use monitor displays, which account for the bulk of previous work in eye movement research and the vision sciences more generally. Here, a visual discrimination paradigm is presented as a proof of concept to demonstrate the applicability of collecting eye and head tracking data from an HMD in VR for vision research. The current work's contribution is threefold: first, results demonstrating both the strengths and the weaknesses of recording and classifying eye and head tracking data in VR; second, a highly flexible graphical user interface (GUI), used to generate the current experiment, offered to lower the software development start-up cost for future researchers transitioning to a VR space; and finally, a dataset of behavioral, eye, and head tracking data, synchronized with environmental variables from a task specifically designed to elicit a variety of eye and head movements, which could be an asset in testing future eye movement classification algorithms.
Affiliation(s)
- Chloe Callahan-Flintoft
- Humans in Complex System Directorate, United States Army Research Laboratory, Adelphi, MD, United States
- Christian Barentine
- Warfighter Effectiveness Research Center, United States Air Force Academy, Colorado Springs, CO, United States
- Jonathan Touryan
- Humans in Complex System Directorate, United States Army Research Laboratory, Adelphi, MD, United States
- Anthony J Ries
- Humans in Complex System Directorate, United States Army Research Laboratory, Adelphi, MD, United States; Warfighter Effectiveness Research Center, United States Air Force Academy, Colorado Springs, CO, United States
42
Nuthmann A, Canas-Bajo T. Visual search in naturalistic scenes from foveal to peripheral vision: A comparison between dynamic and static displays. J Vis 2022; 22:10. [PMID: 35044436 PMCID: PMC8802022 DOI: 10.1167/jov.22.1.10] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2021] [Accepted: 12/03/2021] [Indexed: 11/24/2022] Open
Abstract
How important foveal, parafoveal, and peripheral vision are depends on the task. For object search and letter search in static images of real-world scenes, peripheral vision is crucial for efficient search guidance, whereas foveal vision is relatively unimportant. Extending this research, we used gaze-contingent Blindspots and Spotlights to investigate visual search in complex dynamic and static naturalistic scenes. In Experiment 1, we used dynamic scenes only, whereas in Experiments 2 and 3, we directly compared dynamic and static scenes. Each scene contained a static, contextually irrelevant target (i.e., a gray annulus). Scene motion was not predictive of target location. For dynamic scenes, the search-time results from all three experiments converge on the novel finding that neither foveal nor central vision was necessary to attain normal search proficiency. Since motion is known to attract attention and gaze, we explored whether guidance to the target was equally efficient in dynamic as compared to static scenes. We found that the very first saccade was guided by motion in the scene. This was not the case for subsequent saccades made during the scanning epoch, representing the actual search process. Thus, effects of task-irrelevant motion were fast-acting and short-lived. Furthermore, when motion was potentially present (Spotlights) or absent (Blindspots) in foveal or central vision only, we observed differences in verification times for dynamic and static scenes (Experiment 2). When using scenes with greater visual complexity and more motion (Experiment 3), however, the differences between dynamic and static scenes were much reduced.
Affiliation(s)
- Antje Nuthmann
- Institute of Psychology, Kiel University, Kiel, Germany
- Psychology Department, School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh, UK
- http://orcid.org/0000-0003-3338-3434
- Teresa Canas-Bajo
- Vision Science Graduate Group, University of California, Berkeley, Berkeley, CA, USA
- Psychology Department, School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh, UK
43
One object, two networks? Assessing the relationship between the face and body-selective regions in the primate visual system. Brain Struct Funct 2021; 227:1423-1438. [PMID: 34792643 DOI: 10.1007/s00429-021-02420-7] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2021] [Accepted: 10/22/2021] [Indexed: 10/19/2022]
Abstract
Faces and bodies are often treated as distinct categories that are processed separately by face- and body-selective brain regions in the primate visual system. These regions occupy distinct regions of visual cortex and are often thought to constitute independent functional networks. Yet faces and bodies are part of the same object and their presence inevitably covary in naturalistic settings. Here, we re-evaluate both the evidence supporting the independent processing of faces and bodies and the organizational principles that have been invoked to explain this distinction. We outline four hypotheses ranging from completely separate networks to a single network supporting the perception of whole people or animals. The current evidence, especially in humans, is compatible with all of these hypotheses, making it presently unclear how the representation of faces and bodies is organized in the cortex.
44
Holm SK, Häikiö T, Olli K, Kaakinen JK. Eye Movements during Dynamic Scene Viewing are Affected by Visual Attention Skills and Events of the Scene: Evidence from First-Person Shooter Gameplay Videos. J Eye Mov Res 2021; 14. [PMID: 34745442 PMCID: PMC8566014 DOI: 10.16910/jemr.14.2.3] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022] Open
Abstract
The role of individual differences during dynamic scene viewing was explored. Participants (N=38) watched a gameplay video of a first-person shooter (FPS) videogame while their eye movements were recorded. In addition, the participants' skills in three visual attention tasks (attentional blink, visual search, and multiple object tracking) were assessed. The results showed that individual differences in visual attention tasks were associated with eye movement patterns observed during viewing of the gameplay video. The differences were noted in four eye movement measures: number of fixations, fixation durations, saccade amplitudes, and fixation distances from the center of the screen. The individual differences showed both during specific events of the video and during the video as a whole. The results highlight that an unedited, fast-paced, and cluttered dynamic scene can bring about individual differences in dynamic scene viewing.
45
Ki JJ, Dmochowski JP, Touryan J, Parra LC. Neural responses to natural visual motion are spatially selective across the visual field, with selectivity differing across brain areas and task. Eur J Neurosci 2021; 54:7609-7625. [PMID: 34679237 PMCID: PMC9298375 DOI: 10.1111/ejn.15503] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2021] [Revised: 09/16/2021] [Accepted: 10/07/2021] [Indexed: 11/28/2022]
Abstract
It is well established that neural responses to visual stimuli are enhanced at select locations in the visual field. Although spatial selectivity and the effects of spatial attention are well understood for discrete tasks (e.g. visual cueing), little is known for naturalistic experience that involves continuous dynamic visual stimuli (e.g. driving). Here, we assess the strength of neural responses across the visual space during a kart‐race game. Given the varying relevance of visual location in this task, we hypothesized that the strength of neural responses to movement will vary across the visual field, and it would differ between active play and passive viewing. To test this, we measure the correlation strength of scalp‐evoked potentials with optical flow magnitude at individual locations on the screen. We find that neural responses are strongly correlated at task‐relevant locations in visual space, extending beyond the focus of overt attention. Although the driver's gaze is directed upon the heading direction at the centre of the screen, neural responses were robust at the peripheral areas (e.g. roads and surrounding buildings). Importantly, neural responses to visual movement are broadly distributed across the scalp, with visual spatial selectivity differing across electrode locations. Moreover, during active gameplay, neural responses are enhanced at select locations in the visual space. Conventionally, spatial selectivity of neural response has been interpreted as an attentional gain mechanism. In the present study, the data suggest that different brain areas focus attention on different portions of the visual field that are task‐relevant, beyond the focus of overt attention.
Affiliation(s)
- Jason J Ki
- Department of Biomedical Engineering, City College of the City University of New York, New York, New York, USA
- Jacek P Dmochowski
- Department of Biomedical Engineering, City College of the City University of New York, New York, New York, USA
- Lucas C Parra
- Department of Biomedical Engineering, City College of the City University of New York, New York, New York, USA
46
Onwuegbusi T, Hermens F, Hogue T. Data-driven group comparisons of eye fixations to dynamic stimuli. Q J Exp Psychol (Hove) 2021; 75:989-1003. [PMID: 34507503 PMCID: PMC9016662 DOI: 10.1177/17470218211048060] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Recent advances in software and hardware have allowed eye tracking to move away from static images to more ecologically relevant video streams. The analysis of eye tracking data for such dynamic stimuli, however, is not without challenges. The frame-by-frame coding of regions of interest (ROIs) is labour-intensive and computer vision techniques to automatically code such ROIs are not yet mainstream, restricting the use of such stimuli. Combined with the more general problem of defining relevant ROIs for video frames, methods are needed that facilitate data analysis. Here, we present a first evaluation of an easy-to-implement data-driven method with the potential to address these issues. To test the new method, we examined the differences in eye movements of self-reported politically left- or right-wing leaning participants to video clips of left- and right-wing politicians. The results show that our method can accurately predict group membership on the basis of eye movement patterns, isolate video clips that best distinguish people on the political left-right spectrum, and reveal the section of each video clip with the largest group differences. Our methodology thereby aids the understanding of group differences in gaze behaviour, and the identification of critical stimuli for follow-up studies or for use in saccade diagnosis.
Affiliation(s)
- Frouke Hermens
- School of Psychology, University of Lincoln, Lincoln, UK
- Todd Hogue
- School of Psychology, University of Lincoln, Lincoln, UK
47
Stankov AD, Touryan J, Gordon S, Ries AJ, Ki J, Parra LC. During natural viewing, neural processing of visual targets continues throughout saccades. J Vis 2021; 21:7. [PMID: 34491271 PMCID: PMC8431980 DOI: 10.1167/jov.21.10.7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Relatively little is known about visual processing during free-viewing visual search in realistic dynamic environments. Free-viewing is characterized by frequent saccades. During saccades, visual processing is thought to be suppressed, yet we know that the presaccadic visual content can modulate postsaccadic processing. To better understand these processes in a realistic setting, we study here saccades and neural responses elicited by the appearance of visual targets in a realistic virtual environment. While subjects were being driven through a 3D virtual town, they were asked to discriminate between targets that appear on the road. Using a system identification approach, we separated overlapping and correlated activity evoked by visual targets, saccades, and button presses. We found that the presence of a target enhances early occipital as well as late frontocentral saccade-related responses. The earlier potential, shortly after 125 ms post-saccade onset, was enhanced for targets that appeared in the peripheral vision as compared to the central vision, suggesting that fast peripheral processing initiated before saccade onset. The later potential, at 195 ms post-saccade onset, was strongly modulated by the visibility of the target. Together these results suggest that, during natural viewing, neural processing of the presaccadic visual stimulus continues throughout the saccade, apparently unencumbered by saccadic suppression.
Affiliation(s)
- Atanas D Stankov
- Department of Biomedical Engineering, City College of New York, New York, NY, USA
- Jonathan Touryan
- U.S. Army Research Laboratory, Aberdeen Proving Ground, MD, USA
- Anthony J Ries
- U.S. Army Research Laboratory, Aberdeen Proving Ground, MD, USA
- Jason Ki
- Department of Biomedical Engineering, City College of New York, New York, NY, USA
- Lucas C Parra
- Department of Biomedical Engineering, City College of New York, New York, NY, USA
48
Smith ME, Loschky LC, Bailey HR. Knowledge guides attention to goal-relevant information in older adults. Cogn Res Princ Implic 2021; 6:56. [PMID: 34406505 PMCID: PMC8374018 DOI: 10.1186/s41235-021-00321-1] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/27/2020] [Accepted: 07/31/2021] [Indexed: 11/18/2022]
Abstract
How does viewers’ knowledge guide their attention while they watch everyday events, how does it affect their memory, and does it change with age? Older adults have diminished episodic memory for everyday events, but intact semantic knowledge. Indeed, research suggests that older adults may rely on their semantic memory to offset impairments in episodic memory, and when relevant knowledge is lacking, older adults’ memory can suffer. Yet, the mechanism by which prior knowledge guides attentional selection when watching dynamic activity is unclear. To address this, we studied the influence of knowledge on attention and memory for everyday events in young and older adults by tracking their eyes while they watched videos. The videos depicted activities that older adults perform more frequently than young adults (balancing a checkbook, planting flowers) or activities that young adults perform more frequently than older adults (installing a printer, setting up a video game). Participants completed free recall, recognition, and order memory tests after each video. We found age-related memory deficits when older adults had little knowledge of the activities, but memory did not differ between age groups when older adults had relevant knowledge and experience with the activities. Critically, results showed that knowledge influenced where viewers fixated when watching the videos. Older adults fixated less goal-relevant information compared to young adults when watching young adult activities, but they fixated goal-relevant information similarly to young adults, when watching more older adult activities. Finally, results showed that fixating goal-relevant information predicted free recall of the everyday activities for both age groups. Thus, older adults may use relevant knowledge to more effectively infer the goals of actors, which guides their attention to goal-relevant actions, thus improving their episodic memory for everyday activities.
Affiliation(s)
- Maverick E Smith
- Department of Psychological Sciences, Kansas State University, 471 Bluemont Hall, 1100 Mid-campus Dr., Manhattan, KS, 66506, USA.
- Lester C Loschky
- Department of Psychological Sciences, Kansas State University, 471 Bluemont Hall, 1100 Mid-campus Dr., Manhattan, KS, 66506, USA
- Heather R Bailey
- Department of Psychological Sciences, Kansas State University, 471 Bluemont Hall, 1100 Mid-campus Dr., Manhattan, KS, 66506, USA
49
Kopiske K, Koska D, Baumann T, Maiwald C, Einhäuser W. Icy road ahead-rapid adjustments of gaze-gait interactions during perturbed naturalistic walking. J Vis 2021; 21:11. [PMID: 34351396 PMCID: PMC8354071 DOI: 10.1167/jov.21.8.11] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/02/2022] Open
Abstract
Most humans can walk effortlessly across uniform terrain even when they do not pay much attention to it. However, most natural terrain is far from uniform, and we need visual information to maintain stable gait. Recent advances in mobile eye-tracking technology have made it possible to study, in natural environments, how terrain affects gaze and thus the sampling of visual information. However, natural environments provide only limited experimental control, and some conditions cannot safely be tested. Typical laboratory setups, in contrast, are far from natural settings for walking. We used a setup consisting of a dual-belt treadmill, 240° projection screen, floor projection, three-dimensional optical motion tracking, and mobile eye tracking to investigate eye, head, and body movements during perturbed and unperturbed walking in a controlled yet naturalistic environment. In two experiments (N = 22 each), we simulated terrain difficulty by repeatedly inducing slipping through accelerating either of the two belts rapidly and unpredictably (Experiment 1) or sometimes following visual cues (Experiment 2). We quantified the distinct roles of eye and head movements for adjusting gaze on different time scales. While motor perturbations mainly influenced head movements, eye movements were primarily affected by the presence of visual cues. This was true both immediately following slips and, to a lesser extent, over the course of entire 5-min blocks. We find adapted gaze parameters already after the first perturbation in each block, with little transfer between blocks. In conclusion, gaze-gait interactions in experimentally perturbed yet naturalistic walking are adaptive, flexible, and effector specific.
Affiliation(s)
- Karl Kopiske
- Cognitive Systems Lab, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
- Daniel Koska
- Group "Research Methodology and Data Analysis in Biomechanics," Institute of Human Movement Science and Health, Chemnitz University of Technology, Chemnitz, Germany
- Thomas Baumann
- Cognitive Systems Lab, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
- Christian Maiwald
- Group "Research Methodology and Data Analysis in Biomechanics," Institute of Human Movement Science and Health, Chemnitz University of Technology, Chemnitz, Germany
- Wolfgang Einhäuser
- Physics of Cognition Group, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
50
Wang JZ, Kowler E. Micropursuit and the control of attention and eye movements in dynamic environments. J Vis 2021; 21:6. [PMID: 34347019 PMCID: PMC8340658 DOI: 10.1167/jov.21.8.6] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
It is more challenging to plan eye movements during perceptual tasks performed in dynamic displays than in static displays. Decisions about the timing of saccades become more critical, and decisions must also involve smooth eye movements, as well as saccades. The present study examined eye movements when judging which of two moving discs would arrive first, or collide, at a common meeting point. Perceptual discrimination after training was precise (Weber fractions < 6%). Strategies reflected a combined contribution of saccades and smooth eye movements. The preferred strategy was to look near the meeting point when strategies were freely chosen. When strategies were assigned, looking near the meeting point produced better performance than switching between the discs. Smooth eye movements were engaged in two ways: (a) low-velocity smooth eye movements correlated with the motion of each disc (micropursuit) were found while the line of sight remained between the discs; and (b) spontaneous smooth pursuit of the pair of discs occurred after the perceptual report, when the discs moved as a pair along a common path. The results show clear preferences and advantages for those eye movement strategies during dynamic perceptual tasks that require minimal management or effort. In addition, smooth eye movements, whose involvement during perceptual tasks within dynamic displays may have previously escaped notice, provide useful indicators of the strategies used to select information and distribute attention during the performance of dynamic perceptual tasks.
Collapse
Affiliation(s)
- Jie Z Wang
- Department of Psychology, Rutgers University, Piscataway, NJ, USA; http://orcid.org/0000-0002-8553-6706
- Eileen Kowler
- Department of Psychology, Rutgers University, Piscataway, NJ, USA; http://orcid.org/0000-0001-7079-0376; https://ruccs.rutgers.edu/kowler