1. LaCroix AN, Ratiu I. Saccades and Blinks Index Cognitive Demand during Auditory Noncanonical Sentence Comprehension. J Cogn Neurosci 2025; 37:1147-1172. [PMID: 39792647 DOI: 10.1162/jocn_a_02295]
Abstract
Noncanonical sentence structures pose comprehension challenges because they impose increased cognitive demand. Prosody may partially alleviate this cognitive load. These findings largely stem from behavioral studies, yet physiological measures may reveal additional insights into how cognition is deployed to parse sentences. Pupillometry has been at the forefront of investigations into physiological measures of cognitive demand during auditory sentence comprehension. This study offers an alternative approach by examining whether eye-tracking measures, including blinks and saccades, index cognitive demand during auditory noncanonical sentence comprehension and whether these metrics are sensitive to reductions in cognitive load associated with typical prosodic cues. We further investigated how eye-tracking patterns differ across correct and incorrect responses, as a function of time, and how each relates to behavioral measures of cognition. Canonical and noncanonical sentence comprehension was measured in 30 younger adults using an auditory sentence-picture matching task. We also assessed participants' attention and working memory. Blinks and saccades both differentiated noncanonical sentences from canonical sentences, and saccades further distinguished noncanonical structures from each other. Participants made more saccades on incorrect than on correct trials, and the number of saccades was related to working memory regardless of syntax. However, neither eye-tracking metric was sensitive to the changes in cognitive demand that were behaviorally observed in response to typical prosodic cues. Overall, these findings suggest that eye-tracking indices, particularly saccades, reflect cognitive demand during auditory noncanonical sentence comprehension when visual information is present, offering greater insight into the strategies and neural resources participants use to parse auditory sentences.
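Blink and saccade counts of the kind reported above can be approximated from raw gaze samples with a simple velocity-threshold rule. The sketch below is not the authors' pipeline: the sampling rate, the 30 deg/s velocity threshold, and the blink-as-missing-samples heuristic are assumptions for illustration only.

```python
# Minimal sketch: count saccades (velocity-threshold, I-VT style) and blinks
# (runs of missing samples) from raw gaze traces. All parameters are assumptions.
import numpy as np

def count_saccades_and_blinks(x_deg, y_deg, fs=500.0, vel_thresh=30.0):
    """x_deg, y_deg: gaze position in degrees; NaN marks lost pupil samples (blinks)."""
    x = np.asarray(x_deg, dtype=float)
    y = np.asarray(y_deg, dtype=float)

    # Blinks: contiguous runs of missing samples, counted by their onsets.
    missing = np.isnan(x) | np.isnan(y)
    n_blinks = int(missing[0]) + np.count_nonzero(np.diff(missing.astype(int)) == 1)

    # Saccades: contiguous runs where point-to-point velocity exceeds the threshold.
    speed = np.hypot(np.gradient(x), np.gradient(y)) * fs
    fast = (speed > vel_thresh) & ~missing
    n_saccades = int(fast[0]) + np.count_nonzero(np.diff(fast.astype(int)) == 1)
    return n_saccades, n_blinks

# Synthetic demo: 2 s of steady fixation at 500 Hz plus one injected saccade and one blink.
rng = np.random.default_rng(0)
n = 1000
x = rng.normal(0.0, 0.01, n)
y = rng.normal(0.0, 0.01, n)
x[500:510] += np.linspace(0.0, 5.0, 10)      # rapid 5-degree shift -> one saccade
x[800:850] = np.nan; y[800:850] = np.nan     # lost samples -> one blink
print(count_saccades_and_blinks(x, y))       # expected: (1, 1)
```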
2. Zhang XY, Liu ZY, Li PY, Wang JH, Zang YF, Zhang H. Eye movements in visual fixation predict behavioral response performance in sustained attention. Int J Psychophysiol 2025; 211:112560. [PMID: 40127704 DOI: 10.1016/j.ijpsycho.2025.112560]
Abstract
During visual fixation, individuals naturally make small eye movements over time in addition to voluntary changes in eye position. However, the functional significance of these spontaneous eye movements remains unclear. Given that visual fixation is commonly used as a baseline condition in cognitive experiments, we conducted an eye-tracking experiment to test whether spontaneous fluctuations in eye position are linked to sustained attention. Eye-position data were collected while subjects performed visual fixation and a sustained attention task. We found that slow fluctuations (<0.2 Hz) in eye position correlated with slow fluctuations in response behavior [reaction time (RT)] during the sustained attention task. Further analysis revealed that off-task, but not on-task, slow fluctuations in eye position contributed to slow fluctuations in sustained attention behavior. Spontaneous fluctuations in eye position could thus predict behavioral performance in sustained attention. These results provide new insights into the functional significance of eye movements during visual fixation, which should be considered when interpreting the findings of cognitive experiments that use visual fixation as the baseline condition.
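A minimal sketch of the kind of analysis described above: low-pass filter eye position and reaction times below 0.2 Hz, then correlate the two slow time courses. The sampling rate, filter order, and the step of interpolating RTs onto the eye-position time base are assumptions, and the synthetic data will of course yield a near-zero correlation.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 60.0                                        # assumed eye-tracker sampling rate (Hz)
t = np.arange(0, 600, 1 / fs)                    # 10 minutes of recording
rng = np.random.default_rng(0)
eye_x = np.cumsum(rng.normal(0, 0.01, t.size))   # synthetic slowly drifting eye position

# 4th-order Butterworth low-pass at 0.2 Hz isolates the slow component.
b, a = butter(4, 0.2, btype="low", fs=fs)
eye_slow = filtfilt(b, a, eye_x)

# Synthetic behaviour: one response roughly every 2 s, resampled onto the eye time base.
rt_times = np.arange(1.0, 599.0, 2.0)
rts = 0.4 + 0.05 * rng.standard_normal(rt_times.size)
rt_slow = filtfilt(b, a, np.interp(t, rt_times, rts))

r = np.corrcoef(eye_slow, rt_slow)[0, 1]
print(f"slow eye-position vs. slow RT fluctuation: r = {r:.3f}")  # ~0 for random data
```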
Affiliation(s)
- Xiao-Yi Zhang
- Centre for Cognition and Brain Disorders / Department of Neurology, The Affiliated Hospital, Hangzhou Normal University, Hangzhou, Zhejiang, China; Institute of Psychological Science, Hangzhou Normal University, Hangzhou, Zhejiang, China
- Zi-Yang Liu
- Centre for Cognition and Brain Disorders / Department of Neurology, The Affiliated Hospital, Hangzhou Normal University, Hangzhou, Zhejiang, China; Institute of Psychological Science, Hangzhou Normal University, Hangzhou, Zhejiang, China
- Pei-Yan Li
- Centre for Cognition and Brain Disorders / Department of Neurology, The Affiliated Hospital, Hangzhou Normal University, Hangzhou, Zhejiang, China; Institute of Psychological Science, Hangzhou Normal University, Hangzhou, Zhejiang, China
- Jing-Hua Wang
- Centre for Cognition and Brain Disorders / Department of Neurology, The Affiliated Hospital, Hangzhou Normal University, Hangzhou, Zhejiang, China
- Yu-Feng Zang
- Centre for Cognition and Brain Disorders / Department of Neurology, The Affiliated Hospital, Hangzhou Normal University, Hangzhou, Zhejiang, China; Institute of Psychological Science, Hangzhou Normal University, Hangzhou, Zhejiang, China; TMS center, Deqing Hospital of Hangzhou Normal University, Deqing 313201, Zhejiang, China; Collaborative Innovation Center of Hebei Province for Mechanism, Diagnosis and Treatment of Neuropsychiatric disease, Hebei Medical University, Shijiazhuang, Hebei Province, China
- Hang Zhang
- Centre for Cognition and Brain Disorders / Department of Neurology, The Affiliated Hospital, Hangzhou Normal University, Hangzhou, Zhejiang, China; Institute of Psychological Science, Hangzhou Normal University, Hangzhou, Zhejiang, China
3. Koevoet D, Van Zantwijk L, Naber M, Mathôt S, van der Stigchel S, Strauch C. Effort drives saccade selection. eLife 2025; 13:RP97760. [PMID: 40193176 PMCID: PMC11975373 DOI: 10.7554/elife.97760]
Abstract
What determines where to move the eyes? We recently showed that pupil size, a well-established marker of effort, also reflects the effort associated with making a saccade ('saccade costs'). Here, we demonstrate that saccade costs critically drive saccade selection: when choosing between any two saccade directions, participants consistently preferred the less costly direction. Strikingly, this principle even held during search in natural scenes in two additional experiments. When cognitive demand was increased experimentally through an auditory counting task, participants made fewer saccades and, in particular, avoided costly directions. This suggests that the eye-movement system and other cognitive operations consume similar resources that are flexibly allocated among each other as cognitive demand changes. Together, we argue that eye-movement behavior is tuned to adaptively minimize saccade-inherent effort.
Affiliation(s)
- Damian Koevoet
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
- Laura Van Zantwijk
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
- Marnix Naber
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
- Sebastiaan Mathôt
- Department of Psychology, University of Groningen, Groningen, Netherlands
- Christoph Strauch
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
4. Aitken B, Downey LA, Rose S, Manning B, Arkell TR, Shiferaw B, Hayley AC. Acute Administration of 10 mg Methylphenidate on Cognitive Performance and Visual Scanning in Healthy Adults: Randomised, Double-Blind, Placebo-Controlled Study. Hum Psychopharmacol 2025; 40:e70002. [PMID: 39930713 PMCID: PMC11811595 DOI: 10.1002/hup.70002]
Abstract
OBJECTIVE To examine the effect of a low dose (10 mg) of methylphenidate on cognitive performance, visuospatial working memory (VSWM) and gaze behaviour in healthy adults. METHODS This randomised, double-blind, placebo-controlled, crossover study examined the effects of 10 mg methylphenidate on cognitive performance, VSWM and gaze behaviour. Fixation duration and rate, gaze transition entropy, and stationary gaze entropy were used to quantify visual scanning efficiency in 25 healthy adults (36% female; mean ± SD age = 33.5 ± 7.8 years; BMI = 24.1 ± 2.9 kg/m²). Attention, memory, and reaction time were assessed using the E-CogPro test battery. RESULTS Methylphenidate significantly enhanced performance on numeric working memory tasks, reflected by reduced errors and increased accuracy relative to placebo. No significant changes were observed in other cognitive or visual scanning metrics. CONCLUSIONS A low dose of methylphenidate improves limited domains of psychomotor speed and accuracy but does not affect visual scanning efficiency. This suggests limited usefulness as a general pro-cognitive aid and raises the possibility that 10 mg falls below the dose threshold needed to produce measurable psychostimulant-induced changes in visual scanning behaviour. Further research is needed to explore these potential dose-response relationships and effects across diverse populations. TRIAL REGISTRATION ACTRN12620000499987.
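Gaze transition entropy and stationary gaze entropy, the two scanning measures named above, are usually computed from a first-order Markov model of fixation transitions between areas of interest (AOIs). The sketch below follows that standard formulation rather than the authors' code; using the observed AOI proportions as the stationary distribution is an assumption.

```python
import numpy as np

def gaze_entropies(aoi_sequence, n_aoi):
    """Transition and stationary gaze entropy (bits) from a fixation-by-fixation AOI sequence."""
    seq = np.asarray(aoi_sequence)

    # First-order transition matrix P[i, j] = p(next AOI j | current AOI i).
    counts = np.zeros((n_aoi, n_aoi))
    for i, j in zip(seq[:-1], seq[1:]):
        counts[i, j] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    P = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

    # Stationary distribution approximated here by the observed AOI proportions (assumption).
    pi = np.bincount(seq, minlength=n_aoi) / seq.size

    log_P = np.log2(P, out=np.zeros_like(P), where=P > 0)
    log_pi = np.log2(pi, out=np.zeros_like(pi), where=pi > 0)
    H_transition = float(-(pi[:, None] * P * log_P).sum())
    H_stationary = float(-(pi * log_pi).sum())
    return H_transition, H_stationary

# Example: 200 fixations over 4 AOIs; both entropies approach log2(4) = 2 bits when
# transitions are uniform and fall as scanning becomes more stereotyped.
rng = np.random.default_rng(1)
print(gaze_entropies(rng.integers(0, 4, 200), n_aoi=4))
```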
Affiliation(s)
- Blair Aitken
- Centre for Mental Health and Brain Sciences, Swinburne University of Technology, Hawthorn, Australia
- Luke A. Downey
- Centre for Mental Health and Brain Sciences, Swinburne University of Technology, Hawthorn, Australia
- Institute for Breathing and Sleep (IBAS), Austin Health, Heidelberg, Australia
- Serah Rose
- Centre for Mental Health and Brain Sciences, Swinburne University of Technology, Hawthorn, Australia
- Brooke Manning
- Centre for Mental Health and Brain Sciences, Swinburne University of Technology, Hawthorn, Australia
- Thomas R. Arkell
- Centre for Mental Health and Brain Sciences, Swinburne University of Technology, Hawthorn, Australia
- Brook Shiferaw
- Centre for Mental Health and Brain Sciences, Swinburne University of Technology, Hawthorn, Australia
- Human Factors, Seeing Machines, Fyshwick, Australia
- Amie C. Hayley
- Centre for Mental Health and Brain Sciences, Swinburne University of Technology, Hawthorn, Australia
- Institute for Breathing and Sleep (IBAS), Austin Health, Heidelberg, Australia
5. Ekin M, Krejtz K, Duarte C, Duchowski AT, Krejtz I. Prediction of intrinsic and extraneous cognitive load with oculometric and biometric indicators. Sci Rep 2025; 15:5213. [PMID: 39939345 PMCID: PMC11822071 DOI: 10.1038/s41598-025-89336-y]
Abstract
This study focused on the prediction of intrinsic and extraneous cognitive load using eye-tracking metrics, heart rate variability, and galvanic skin response. Intrinsic cognitive load is associated with the inherent complexity of the mental task, whereas extraneous cognitive load is related to the distracting and unrelated elements in the task. Thirty-three participants (aged [Formula: see text]) performed different levels of mental calculations to induce intrinsic cognitive load in the first task and a visual search task to manipulate extraneous cognitive load in the second task. During both tasks, participants' eye movements, heart rate, and galvanic skin response were continuously recorded. Participants' working memory was controlled. Subjective cognitive load was also assessed following each experimental task. A discriminant model, consisting of oculo- and bio-metric indicators, could discriminate between cognitive loads (intrinsic vs. extraneous) and levels (low vs. high). In particular, average fixation duration, average saccade amplitude, and the [Formula: see text] coefficient each have an impact on the model. In addition, task difficulty may be distinguished by the Low-High Index of Pupillary Activity (LHIPA) and heart rate variability.
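A minimal sketch of a discriminant model along the lines described above, classifying low versus high load from a few oculometric and biometric features. The specific features, their units, and the synthetic data are assumptions; this is not the authors' model or dataset.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 60  # synthetic trials per load level

# Feature columns: mean fixation duration (ms), mean saccade amplitude (deg), HRV (RMSSD, ms).
low_load = np.column_stack([rng.normal(260, 30, n), rng.normal(4.0, 0.8, n), rng.normal(45, 8, n)])
high_load = np.column_stack([rng.normal(300, 30, n), rng.normal(3.2, 0.8, n), rng.normal(38, 8, n)])
X = np.vstack([low_load, high_load])
y = np.array([0] * n + [1] * n)   # 0 = low load, 1 = high load

lda = LinearDiscriminantAnalysis()
print(f"cross-validated accuracy: {cross_val_score(lda, X, y, cv=5).mean():.2f}")
```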
Affiliation(s)
- Merve Ekin
- Institute of Psychology, SWPS University, Warsaw, 03-815, Poland
- Krzysztof Krejtz
- Institute of Psychology, SWPS University, Warsaw, 03-815, Poland
- Carlos Duarte
- LASIGE, Faculty of Sciences, University of Lisbon, Lisbon, 1649-004, Portugal
- Izabela Krejtz
- Institute of Psychology, SWPS University, Warsaw, 03-815, Poland
6. Kaesler M, Dunn JC, Semmler C. Clarifying the effects of sequential item presentation in the police lineup task. Cognition 2024; 250:105840. [PMID: 38908303 DOI: 10.1016/j.cognition.2024.105840]
Abstract
Previous research has reported diverging patterns of results with respect to discriminability and response bias when comparing the simultaneous lineup to two different lineup procedures in which items are presented sequentially: the sequential stopping rule lineup and the UK lineup. In a single large-sample experiment, we compared discriminability and response bias in six-item photographic lineups presented either simultaneously, sequentially with a stopping rule, or sequentially requiring two full laps through the items before an identification could be made, with the ability to revisit items, analogous to the UK lineup procedure. Discriminability was greater for the simultaneous lineup than for the sequential stopping rule lineup, despite a non-significant difference in empirical discriminability between the procedures. There was no significant difference in discriminability when comparing the simultaneous lineup to the sequential two-lap lineup, or the sequential two-lap lineup to the sequential stopping rule lineup. Responding was most lenient for the sequential two-lap lineup, followed by the simultaneous lineup, followed by the sequential stopping rule lineup. These results imply that sequential item presentation may not, in isolation, exert a large effect on discriminability and response bias. Rather, discriminability and response bias in the sequential stopping rule lineup and the UK lineup result from the interaction of sequential item presentation with other aspects of these procedures.
Affiliation(s)
- John C Dunn
- University of Western Australia, Australia; Edith Cowan University, Australia
7. Laurinavichyute A, Ziubanova A, Lopukhina A. Eye-Movement Suppression in the Visual World Paradigm. Open Mind (Camb) 2024; 8:1012-1036. [PMID: 39170794 PMCID: PMC11338299 DOI: 10.1162/opmi_a_00157]
Abstract
Eye movements in the visual world paradigm are known to depend not only on linguistic input but also on factors such as task, pragmatic context, and affordances. However, the degree to which eye movements may depend on task rather than on linguistic input is unclear. The present study is the first to test how task constraints modulate eye movement behavior in the visual world paradigm by probing whether participants can refrain from looking at the referred image. Across two experiments with and without comprehension questions (total N = 159), we found that when participants were instructed to avoid looking at the referred images, the probability of fixating them dropped from 58% to 18% while comprehension scores remained high. Although language-mediated eye movements could not be suppressed fully, the degree to which eye movements could be decoupled from language processing suggests that participants can withdraw at least some looks from the referred images when needed. If they do so to different degrees in different experimental conditions, comparisons between conditions might be compromised. We discuss some cases in which participants could adopt different viewing behaviors depending on the experimental condition, and provide some tentative ways to test for such differences.
8. Herrmann B, Ryan JD. Pupil Size and Eye Movements Differently Index Effort in Both Younger and Older Adults. J Cogn Neurosci 2024; 36:1325-1340. [PMID: 38683698 DOI: 10.1162/jocn_a_02172]
Abstract
The assessment of mental effort is increasingly relevant in neurocognitive and life span domains. Pupillometry, the measurement of pupil size, is often used to assess effort but has disadvantages. Analysis of eye movements may provide an alternative, but research has been limited to easy and difficult task demands in younger adults. An effort measure must be sensitive to the whole effort profile, including "giving up" effort investment, and capture effort in different age groups. The current study comprised three experiments in which younger (n = 66) and older (n = 44) adults listened to speech masked by background babble at different signal-to-noise ratios associated with easy, difficult, and impossible speech comprehension. We expected individuals to invest little effort for easy and impossible speech (giving up) but to exert effort for difficult speech. Indeed, pupil size was largest for difficult speech and smaller for easy and impossible speech. In contrast, gaze dispersion decreased with increasing speech masking in both age groups. Critically, gaze dispersion during difficult speech returned to levels similar to easy speech after sentence offset, when acoustic stimulation was similar across conditions, whereas gaze dispersion during impossible speech remained reduced. These findings show that a reduction in eye movements is not a byproduct of acoustic factors; instead, they suggest that neurocognitive processes, distinct from the arousal-related systems regulating pupil size, drive reduced eye movements during high task demands. The current data thus show that effort in one sensory domain (audition) differentially impacts distinct functional properties in another sensory domain (vision).
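Spatial gaze dispersion, as used above, can be summarized per trial in several ways; a common choice is the mean Euclidean distance of gaze samples from their centroid. Whether this matches the paper's exact definition is an assumption; the sketch below only illustrates the general idea.

```python
import numpy as np

def gaze_dispersion(x, y):
    """Mean distance of gaze samples from their centroid for one trial; NaNs (blinks) ignored."""
    pts = np.column_stack([x, y])
    pts = pts[~np.isnan(pts).any(axis=1)]
    centroid = pts.mean(axis=0)
    return np.linalg.norm(pts - centroid, axis=1).mean()

# Example: dispersion shrinks when gaze clusters more tightly (as reported under high effort).
rng = np.random.default_rng(3)
easy = gaze_dispersion(rng.normal(0, 2.0, 1000), rng.normal(0, 2.0, 1000))
hard = gaze_dispersion(rng.normal(0, 0.8, 1000), rng.normal(0, 0.8, 1000))
print(f"easy: {easy:.2f} deg, difficult: {hard:.2f} deg")
```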
Affiliation(s)
- Björn Herrmann
- Rotman Research Institute, North York, Ontario, Canada
- University of Toronto, Ontario, Canada
- Jennifer D Ryan
- Rotman Research Institute, North York, Ontario, Canada
- University of Toronto, Ontario, Canada
9. Ben Itzhak N, Stijnen L, Ortibus E. The relation between visual orienting functions, visual perception, and functional vision in children with (suspected) cerebral visual impairment. Res Dev Disabil 2023; 142:104619. [PMID: 39491302 DOI: 10.1016/j.ridd.2023.104619]
Abstract
BACKGROUND Brain-based impairments in visual perception (VP), termed cerebral visual impairment (CVI), are heterogeneous. AIMS To investigate relations between functional vision and (1) visual orienting functions (VOF) and (2) VP. METHODS AND PROCEDURES Forty-four children (20 males; mean age = 9 years 11 months) with (suspected) CVI were tested with an adapted virtual toy box (vTB) paradigm (an eye-tracking visual search task (VST) and a recognition/memory task), VP tests, a preferential looking eye-tracking (PL-ET) paradigm, and the Flemish cerebral visual impairment questionnaire. Relations were tested with Spearman correlations. OUTCOMES AND RESULTS Functional vision was related to VOF and VP. Children who performed poorly on the VST performed worse on the PL-ET paradigm (r success rate = 0.508-0.654; r reaction time (to fixation) = 0.327-0.633; r fixation duration = 0.532; r gaze fixation area/error = 0.565). Faster VST reaction time was related to higher recognition/memory task accuracy (r = -0.385) and better object/picture recognition (r = -0.371). Higher accuracy in the recognition/memory task was related to better object and face processing (r = -0.539), less visual (dis)interest (r = -0.380), and better clutter and distance viewing (r = -0.353). CONCLUSIONS AND IMPLICATIONS In CVI, VOF, VP, and functional vision are interlinked, and when one is impaired, it negatively affects the others. Hence, quantitatively profiling basic functioning, higher-order abilities, and daily life abilities is crucial.
Affiliation(s)
- N Ben Itzhak
- Department of Development and Regeneration, University of Leuven (KU Leuven), O&N IV Herestraat 49, Box 805, 3000 Leuven, Belgium; KU Leuven Child and Youth Institute (L-C&Y), Leuven, Belgium
- L Stijnen
- Department of Development and Regeneration, University of Leuven (KU Leuven), O&N IV Herestraat 49, Box 805, 3000 Leuven, Belgium
- E Ortibus
- Department of Development and Regeneration, University of Leuven (KU Leuven), O&N IV Herestraat 49, Box 805, 3000 Leuven, Belgium; KU Leuven Child and Youth Institute (L-C&Y), Leuven, Belgium
10. Pedziwiatr MA, Heer S, Coutrot A, Bex P, Mareschal I. Prior knowledge about events depicted in scenes decreases oculomotor exploration. Cognition 2023; 238:105544. [PMID: 37419068 DOI: 10.1016/j.cognition.2023.105544]
Abstract
The visual input that the eyes receive usually contains temporally continuous information about unfolding events. Therefore, humans can accumulate knowledge about their current environment. Typical studies on scene perception, however, involve presenting multiple unrelated images and thereby render this accumulation unnecessary. Our study, instead, facilitated it and explored its effects. Specifically, we investigated how recently accumulated prior knowledge affects gaze behavior. Participants viewed sequences of static film frames that contained several 'context frames' followed by a 'critical frame'. The context frames showed either events from which the situation depicted in the critical frame naturally followed, or events unrelated to this situation. Therefore, participants viewed identical critical frames while possessing prior knowledge that was either relevant or irrelevant to the frames' content. In the latter case, participants' gaze behavior was slightly more exploratory, as revealed by the seven gaze characteristics we analyzed. This result demonstrates that recently gained prior knowledge reduces exploratory eye movements.
Affiliation(s)
- Marek A Pedziwiatr
- School of Biological and Behavioural Sciences, Queen Mary University of London, Mile End Road, London E1 4NS, United Kingdom
- Sophie Heer
- School of Biological and Behavioural Sciences, Queen Mary University of London, Mile End Road, London E1 4NS, United Kingdom
- Antoine Coutrot
- Univ Lyon, CNRS, INSA Lyon, UCBL, LIRIS, UMR5205, F-69621 Lyon, France
- Peter Bex
- Department of Psychology, Northeastern University, 107 Forsyth Street, Boston, MA 02115, United States of America
- Isabelle Mareschal
- School of Biological and Behavioural Sciences, Queen Mary University of London, Mile End Road, London E1 4NS, United Kingdom
11. Cui ME, Herrmann B. Eye Movements Decrease during Effortful Speech Listening. J Neurosci 2023; 43:5856-5869. [PMID: 37491313 PMCID: PMC10423048 DOI: 10.1523/jneurosci.0240-23.2023]
Abstract
Hearing impairment affects many older adults but is often diagnosed decades after speech comprehension in noisy situations has become effortful. Accurate assessment of listening effort may thus help diagnose hearing impairment earlier. However, pupillometry, the most widely used approach to assess listening effort, has limitations that hinder its use in practice. The current study explores a novel way to assess listening effort through eye movements. Building on cognitive and neurophysiological work, we examine the hypothesis that eye movements decrease when speech listening becomes challenging. In three experiments with human participants of both sexes, we demonstrate, consistent with this hypothesis, that fixation duration increases and spatial gaze dispersion decreases with increasing speech masking. Eye movements decreased during effortful speech listening for different visual scenes (free viewing, object tracking) and speech materials (simple sentences, naturalistic stories). In contrast, pupillometry was less sensitive to speech masking during story listening, suggesting that pupillometric measures may not be as effective for the assessment of listening effort in naturalistic speech-listening paradigms. Our results reveal a critical link between eye movements and cognitive load, suggesting that neural activity in the brain regions that support the regulation of eye movements, such as the frontal eye field and superior colliculus, is modulated when listening is effortful. SIGNIFICANCE STATEMENT Assessment of listening effort is critical for early diagnosis of age-related hearing loss. Pupillometry is the most widely used approach but has several disadvantages. The current study explores a novel way to assess listening effort through eye movements. We examine the hypothesis that eye movements decrease when speech listening becomes effortful. We demonstrate, consistent with this hypothesis, that fixation duration increases and gaze dispersion decreases with increasing speech masking. Eye movements decreased during effortful speech listening for different visual scenes (free viewing, object tracking) and speech materials (sentences, naturalistic stories). Our results reveal a critical link between eye movements and cognitive load, suggesting that neural activity in brain regions that support the regulation of eye movements is modulated when listening is effortful.
Affiliation(s)
- M Eric Cui
- Rotman Research Institute, Baycrest Academy for Research and Education, North York, Ontario M6A 2E1, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario M5S 1A1, Canada
- Björn Herrmann
- Rotman Research Institute, Baycrest Academy for Research and Education, North York, Ontario M6A 2E1, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario M5S 1A1, Canada
12. Chen S, Epps J, Paas F. Pupillometric and blink measures of diverse task loads: Implications for working memory models. Br J Educ Psychol 2023; 93 Suppl 2:318-338. [PMID: 36572995 DOI: 10.1111/bjep.12577]
Abstract
BACKGROUND Inconsistent observations of pupillary response and blink change in response to different specific tasks raise questions regarding the relationship between eye measures, task types and working memory (WM) models. On the one hand, studies have provided mixed evidence from eye measures about tasks: pupil size has mostly been reported to increase with increasing task demand, although this expected change was not observed in some studies, and blink rate has exhibited different trends in different tasks. On the other hand, a WM model has been extended with an additional component to accommodate recent findings that the human motor system plays an important role in cognition and learning. However, how different tasks correlate with WM components has not been experimentally examined using eye activity measurements. AIMS The current study uses a four-dimensional task load framework to bridge eye measures, task types and WM models. SAMPLE Twenty adult participants (10 males, 10 females; age: M = 25.8 years, SD = 7.17) volunteered. All participants had normal or corrected-to-normal vision (with contact lenses) and had no eye diseases causing obvious excessive blinking. METHODS We examined the ability of pupil size and blink rate to index low and high levels of cognitive, perceptual, physical and communicative task load. A network of the four load types and WM components was built and analysed to verify the necessity of integrating a physical task-related component into the WM model. RESULTS Results demonstrate that pupil size can index cognitive load and communicative load but not perceptual or physical load. Blink rate can index the level of cognitive load but is best at discriminating perceptual tasks from other types of tasks. Furthermore, pupil size measurements across the four task types were explained better in structural and factor analyses by a WM model that integrates a movement-related component. CONCLUSIONS This research provides new insights into the relationship between eye measures, task type and WM models and provides a comprehensive understanding from which to predict pupil size and blink behaviours in more complex and practical tasks.
Affiliation(s)
- Siyuan Chen
- University of New South Wales, Sydney, New South Wales, Australia
- Julien Epps
- University of New South Wales, Sydney, New South Wales, Australia
- Fred Paas
- Erasmus University Rotterdam, Rotterdam, The Netherlands
- University of Wollongong, Wollongong, New South Wales, Australia
13. Hebbar PA, Vinod S, Shah AK, Pashilkar AA, Biswas P. Cognitive Load Estimation in VR Flight Simulator. J Eye Mov Res 2023; 15. [PMID: 39234220 PMCID: PMC11373147 DOI: 10.16910/jemr.15.3.11]
Abstract
This paper discusses the design and development of a low-cost virtual reality (VR) based flight simulator with a cognitive load estimation feature using ocular and EEG signals. The focus is on exploring methods to evaluate pilots' interactions with the aircraft by quantifying perceived cognitive load under different task scenarios. A realistic target-tracking task and battlefield context were designed in VR. A head-mounted eye gaze tracker and an EEG headset were used to acquire pupil diameter, gaze fixation, gaze direction, and EEG theta, alpha, and beta band power data in real time. We developed an AI agent model in VR and created scenarios of interactions with the piloted aircraft. To estimate the pilot's cognitive load, we used low-frequency pupil diameter variations, fixation rate, gaze distribution pattern, an EEG signal-based task load index, and an EEG task engagement index. We compared these physiological measures of workload with standard inceptor-control-based workload metrics. Results of the piloted simulation study indicate that the metrics discussed in the paper are strongly associated with pilots' perceived task difficulty.
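EEG workload indices of the kind named above are commonly computed as band-power ratios, for example a theta/alpha task load ratio and the beta/(alpha + theta) engagement index of Pope et al. (1995). The sketch below follows those generic formulations, not this paper's implementation; the band edges, channel selection, and Welch parameters are assumptions.

```python
import numpy as np
from scipy.signal import welch

def band_power(sig, fs, lo, hi):
    """Integrated power spectral density in [lo, hi) Hz, estimated with Welch's method."""
    f, pxx = welch(sig, fs=fs, nperseg=int(2 * fs))
    band = (f >= lo) & (f < hi)
    return np.trapz(pxx[band], f[band])

def eeg_workload_indices(sig, fs=256.0):
    theta = band_power(sig, fs, 4, 8)
    alpha = band_power(sig, fs, 8, 13)
    beta = band_power(sig, fs, 13, 30)
    return {"task_load": theta / alpha,            # theta/alpha ratio
            "engagement": beta / (alpha + theta)}  # Pope et al. (1995) engagement index

# Example on 30 s of synthetic EEG-like noise (real use: a preprocessed EEG channel).
rng = np.random.default_rng(4)
print(eeg_workload_indices(rng.standard_normal(int(256 * 30))))
```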
Affiliation(s)
- P Archana Hebbar
- CSIR-National Aerospace Laboratories, Bengaluru, Karnataka, India
- Sanjana Vinod
- Indian Institute of Science (IISc), Bengaluru, Karnataka, India
- Pradipta Biswas
- Indian Institute of Science (IISc), Bengaluru, Karnataka, India
14. Do we rely on good-enough processing in reading under auditory and visual noise? PLoS One 2023; 18:e0277429. [PMID: 36693033 PMCID: PMC9873184 DOI: 10.1371/journal.pone.0277429]
Abstract
Noise, as part of the real-life communication flow, degrades the quality of linguistic input and affects language processing. According to the predictions of the noisy-channel and good-enough processing models, noise should make comprehenders rely more on word-level semantics instead of actual syntactic relations. However, empirical evidence supporting this prediction is still lacking. For the first time, we investigated whether auditory noise (three-talker babble) and visual noise (short idioms appearing next to a target sentence on the screen) would trigger greater reliance on semantics and make readers of Russian sentences process them superficially. Our findings suggest that, although Russian speakers generally relied on semantics in sentence comprehension, neither auditory nor visual noise increased this reliance. The only effect of noise on semantic processing appeared in reading speed under auditory noise, as measured by first fixation duration: only without noise were semantically implausible sentences read more slowly than semantically plausible ones. These results do not support the study's predictions based on the noisy-channel and good-enough processing models, which we discuss in light of the methodological differences among studies of noise and their possible limitations.
15. Walter K, Bex P. Low-level factors increase gaze-guidance under cognitive load: A comparison of image-salience and semantic-salience models. PLoS One 2022; 17:e0277691. [PMID: 36441789 PMCID: PMC9704686 DOI: 10.1371/journal.pone.0277691]
Abstract
Growing evidence links eye movements and cognitive functioning; however, there is debate concerning what image content is fixated in natural scenes. Competing approaches have argued that low-level/feedforward and high-level/feedback factors contribute to gaze guidance. We used one low-level model (Graph-Based Visual Salience, GBVS) and a novel language-based high-level model (Global Vectors for Word Representation, GloVe) to predict gaze locations in a natural image search task, and we examined how fixated locations during this task vary under increasing levels of cognitive load. Participants (N = 30) freely viewed a series of 100 natural scenes for 10 seconds each. Between scenes, subjects identified a target object from the scene presented a specified number of trials (N) back, among three distracter objects of the same type but from alternate scenes. The N-back was adaptive: N increased following two correct trials and decreased following one incorrect trial. Receiver operating characteristic (ROC) analysis of gaze locations showed that as cognitive load increased, there was a significant increase in prediction power for GBVS, but not for GloVe. Similarly, there was no significant difference in the area under the ROC between the minimum and maximum N-back achieved across subjects for GloVe (t(29) = -1.062, p = 0.297), whereas GBVS showed a consistent upward trend that did not reach significance (t(29) = -1.975, p = .058). A permutation analysis showed that gaze locations were correlated with GBVS, indicating that salient features were more likely to be fixated. However, gaze locations were anti-correlated with GloVe, indicating that objects with low semantic consistency with the scene were more likely to be fixated. These results suggest that fixations are drawn towards salient low-level image features and that this bias increases with cognitive load. Additionally, there is a bias towards fixating improbable objects that does not vary under increasing levels of cognitive load.
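The ROC analysis described above can be sketched by treating a saliency map's values at fixated pixels as positives and its values at control pixels as negatives, then computing the area under the ROC curve. The map, the fixation coordinates, and the choice of uniformly sampled control pixels in the sketch below are synthetic assumptions, not the authors' data or code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
H, W = 480, 640
saliency = rng.random((H, W))                 # stand-in for a GBVS-style saliency map
flat = saliency.ravel()

# Synthetic fixations biased toward high-saliency pixels; controls sampled uniformly.
fix_idx = rng.choice(flat.size, size=200, p=flat / flat.sum())
ctrl_idx = rng.choice(flat.size, size=200)

scores = np.concatenate([flat[fix_idx], flat[ctrl_idx]])
labels = np.concatenate([np.ones(200), np.zeros(200)])
print(f"saliency-map AUC: {roc_auc_score(labels, scores):.3f}")  # > 0.5 when saliency predicts fixations
```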
Affiliation(s)
- Kerri Walter
- Psychology Department, Northeastern University, Boston, MA, United States of America
- Peter Bex
- Psychology Department, Northeastern University, Boston, MA, United States of America
16. Manelis A, Lima Santos JP, Suss SJ, Holland CL, Stiffler RS, Bitzer HB, Mailliard S, Shaffer MA, Caviston K, Collins MW, Phillips ML, Kontos AP, Versace A. Vestibular/ocular motor symptoms in concussed adolescents are linked to retrosplenial activation. Brain Commun 2022; 4:fcac123. [PMID: 35615112 PMCID: PMC9127539 DOI: 10.1093/braincomms/fcac123]
Abstract
Following concussion, adolescents often experience vestibular and ocular motor symptoms as well as working memory deficits that may affect their cognitive, academic and social well-being. Complex visual environments including school activities, playing sports, or socializing with friends may be overwhelming for concussed adolescents suffering from headache, dizziness, nausea and fogginess, thus imposing heightened requirements on working memory to adequately function in such environments. While understanding the relationship between working memory and vestibular/ocular motor symptoms is critically important, no previous study has examined how an increase in working memory task difficulty affects the relationship between severity of vestibular/ocular motor symptoms and brain and behavioural responses in a working memory task. To address this question, we examined 80 adolescents (53 concussed, 27 non-concussed) using functional MRI while they performed 1-back (easy) and 2-back (difficult) working memory tasks with angry, happy, neutral and sad face distractors. Concussed adolescents completed the vestibular/ocular motor screening and were scanned within 10 days of injury. We found that all participants showed lower accuracy and slower reaction time on the difficult (2-back) versus easy (1-back) task (P-values < 0.05). Concussed adolescents were significantly slower than controls across all conditions (P < 0.05). In concussed adolescents, higher vestibular/ocular motor screening total scores were associated with significantly greater differences in reaction time between 1-back and 2-back across all distractor conditions and significantly greater differences in retrosplenial cortex activation for the 1-back versus 2-back condition with neutral face distractors (P-values < 0.05). Our findings suggest that processing of emotionally ambiguous information (e.g. neutral faces) additionally increases task difficulty for concussed adolescents. Post-concussion vestibular/ocular motor symptoms may reduce the ability to inhibit emotionally ambiguous information during working memory tasks, potentially affecting cognitive, academic and social functioning in concussed adolescents.
Affiliation(s)
- Anna Manelis
- Department of Psychiatry, University of Pittsburgh, Pittsburgh, PA, USA
- Stephen J. Suss
- Department of Psychiatry, University of Pittsburgh, Pittsburgh, PA, USA
- Cynthia L. Holland
- Department of Orthopaedic Surgery/UPMC Sports Medicine Concussion Program, University of Pittsburgh, Pittsburgh, PA, USA
- Hannah B. Bitzer
- Department of Orthopaedic Surgery/UPMC Sports Medicine Concussion Program, University of Pittsburgh, Pittsburgh, PA, USA
- Sarrah Mailliard
- Department of Psychiatry, University of Pittsburgh, Pittsburgh, PA, USA
- Madelyn A. Shaffer
- Department of Orthopaedic Surgery/UPMC Sports Medicine Concussion Program, University of Pittsburgh, Pittsburgh, PA, USA
- Kaitlin Caviston
- Department of Orthopaedic Surgery/UPMC Sports Medicine Concussion Program, University of Pittsburgh, Pittsburgh, PA, USA
- Michael W. Collins
- Department of Orthopaedic Surgery/UPMC Sports Medicine Concussion Program, University of Pittsburgh, Pittsburgh, PA, USA
- Mary L. Phillips
- Department of Psychiatry, University of Pittsburgh, Pittsburgh, PA, USA
- Anthony P. Kontos
- Department of Orthopaedic Surgery/UPMC Sports Medicine Concussion Program, University of Pittsburgh, Pittsburgh, PA, USA
- Amelia Versace
- Department of Psychiatry, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Radiology, Magnetic Resonance Research Center, University of Pittsburgh Medical Center, University of Pittsburgh, Pittsburgh, PA, USA
17. Hochhauser M, Aran A, Grynszpan O. Change Blindness in Adolescents With Attention-Deficit/Hyperactivity Disorder: Use of Eye-Tracking. Front Psychiatry 2022; 13:770921. [PMID: 35295775 PMCID: PMC8918561 DOI: 10.3389/fpsyt.2022.770921]
Abstract
OBJECTIVE This study investigated change detection of items of central or marginal interest in images using a change-blindness paradigm with eye tracking. METHOD Eighty-four drug-naïve adolescents (44 with attention-deficit/hyperactivity disorder (ADHD) and 40 typically developing controls) searched for a change in 36 pairs of original and modified images, presented in rapid alternation, with an item of central or marginal interest present or absent. Measures included detection rate, response time, and gaze fixation duration, latency, and dispersion. RESULTS Both groups' change-detection times were similar, with no speed-accuracy trade-off. No between-group differences were found in time to first fixation, fixation duration, or scan paths. Both groups performed better for items of central interest. The ADHD group demonstrated greater fixation dispersion in scan paths for both central- and marginal-interest items. CONCLUSION Results suggest that greater gaze dispersion may lead to greater fatigue in tasks requiring longer attention duration.
Affiliation(s)
- Adi Aran
- Neuropediatric Unit, Shaare Zedek Medical Center, Jerusalem, Israel
- Ouriel Grynszpan
- Laboratoire Interdisciplinaire des Sciences du Numérique, Université Paris-Saclay, Orsay, France