1
Nguyen B, Benderius O. Intermittent control and retinal optic flow when maintaining a curvilinear path. Sci Rep 2025;15:18926. PMID: 40442194; PMCID: PMC12122885; DOI: 10.1038/s41598-025-02402-3.
Abstract
The topic of how humans navigate using vision has been studied for decades. Research has identified that the emergent patterns of retinal optic flow arising from gaze behavior may play an essential role in human curvilinear locomotion, but the link to control has remained poorly understood. Lately, it has been shown that human locomotor behavior is corrective, formed from intermittent decisions and responses. A virtual reality experiment was conducted in which fourteen participants drove through a texture-rich, simple road environment with left and right curve bends. The goal was to investigate how human intermittent lateral control can be associated with retinal optic flow-based cues and vehicular heading as sources of information. This work reconstructs dense retinal optic flow using a numerical estimation of optic flow combined with measured gaze behavior. By combining retinal optic flow with the drivable lane surface, a cross-correlational relation to intermittent steering behavior could be observed. In addition, a novel method of identifying constituent ballistic corrections using particle swarm optimization was demonstrated to analyze the incremental correction-based behavior. Through time delay analysis, our results show a human response time of approximately 0.14 s for retinal optic flow-based cues and 0.44 s for heading-based cues, measured from stimulus onset to steering correction onset. These response times were further delayed by 0.17 s when the vehicle-fixed steering wheel was visibly removed. In contrast to classical continuous control strategies, our findings support the intermittency property in human neuromuscular control of muscle synergies, through the principle of satisficing behavior: to actuate only when there is a perceived need for it. This is aligned with the human sustained sensorimotor model, which uses readily available information and internal models to produce informed responses through evidence accumulation, initiating appropriate ballistic corrections even amidst another correction.
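As a concrete illustration of the time delay analysis mentioned above, the sketch below estimates the lag between a stimulus signal (e.g., a retinal optic flow cue) and the steering response via normalized cross-correlation. This is a minimal sketch assuming uniformly sampled 1-D signals; the function name, arguments, and lag convention are illustrative and not taken from the authors' implementation.

```python
import numpy as np

def response_delay(stimulus, steering, fs):
    """Estimate the delay (s) at which steering best follows the
    stimulus, via the peak of the normalized cross-correlation."""
    s = (stimulus - stimulus.mean()) / stimulus.std()
    r = (steering - steering.mean()) / steering.std()
    n = len(s)
    lags = np.arange(-n + 1, n)
    xcorr = np.correlate(r, s, mode="full") / n
    keep = lags >= 0                  # the response must follow the cue
    return lags[keep][np.argmax(xcorr[keep])] / fs
```

Note that this recovers only an overall response latency; the paper's decomposition of steering into constituent ballistic corrections relies on particle swarm optimization, which is beyond this sketch.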
Affiliation(s)
- Björnborg Nguyen
- Department of Mechanics and Maritime Sciences, Chalmers University of Technology, 412 96, Göteborg, Sweden
- Ola Benderius
- Department of Mechanics and Maritime Sciences, Chalmers University of Technology, 412 96, Göteborg, Sweden

2
Park J, Jeon JY, Kim R, Kay KN, Shim WM. Motion-corrected eye tracking (MoCET) improves gaze accuracy during visual fMRI experiments. bioRxiv [preprint] 2025:2025.03.13.642919. PMID: 40161851; PMCID: PMC11952553; DOI: 10.1101/2025.03.13.642919.
Abstract
Human eye movements are essential for understanding cognition, yet achieving high-precision eye tracking in fMRI remains challenging. Even slight head shifts from the initial calibration position can introduce drift in eye tracking data, leading to substantial gaze inaccuracies. To address this, we introduce Motion-Corrected Eye Tracking (MoCET), a novel approach that corrects drift using head motion parameters derived from the preprocessing of fMRI data. MoCET requires no additional hardware and can be applied retrospectively to existing datasets. We show that it outperforms traditional detrending methods with respect to accuracy of gaze estimation and offers higher spatial and temporal precision compared to MR-based eye tracking approaches. By overcoming a key limitation in integrating eye tracking with fMRI, MoCET facilitates investigations of naturalistic vision and cognition in fMRI research.
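The core idea, regressing gaze position on the head motion parameters that fMRI preprocessing already produces and removing the drift they explain, can be sketched as follows. This is a simplified stand-in rather than the published MoCET algorithm: the linear model, the six-parameter input, and all names are assumptions for illustration.

```python
import numpy as np

def correct_gaze_drift(gaze, motion):
    """Remove gaze drift explained by head motion via least squares.

    gaze:   (T, 2) array of gaze positions over time
    motion: (T, 6) realignment parameters (3 translations, 3 rotations)
    """
    X = np.column_stack([motion, np.ones(len(motion))])  # add an intercept
    beta, *_ = np.linalg.lstsq(X, gaze, rcond=None)      # fit each gaze axis
    drift = X @ beta - gaze.mean(axis=0)                 # motion-explained part
    return gaze - drift                                  # keep mean gaze position
```

Because the correction is a per-axis linear fit on parameters that are already saved during preprocessing, it can be applied retrospectively to existing datasets, which is what makes the approach hardware-free.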
Affiliation(s)
- Jiwoong Park
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Republic of Korea
- Department of Biomedical Engineering, Sungkyunkwan University (SKKU), Republic of Korea
- Department of Intelligent Precision Healthcare Convergence, Sungkyunkwan University (SKKU), Republic of Korea
- Jae Young Jeon
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Republic of Korea
- Department of Intelligent Precision Healthcare Convergence, Sungkyunkwan University (SKKU), Republic of Korea
- Royoung Kim
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Republic of Korea
- Department of Intelligent Precision Healthcare Convergence, Sungkyunkwan University (SKKU), Republic of Korea
- Kendrick N. Kay
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota
- Won Mok Shim
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Republic of Korea
- Department of Biomedical Engineering, Sungkyunkwan University (SKKU), Republic of Korea
- Department of Intelligent Precision Healthcare Convergence, Sungkyunkwan University (SKKU), Republic of Korea

3
Harris D, Arthur T, Wilson M, Le Gallais B, Parsons T, Dill A, Vine S. Counteracting uncertainty: exploring the impact of anxiety on updating predictions about environmental states. Biol Cybern 2025;119:8. PMID: 39976741; PMCID: PMC11842521; DOI: 10.1007/s00422-025-01006-4.
Abstract
Anxious emotional states disrupt decision-making and control of dexterous motor actions. Computational work has shown that anxiety-induced uncertainty alters the rate at which we learn about the environment, but the subsequent impact on the predictive beliefs that drive action control remains to be understood. In the present work we tested whether anxiety alters predictive (oculo)motor control mechanisms. Thirty participants completed an experimental task that consisted of manual interception of a projectile performed in virtual reality. Participants were subjected to conditions designed to induce states of high or low anxiety using performance incentives and social-evaluative pressure. We measured subsequent effects on physiological arousal, self-reported state anxiety, and eye movements. Under high pressure conditions we observed visual sampling of the task environment characterised by higher variability and entropy of position prior to release of the projectile, consistent with an active attempt to reduce uncertainty. Computational modelling of predictive beliefs, using gaze data as inputs to a partially observable Markov decision process model, indicated that trial-to-trial updating of predictive beliefs was reduced during anxiety, suggesting that updates to priors were constrained. Additionally, state anxiety was related to a less deterministic mapping of beliefs to actions. These results support the idea that organisms may attempt to counter anxiety-related uncertainty by moving towards more familiar and certain sensorimotor patterns.
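The "entropy of position" reported here can be operationalized in several ways; below is a minimal sketch of one plausible version, the Shannon entropy of gaze samples over a spatial grid. The bin count and grid are arbitrary illustrative choices, not the authors' parameters.

```python
import numpy as np

def gaze_position_entropy(x, y, bins=10):
    """Shannon entropy (bits) of 2-D gaze position over a spatial grid.
    Higher values indicate more dispersed visual sampling."""
    hist, _, _ = np.histogram2d(x, y, bins=bins)
    p = hist.ravel() / hist.sum()
    p = p[p > 0]                      # drop empty cells (0 log 0 := 0)
    return -np.sum(p * np.log2(p))
```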
Affiliation(s)
- David Harris, Tom Arthur, Mark Wilson, Ben Le Gallais, Thomas Parsons, Ally Dill, Sam Vine: School of Public Health and Sport Sciences, Medical School, University of Exeter, St Luke's Campus, Exeter, EX1 2LU, UK

4
Wass SV, Perapoch Amadó M, Northrop T, Marriott Haresign I, Phillips EAM. Foraging and inertia: Understanding the developmental dynamics of overt visual attention. Neurosci Biobehav Rev 2025;169:105991. PMID: 39722410; DOI: 10.1016/j.neubiorev.2024.105991.
Abstract
During early life, we develop the ability to choose what we focus on and what we ignore, allowing us to regulate perception and action in complex environments. But how does this change influence how we spontaneously allocate attention to real-world objects during free behaviour? Here, in this narrative review, we examine this question by considering the time dynamics of spontaneous overt visual attention, and how these develop through early life. Even in early childhood, visual attention shifts occur both periodically and aperiodically. These reorientations become more internally controlled as development progresses. Increasingly with age, attention states also develop self-sustaining attractor dynamics, known as attention inertia, in which the longer an attention episode lasts, the more the likelihood increases of its continuing. These self-sustaining dynamics are driven by amplificatory interactions between engagement, comprehension, and distractibility. We consider why experimental measures show decline in sustained attention over time, while real-world visual attention often demonstrates the opposite pattern. Finally, we discuss multi-stable attention states, where both hypo-arousal (mind-wandering) and hyper-arousal (fragmentary attention) may also show self-sustaining attractor dynamics driven by moment-by-moment amplificatory child-environment interactions; and we consider possible applications of this work, and future directions.
Affiliation(s)
- S V Wass, M Perapoch Amadó, T Northrop, I Marriott Haresign, E A M Phillips: BabyDevLab, School of Psychology, University of East London, Water Lane, London E15 4LZ, UK

5
Márquez I, Lemus L, Treviño M. A continuum from predictive to online feedback in visuomotor interception. Eur J Neurosci 2024;60:7211-7227. PMID: 39603981; DOI: 10.1111/ejn.16628.
Abstract
Interception, essential for activities like driving and sports, can be characterized by varying degrees of predictive behaviour. We developed a visually guided task to explore how target predictability and visibility influenced interception actions. The task featured a falling dot influenced by horizontal velocity, gravity and air friction, with predictability manipulated through external forces that altered the target's trajectory. We also introduced spatial occlusion to limit visual information. Our results show that low target variability favoured predictive behaviours, while high variability led to more reactive responses relying on online feedback. Manual responses displayed increased variability with changes in target motion, whereas eye trajectories maintained constant curvature across conditions. Additionally, higher target variability delayed the onset of hand movements but did not affect eye movement onset, making gaze position a poor predictor of hand position. This distinction highlights the different adaptive patterns in hand and eye movements in response to target trajectory changes. Participants maintained stable interception behaviours within and across sessions, indicating individual preferences for either predictive or more reactive actions. Our findings reveal a dynamic interplay between target predictability and interception, illustrating how humans combine predictive and reactive behaviours to manage external variability.
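The target dynamics described (horizontal velocity, gravity, and air friction) can be reproduced with a few lines of forward integration; the sketch below is illustrative only, and all parameter values are hypothetical rather than taken from the paper.

```python
import numpy as np

def simulate_target(x0, y0, vx0, g=9.8, k=0.2, dt=0.01, steps=200):
    """Euler integration of a falling dot with initial horizontal
    velocity, constant gravity, and linear air friction (drag k)."""
    pos = np.zeros((steps, 2))
    pos[0] = (x0, y0)
    vel = np.array([vx0, 0.0])
    for t in range(1, steps):
        acc = np.array([0.0, -g]) - k * vel   # gravity plus linear drag
        vel = vel + acc * dt
        pos[t] = pos[t - 1] + vel * dt
    return pos
```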
Affiliation(s)
- Inmaculada Márquez
- Departamento de Ciencias Médicas y de la Vida, Centro Universitario de la Ciénega, Universidad de Guadalajara, Ocotlán, Mexico
- Laboratorio de Conducta Animal, Departamento de Psicología, Centro Universitario de la Ciénega, Universidad de Guadalajara, Ocotlán, Mexico
- Luis Lemus
- Departamento de Neurociencias Cognitivas, Instituto de Fisiología Celular, Universidad Nacional Autónoma de México, Mexico City, Mexico
- Mario Treviño
- Laboratorio de Plasticidad Cortical y Aprendizaje Perceptual, Instituto de Neurociencias, Universidad de Guadalajara, Guadalajara, Jalisco, Mexico

6
Garlichs A, Lustig M, Gamer M, Blank H. Expectations guide predictive eye movements and information sampling during face recognition. iScience 2024;27:110920. PMID: 39351204; PMCID: PMC11439840; DOI: 10.1016/j.isci.2024.110920.
Abstract
Context information has a crucial impact on our ability to recognize faces. Theoretical frameworks of predictive processing suggest that predictions derived from context guide sampling of sensory evidence at informative locations. However, it is unclear how expectations influence visual information sampling during face perception. To investigate the effects of expectations on eye movements during face anticipation and recognition, we conducted two eye-tracking experiments (n = 34, each) using cued face morphs containing expected and unexpected facial features, and clear expected and unexpected faces. Participants performed predictive saccades toward expected facial features and fixated expected more often and longer than unexpected features. In face morphs, expected features attracted early eye movements, followed by unexpected features, indicating that top-down as well as bottom-up information drives face sampling. Our results provide compelling evidence that expectations influence face processing by guiding predictive and early eye movements toward anticipated informative locations, supporting predictive processing.
Affiliation(s)
- Annika Garlichs
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Hamburg Brain School, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Mark Lustig
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Department of Psychology, University of Hamburg, Hamburg, Germany
- Matthias Gamer
- Department of Psychology, University of Würzburg, Würzburg, Germany
- Helen Blank
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Hamburg Brain School, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Predictive Cognition, Research Center One Health Ruhr of the University Alliance Ruhr, Faculty of Psychology, Ruhr-University Bochum, Bochum, Germany

7
Brand TK, Schütz AC, Müller H, Maurer H, Hegele M, Maurer LK. Sensorimotor prediction is used to direct gaze toward task-relevant locations in a goal-directed throwing task. J Neurophysiol 2024;132:485-500. PMID: 38919149; DOI: 10.1152/jn.00052.2024.
Abstract
Previous research has shown that action effects of self-generated movements are internally predicted before outcome feedback becomes available. To test whether these sensorimotor predictions are used to facilitate visual information uptake for feedback processing, we measured eye movements during the execution of a goal-directed throwing task. Participants could fully observe the effects of their throwing actions (ball trajectory and either hitting or missing a target) in most of the trials. In a portion of the trials, the ball trajectory was not visible, and participants only received static information about the outcome. We observed a large proportion of predictive saccades, shifting gaze toward the goal region before the ball arrived and outcome feedback became available. Fixation locations after predictive saccades systematically covaried with future ball positions in trials with continuous ball flight information, but notably also in trials with static outcome feedback and only efferent and proprioceptive information about the movement that could be used for predictions. Fixation durations at the chosen positions after feedback onset were modulated by action outcome (longer durations for misses than for hits) and outcome uncertainty (longer durations for narrow vs. clear outcomes). Combining both effects, durations were longest for narrow errors and shortest for clear hits, indicating that the chosen locations offer informational value for feedback processing. Thus, humans are able to use sensorimotor predictions to direct their gaze toward task-relevant feedback locations. Outcome-dependent saccade latency differences (miss vs. hit) indicate that also predictive valuation processes are involved in planning predictive saccades.

NEW & NOTEWORTHY We elucidate the potential benefits of sensorimotor predictions, focusing on how the system actually uses this information to optimize feedback processing in goal-directed actions. Sensorimotor information is used to predict spatial parameters of movement outcomes, guiding predictive saccades toward future action effects. Saccade latencies and fixation durations are modulated by outcome quality, indicating that predictive valuation processes are considered and that the locations chosen are of high informational value for feedback processing.
Affiliation(s)
- Theresa K Brand
- Neuromotor Behavior Laboratory, Department of Psychology and Sport Science, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), Universities of Marburg and Giessen, Giessen, Germany
- Alexander C Schütz
- General and Biological Psychology, Department of Psychology, Philipps University Marburg, Marburg, Germany
- Center for Mind, Brain and Behavior (CMBB), Universities of Marburg and Giessen, Giessen, Germany
- Hermann Müller
- Neuromotor Behavior Laboratory, Department of Psychology and Sport Science, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), Universities of Marburg and Giessen, Giessen, Germany
- Heiko Maurer
- Neuromotor Behavior Laboratory, Department of Psychology and Sport Science, Justus Liebig University Giessen, Giessen, Germany
- Mathias Hegele
- Neuromotor Behavior Laboratory, Department of Psychology and Sport Science, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), Universities of Marburg and Giessen, Giessen, Germany
- Lisa K Maurer
- Neuromotor Behavior Laboratory, Department of Psychology and Sport Science, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), Universities of Marburg and Giessen, Giessen, Germany

8
Goodridge CM, Gonçalves RC, Arabian A, Horrobin A, Solernou A, Lee YT, Lee YM, Madigan R, Merat N. Gaze entropy metrics for mental workload estimation are heterogenous during hands-off level 2 automation. Accid Anal Prev 2024;202:107560. PMID: 38677239; DOI: 10.1016/j.aap.2024.107560.
Abstract
As the level of vehicle automation increases, drivers are more likely to engage in non-driving related tasks which take their hands, eyes, and/or mind away from the driving task. Consequently, there has been increased interest in creating Driver Monitoring Systems (DMS) that are valid and reliable for detecting elements of driver state. Workload is one element of driver state that has remained elusive within the literature. Whilst there has been promising work in estimating mental workload using gaze-based metrics, the literature has placed too much emphasis on point estimate differences. Whilst these are useful for establishing whether effects exist, they ignore the inherent variability within individuals and between different drivers. The current work builds on this by using a Bayesian distributional modelling approach to quantify the within and between participants variability in Information Theoretical gaze metrics. Drivers (N = 38) undertook two experimental drives in hands-off Level 2 automation with their hands and feet away from operational controls. During both drives, their priority was to monitor the road before a critical takeover. During one drive participants had to complete a secondary cognitive task (2-back) during the hands-off Level 2 automation. Changes in Stationary Gaze Entropy and Gaze Transition Entropy were assessed for conditions with and without the 2-back to investigate whether consistent differences between workload conditions could be found across the sample. Stationary Gaze Entropy proved a reliable indicator of mental workload; 92 % of the population were predicted to show a decrease when completing 2-back during hands-off Level 2 automated driving. Conversely, Gaze Transition Entropy showed substantial heterogeneity; only 66 % of the population were predicted to have similar decreases. Furthermore, age was a strong predictor of the heterogeneity of the average causal effect that high mental workload had on eye movements. These results indicate that, whilst certain elements of Information Theoretic metrics can be used to estimate mental workload by DMS, future research needs to focus on the heterogeneity of these processes. Understanding this heterogeneity has important implications toward the design of future DMS and thus the safety of drivers using automated vehicle functions. It must be ensured that metrics used to detect mental workload are valid (accurately detecting a particular driver state) as well as reliable (consistently detecting this driver state across a population).
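For reference, the two Information Theoretic gaze metrics compared here are commonly defined as follows: Stationary Gaze Entropy (SGE) is the Shannon entropy of the distribution of fixations over areas of interest (AOIs), and Gaze Transition Entropy (GTE) is the entropy of AOI-to-AOI transitions, conditioned on the source AOI and weighted by its stationary probability. The sketch below follows these standard definitions from the gaze-entropy literature; it is not the authors' code, and the AOI coding is assumed.

```python
import numpy as np

def stationary_gaze_entropy(aoi_seq, n_aois):
    """SGE: Shannon entropy (bits) of fixation proportions over AOIs."""
    p = np.bincount(aoi_seq, minlength=n_aois) / len(aoi_seq)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def gaze_transition_entropy(aoi_seq, n_aois):
    """GTE: conditional entropy (bits) of AOI-to-AOI transitions,
    weighted by how often each source AOI occurs."""
    trans = np.zeros((n_aois, n_aois))
    for a, b in zip(aoi_seq[:-1], aoi_seq[1:]):
        trans[a, b] += 1
    p_src = trans.sum(axis=1) / trans.sum()
    gte = 0.0
    for i in range(n_aois):
        row = trans[i]
        if row.sum() > 0:
            cond = row / row.sum()
            cond = cond[cond > 0]
            gte += p_src[i] * -np.sum(cond * np.log2(cond))
    return gte
```

A decrease in SGE under load then means fixations concentrate on fewer regions, whereas GTE tracks how predictable the scanning order is.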
Affiliation(s)
- Ali Arabian, Anthony Horrobin, Albert Solernou, Yee Thung Lee, Yee Mun Lee, Ruth Madigan, Natasha Merat: Institute for Transport Studies, University of Leeds, United Kingdom

9
Hsiao JHW. Understanding Human Cognition Through Computational Modeling. Top Cogn Sci 2024;16:349-376. PMID: 38781432; DOI: 10.1111/tops.12737.
Abstract
One important goal of cognitive science is to understand the mind in terms of its representational and computational capacities, where computational modeling plays an essential role in providing theoretical explanations and predictions of human behavior and mental phenomena. In my research, I have been using computational modeling, together with behavioral experiments and cognitive neuroscience methods, to investigate the information processing mechanisms underlying learning and visual cognition in terms of perceptual representation and attention strategy. In perceptual representation, I have used neural network models to understand how the split architecture in the human visual system influences visual cognition, and to examine perceptual representation development as the results of expertise. In attention strategy, I have developed the Eye Movement analysis with Hidden Markov Models method for quantifying eye movement pattern and consistency using both spatial and temporal information, which has led to novel findings across disciplines not discoverable using traditional methods. By integrating it with deep neural networks (DNN), I have developed DNN+HMM to account for eye movement strategy learning in human visual cognition. The understanding of the human mind through computational modeling also facilitates research on artificial intelligence's (AI) comparability with human cognition, which can in turn help explainable AI systems infer humans' belief on AI's operations and provide human-centered explanations to enhance human-AI interaction and mutual understanding. Together, these demonstrate the essential role of computational modeling methods in providing theoretical accounts of the human mind as well as its interaction with its environment and AI systems.
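As an illustration of the HMM-based eye movement analysis described above, the sketch below fits a Gaussian hidden Markov model to fixation locations, so hidden states act as data-driven regions of interest and the transition matrix captures the temporal scanning pattern. It uses the third-party hmmlearn package as a stand-in; the EMHMM toolbox itself, and its clustering of individual models into strategy groups, is not reproduced, and the data are placeholders.

```python
import numpy as np
from hmmlearn import hmm  # third-party stand-in, not the EMHMM toolbox

rng = np.random.default_rng(0)
fixations = rng.random((200, 2)) * [1024.0, 768.0]  # placeholder (x, y)
lengths = [50, 50, 50, 50]                          # fixations per trial

# Hidden states play the role of regions of interest; the learned
# transition matrix summarizes the temporal order of looking.
model = hmm.GaussianHMM(n_components=3, covariance_type="full", n_iter=100)
model.fit(fixations, lengths)

states = model.predict(fixations)   # ROI label for every fixation
print(model.transmat_)              # one viewer's scanning pattern
```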

10
Leemans M, Damiano C, Wagemans J. Finding the meaning in meaning maps: Quantifying the roles of semantic and non-semantic scene information in guiding visual attention. Cognition 2024;247:105788. PMID: 38579638; DOI: 10.1016/j.cognition.2024.105788.
Abstract
In real-world vision, people prioritise the most informative scene regions via eye-movements. According to the cognitive guidance theory of visual attention, viewers allocate visual attention to those parts of the scene that are expected to be the most informative. The expected information of a scene region is coded in the semantic distribution of that scene. Meaning maps have been proposed to capture the spatial distribution of local scene semantics in order to test cognitive guidance theories of attention. Notwithstanding the success of meaning maps, the reason for their success has been contested. This has led to at least two possible explanations for the success of meaning maps in predicting visual attention. On the one hand, meaning maps might measure scene semantics. On the other hand, meaning maps might measure scene features, overlapping with, but distinct from, scene semantics. This study aims to disentangle these two sources of information by considering both conceptual information and non-semantic scene entropy simultaneously. We found that both semantic and non-semantic information is captured by meaning maps, but scene entropy accounted for more unique variance in the success of meaning maps than conceptual information. Additionally, some explained variance was unaccounted for by either source of information. Thus, although meaning maps may index some aspect of semantic information, their success seems to be better explained by non-semantic information. We conclude that meaning maps may not yet be a good tool to test cognitive guidance theories of attention in general, since they capture non-semantic aspects of local semantic density and only a small portion of conceptual information. Rather, we suggest that researchers should better define the exact aspect of cognitive guidance theories they wish to test and then use the tool that best captures that desired semantic information. As it stands, the semantic information contained in meaning maps seems too ambiguous to draw strong conclusions about how and when semantic information guides visual attention.
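Non-semantic scene entropy of the kind contrasted with conceptual information here can be approximated by the local Shannon entropy of image intensities. The following sketch is one crude proxy under that assumption; patch size and bin count are arbitrary, and the study's actual entropy measure may differ.

```python
import numpy as np

def local_entropy_map(gray, patch=16, bins=32):
    """Shannon entropy (bits) of pixel intensities in non-overlapping
    patches of a grayscale image: a simple non-semantic feature map."""
    rows, cols = gray.shape[0] // patch, gray.shape[1] // patch
    out = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            block = gray[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
            hist, _ = np.histogram(block, bins=bins, range=(0, 255))
            p = hist / hist.sum()
            p = p[p > 0]
            out[i, j] = -np.sum(p * np.log2(p))
    return out
```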
Affiliation(s)
- Maarten Leemans, Claudia Damiano, Johan Wagemans: Laboratory of Experimental Psychology, Department of Brain and Cognition, University of Leuven (KU Leuven), Belgium

11
11
|
Damiano C, Leemans M, Wagemans J. Exploring the Semantic-Inconsistency Effect in Scenes Using a Continuous Measure of Linguistic-Semantic Similarity. Psychol Sci 2024;35:623-634. PMID: 38652604; DOI: 10.1177/09567976241238217.
Abstract
Viewers use contextual information to visually explore complex scenes. Object recognition is facilitated by exploiting object-scene relations (which objects are expected in a given scene) and object-object relations (which objects are expected because of the occurrence of other objects). Semantically inconsistent objects deviate from these expectations, so they tend to capture viewers' attention (the semantic-inconsistency effect). Some objects fit the identity of a scene more or less than others, yet semantic inconsistencies have hitherto been operationalized as binary (consistent vs. inconsistent). In an eye-tracking experiment (N = 21 adults), we study the semantic-inconsistency effect in a continuous manner by using the linguistic-semantic similarity of an object to the scene category and to other objects in the scene. We found that both highly consistent and highly inconsistent objects are viewed more than other objects (U-shaped relationship), revealing that the (in)consistency effect is more than a simple binary classification.
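The continuous measure rests on linguistic-semantic similarity, which in its simplest form is the cosine similarity between word embeddings of an object label and the scene category (or the other object labels). The sketch below shows only that kernel; the choice of embedding model and the paper's full pipeline are not reproduced here.

```python
import numpy as np

def semantic_similarity(object_vec, scene_vec):
    """Cosine similarity between two word-embedding vectors: a
    continuous semantic-consistency score (higher = more consistent)."""
    num = float(object_vec @ scene_vec)
    den = np.linalg.norm(object_vec) * np.linalg.norm(scene_vec)
    return num / den
```

Scoring every object in a scene this way yields graded consistency values, making the U-shaped analysis possible instead of a binary consistent/inconsistent split.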
Affiliation(s)
- Claudia Damiano
- Department of Psychology, University of Toronto
- Laboratory of Experimental Psychology, Department of Brain and Cognition, KU Leuven
- Maarten Leemans
- Laboratory of Experimental Psychology, Department of Brain and Cognition, KU Leuven
- Johan Wagemans
- Laboratory of Experimental Psychology, Department of Brain and Cognition, KU Leuven

12
Zhang R, Xu Q, Wang S, Parkinson S, Schoeffmann K. Information Difference of Transfer Entropies between Head Motion and Eye Movement Indicates a Proxy of Driving. Entropy (Basel) 2023;26:3. PMID: 38275483; PMCID: PMC11154336; DOI: 10.3390/e26010003.
Abstract
Visual scanning is achieved via head motion and gaze movement for visual information acquisition and cognitive processing, and it plays a critical role in common sensorimotor tasks such as driving. The coordination of head and eyes is an important human behavior that makes a key contribution to goal-directed visual scanning and sensorimotor driving. In this paper, we investigate the two most common patterns in eye-head coordination: "head motion earlier than eye movement" and "eye movement earlier than head motion". We utilize bidirectional transfer entropies between head motion and eye movements to determine the existence of these two eye-head coordination patterns. Furthermore, we propose a unidirectional information difference to assess which pattern predominates in head-eye coordination. Additionally, we have discovered a significant correlation between the normalized unidirectional information difference and driving performance. This result not only indicates the influence of eye-head coordination on driving behavior from a computational perspective but also validates the practical significance of our approach of utilizing transfer entropy for quantifying eye-head coordination.
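For intuition, transfer entropy from one signal to another measures how much the past of the first reduces uncertainty about the next value of the second, beyond what the second signal's own past already explains. A minimal plug-in estimator with history length 1 and simple binning is sketched below; the binning, history length, and preprocessing are assumptions, not the authors' settings. The unidirectional information difference proposed in the paper then plausibly corresponds to the difference between the two directed estimates.

```python
import numpy as np

def transfer_entropy(x, y, bins=8):
    """Plug-in transfer entropy TE(x -> y) in bits, history length 1."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    y_next, y_past, x_past = yd[1:], yd[:-1], xd[:-1]

    def joint_entropy(*cols):
        stacked = np.stack(cols, axis=1)
        _, counts = np.unique(stacked, axis=0, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    # TE = H(y_next | y_past) - H(y_next | y_past, x_past)
    return (joint_entropy(y_next, y_past) - joint_entropy(y_past)
            - joint_entropy(y_next, y_past, x_past)
            + joint_entropy(y_past, x_past))
```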
Affiliation(s)
- Runlin Zhang
- College of Intelligence and Computing, Tianjin University, Tianjin 300072, China
- Qing Xu
- College of Intelligence and Computing, Tianjin University, Tianjin 300072, China
- Shunbo Wang
- College of Intelligence and Computing, Tianjin University, Tianjin 300072, China
- Simon Parkinson
- Department of Computer Science, University of Huddersfield, Huddersfield HD1 3DH, UK
- Klaus Schoeffmann
- Institute of Information Technology, Klagenfurt University, 9020 Klagenfurt, Austria

13
Cioffi GM, Pinilla-Echeverri N, Sheth T, Sibbald MG. Does artificial intelligence enhance physician interpretation of optical coherence tomography: insights from eye tracking. Front Cardiovasc Med 2023;10:1283338. PMID: 38144364; PMCID: PMC10739524; DOI: 10.3389/fcvm.2023.1283338.
Abstract
Background and objectives: The adoption of optical coherence tomography (OCT) in percutaneous coronary intervention (PCI) is limited by the need for real-time image interpretation expertise. Artificial intelligence (AI)-assisted Ultreon™ 2.0 software could address this barrier. We used eye tracking to understand how these software changes impact viewing efficiency and accuracy. Methods: Eighteen interventional cardiologists and fellows at McMaster University, Canada, were included in the study and categorized as experienced or inexperienced based on lifetime OCT use. They were tasked with reviewing OCT images from both Ultreon™ 2.0 and AptiVue™ software platforms while their eye movements were recorded. Key metrics, such as time to first fixation on the area of interest, total task time, dwell time (time spent on the area of interest as a proportion of total task time), and interpretation accuracy, were evaluated using a mixed multivariate model. Results: Physicians exhibited improved viewing efficiency with Ultreon™ 2.0, characterized by reduced time to first fixation (Ultreon™ 0.9 s vs. AptiVue™ 1.6 s, p = 0.007), reduced total task time (Ultreon™ 10.2 s vs. AptiVue™ 12.6 s, p = 0.006), and increased dwell time in the area of interest (Ultreon™ 58% vs. AptiVue™ 41%, p < 0.001). These effects were similar for experienced and inexperienced physicians. Accuracy of OCT image interpretation was preserved in both groups, with experienced physicians outperforming inexperienced physicians. Discussion: Our study demonstrated that AI-enabled Ultreon™ 2.0 software can streamline the image interpretation process and improve viewing efficiency for both inexperienced and experienced physicians. Enhanced viewing efficiency implies reduced cognitive load, potentially reducing the barriers to OCT adoption in PCI decision-making.
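The eye-tracking metrics named here are straightforward to compute from a fixation list; a small sketch follows, with argument names and units assumed for illustration.

```python
import numpy as np

def viewing_metrics(onsets, durations, in_aoi, total_task_time):
    """Time to first fixation on the area of interest (AOI) and dwell
    time (AOI time as a proportion of total task time)."""
    onsets, durations = np.asarray(onsets), np.asarray(durations)
    hits = np.flatnonzero(in_aoi)
    ttff = onsets[hits[0]] if hits.size else np.nan
    dwell = durations[hits].sum() / total_task_time
    return ttff, dwell
```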
Affiliation(s)
- Matthew Gary Sibbald
- Division of Cardiology, Hamilton General Hospital, Hamilton Health Sciences, McMaster University, Hamilton, ON, Canada

14
Fooken J, Baltaretu BR, Barany DA, Diaz G, Semrau JA, Singh T, Crawford JD. Perceptual-Cognitive Integration for Goal-Directed Action in Naturalistic Environments. J Neurosci 2023;43:7511-7522. PMID: 37940592; PMCID: PMC10634571; DOI: 10.1523/jneurosci.1373-23.2023.
Abstract
Real-world actions require one to simultaneously perceive, think, and act on the surrounding world, requiring the integration of (bottom-up) sensory information and (top-down) cognitive and motor signals. Studying these processes involves the intellectual challenge of cutting across traditional neuroscience silos, and the technical challenge of recording data in uncontrolled natural environments. However, recent advances in techniques, such as neuroimaging, virtual reality, and motion tracking, allow one to address these issues in naturalistic environments for both healthy participants and clinical populations. In this review, we survey six topics in which naturalistic approaches have advanced both our fundamental understanding of brain function and how neurologic deficits influence goal-directed, coordinated action in naturalistic environments. The first part conveys fundamental neuroscience mechanisms related to visuospatial coding for action, adaptive eye-hand coordination, and visuomotor integration for manual interception. The second part discusses applications of such knowledge to neurologic deficits, specifically, steering in the presence of cortical blindness, impact of stroke on visual-proprioceptive integration, and impact of visual search and working memory deficits. This translational approach, extending knowledge from lab to rehab, provides new insights into the complex interplay between perceptual, motor, and cognitive control in naturalistic tasks that are relevant for both basic and clinical research.
Affiliation(s)
- Jolande Fooken
- Centre for Neuroscience, Queen's University, Kingston, Ontario K7L3N6, Canada
- Bianca R Baltaretu
- Department of Psychology, Justus Liebig University, Giessen, 35394, Germany
- Deborah A Barany
- Department of Kinesiology, University of Georgia, and Augusta University/University of Georgia Medical Partnership, Athens, Georgia 30602
- Gabriel Diaz
- Center for Imaging Science, Rochester Institute of Technology, Rochester, New York 14623
- Jennifer A Semrau
- Department of Kinesiology and Applied Physiology, University of Delaware, Newark, Delaware 19713
- Tarkeshwar Singh
- Department of Kinesiology, Pennsylvania State University, University Park, Pennsylvania 16802
- J Douglas Crawford
- Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario M3J 1P3, Canada

15
Peacock CE, Hall EH, Henderson JM. Objects are selected for attention based upon meaning during passive scene viewing. Psychon Bull Rev 2023;30:1874-1886. PMID: 37095319; PMCID: PMC11164276; DOI: 10.3758/s13423-023-02286-2.
Abstract
While object meaning has been demonstrated to guide attention during active scene viewing and object salience guides attention during passive viewing, it is unknown whether object meaning predicts attention in passive viewing tasks and whether attention during passive viewing is more strongly related to meaning or salience. To answer this question, we used a mixed modeling approach in which we computed the average meaning and physical salience of objects in scenes while statistically controlling for the roles of object size and eccentricity. Using eye-movement data from aesthetic judgment and memorization tasks, we then tested whether fixations are more likely to land on high-meaning objects than low-meaning objects while controlling for object salience, size, and eccentricity. The results demonstrated that fixations are more likely to be directed to high-meaning objects than low-meaning objects regardless of these other factors. Further analyses revealed that fixation durations were positively associated with object meaning irrespective of the other object properties. Overall, these findings provide the first evidence that objects are, in part, selected for attention based on meaning during passive scene viewing.
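A simplified stand-in for the kind of analysis reported here regresses fixation on object meaning while statistically controlling for salience, size, and eccentricity. The sketch below uses a plain logistic regression in place of the paper's mixed model, and the data file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per object per scene, with a
# binary 'fixated' outcome and standardized object-level predictors.
df = pd.read_csv("objects.csv")

# Does meaning predict fixation once salience, size, and eccentricity
# are controlled for? (Fixed effects only; the paper used mixed models.)
fit = smf.logit("fixated ~ meaning + salience + size + eccentricity",
                data=df).fit()
print(fit.summary())
```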
Affiliation(s)
- Candace E Peacock
- Center for Mind and Brain, University of California, 267 Cousteau Place, Davis, CA, 95618, USA
- Department of Psychology, University of California, Davis, CA, USA
- Elizabeth H Hall
- Center for Mind and Brain, University of California, 267 Cousteau Place, Davis, CA, 95618, USA
- Department of Psychology, University of California, Davis, CA, USA
- John M Henderson
- Center for Mind and Brain, University of California, 267 Cousteau Place, Davis, CA, 95618, USA
- Department of Psychology, University of California, Davis, CA, USA

16
Eisenberg ML, Rodebaugh TL, Flores S, Zacks JM. Impaired prediction of ongoing events in posttraumatic stress disorder. Neuropsychologia 2023;188:108636. PMID: 37437653; DOI: 10.1016/j.neuropsychologia.2023.108636.
Abstract
The ability to make accurate predictions about what is going to happen in the near future is critical for comprehension of everyday activity. However, predictive processing may be disrupted in Posttraumatic Stress Disorder (PTSD). Hypervigilance may lead people with PTSD to make inaccurate predictions about the likelihood of future danger. This disruption in predictive processing may occur not only in response to threatening stimuli, but also during processing of neutral stimuli. Therefore, the current study investigated whether PTSD was associated with difficulty making predictions about near-future neutral activity. Sixty-three participants with PTSD and 63 trauma controls completed two tasks, one testing explicit prediction and the other testing implicit prediction. Higher PTSD severity was associated with greater difficulty with predictive processing on both of these tasks. These results suggest that effective treatments to improve functional outcomes for people with PTSD may work, in part, by improving predictive processing.
Affiliation(s)
- Michelle L Eisenberg, Thomas L Rodebaugh, Shaney Flores, Jeffrey M Zacks: Department of Psychology, Box 1125, Washington University in St. Louis, 1 Brookings Dr., St. Louis, MO, 63130, USA

17
Goettker A, Borgerding N, Leeske L, Gegenfurtner KR. Cues for predictive eye movements in naturalistic scenes. J Vis 2023;23:12. PMID: 37728915; PMCID: PMC10516764; DOI: 10.1167/jov.23.10.12.
Abstract
We previously compared following of the same trajectories with eye movements, either as isolated targets or embedded in a naturalistic scene, in this case the movement of a puck in an ice hockey game. We observed that the oculomotor system was able to leverage the contextual cues available in the naturalistic scene to produce predictive eye movements. In this study, we wanted to assess which factors are critical for achieving this predictive advantage by manipulating four factors: the expertise of the viewers, the amount of available peripheral information, and positional and kinematic cues. The more peripheral information became available (by manipulating the area of the video that was visible), the better the predictions of all observers. However, expert ice hockey fans were consistently better at predicting than novices and used peripheral information more effectively for predictive saccades. Artificial cues about player positions did not lead to a predictive advantage, whereas impairing the causal structure of kinematic cues by playing the video in reverse led to a severe impairment. When videos were flipped vertically to introduce more difficult kinematic cues, predictive behavior was comparable to watching the original videos. Together, these results demonstrate that, when contextual information is available in naturalistic scenes, the oculomotor system successfully integrates it and does not rely only on low-level information about the target trajectory. Critical factors for successful prediction seem to be the amount of available information, experience with the stimuli, and the availability of intact kinematic cues for player movements.
Affiliation(s)
- Alexander Goettker
- Justus Liebig Universität Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior, University of Marburg and Justus Liebig University, Giessen, Germany
- Linus Leeske
- Justus Liebig Universität Giessen, Giessen, Germany
- Karl R Gegenfurtner
- Justus Liebig Universität Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior, University of Marburg and Justus Liebig University, Giessen, Germany

18
Pedziwiatr MA, Heer S, Coutrot A, Bex P, Mareschal I. Prior knowledge about events depicted in scenes decreases oculomotor exploration. Cognition 2023;238:105544. PMID: 37419068; DOI: 10.1016/j.cognition.2023.105544.
Abstract
The visual input that the eyes receive usually contains temporally continuous information about unfolding events. Therefore, humans can accumulate knowledge about their current environment. Typical studies on scene perception, however, involve presenting multiple unrelated images and thereby render this accumulation unnecessary. Our study, instead, facilitated it and explored its effects. Specifically, we investigated how recently-accumulated prior knowledge affects gaze behavior. Participants viewed sequences of static film frames that contained several 'context frames' followed by a 'critical frame'. The context frames showed either events from which the situation depicted in the critical frame naturally followed, or events unrelated to this situation. Therefore, participants viewed identical critical frames while possessing prior knowledge that was either relevant or irrelevant to the frames' content. In the former case, participants' gaze behavior was slightly more exploratory, as revealed by seven gaze characteristics we analyzed. This result demonstrates that recently-gained prior knowledge reduces exploratory eye movements.
Affiliation(s)
- Marek A Pedziwiatr
- School of Biological and Behavioural Sciences, Queen Mary University of London, Mile End Road, London E1 4NS, United Kingdom
- Sophie Heer
- School of Biological and Behavioural Sciences, Queen Mary University of London, Mile End Road, London E1 4NS, United Kingdom
- Antoine Coutrot
- Univ Lyon, CNRS, INSA Lyon, UCBL, LIRIS, UMR5205, F-69621 Lyon, France
- Peter Bex
- Department of Psychology, Northeastern University, 107 Forsyth Street, Boston, MA 02115, United States of America
- Isabelle Mareschal
- School of Biological and Behavioural Sciences, Queen Mary University of London, Mile End Road, London E1 4NS, United Kingdom

19
Klever L, Islam J, Võ MLH, Billino J. Aging attenuates the memory advantage for unexpected objects in real-world scenes. Heliyon 2023;9:e20241. PMID: 37809883; PMCID: PMC10560015; DOI: 10.1016/j.heliyon.2023.e20241.
Abstract
Across the adult lifespan memory processes are subject to pronounced changes. Prior knowledge and expectations might critically shape functional differences; however, corresponding findings have remained ambiguous so far. Here, we chose a tailored approach to scrutinize how schema (in-)congruencies affect older and younger adults' memory for objects embedded in real-world scenes, a scenario close to everyday life memory demands. A sample of 23 older (52-81 years) and 23 younger adults (18-38 years) freely viewed 60 photographs of scenes in which target objects were included that were either congruent or incongruent with the given context information. After a delay, recognition performance for those objects was determined. In addition, recognized objects had to be matched to the scene context in which they were previously presented. While we found schema violations beneficial for object recognition across age groups, the advantage was significantly less pronounced in older adults. We moreover observed an age-related congruency bias for matching objects to their original scene context. Our findings support a critical role of predictive processes for age-related memory differences and indicate enhanced weighting of predictions with age. We suggest that recent predictive processing theories provide a particularly useful framework to elaborate on age-related functional vulnerabilities as well as stability.
Affiliation(s)
- Lena Klever
- Experimental Psychology, Justus Liebig University Giessen, Germany
- Center for Mind, Brain, And Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany
- Jasmin Islam
- Experimental Psychology, Justus Liebig University Giessen, Germany
- Melissa Le-Hoa Võ
- Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Jutta Billino
- Experimental Psychology, Justus Liebig University Giessen, Germany
- Center for Mind, Brain, And Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany

20
van Boxtel WS, Cox BN, Keen A, Lee J. Planning sentence production in aphasia: evidence from structural priming and eye-tracking. Front Lang Sci 2023;2:1175579. PMID: 39605949; PMCID: PMC11601089; DOI: 10.3389/flang.2023.1175579.
Abstract
Background: Grammatical encoding is impaired in many persons with aphasia (PWA), resulting in deficits in sentence production accuracy and underlying planning processes. However, relatively little is known about how these grammatical encoding deficits can be mediated in PWA. This study aimed to facilitate off-line (accuracy) and real-time (eye fixation) encoding of passive sentences through implicit structural priming, the tendency to better process a current sentence because of its grammatical similarity to a previously experienced (prime) sentence. Method: Sixteen PWA and sixteen age-matched controls completed an eyetracking-while-speaking task, in which they described a target transitive picture preceded by a comprehension prime involving either an active or passive form. We measured immediate and cumulative priming effects on the proportion of passives produced for the target pictures and the proportion of eye fixations made to the theme actor in the target scene before speech onset. Results and conclusion: Both PWA and controls produced cumulatively more passives as the experiment progressed, despite an absence of immediate priming effects in PWA. Both groups also showed cumulative changes in pre-speech eye fixations associated with passive production, with this cumulative priming effect greater for the PWA group. These findings suggest that structural priming results in gradual adaptation of the grammatical encoding processes of PWA and that structural priming may be used as a treatment component for improving grammatical deficits in aphasia.
Affiliation(s)
- Willem S. van Boxtel, Briana N. Cox, Jiyeon Lee: Aphasia Research Laboratory, Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN, United States

21
Scaliti E, Pullar K, Borghini G, Cavallo A, Panzeri S, Becchio C. Kinematic priming of action predictions. Curr Biol 2023:S0960-9822(23)00687-5. PMID: 37339628; DOI: 10.1016/j.cub.2023.05.055.
Abstract
The ability to anticipate what others will do next is crucial for navigating social, interactive environments. Here, we develop an experimental and analytical framework to measure the implicit readout of prospective intention information from movement kinematics. Using a primed action categorization task, we first demonstrate implicit access to intention information by establishing a novel form of priming, which we term kinematic priming: subtle differences in movement kinematics prime action prediction. Next, using data collected from the same participants in a forced-choice intention discrimination task 1 h later, we quantify single-trial intention readout, that is, the amount of intention information read by individual perceivers in individual kinematic primes, and assess whether it can be used to predict the amount of kinematic priming. We demonstrate that the amount of kinematic priming, as indexed by both response times (RTs) and initial fixations to a given probe, is directly proportional to the amount of intention information read by the individual perceiver at the single-trial level. These results demonstrate that human perceivers have rapid, implicit access to intention information encoded in movement kinematics and highlight the potential of our approach to reveal the computations that permit the readout of this information with single-subject, single-trial resolution.
Affiliation(s)
- Eugenio Scaliti
- Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen, 83, 16152 Genova, Italy; Department of Neurology, University Medical Center Hamburg-Eppendorf (UKE), Martinistrasse 52, 20246 Hamburg, Germany
- Kiri Pullar
- Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen, 83, 16152 Genova, Italy
- Giulia Borghini
- Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen, 83, 16152 Genova, Italy
- Andrea Cavallo
- Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen, 83, 16152 Genova, Italy; Department of Psychology, Università degli Studi di Torino, Via Giuseppe Verdi, 10, 10124 Torino, Italy
- Stefano Panzeri
- Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen, 83, 16152 Genova, Italy; Department of Excellence for Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), Falkenried 94, 20251 Hamburg, Germany
- Cristina Becchio
- Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen, 83, 16152 Genova, Italy; Department of Neurology, University Medical Center Hamburg-Eppendorf (UKE), Martinistrasse 52, 20246 Hamburg, Germany

22
Lin YC, Intoy J, Clark AM, Rucci M, Victor JD. Cognitive influences on fixational eye movements. Curr Biol 2023; 33:1606-1612.e4. [PMID: 37015221 PMCID: PMC10133196 DOI: 10.1016/j.cub.2023.03.026] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2022] [Revised: 01/16/2023] [Accepted: 03/09/2023] [Indexed: 04/05/2023]
Abstract
We perceive the world based on visual information acquired via oculomotor control,1 an activity intertwined with ongoing cognitive processes.2,3,4 Cognitive influences have been primarily studied in the context of macroscopic movements, like saccades and smooth pursuits. However, our eyes are never still, even during periods of fixation. One type of fixational eye movement, ocular drift, shifts the stimulus over hundreds of receptors on the retina, a motion that has been argued to enhance the processing of spatial detail by translating spatial into temporal information.5 Despite their apparent randomness, ocular drifts are under neural control.6,7,8 However, little is known about the control of drift beyond the brainstem circuitry of the vestibulo-ocular reflex.9,10 Here, we investigated the cognitive control of ocular drifts with a letter discrimination task. The experiment was designed to reveal open-loop effects, i.e., cognitive oculomotor control driven by specific prior knowledge of the task, independent of incoming sensory information. Open-loop influences were isolated by randomly presenting pure noise fields (no letters) while subjects engaged in discriminating specific letter pairs. Our results show open-loop control of drift direction in human observers.
Collapse
Affiliation(s)
- Yen-Chu Lin
- Feil Family Brain and Mind Research Institute, Weill Cornell Medical College, 1300 York Avenue, New York, NY 10065, USA.
| | - Janis Intoy
- Department of Brain & Cognitive Sciences, University of Rochester, 358 Meliora Hall, Rochester, NY 14627, USA; Center for Visual Science, University of Rochester, 358 Meliora Hall, Rochester, NY 14627, USA
| | - Ashley M Clark
- Department of Brain & Cognitive Sciences, University of Rochester, 358 Meliora Hall, Rochester, NY 14627, USA; Center for Visual Science, University of Rochester, 358 Meliora Hall, Rochester, NY 14627, USA
| | - Michele Rucci
- Department of Brain & Cognitive Sciences, University of Rochester, 358 Meliora Hall, Rochester, NY 14627, USA; Center for Visual Science, University of Rochester, 358 Meliora Hall, Rochester, NY 14627, USA
| | - Jonathan D Victor
- Feil Family Brain and Mind Research Institute, Weill Cornell Medical College, 1300 York Avenue, New York, NY 10065, USA
| |
Collapse
|
23
|
Peacock CE, Singh P, Hayes TR, Rehrig G, Henderson JM. Searching for meaning: Local scene semantics guide attention during natural visual search in scenes. Q J Exp Psychol (Hove) 2023; 76:632-648. [PMID: 35510885 PMCID: PMC11132926 DOI: 10.1177/17470218221101334] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Models of visual search in scenes include image salience as a source of attentional guidance. However, because scene meaning is correlated with image salience, it could be that the salience predictor in these models is driven by meaning. To test this proposal, we generated meaning maps that represented the spatial distribution of semantic informativeness in scenes, and salience maps which represented the spatial distribution of conspicuous image features and tested their influence on fixation densities from two object search tasks in real-world scenes. The results showed that meaning accounted for significantly greater variance in fixation densities than image salience, both overall and in early attention across both studies. Here, meaning explained 58% and 63% of the theoretical ceiling of variance in attention across both studies, respectively. Furthermore, both studies demonstrated that fast initial saccades were not more likely to be directed to higher salience regions than slower initial saccades, and initial saccades of all latencies were directed to regions containing higher meaning than salience. Together, these results demonstrated that even though meaning was task-neutral, the visual system still selected meaningful over salient scene regions for attention during search.
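As a rough illustration of the map-comparison logic, here is a minimal sketch assuming fixation, meaning, and salience maps are same-sized 2-D arrays, using squared linear correlation as the variance measure and expressing it against a noise ceiling; the ceiling value and all data are invented, and the papers' actual pipeline is more involved.

```python
import numpy as np

def variance_explained(predictor_map, fixation_map):
    """Squared linear correlation between a predictor map (meaning or
    salience) and an observed fixation density map."""
    r = np.corrcoef(predictor_map.ravel(), fixation_map.ravel())[0, 1]
    return r ** 2

def relative_to_ceiling(r2, ceiling_r2):
    """Express explained variance as a proportion of the theoretical
    ceiling (e.g., split-half consistency of the fixation maps)."""
    return r2 / ceiling_r2

# Invented example where the meaning map tracks fixations more closely.
rng = np.random.default_rng(1)
fix = rng.random((48, 64))
meaning = 0.8 * fix + 0.2 * rng.random((48, 64))
salience = 0.4 * fix + 0.6 * rng.random((48, 64))
print(relative_to_ceiling(variance_explained(meaning, fix), 0.9))
print(relative_to_ceiling(variance_explained(salience, fix), 0.9))
```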
Collapse
Affiliation(s)
- Candace E Peacock
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- Department of Psychology, University of California, Davis, Davis, CA, USA
| | - Praveena Singh
- Center for Neuroscience, University of California, Davis, Davis, CA, USA
| | - Taylor R Hayes
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
| | - Gwendolyn Rehrig
- Department of Psychology, University of California, Davis, Davis, CA, USA
| | - John M Henderson
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- Department of Psychology, University of California, Davis, Davis, CA, USA
| |
Collapse
|
24
|
Bakst L, McGuire JT. Experience-driven recalibration of learning from surprising events. Cognition 2023; 232:105343. [PMID: 36481590 PMCID: PMC9851993 DOI: 10.1016/j.cognition.2022.105343] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2022] [Revised: 10/13/2022] [Accepted: 11/21/2022] [Indexed: 12/12/2022]
Abstract
Different environments favor different patterns of adaptive learning. A surprising event that in one context would accelerate belief updating might, in another context, be downweighted as a meaningless outlier. Here, we investigated whether people would spontaneously regulate the influence of surprise on learning in response to event-by-event experiential feedback. Across two experiments, we examined whether participants performing a perceptual judgment task under spatial uncertainty (n = 29, n = 63) adapted their patterns of predictive gaze according to the informativeness or uninformativeness of surprising events in their current environment. Uninstructed predictive eye movements exhibited a form of metalearning in which surprise came to modulate event-by-event learning rates in opposite directions across contexts. Participants later appropriately readjusted their patterns of adaptive learning when the statistics of the environment underwent an unsignaled reversal. Although significant adjustments occurred in both directions, performance was consistently superior in environments in which surprising events reflected meaningful change, potentially reflecting a bias towards interpreting surprise as informative and/or difficulty ignoring salient outliers. Our results provide evidence for spontaneous, context-appropriate recalibration of the role of surprise in adaptive learning.
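The context-dependent modulation described above can be sketched with a simple delta rule whose effective learning rate scales with surprise in opposite directions across environments. This is an illustrative toy model, not the authors' analysis; surprise_gain is an invented parameter.

```python
import numpy as np

def adaptive_update(belief, outcome, base_lr, surprise_gain):
    """One delta-rule update with a surprise-modulated learning rate.

    surprise_gain > 0: surprise signals meaningful change (changepoint
                       context), so large errors accelerate updating.
    surprise_gain < 0: surprise marks outliers (oddball context),
                       so large errors are downweighted.
    """
    error = outcome - belief
    surprise = abs(error)
    lr = np.clip(base_lr * (1.0 + surprise_gain * surprise), 0.0, 1.0)
    return belief + lr * error

belief = 0.0
for outcome in [0.1, -0.2, 3.0, 0.0]:   # 3.0 is the surprising event
    belief = adaptive_update(belief, outcome, base_lr=0.3, surprise_gain=0.5)
```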
Collapse
Affiliation(s)
- Leah Bakst
- Department of Psychological & Brain Sciences, Boston University, 64 Cummington Mall, Boston, MA 02215, USA; Center for Systems Neuroscience, Boston University, 610 Commonwealth Avenue, Boston, MA 02215, USA.
| | - Joseph T McGuire
- Department of Psychological & Brain Sciences, Boston University, 64 Cummington Mall, Boston, MA 02215, USA; Center for Systems Neuroscience, Boston University, 610 Commonwealth Avenue, Boston, MA 02215, USA.
| |
Collapse
|
25
|
Abstract
Humans differ in the amount of time they direct their gaze toward different types of stimuli. Individuals' preferences are known to be reliable and can predict various cognitive and affective processes. However, it remains unclear whether humans are aware of their visual gaze preferences and are able to report them. In this study, across three different tasks and without prior warning, participants were asked to estimate the amount of time they had looked at certain types of visual content (e.g., faces or text) at the end of each experiment. The findings show that people can accurately report their visual gaze preferences. The implications are discussed in the context of visual perception, metacognition, and the development of applied diagnostic tools based on eye tracking.
Collapse
Affiliation(s)
- Nitzan Guy
- Cognitive and Brain Sciences Department, Hebrew University of Jerusalem, Mount Scopus, Jerusalem, Israel; Psychology Department, Hebrew University of Jerusalem, Jerusalem, Israel
| | - Rasha Kardosh
- Psychology Department, New York University, New York, NY, USA
| | - Asael Y. Sklar
- Edmond & Lily Safra Center for Brain Sciences, Hebrew University of Jerusalem, Jerusalem, Israel; Arison School of Business, Reichman University, Herzliya, Israel
| | | | - Yoni Pertzov
- Psychology Department, Hebrew University of Jerusalem, Jerusalem, Israel
| |
Collapse
|
26
|
D’Amelio A, Patania S, Bursic S, Cuculo V, Boccignone G. Using Gaze for Behavioural Biometrics. SENSORS (BASEL, SWITZERLAND) 2023; 23:1262. [PMID: 36772302 PMCID: PMC9920149 DOI: 10.3390/s23031262] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/22/2022] [Revised: 01/15/2023] [Accepted: 01/20/2023] [Indexed: 06/18/2023]
Abstract
A principled approach to the analysis of eye movements for behavioural biometrics is laid down. The approach grounds in foraging theory, which provides a sound basis to capture the uniqueness of individual eye movement behaviour. We propose a composite Ornstein-Uhlenbeck process for quantifying the exploration/exploitation signature characterising the foraging eye behaviour. The relevant parameters of the composite model, inferred from eye-tracking data via Bayesian analysis, are shown to yield a suitable feature set for biometric identification; the latter is eventually accomplished via a classical classification technique. A proof of concept of the method is provided by measuring its identification performance on a publicly available dataset. Data and code for reproducing the analyses are made available. Overall, we argue that the approach offers a fresh view of both the analysis of eye-tracking data and prospective applications in this field.
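A minimal simulation of a single Ornstein-Uhlenbeck gaze process (Euler-Maruyama discretization) may clarify the exploration/exploitation signature the authors exploit; the parameterization below is generic and is not the composite model of the paper.

```python
import numpy as np

def simulate_ou_gaze(theta, mu, sigma, dt=0.001, n_steps=2000, seed=0):
    """Euler-Maruyama simulation of a 2-D Ornstein-Uhlenbeck process:
    dx = theta * (mu - x) * dt + sigma * sqrt(dt) * noise.

    theta: mean-reversion rate (strength of pull toward the attractor)
    mu:    attractor location, e.g. a fixated target, shape (2,)
    sigma: diffusion scale (magnitude of exploratory jitter)
    """
    rng = np.random.default_rng(seed)
    x = np.zeros((n_steps, 2))
    x[0] = mu
    for t in range(1, n_steps):
        drift = theta * (mu - x[t - 1]) * dt
        diffusion = sigma * np.sqrt(dt) * rng.standard_normal(2)
        x[t] = x[t - 1] + drift + diffusion
    return x

# Fast reversion with low diffusion mimics exploitative fixation;
# slow reversion with high diffusion mimics exploration.
trace = simulate_ou_gaze(theta=15.0, mu=np.array([0.0, 0.0]), sigma=0.8)
```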
Collapse
Affiliation(s)
- Alessandro D’Amelio
- PHuSe Lab, Department of Computer Science, University of Milano Statale, Via Celoria 18, 20133 Milan, Italy
| | - Sabrina Patania
- PHuSe Lab, Department of Computer Science, University of Milano Statale, Via Celoria 18, 20133 Milan, Italy
| | - Sathya Bursic
- PHuSe Lab, Department of Computer Science, University of Milano Statale, Via Celoria 18, 20133 Milan, Italy
- Department of Psychology, University of Milano-Bicocca, Piazza dell’Ateneo Nuovo 1, 20126 Milan, Italy
| | - Vittorio Cuculo
- PHuSe Lab, Department of Computer Science, University of Milano Statale, Via Celoria 18, 20133 Milan, Italy
| | - Giuseppe Boccignone
- PHuSe Lab, Department of Computer Science, University of Milano Statale, Via Celoria 18, 20133 Milan, Italy
| |
Collapse
|
27
|
Epperlein T, Kovacs G, Oña LS, Amici F, Bräuer J. Context and prediction matter for the interpretation of social interactions across species. PLoS One 2022; 17:e0277783. [PMID: 36477294 PMCID: PMC9728876 DOI: 10.1371/journal.pone.0277783] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2022] [Accepted: 11/02/2022] [Indexed: 12/12/2022] Open
Abstract
Predictions about others' future actions are crucial during social interactions, in order to react optimally. Another way to assess such interactions is to define the social context of the situations explicitly and categorize them according to their affective content. Here we investigate how humans assess aggressive, playful and neutral interactions between members of three species: human children, dogs and macaques. We presented human participants with short video clips of real-life interactions of dyads of the three species and asked them either to categorize the context of the situation or to predict the outcome of the observed interaction. Participants performed above chance level in assessing social situations in humans, in dogs and in monkeys. How accurately participants predicted and categorized the situations depended both on the species and on the context. Contrary to our hypothesis, participants were not better at assessing aggressive situations than playful or neutral situations. Importantly, participants performed particularly poorly when assessing aggressive behaviour for dogs. Also, participants were not better at assessing social interactions of humans compared to those of other species. We discuss what mechanism humans use to assess social situations and to what extent this skill can also be found in other social species.
Collapse
Affiliation(s)
- Theresa Epperlein
- DogStudies, Max Planck Institute for Geoanthropology, Jena, Germany
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University of Jena, Jena, Germany
| | - Gyula Kovacs
- Department of Biological Psychology and Cognitive Neuroscience, Friedrich Schiller University of Jena, Jena, Germany
| | - Linda S. Oña
- Max Planck Research Group Naturalistic Social Cognition, Max Planck Institute for Human Development, Berlin, Germany
| | - Federica Amici
- Department of Comparative Cultural Psychology, Max-Planck Institute for Evolutionary Anthropology, Leipzig, Germany
- Behavioral Ecology Research Group, Institute of Biology, Faculty of Life Science, University of Leipzig, Leipzig, Germany
| | - Juliane Bräuer
- DogStudies, Max Planck Institute for Geoanthropology, Jena, Germany
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University of Jena, Jena, Germany
| |
Collapse
|
28
|
Rehrig G, Barker M, Peacock CE, Hayes TR, Henderson JM, Ferreira F. Look at what I can do: Object affordances guide visual attention while speakers describe potential actions. Atten Percept Psychophys 2022; 84:1583-1610. [PMID: 35484443 PMCID: PMC9246959 DOI: 10.3758/s13414-022-02467-6] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/23/2022] [Indexed: 11/08/2022]
Abstract
As we act on the world around us, our eyes seek out objects we plan to interact with. A growing body of evidence suggests that overt visual attention selects objects in the environment that could be interacted with, even when the task precludes physical interaction. In previous work, objects that afford grasping interactions influenced attention when static scenes depicted reachable spaces, and attention was otherwise better explained by general informativeness. Because grasping is but one of many object interactions, previous work may have downplayed the influence of object affordances on attention. The current study investigated the relationship between overt visual attention and object affordances versus broadly construed semantic information in scenes as speakers describe or memorize scenes. In addition to meaning and grasp maps (which capture informativeness and grasping object affordances in scenes, respectively), we introduce interact maps, which capture affordances more broadly. In a mixed-effects analysis of five eye-tracking experiments, we found that meaning predicted fixated locations in a general description task and during scene memorization. Grasp maps marginally predicted fixated locations during action description for scenes that depicted reachable spaces only. Interact maps predicted fixated regions in description experiments alone. Our findings suggest observers allocate attention to scene regions that could be readily interacted with when talking about the scene, while general informativeness preferentially guides attention when the task does not encourage careful consideration of objects in the scene. The current study suggests that the influence of object affordances on visual attention in scenes is mediated by task demands.
Collapse
Affiliation(s)
- Gwendolyn Rehrig
- Department of Psychology, University of California, Davis, Davis, CA, 95616, USA.
| | - Madison Barker
- Department of Psychology, University of California, Davis, Davis, CA, 95616, USA
| | - Candace E Peacock
- Department of Psychology and Center for Mind and Brain, University of California, Davis, Davis, CA, USA
| | - Taylor R Hayes
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
| | - John M Henderson
- Department of Psychology and Center for Mind and Brain, University of California, Davis, Davis, CA, USA
| | - Fernanda Ferreira
- Department of Psychology, University of California, Davis, Davis, CA, 95616, USA
| |
Collapse
|
29
|
Morris BJ, Todaro R, Arner T, Roche JM. How Does the Accuracy of Children’s Number Representations Influence the Accuracy of Their Numerical Predictions? Front Psychol 2022; 13:874230. [PMID: 35783810 PMCID: PMC9241830 DOI: 10.3389/fpsyg.2022.874230] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2022] [Accepted: 05/16/2022] [Indexed: 11/24/2022] Open
Abstract
Predictions begin with an extrapolation of the properties of their underlying representations to forecast a future state not presently in evidence. For numerical predictions, sets of numbers are summarized and the result forms the basis of and constrains numerical predictions. One open question is how the accuracy of underlying representations influences predictions, particularly numerical predictions. It is possible that inaccuracies in individual number representations are randomly distributed and averaged over during summarization (e.g., wisdom of crowds). It is also possible that inaccuracies are not random and lead to errors in predictions. We investigated this question by measuring the accuracy of individual number representations of 279 children ages 8–12 years, using a 0–1,000 number line, and numerical predictions, measured using a home run derby task. Consistent with prior research, our results from mixed random effects models evaluating percent absolute error (PAE; prediction error) demonstrated that third graders’ representations of individual numbers were less accurate, characterized by overestimation errors, and were associated with overpredictions (i.e., predictions above the set mean). Older children had more accurate individual number representations and a slight tendency to underpredict (i.e., predictions below the set mean). The results suggest that large, systematic inaccuracies appear to skew predictions while small, random errors appear to be averaged over during summarization. These findings add to our understanding of summarization and its role in numerical predictions.
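Assuming the standard definition of percent absolute error on a bounded number line (the paper's measure may differ in detail), a minimal sketch:

```python
def percent_absolute_error(estimate, target, scale_max=1000):
    """Percent absolute error (PAE) for a 0-1,000 number-line trial:
    the absolute distance between the marked estimate and the target,
    as a percentage of the scale length."""
    return abs(estimate - target) / scale_max * 100

# A child who places 350 at the position of 500 on a 0-1,000 line:
print(percent_absolute_error(estimate=500, target=350))  # 15.0
```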
Collapse
Affiliation(s)
- Bradley J. Morris
- Department of Educational Psychology, Kent State University, Kent, OH, United States
- *Correspondence: Bradley J. Morris
| | - Rachael Todaro
- Department of Psychology, Temple University, Ambler, PA, United States
| | - Tracy Arner
- Department of Psychology, Arizona State University, Tempe, AZ, United States
| | - Jennifer M. Roche
- Department of Educational Psychology, Kent State University, Kent, OH, United States
- Department of Speech Pathology and Audiology, Kent State University, Kent, OH, United States
| |
Collapse
|
30
|
Hutson JP, Chandran P, Magliano JP, Smith TJ, Loschky LC. Narrative Comprehension Guides Eye Movements in the Absence of Motion. Cogn Sci 2022; 46:e13131. [PMID: 35579883 DOI: 10.1111/cogs.13131] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2020] [Revised: 02/17/2022] [Accepted: 02/19/2022] [Indexed: 11/30/2022]
Abstract
Viewers' attentional selection while looking at scenes is affected by both top-down and bottom-up factors. However, when watching film, viewers typically attend to the movie similarly irrespective of top-down factors, a phenomenon we call the tyranny of film. A key difference between still pictures and film is that film contains motion, which is a strong attractor of attention and highly predictive of gaze during film viewing. The goal of the present study was to test if the tyranny of film is driven by motion. To do this, we created a slideshow presentation of the opening scene of Touch of Evil. Context condition participants watched the full slideshow. No-context condition participants did not see the opening portion of the scene, which showed someone placing a time bomb into the trunk of a car. In prior research, we showed that despite producing very different understandings of the clip, this manipulation did not affect viewers' attention (i.e., the tyranny of film), as both context and no-context participants were equally likely to fixate on the car with the bomb when the scene was presented as a film. The current study found that when the scene was shown as a slideshow, the context manipulation produced differences in attentional selection (i.e., it attenuated attentional synchrony). We discuss these results in the context of the Scene Perception and Event Comprehension Theory, which specifies the relationship between event comprehension and attentional selection in the context of visual narratives.
Collapse
Affiliation(s)
- John P Hutson
- Department of Learning Sciences, Georgia State University
| | | | | | - Tim J Smith
- Department of Psychological Sciences, Birkbeck, University of London
| | | |
Collapse
|
31
|
Harris DJ, Arthur T, Broadbent DP, Wilson MR, Vine SJ, Runswick OR. An Active Inference Account of Skilled Anticipation in Sport: Using Computational Models to Formalise Theory and Generate New Hypotheses. Sports Med 2022; 52:2023-2038. [PMID: 35503403 PMCID: PMC9388417 DOI: 10.1007/s40279-022-01689-w] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/06/2022] [Indexed: 11/30/2022]
Abstract
Optimal performance in time-constrained and dynamically changing environments depends on making reliable predictions about future outcomes. In sporting tasks, performers have been found to employ multiple information sources to maximise the accuracy of their predictions, but questions remain about how different information sources are weighted and integrated to guide anticipation. In this paper, we outline how predictive processing approaches, and active inference in particular, provide a unifying account of perception and action that explains many of the prominent findings in the sports anticipation literature. Active inference proposes that perception and action are underpinned by the organism’s need to remain within certain stable states. To this end, decision making approximates Bayesian inference and actions are used to minimise future prediction errors during brain–body–environment interactions. Using a series of Bayesian neurocomputational models based on a partially observable Markov process, we demonstrate that key findings from the literature can be recreated from the first principles of active inference. In doing so, we formulate a number of novel and empirically falsifiable hypotheses about human anticipation capabilities that could guide future investigations in the field.
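The core computation in such models, a discrete Bayesian belief update followed by a one-step prediction, can be sketched as follows. The two-state example and all probabilities are invented, and this is far simpler than the partially observable Markov models in the paper.

```python
import numpy as np

def bayes_filter_step(belief, likelihood, transition):
    """One discrete-state belief update plus prediction.

    belief:     prior over hidden states, shape (S,)
    likelihood: P(observation | state) for the current cue, shape (S,)
    transition: P(next state | current state), rows = current, shape (S, S)
    """
    posterior = belief * likelihood
    posterior /= posterior.sum()          # Bayes rule
    return transition.T @ posterior       # predict the next state

# Two hidden action intentions (e.g., serve left vs. serve right).
belief = np.array([0.5, 0.5])
transition = np.array([[0.9, 0.1], [0.1, 0.9]])   # intentions are sticky
kinematic_cue = np.array([0.7, 0.3])              # cue favors state 0
belief = bayes_filter_step(belief, kinematic_cue, transition)
```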
Collapse
Affiliation(s)
- David J Harris
- School of Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, St Luke's Campus, Exeter, EX1 2LU, UK.
| | - Tom Arthur
- School of Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, St Luke's Campus, Exeter, EX1 2LU, UK
| | - David P Broadbent
- Division of Sport, Health and Exercise Sciences, Department of Life Sciences, Brunel University London, London, UK
| | - Mark R Wilson
- School of Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, St Luke's Campus, Exeter, EX1 2LU, UK
| | - Samuel J Vine
- School of Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, St Luke's Campus, Exeter, EX1 2LU, UK
| | - Oliver R Runswick
- Department of Psychology, Institute of Psychiatry, Psychology, and Neuroscience, King's College London, London, UK
| |
Collapse
|
32
|
Testing the underlying processes leading to learned distractor rejection: Learned oculomotor avoidance. Atten Percept Psychophys 2022; 84:1964-1981. [PMID: 35386017 DOI: 10.3758/s13414-022-02483-6] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/23/2022] [Indexed: 11/08/2022]
Abstract
Target templates stored in visual memory guide visual attention toward behaviorally relevant target objects. Visual attention also is guided away from nontarget distractors by longer-term learning, a phenomenon termed "learned distractor rejection." Template guidance and learned distractor rejection can occur simultaneously to further increase search efficiency. However, the underlying processes guiding learned distractor rejection are unknown. In two visual search experiments employing eye-tracking, we tested between two plausible processes: proactive versus reactive attentional control. Participants searched through two-color, spatially unsegregated displays. Participants could guide attention by both target templates and consistent nontarget distractors. We observed fewer distractor fixations (including the first eye movement) and shorter distractor dwell times. The data supported a single mechanism of learned distractor rejection, whereby observers adopted a learned, proactive attentional control setting to avoid distraction whenever possible. Further, when distraction occurred, observers rapidly recovered. We term this proactive mechanism "learned oculomotor avoidance." The current study informs theories of visual attention by demonstrating the underlying processes leading to learned distractor suppression during strong target guidance.
Collapse
|
33
|
Smith ES, Crawford TJ. Positive and Negative Symptoms Are Associated with Distinct Effects on Predictive Saccades. Brain Sci 2022; 12:brainsci12040418. [PMID: 35447950 PMCID: PMC9025332 DOI: 10.3390/brainsci12040418] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2022] [Revised: 03/10/2022] [Accepted: 03/19/2022] [Indexed: 02/05/2023] Open
Abstract
The predictive saccade task is a motor learning paradigm requiring saccades to track a visual target moving in a predictable pattern. Previous research has explored extensively anti-saccade deficits observed across psychosis, but less is known about predictive saccade-related mechanisms. The dataset analysed came from the studies of Crawford et al., published in 1995, where neuroleptically medicated schizophrenia and bipolar affective disorder patients were compared with non-medicated patients and control participants using a predictive saccade paradigm. The participant groups consisted of medicated schizophrenia patients (n = 40), non-medicated schizophrenia patients (n = 18), medicated bipolar disorder patients (n = 14), non-medicated bipolar disorder patients (n = 18), and controls (n = 31). The current analyses explore relationships between predictive saccades and symptomatology, and the potential interaction of medication. Analyses revealed that the schizophrenia and bipolar disorder diagnostic categories are indistinguishable in patterns of predictive control across several saccadic parameters, supporting a dimensional hypothesis. Once collapsed into predominantly high-/low- negative/positive symptoms, regardless of diagnosis, differences were revealed, with significant hypometria and lower gain in those with more negative symptoms. This illustrates how the presentation of the deficits is homogeneous across diagnosis, but heterogeneous when surveyed by symptomatology, attesting that a diagnostic label is less informative than symptomatology when exploring predictive saccades.
Collapse
Affiliation(s)
- Eleanor S. Smith
- Department of Psychology, University of Cambridge, Cambridge CB2 3EB, UK;
| | - Trevor J. Crawford
- Centre for Ageing Research, Department of Psychology, Lancaster University, Lancaster LA1 4YF, UK
| |
Collapse
|
34
|
Rann JC, Almor A. Effects of verbal tasks on driving simulator performance. Cogn Res Princ Implic 2022; 7:12. [PMID: 35119569 PMCID: PMC8817015 DOI: 10.1186/s41235-022-00357-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2021] [Accepted: 01/08/2022] [Indexed: 11/10/2022] Open
Abstract
We report results from a driving simulator paradigm we developed to test the fine temporal effects of verbal tasks on simultaneous tracking performance. A total of 74 undergraduate students participated in two experiments in which they controlled a cursor using the steering wheel to track a moving target and where the dependent measure was overall deviation from target. Experiment 1 tested tracking performance during slow and fast target speeds under conditions involving either no verbal input or output, passive listening to spoken prompts via headphones, or responding to spoken prompts. Experiment 2 was similar except that participants read written prompts overlain on the simulator screen instead of listening to spoken prompts. Performance in both experiments was worse during fast speeds and worst overall during responding conditions. Most significantly, fine-scale time-course analysis revealed deteriorating tracking performance as participants prepared and began speaking and steadily improving performance while speaking. Additionally, post-block survey data revealed that conversation recall was best in responding conditions, and perceived difficulty increased with task complexity. Our study is the first to track temporal changes in interference at high resolution during the first hundreds of milliseconds of verbal production and comprehension. Our results are consistent with load-based theories of multitasking performance and show that language production, and, to a lesser extent, language comprehension tap resources also used for tracking. More generally, our paradigm provides a useful tool for measuring dynamical changes in tracking performance during verbal tasks due to the rapidly changing resource requirements of language production and comprehension.
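A minimal sketch of the dependent measure, assuming it is the mean Euclidean deviation of the cursor from the moving target (the exact formula used in the paper may differ):

```python
import numpy as np

def mean_tracking_deviation(cursor_xy, target_xy):
    """Mean Euclidean distance between cursor and target positions,
    both given as (T, 2) arrays of samples over time."""
    return float(np.mean(np.linalg.norm(cursor_xy - target_xy, axis=1)))

# Invented data: a sinusoidal target and a noisy cursor trace.
t = np.linspace(0, 2 * np.pi, 500)
target = np.column_stack([np.sin(t), np.zeros_like(t)])
cursor = target + np.random.default_rng(5).normal(0, 0.05, target.shape)
print(mean_tracking_deviation(cursor, target))
```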
Collapse
Affiliation(s)
- Jonathan C Rann
- Department of Psychology, University of South Carolina, 1512 Pendelton Street, Columbia, SC, 29208, USA; Institute for Mind and Brain, University of South Carolina, Columbia, SC, 29208, USA
| | - Amit Almor
- Department of Psychology, University of South Carolina, 1512 Pendelton Street, Columbia, SC, 29208, USA; Institute for Mind and Brain, University of South Carolina, Columbia, SC, 29208, USA; Linguistics Program, University of South Carolina, Columbia, SC, 29208, USA
| |
Collapse
|
35
|
Ferreira F, Qiu Z. Predicting syntactic structure. Brain Res 2021; 1770:147632. [PMID: 34453937 DOI: 10.1016/j.brainres.2021.147632] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2021] [Revised: 08/20/2021] [Accepted: 08/21/2021] [Indexed: 11/19/2022]
Abstract
Prediction in language processing has been a topic of major interest in psycholinguistics for at least the last two decades, but most investigations focus on semantic rather than syntactic prediction. This review begins with a discussion of some influential models of parsing which assume that comprehenders have the ability to anticipate syntactic nodes, beginning with left-corner parsers and the garden-path model and ending with current information-theoretic approaches that emphasize online probabilistic prediction. We then turn to evidence for the prediction of specific syntactic forms, including coordinate clauses and noun phrases, verb arguments, and individual nouns, as well as studies that use morphosyntactic constraints to assess whether a specific semantic prediction has been made. The last section considers the implications of syntactic prediction for theories of language architecture and describes four avenues for future research.
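Information-theoretic accounts of syntactic prediction typically quantify processing cost as surprisal, -log2 P(input | context). A toy sketch with invented probabilities:

```python
import math

def surprisal(prob):
    """Surprisal in bits: the information-theoretic cost of an input
    under the comprehender's predictive distribution."""
    return -math.log2(prob)

# Hypothetical predictive distribution over the next syntactic
# category after "The editor ..."; values are illustrative only.
p_next = {"verb": 0.7, "relative_clause": 0.2, "coordination": 0.1}
print(surprisal(p_next["verb"]))             # ~0.51 bits, easy
print(surprisal(p_next["relative_clause"]))  # ~2.32 bits, harder
```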
Collapse
Affiliation(s)
| | - Zhuang Qiu
- University of California, Davis, United States.
| |
Collapse
|
36
|
Cruz TL, Pérez SM, Chiappe ME. Fast tuning of posture control by visual feedback underlies gaze stabilization in walking Drosophila. Curr Biol 2021; 31:4596-4607.e5. [PMID: 34499851 PMCID: PMC8556163 DOI: 10.1016/j.cub.2021.08.041] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2021] [Revised: 07/01/2021] [Accepted: 08/13/2021] [Indexed: 02/08/2023]
Abstract
Locomotion requires a balance between mechanical stability and movement flexibility to achieve behavioral goals despite noisy neuromuscular systems, but rarely is it considered how this balance is orchestrated. We combined virtual reality tools with quantitative analysis of behavior to examine how Drosophila uses self-generated visual information (reafferent visual feedback) to control gaze during exploratory walking. We found that flies execute distinct motor programs coordinated across the body to maximize gaze stability. However, the presence of inherent variability in leg placement relative to the body jeopardizes fine control of gaze due to posture-stabilizing adjustments that lead to unintended changes in course direction. Surprisingly, whereas visual feedback is dispensable for head-body coordination, we found that self-generated visual signals tune postural reflexes to rapidly prevent turns rather than to promote compensatory rotations, a long-standing idea for visually guided course control. Together, these findings support a model in which visual feedback orchestrates the interplay between posture and gaze stability in a manner that is both goal dependent and motor-context specific.
Collapse
Affiliation(s)
- Tomás L Cruz
- Champalimaud Research, Champalimaud Centre for the Unknown, 1400-038 Lisbon, Portugal
| | | | - M Eugenia Chiappe
- Champalimaud Research, Champalimaud Centre for the Unknown, 1400-038 Lisbon, Portugal.
| |
Collapse
|
37
|
Arthur T, Harris DJ. Predictive eye movements are adjusted in a Bayes-optimal fashion in response to unexpectedly changing environmental probabilities. Cortex 2021; 145:212-225. [PMID: 34749190 DOI: 10.1016/j.cortex.2021.09.017] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2021] [Revised: 08/18/2021] [Accepted: 09/27/2021] [Indexed: 11/30/2022]
Abstract
This study examined the application of active inference to dynamic visuomotor control. Active inference proposes that actions are dynamically planned according to uncertainty about sensory information, prior expectations, and the environment, with motor adjustments serving to minimise future prediction errors. We investigated whether predictive gaze behaviours are indeed adjusted in this Bayes-optimal fashion during a virtual racquetball task. In this task, participants intercepted bouncing balls with varying levels of elasticity, under conditions of higher or lower environmental volatility. Participants' gaze patterns differed between stable and volatile conditions in a manner consistent with generative models of Bayes-optimal behaviour. Partially observable Markov models also revealed an increased rate of associative learning in response to unpredictable shifts in environmental probabilities, although there was no overall effect of volatility on this parameter. Findings extend active inference frameworks into complex and unconstrained visuomotor tasks and present important implications for a neurocomputational understanding of the visual guidance of action.
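A toy sketch of the volatility logic: a Rescorla-Wagner estimator of the probability of a high-bounce ball, where a hand-set learning rate stands in for the rate a Bayes-optimal learner would infer. This is not the authors' model; all values are invented.

```python
import numpy as np

def update_bounce_estimate(p_hat, observed_high, learning_rate):
    """Rescorla-Wagner-style estimate of P(high-bounce ball).
    Higher learning rates are Bayes-optimal when the environment is
    volatile; lower rates are optimal when it is stable."""
    return p_hat + learning_rate * (float(observed_high) - p_hat)

rng = np.random.default_rng(2)
p_hat = 0.5
for trial in range(50):
    true_p = 0.8 if trial < 25 else 0.2      # unsignaled reversal
    obs = rng.random() < true_p
    p_hat = update_bounce_estimate(p_hat, obs, learning_rate=0.2)
```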
Collapse
Affiliation(s)
- Tom Arthur
- School of Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, Exeter, EX1 2LU, UK; Centre for Applied Autism Research, Department of Psychology, University of Bath, Bath, BA2 7AY, UK
| | - David J Harris
- School of Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, Exeter, EX1 2LU, UK.
| |
Collapse
|
38
|
Peacock CE, Cronin DA, Hayes TR, Henderson JM. Meaning and expected surfaces combine to guide attention during visual search in scenes. J Vis 2021; 21:1. [PMID: 34609475 PMCID: PMC8496418 DOI: 10.1167/jov.21.11.1] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2021] [Accepted: 09/02/2021] [Indexed: 11/24/2022] Open
Abstract
How do spatial constraints and meaningful scene regions interact to control overt attention during visual search for objects in real-world scenes? To answer this question, we combined novel surface maps of the likely locations of target objects with maps of the spatial distribution of scene semantic content. The surface maps captured likely target surfaces as continuous probabilities. Meaning was represented by meaning maps highlighting the distribution of semantic content in local scene regions. Attention was indexed by eye movements during the search for target objects that varied in the likelihood they would appear on specific surfaces. The interaction between surface maps and meaning maps was analyzed to test whether fixations were directed to meaningful scene regions on target-related surfaces. Overall, meaningful scene regions were more likely to be fixated if they appeared on target-related surfaces than if they appeared on target-unrelated surfaces. These findings suggest that the visual system prioritizes meaningful scene regions on target-related surfaces during visual search in scenes.
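One simple way to realize the surface-by-meaning interaction is a pointwise combination of the two maps; a hedged sketch under that assumption, not the authors' analysis pipeline:

```python
import numpy as np

def combined_priority(meaning_map, surface_map):
    """Pointwise combination of semantic informativeness (meaning map)
    with the probability that the target occurs on each surface
    (surface map), normalized to a priority distribution."""
    priority = meaning_map * surface_map
    return priority / priority.sum()

rng = np.random.default_rng(6)
meaning = rng.random((30, 40))
surface = rng.random((30, 40))   # e.g., high on countertops for "mug"
priority = combined_priority(meaning, surface)
```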
Collapse
Affiliation(s)
- Candace E Peacock
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- Department of Psychology, University of California, Davis, Davis, CA, USA
| | - Deborah A Cronin
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
| | - Taylor R Hayes
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
| | - John M Henderson
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- Department of Psychology, University of California, Davis, Davis, CA, USA
| |
Collapse
|
39
|
Henderson JM, Hayes TR, Peacock CE, Rehrig G. Meaning maps capture the density of local semantic features in scenes: A reply to Pedziwiatr, Kümmerer, Wallis, Bethge & Teufel (2021). Cognition 2021; 214:104742. [PMID: 33892912 PMCID: PMC11166323 DOI: 10.1016/j.cognition.2021.104742] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2021] [Revised: 04/13/2021] [Accepted: 04/15/2021] [Indexed: 11/17/2022]
Abstract
Pedziwiatr, Kümmerer, Wallis, Bethge, & Teufel (2021) contend that Meaning Maps do not represent the spatial distribution of semantic features in scenes. We argue that Pedziwiatr et al. provide neither logical nor empirical support for that claim, and we conclude that Meaning Maps do what they were designed to do: represent the spatial distribution of meaning in scenes.
Collapse
Affiliation(s)
- John M Henderson
- Center for Mind and Brain, University of California, Davis, USA; Department of Psychology, University of California, Davis, USA.
| | - Taylor R Hayes
- Center for Mind and Brain, University of California, Davis, USA
| | - Candace E Peacock
- Center for Mind and Brain, University of California, Davis, USA; Department of Psychology, University of California, Davis, USA
| | | |
Collapse
|
40
|
Pomaranski KI, Hayes TR, Kwon MK, Henderson JM, Oakes LM. Developmental changes in natural scene viewing in infancy. Dev Psychol 2021; 57:1025-1041. [PMID: 34435820 PMCID: PMC8406411 DOI: 10.1037/dev0001020] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
We extend decades of research on infants' visual processing by examining their eye gaze during viewing of natural scenes. We examined the eye movements of a racially diverse group of 4- to 12-month-old infants (N = 54; 27 boys; 24 infants were White and not Hispanic, 30 infants were African American, Asian American, mixed race and/or Hispanic) as they viewed images selected from the MIT Saliency Benchmark Project. In general, across this age range infants' fixation distributions became more consistent and more adult-like, suggesting that infants' fixations in natural scenes become increasingly more systematic. Evaluation of infants' fixation patterns with saliency maps generated by different models of physical salience revealed that although over this age range there was an increase in the correlations between infants' fixations and saliency, the amount of variance accounted for by salience actually decreased. At the youngest age, the amount of variance accounted for by salience was very similar to the consistency between infants' fixations, suggesting that the systematicity in these youngest infants' fixations was explained by their attention to physically salient regions. By 12 months, in contrast, the consistency between infants was greater than the variance accounted for by salience, suggesting that the systematicity in older infants' fixations reflected more than their attention to physically salient regions. Together these results show that infants' fixations when viewing natural scenes become more systematic and predictable, and that this predictability is due to their attention to features other than physical salience.
Collapse
|
41
|
Goettker A, Pidaparthy H, Braun DI, Elder JH, Gegenfurtner KR. Ice hockey spectators use contextual cues to guide predictive eye movements. Curr Biol 2021; 31:R991-R992. [PMID: 34428418 DOI: 10.1016/j.cub.2021.06.087] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
Eye movements are an integral part of human visual perception. They allow us to have a small foveal region with exquisite acuity and at the same time a large visual field. For a long time, eye movements were regarded as machine-like behaviors in response to visual stimulation1, but over the past few decades it has been convincingly shown that expectations, intended actions, rewards and many other cognitive factors can have profound effects on the way we move our eyes2-4. In order to be useful, our oculomotor system must minimize delay with respect to the dynamic events in the visual scene. The ability to do so has been demonstrated in situations where we are in control of these events, for example when we are making a sandwich or tea5, and when we are active participants, for example when hitting a cricket ball6. But what about scenes with complex dynamics that we do not control or directly take part in, like a hockey game we are watching as a spectator? A semantic influence on gaze fixation location during viewing of tennis videos has been suggested before7. Here we use carefully annotated hockey videos to show that the brain is indeed able to exploit the semantic context of the game to anticipate the continuous motion of the puck, leading to eye movements that are fundamentally different than when following exactly the same motion without any context.
Collapse
Affiliation(s)
- Alexander Goettker
- Justus Liebig University Giessen, Giessen, Germany; Center for Mind, Brain and Behavior (CMBB), Justus Liebig University Giessen, Giessen, Germany.
| | | | - Doris I Braun
- Justus Liebig University Giessen, Giessen, Germany; Center for Mind, Brain and Behavior (CMBB), Justus Liebig University Giessen, Giessen, Germany
| | | | - Karl R Gegenfurtner
- Justus Liebig University Giessen, Giessen, Germany; Center for Mind, Brain and Behavior (CMBB), Justus Liebig University Giessen, Giessen, Germany
| |
Collapse
|
42
|
Huber-Huber C, Buonocore A, Melcher D. The extrafoveal preview paradigm as a measure of predictive, active sampling in visual perception. J Vis 2021; 21:12. [PMID: 34283203 PMCID: PMC8300052 DOI: 10.1167/jov.21.7.12] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2020] [Accepted: 05/18/2021] [Indexed: 01/02/2023] Open
Abstract
A key feature of visual processing in humans is the use of saccadic eye movements to look around the environment. Saccades are typically used to bring relevant information, which is glimpsed with extrafoveal vision, into the high-resolution fovea for further processing. With the exception of some unusual circumstances, such as the first fixation when walking into a room, our saccades are mainly guided based on this extrafoveal preview. In contrast, the majority of experimental studies in vision science have investigated "passive" behavioral and neural responses to suddenly appearing and often temporally or spatially unpredictable stimuli. As reviewed here, a growing number of studies have investigated visual processing of objects under more natural viewing conditions in which observers move their eyes to a stationary stimulus, visible previously in extrafoveal vision, during each trial. These studies demonstrate that the extrafoveal preview has a profound influence on visual processing of objects, both for behavior and neural activity. Starting from the preview effect in reading research we follow subsequent developments in vision research more generally and finally argue that taking such evidence seriously leads to a reconceptualization of the nature of human visual perception that incorporates the strong influence of prediction and action on sensory processing. We review theoretical perspectives on visual perception under naturalistic viewing conditions, including theories of active vision, active sensing, and sampling. Although the extrafoveal preview paradigm has already provided useful information about the timing of, and potential mechanisms for, the close interaction of the oculomotor and visual systems while reading and in natural scenes, the findings thus far also raise many new questions for future research.
Collapse
Affiliation(s)
- Christoph Huber-Huber
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, The Netherlands
- CIMeC, University of Trento, Italy
| | - Antimo Buonocore
- Werner Reichardt Centre for Integrative Neuroscience, Tübingen University, Tübingen, BW, Germany
- Hertie Institute for Clinical Brain Research, Tübingen University, Tübingen, BW, Germany
| | - David Melcher
- CIMeC, University of Trento, Italy
- Division of Science, New York University Abu Dhabi, UAE
| |
Collapse
|
43
|
Bonner MF, Epstein RA. Object representations in the human brain reflect the co-occurrence statistics of vision and language. Nat Commun 2021; 12:4081. [PMID: 34215754 PMCID: PMC8253839 DOI: 10.1038/s41467-021-24368-2] [Citation(s) in RCA: 30] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2020] [Accepted: 06/09/2021] [Indexed: 11/17/2022] Open
Abstract
A central regularity of visual perception is the co-occurrence of objects in the natural environment. Here we use machine learning and fMRI to test the hypothesis that object co-occurrence statistics are encoded in the human visual system and elicited by the perception of individual objects. We identified low-dimensional representations that capture the latent statistical structure of object co-occurrence in real-world scenes, and we mapped these statistical representations onto voxel-wise fMRI responses during object viewing. We found that cortical responses to single objects were predicted by the statistical ensembles in which they typically occur, and that this link between objects and their visual contexts was made most strongly in parahippocampal cortex, overlapping with the anterior portion of scene-selective parahippocampal place area. In contrast, a language-based statistical model of the co-occurrence of object names in written text predicted responses in neighboring regions of object-selective visual cortex. Together, these findings show that the sensory coding of objects in the human brain reflects the latent statistics of object context in visual and linguistic experience.
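A minimal sketch of the general idea: build a symmetric object co-occurrence matrix from scenes and extract a low-dimensional embedding via eigendecomposition. The machine-learning models in the paper are far richer; vocabulary and scenes below are invented.

```python
import numpy as np

def cooccurrence_embedding(scene_object_lists, vocab, n_dims=2):
    """Low-dimensional object embedding from scene co-occurrence
    counts, in the spirit of (but much simpler than) the paper."""
    idx = {obj: i for i, obj in enumerate(vocab)}
    counts = np.zeros((len(vocab), len(vocab)))
    for scene in scene_object_lists:
        for a in scene:
            for b in scene:
                if a != b:
                    counts[idx[a], idx[b]] += 1
    # Eigendecomposition of the symmetric count matrix yields a
    # compact statistical-context representation per object.
    vals, vecs = np.linalg.eigh(counts)
    return vecs[:, -n_dims:] * np.sqrt(np.abs(vals[-n_dims:]))

vocab = ["sink", "towel", "car", "road"]
scenes = [["sink", "towel"], ["car", "road"],
          ["sink", "towel"], ["car", "road"]]
emb = cooccurrence_embedding(scenes, vocab)   # sink/towel cluster apart from car/road
```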
Collapse
Affiliation(s)
- Michael F Bonner
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA.
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA.
| | - Russell A Epstein
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
| |
Collapse
|
44
|
Levin DT, Salas JA, Wright AM, Seiffert AE, Carter KE, Little JW. The Incomplete Tyranny of Dynamic Stimuli: Gaze Similarity Predicts Response Similarity in Screen-Captured Instructional Videos. Cogn Sci 2021; 45:e12984. [PMID: 34170026 DOI: 10.1111/cogs.12984] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2020] [Revised: 01/19/2021] [Accepted: 04/16/2021] [Indexed: 11/27/2022]
Abstract
Although eye tracking has been used extensively to assess cognitions for static stimuli, recent research suggests that the link between gaze and cognition may be more tenuous for dynamic stimuli such as videos. Part of the difficulty in convincingly linking gaze with cognition is that in dynamic stimuli, gaze position is strongly influenced by exogenous cues such as object motion. However, tests of the gaze-cognition link in dynamic stimuli have been done on only a limited range of stimuli, often characterized by highly organized motion. Also, analyses of cognitive contrasts between participants have mostly been limited to categorical contrasts among small numbers of participants that may have limited the power to observe more subtle influences. We, therefore, tested for cognitive influences on gaze for screen-captured instructional videos, the contents of which participants were tested on. Between-participant scanpath similarity predicted between-participant similarity in responses on test questions, but with imperfect consistency across videos. We also observed that basic gaze parameters and measures of attention to centers of interest only inconsistently predicted learning, and that correlations between gaze and centers of interest defined by other-participant gaze and cursor movement did not predict learning. It, therefore, appears that the search for eye movement indices of cognition during dynamic naturalistic stimuli may be fruitful, but we also agree that the tyranny of dynamic stimuli is real, and that links between eye movements and cognition are highly dependent on task and stimulus properties.
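The central analysis, between-participant gaze similarity predicting between-participant response similarity, can be sketched as follows; the fixation maps, responses, and both similarity measures are illustrative assumptions rather than the authors' exact pipeline.

```python
import numpy as np
from itertools import combinations

def gaze_similarity(fix_map_a, fix_map_b):
    """Correlation between two participants' fixation density maps."""
    return np.corrcoef(fix_map_a.ravel(), fix_map_b.ravel())[0, 1]

def gaze_response_link(fix_maps, responses):
    """Correlate pairwise gaze similarity with pairwise response
    similarity (here: proportion of matching test answers)."""
    g_sims, r_sims = [], []
    for i, j in combinations(range(len(fix_maps)), 2):
        g_sims.append(gaze_similarity(fix_maps[i], fix_maps[j]))
        r_sims.append(np.mean(responses[i] == responses[j]))
    return np.corrcoef(g_sims, r_sims)[0, 1]

rng = np.random.default_rng(3)
fix_maps = [rng.random((20, 30)) for _ in range(6)]
responses = [rng.integers(0, 4, size=10) for _ in range(6)]
print(gaze_response_link(fix_maps, responses))
```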
Collapse
Affiliation(s)
- Daniel T Levin
- Department of Psychology and Human Development, Vanderbilt University
| | - Jorge A Salas
- Department of Psychology and Human Development, Vanderbilt University
| | - Anna M Wright
- Department of Psychology and Human Development, Vanderbilt University
| | | | - Kelly E Carter
- Department of Psychology and Human Development, Vanderbilt University
| | - Joshua W Little
- Department of Psychology and Human Development, Vanderbilt University
| |
Collapse
|
45
|
Zimmermann KM, Schmidt KD, Gronow F, Sommer J, Leweke F, Jansen A. Seeing things differently: Gaze shapes neural signal during mentalizing according to emotional awareness. Neuroimage 2021; 238:118223. [PMID: 34098065 DOI: 10.1016/j.neuroimage.2021.118223] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2021] [Revised: 05/26/2021] [Accepted: 05/29/2021] [Indexed: 12/19/2022] Open
Abstract
Studies on social cognition often use complex visual stimuli to assess neural processes attributed to abilities like "mentalizing" or "Theory of Mind" (ToM). During the processing of these stimuli, eye gaze, however, shapes neural signal patterns. Individual differences in neural operations on social cognition may therefore be obscured if individuals' gaze behavior differs systematically. These obstacles can be overcome by the combined analysis of neural signal and natural viewing behavior. Here, we combined functional magnetic resonance imaging (fMRI) with eye-tracking to examine effects of unconstrained gaze on neural ToM processes in healthy individuals with differing levels of emotional awareness, i.e. alexithymia. First, as previously described for emotional tasks, people with higher alexithymia levels look less at eyes in both ToM and task-free viewing contexts. Further, we find that neural ToM processes are not affected by individual differences in alexithymia per se. Instead, depending on alexithymia levels, gaze on critical stimulus aspects shapes the signal in opposite directions in medial prefrontal cortex (MPFC) and anterior temporoparietal junction (TPJ) as distinct nodes of the ToM system. These results emphasize that natural selective attention affects fMRI patterns well beyond the visual system. Our study implies that, whenever using a task with multiple degrees of freedom in scan paths, ignoring the latter might obscure important conclusions.
Collapse
Affiliation(s)
- Kristin Marie Zimmermann, Laboratory for Multimodal Neuroimaging, Department of Psychiatry and Psychotherapy, University of Marburg, Marburg, Germany; Department of Neurology and Neurorehabilitation, Hospital zum Heiligen Geist, Academic Teaching Hospital of the Heinrich-Heine-University Düsseldorf, Düsseldorf, Germany; Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen
- Kirsten Daniela Schmidt, Laboratory for Multimodal Neuroimaging, Department of Psychiatry and Psychotherapy, University of Marburg, Marburg, Germany
- Franziska Gronow, Laboratory for Multimodal Neuroimaging, Department of Psychiatry and Psychotherapy, University of Marburg, Marburg, Germany; Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen
- Jens Sommer, Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen; Core-Unit Brainimaging, Faculty of Medicine, University of Marburg, Marburg, Germany
- Frank Leweke, Clinic for Psychosomatic Medicine and Psychotherapy, Justus Liebig University Giessen, Giessen, Germany
- Andreas Jansen, Laboratory for Multimodal Neuroimaging, Department of Psychiatry and Psychotherapy, University of Marburg, Marburg, Germany; Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen; Core-Unit Brainimaging, Faculty of Medicine, University of Marburg, Marburg, Germany
46
Lyu M, Choe KW, Kardan O, Kotabe HP, Henderson JM, Berman MG. Overt attentional correlates of memorability of scene images and their relationships to scene semantics. J Vis 2021; 20:2. [PMID: 32876677 PMCID: PMC7476653 DOI: 10.1167/jov.20.9.2] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Computer vision-based research has shown that scene semantics (e.g., the presence of meaningful objects in a scene) can predict memorability of scene images. Here, we investigated whether and to what extent overt attentional correlates, such as fixation map consistency (also called inter-observer congruency of fixation maps) and fixation counts, mediate the relationship between scene semantics and scene memorability. First, we confirmed that the higher the fixation map consistency of a scene, the higher its memorability. Moreover, both fixation map consistency and its correlation with scene memorability were highest in the first 2 seconds of viewing, suggesting that meaningful scene features that contribute to producing more consistent fixation maps early in viewing, such as faces and humans, may also be important for scene encoding. Second, we found that the relationship between scene semantics and scene memorability was partially (but not fully) mediated by fixation map consistency and fixation counts, separately as well as together. Third, we found that fixation map consistency, fixation counts, and scene semantics significantly and additively contributed to scene memorability. Together, these results suggest that eye-tracking measurements can complement computer vision-based algorithms and improve overall scene memorability prediction.
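A minimal bootstrap sketch of the partial-mediation logic described above: does fixation map consistency mediate the semantics-to-memorability link? The simulated data and variable names are assumptions for illustration, not the study's data or code.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
semantics = rng.normal(size=n)                      # scene-semantics score
consistency = 0.5 * semantics + rng.normal(size=n)  # fixation map consistency
memorability = 0.3 * semantics + 0.4 * consistency + rng.normal(size=n)

def ab_path(x, m, y):
    """Indirect effect a*b: slope of x->m, times slope of m->y controlling x."""
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]
    return a * b

# Percentile bootstrap over participants/scenes for the indirect effect.
boot = [ab_path(*(arr[idx] for arr in (semantics, consistency, memorability)))
        for idx in (rng.integers(0, n, n) for _ in range(2000))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b: {ab_path(semantics, consistency, memorability):.3f}, "
      f"95% bootstrap CI [{lo:.3f}, {hi:.3f}]")  # CI excluding 0 => mediation
```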
Affiliation(s)
- Muxuan Lyu, Department of Management and Marketing, The Hong Kong Polytechnic University, Hong Kong, China
- Kyoung Whan Choe, Department of Psychology, The University of Chicago, Chicago, IL, USA; Mansueto Institute for Urban Innovation, The University of Chicago, Chicago, IL, USA
- Omid Kardan, Department of Psychology, The University of Chicago, Chicago, IL, USA
- John M Henderson, Center for Mind and Brain and Department of Psychology, University of California, Davis, Davis, CA, USA
- Marc G Berman, Department of Psychology, The University of Chicago, Chicago, IL, USA; Grossman Institute for Neuroscience, Quantitative Biology and Human Behavior, The University of Chicago, Chicago, IL, USA
47
Federico G, Ferrante D, Marcatto F, Brandimonte MA. How the fear of COVID-19 changed the way we look at human faces. PeerJ 2021; 9:e11380. [PMID: 33987036 PMCID: PMC8088764 DOI: 10.7717/peerj.11380] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2021] [Accepted: 04/09/2021] [Indexed: 12/14/2022] Open
Abstract
Do we look at persons currently or previously affected by COVID-19 the same way as we do at healthy ones? In this eye-tracking study, we investigated how participants (N = 54) looked at faces of individuals presented as "COVID-19 Free", "Sick with COVID-19", or "Recovered from COVID-19". Results showed that participants tended to look at the eyes of COVID-19-free faces longer than at those of either type of COVID-19-related face. Crucially, we also found an increase in visual attention to the mouth of the COVID-19-related faces, possibly due to the threatening characterisation of that area as a transmission vehicle for SARS-CoV-2. Thus, by detailing how people dynamically changed the way they look at faces as a function of the perceived risk of contagion, we provide the first evidence in the literature of the impact of the pandemic on this most basic level of social interaction.
48
Fuchs S, Belardinelli A. Gaze-Based Intention Estimation for Shared Autonomy in Pick-and-Place Tasks. Front Neurorobot 2021; 15:647930. [PMID: 33935675 PMCID: PMC8085393 DOI: 10.3389/fnbot.2021.647930] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2020] [Accepted: 03/12/2021] [Indexed: 12/05/2022] Open
Abstract
Shared autonomy aims at combining robotic and human control in the execution of remote, teleoperated tasks. This cooperative interaction cannot be brought about without the robot first recognizing the current human intention in a fast and reliable way, so that a suitable assisting plan can be quickly instantiated and executed. Eye movements have long been known to be highly predictive of the cognitive agenda unfolding during manual tasks, and hence constitute the earliest and most reliable behavioral cues for intention estimation. In this study, we present an experiment aimed at analyzing human behavior in simple teleoperated pick-and-place tasks in a simulated scenario, and at devising a suitable model for early estimation of the current proximal intention. We show that scan paths are, as expected, heavily shaped by the current intention, and that two types of Gaussian hidden Markov models, one more scene-specific and one more action-specific, achieve very good prediction performance while also generalizing to new users and spatial arrangements. We finally discuss how behavioral and model results suggest that eye movements reflect, to some extent, the invariance and generality of higher-level planning across object configurations, which can be leveraged by cooperative robotic systems.
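A minimal sketch of likelihood-based intention estimation with Gaussian HMMs, in the spirit of the models this abstract describes: fit one HMM per candidate intention, then classify by which model best explains the gaze stream observed so far. It uses the hmmlearn library; the two-dimensional gaze feature layout and the toy data are assumptions, not the authors' implementation.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(3)

def train_intention_model(sequences, n_states=4):
    """Fit one HMM on all gaze sequences recorded under one intention.
    Each sequence is an (n_samples, n_features) array, e.g. gaze x/y."""
    X = np.concatenate(sequences)
    lengths = [len(s) for s in sequences]
    model = GaussianHMM(n_components=n_states, covariance_type="diag",
                        n_iter=100, random_state=0)
    model.fit(X, lengths)
    return model

# Toy training data: two intentions with different gaze statistics.
pick_seqs = [rng.normal(0.0, 1.0, size=(80, 2)) for _ in range(10)]
place_seqs = [rng.normal(3.0, 1.0, size=(80, 2)) for _ in range(10)]
models = {"pick": train_intention_model(pick_seqs),
          "place": train_intention_model(place_seqs)}

# Early estimation: score only the prefix of a new gaze stream seen so far.
prefix = rng.normal(3.0, 1.0, size=(20, 2))
estimate = max(models, key=lambda k: models[k].score(prefix))
print("estimated intention:", estimate)
```

Scoring prefixes of increasing length is what makes the estimate "early": the classifier can commit as soon as one model's log-likelihood dominates, well before the action completes.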
Affiliation(s)
- Stefan Fuchs, Honda Research Institute Europe, Offenbach, Germany
49
Tsuchiya KJ, Hakoshima S, Hara T, Ninomiya M, Saito M, Fujioka T, Kosaka H, Hirano Y, Matsuo M, Kikuchi M, Maegaki Y, Harada T, Nishimura T, Katayama T. Diagnosing Autism Spectrum Disorder Without Expertise: A Pilot Study of 5- to 17-Year-Old Individuals Using Gazefinder. Front Neurol 2021; 11:603085. [PMID: 33584502 PMCID: PMC7876254 DOI: 10.3389/fneur.2020.603085] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2020] [Accepted: 12/30/2020] [Indexed: 11/13/2022] Open
Abstract
Atypical eye gaze is an established clinical sign in the diagnosis of autism spectrum disorder (ASD). We propose a computerized diagnostic algorithm for ASD, applicable to children and adolescents aged between 5 and 17 years, using Gazefinder, a system in which devices for capturing eye gaze patterns and stimulus movie clips are integrated into a personal computer with a monitor. We enrolled 222 individuals aged 5–17 years at seven research facilities in Japan. Among them, we extracted 39 individuals with ASD without any comorbid neurodevelopmental abnormalities (ASD group), 102 typically developing individuals (TD group), and an independent sample of 24 individuals (the second control group). All participants underwent psychoneurological and diagnostic assessments, including the Autism Diagnostic Observation Schedule, second edition, and a 2-minute examination with Gazefinder. To enhance predictive validity, a best-fit diagnostic algorithm over computationally selected attributes extracted from Gazefinder was constructed. Inputs were classified automatically into either the ASD or TD group based on the attribute values. We cross-validated the algorithm using the leave-one-out method in the ASD and TD groups and tested its predictability in the second control group. The best-fit algorithm showed an area under the curve (AUC) of 0.84, with sensitivity, specificity, and accuracy of 74%, 80%, and 78%, respectively. The AUC was 0.74 for the cross-validation and 0.91 for validation in the second control group. We confirmed that the diagnostic performance of the best-fit algorithm is comparable to that of established diagnostic assessment tools for ASD.
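A minimal sketch of the validation scheme named above: leave-one-out cross-validation of a binary ASD/TD classifier scored by AUC. The logistic model and random feature matrix stand in for Gazefinder's selected gaze attributes; both are assumptions for illustration, not the study's actual algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n_asd, n_td, n_features = 39, 102, 6  # group sizes mirror the study
X = np.vstack([rng.normal(0.5, 1, (n_asd, n_features)),  # gaze attributes, ASD
               rng.normal(0.0, 1, (n_td, n_features))])  # gaze attributes, TD
y = np.array([1] * n_asd + [0] * n_td)

clf = LogisticRegression(max_iter=1000)
# One probability per participant, each computed with that participant held out.
proba = cross_val_predict(clf, X, y, cv=LeaveOneOut(),
                          method="predict_proba")[:, 1]
print(f"leave-one-out AUC: {roc_auc_score(y, proba):.2f}")
```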
Affiliation(s)
- Kenji J Tsuchiya, Research Center for Child Mental Development, Hamamatsu University School of Medicine, Hamamatsu, Japan; Department of Child Development, United Graduate School of Child Development, Osaka University, Kanazawa University, Hamamatsu University School of Medicine, Chiba University, and University of Fukui, Suita, Japan
- Shuji Hakoshima, Healthcare Business Division, Development Center, JVCKENWOOD Corporation, Yokohama, Japan
- Takeshi Hara, Center for Healthcare Information Technology, Tokai National Higher Education and Research System, Gifu, Japan; Faculty of Engineering, Gifu University, Gifu, Japan
- Masaru Ninomiya, Healthcare Business Division, Development Center, JVCKENWOOD Corporation, Yokohama, Japan
- Manabu Saito, Department of Neuropsychiatry, Graduate School of Medicine, Hirosaki University, Hirosaki, Japan; Research Center for Child Mental Development, Graduate School of Medicine, Hirosaki University, Hirosaki, Japan
- Toru Fujioka, Department of Child Development, United Graduate School of Child Development, Osaka University, Kanazawa University, Hamamatsu University School of Medicine, Chiba University, and University of Fukui, Suita, Japan; Department of Science of Human Development, Faculty of Education, Humanities and Social Sciences, University of Fukui, Fukui, Japan; Research Center for Child Mental Development, University of Fukui, Fukui, Japan
- Hirotaka Kosaka, Department of Child Development, United Graduate School of Child Development, Osaka University, Kanazawa University, Hamamatsu University School of Medicine, Chiba University, and University of Fukui, Suita, Japan; Research Center for Child Mental Development, University of Fukui, Fukui, Japan; Department of Neuropsychiatry, Faculty of Medical Sciences, University of Fukui, Fukui, Japan
- Yoshiyuki Hirano, Department of Child Development, United Graduate School of Child Development, Osaka University, Kanazawa University, Hamamatsu University School of Medicine, Chiba University, and University of Fukui, Suita, Japan; Research Center for Child Mental Development, Chiba University, Chiba, Japan
- Muneaki Matsuo, Department of Pediatrics, Faculty of Medicine, Saga University, Saga, Japan
- Mitsuru Kikuchi, Department of Child Development, United Graduate School of Child Development, Osaka University, Kanazawa University, Hamamatsu University School of Medicine, Chiba University, and University of Fukui, Suita, Japan; Department of Psychiatry and Neurobiology, Graduate School of Medical Science, Kanazawa University, Kanazawa, Japan; Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan
- Taeko Harada, Research Center for Child Mental Development, Hamamatsu University School of Medicine, Hamamatsu, Japan; Department of Child Development, United Graduate School of Child Development, Osaka University, Kanazawa University, Hamamatsu University School of Medicine, Chiba University, and University of Fukui, Suita, Japan
- Tomoko Nishimura, Research Center for Child Mental Development, Hamamatsu University School of Medicine, Hamamatsu, Japan; Department of Child Development, United Graduate School of Child Development, Osaka University, Kanazawa University, Hamamatsu University School of Medicine, Chiba University, and University of Fukui, Suita, Japan
- Taiichi Katayama, Department of Child Development, United Graduate School of Child Development, Osaka University, Kanazawa University, Hamamatsu University School of Medicine, Chiba University, and University of Fukui, Suita, Japan
50
Drivers use active gaze to monitor waypoints during automated driving. Sci Rep 2021; 11:263. [PMID: 33420150 PMCID: PMC7794576 DOI: 10.1038/s41598-020-80126-2] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2020] [Accepted: 12/14/2020] [Indexed: 11/08/2022] Open
Abstract
Automated vehicles (AVs) will change the role of the driver from actively controlling the vehicle to primarily monitoring it. Removing the driver from the control loop could fundamentally change the way drivers sample visual information from the scene and, in particular, alter the gaze patterns generated when under AV control. To better understand how automation affects gaze patterns, this experiment used tightly controlled experimental conditions with a series of transitions from 'Manual' control to 'Automated' vehicle control. Automated trials were produced using either a 'Replay' of the driver's own steering trajectories or standard 'Stock' trials that were identical for all participants. Gaze patterns produced during the Manual and Automated conditions were recorded and compared. Overall, the gaze patterns across conditions were very similar, but detailed analysis showed that drivers looked slightly further ahead (increased gaze time headway) during Automation, with only small differences between Stock and Replay trials. A novel mixture-modelling method decomposed gaze patterns into two distinct categories and confirmed that gaze time headway increased during Automation. Further analyses revealed that while there was a general shift to look further ahead (and to fixate the bend entry earlier) under automated vehicle control, similar waypoint-tracking gaze patterns were produced during Manual driving and Automation. The consistency of gaze patterns across driving modes suggests that active-gaze models (developed for manual driving) might be useful for monitoring driver engagement during Automated driving, with deviations in gaze behaviour from what would be expected during manual control potentially indicating that a driver is not closely monitoring the automated system.
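A minimal sketch of the two-component mixture decomposition described above: model the distribution of gaze time headway as a mixture of two Gaussians (e.g., near-road guidance fixations vs. far-road lookahead fixations) and compare the fitted components across driving modes. The component interpretation and the simulated headway values are illustrative assumptions, not the paper's data or method.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)

def decompose(headways):
    """Fit a 2-component GMM to gaze time headway samples (seconds)."""
    gmm = GaussianMixture(n_components=2, random_state=0)
    gmm.fit(headways.reshape(-1, 1))
    order = np.argsort(gmm.means_.ravel())  # report near component first
    return gmm.means_.ravel()[order], gmm.weights_[order]

# Toy data: automation shifts both components slightly further ahead.
manual = np.concatenate([rng.normal(1.0, 0.3, 700),   # near "guiding" fixations
                         rng.normal(2.5, 0.5, 300)])  # far "lookahead" fixations
automated = np.concatenate([rng.normal(1.2, 0.3, 600),
                            rng.normal(2.8, 0.5, 400)])

for mode, data in [("manual", manual), ("automated", automated)]:
    means, weights = decompose(data)
    print(f"{mode}: component means {means.round(2)} s, weights {weights.round(2)}")
```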