101. Liu XH, Gan L, Zhang ZT, Yu PK, Dai J. Probing the processing of facial expressions in monkeys via time perception and eye tracking. Zool Res 2023; 44:882-893. PMID: 37545418; PMCID: PMC10559096; DOI: 10.24272/j.issn.2095-8137.2023.003.
Abstract
Accurately recognizing facial expressions is essential for effective social interactions. Non-human primates (NHPs) are widely used in the study of the neural mechanisms underpinning facial expression processing, yet it remains unclear how well monkeys can recognize the facial expressions of other species such as humans. In this study, we systematically investigated how monkeys process the facial expressions of conspecifics and humans using eye-tracking technology and two behavioral tasks: a temporal discrimination task (TDT) and a face scan task (FST). We found that monkeys showed prolonged subjective time perception in response to negative facial expressions of conspecifics, while showing longer reaction times to negative facial expressions of humans. Monkey faces also reliably induced divergent pupil contraction in response to different expressions, whereas human faces and scrambled monkey faces did not. Furthermore, viewing patterns in the FST indicated that monkeys showed a bias toward emotional expressions only when observing monkey faces. Finally, masking the eye region marginally decreased the viewing duration for monkey faces but not for human faces. By probing facial expression processing in monkeys, our study demonstrates that monkeys are more sensitive to the facial expressions of conspecifics than to those of humans, shedding new light on inter-species communication through facial expressions between NHPs and humans.
102. Hollett RC, Challis M. Experimental evidence that browsing for activewear lowers explicit body image attitudes and implicit self-esteem in women. Body Image 2023; 46:383-394. PMID: 37490824; DOI: 10.1016/j.bodyim.2023.07.004.
Abstract
Online apparel shopping is popular amongst women and offers salient visual information for making body image and self-worth judgements. Apparel segments that emphasize the value of women's bodies are particularly effective for eliciting low body image and self-worth. Across two studies, we investigated associations between online activewear exposure (self-reported and experimentally manipulated) and women's self-worth, body image, appearance attitudes, mood and gaze behavior. In Study 1, participants (N = 399) completed a survey on their online apparel shopping habits, body appreciation, self-esteem, appearance comparison tendencies and self-objectification attitudes. Activewear was the second-most popular apparel segment amongst women (after casualwear), and weekly activewear browsing time was positively correlated with appearance comparison tendencies, desires to be muscular/athletic and body shame. In Study 2, participants (N = 126) were randomly allocated to browse an activewear, casualwear or homewares website and completed pre- and post-exposure measures of mood, body image, implicit self-esteem and body gaze behavior. In the activewear condition, there was a significant reduction in positive body image and implicit self-esteem scores. There were no experimental effects on body gaze behavior. These findings illustrate that apparel choices have value for understanding the aetiology of maladaptive body image attitudes and low self-esteem in women.
103. Galuret S, Vallée N, Tronchot A, Thomazeau H, Jannin P, Huaulmé A. Gaze behavior is related to objective technical skills assessment during virtual reality simulator-based surgical training: a proof of concept. Int J Comput Assist Radiol Surg 2023; 18:1697-1705. PMID: 37286642; DOI: 10.1007/s11548-023-02961-8.
Abstract
PURPOSE Simulation-based training allows surgical skills to be learned safely. Most virtual reality-based surgical simulators address technical skills without considering non-technical skills, such as gaze use. In this study, we investigated surgeons' visual behavior during virtual reality-based surgical training in which visual guidance is provided. Our hypothesis was that gaze distribution in the environment is correlated with the simulator's technical skills assessment. METHODS We recorded 25 surgical training sessions on an arthroscopic simulator. Trainees were equipped with a head-mounted eye-tracking device. A U-net was trained on two sessions to segment three simulator-specific areas of interest (AOIs) and the background, in order to quantify gaze distribution. We tested whether the percentage of gaze in those areas was correlated with the simulator's scores. RESULTS The neural network segmented all AOIs with a mean Intersection over Union above 94% for each area. The gaze percentage in the AOIs differed among trainees. Despite several sources of data loss, we found significant correlations between gaze position and the simulator scores. For instance, trainees obtained better procedural scores when their gaze focused on the virtual assistance (Spearman correlation test, N = 7, r = 0.800, p = 0.031). CONCLUSION Our findings suggest that visual behavior should be quantified when assessing surgical expertise in simulation-based training environments, especially when visual guidance is provided. Ultimately, visual behavior could be used to quantitatively assess surgeons' learning curves and expertise while training on VR simulators, complementing existing metrics.
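The two quantitative steps in this abstract, scoring segmentation quality with Intersection over Union and correlating gaze percentages with simulator scores via Spearman's rank test, can be sketched as follows (an illustrative sketch only, not the authors' code; the sample data and variable names are hypothetical):

```python
import numpy as np
from scipy.stats import spearmanr

def iou(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Intersection over Union between two boolean segmentation masks."""
    inter = np.logical_and(pred_mask, true_mask).sum()
    union = np.logical_or(pred_mask, true_mask).sum()
    return float(inter) / union if union else 1.0

# Hypothetical per-trainee gaze percentages in one AOI and the
# corresponding simulator scores (N = 7 trainees, as in the abstract).
gaze_pct = [12.0, 30.5, 18.2, 41.0, 25.3, 36.8, 9.4]
sim_score = [55, 80, 62, 91, 85, 74, 50]
rho, p = spearmanr(gaze_pct, sim_score)  # rank correlation, as reported
```

Spearman's test is appropriate here because it makes no linearity assumption and is robust with small samples such as N = 7.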
104. Gao M, Xin R, Wang Q, Gao D, Wang J, Yu Y. Abnormal eye movement features in patients with depression: Preliminary findings based on eye tracking technology. Gen Hosp Psychiatry 2023; 84:25-30. PMID: 37307718; DOI: 10.1016/j.genhosppsych.2023.04.010.
Abstract
BACKGROUND Saccadic eye movement (SEM) has been considered a potential non-invasive biomarker for the diagnosis of depression in recent years, but its application is not yet mature. In this study, we used eye-tracking technology to characterize the eye movements of patients with depression, with the aim of developing a new method for objectively identifying depression. METHODS Thirty-six patients with depression (depression group) and thirty-six matched healthy individuals (control group) were recruited and completed eye movement tests comprising two tasks: a prosaccade task and an antisaccade task. An iViewX RED 500 eye tracker (SMI) was used to collect eye movement data for both groups. RESULTS In the prosaccade task, there was no difference between the depression and control groups (t = 0.019, P > 0.05). In general, with increasing angle, both groups showed significantly higher peak velocity (F = 81.72, P < 0.0001), higher mean velocity (F = 32.83, P < 0.001), and greater SEM amplitude (F = 24.23, P < 0.0001). In the antisaccade task, there were significant differences in correct rate (t = 3.219, P = 0.002) and mean velocity (F = 3.253, P < 0.05) between the depression and control groups. In the anti-effect analysis, there were significant differences in correct rate (F = 67.44, P < 0.0001) and accuracy (F = 79.02, P < 0.0001) between the two groups. Both groups showed longer latencies and lower correct rates and accuracy in the antisaccade task than in the prosaccade task. CONCLUSION Patients with depression showed distinct eye movement features, which could serve as potential biomarkers for clinical identification. Further studies with larger sample sizes and additional clinical populations are needed to validate these results.
105. Bruckert A, Christie M, Le Meur O. Where to look at the movies: Analyzing visual attention to understand movie editing. Behav Res Methods 2023; 55:2940-2959. PMID: 36002630; DOI: 10.3758/s13428-022-01949-7.
Abstract
In the process of making a movie, directors constantly care about where the spectator will look on the screen. Shot composition, framing, camera movements, and editing are tools commonly used to direct attention. To provide a quantitative analysis of the relationship between these tools and gaze patterns, we propose a new eye-tracking database containing gaze-pattern information on movie sequences, together with editing annotations, and we show how state-of-the-art computational saliency techniques behave on this dataset. In this work, we expose strong links between movie editing and spectators' gaze distributions, and suggest several leads for how knowledge of editing information could improve human visual attention modeling for cinematic content. The dataset generated and analyzed for this study is available at https://github.com/abruckert/eye_tracking_filmmaking.
106. Prahm C, Konieczny J, Bressler M, Heinzel J, Daigeler A, Kolbenschlag J, Lauer H. Influence of colored face masks on judgments of facial attractiveness and gaze patterns. Acta Psychol (Amst) 2023; 239:103994. PMID: 37541135; DOI: 10.1016/j.actpsy.2023.103994.
Abstract
BACKGROUND Facial aesthetics are of great importance in social interaction. With the widespread adoption of face masks in response to the Covid-19 pandemic, there is growing interest in understanding how wearing masks might affect perceptions of attractiveness, as masks partially or completely conceal facial features that are typically associated with attractiveness. OBJECTIVES This study aimed to explore the impact of mask wearing on attractiveness and to investigate whether the color of the mask (red or blue) has any effect on the perception of a person's attractiveness, while also considering gender and age as contributing factors. Additionally, the study evaluated gaze patterns, initial focus, and dwell time in response to masked and unmasked faces. METHODS 30 AI-generated images of 15 female and 15 male faces were presented to 71 participants (35 male, 36 female) in three conditions: not wearing any mask, wearing a red surgical mask, and wearing a blue surgical mask. Perceived attractiveness was rated on an ordinal scale of 1-10 (10 being most attractive). Gaze behavior, dwell time and initial focus were recorded using a stationary eye-tracking system. RESULTS Wearing masks had no significant effect on the attractiveness ratings of female faces (p = .084), but it did benefit the perceived attractiveness of male faces that were initially rated lower (p = .16). Gender and age also played a significant role: both male and female participants rated female stimuli higher than male stimuli (p < .001), and younger participants rated both genders as less attractive than older participants did (p < .01). However, there was no significant influence of the mask's color on attractiveness. In the eye-tracking analysis, the periorbital region attracted greater interest while masked, with the time to first fixation on the eyes being shorter than for the non-masked stimuli (p < .001) and the dwell time longer (p < .001). The lower face attracted less interest while masked, as the time to first fixation was longer (p < .001) and the fixation count lower (p < .001). Mask color did not influence the scan path, and there was no difference between red and blue masks in revisits to the mask area (p = .202) or in time to first fixation (p = .660). CONCLUSIONS The findings indicate an interplay between the gender and age of the participant and the facial stimuli. The color red did have an effect on perceived attractiveness, although not for female faces. The results suggest that masks, especially red ones, might be more beneficial for male faces, which were perceived as less attractive without a mask. However, wearing a mask did not significantly impact already attractive faces. The eye-tracking results revealed that the periorbital region attracted more attention and was fixated more quickly when a mask was worn, underscoring the importance of the eyes in social interaction and aesthetic perception.
107. Sui L, Dirix N, Woumans E, Duyck W. GECO-CN: Ghent Eye-tracking COrpus of sentence reading for Chinese-English bilinguals. Behav Res Methods 2023; 55:2743-2763. PMID: 35896891; DOI: 10.3758/s13428-022-01931-3.
Abstract
The current work presents the first eye-tracking corpus of natural reading by Chinese-English bilinguals, whose two languages entail different writing systems and orthographies. Participants read an entire novel in these two languages, presented in paragraphs on screen. Half of the participants first read half of the novel in their native language (Simplified Chinese) and then the rest of the novel in their second language (English), while the other half read in the reverse language order. This article presents basic descriptive statistics of reading times and compares reading in the two languages. Beyond this, the corpus allows the exploration of theories of language processing and bilingualism. Importantly, it provides a solid and reliable basis for studying the differences between Eastern and Western languages, and for understanding the impact and consequences of having a completely different first language on bilingual processing. The materials are freely available for use by researchers interested in (bilingual) reading.
108. Bellato A, Arora I, Kochhar P, Ropar D, Hollis C, Groom MJ. Relationship between autonomic arousal and attention orienting in children and adolescents with ADHD, autism and co-occurring ADHD and autism. Cortex 2023; 166:306-321. PMID: 37459680; DOI: 10.1016/j.cortex.2023.06.002.
Abstract
INTRODUCTION Attention-Deficit/Hyperactivity Disorder (ADHD) and Autism Spectrum Disorder (ASD) may be characterized by different profiles of visual attention orienting. However, many inconsistent findings have emerged from the literature, probably because the potential effect of autonomic arousal (which has been proposed to be dysregulated in these conditions) on oculomotor performance has not previously been investigated. Moreover, it is not known how visual attention orienting is affected by the co-occurrence of ADHD and autism in people with a dual diagnosis. METHODS 99 children/adolescents with or without ADHD and/or autism (age 10.79 ± 2.05 years, 65% boys) completed an adapted version of the gap-overlap task (with baseline and overlap trials only). The social salience and modality of the stimuli were manipulated between trials. Eye movements and pupil size were recorded. We compared saccadic reaction times (SRTs) between diagnostic groups and investigated whether a trial-by-trial association existed between pre-saccadic pupil size and SRTs. RESULTS Faster orienting (shorter SRTs) was found for baseline compared to overlap trials, for faces compared to non-face stimuli and, more evidently in children without ADHD and/or autism, for multi-modal compared to uni-modal stimuli. We also found a negative linear association between pre-saccadic pupil size and SRTs in autistic participants (without ADHD), and a quadratic association in children with ADHD (without autism), for whom SRTs were slower when intra-individual pre-saccadic pupil size was smallest or largest. CONCLUSION Our findings are in line with previous literature and indicate a possible effect of dysregulated autonomic arousal on oculomotor mechanisms in autism and ADHD, which should be further investigated in future studies with larger samples to reliably assess possible differences between children with single and dual diagnoses.
109. González-Vides L, Hernández-Verdejo JL, Cañadas-Suárez P. Eye Tracking in Optometry: A Systematic Review. J Eye Mov Res 2023; 16. PMID: 38111688; PMCID: PMC10725735; DOI: 10.16910/jemr.16.3.3.
Abstract
This systematic review examines the use of eye-tracking devices in optometry, describing their main characteristics, areas of application and the metrics used. Following the PRISMA method, a systematic search of three databases was performed. The search strategy identified 141 reports relevant to this topic, indicating exponential growth in the use of eye trackers in optometry over the past ten years. Eye-tracking technology was applied in at least 12 areas of optometry and rehabilitation, the main ones being optometric device technology and the assessment, treatment, and analysis of ocular disorders. The devices reported on were mainly infrared-based, with image capture frequencies of 60 Hz to 2000 Hz. The main metrics mentioned were fixations, saccadic movements, smooth pursuit, microsaccades, and pupil variables. Study quality was sometimes limited by incomplete reporting of the devices used, the study design, the methods, participants' visual function and the statistical treatment of data. While more research in this area is still needed, eye-tracking devices should be more actively incorporated as a useful tool with both clinical and research applications. This review highlights the robustness with which this technology obtains objective information about a person's vision and visual function, with implications for improving visual health services and our understanding of the visual process.
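The fixation metrics surveyed here are typically extracted from raw gaze samples by an event-detection algorithm. A minimal dispersion-threshold (I-DT) fixation detector might be sketched as follows (an illustration only, not tied to any device in the review; the thresholds and the (t, x, y) sample format are assumptions):

```python
def dispersion(window):
    """Spatial spread of a gaze-sample window: x-range plus y-range."""
    xs = [s[1] for s in window]
    ys = [s[2] for s in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=1.0, min_duration=0.1):
    """Group (t, x, y) gaze samples into fixations with the I-DT algorithm.

    Returns (start_t, end_t, mean_x, mean_y) tuples. A window counts as a
    fixation when it spans at least `min_duration` seconds and its
    dispersion stays below `max_dispersion` (e.g., degrees of visual angle).
    """
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        # Grow an initial window covering min_duration.
        j = i
        while j < n and samples[j][0] - samples[i][0] < min_duration:
            j += 1
        if j >= n:
            break
        if dispersion(samples[i:j + 1]) <= max_dispersion:
            # Extend the window while dispersion stays under threshold.
            while j + 1 < n and dispersion(samples[i:j + 2]) <= max_dispersion:
                j += 1
            xs = [s[1] for s in samples[i:j + 1]]
            ys = [s[2] for s in samples[i:j + 1]]
            fixations.append((samples[i][0], samples[j][0],
                              sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j + 1
        else:
            i += 1  # Slide past the first sample and try again.
    return fixations
```

With a 60 Hz recording, for instance, 0.1 s of stable gaze corresponds to about six samples; higher-frequency devices simply contribute more samples per window.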
110. Zettersten M, Yurovsky D, Xu TL, Uner S, Tsui ASM, Schneider RM, Saleh AN, Meylan SC, Marchman VA, Mankewitz J, MacDonald K, Long B, Lewis M, Kachergis G, Handa K, deMayo B, Carstensen A, Braginsky M, Boyce V, Bhatt NS, Bergey CA, Frank MC. Peekbank: An open, large-scale repository for developmental eye-tracking data of children's word recognition. Behav Res Methods 2023; 55:2485-2500. PMID: 36002623; PMCID: PMC9950292; DOI: 10.3758/s13428-022-01906-4.
Abstract
The ability to rapidly recognize words and link them to referents is central to children's early language development. This ability, often called word recognition in the developmental literature, is typically studied in the looking-while-listening paradigm, which measures infants' fixation on a target object (vs. a distractor) after hearing a target label. We present a large-scale, open database of infant and toddler eye-tracking data from looking-while-listening tasks. The goal of this effort is to address theoretical and methodological challenges in measuring vocabulary development. We first present how we created the database, its features and structure, and associated tools for processing and accessing infant eye-tracking datasets. Using these tools, we then work through two illustrative examples to show how researchers can use Peekbank to interrogate theoretical and methodological questions about children's developing word recognition ability.
111. Hintz F, Voeten CC, Scharenborg O. Recognizing non-native spoken words in background noise increases interference from the native language. Psychon Bull Rev 2023; 30:1549-1563. PMID: 36544064; PMCID: PMC10482792; DOI: 10.3758/s13423-022-02233-7.
Abstract
Listeners frequently recognize spoken words in the presence of background noise. Previous research has shown that noise reduces phoneme intelligibility and hampers spoken-word recognition, especially for non-native listeners. In the present study, we investigated how noise influences lexical competition in the non-native and the native language, reflecting the degree to which the two languages are co-activated. We recorded the eye movements of native Dutch participants as they listened to English sentences containing a target word while looking at displays of four objects. On target-present trials, the visual referent depicting the target word was shown, along with three unrelated distractors. On target-absent trials, the target object (e.g., wizard) was absent. Instead, the display contained an English competitor overlapping with the English target in phonological onset (e.g., window), a Dutch competitor overlapping with the English target in phonological onset (e.g., wimpel, pennant), and two unrelated distractors. Half of the sentences were masked by speech-shaped noise; the other half were presented in quiet. Compared to speech in quiet, noise delayed fixations to the target objects on target-present trials. On target-absent trials, the likelihood of fixation biases toward the English and Dutch onset competitors (over the unrelated distractors) was larger in noise than in quiet. Our data thus show that the presence of background noise increases lexical competition in the task-relevant non-native (English) and the task-irrelevant native (Dutch) language. The latter reflects stronger interference from one's native language during non-native spoken-word recognition under adverse conditions.
112. Tsang T, Naples AJ, Barney EC, Xie M, Bernier R, Dawson G, Dziura J, Faja S, Jeste SS, McPartland JC, Nelson CA, Murias M, Seow H, Sugar C, Webb SJ, Shic F, Johnson SP. Attention Allocation During Exploration of Visual Arrays in ASD: Results from the ABC-CT Feasibility Study. J Autism Dev Disord 2023; 53:3220-3229. PMID: 35657448; PMCID: PMC10980886; DOI: 10.1007/s10803-022-05569-0.
Abstract
Visual exploration paradigms involving object arrays have been used to examine the salience of social stimuli such as faces in ASD. Recent work suggests that performance on these paradigms may be associated with clinical features of ASD. We evaluated metrics from a visual exploration paradigm in 4-to-11-year-old children with ASD (n = 23; 18 males) and typical development (TD; n = 23; 13 males). Presented with arrays containing faces and nonsocial stimuli, children with ASD looked at faces less (p = 0.002) and showed fewer fixations to faces (p = 0.022) than TD children, and spent less time looking at each object on average (p = 0.004). Attention to the screen and to faces correlated positively with social and cognitive skills in the ASD group (ps < .05). This work furthers our understanding of objective measures of visual exploration in ASD and their potential for quantifying features of ASD.
113. Heatlie LC, Petterson LJ, Vasey PL. Heterosexual Men's Visual Attention to Nude Images Depicting Cisgender Males, Cisgender Females, and Gynandromorphs. Arch Sex Behav 2023. PMID: 37495891; DOI: 10.1007/s10508-023-02657-9.
Abstract
Gynandromorphophilia refers to sexual attraction and arousal to feminine males who may or may not have breasts and who retain their penises. Previous research has suggested that some capacity for gynandromorphophilia may characterize males who are gynephilic (i.e., sexually attracted and aroused by adult females). This study examined Canadian cisgender gynephilic men's (n = 65) visual attention and subjective ratings of sexual arousal when presented with nude images of feminine males with and without breasts, masculine males, and feminine females. Visual attention was assessed using an infrared eye tracker. Subjective arousal to feminine females was highest, followed by subjective arousal to feminine males with breasts, feminine males without breasts, and masculine males. However, subjective arousal to feminine males without breasts and to masculine males did not differ significantly. The patterning of visual attention to images of females was unique in that participants were equally likely to attend first to the face, chest or genitals; these areas also elicited relatively greater fixation durations and counts. Although participants fixated on the chests of feminine males with breasts for longer durations than on those of masculine males, most of the differences between feminine males with and without breasts were non-significant. These results suggest that female sex-based traits play a more primary role in gynephilic men's sexual arousal than feminine gender-based traits.
114. Yan Z, Yang Z, Griffiths MD. "Danmu" preference, problematic online video watching, loneliness and personality: An eye-tracking study and survey study. BMC Psychiatry 2023; 23:523. PMID: 37474903; PMCID: PMC10360313; DOI: 10.1186/s12888-023-05018-x.
Abstract
'Danmu' (i.e., comments that scroll across online videos) has become popular on several Asian online video platforms. Two studies were conducted to investigate the relationships between Danmu preference, problematic online video watching, loneliness and personality. Study 1 collected self-report data on the study variables from 316 participants. Study 2 collected eye-tracking data on Danmu fixation (duration, count, and percentage) from 87 participants who watched videos. The results show that fixation on Danmu was significantly correlated with problematic online video watching, loneliness, and neuroticism. Self-reported Danmu preference was positively associated with extraversion, openness, problematic online video watching, and loneliness. The studies indicate potential negative effects of Danmu preference (e.g., problematic watching and loneliness) during online video watching. This is one of the first empirical investigations of Danmu and problematic online video watching using eye tracking. Online video platforms could consider adding more responsible-use messaging relating to Danmu in videos; such messages may help users develop healthier online video watching habits.
115. Sommerfeld L, Staudte M, Mani N, Kray J. Even young children make multiple predictions in the complex visual world. J Exp Child Psychol 2023; 235:105690. PMID: 37419010; DOI: 10.1016/j.jecp.2023.105690.
Abstract
Children can anticipate upcoming input in sentences with semantically constraining verbs. In the visual world, the sentence context is used to anticipatorily fixate the only object matching potential sentence continuations. Adults can process multiple visual objects in parallel when predicting language. This study examined whether young children can likewise maintain multiple prediction options in parallel during language processing. In addition, we aimed to replicate the finding that children's receptive vocabulary size modulates their prediction. German children (5-6 years, n = 26) and adults (19-40 years, n = 37) listened to 32 subject-verb-object sentences with semantically constraining verbs (e.g., "The father eats the waffle") while looking at visual scenes of four objects. The number of objects consistent with the verb constraints (e.g., being edible) varied among 0, 1, 3, and 4. A linear mixed-effects model on the proportion of target fixations, with the effect-coded factors condition (i.e., the number of consistent objects), time window, and age group, revealed that upon hearing the verb, children and adults anticipatorily fixated the single consistent visual object, or even multiple consistent objects, whereas inconsistent objects were fixated less. This provides the first evidence that, comparable to adults, young children maintain multiple prediction options in parallel. Moreover, children with larger receptive vocabularies (Peabody Picture Vocabulary Test) anticipatorily fixated potential targets more often than those with smaller vocabularies, showing that verbal abilities affect children's prediction in the complex visual world.
116. Chen J, van den Bos E, Karch JD, Westenberg PM. Social anxiety is related to reduced face gaze during a naturalistic social interaction. Anxiety Stress Coping 2023; 36:460-474. PMID: 36153759; DOI: 10.1080/10615806.2022.2125961.
Abstract
BACKGROUND Social anxiety has long been related to reduced eye contact, and this feature is seen as both a causal and a maintaining factor of social anxiety disorder. The present research adds to the literature by investigating the relationship between social anxiety and visual avoidance of faces in a reciprocal face-to-face conversation, while taking into account two aspects of conversations as potential moderating factors: conversational role and level of intimacy. METHOD Eighty-five female students (17-25 years) completed the Liebowitz Social Anxiety Scale and had a face-to-face getting-acquainted conversation with a female confederate. We alternated conversational role (talking versus listening) and manipulated the intimacy of the topics (low versus high). Participants' gaze behavior was registered with Tobii eye-tracking glasses. Three dependent measures were extracted regarding fixations on the face of the confederate: total fixation duration, proportion of fixations, and mean fixation duration. RESULTS Higher levels of social anxiety were associated with reduced face gaze on all three measures. The relation with total fixation duration was stronger for low-intimacy topics. The relation with mean fixation duration was stronger during listening than during speaking. CONCLUSION The results highlight the importance of studying gaze behavior in naturalistic social interactions.
Collapse
|
117
|
Hafner C, Scharner V, Hermann M, Metelka P, Hurch B, Klaus DA, Schaubmayr W, Wagner M, Gleiss A, Willschke H, Hamp T. Eye-tracking during simulation-based echocardiography: a feasibility study. BMC MEDICAL EDUCATION 2023; 23:490. [PMID: 37393288 DOI: 10.1186/s12909-023-04458-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/06/2023] [Accepted: 06/15/2023] [Indexed: 07/03/2023]
Abstract
INTRODUCTION Due to technical progress, point-of-care ultrasound (POCUS) is increasingly used in critical care medicine. However, optimal training strategies and support for novices have not been thoroughly researched so far. Eye-tracking, which offers insights into the gaze behavior of experts, may be a useful tool for better understanding. The aim of this study was to investigate the technical feasibility and usability of eye-tracking during echocardiography and to analyze differences in gaze patterns between experts and non-experts. METHODS Nine experts in echocardiography and six non-experts were equipped with eye-tracking glasses (Tobii, Stockholm, Sweden) while performing six medical cases on a simulator. For each view, case-specific areas of interest (AOI) were defined by the first three experts depending on the underlying pathology. Technical feasibility, participants' subjective experience of the usability of the eye-tracking glasses, and the differences in relative dwell time (focus) inside the AOI between six experts and six non-experts were evaluated. RESULTS Technical feasibility of eye-tracking during echocardiography was achieved, with 96% accordance between the visual area orally described by participants and the area marked by the glasses. Experts had longer relative dwell times in the case-specific AOI (50.6% versus 38.4%, p = 0.072) and performed ultrasound examinations faster (138 s versus 227 s, p = 0.068). Furthermore, experts fixated earlier in the AOI (5 s versus 10 s, p = 0.033). CONCLUSION This feasibility study demonstrates that eye-tracking can be used to analyze experts' and non-experts' gaze patterns during POCUS. Although the experts in this study had longer fixation times in the defined AOIs than non-experts, further studies are needed to investigate whether eye-tracking could improve the teaching of POCUS.
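The central metric here, relative dwell time inside an AOI, amounts to the fraction of gaze samples falling within the AOI. A minimal sketch, assuming gaze samples as (x, y) coordinates and a rectangular AOI; both are hypothetical simplifications of the actual glasses output:

```python
def relative_dwell_time(samples, aoi):
    """Fraction of gaze samples falling inside a rectangular AOI.

    samples: list of (x, y) gaze coordinates (assumed evenly spaced in
        time, so the sample fraction approximates the time fraction).
    aoi: (x_min, y_min, x_max, y_max) rectangle.
    """
    if not samples:
        return 0.0
    x0, y0, x1, y1 = aoi
    inside = sum(x0 <= x <= x1 and y0 <= y <= y1 for x, y in samples)
    return inside / len(samples)

# Hypothetical recording: three of four samples inside the AOI.
dwell = relative_dwell_time([(5, 5), (15, 5), (5, 15), (50, 50)], (0, 0, 20, 20))
```

Comparing such values between groups (e.g., 50.6% versus 38.4% above) is what the study reports.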
Collapse
|
118
|
Biondi FN, Graf F, Cort J. On the potential of pupil size as a metric of physical fatigue during a repeated handle push/pull task. APPLIED ERGONOMICS 2023; 110:104025. [PMID: 37071948 DOI: 10.1016/j.apergo.2023.104025] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/31/2022] [Revised: 03/24/2023] [Accepted: 04/05/2023] [Indexed: 05/03/2023]
Abstract
Force output and muscle activity represent the gold standards for measuring physical fatigue. This study explores the use of ocular metrics for tracking changes in physical fatigue during the completion of a repeated handle push/pull task. Participants completed this task over three trials, and pupil size was recorded by means of a head-mounted eye-tracker. Blink frequency was also measured. Force impulse and maximum peak force were used as ground-truth measures of physical fatigue. As expected, a reduction in peak force and impulse was observed over time as participants became more fatigued. More interestingly, pupil size was also found to decrease from trial 1 through trial 3. No changes in blink rate were found with increasing physical fatigue. While exploratory, these findings add to the sparse literature on the use of ocular metrics in ergonomics. They also advance the use of pupil size as a possible future alternative for physical fatigue detection.
Collapse
|
119
|
Kopecek M, Kremlacek J. Eye-tracking control of an adjustable electric bed: construction and validation by immobile patients with multiple sclerosis. J Neuroeng Rehabil 2023; 20:75. [PMID: 37296480 PMCID: PMC10251586 DOI: 10.1186/s12984-023-01193-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2022] [Accepted: 05/10/2023] [Indexed: 06/12/2023] Open
Abstract
BACKGROUND In severe conditions of limited motor abilities, frequent position changes for work or for passive and active rest are essential bedside activities to prevent further health complications. We aimed to develop a system using eye movements for bed positioning and to verify its functionality in a control group and in a group of patients with significant motor limitation caused by multiple sclerosis. METHODS The eye-tracking system utilized an innovative digital-to-analog converter module to control the positioning bed via a novel graphical user interface. We verified the ergonomics and usability of the system by performing a fixed sequence of positioning tasks, in which the leg and head support was repeatedly raised and then lowered. Fifteen women and eleven men aged 42.7 ± 15.9 years in the control group and nine women and eight men aged 60.3 ± 9.14 years in the patient group participated in the experiment. The degree of disability, according to the Expanded Disability Status Scale (EDSS), ranged from 7 to 9.5 points in the patients. We assessed the speed and efficiency of the bed control and the improvement during testing. In a questionnaire, we evaluated satisfaction with the system. RESULTS The control group mastered the task in 40.2 s (median) with an interquartile interval from 34.5 to 45.5 s, and the patients mastered it in 56.5 s (median) with an interquartile interval from 46.5 to 64.9 s. The efficiency of solving the task (where 100% corresponds to optimal performance) was 86.3 (81.6; 91.0) % for the control group and 72.1 (63.0; 75.2) % for the patient group. Throughout testing, the patients learned to communicate with the system, and their efficiency and task times improved. A correlation analysis showed a negative relationship (rho = -0.587) between efficiency improvement and the degree of impairment (EDSS). In the control group, the learning effect was not significant. In the questionnaire survey, sixteen patients reported gaining confidence in bed control. Seven patients preferred the offered form of bed control, and in six cases they would choose another form of interface. CONCLUSIONS The proposed system and communication through eye movements are reliable for positioning the bed in people affected by advanced multiple sclerosis. Seven of 17 patients indicated that they would choose this system for bed control and wished to extend it to other applications.
Collapse
|
120
|
Pelgrim MH, Espinosa J, Buchsbaum D. Head-mounted mobile eye-tracking in the domestic dog: A new method. Behav Res Methods 2023; 55:1924-1941. [PMID: 35788974 PMCID: PMC9255465 DOI: 10.3758/s13428-022-01907-3] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/14/2022] [Indexed: 11/08/2022]
Abstract
Humans rely on dogs for countless tasks, ranging from companionship to highly specialized detection work. In their daily lives, dogs must navigate a human-built visual world, yet comparatively little is known about what dogs visually attend to as they move through their environment. Real-world eye-tracking, or head-mounted eye-tracking, allows participants to freely move through their environment, providing more naturalistic results about visual attention while interacting with objects and agents. In dogs, real-world eye-tracking has the potential to inform our understanding of cross-species cognitive abilities as well as working dog training; however, a robust and easily deployed head-mounted eye-tracking method for dogs has not previously been developed and tested. We present a novel method for real-world eye-tracking in dogs, using a simple mobile apparatus mounted onto goggles designed for dogs. This new method, adapted from systems that are widely used in humans, allows for eye-tracking during more naturalistic behaviors, namely walking around and interacting with real-world stimuli, as well as reduced training time compared to traditional stationary eye-tracking methods. We found that while completing a simple forced-choice treat-finding task, dogs look primarily at the treat, and we demonstrated the accuracy of this method using alternative gaze-tracking methods. Additionally, eye-tracking revealed finer-grained time-course information and individual differences in looking patterns.
Collapse
|
121
|
Pershin I, Mustafaev T, Ibragimova D, Ibragimov B. Changes in Radiologists' Gaze Patterns Against Lung X-rays with Different Abnormalities: a Randomized Experiment. J Digit Imaging 2023; 36:767-775. [PMID: 36622464 PMCID: PMC9838425 DOI: 10.1007/s10278-022-00760-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2022] [Revised: 11/23/2022] [Accepted: 12/15/2022] [Indexed: 01/10/2023] Open
Abstract
The workload of some radiologists has increased dramatically over the last several years, which has potentially reduced the quality of diagnosis. It has been demonstrated that the diagnostic accuracy of radiologists decreases significantly at the end of work shifts. This study aims to investigate how radiologists cover chest X-rays with their gaze in the presence of different chest abnormalities and high workload. We designed a randomized experiment to quantitatively assess how radiologists' image reading patterns change with the radiological workload. Four radiologists read chest X-rays on a radiological workstation equipped with an eye-tracker. The lung fields on the X-rays were automatically segmented with a U-Net neural network, allowing the coverage of the lungs by the radiologists' gaze to be measured. The images were randomly split so that each image was shown at a different time to a different radiologist. Regression models were fit to the gaze data to calculate the trends in lung coverage for individual radiologists and chest abnormalities. For the study, a database of 400 chest X-rays with reference diagnoses was assembled. The average lung coverage with gaze ranged from 55 to 65% per radiologist. For every 100 X-rays read, the lung coverage decreased by 1.3 to 7.6% for the different radiologists. The coverage reduction trends were consistent across all abnormalities, ranging from 3.4% per 100 X-rays for cardiomegaly to 4.1% per 100 X-rays for atelectasis. The more images radiologists read, the smaller the part of the lung fields they cover with their gaze. This pattern is very stable across all abnormality types and is not affected by the exact order in which the abnormalities are viewed by radiologists. The proposed randomized experiment captured and quantified consistent changes in X-ray reading for different lung abnormalities that occur due to high workload.
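The coverage measure described above, the fraction of segmented lung-field pixels touched by gaze, and the decline in that fraction over reading order can be sketched as follows. This is a hypothetical simplification: a square foveal patch per fixation and a simple least-squares slope stand in for the study's actual segmentation outputs and regression models.

```python
import numpy as np

def lung_gaze_coverage(lung_mask, gaze_points, radius=2):
    """Fraction of lung-field pixels covered by gaze.

    lung_mask: 2-D boolean array (e.g., from a U-Net segmentation).
    gaze_points: iterable of (row, col) fixation coordinates.
    radius: half-width of the square foveal patch credited per fixation
        (a hypothetical stand-in for the true foveal extent).
    """
    covered = np.zeros_like(lung_mask, dtype=bool)
    for r, c in gaze_points:
        covered[max(0, r - radius):r + radius + 1,
                max(0, c - radius):c + radius + 1] = True
    lung_pixels = lung_mask.sum()
    return float((covered & lung_mask).sum() / lung_pixels) if lung_pixels else 0.0

def coverage_trend(coverages):
    """Least-squares slope of coverage over reading order (per image read)."""
    x = np.arange(len(coverages))
    return float(np.polyfit(x, coverages, 1)[0])
```

A negative `coverage_trend` over a reading session would correspond to the workload-related coverage decline the study reports.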
Collapse
|
122
|
Valiyamattam GJ, Katti H, Chaganti VK, O'Haire ME, Sachdeva V. Circumscribed interests in autism: Can animals potentially re-engage social attention? RESEARCH IN DEVELOPMENTAL DISABILITIES 2023; 137:104486. [PMID: 37062184 DOI: 10.1016/j.ridd.2023.104486] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/26/2022] [Revised: 02/25/2023] [Accepted: 03/10/2023] [Indexed: 05/10/2023]
Abstract
BACKGROUND Circumscribed interests (CI) in autism are highly fixated and repetitive interests, generally centering on non-social and idiosyncratic topics. The increased salience of CI objects often results in decreased social attention, thus interfering with social interactions. Behavioural, biomarker and neuroimaging research points to enhanced social functioning in autistic children in the presence of animals. For instance, neuroimaging studies report greater activation of reward systems in the brain in response to animal stimuli, whereas eye-tracking studies reveal a higher visual preference for animal faces in autistic individuals. This potentially greater social reward attached to animals introduces the interesting and as yet unexplored possibility that the presence of competing animal stimuli may reduce the disproportionately higher visual attention to CI objects. METHOD We examined this using a paired-preference eye-tracking paradigm in which images of human and animal faces were paired with CI and non-CI objects. Thirty-one children (ASD n = 16; TD n = 15) participated in the study (3391 observations). RESULTS Autistic children showed significantly greater visual attention to CI objects, whereas typical controls showed significantly greater visual attention to social images across pairings. Interestingly, pairing with a CI object significantly reduced the social attention elicited by human faces but not by animal faces. Further, in pairings with CI objects, significantly greater sustained attention per visit was seen for animal faces than for human faces. CONCLUSIONS These results suggest that social attention deficits in ASD may not be uniform across human and animal stimuli. Animals may comprise a potentially important stimulus category modulating visual attention in ASD.
Collapse
|
123
|
Idrees AR, Kraft R, Winter M, Küchler AM, Baumeister H, Reilly R, Reichert M, Pryss R. Exploring the usability of an internet-based intervention and its providing eHealth platform in an eye-tracking study. JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING 2023; 14:9621-9636. [PMID: 37288130 PMCID: PMC10195654 DOI: 10.1007/s12652-023-04635-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/23/2022] [Accepted: 05/02/2023] [Indexed: 06/09/2023]
Abstract
The proliferation of online eHealth has made it much easier for users to access healthcare services and interventions from the comfort of their own homes. This study examines how well one such platform, eSano, performs in terms of user experience when delivering mindfulness interventions. To assess usability and user experience, several tools were employed: eye-tracking technology, think-aloud sessions, a System Usability Scale questionnaire, an application questionnaire, and post-experiment interviews. Participants were evaluated while they accessed the first module of the mindfulness intervention provided by eSano, to measure their interaction with the app and their level of engagement, and to obtain feedback on both the intervention and its overall usability. The results revealed that although users generally rated their overall experience with the app positively, according to the System Usability Scale data participants rated the first module of the mindfulness intervention as below average. Additionally, eye-tracking data showed that some users skipped long text blocks in favor of answering questions quickly, while others spent more than half their time reading them. Accordingly, recommendations were put forward to improve both the usability and persuasiveness of the app, such as incorporating shorter text blocks and more engaging interactive elements, in order to raise adherence rates. Overall, the findings from this study provide valuable insights into how users interact with eSano's participant app, which can guide the future development of more effective and user-friendly platforms. Moreover, acting on these potential improvements can foster more positive experiences that promote regular engagement with such apps, while taking into account emotional states and needs that vary across age groups and abilities.
Supplementary Information The online version contains supplementary material available at 10.1007/s12652-023-04635-4.
Collapse
|
124
|
Nielsen CP, Nielsen J, Sørensen CB, Pedersen ER. Eye-Tracking on Touch Screen - Evaluating User Interaction. Stud Health Technol Inform 2023; 302:631-635. [PMID: 37203767 DOI: 10.3233/shti230225] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
This paper suggests a setup for using remote eye-tracking on a touchscreen tablet to evaluate user interaction for older adults interacting with a user-driven hearing test. By using video recordings to support the eye-tracking data, it was possible to evaluate quantitative usability metrics that could be compared to other research findings. The video recordings revealed useful information to distinguish between reasons for gaps in data and missing data and to inform future similar studies of human-computer interaction on a touch screen. Using only portable equipment allows researchers to move to the location of the user and investigate the user interaction of devices in real-world scenarios.
Collapse
|
125
|
6-, 10-, and 12-month-olds remember complex dynamic events across 2 weeks. J Exp Child Psychol 2023; 229:105627. [PMID: 36696740 DOI: 10.1016/j.jecp.2023.105627] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2022] [Revised: 12/22/2022] [Accepted: 01/03/2023] [Indexed: 01/24/2023]
Abstract
Whereas infants' ability to remember simple static material (e.g., pictures) has been documented extensively, we know surprisingly little about infants' memory of dynamic events (i.e., events unfolding in time) in the first year after birth. Although there is evidence to suggest that infants show some kind of sensitivity toward complex dynamic events (i.e., events involving agents and a storyline) as indicated by visual engagement in the first year after birth, 16- to 18-month-olds are hitherto the youngest infants documented to remember such material. Using a visual paired-comparison (VPC) task, in Experiment 1 we examined 6-, 10-, and 12-month-olds' (N = 108) ability to encode and remember cartoons involving complex dynamic events across 2 weeks. Results showed that all age groups remembered these cartoons. To investigate further the role of a complex storyline, in Experiment 2 we assessed the memory of 107 infants of the same age groups for similar cartoons but without coherent storyline information by scrambling the temporal presentation of the information in the cartoons. The results showed that the two youngest age groups did not remember this version. To our knowledge, this is the first experiment to document memory for such complex material in young infants using VPC.
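In a VPC task of this kind, memory is typically inferred from a novelty-preference score: the proportion of total looking time directed at the novel stimulus when it is paired with the familiarized one. A minimal sketch with hypothetical looking times in seconds:

```python
def novelty_preference(novel_looking_s, familiar_looking_s):
    """Proportion of total looking time spent on the novel stimulus.

    Scores reliably above 0.5 (chance) are conventionally taken as
    evidence of memory for the familiarized stimulus; the inputs here
    are hypothetical looking times in seconds.
    """
    total = novel_looking_s + familiar_looking_s
    return novel_looking_s / total if total else 0.5

# Hypothetical test trial: 6 s on the novel cartoon, 4 s on the familiar one.
score = novelty_preference(6.0, 4.0)
```

Group-level deviations of such scores from 0.5 across the 2-week delay are what a VPC memory result rests on.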
Collapse
|