1
Song JH, Kim SY. Push-Pull Mechanism of Attention and Emotion in Children with Attention Deficit Hyperactivity Disorder. J Clin Med 2024;13:4206. PMID: 39064246; PMCID: PMC11277595; DOI: 10.3390/jcm13144206.
Abstract
Background/Objectives: While deficits in executive attention and alerting systems in children with attention deficit hyperactivity disorder (ADHD) are well-documented, findings regarding orienting attention in ADHD have been inconsistent. The current study investigated the mechanism of attentional orienting in children with ADHD by examining their attentional bias towards threatening stimuli. Furthermore, we explored the modulating role of anxiety levels in ADHD on this attentional bias. Methods: In Experiment 1, 20 children with ADHD and 26 typically developing children (TDC) performed a continuous performance task that included task-irrelevant distractions consisting of angry faces and neutral places. In Experiment 2, 21 children with ADHD and 25 TDC performed the same task, but with angry and neutral faces as distractors. To measure children's anxiety levels, the State-Trait Anxiety Inventory was administered before each experiment. Results: In Experiment 1, results revealed no attentional bias effects in children with ADHD, whereas TDC exhibited attentional capture effects by both types of distractors. However, in Experiment 2, ADHD children demonstrated an attentional bias towards angry faces, which revealed a significant positive correlation with their trait anxiety levels (r = 0.61, p < 0.05). Further analyses combining all ADHD children showed that trait anxiety levels in Experiment 2 were significantly higher than those in Experiment 1. Finally, a significant positive correlation was found between anxiety levels and attentional bias towards angry faces in all ADHD children (r = 0.36, p < 0.01). Conclusions: Children with ADHD exhibited atypical attentional-orienting effects to threats, and their levels of trait anxiety appeared to modulate such attentional-orienting mechanisms.
Affiliation(s)
- So-Yeon Kim
- Department of Psychology, Duksung Women’s University, Seoul 01369, Republic of Korea
2
Rodger H, Sokhn N, Lao J, Liu Y, Caldara R. Developmental eye movement strategies for decoding facial expressions of emotion. J Exp Child Psychol 2023;229:105622. PMID: 36641829; DOI: 10.1016/j.jecp.2022.105622.
Abstract
In our daily lives, we routinely look at the faces of others to try to understand how they are feeling. Few studies have examined the perceptual strategies that are used to recognize facial expressions of emotion, and none have attempted to isolate visual information use with eye movements throughout development. Therefore, we recorded the eye movements of children from 5 years of age up to adulthood during recognition of the six "basic emotions" to investigate when perceptual strategies for emotion recognition become mature (i.e., most adult-like). Using iMap4, we identified the eye movement fixation patterns for recognition of the six emotions across age groups in natural viewing and gaze-contingent (i.e., expanding spotlight) conditions. While univariate analyses failed to reveal significant differences in fixation patterns, more sensitive multivariate distance analyses revealed a U-shaped developmental trajectory with the eye movement strategies of the 17- to 18-year-old group most similar to adults for all expressions. A developmental dip in strategy similarity was found for each emotional expression revealing which age group had the most distinct eye movement strategy from the adult group: the 13- to 14-year-olds for sadness recognition; the 11- to 12-year-olds for fear, anger, surprise, and disgust; and the 7- to 8-year-olds for happiness. Recognition performance for happy, angry, and sad expressions did not differ significantly across age groups, but the eye movement strategies for these expressions diverged for each group. Therefore, a unique strategy was not a prerequisite for optimal recognition performance for these expressions. Our data provide novel insights into the developmental trajectories underlying facial expression recognition, a critical ability for adaptive social relations.
Affiliation(s)
- Helen Rodger
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, 1700 Fribourg, Switzerland
- Nayla Sokhn
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, 1700 Fribourg, Switzerland
- Junpeng Lao
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, 1700 Fribourg, Switzerland
- Yingdi Liu
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, 1700 Fribourg, Switzerland
- Roberto Caldara
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, 1700 Fribourg, Switzerland
3
Kim M, Cho Y, Kim SY. Effects of diagnostic regions on facial emotion recognition: The moving window technique. Front Psychol 2022;13:966623. PMID: 36186300; PMCID: PMC9518794; DOI: 10.3389/fpsyg.2022.966623.
Abstract
With regard to facial emotion recognition, previous studies found that specific facial regions were attended more in order to identify certain emotions. We investigated whether a preferential search for emotion-specific diagnostic regions could contribute toward the accurate recognition of facial emotions. Twenty-three neurotypical adults performed an emotion recognition task using six basic emotions: anger, disgust, fear, happiness, sadness, and surprise. The participants’ exploration patterns for the faces were measured using the Moving Window Technique (MWT). This technique presented a small window on a blurred face, and the participants explored the face stimuli through a mouse-controlled window in order to recognize the emotions on the face. Our results revealed that when the participants explored the diagnostic regions for each emotion more frequently, the correct recognition of the emotions occurred at a faster rate. To the best of our knowledge, this study is the first to present evidence that an exploration of emotion-specific diagnostic regions can predict the reaction time of accurate emotion recognition among neurotypical adults. Such findings can be further applied in the evaluation and/or training (regarding emotion recognition functions) of both typically and atypically developing children with emotion recognition difficulties.
Affiliation(s)
- Minhee Kim
- Department of Psychology, Duksung Women’s University, Seoul, South Korea
- Youngwug Cho
- Department of Computer Science, Hanyang University, Seoul, South Korea
- So-Yeon Kim
- Department of Psychology, Duksung Women’s University, Seoul, South Korea
- Correspondence: So-Yeon Kim
4
Poulin-Dubois D, Hastings PD, Chiarella SS, Geangu E, Hauf P, Ruel A, Johnson A. The eyes know it: Toddlers' visual scanning of sad faces is predicted by their theory of mind skills. PLoS One 2018;13:e0208524. PMID: 30521593; PMCID: PMC6283596; DOI: 10.1371/journal.pone.0208524.
Abstract
The current research explored toddlers' gaze fixation during a scene showing a person expressing sadness after a ball is stolen from her. The relation between the duration of gaze fixation on different parts of the person's sad face (e.g., eyes, mouth) and theory of mind skills was examined. Eye tracking data indicated that before the actor experienced the negative event, toddlers divided their fixation equally between the actor's happy face and other distracting objects, but looked longer at the face after the ball was stolen and she expressed sadness. The strongest predictor of increased focus on the sad face versus other elements of the scene was toddlers' ability to predict others' emotional reactions when outcomes fulfilled (happiness) or failed to fulfill (sadness) desires, whereas toddlers' visual perspective-taking skills predicted their more specific focusing on the actor's eyes and, for boys only, mouth. Furthermore, gender differences emerged in toddlers' fixation on parts of the scene. Taken together, these findings suggest that top-down processes are involved in the scanning of emotional facial expressions in toddlers.
Affiliation(s)
- Paul D. Hastings
- Department of Psychology, University of California Davis, Davis, California, United States of America
- Elena Geangu
- Department of Psychology, University of York, York, United Kingdom
- Petra Hauf
- Department of Psychology, St. Francis Xavier University, Antigonish, Nova Scotia, Canada
- Alexa Ruel
- Department of Psychology, Concordia University, Montréal, Québec, Canada
- Aaron Johnson
- Department of Psychology, Concordia University, Montréal, Québec, Canada
5
Neta M, Dodd MD. Through the eyes of the beholder: Simulated eye-movement experience ("SEE") modulates valence bias in response to emotional ambiguity. Emotion 2018;18:1122-1127. PMID: 29389209; PMCID: PMC7952038; DOI: 10.1037/emo0000421.
Abstract
Although some facial expressions provide clear information about people's emotions and intentions (happy, angry), others (surprise) are ambiguous because they can signal both positive (e.g., surprise party) and negative outcomes (e.g., witnessing an accident). Without a clarifying context, surprise is interpreted as positive by some and negative by others, and this valence bias is stable across time. When compared to fearful expressions, which are consistently rated as negative, surprise and fear share similar morphological features (e.g., widened eyes) primarily in the upper part of the face. Recently, we demonstrated that the valence bias was associated with a specific pattern of eye movements (positive bias associated with faster fixation to the lower part of the face). In this follow-up, we identified two participants from our previous study who had the most positive and most negative valence bias. We used their eye movements to create a moving window such that new participants viewed faces through the eyes of one of our previous participants (subjects saw only the areas of the face that were directly fixated by the original participants in the exact order they were fixated; i.e., Simulated Eye-movement Experience). The input provided by these windows modulated the valence ratings of surprise, but not fear faces. These findings suggest there are meaningful individual differences in how people process faces, and that these differences impact our emotional perceptions. Furthermore, this study is unique in its approach to examining individual differences in emotion by creating a new methodology adapted from those used primarily in the vision/attention domain.
Affiliation(s)
- Maital Neta
- Department of Psychology, University of Nebraska-Lincoln
- Michael D Dodd
- Department of Psychology, University of Nebraska-Lincoln
6
Birmingham E, Svärd J, Kanan C, Fischer H. Exploring emotional expression recognition in aging adults using the Moving Window Technique. PLoS One 2018;13:e0205341. PMID: 30335767; PMCID: PMC6193651; DOI: 10.1371/journal.pone.0205341.
Abstract
Adult aging is associated with difficulties in recognizing negative facial expressions such as fear and anger. However, happiness and disgust recognition is generally found to be less affected. Eye-tracking studies indicate that the diagnostic features of fearful and angry faces are situated in the upper regions of the face (the eyes), and for happy and disgusted faces in the lower regions (nose and mouth). These studies also indicate age-differences in visual scanning behavior, suggesting a role for attention in emotion recognition deficits in older adults. However, because facial features can be processed extrafoveally, and expression recognition occurs rapidly, eye-tracking has been questioned as a measure of attention during emotion recognition. In this study, the Moving Window Technique (MWT) was used as an alternative to the conventional eye-tracking technology. By restricting the visual field to a moveable window, this technique provides a more direct measure of attention. We found a strong bias to explore the mouth across both age groups. Relative to young adults, older adults focused less on the left eye, and marginally more on the mouth and nose. Despite these different exploration patterns, older adults were most impaired in recognition accuracy for disgusted expressions. Correlation analysis revealed that among older adults, more mouth exploration was associated with faster recognition of both disgusted and happy expressions. As a whole, these findings suggest that in aging there are both attentional differences and perceptual deficits contributing to less accurate emotion recognition.
Affiliation(s)
- Elina Birmingham
- Faculty of Education, Simon Fraser University, Burnaby, BC, Canada
- Joakim Svärd
- Department of Psychology, Stockholm University, Stockholm, Sweden
- Christopher Kanan
- Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology, Rochester, NY, United States of America
- Håkan Fischer
- Department of Psychology, Stockholm University, Stockholm, Sweden
7
Abstract
Visual scanning of faces in individuals with Autism Spectrum Disorder (ASD) has been intensively studied using eye-tracking technology. However, most studies have relied on the same analytic approach based on the quantification of fixation time, which may have failed to reveal some important features of the scanning strategies employed by individuals with ASD. In the present study, we examined the scanning of faces in a group of 20 preschoolers with ASD and their typically developing (TD) peers, using both the classical fixation-time approach and a newly developed approach based on transition matrices and network analysis. We found between-group differences in the eye region in terms of fixation time, with increased right eye fixation time for the ASD group and increased left eye fixation time for the TD group. Our complementary network approach revealed that the left eye might play the role of an anchor in the scanning strategies of TD children but not in that of children with ASD. In ASD, fixation time on the different facial parts was almost exclusively dependent on exploratory activity. Our study highlights the importance of developing innovative measures that bear the potential of revealing new properties of the scanning strategies employed by individuals with ASD.
8
Viewing Complex, Dynamic Scenes "Through the Eyes" of Another Person: The Gaze-Replay Paradigm. PLoS One 2015;10:e0134347. PMID: 26252493; PMCID: PMC4529207; DOI: 10.1371/journal.pone.0134347.
Abstract
We present a novel “Gaze-Replay” paradigm that allows the experimenter to directly test how particular patterns of visual input—generated from people’s actual gaze patterns—influence the interpretation of the visual scene. Although this paradigm can potentially be applied across domains, here we applied it specifically to social comprehension. Participants viewed complex, dynamic scenes through a small window displaying only the foveal gaze pattern of a gaze “donor.” This was intended to simulate the donor’s visual selection, such that a participant could effectively view scenes “through the eyes” of another person. Throughout the presentation of scenes presented in this manner, participants completed a social comprehension task, assessing their abilities to recognize complex emotions. The primary aim of the study was to assess the viability of this novel approach by examining whether these Gaze-Replay windowed stimuli contain sufficient and meaningful social information for the viewer to complete this social perceptual and cognitive task. The results of the study suggested this to be the case; participants performed better in the Gaze-Replay condition compared to a temporally disrupted control condition, and compared to when they were provided with no visual input. This approach has great future potential for the exploration of experimental questions aiming to unpack the relationship between visual selection, perception, and cognition.
9
Matsuzaki N, Schwarzlose RF, Nishida M, Ofen N, Asano E. Upright face-preferential high-gamma responses in lower-order visual areas: evidence from intracranial recordings in children. Neuroimage 2015;109:249-59. PMID: 25579446; DOI: 10.1016/j.neuroimage.2015.01.015.
Abstract
Behavioral studies demonstrate that a face presented in the upright orientation attracts attention more rapidly than an inverted face. Saccades toward an upright face take place in 100-140 ms following presentation. The present study using electrocorticography determined whether upright face-preferential neural activation, as reflected by augmentation of high-gamma activity at 80-150 Hz, involved the lower-order visual cortex within the first 100 ms post-stimulus presentation. Sampled lower-order visual areas were verified by the induction of phosphenes upon electrical stimulation. These areas resided in the lateral-occipital, lingual, and cuneus gyri along the calcarine sulcus, roughly corresponding to V1 and V2. Measurement of high-gamma augmentation during central (circular) and peripheral (annular) checkerboard reversal pattern stimulation indicated that central-field stimuli were processed by the more polar surface whereas peripheral-field stimuli by the more anterior medial surface. Upright face stimuli, compared to inverted ones, elicited up to 23% larger augmentation of high-gamma activity in the lower-order visual regions at 40-90 ms. Upright face-preferential high-gamma augmentation was more highly correlated with high-gamma augmentation for central than peripheral stimuli. Our observations are consistent with the hypothesis that lower-order visual regions, especially those for the central field, are involved in visual cues for rapid detection of upright face stimuli.
Affiliation(s)
- Naoyuki Matsuzaki
- Department of Pediatrics, Children's Hospital of Michigan, Wayne State University, Detroit Medical Center, Detroit, MI 48201, USA
- Rebecca F Schwarzlose
- Institute of Gerontology, Wayne State University, Detroit, MI, USA; Trends in Cognitive Sciences, Cell Press, Cambridge, MA 02139, USA
- Masaaki Nishida
- Department of Pediatrics, Children's Hospital of Michigan, Wayne State University, Detroit Medical Center, Detroit, MI 48201, USA; Department of Anesthesiology, Hanyu General Hospital, Hanyu City, Saitama 348-8505, Japan
- Noa Ofen
- Department of Pediatrics, Children's Hospital of Michigan, Wayne State University, Detroit Medical Center, Detroit, MI 48201, USA; Institute of Gerontology, Wayne State University, Detroit, MI, USA
- Eishi Asano
- Department of Pediatrics, Children's Hospital of Michigan, Wayne State University, Detroit Medical Center, Detroit, MI 48201, USA; Department of Neurology, Children's Hospital of Michigan, Wayne State University, Detroit Medical Center, Detroit, MI 48201, USA
10
Xiao NG, Quinn PC, Wheeler A, Pascalis O, Lee K. Natural, but not artificial, facial movements elicit the left visual field bias in infant face scanning. Neuropsychologia 2014;62:175-83. PMID: 25064049; PMCID: PMC4167482; DOI: 10.1016/j.neuropsychologia.2014.07.017.
Abstract
A left visual field (LVF) bias has been consistently reported in eye movement patterns when adults look at face stimuli, which reflects hemispheric lateralization of face processing and eye movements. However, the emergence of the LVF attentional bias in infancy is less clear. The present study investigated the emergence and development of the LVF attentional bias in infants from 3 to 9 months of age with moving face stimuli. We specifically examined the naturalness of facial movements in infants' LVF attentional bias by comparing eye movement patterns in naturally and artificially moving faces. Results showed that 3- to 5-month-olds exhibited the LVF attentional bias only in the lower half of naturally moving faces, but not in artificially moving faces. Six- to 9-month-olds showed the LVF attentional bias in both the lower and upper face halves only in naturally moving, but not in artificially moving faces. These results suggest that the LVF attentional bias for face processing may emerge around 3 months of age and is driven by natural facial movements. The LVF attentional bias reflects the role of natural face experience in real-life situations that may drive the development of hemispheric lateralization of face processing in infancy.
Affiliation(s)
- Naiqi G Xiao
- University of Toronto, 45 Walmer Road, Toronto, Ontario, Canada M5R 2X2
- Paul C Quinn
- University of Delaware, Newark, DE 19716, United States
- Andrea Wheeler
- University of Toronto, 45 Walmer Road, Toronto, Ontario, Canada M5R 2X2
- Kang Lee
- University of Toronto, 45 Walmer Road, Toronto, Ontario, Canada M5R 2X2
11
Marzoli D, Prete G, Tommasi L. Perceptual asymmetries and handedness: a neglected link? Front Psychol 2014;5:163. PMID: 24592250; PMCID: PMC3938099; DOI: 10.3389/fpsyg.2014.00163.
Abstract
Healthy individuals tend to weight the left side of visual space more heavily than the right in a variety of contexts, ranging from pseudoneglect to perceptual asymmetries for faces. Among the common explanations proposed for the attentional and perceptual advantages of the left visual field, a link with the prevalence of right-handedness in humans has never been suggested, although some evidence seems to converge in favor of a bias of spatial attention toward the region most likely coincident with another person's right hand during a face-to-face interaction. Such a bias might imply an increased efficiency in monitoring both communicative and aggressive acts, the right limb being more used than the left in both types of behavior. Although attentional and perceptual asymmetries could be linked to right-handedness at the level of phylogeny because of the evolutionary advantage of directing attention toward the region where others' dominant hand usually operates, it is also legitimate to question whether, at the ontogenetic level, frequent exposure to right-handed individuals may foster leftward biases. These views are discussed in the light of extant literature, and a number of tests are proposed in order to assess our hypotheses.
Affiliation(s)
- Daniele Marzoli
- Department of Psychological Sciences, Humanities and Territory, University of Chieti, Chieti, Italy
- Giulia Prete
- Department of Neuroscience and Imaging, University of Chieti, Chieti, Italy
- Luca Tommasi
- Department of Psychological Sciences, Humanities and Territory, University of Chieti, Chieti, Italy