1. Let's face it! The role of social anxiety and executive functions in recognizing others' emotions from faces: Evidence from autism and specific learning disorders. Dev Psychopathol 2024:1-13. [PMID: 38327107] [DOI: 10.1017/s0954579424000038]
Abstract
Youth with different developmental disorders might experience challenges when dealing with facial emotion recognition (FER). By comparing FER and related emotional and cognitive factors across developmental disorders, researchers can gain a better understanding of the challenges and strengths associated with each condition. The aim of the present study was to investigate how social anxiety and executive functioning might underlie FER in youth with and without autism spectrum disorder (ASD) and specific learning disorders (SLD). The study involved 263 children and adolescents between 8 and 16 years old, divided into three groups matched for age, sex, and IQ: 60 (52 M) with ASD without intellectual disability, 63 (44 M) with SLD, and 140 (105 M) non-diagnosed. Participants completed an FER test and three executive function tasks (inhibition, updating, and set-shifting), and parents filled in a questionnaire on their children's social anxiety. Our results suggest that better FER was associated with higher social anxiety and better updating skills in ASD, but with lower social anxiety in SLD. Clinical practice should focus on coping strategies for autistic youth, who may feel anxiety when facing social cues, and on self-efficacy and social worries in SLD. Executive functioning should also be addressed to support social learning in autism.

2. Preschoolers' cognitive flexibility and emotion understanding: a developmental perspective. Front Psychol 2024; 15:1280739. [PMID: 38390421] [PMCID: PMC10881749] [DOI: 10.3389/fpsyg.2024.1280739]
Abstract
Introduction Cognitive flexibility is the ability to adapt to changing tasks or problems, while emotion understanding is the ability to interpret emotional cues and information in different contexts. Both abilities are crucial for preschoolers' socialization. Methods This study selected 532 preschool children aged 3-6 years from two kindergartens in a central province of China. The Dimensional Change Card Sorting (DCCS) task and emotion understanding tasks were used to investigate the developmental characteristics of cognitive flexibility and emotion understanding abilities, and their relationship. Results The results showed: (1) For cognitive flexibility, children older than 5 years scored significantly higher than younger children, and girls scored higher than boys. (2) For facial emotion recognition: (i) Children's recognition scores for happy, sad, and angry expressions were significantly higher than for fear; children could accurately recognize happy, sad, and angry emotions by age 3, while fear recognition developed rapidly after age 5; (ii) Girls scored higher in recognizing fearful faces than boys. (3) For situational emotion understanding: (i) Children's development followed the hierarchical order of external, desire, clue, and belief-based understanding. External and desire-based understanding already reached high levels by age 3, while clue and belief-based understanding developed quickly after age 5; (ii) Girls scored higher than boys in belief-based emotion understanding. (4) Cognitive flexibility significantly predicted children's facial emotion recognition as well as their external and desire-based emotion understanding. Discussion Parents and teachers should cultivate children's cognitive flexibility and provide personalized support. They should also fully grasp the characteristics of children's emotion understanding development, systematically nurture their emotion understanding abilities, and leverage cognitive flexibility training to improve their emotion understanding.

3. Facial emotion recognition is associated with executive functions and depression scores, but not staging of dementia, in mild-to-moderate Alzheimer's disease. Brain Behav 2024; 14:e3390. [PMID: 38376045] [PMCID: PMC10808849] [DOI: 10.1002/brb3.3390]
Abstract
BACKGROUND Although deficits in facial emotion recognition (FER) significantly affect interpersonal communication and social functioning, there is no consensus on how Alzheimer's disease (AD) affects FER. In this study, we aimed to investigate the clinical and neuropsychological factors affecting the possible deficits in the FER abilities of patients with AD. METHODS This cross-sectional study included 37 patients with mild [clinical dementia rating (CDR) scale score = 1] or moderate (CDR = 2) AD, in whom vascular dementia and depression were excluded, and 24 cognitively normal (CDR = 0) subjects. FER ability was determined using the facial emotion identification test (FEIT) and facial emotion discrimination test (FEDT). All participants underwent the mini-mental state examination (MMSE), frontal assessment battery (FAB), and geriatric depression scale (GDS). The neuropsychiatric inventory-clinician rating scale (NPI-C), Katz index of independence in activities of daily living, and Lawton instrumental activities of daily living were also administered to patients with AD. RESULTS The FEIT and FEDT total scores showed that patients with mild and moderate AD had significant FER deficits compared to healthy controls. However, no significant difference was observed between patients with mild and moderate AD in the FEIT and FEDT total scores. FEIT and FEDT scores were not correlated with the MMSE and NPI-C total and subscale scores in patients with AD. Linear regression indicated that FEIT and FEDT total scores were significantly related to age and FAB scores. The GDS score negatively moderated the relationship between FAB and FEDT. CONCLUSIONS This study demonstrated decreased FER ability in patients with AD. The critical factor in FER deficits in AD is the presence of dementia, not its stage. Executive functions and depression (even at a subsyndromal level), areas about which knowledge is still limited, were found to be associated with FER abilities.
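
A moderation effect like the one reported (GDS moderating the FAB-FEDT relation) is typically tested as an interaction term in a linear regression. Below is a minimal sketch of such an analysis in Python; the data file and column names (fedt, fab, gds, age) are hypothetical, not the study's.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-patient data; column names are illustrative only.
df = pd.read_csv("ad_patients.csv")  # columns: fedt, fab, gds, age

# Mean-center the predictors so the main effects stay interpretable,
# then test moderation as a FAB x GDS interaction term.
for col in ["fab", "gds"]:
    df[col + "_c"] = df[col] - df[col].mean()

model = smf.ols("fedt ~ fab_c * gds_c + age", data=df).fit()
print(model.summary())
# A significant negative fab_c:gds_c coefficient would match the reported
# pattern: higher depression scores weaken the FAB-FEDT relationship.
```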

4. "I cannot see your fear!" Altered recognition of fearful facial expressions in anorexia nervosa. Front Psychol 2023; 14:1280719. [PMID: 38125860] [PMCID: PMC10732310] [DOI: 10.3389/fpsyg.2023.1280719]
Abstract
Background The evidence about facial emotion recognition in anorexia nervosa, as well as the role of alexithymic traits in this emotional ability, is conflicting and heterogeneous. Objective We assessed the capability of recognizing facial expressions of two primary emotions, fear and anger, in the context of anorexia nervosa. Methods Women affected by anorexia nervosa were compared with healthy-weight women in a well-established implicit facial emotion recognition task. Both reaction time and level of accuracy were computed. Moreover, individual levels of alexithymia were assessed through a standard self-report questionnaire. Results Participants with anorexia nervosa showed significantly lower performance in terms of reaction time and accuracy when the emotion of fear (but not anger) was the target. Notably, such an alteration was linked to the levels of alexithymia reported in the self-report questionnaire. Conclusion In anorexia nervosa, difficulties in processing fearful (but not angry) facial expressions may be observed and appear linked to higher levels of alexithymic traits. We suggest future research investigating emotional processing while taking into account the bodily dimensions of emotional awareness.

5. Can the Ability to Recognize Facial Emotions in Individuals With Neurodegenerative Disease be Improved? A Systematic Review and Meta-analysis. Cogn Behav Neurol 2023; 36:202-218. [PMID: 37410880] [PMCID: PMC10683976] [DOI: 10.1097/wnn.0000000000000348]
Abstract
BACKGROUND Facial emotion recognition (FER) is commonly impaired in individuals with neurodegenerative disease (NDD). This impairment has been linked to an increase in behavioral disorders and caregiver burden. OBJECTIVE To identify interventions targeting the improvement of FER ability in individuals with NDD and to investigate the magnitude of their efficacy. We also wanted to explore the duration of the intervention effects and their possible impact on behavioral and psychological symptoms of dementia and on caregiver burden. METHOD We included 15 studies with 604 individuals who had been diagnosed with NDD. The identified interventions were categorized into three types of approach (cognitive, neurostimulation, and pharmacological) as well as a combined approach (neurostimulation with pharmacological). RESULTS The three types of approaches pooled together had a significant large effect size for FER ability improvement (standardized mean difference: 1.21, 95% CI = 0.11, 2.31, z = 2.15, P = 0.03). The improvement persisted after the intervention, in tandem with a decrease in behavioral disorders and caregiver burden. CONCLUSION A combination of different approaches for FER ability improvement may be beneficial for individuals with NDD and their caregivers.
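
The pooled effect quoted above (standardized mean difference 1.21 with a 95% CI) is the kind of figure a random-effects meta-analysis produces. A minimal DerSimonian-Laird sketch with made-up per-study values, not the review's data:

```python
import numpy as np

# Hypothetical per-study effect sizes (Hedges' g) and their variances.
g = np.array([1.4, 0.6, 2.0, 0.9, 1.1])
v = np.array([0.20, 0.15, 0.30, 0.10, 0.12])

# DerSimonian-Laird estimate of the between-study variance tau^2.
w_fixed = 1.0 / v
mean_fixed = np.sum(w_fixed * g) / np.sum(w_fixed)
q = np.sum(w_fixed * (g - mean_fixed) ** 2)
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (q - (len(g) - 1)) / c)

# Random-effects pooling with weights 1 / (v_i + tau^2).
w = 1.0 / (v + tau2)
smd = np.sum(w * g) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
print(f"pooled SMD = {smd:.2f}, "
      f"95% CI = [{smd - 1.96 * se:.2f}, {smd + 1.96 * se:.2f}], "
      f"z = {smd / se:.2f}")
```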

6. Voluntary imitation of dynamic facial expressions in attention deficit hyperactivity disorder: a facial-behavior analysis. J Clin Exp Neuropsychol 2023; 45:915-927. [PMID: 38380655] [DOI: 10.1080/13803395.2024.2320464]
Abstract
OBJECTIVE The difficulties involved in social interaction among children with attention deficit hyperactivity disorder (ADHD) have been shown in many studies. Based on the knowledge that the imitation of facial expressions is a key factor in social interaction and functionality, prior studies have focused on the evaluation of facial expressions in individuals with ADHD. However, little is known about voluntary facial mimicry in individuals with ADHD. In this context, we aimed to evaluate the voluntary facial-imitation intensity of dynamic facial expressions in children with ADHD. METHOD Forty-one children with ADHD and 53 typically developing children were included in the study. Participants were shown a video of the six basic emotions and neutral facial expressions selected from the EU-Emotion Stimulus Set. After each emotion, the instruction "now imitate it" was given. While the children watched the video, their faces were recorded with a webcam. The intensity of the children's voluntary facial imitations was examined with a computer vision program (OpenFace) that performs facial analysis on recorded videos. RESULTS There was no significant difference between the groups in terms of facial emotion recognition accuracy. In group comparisons of voluntary facial mimicry, children with ADHD showed a significantly higher imitation intensity after emotional expressions of sadness, surprise, and fear. There was no difference between the groups after the emotions of happiness, anger, and disgust. CONCLUSION This unobtrusive, noninvasive, and cost-effective method allowed us to measure the quantitative differences in facial mimicry between children with ADHD and typically developing children. Our results contribute new information to the literature by indicating which emotions can be used in the evaluation of social communication skills, as well as intervention targets for these skills, in children with ADHD.

7. Neural Correlates of Facial Emotion Recognition Impairment in Blepharospasm: A Functional Magnetic Resonance Imaging Study. Neuroscience 2023; 531:50-59. [PMID: 37709002] [DOI: 10.1016/j.neuroscience.2023.09.002]
Abstract
Selective impairment in recognizing facial expressions of disgust was reported in patients with focal dystonia several years ago, but the basic neural mechanisms remain largely unexplored. Therefore, we investigated whether dysfunction of the brain network involved in disgust recognition processing was related to this selective impairment in blepharospasm. Facial emotion recognition evaluations and resting-state functional magnetic resonance imaging were performed in 33 blepharospasm patients and 33 healthy controls (HCs). The disgust processing network was constructed, and modularity analyses were performed to identify sub-networks. Regional functional indexes and intra- and inter-functional connections were calculated and compared between the groups. Compared to HCs, blepharospasm patients demonstrated a worse performance in disgust recognition. In addition, functional connections within the sub-network involved in perception processing rather than recognition processing of disgust were significantly decreased in blepharospasm patients compared to HCs. Specifically, decreased functional connections were noted between the left fusiform gyrus (FG) and right middle occipital gyrus (MOG), the left FG and right FG, and the right FG and left MOG. We identified decreased functional activity in these regions, as indicated by a lower amplitude of low-frequency fluctuation in the left MOG, fractional amplitude of low-frequency fluctuation in the right FG, and regional homogeneity in the right FG and left MOG in blepharospasm patients versus HCs. Our results suggest that dysfunctions of the disgust processing network exist in blepharospasm. A deficit in disgust emotion recognition may be attributed to disturbances in the early perception of visual disgust stimuli in blepharospasm patients.

8. Facial expression recognition in virtual reality environments: challenges and opportunities. Front Psychol 2023; 14:1280136. [PMID: 37885738] [PMCID: PMC10598841] [DOI: 10.3389/fpsyg.2023.1280136]
Abstract
This study delved into facial emotion recognition within virtual reality (VR) environments. Using a novel system built on MobileNet V2, a lightweight convolutional neural network, we tested emotion detection on 15 university students. High recognition rates were observed for emotions such as "Neutral", "Happiness", "Sadness", and "Surprise". However, the model struggled with "Anger" and "Fear", often confusing them with "Neutral". These discrepancies might be attributed to overlapping facial indicators, limited training samples, and the precision of the devices used. Nonetheless, our research underscores the viability of using facial emotion recognition technology in VR and recommends model improvements, the adoption of more advanced devices, and a more holistic approach to foster the future development of VR emotion recognition.
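
A classifier of the kind described, MobileNet V2 as a lightweight backbone with a small emotion head, can be sketched as below; the input size, class list, and training setup are assumptions, not the authors' exact configuration.

```python
import tensorflow as tf

# Emotion categories assumed from the abstract; order is arbitrary.
EMOTIONS = ["Neutral", "Happiness", "Sadness", "Surprise", "Anger", "Fear"]

# ImageNet-pretrained MobileNetV2 backbone, frozen for transfer learning.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(len(EMOTIONS), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # hypothetical datasets
```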

9. Social cognition and adjustment in adult survivors of pediatric central nervous system tumors. Cancer 2023; 129:3064-3075. [PMID: 37329245] [PMCID: PMC10528486] [DOI: 10.1002/cncr.34889]
Abstract
BACKGROUND Survivors of pediatric central nervous system (CNS) tumors are at risk for neurocognitive and social difficulties throughout childhood. This study characterized social cognition (perception and reasoning from social cues) and adjustment in adulthood. METHODS A total of 81 adult survivors of pediatric CNS tumors (51% female; mean [SD] age, 28.0 [5.8] years), were recruited across four groups: (1) no radiation therapy (RT) [n = 21], (2) infratentorial (IT) tumors + focal RT [n = 20], (3) IT tumors + craniospinal irradiation [n = 20], and (4) supratentorial tumors + focal RT [n = 20]. Prevalence of social cognitive and adjustment impairments was compared to test norms. Multivariable models examined clinical and neurocognitive predictors of social cognition and its impact on functional outcomes. RESULTS Survivors demonstrated elevated risk of severe social cognitive impairments (social perception Morbidity Ratio [95% CI] 5.70 [3.46-9.20]), but self-reported few social adjustment problems. Survivors of IT tumors treated with craniospinal irradiation performed nearly 1 SD worse than survivors treated without RT on multiple measures of social cognition (e.g., social perception: β = -0.89, p = .004). Impaired executive functioning and nonverbal reasoning were associated with worse social cognitive performance (e.g., social perception: β = -0.75, p < .001; β = -0.84, p < .001, respectively). Better social perception was associated with higher odds of attaining full-time employment (odds ratio, 1.52 [1.17-1.97]) and at least some college education (odds ratio, 1.39 [1.11-1.74]). CONCLUSIONS Adult survivors of CNS tumors are at elevated risk of severely impaired social cognition, but do not perceive social adjustment difficulties. Better understanding of potential mechanisms underlying social cognitive deficits may inform intervention targets to promote better functional outcomes for at-risk survivors.

10. New Trends in Emotion Recognition Using Image Analysis by Neural Networks, A Systematic Review. Sensors (Basel) 2023; 23:7092. [PMID: 37631629] [PMCID: PMC10458371] [DOI: 10.3390/s23167092]
Abstract
Facial emotion recognition (FER) is a computer vision process aimed at detecting and classifying human emotional expressions. FER systems are currently used in a vast range of applications in areas such as education, healthcare, and public safety; therefore, detection and recognition accuracy is very important. Similar to any computer vision task based on image analysis, FER solutions are well suited to integration with artificial intelligence solutions represented by different neural network varieties, especially deep neural networks, which have shown great potential in recent years due to their feature extraction capabilities and computational efficiency over large datasets. In this context, this paper reviews the latest developments in the FER area, with a focus on recent neural network models that implement specific facial image analysis algorithms to detect and recognize facial emotions. The paper's scope is to present, from historical and conceptual perspectives, the evolution of the neural network architectures that have produced significant results in the FER area. The paper favors convolutional neural network (CNN)-based architectures over other neural network architectures, such as recurrent neural networks or generative adversarial networks, highlighting the key elements and performance of each architecture, and the advantages and limitations of the proposed models in the analyzed papers. Additionally, the paper presents the available datasets that are currently used for emotion recognition from facial expressions and micro-expressions. The usage of FER systems is also highlighted in various domains such as healthcare, education, security, and the social IoT. Finally, open issues and possible future developments in the FER area are identified.

11. Comparing Synchronicity in Body Movement among Jazz Musicians with Their Emotions. Sensors (Basel) 2023; 23:6789. [PMID: 37571571] [PMCID: PMC10422624] [DOI: 10.3390/s23156789]
Abstract
This paper presents novel preliminary research that investigates the relationship between the flow of a group of jazz musicians, quantified through multi-person pose synchronization, and their collective emotions. We developed real-time software that calculates the physical synchronicity of team members by tracking the differences in arm, leg, and head movements using Lightweight OpenPose. We employ facial expression recognition to evaluate the musicians' collective emotions. Through correlation and regression analysis, we establish that higher levels of synchronized body and head movements correspond to lower levels of disgust, anger, and sadness, and to higher levels of joy among the musicians. Furthermore, we utilize 1-D CNNs to predict the collective emotions of the musicians. The model leverages 17 body synchrony keypoint vectors as features, resulting in a training accuracy of 61.47% and a test accuracy of 66.17%.
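
A simplified stand-in for the synchrony features described above: given per-frame keypoint tracks for two musicians (e.g., from Lightweight OpenPose), correlate their frame-to-frame movement magnitudes per keypoint. This is a sketch of the idea, not the paper's exact synchrony definition.

```python
import numpy as np

def keypoint_synchrony(p1: np.ndarray, p2: np.ndarray) -> np.ndarray:
    """Per-keypoint movement synchrony between two performers.

    p1, p2: arrays of shape (frames, 17, 2) holding (x, y) keypoint tracks.
    Returns 17 correlations of frame-to-frame movement magnitudes.
    """
    m1 = np.linalg.norm(np.diff(p1, axis=0), axis=2)  # (frames-1, 17)
    m2 = np.linalg.norm(np.diff(p2, axis=0), axis=2)
    return np.array([np.corrcoef(m1[:, k], m2[:, k])[0, 1] for k in range(17)])

# These 17 values per pair could then feed the 1-D CNN mentioned above.
```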

12. Measuring Engagement in Robot-Assisted Therapy for Autistic Children. Behav Sci (Basel) 2023; 13:618. [PMID: 37622758] [PMCID: PMC10451269] [DOI: 10.3390/bs13080618]
Abstract
Children with autism face a range of challenges when it comes to verbal and nonverbal communication. It is essential that children participate in a variety of social, educational, and therapeutic activities to acquire knowledge that is crucial for cognitive and social development. Recent studies have shown that children with autism may be interested in playing with an interactive robot. The robot can engage these children in ways that demonstrate and train essential aspects of human interaction, guiding them in therapeutic sessions to practice the more complex forms of interaction found in human-to-human social interaction. This study sets out to investigate Robot-Assisted Autism Therapy (RAAT) and the use of artificial intelligence (AI) approaches for measuring the engagement of children during therapy sessions. The study population consisted of five native Arabic-speaking autistic children aged between 4 and 11 years. The child-robot interaction was recorded by the robot camera and later used for analysis to detect engagement. The results show that the proposed system offers some accuracy in measuring the engagement of children with ASD. Our findings reveal that robot-assisted therapy is a promising field of application for intelligent social robots, especially to support autistic children in achieving their therapeutic and educational objectives.

13. Impaired facial emotion recognition in relation to social behaviours in de novo Parkinson's disease. J Neuropsychol 2023. [PMID: 37488778] [DOI: 10.1111/jnp.12341]
Abstract
Facial emotion recognition (FER) is a crucial component of social cognition and is essential in the regulation of social-interpersonal behaviour. Although FER impairment is well established in advanced PD, data about FER at the time of diagnosis and its relationship with social behavioural problems in daily life are lacking. The aim was to examine FER at the time of PD diagnosis compared to a matched healthy control (HC) group and to associate FER with indices of social behavioural problems. In total, 142 de novo, treatment-naïve PD patients and 142 HC were included. FER was assessed by the Ekman 60 faces test (EFT). Behavioural problems in PD patients were assessed using the Dysexecutive Questionnaire (DEX-self and DEX-proxy) and the Apathy Evaluation Scale (AES-self). PD patients had significantly lower EFT-total scores (p = .001) compared to HC, with worse recognition of Disgust (p = .001) and Sadness (p = .016). Correlational analyses yielded significant correlations between AES-self and both EFT-total (rs = .28) and Fear (rs = .22). Significant negative correlations were found between DEX-proxy and both EFT-total (rs = -.28) and Anger (rs = -.26). Analyses of DEX subscales showed that proxy ratings were significantly higher than patient ratings for the Social Conventions subscale (p = .047). This DEX-proxy subscale had the strongest correlation with EFT-total (rs = -.29). The results indicate that de novo PD patients already show impaired FER compared to HC. In addition, lower FER is linked to self-reported apathy and proxy-reported social-behavioural problems, especially concerning social conventions. These findings underline the importance of including social cognition measures in neuropsychological assessment, even in early PD.

14. Facial emotion recognition in adolescent depression: The role of childhood traumas, emotion regulation difficulties, alexithymia and empathy. Indian J Psychiatry 2023; 65:443-452. [PMID: 37325105] [PMCID: PMC10263086] [DOI: 10.4103/indianjpsychiatry.indianjpsychiatry_284_22]
Abstract
Introduction Facial emotion recognition (FER) is crucial for effective social competency, and problems in this skill are linked to depression during adolescence. In this study, we aimed to determine the rates of FER accuracy for negative (fearful, sad, angry, disgusted), positive (happy, surprised), and neutral emotions, and the possible predictors of FER skill for the most frequently confused emotions. Subjects and Methods A total of 67 drug-naive adolescents with depression (11 boys, 56 girls; 11-17 years) were recruited for the study. The Facial Emotion Recognition Test, the Childhood Trauma Questionnaire, the Basic Empathy Scale, the Difficulties in Emotion Regulation Scale, and the Toronto Alexithymia Scale were used. Results The analysis demonstrated that adolescents have more difficulty recognizing negative emotions than positive ones. The most frequently confused emotion was fear (39.8% of fearful expressions were recognized as surprise). Boys had lower fear recognition skill than girls, and higher childhood emotional abuse, physical abuse, emotional neglect, and difficulty in describing feelings predicted lower fear recognition skill. For sadness recognition skill, emotional neglect, difficulty in describing feelings, and depression severity were the negative predictors. Emotional empathy had a positive effect on disgust recognition skill. Conclusion Our findings demonstrate that impairment of FER skill for negative emotions is associated with childhood traumas, emotion regulation difficulties, alexithymia, and empathy in adolescent depression.

15. Facial emotion recognition and schizotypal traits: A systematic review of behavioural studies. Early Interv Psychiatry 2023; 17:121-140. [PMID: 35840128] [DOI: 10.1111/eip.13328]
Abstract
AIM Previous research has indicated that individuals expressing high schizotypal traits and patients with Schizotypal Personality Disorder (SPD) show deficits in facial emotion recognition compared to low schizotypal or control groups. On the other hand, non-significant findings also exist, and the association of facial emotion recognition deficits with the different schizotypal dimensions is not well defined, thus limiting any conclusive outcomes. Therefore, the aim of this systematic review was to further clarify this relationship. METHODS PsycINFO, Web of Science, Scopus, and PubMed were systematically searched, and 23 papers with a cross-sectional design were selected. Nineteen studies examined individuals with high schizotypal traits and four studies evaluated SPD individuals using behavioural facial emotion recognition paradigms and self-report measures or clinical interviews for schizotypal traits. All selected studies were published between 1994 and August 2020. RESULTS According to the reviewed evidence, high schizotypal individuals and SPD patients perform more poorly on facial emotion recognition tasks. Negative schizotypy was related to lower accuracy for positive and negative emotions and faster emotion labeling, while positive schizotypy was associated with worse accuracy for positive, negative, and neutral emotions and more biases. Disorganized schizotypy was associated with poorer accuracy for negative emotions, and suspiciousness with higher accuracy for disgust faces but lower total accuracy. CONCLUSIONS These findings are consistent with the vulnerability for schizophrenia spectrum disorders and support the idea that emotion recognition deficits are trait markers for these conditions. Thus, the effectiveness of early-intervention programmes could be increased by also targeting this class of deficits.

16. The impact of briefly observing faces in opaque facial masks on emotion recognition and empathic concern. Q J Exp Psychol (Hove) 2023; 76:404-418. [PMID: 35319298] [PMCID: PMC9896299] [DOI: 10.1177/17470218221092590]
Abstract
Since the outbreak of SARS-CoV-2 in 2019, global public health initiatives have advocated the community use of face masks to reduce the spread of the virus. Although the community use of facial coverings has been deemed essential for public health, there have been calls for enquiries to ascertain how face masks may impact non-verbal methods of communication. This study aimed to ascertain how brief observations of faces in opaque facial coverings could impact facial emotion recognition. A further aim was to ascertain whether there was an association between levels of empathic concern and facial emotion recognition when viewing masked faces. An opportunity sample of 199 participants, who resided in the United Kingdom, were randomly assigned to briefly observe either masked (n = 102) or unmasked (n = 97) faces. Participants in both conditions were required to view a series of facial expressions, from the Radboud Faces Database, with models conveying the emotional states of anger, disgust, fear, happiness, sadness, and surprise. Each face was presented to participants for 250 ms in both the masked and unmasked conditions. A 6 (emotion type) × 2 (masked/unmasked condition) mixed ANOVA revealed that viewing masked faces significantly reduced facial emotion recognition of disgust, fear, happiness, sadness, and surprise. However, there were no differences in the success rate of recognising the emotional state of anger between the masked and unmasked conditions. Furthermore, higher levels of empathic concern were associated with greater success in facially recognising the emotional state of disgust. The results of this study suggest that significant reductions in emotion recognition when viewing faces in opaque masks can still be observed when people are exposed to facial stimuli for a brief period of time.
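
The 6 × 2 mixed ANOVA described above can be sketched with the pingouin library; the long-format layout and column names are assumptions.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per participant x emotion cell.
df = pd.read_csv("fer_masks.csv")  # columns: pid, condition, emotion, accuracy

# Mixed ANOVA: emotion is within-subject (6 levels), mask condition between.
aov = pg.mixed_anova(data=df, dv="accuracy", within="emotion",
                     between="condition", subject="pid")
print(aov.round(3))

# Emotion-by-emotion follow-up comparisons, as in the paper's analysis.
post = pg.pairwise_tests(data=df, dv="accuracy", within="emotion",
                         between="condition", subject="pid")
print(post.round(3))
```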

17. Energetic optimization of an autonomous mobile socially assistive robot for autism spectrum disorder. Front Robot AI 2023; 9:1053115. [PMID: 36779207] [PMCID: PMC9909178] [DOI: 10.3389/frobt.2022.1053115]
Abstract
The usage of socially assistive robots for autism therapies has increased in recent years. This novel therapeutic tool allows the specialist to keep track of improvement in socially assistive tasks for autistic children, who hypothetically prefer object-based over human interactions. These kinds of tools also allow the collection of new information to aid the early diagnosis of neurodevelopmental disabilities. This work presents the integration of an output feedback adaptive controller for trajectory tracking and energetic autonomy of a mobile socially assistive robot for autism spectrum disorder under an event-driven control scheme. The proposed implementation integrates facial expression and emotion recognition algorithms to detect the emotions and identities of users (providing robustness, since the algorithm automatically generates missing input parameters, allowing it to complete the recognition) and to trigger a set of appropriate trajectories. The algorithmic implementation for the proposed socially assistive robot is presented and implemented in the Linux-based Robot Operating System (ROS). The optimization of energy consumption is the main contribution of this work, as it allows therapists to extend and adapt sessions with autistic children. An experiment validating the energetic optimization of the proposed event-driven control scheme is presented.
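
A minimal ROS sketch of the event-driven idea: a recognized emotion arrives as a message and triggers a pre-defined trajectory command. Topic names, message types, and the trajectory table are assumptions, not the authors' implementation.

```python
#!/usr/bin/env python
import rospy
from std_msgs.msg import String
from geometry_msgs.msg import Twist

# Hypothetical mapping from detected emotion to (linear, angular) setpoints.
TRAJECTORIES = {"happy": (0.2, 0.0), "sad": (0.1, 0.3)}

def on_emotion(msg, pub):
    lin, ang = TRAJECTORIES.get(msg.data, (0.0, 0.0))
    cmd = Twist()
    cmd.linear.x, cmd.angular.z = lin, ang
    pub.publish(cmd)  # the adaptive controller would track this command

if __name__ == "__main__":
    rospy.init_node("emotion_trajectory_trigger")
    pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)
    rospy.Subscriber("detected_emotion", String, on_emotion, callback_args=pub)
    rospy.spin()  # event-driven: act only when a recognition event arrives
```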

18. Eye-movement analysis on facial expression for identifying children and adults with neurodevelopmental disorders. Front Digit Health 2023; 5:952433. [PMID: 36874367] [PMCID: PMC9978093] [DOI: 10.3389/fdgth.2023.952433]
Abstract
Experienced psychiatrists identify people with autism spectrum disorder (ASD) and schizophrenia (Sz) through interviews based on diagnostic criteria, patients' responses, and various neuropsychological tests. To improve the clinical diagnosis of disorders such as ASD and Sz, the discovery of disorder-specific biomarkers and behavioral indicators with sufficient sensitivity is important. In recent years, studies have used machine learning to make more accurate predictions. Among various indicators, eye movement, which can be obtained easily, has attracted much attention, and various studies have been conducted for ASD and Sz. Eye-movement specificity during facial expression recognition has been studied extensively in the past, but modeling that takes into account differences in specificity among facial expressions has not been conducted. In this paper, we propose a method to detect ASD or Sz from eye movement during the Facial Emotion Identification Test (FEIT) while considering differences in eye movement due to the facial expressions presented. We also confirm that weighting using these differences improves classification accuracy. Our dataset consisted of 15 adults with ASD or Sz, 16 adult controls, 15 children with ASD, and 17 child controls. Random forest was used to weight each test and classify the participants as control, ASD, or Sz. The most successful approach used heat maps of eye fixations and convolutional neural networks (CNNs). This method classified Sz in adults with 64.5% accuracy, ASD in adults with up to 71.0% accuracy, and ASD in children with 66.7% accuracy. The ASD classification results differed significantly from chance (p < .05, binomial test). The results show a 10% and 16.7% improvement in accuracy, respectively, compared to a model that does not take facial expressions into account. For ASD, this indicates that modeling that weights the output for each image is effective.
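
The expression-weighted classification can be sketched as one classifier per presented facial expression, whose probabilistic outputs are combined with expression-specific weights; the gaze features and the weighting rule here are simplified assumptions rather than the paper's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fit_per_expression(X_by_expr, y):
    """One random forest per expression; X_by_expr maps expression name
    to an (n_subjects, n_features) array of gaze features."""
    return {e: RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
            for e, X in X_by_expr.items()}

def predict_weighted(models, X_by_expr, weights):
    """Weight each expression's class probabilities and sum them."""
    proba = sum(weights[e] * models[e].predict_proba(X_by_expr[e])
                for e in models)
    return np.argmax(proba, axis=1)  # e.g., 0/1/2 for control/ASD/Sz
```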

19. Light-FER: A Lightweight Facial Emotion Recognition System on Edge Devices. Sensors (Basel) 2022; 22:9524. [PMID: 36502225] [PMCID: PMC9738842] [DOI: 10.3390/s22239524]
Abstract
Facial emotion recognition (FER) systems are imperative in recent advanced artificial intelligence (AI) applications to realize better human-computer interaction. Most deep learning-based FER systems have issues with low accuracy and high resource requirements, especially when deployed on edge devices with limited computing resources and memory. To tackle these problems, a lightweight FER system, called Light-FER, is proposed in this paper; it is obtained from the Xception model through model compression. First, pruning is performed during network training to remove the less important connections within the Xception architecture. Second, the model is quantized to half-precision format, which significantly reduces its memory consumption. Third, different deep learning compilers performing several advanced optimization techniques are benchmarked to further accelerate the inference speed of the FER system. Lastly, to experimentally demonstrate the objectives of the proposed system on edge devices, Light-FER is deployed on an NVIDIA Jetson Nano.
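
The first two compression steps, magnitude pruning and half-precision quantization, can be sketched in PyTorch as below. torchvision does not ship Xception, so a generic backbone stands in; Light-FER's actual training loop, pruning schedule, and compiler benchmarking are not reproduced.

```python
import torch
import torch.nn.utils.prune as prune
import torchvision

model = torchvision.models.mobilenet_v3_small(num_classes=7)  # stand-in backbone

# 1) Remove the lowest-magnitude 50% of weights in each conv/linear layer.
for module in model.modules():
    if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the sparsity permanent

# 2) Quantize to half precision, roughly halving memory
#    (FP16 convolutions generally require a GPU such as the Jetson Nano's).
if torch.cuda.is_available():
    model = model.half().cuda().eval()
    dummy = torch.randn(1, 3, 224, 224, device="cuda").half()
    with torch.no_grad():
        logits = model(dummy)
```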

20. Comparison of Subjective Facial Emotion Recognition and "Facial Emotion Recognition Based on Multi-Task Cascaded Convolutional Network Face Detection" between Patients with Schizophrenia and Healthy Participants. Healthcare (Basel) 2022; 10:2363. [PMID: 36553887] [PMCID: PMC9777528] [DOI: 10.3390/healthcare10122363]
Abstract
Patients with schizophrenia may exhibit a flat affect and poor facial expressions. This study aimed to compare subjective facial emotion recognition (FER) and FER based on multi-task cascaded convolutional network (MTCNN) face detection in 31 patients with schizophrenia (patient group) and 40 healthy participants (healthy participant group). A Pepper robot was used to converse with the 71 participants; these conversations were recorded on video. Subjective FER (assigned by medical experts based on the video recordings) and FER based on MTCNN face detection were used to characterize facial expressions during the conversations. This study confirmed the discriminant accuracy of FER based on MTCNN face detection. Analysis of the smiles of healthy participants revealed substantial agreement between subjective FER (by six examiners) and FER based on MTCNN face detection (κ = 0.63). The perfect agreement rate between subjective FER (by three medical experts) and FER based on MTCNN face detection in the patient and healthy participant groups was analyzed using Fisher's exact test; no significant difference was observed (p = 0.72). Validity and reliability were assessed by comparing subjective FER and FER based on MTCNN face detection. The reliability coefficient of FER based on MTCNN face detection was low for both the patient and healthy participant groups.
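
The MTCNN face-detection front end can be sketched with the open-source mtcnn package: detect the face in each video frame and crop it for a downstream expression classifier (not shown). The video loop and file name are assumptions.

```python
import cv2
from mtcnn import MTCNN  # pip install mtcnn; one MTCNN implementation among several

detector = MTCNN()
cap = cv2.VideoCapture("conversation_with_pepper.mp4")  # hypothetical recording
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # MTCNN expects RGB input
    for face in detector.detect_faces(rgb):
        x, y, w, h = face["box"]
        crop = rgb[y:y + h, x:x + w]
        # crop -> facial expression classifier (e.g., a CNN emotion model)
cap.release()
```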

21. An Intelligent Mental Health Identification Method for College Students: A Mixed-Method Study. Int J Environ Res Public Health 2022; 19:14976. [PMID: 36429697] [PMCID: PMC9690277] [DOI: 10.3390/ijerph192214976]
Abstract
PURPOSE Mental health assessments that combine patients' facial expressions and behaviors have been proven effective, but screening large-scale student populations for mental health problems is time-consuming and labor-intensive. This study aims to provide an efficient and accurate intelligent method for further psychological diagnosis and treatment, combining artificial intelligence technologies to assist in evaluating the mental health problems of college students. MATERIALS AND METHODS We propose a mixed-method study of mental health assessment that combines psychological questionnaires with facial emotion analysis to comprehensively evaluate the mental health of students on a large scale. The Depression Anxiety and Stress Scale-21 (DASS-21) is used as the psychological questionnaire. The facial emotion recognition model is implemented by transfer learning based on neural networks, and the model is pre-trained using the FER2013 and CFEE datasets. The FER2013 dataset consists of 35,887 48 × 48-pixel grayscale face images, and the CFEE dataset contains 950,000 facial images with annotated action units (AUs). Using a random sampling strategy, we sent online questionnaires to 400 college students and received 374 responses, for a response rate of 93.5%. After pre-processing, 350 results were usable, from 187 male and 153 female students. First, the facial emotion data of students were collected during an online questionnaire test. Then, the pre-trained model was used for emotion recognition. Finally, the online psychological questionnaire scores and the facial emotion recognition model scores were collated to give a comprehensive psychological evaluation score. RESULTS The experimental results show that the classification results of the proposed facial emotion recognition model are broadly consistent with the mental health survey results, so the model can be used to improve efficiency. In particular, the accuracy of the facial emotion recognition model proposed in this paper is higher than that of the general mental health model, which uses only a traditional single questionnaire. Furthermore, the absolute errors of this study for the symptoms of depression, anxiety, and stress are lower than those of other mental health surveys, at only 0.8%, 8.1%, 3.5%, and 1.8%, respectively. CONCLUSION The mixed method combining intelligent methods and scales for mental health assessment has high recognition accuracy. Therefore, it can support efficient large-scale screening of students' psychological problems.
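
The paper does not spell out its exact fusion rule, but the idea of collating a questionnaire score with a model-derived facial score can be sketched as a weighted composite; the normalization and weight below are purely illustrative assumptions.

```python
def composite_score(dass_norm: float, facial_neg_prob: float,
                    w_questionnaire: float = 0.7) -> float:
    """Fuse a normalized DASS-21 subscale score (0-1) with the facial
    model's mean negative-emotion probability (0-1). The 0.7/0.3 split
    is an illustrative assumption, not the paper's published rule."""
    return w_questionnaire * dass_norm + (1 - w_questionnaire) * facial_neg_prob

# e.g., depression subscale 21/42 = 0.5, model negative-affect probability 0.3
print(composite_score(0.5, 0.3))  # 0.44
```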

22. Generalized tendency to make extreme trait judgements from faces. R Soc Open Sci 2022; 9:220172. [PMID: 36425525] [PMCID: PMC9682301] [DOI: 10.1098/rsos.220172]
Abstract
People differ in their tendency to infer others' personalities and abilities from their faces. An extreme form of such face-based trait inference (FBTI) is problematic because of its unwarranted impact on real-world decision making. Evolutionary perspectives on FBTI suggest that its inter-individual variation should be trait-specific: e.g., those who make extreme face-based inferences about trustworthiness may not necessarily do so about dominance. However, there are several psychological variables that could increase FBTI extremity across traits. Here, we show that there is a generalized individual tendency to make extreme FBTIs across traits, in support of the latter view. We found that the degrees of extremity of face-based inferences about seven traits had high cross-trait correlations, constituting a general factor. This generalized FBTI extremity had good test-retest reliability and was an artefact of neither extreme responding nor socially desirable responding. Moreover, it was positively associated with facial emotion recognition ability and with tendencies to believe in physiognomy and endorse stereotypes. Our results demonstrate that some individuals have a temporally stable disposition to draw extreme conclusions about various traits of others from facial appearance, and they shed light on these individuals' psychological characteristics.
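
The "general factor" logic can be sketched as follows: compute a per-participant extremity score for each trait, inspect the cross-trait correlations, and take the first principal component as a general extremity score. Trait names and data below are illustrative, not the study's.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
traits = ["trustworthiness", "dominance", "competence", "attractiveness"]
# Hypothetical extremity scores: one row per participant, one column per trait.
extremity = pd.DataFrame(rng.normal(size=(200, len(traits))), columns=traits)

r = extremity.corr()  # cross-trait correlation matrix
z = (extremity - extremity.mean()) / extremity.std()
eigval, eigvec = np.linalg.eigh(z.cov().values)  # PCA via the covariance matrix
pc1 = z.values @ eigvec[:, -1]  # general extremity score per participant

print(r.round(2))
print("variance explained by PC1:", round(eigval[-1] / eigval.sum(), 2))
```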

23. Comparison of Emotion Recognition in Young People, Healthy Older Adults, and Patients with Mild Cognitive Impairment. Int J Environ Res Public Health 2022; 19:12757. [PMID: 36232057] [PMCID: PMC9565174] [DOI: 10.3390/ijerph191912757]
Abstract
BACKGROUND The basic discrete emotions, namely happiness, disgust, anger, fear, surprise, and sadness, are present across different cultures and societies. Facial emotion recognition is crucial in social interactions, but normal and pathological aging seem to affect this ability. The present research aims to identify differences in the capacity to recognize the six basic discrete emotions among young adults, healthy older controls (HOC), and patients with mild cognitive impairment (MCI). METHOD The sample (N = 107) consisted of 47 young adults, 27 healthy older adults, and 33 MCI patients. Several neuropsychological scales were administered to assess the cognitive state of the participants, followed by the emotion labeling task of the Ekman 60 Faces test. RESULTS The MANOVA was significant, revealing differences in the emotion recognition abilities of the groups. Compared to HOC, the MCI group obtained a significantly lower number of hits for fear, anger, disgust, sadness, and surprise. The happiness recognition rate did not differ significantly among the three groups. Surprisingly, young people and HOC did not show significant differences. CONCLUSIONS Our results demonstrate that MCI was associated with facial emotion recognition impairment, whereas normal aging did not seem to affect this ability.

24. Recognition of Facial Expressions of Emotion and Depressive Symptoms among Caregivers with Different Levels of Empathy. Clin Gerontol 2022; 45:1245-1252. [PMID: 34219607] [DOI: 10.1080/07317115.2021.1937426]
Abstract
OBJECTIVES To assess differences in the recognition of facial expressions of emotion among caregivers of older people with different levels of empathy. METHODS A cross-sectional study was conducted with 158 caregivers of older adults who provided care in family residences or nursing homes. The caregivers were divided into three groups based on the score of the multidimensional Interpersonal Reactivity Index: "lower empathy", "intermediate empathy", and "higher empathy". Data collection involved the administration of a sociodemographic questionnaire, the Emotion Recognition Test, and the Patient Health Questionnaire. RESULTS No significant differences were found among the groups in terms of sociodemographic variables. Regarding clinical characteristics, the "higher empathy" group had more depressive symptoms than the other groups (p = .001). Moreover, the "higher empathy" group exhibited greater accuracy at recognizing the expression of sadness than the "lower empathy" group (p = .033). The recognition of sadness remained significant in the analysis of variance adjusted for depressive symptoms (p < .05). CONCLUSIONS Caregivers with higher levels of empathy showed greater accuracy at recognizing sadness than caregivers with lower levels of empathy; they also had more depressive symptoms. CLINICAL IMPLICATIONS The recognition of facial expressions of sadness may give caregivers a skill to infer possible needs in older care recipients. However, a higher level of empathy may exert a negative psychological impact on caregivers of older people, which could have repercussions for the quality of care provided.

25. Impaired Facial Emotion Recognition and Gaze Direction Detection in Mild Alzheimer's Disease: Results from the PACO Study. J Alzheimers Dis 2022; 89:1427-1437. [PMID: 36057821] [DOI: 10.3233/jad-220401]
Abstract
BACKGROUND Facial emotion recognition (FER) and gaze direction (GD) identification are core components of social cognition and are possibly impaired in many psychiatric or neurological conditions. Regarding Alzheimer's disease (AD), current knowledge is controversial. OBJECTIVE The aim of this study was to explore FER and GD identification in mild AD compared to healthy controls. METHODS 180 participants with mild AD drawn from the PACO study and 74 healthy elderly controls were enrolled. Participants were asked to complete three socio-cognitive tasks: face sex identification, recognition of facial emotions (fear, happiness, anger, disgust) expressed at different intensities, and GD discrimination. Multivariate analyses were conducted to compare AD participants and healthy controls. RESULTS Sex recognition was preserved. GD determination for subtle deviations was impaired in AD. Recognition of prototypically expressed facial emotions was preserved, while recognition of degraded facial emotions was impaired in AD participants compared to controls. Multivariate analysis suggested significant alteration of low-intensity fear and disgust recognition in the AD group. CONCLUSION Our results showed impairments in emotion recognition and GD identification in patients with early-stage AD compared to elderly controls. These impairments could be the object of specific therapeutic interventions, such as social cognition remediation or raising the awareness of primary caregivers, to improve the quality of life of patients with early AD.

26. A Computational Probe into the Behavioral and Neural Markers of Atypical Facial Emotion Processing in Autism. J Neurosci 2022; 42:5115-5126. [PMID: 35705489] [PMCID: PMC9233437] [DOI: 10.1523/jneurosci.2229-21.2022]
Abstract
Despite ample behavioral evidence of atypical facial emotion processing in individuals with autism spectrum disorder (ASD), the neural underpinnings of such behavioral heterogeneities remain unclear. Here, I have used brain-tissue-mapped artificial neural network (ANN) models of primate vision to probe candidate neural and behavioral markers of atypical facial emotion recognition in ASD at an image-by-image level. Interestingly, the image-level behavioral patterns of the ANNs matched the behavior of the neurotypical subjects better than that measured in ASD. This behavioral mismatch was most remarkable when the ANN behavior was decoded from units that correspond to the primate inferior temporal (IT) cortex. ANN-IT responses also explained a significant fraction of the image-level behavioral predictivity associated with neural activity in the human amygdala (from epileptic patients without ASD), strongly suggesting that the previously reported facial emotion intensity encoding in the human amygdala could be primarily driven by projections from the IT cortex. In sum, these results identify primate IT activity as a candidate neural marker and demonstrate how ANN models of vision can be used to generate neural circuit-level hypotheses and guide future human and nonhuman primate studies in autism. SIGNIFICANCE STATEMENT Moving beyond standard parametric approaches that predict behavior with high-level categorical descriptors of a stimulus (e.g., the level of happiness/fear in a face image), this study demonstrates how an image-level probe, using current deep-learning-based ANN models, allows the identification of more diagnostic stimuli for autism spectrum disorder, enabling the design of more powerful experiments. The study predicts that IT cortex activity is a key candidate neural marker of atypical facial emotion processing in people with ASD. Importantly, the results strongly suggest that ASD-related atypical facial emotion intensity encoding in the human amygdala could be primarily driven by projections from the IT cortex.
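
The decoding step referred to above, reading behavior out of "IT-like" ANN units, can be sketched as a linear readout fitted on late-layer activations. The file names are hypothetical, and which layer best corresponds to IT is an assumption here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical inputs: late-layer ANN activations per face image and the
# corresponding behavioral emotion judgements to be predicted.
acts = np.load("it_like_layer_activations.npy")   # shape (n_images, n_units)
labels = np.load("behavioral_choices.npy")        # shape (n_images,)

decoder = LogisticRegression(max_iter=1000)
scores = cross_val_score(decoder, acts, labels, cv=5)
print("cross-validated decoding accuracy:", scores.mean().round(3))
```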

27. Neural Correlates of Facial Emotion Recognition in Non-help-seeking University Students With Ultra-High Risk for Psychosis. Front Psychol 2022; 13:812208. [PMID: 35756282] [PMCID: PMC9226575] [DOI: 10.3389/fpsyg.2022.812208]
Abstract
Background Since the introduction of the neurodevelopmental perspective on schizophrenia, research on individuals at ultra-high risk for psychosis (UHR) has gained increasing interest, aiming at early detection and intervention. Results from fMRI studies investigating behavioral and brain-functional changes in UHR during facial emotion recognition, an essential component of social cognition, have been heterogeneous, probably due to clinical diversity across these investigations. This fMRI study investigated emotion recognition in a sub-group of the UHR spectrum, namely non-help-seeking, drug-naïve UHR individuals with high cognitive functioning, to reveal the neurofunctional underpinnings of their social functioning in comparison to healthy controls. Methods Two large cohorts of students from an elite university (n1 = 4,040, n2 = 4,364) were first screened with the Prodromal Questionnaires; those surpassing predefined cut-offs were then interviewed with the semi-structured Interview for Psychosis-Risk Syndromes to verify their UHR status. Twenty-one identified non-help-seeking UHR individuals and 23 non-UHR control subjects were scanned with functional magnetic resonance imaging while classifying emotions (i.e., neutral, happy, disgust, and fear) in a facial emotion recognition task. Results Behaviorally, no group differences were found concerning accuracy, reaction times, sensitivity, or specificity, except that non-help-seeking UHR individuals showed higher specificity when recognizing neutral facial expressions. In comparison to healthy non-UHR controls, non-help-seeking UHR individuals showed generally higher activation in the superior temporal and left Heschl's gyrus as well as in the somatosensory, insular, and midcingulate cortex during the entire recognition task, regardless of emotion category. In an exploratory analysis, functional activity in the left superior temporal gyrus in the non-help-seeking UHR group was significantly correlated with deficits in the ability to experience emotions at uncorrected statistical thresholds. Conclusions Compared to healthy controls, non-help-seeking UHR individuals show no behavioral deficits during facial emotion recognition but do show functional hyperactivity in brain regions associated with this cognitive process. Our study may inspire future early intervention and provides loci for treatment using neural stimulation.
|
28
|
The mediating role of gaze patterns in the association of child sleep disturbances and core symptoms of autism spectrum disorder. Autism Res 2022; 15:1719-1731. [PMID: 35521660 DOI: 10.1002/aur.2737] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2021] [Accepted: 04/20/2022] [Indexed: 11/05/2022]
Abstract
Children with autism spectrum disorder (ASD) are at high risk for sleep disturbances, but the mechanism underlying the association between sleep disturbances and ASD core symptoms is largely unknown. This study examined the relationship between sleep disturbances and ASD core symptoms, and the mediating role of gaze patterns during the facial emotion recognition (FER) task. The study included 57 children with ASD and 59 age- and intelligence-matched typically developing (TD) controls aged 3-7 years. Parents reported their children's sleep disturbances and ASD core symptoms using the Children's Sleep Habits Questionnaire (CSHQ) and Social Communication Questionnaire (SCQ). Children's gaze patterns during the FER task were recorded by an eye tracking method. We found (1) ASD children had more severe sleep disturbances than TD children; (2) ASD children had atypical gaze patterns and poor FER task performance as determined by lower accuracy and longer reaction time; (3) sleep disturbances were significantly associated with ASD core symptoms of social interaction, communication, and restricted, repetitive and stereotyped patterns of behavior; and (4) atypical gaze patterns partially mediated the association between sleep disturbances and ASD core symptoms. These findings suggest the need for more comprehensive clinical interventions and more effective sleep interventions to improve ASD core symptoms. LAY SUMMARY: Sleep disturbances are very common in children with autism spectrum disorder (ASD). The current study found that sleep disturbances were significantly associated with ASD core symptoms, and gaze patterns during facial emotion recognition task could partially mediate this relationship.
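Partial mediation of the kind reported above is commonly tested as the product of the X-to-M and M-to-Y regression paths, with a bootstrap confidence interval. The following is a minimal sketch of that logic; the variable names and simulated data are hypothetical stand-ins for the CSHQ, gaze, and SCQ measures.

```python
# Bootstrap product-of-coefficients mediation sketch (hypothetical data).
import numpy as np

rng = np.random.default_rng(0)
n = 57
sleep = rng.normal(size=n)                                  # CSHQ total (X)
gaze = 0.5 * sleep + rng.normal(size=n)                     # gaze measure (M)
symptoms = 0.4 * gaze + 0.3 * sleep + rng.normal(size=n)    # SCQ total (Y)

def indirect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                              # path a: X -> M
    design = np.column_stack([m, x, np.ones(len(y))])       # Y ~ M + X
    b = np.linalg.lstsq(design, y, rcond=None)[0][0]        # path b: M -> Y | X
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                             # resample participants
    boot.append(indirect(sleep[idx], gaze[idx], symptoms[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")      # CI excluding 0 -> mediation
```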
|
29
|
Emotional Information Processing and Assessment of Cognitive Functions in Social Anxiety Disorder: An Event-Related Potential Study. Clin EEG Neurosci 2022; 53:104-113. [PMID: 33347363 DOI: 10.1177/1550059420981506] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
The aim of our study was to determine deficits in cognitive areas, including social cognition (such as emotion recognition capacity and theory of mind), and electrophysiological alterations in patients with social anxiety disorder (SAD), and to identify their effects on the clinical severity of SAD. Enrolled in our study were 26 patients diagnosed with SAD and 26 healthy volunteers. They were administered the Liebowitz Social Anxiety Scale (LSAS), the Reading the Mind in the Eyes Test (RMET), and the Cambridge Neuropsychological Test Automated Battery. EEG monitoring was performed for electrophysiological investigation. In the patient group, total reading-the-mind scores were lower (P = .027), while P300 latencies and emotion recognition latency during the Emotion Recognition Task (ERT) were longer (P = .038 and P = .012, respectively). False alarm scores in the Rapid Visual Information Processing Task (RVP) were higher in the patient group (P = .038). In a model created using multivariate linear regression analysis, an effect of ERT and RVP scores on LSAS scores was found. The results of our study confirm that impairment of cognitive functions such as sustained attention and emotion recognition, in particular, may seriously worsen the clinical presentation. P300 latency in the parietal region may have the potential to serve as a biological marker for monitoring treatment.
|
30
|
Recognition of emotional face expressions in patients with restless legs syndrome. APPLIED NEUROPSYCHOLOGY. ADULT 2022:1-6. [PMID: 35213285 DOI: 10.1080/23279095.2022.2043326] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
OBJECTIVE Restless legs syndrome (RLS) is one of the commonest neurologic diseases. Along with sensory and motor symptoms, cognitive impairment and psychiatric features can be seen in RLS. The present study was planned to look for evidence of cognitive impairment by evaluating facial emotion recognition (FER) in patients with RLS. METHODS In this study, 80 patients with RLS and 50 healthy controls (HCs) were included. Demographic data were recorded. All patients with RLS and HCs were tested with the Beck Anxiety Inventory (BAI), the Beck Depression Inventory (BDI), and Ekman's test for recognition of facial emotions. RESULTS Sixty-three of the patients with RLS and 37 of the HCs were female. The mean age of the patients was 45.41 ± 8.24 years, and the mean age of the HCs was 43.12 ± 10.35 years. The patients and HCs were similar regarding sex, age, educational status, and marital status. Patients with RLS had FER difficulties compared with HCs. There was a negative correlation between Ekman's test scores and BDI (r = -0.311, p < 0.001) and BAI scores (r = -0.379, p < 0.001). CONCLUSION FER is a valuable research topic regarding cognitive function in RLS; it may offer new perspectives on the pathophysiology of the disorder and is important for the well-being of patients' social interactions.
|
31
|
Validation of the P1vital® Faces Set for Use as Stimuli in Tests of Facial Emotion Recognition. Front Psychiatry 2022; 13:663763. [PMID: 35222109 PMCID: PMC8874121 DOI: 10.3389/fpsyt.2022.663763] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/03/2021] [Accepted: 01/19/2022] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Negative bias in facial emotion recognition is a well-established concept in mental disorders such as depression. However, existing face sets for emotion recognition tests may be of limited use in international research, which could benefit from more contemporary and diverse alternatives. Here, we developed and provide initial validation for the P1vital® Affective Faces set (PAFs) as a contemporary alternative to the widely used Pictures of Facial Affect (PoFA). METHODS The PAFs was constructed of 133 color photographs of facial expressions of ethnically diverse trained actors and compared with the PoFA, comprised of 110 black-and-white photographs of facial expressions of generally Caucasian actors. Sixty-one recruits were asked to classify faces from both sets over six emotions (happy, sad, fear, anger, disgust, surprise) varying in intensity in 10% increments from 0 to 100%. RESULTS Participants were significantly more accurate in identifying the correct emotions when viewing faces from the PAFs. In both sets, participants identified happy faces more accurately than fearful faces, were least likely to misclassify facial expressions as happy, and were most likely to misclassify all emotions at low intensity as neutral. Accuracy in identifying facial expressions improved with increasing emotion intensity for both sets, reaching peaks at 60 and 80% intensity for the PAFs and PoFA, respectively. The study was limited by the small sample size and age range of participants and by the ethnic diversity of the actors. CONCLUSIONS The PAFs successfully depicted a range of emotional expressions with improved performance over the PoFA and may be used as a contemporary set in facial expression recognition tests.
|
32
|
Theory of mind and facial emotion recognition in adults with temporal lobe epilepsy: A meta-analysis. Front Psychiatry 2022; 13:976439. [PMID: 36276336 PMCID: PMC9582667 DOI: 10.3389/fpsyt.2022.976439] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/23/2022] [Accepted: 09/16/2022] [Indexed: 12/01/2022] Open
Abstract
BACKGROUND Mounting studies have investigated impairments in social cognitive domains (including theory of mind [ToM] and facial emotion recognition [FER]) in adult patients with temporal lobe epilepsy (TLE). However, to date, findings remain inconsistent. METHODS A search of the PubMed, Web of Science, and Embase databases was conducted up to December 2021. Hedges g effect sizes were computed with a random-effects model. Meta-regressions were used to assess potential confounding sources of between-study variability in effect sizes. RESULTS The meta-analysis included 41 studies, with a combined sample of 1,749 adult patients with TLE and 1,324 healthy controls (HCs). Relative to HCs, adult patients with TLE showed large impairments in ToM (g = -0.92) and cognitive ToM (g = -0.92), followed by medium impairments in affective ToM (g = -0.79) and FER (g = -0.77). In addition, no statistically significant differences in the magnitude of social cognition impairment were observed between adults with TLE who underwent epilepsy surgery and those who did not. Meta-regressions showed that greater executive functioning impairment was associated with more severe ToM deficits, and that older age was associated with more severe FER deficits. CONCLUSIONS The results of this meta-analysis suggest that adult patients with TLE show differential impairments across the core aspects of social cognitive domains (including ToM and FER), which may help in planning individualized treatment with appropriate cognitive and behavioral interventions.
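Hedges g, the effect-size measure used here, is Cohen's d with a small-sample correction. A minimal sketch of the per-study computation from group summary statistics (the numbers are made up, not taken from any included study):

```python
# Hedges g for one study from group means, SDs, and sample sizes.
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    # Pooled SD, Cohen's d, then the small-sample correction factor J.
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return j * d

# e.g., a TLE group scoring lower than HCs on a ToM task:
print(round(hedges_g(20.1, 4.0, 45, 24.0, 3.5, 40), 2))   # negative g = deficit
```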
|
33
|
Social cognition in children and adolescents with epilepsy: A meta-analysis. Front Psychiatry 2022; 13:983565. [PMID: 36186867 PMCID: PMC9520261 DOI: 10.3389/fpsyt.2022.983565] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/01/2022] [Accepted: 08/29/2022] [Indexed: 11/16/2022] Open
Abstract
Many studies have investigated impairments in two key domains of social cognition (theory of mind [ToM] and facial emotion recognition [FER]) in children and adolescents with epilepsy. However, conclusions have been inconsistent. Our objective was to characterize the social cognition performance of children and adolescents with epilepsy. A literature search was conducted using the Web of Science, PubMed, and Embase databases. Article retrieval, screening, quality assessment (Newcastle-Ottawa Scale), and data extraction were performed independently by two investigators. A random-effects model was used to examine estimates. The meta-analysis included 19 studies, with a combined sample of 623 children and adolescents with epilepsy (mean [SD] age, 12.13 [2.62] years; 46.1% female) and 677 healthy controls (HCs; mean [SD] age, 11.48 [2.71] years; 50.7% female). The results revealed that, relative to HCs, children and adolescents with epilepsy exhibited deficits in ToM (g = -1.08, 95% CI [-1.38, -0.78], p < 0.001, number of studies [k] = 13), FER (g = -0.98, 95% CI [-1.33, -0.64], p < 0.001, k = 12), and the ToM subcomponents (cognitive ToM: g = -1.04, 95% CI [-1.35, -0.72], p < 0.001, k = 12; affective ToM: g = -0.73, 95% CI [-1.12, -0.34], p < 0.001, k = 8). In addition, there were no statistically significant differences in social cognition deficits between children and adolescents with focal epilepsy and those with generalized epilepsy. Meta-regressions confirmed the robustness of the results. These quantitative results further deepen our understanding of the two core domains of social cognition in children and adolescents with epilepsy and may assist in the development of cognitive interventions for this patient population. Systematic review registration: https://inplasy.com/inplasy-2022-3-0011/, identifier INPLASY202230011.
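Complementing the per-study Hedges g sketch above, pooling under a random-effects model is typically done with the DerSimonian-Laird estimator of between-study variance. A minimal sketch with illustrative effect sizes and variances (not the meta-analysis's actual data):

```python
# DerSimonian-Laird random-effects pooling of per-study Hedges g values.
import numpy as np

g = np.array([-1.2, -0.9, -1.1, -0.8, -1.3])        # per-study effect sizes
v = np.array([0.10, 0.08, 0.12, 0.09, 0.11])        # per-study variances

w = 1 / v                                            # fixed-effect weights
mean_fe = np.sum(w * g) / np.sum(w)
q = np.sum(w * (g - mean_fe) ** 2)                   # Cochran's Q
df = len(g) - 1
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)                        # between-study variance

w_re = 1 / (v + tau2)                                # random-effects weights
g_pooled = np.sum(w_re * g) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
print(f"pooled g = {g_pooled:.2f}, "
      f"95% CI [{g_pooled - 1.96*se:.2f}, {g_pooled + 1.96*se:.2f}]")
```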
|
34
|
Multimodal Emotion Recognition on RAVDESS Dataset Using Transfer Learning. SENSORS 2021; 21:s21227665. [PMID: 34833739 PMCID: PMC8618559 DOI: 10.3390/s21227665] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/21/2021] [Revised: 11/12/2021] [Accepted: 11/15/2021] [Indexed: 11/29/2022]
Abstract
Emotion recognition is attracting the attention of the research community due to the multiple areas where it can be applied, such as healthcare and road safety systems. In this paper, we propose a multimodal emotion recognition system that relies on speech and facial information. For the speech-based modality, we evaluated several transfer-learning techniques, specifically embedding extraction and fine-tuning. The best accuracy results were achieved when we fine-tuned the CNN-14 of the PANNs framework, confirming that training was more robust when it did not start from scratch and the tasks were similar. For the facial emotion recognizers, we propose a framework that consists of a Spatial Transformer Network pre-trained on saliency maps and facial images, followed by a bi-LSTM with an attention mechanism. Error analysis showed that frame-based systems can present problems when used directly to solve a video-based task, despite domain adaptation, which opens a new line of research on ways to correct this mismatch and take advantage of the embedded knowledge of these pre-trained models. Finally, by combining these two modalities with a late fusion strategy, we achieved 80.08% accuracy on the RAVDESS dataset under a subject-wise 5-fold cross-validation evaluation, classifying eight emotions. The results reveal that these modalities carry relevant information to detect users' emotional state, and their combination improves system performance.
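Late fusion, as used above, combines the two branches at the decision level rather than the feature level. A minimal sketch of the idea, with random arrays standing in for the trained speech and face models' softmax outputs and an assumed equal fusion weight:

```python
# Late-fusion sketch: weighted average of per-clip class posteriors.
import numpy as np

n_clips, n_classes = 4, 8                       # RAVDESS covers 8 emotion classes
rng = np.random.default_rng(1)
p_speech = rng.dirichlet(np.ones(n_classes), n_clips)   # speech-branch softmax
p_face = rng.dirichlet(np.ones(n_classes), n_clips)     # face-branch softmax

w = 0.5                                         # fusion weight (tuned on a dev set)
p_fused = w * p_speech + (1 - w) * p_face
print(p_fused.argmax(axis=1))                   # fused emotion prediction per clip
```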
|
35
|
Impact of physician empathy on patient outcomes: a gender analysis. Br J Gen Pract 2021; 72:e99-e107. [PMID: 34990388 PMCID: PMC8763196 DOI: 10.3399/bjgp.2021.0193] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2021] [Accepted: 09/20/2021] [Indexed: 10/31/2022] Open
Abstract
BACKGROUND Empathy in primary care settings has been linked to improved health outcomes. However, the operationalisation of empathy differs between studies, and, to date, no study has concurrently compared affective, cognitive, and behavioural components of empathy with regard to patient outcomes. Moreover, it is unclear how gender interacts with the studied dimensions. AIM To examine the relationship between several empathy dimensions and patient-reported satisfaction, consultation quality, and patients' trust in their physicians, and to determine whether this relationship is moderated by physician gender. DESIGN AND SETTING Analysis of the empathy of 61 primary care physicians in relation to 244 patient experience questionnaires in the French-speaking region of Switzerland. METHOD Sixty-one physicians were video-recorded with two male and two female patients. Six different empathy measures were assessed: two self-reported measures, a facial recognition test, two external observational measures, and the Synchrony of Vocal Mean Fundamental Frequencies (SVMFF), measuring vocally coded emotional arousal. After the consultation, patients indicated their satisfaction with, trust in, and the quality of the consultation. RESULTS Female physicians self-rated their empathic concern higher than their male counterparts did, whereas male physicians were more vocally synchronised (in terms of speech frequencies) with their patients. SVMFF was the only significant predictor of all patient outcomes. Verbal empathy statements were linked to higher satisfaction when the physician was male. CONCLUSION Gender differences were observed more often in self-reported measures of empathy than in external measures, indicating a probable social desirability bias. SVMFF significantly predicted all patient outcomes and could be used as a cost-effective proxy for relational quality.
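Vocal synchrony measures of this family generally relate the two speakers' fundamental frequency (F0) trajectories over the course of the consultation. The sketch below illustrates the general idea only, correlating turn-level mean F0 between speakers; it is not the paper's exact SVMFF computation, and the F0 values are invented.

```python
# Hedged sketch of a turn-level vocal F0 synchrony measure (hypothetical data).
import numpy as np
from scipy.stats import pearsonr

# Mean fundamental frequency (Hz) per speaking turn, e.g., obtained from a
# pitch tracker such as librosa.pyin applied to each turn's audio.
f0_physician = np.array([118, 124, 121, 130, 126, 122])
f0_patient = np.array([205, 214, 209, 221, 215, 210])

r, p = pearsonr(f0_physician, f0_patient)
print(f"turn-level F0 synchrony: r = {r:.2f} (p = {p:.3f})")
```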
|
36
|
Abstract
The ability to accurately recognize facial expressions is a key element of social interaction. Facial emotion recognition (FER) assessments show promise as a clinical screening and therapeutic tool, but realizing this potential requires better understanding of the stability of this skill. Transient mood states are known to bias emotion recognition in some contexts and may represent a critical factor impacting FER ability. In particular, it is unclear how natural fluctuations in individuals' mood state over time contribute to specific changes in the ability to recognize facial expressions. The current study tested 55 neurotypical participants across multiple visits using the Emotion Recognition test and found that fluctuations in positive and negative mood state altered recognition of specific emotions. Surprisingly, effects of mood state on emotion recognition were noncongruent; increased positive mood was associated with improved recognition of scared expressions but worsened recognition of happy expressions. Our results suggest that minor fluctuations in mood state in a neurotypical population affect emotion recognition. Therefore, mood should be taken into account by researchers and clinicians assessing FER skills.
|
37
|
The neural underpinnings of facial emotion recognition in ischemic stroke patients. J Neuropsychol 2021; 15:516-532. [PMID: 33554463 PMCID: PMC8518120 DOI: 10.1111/jnp.12240] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2020] [Revised: 12/16/2020] [Indexed: 01/19/2023]
Abstract
Deficits in facial emotion recognition occur frequently after stroke, with adverse social and behavioural consequences. The aim of this study was to investigate the neural underpinnings of the recognition of emotional expressions, in particular of the distinct basic emotions (anger, disgust, fear, happiness, sadness and surprise). A group of 110 ischaemic stroke patients with lesions in (sub)cortical areas of the cerebrum was included. Emotion recognition was assessed with the Ekman 60 Faces Test of the FEEST. Patient data were compared to data from 162 matched healthy controls (HCs). For the patients, whole-brain voxel-based lesion-symptom mapping (VLSM) on 3-Tesla MRI images was performed. Results showed that patients performed significantly worse than HCs on overall recognition of emotions and, specifically, on recognition of disgust, fear, sadness and surprise. VLSM showed significant lesion-symptom associations for the FEEST total score in the right fronto-temporal region. Additionally, VLSM for the distinct emotions showed, apart from overlapping brain regions (insula, putamen and Rolandic operculum), regions related to specific emotions. These were: the middle and superior temporal gyrus (anger); the caudate nucleus (disgust); the superior corona radiata white matter tract, superior longitudinal fasciculus and middle frontal gyrus (happiness); and the inferior frontal gyrus (sadness). Our findings help in understanding how lesions in specific brain regions can selectively affect the recognition of the basic emotions.
|
38
|
Using machine learning to improve the discriminative power of the FERD screener in classifying patients with schizophrenia and healthy adults. J Affect Disord 2021; 292:102-107. [PMID: 34111689 DOI: 10.1016/j.jad.2021.05.032] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/05/2021] [Revised: 05/12/2021] [Accepted: 05/21/2021] [Indexed: 10/21/2022]
Abstract
Background Facial emotion recognition deficit (FERD) is a prominent feature of schizophrenia and has great potential for distinguishing patients from non-patients. The FERD screener was previously developed to distinguish patients from healthy adults. However, a drawback of this screener is that its recommended cut-off scores can maximize either sensitivity or specificity (about 0.92) only, leaving the other at a merely acceptable level (about 0.66). Machine learning (ML) algorithms are well suited to feature extraction and data classification, and are therefore promising for improving the discriminative power of screeners. This study aimed to improve the discriminative power of the FERD screener using an ML algorithm. Methods The data were extracted from a previous study. Artificial neural networks were trained to estimate the probability of being a patient with schizophrenia or a healthy adult based on the examinee's responses on the FERD screener (168 items). The performance of the ML-FERD screener was examined using a stratified five-fold cross-validation method. Results Across the five subsets of data, the ML-FERD screener showed extremely high areas under the receiver operating characteristic curve of 0.97-0.99. With optimized cut-off scores, the average sensitivity and specificity of the ML-FERD screener were 0.90 (0.85-0.93) and 0.93 (0.86-1.00), respectively. Limitations The patient sample was not representative, and age was mismatched with the control group. Conclusion The ML-FERD screener appears to have better discriminative power for classifying patients with schizophrenia and healthy adults than the original FERD screener.
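A minimal sketch of this evaluation protocol: a small neural network trained on the 168 item responses, scored with stratified five-fold cross-validation and ROC AUC. The architecture and simulated data are assumptions for illustration, not the paper's actual model or dataset.

```python
# Stratified 5-fold CV of a small MLP on simulated 168-item screener responses.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
X = rng.integers(0, 2, size=(300, 168)).astype(float)   # item responses
y = rng.integers(0, 2, size=300)                        # 1 = patient, 0 = control

aucs = []
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train, test in cv.split(X, y):
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    clf.fit(X[train], y[train])
    aucs.append(roc_auc_score(y[test], clf.predict_proba(X[test])[:, 1]))
print(f"mean AUC = {np.mean(aucs):.2f}")   # ~0.5 here, since the data are random
```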
|
39
|
Neural links between facial emotion recognition and cognitive impairment in presbycusis. Int J Geriatr Psychiatry 2021; 36:1171-1178. [PMID: 33503682 DOI: 10.1002/gps.5501] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/17/2020] [Revised: 10/05/2020] [Accepted: 01/22/2021] [Indexed: 12/29/2022]
Abstract
OBJECTIVES Facial emotion recognition (FER) is impaired in people with dementia and in those with severe to profound hearing loss, probably reflecting common neural changes. Here, we aim to study the association between brain structures and FER impairment in participants with mild to moderate age-related hearing loss. METHODS We evaluated FER in a cross-sectional cohort of 111 Chilean nondemented elderly participants. They were assessed for FER in seven different categories using 35 facial stimuli. We collected pure-tone average (PTA) audiometric thresholds, cognitive and neuropsychiatric assessments, and morphometric brain imaging using a 3-Tesla MRI. RESULTS According to PTA threshold levels, participants were classified as controls (≤25 dB, n = 56) or presbycusis (>25 dB, n = 55), with average PTAs of 17.08 ± 4.8 dB HL and 36.27 ± 9.5 dB HL, respectively. Poorer total FER score was correlated with worse hearing thresholds (r = -0.23, p < 0.05) in participants with presbycusis. Multiple regression models explained 57% of the variability of FER in presbycusis and 10% in controls. In both groups, the main determinant of FER was cognitive performance. In the brain structure of presbycusis participants, FER was correlated with atrophy of the right insula, right hippocampus, bilateral cingulate cortex, and multiple areas of the temporal cortex. In controls, FER was associated only with bilateral middle temporal cortex volume. CONCLUSIONS FER impairment in presbycusis is distinctively associated with atrophy of neural structures engaged in the perceptual and conceptual levels of facial emotion processing.
|
40
|
An Ear Wearable Device System for Facial Emotion Recognition Disorders. Front Bioeng Biotechnol 2021; 9:703048. [PMID: 34249893 PMCID: PMC8261155 DOI: 10.3389/fbioe.2021.703048] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2021] [Accepted: 05/26/2021] [Indexed: 11/13/2022] Open
Abstract
A wearable device system was proposed in the present work to address the problem of facial emotion recognition disorders. The proposed system can comprehensively analyze the user's own stress status, the emotions of people around the user, and the surrounding environment. The system consists of a multi-dimensional physiological signal acquisition module, an image acquisition and transmission module, a user interface on the user's mobile terminal, and a cloud database for data storage. Moreover, a deep-learning-based multi-model physiological-signal stress recognition algorithm and a facial emotion recognition algorithm were designed and implemented in the system. Publicly available datasets were used to test the two algorithms, and the experimental results showed that both algorithms realized the expected functions of the system.
|
41
|
Psychometric properties of the Cambridge-Mindreading Face-Voice Battery for Children in children with ASD. Autism Res 2021; 14:1965-1974. [PMID: 34089304 DOI: 10.1002/aur.2546] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2020] [Accepted: 05/18/2021] [Indexed: 11/11/2022]
Abstract
This study examined the psychometric characteristics of the Cambridge-Mindreading Face-Voice Battery for Children (CAM-C) for a sample of 333 children, ages 6-12 years with ASD (with no intellectual disability). Internal consistency was very good for the Total score (0.81 for both Faces and Voices) and respectable for the Complex emotions score (0.72 for Faces and 0.74 for Voices); however, internal consistency was lower for Simple emotions (0.65 for Faces and 0.61 for Voices). Test-retest reliability at 18 and 36 weeks was very good for the faces and voices total (0.76-0.81) and good for simple and complex faces and voices (0.53-0.75). Significant correlations were found between CAM-C Faces and scores on another measure of face-emotion recognition (Diagnostic Analysis of Nonverbal Accuracy-Second Edition), and between Faces and Voices scores and child age, IQ (except perceptual IQ and Simple Voice emotions), and language ability. Parent-reported ASD symptom severity and the Emotion Recognition scale on the SRS-2 were not related to CAM-C scores. Suggestions for future studies and further development of the CAM-C are provided. LAY SUMMARY: Facial and vocal emotion recognition are important for social interaction and have been identified as a challenge for individuals with autism spectrum disorder. Emotion recognition is an area frequently targeted by interventions. This study evaluated a measure of emotion recognition (the CAM-C) for its consistency and validity in a large sample of children with autism. The study found the CAM-C showed many strengths needed to accurately measure the change in emotion recognition during intervention.
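Internal consistency coefficients like those reported above are most commonly computed as Cronbach's alpha. A minimal implementation on a hypothetical items-by-respondent score matrix (the specific item counts below are illustrative, not the CAM-C's):

```python
# Cronbach's alpha from an (n_respondents, n_items) score matrix.
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(3)
scores = rng.integers(0, 2, size=(333, 25))         # e.g., 25 binary face items
print(round(cronbach_alpha(scores), 2))             # ~0 here: random data carry no signal
```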
|
42
|
Social cognition and executive functioning in multiple sclerosis: A cluster-analytic approach. J Neuropsychol 2021; 16:97-115. [PMID: 33989458 DOI: 10.1111/jnp.12248] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2020] [Revised: 04/08/2021] [Indexed: 11/28/2022]
Abstract
Multiple sclerosis (MS) is associated with deficits in social cognition (the processes underlying social interaction) and in cognitive function. However, the relationships between executive impairment and social cognition remain unclear in MS. Previous studies have focused exclusively on group comparisons between healthy controls and patients with MS, treating the latter as a homogeneous population. The variability of socio- and neurocognitive profiles in this pathology therefore remains underexplored. In the present study, we used a cluster-analytic approach to explore the heterogeneity of executive and social cognition skills in MS. A total of 106 patients with MS were compared with 53 healthy matched controls on executive (e.g., working memory) and social cognition (facial emotion recognition and theory of mind) performance. A cluster analysis was then performed on the MS sample to explore the presence of differential patterns of interaction between executive and social cognition difficulties and their links to sociodemographic, clinical, and cognitive variables. We identified three distinct functional profiles: patients with no executive or social cognition deficits (Cluster 1); patients with difficulties in facial emotion recognition and theory of mind and, to a lesser extent, executive functioning (Cluster 2); and patients with executive functioning difficulties only (Cluster 3). Clinical characteristics (disease duration, disability, fatigue) did not differ between clusters. CONCLUSIONS These results suggest that there are qualitative differences in the social cognition and executive difficulties commonly found among patients with MS. If replicated, the identification of these profiles in clinical practice could allow for more individualized rehabilitation.
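A minimal sketch of a cluster-analytic workflow of this kind: standardize the executive and social cognition measures, then compare candidate cluster solutions. K-means with silhouette scores is used here as a generic stand-in; the paper's exact clustering algorithm is not specified in the abstract, and the data are simulated.

```python
# Cluster-analytic sketch on simulated patient scores (hypothetical measures).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(7)
# Columns: working memory, inhibition, FER, theory of mind (106 patients).
scores = rng.normal(size=(106, 4))
z = StandardScaler().fit_transform(scores)          # z-score each measure

for k in (2, 3, 4):                                 # compare candidate solutions
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(z)
    print(k, round(silhouette_score(z, labels), 2))
# With real data, one would then profile each cluster's means to interpret it,
# e.g., the three-profile solution retained in the study above.
```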
|
43
|
Negative parental emotional environment increases the association between childhood behavioral problems and impaired recognition of negative facial expressions. Dev Psychopathol 2021; 34:936-945. [PMID: 33926601 DOI: 10.1017/s0954579420002072] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Impaired facial emotion recognition is a transdiagnostic risk factor for a range of psychiatric disorders. Childhood behavioral difficulties and parental emotional environment have been independently associated with impaired emotion recognition; however, no study has examined the contribution of these factors in conjunction. We measured recognition of negative (sad, fear, anger), neutral, and happy facial expressions in 135 children aged 5-7 years referred by their teachers for behavioral problems. Parental emotional environment was assessed for parental expressed emotion (EE) - characterized by negative comments, reduced positive comments, low warmth, and negativity towards their child - using the 5-minute speech sample. Child behavioral problems were measured using the teacher-informant Strengths and Difficulties Questionnaire (SDQ). Child behavioral problems and parental EE were independently associated with impaired recognition of negative facial expressions specifically. An interactive effect revealed that the combination of both factors was associated with the greatest risk for impaired recognition of negative faces, and in particular sad facial expressions. No relationships emerged for the identification of happy facial expressions. This study furthers our understanding of multidimensional processes associated with the development of facial emotion recognition and supports the importance of early interventions that target this domain.
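The interactive effect reported above corresponds to a product term in a regression model. A minimal sketch with statsmodels, where the variable names and simulated data are hypothetical stand-ins for the SDQ, parental EE, and negative-face recognition scores:

```python
# Interaction (moderation) sketch: recognition ~ behavioral problems x parental EE.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n = 135
df = pd.DataFrame({
    "sdq": rng.normal(size=n),     # child behavioral problems (SDQ)
    "ee": rng.normal(size=n),      # parental expressed emotion
})
# Simulated outcome with a built-in negative interaction effect.
df["neg_fer"] = -0.2 * df.sdq - 0.2 * df.ee - 0.3 * df.sdq * df.ee \
                + rng.normal(size=n)

model = smf.ols("neg_fer ~ sdq * ee", data=df).fit()   # '*' adds main effects + product
print(model.summary().tables[1])                        # sdq:ee row is the interaction
```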
|
44
|
Context-Aware Emotion Recognition in the Wild Using Spatio-Temporal and Temporal-Pyramid Models. SENSORS 2021; 21:s21072344. [PMID: 33801739 PMCID: PMC8036494 DOI: 10.3390/s21072344] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/10/2021] [Revised: 03/24/2021] [Accepted: 03/25/2021] [Indexed: 11/18/2022]
Abstract
Emotion recognition plays an important role in human–computer interaction. Recent studies have focused on video emotion recognition in the wild and have run into difficulties related to occlusion, illumination, complex behavior over time, and auditory cues. State-of-the-art methods use multiple modalities, such as frame-level, spatiotemporal, and audio approaches. However, such methods have difficulty exploiting long-term dependencies in temporal information, capturing contextual information, and integrating multi-modal information. In this paper, we introduce a flexible multi-modal system for video-based emotion recognition in the wild. Our system tracks and votes on significant faces corresponding to persons of interest in a video to classify seven basic emotions. The key contribution of this study is that it proposes the use of face feature extraction with context-aware and statistical information for emotion recognition. We also build two model architectures to effectively exploit long-term dependencies in temporal information: a temporal-pyramid model and a spatiotemporal model with a "Conv2D+LSTM+3DCNN+Classify" architecture. Finally, we propose a best-selection ensemble to improve the accuracy of multi-modal fusion; it selects the best combination from the spatiotemporal and temporal-pyramid models to achieve the best accuracy for classifying the seven basic emotions. In our experiments, we benchmark our system on the AFEW dataset, achieving high accuracy.
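The "Conv2D+LSTM" portion of the spatiotemporal branch follows a common pattern: a 2D CNN extracts a feature vector per frame, and an LSTM models the sequence. The PyTorch sketch below illustrates that pattern only; the layer sizes are illustrative and omit the 3DCNN stage and the paper's actual backbone.

```python
# Minimal frame-level CNN + LSTM video emotion classifier (illustrative sizes).
import torch
import torch.nn as nn

class ConvLSTMEmotion(nn.Module):
    def __init__(self, n_classes=7, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(                  # per-frame feature extractor
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, clips):                      # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)  # per-frame features
        out, _ = self.lstm(feats)                  # temporal modeling across frames
        return self.head(out[:, -1])               # classify from the last step

logits = ConvLSTMEmotion()(torch.rand(2, 16, 3, 112, 112))
print(logits.shape)                                # torch.Size([2, 7])
```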
|
45
|
Reading the mind in cartoon eyes: Comparing human versus cartoon emotion recognition in those with high and low levels of autistic traits. Psychol Rep 2021; 125:1380-1396. [PMID: 33715510 PMCID: PMC9136470 DOI: 10.1177/0033294120988135] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
People who have a high degree of autistic traits often underperform on theory of mind tasks such as perspective-taking or facial emotion recognition compared to those with lower levels of autistic traits. However, some research suggests that this may not be the case if the agent they are evaluating is anthropomorphic (i.e. animal or cartoon) rather than typically human. The present studies examined the relation between facial emotion recognition and autistic trait profiles in over 750 adults using either a standard or cartoon version of the Reading the Mind in the Eyes (RME) test. Results showed that those scoring above the clinical cut off for autistic traits on the Autism Quotient performed significantly worse than those with the lowest levels of autistic traits on the standard RME, while scores across these groups did not differ substantially on the cartoon version of the task. These findings add further evidence that theory of mind ability such as facial emotion recognition is not at a global deficit in those with a high degree of autistic traits. Instead, differences in this ability may be specific to evaluating human agents.
|
46
|
Facial emotion recognition in panic disorder: a mini-review of behavioural studies. J Affect Disord 2021; 282:173-178. [PMID: 33418364 DOI: 10.1016/j.jad.2020.12.064] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/04/2020] [Revised: 12/15/2020] [Accepted: 12/18/2020] [Indexed: 11/29/2022]
Abstract
BACKGROUND Panic disorder (PD) is characterized by unexpected and repeated episodes of intense fear or anxiety, which manifest through strong cognitive and behavioural symptoms. However, a clear picture of how impairments in the recognition and processing of facial emotions affect the everyday life of PD patients has yet to be delineated. This review attempts to provide an overview of behavioural studies of emotion detection from facial stimuli in PD patients. METHODS A bibliographic search on PubMed for all studies investigating the recognition and processing of facial emotion stimuli in patients with PD and in high-risk offspring was performed, and nine articles (published 2000 to 2019) were identified. RESULTS In several of the reviewed studies, PD patients showed significant deficits in detecting (particularly negative) emotions in facial stimuli. These impairments were also found in the offspring of parents with PD and in high-risk individuals. LIMITATIONS Inferences are constrained by methodological heterogeneity, including but not limited to cross-study variability in the stimuli employed and in the clinical characterization of PD patients. CONCLUSIONS In general, the results of this survey confirm that deficits in processing facially conveyed negative emotions should be considered a core impairment in PD. However, larger and more homogeneous future studies are warranted to better delineate the connection between emotion recognition and PD.
|
47
|
Abstract
Deficits in facial emotion recognition are among the most common cognitive impairments, and they have been extensively studied in various psychiatric disorders, especially schizophrenia. However, there is still a lack of conclusive evidence about the factors associated with schizophrenia and the impairment present at each stage of the disease, which poses a challenge to the clinical management of patients. Based on this, we summarize facial emotion cognition among patients with schizophrenia, introduce the internationally recognized Bruce-Young face recognition model, review the behavioral and event-related potential studies on the recognition of emotions at each stage of the face recognition process, and offer suggestions for future clinical research directions to explore the underlying mechanisms of schizophrenia.
|
48
|
An Exploratory Study on Cross-Cultural Differences in Facial Emotion Recognition Between Adults From Malaysia and Australia. Front Psychiatry 2021; 12:622077. [PMID: 34177636 PMCID: PMC8219914 DOI: 10.3389/fpsyt.2021.622077] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/13/2020] [Accepted: 05/07/2021] [Indexed: 01/29/2023] Open
Abstract
While culture and depression influence the way in which humans process emotion, these two areas of investigation are rarely combined. Therefore, the aim of this study was to investigate differences in facial emotion recognition between Malaysian Malays and Australians of European heritage, with and without depression. A total of 88 participants took part in this study (Malays n = 47, Australians n = 41). All participants were screened using the Structured Clinical Interview for DSM-5 Clinician Version (SCID-5-CV) to assess Major Depressive Disorder (MDD) diagnosis, and they also completed the Beck Depression Inventory (BDI). The study used a facial emotion recognition (FER) task in which participants were asked to look at facial images and determine the emotion depicted by each facial expression. Depression status and cultural group did not significantly influence overall FER accuracy. Malaysian participants without MDD and Australian participants with MDD performed more quickly than Australian participants without MDD on the FER task. Also, Malaysian participants recognized fear more accurately than Australian participants did. Future studies could examine the extent of these influences and other aspects of culture and participant condition on facial emotion recognition.
|
49
|
Measuring change in facial emotion recognition in individuals with autism spectrum disorder: A systematic review. AUTISM : THE INTERNATIONAL JOURNAL OF RESEARCH AND PRACTICE 2020; 24:1607-1628. [PMID: 32551983 PMCID: PMC11078255 DOI: 10.1177/1362361320925334] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/23/2023]
Abstract
LAY ABSTRACT Children and adults with autism spectrum disorder show difficulty recognizing facial emotions in others, which makes social interaction challenging. While there are many treatments developed to improve facial emotion recognition, there is no agreement on the best way to measure such abilities in individuals with autism spectrum disorder. The purpose of this review is to examine studies that were published between January 1998 and November 2019 and have measured change in facial emotion recognition to evaluate the effectiveness of different treatments. Our search yielded 65 studies, and within these studies, 36 different measures were used to evaluate facial emotion recognition in individuals with autism spectrum disorder. Only six of these measures, however, were used in different studies and by different investigators. In this review, we summarize the different measures and outcomes of the studies, in order to identify promising assessment tools and inform future research.
|
50
|
FusionSense: Emotion Classification Using Feature Fusion of Multimodal Data and Deep Learning in a Brain-Inspired Spiking Neural Network. SENSORS (BASEL, SWITZERLAND) 2020; 20:E5328. [PMID: 32957655 PMCID: PMC7571195 DOI: 10.3390/s20185328] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/18/2020] [Revised: 09/04/2020] [Accepted: 09/11/2020] [Indexed: 01/22/2023]
Abstract
Using multimodal signals to solve the problem of emotion recognition is one of the emerging trends in affective computing. Several studies have utilized state-of-the-art deep learning methods and combined physiological signals, such as the electrocardiogram (ECG), electroencephalogram (EEG), and skin temperature, along with facial expressions, voice, and posture, to name a few, in order to classify emotions. Spiking neural networks (SNNs) represent the third generation of neural networks and employ biologically plausible models of neurons. SNNs have been shown to handle spatio-temporal data, which is essentially the nature of the data encountered in the emotion recognition problem, in an efficient manner. In this work, for the first time, we propose the application of SNNs to solve the emotion recognition problem with multimodal data. Specifically, we use the NeuCube framework, which employs an evolving SNN architecture, to classify emotional valence, and we evaluate the performance of our approach on the MAHNOB-HCI dataset. The multimodal data used in our work consist of facial expressions along with physiological signals such as ECG, skin temperature, skin conductance, respiration signal, mouth length, and pupil size. We perform classification under the Leave-One-Subject-Out (LOSO) cross-validation mode. Our results show that the proposed approach achieves an accuracy of 73.15% for classifying binary valence when applying feature-level fusion, which is comparable to other deep learning methods. We achieve this accuracy even without using EEG, which other deep learning methods have relied on to reach this level of accuracy. In conclusion, we have demonstrated that SNNs can be successfully used to solve the emotion recognition problem with multimodal data, and we provide directions for future research utilizing SNNs for affective computing. In addition to its good accuracy, the SNN recognition system is incrementally trainable on new data in an adaptive way and requires only one-pass training, which makes it suitable for practical, online applications. These features are not manifested in other methods for this problem.
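Feature-level fusion with LOSO evaluation, as used above, concatenates per-trial features from all modalities and holds out one subject's trials per fold. The sketch below mirrors that protocol with simulated data; a plain logistic regression stands in for the NeuCube SNN, which is an assumption for illustration only.

```python
# Feature-level fusion + Leave-One-Subject-Out evaluation (simulated data).
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n_trials, n_subjects = 240, 24
face_feats = rng.normal(size=(n_trials, 10))       # facial-expression features
physio_feats = rng.normal(size=(n_trials, 6))      # ECG, skin temperature, etc.
X = np.hstack([face_feats, physio_feats])          # feature-level fusion
y = rng.integers(0, 2, n_trials)                   # binary valence labels
subject = np.repeat(np.arange(n_subjects), n_trials // n_subjects)

accs = []
for train, test in LeaveOneGroupOut().split(X, y, groups=subject):
    clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    accs.append(clf.score(X[test], y[test]))       # held-out subject's accuracy
print(f"LOSO accuracy = {np.mean(accs):.2f}")      # ~0.5 here: random labels
```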
|