101
Nineuil C, Houot M, Dellacherie D, Méré M, Denos M, Dupont S, Samson S. Revisiting emotion recognition in different types of temporal lobe epilepsy: The influence of facial expression intensity. Epilepsy Behav 2023; 142:109191. [PMID: 37030041 DOI: 10.1016/j.yebeh.2023.109191] [Received: 09/27/2022] [Revised: 02/27/2023] [Accepted: 03/18/2023] [Indexed: 04/10/2023]
Abstract
Temporal lobe epilepsy (TLE) can induce various difficulties in recognizing emotional facial expressions (EFE), particularly for negative valence emotions. However, these difficulties have not been systematically examined according to the localization of the epileptic focus. For this purpose, we used a forced-choice recognition task in which faces expressing fear, sadness, anger, disgust, surprise, or happiness were presented at intensity levels ranging from moderate to high. The first objective of our study was to evaluate the impact of emotional intensity on the recognition of different categories of EFE in TLE patients compared to control participants. The second objective was to assess the effect of the localization of the epileptic focus on the recognition of EFE in patients with medial temporal lobe epilepsy (MTLE), with or without hippocampal sclerosis (HS), or lateral temporal lobe epilepsy (LTLE). The results showed that the 272 TLE patients and the 68 control participants were not differently affected by the intensity of EFE. However, we obtained group differences within the clinical population when we took into account the localization of the temporal lobe epileptic focus. As predicted, TLE patients were impaired in recognizing fear and disgust relative to controls. Moreover, the scores of these patients varied according to the localization of the epileptic focus, but not according to the cerebral lateralization of TLE. The facial expression of fear was less well recognized by MTLE patients, with or without HS, and the expression of disgust was less well recognized by LTLE patients as well as MTLE patients without HS. Moreover, emotional intensity modulated the recognition of disgust and surprise differently across the three patient groups, underlining the relevance of using moderate emotional intensity to distinguish the effect of epileptic focus localization. These findings should be taken into account when interpreting emotional behaviors and deserve further investigation before considering TLE surgical treatment or social cognition interventions in TLE patients.
102
Heffer N, Dennie E, Ashwin C, Petrini K, Karl A. Multisensory processing of emotional cues predicts intrusive memories after virtual reality trauma. Virtual Reality 2023; 27:2043-2057. [PMID: 37614716 PMCID: PMC10442266 DOI: 10.1007/s10055-023-00784-1] [Received: 05/17/2022] [Accepted: 03/03/2023] [Indexed: 08/25/2023]
Abstract
Research has shown that high trait anxiety can alter multisensory processing of threat cues (by amplifying integration of angry faces and voices); however, it remains unknown whether differences in multisensory processing play a role in the psychological response to trauma. This study examined the relationship between multisensory emotion processing and intrusive memories over seven days following exposure to an analogue trauma in a sample of 55 healthy young adults. We used an adapted version of the trauma film paradigm, where scenes showing a car accident trauma were presented using virtual reality, rather than a conventional 2D film. Multisensory processing was assessed prior to the trauma simulation using a forced choice emotion recognition paradigm with happy, sad and angry voice-only, face-only, audiovisual congruent (face and voice expressed matching emotions) and audiovisual incongruent expressions (face and voice expressed different emotions). We found that increased accuracy in recognising anger (but not happiness and sadness) in the audiovisual condition relative to the voice- and face-only conditions was associated with more intrusions following VR trauma. Despite previous results linking trait anxiety and intrusion development, no significant influence of trait anxiety on intrusion frequency was observed. Enhanced integration of threat-related information (i.e. angry faces and voices) could lead to overly threatening appraisals of stressful life events and result in greater intrusion development after trauma. Supplementary Information: The online version contains supplementary material available at 10.1007/s10055-023-00784-1.
103
Xu M, Cheng J, Li C, Liu Y, Chen X. Spatio-temporal deep forest for emotion recognition based on facial electromyography signals. Comput Biol Med 2023; 156:106689. [PMID: 36867897 DOI: 10.1016/j.compbiomed.2023.106689] [Received: 10/19/2022] [Revised: 02/02/2023] [Accepted: 02/14/2023] [Indexed: 03/05/2023]
Abstract
Emotion recognition is a key component of human-computer interaction technology, for which facial electromyogram (fEMG) is an important physiological modality. Recently, deep-learning-based emotion recognition using fEMG signals has drawn increased attention. However, the need for effective feature extraction and the demand for large-scale training data are two dominant factors that restrict the performance of emotion recognition. In this paper, a novel spatio-temporal deep forest (STDF) model is proposed to classify three categories of discrete emotions (neutral, sadness, and fear) using multi-channel fEMG signals. The feature extraction module fully extracts effective spatio-temporal features of fEMG signals using a combination of 2D frame sequences and multi-grained scanning. Meanwhile, a cascade forest-based classifier is designed to provide optimal structures for different scales of training data by automatically adjusting the number of cascade layers. The proposed model and five comparison methods were evaluated on our in-house fEMG dataset, which included three discrete emotions recorded from three channels of fEMG electrodes in a total of twenty-seven subjects. Experimental results demonstrate that the proposed STDF model achieves the best recognition performance, with an average accuracy of 97.41%. Moreover, the proposed STDF model can reduce the scale of the training data to 50% while reducing average emotion recognition accuracy by only about 5%. The proposed model offers an effective solution for practical applications of fEMG-based emotion recognition.
104
Baltariu IC, Enea V, Kaffenberger J, Duiverman LM, Aan Het Rot M. The acute effects of alcohol on social cognition: A systematic review of experimental studies. Drug Alcohol Depend 2023; 245:109830. [PMID: 36907121 DOI: 10.1016/j.drugalcdep.2023.109830] [Received: 09/16/2022] [Revised: 01/27/2023] [Accepted: 02/15/2023] [Indexed: 03/14/2023]
Abstract
RATIONALE Alcohol effects on social cognition have been studied by measuring facial emotion recognition, empathy, Theory of Mind (ToM), and other forms of information processing. OBJECTIVES Using the PRISMA guidelines, we reviewed experimental studies that examined acute effects of alcohol on social cognition. METHODS Scopus, PsycInfo, PubMed, and Embase were searched between July 2020 and January 2023. The PICO strategy was used for identifying participants, interventions, comparators, and outcomes. Participants (N = 2330) were adult social alcohol users. Interventions consisted of acute alcohol administration. Comparators included placebo or the lowest alcohol dose. Outcome variables were grouped into three themes: facial processing, empathy and ToM, and perceptions of inappropriate sexual behavior. RESULTS A total of 32 studies were reviewed. Studies measuring facial processing (67%) often found no effects of alcohol on the recognition of specific emotions, facilitated emotion recognition at lower doses, and worsened emotion recognition at higher doses. In studies measuring empathy or ToM (24%), lower doses were more likely to lead to improvements, while higher doses were generally impairing. Within the third group of studies (9%), moderate to high alcohol doses made it more difficult to perceive sexual aggression accurately. CONCLUSIONS Lower alcohol doses might sometimes help facilitate social cognition, but most data were in line with the idea that alcohol tends to worsen social cognition, particularly at higher doses. Future studies might focus on examining other moderators of the effects of alcohol on social cognition, particularly interpersonal characteristics such as trait emotional empathy and participant and target gender.
105
Sharma E, Ravi GS, Kumar K, Thennarasu K, Heron J, Hickman M, Vaidya N, Holla B, Rangaswamy M, Mehta UM, Krishna M, Chakrabarti A, Basu D, Nanjayya SB, Singh RL, Lourembam R, Kumaran K, Kuriyan R, Kurpad SS, Kartik K, Kalyanram K, Desrivieres S, Barker G, Orfanos DP, Toledano M, Purushottam M, Bharath RD, Murthy P, Jain S, Schumann G, Benegal V. Growth trajectories for executive and social cognitive abilities in an Indian population sample: Impact of demographic and psychosocial determinants. Asian J Psychiatr 2023; 82:103475. [PMID: 36736106 DOI: 10.1016/j.ajp.2023.103475] [Received: 12/01/2022] [Accepted: 01/18/2023] [Indexed: 01/21/2023]
Abstract
Cognitive abilities are markers of brain development and psychopathology. Abilities across executive and social domains need better characterization over development, including the factors that influence developmental change. This study is based on the cVEDA [Consortium on Vulnerability to Externalizing Disorders and Addictions] study, an Indian population-based developmental cohort. Verbal working memory, visuo-spatial working memory, response inhibition, set-shifting, and social cognition (faux pas recognition and emotion recognition) were cross-sectionally assessed in more than 8000 individuals aged 6-23 years. There was adequate representation across sex, urban-rural background, and psychosocial risk (psychopathology, childhood adversity, and wealth index, i.e. socio-economic status). Quantile regression was used to model developmental change. Age-based trajectories were generated, along with examination of the impact of determinants (sex, childhood adversity, and wealth index). Development in both executive and social cognitive abilities continued into adulthood. Maturation and stabilization occurred in increasing order of complexity, from working memory to inhibitory control to cognitive flexibility. Age-related change was more pronounced for low quantiles in response inhibition (β ~ 4 versus ~ 2 for higher quantiles), but for higher quantiles in set-shifting (β ~ -1 versus ~ -0.25 for lower quantiles). Wealth index had the largest influence on developmental change across cognitive abilities. Sex differences were prominent in response inhibition, set-shifting, and emotion recognition. Childhood adversity had a negative influence on cognitive development. These findings add to the limited literature on patterns and determinants of cognitive development. They have implications for understanding developmental vulnerabilities in young persons and the need to provide conducive socio-economic environments.
106
Gong B, Li N, Li Q, Yan X, Chen J, Li L, Wu X, Wu C. The Mandarin Chinese auditory emotions stimulus database: A validated set of Chinese pseudo-sentences. Behav Res Methods 2023; 55:1441-1459. [PMID: 35641682 DOI: 10.3758/s13428-022-01868-7] [Accepted: 04/29/2022] [Indexed: 11/08/2022]
Abstract
Emotional prosody is fully embedded in language and can be influenced by the linguistic properties of a specific language. Considering the limitations of existing Chinese auditory stimulus databases, we developed and validated an emotional auditory stimulus database composed of Chinese pseudo-sentences recorded by six professional actors in Mandarin Chinese. Emotional expressions included happiness, sadness, anger, fear, disgust, pleasant surprise, and neutrality. All emotional categories were vocalized in two sentence patterns, declarative and interrogative. In addition, all emotional pseudo-sentences, except for neutral ones, were vocalized at two levels of emotional intensity: normal and strong. Each recording was validated with 40 native Chinese listeners in terms of the recognition accuracy of the intended emotion portrayal; in total, 4361 pseudo-sentence stimuli were included in the database. Validation of the database using a forced-choice recognition paradigm revealed high rates of emotion recognition accuracy. Detailed acoustic attributes of the vocalizations are provided and connected to the emotion recognition rates. This corpus could be a valuable resource for researchers and clinicians exploring the behavioral and neural mechanisms underlying emotion processing in the general population and emotional disturbances in neurological, psychiatric, and developmental disorders. The Mandarin Chinese auditory emotion stimulus database is available at the Open Science Framework ( https://osf.io/sfbm6/?view_only=e22a521e2a7d44c6b3343e11b88f39e3 ).
107
Yildirim E, Akbulut FP, Catal C. Analysis of facial emotion expression in eating occasions using deep learning. Multimedia Tools and Applications 2023; 82:1-13. [PMID: 37362640 PMCID: PMC10031178 DOI: 10.1007/s11042-023-15008-6] [Received: 05/08/2022] [Revised: 08/30/2022] [Accepted: 02/27/2023] [Indexed: 06/28/2023]
Abstract
Eating is experienced as an emotional social activity in every culture, and several factors influence the emotions felt during food consumption. The emotion felt while eating has a significant impact on our lives and affects health conditions such as obesity. Moreover, investigating emotion during food consumption is a multidisciplinary problem ranging from neuroscience to anatomy. In this study, we focus on evaluating the emotional experience of different participants during eating activities and aim to analyze it automatically using deep learning models. We propose a facial expression-based prediction model to eliminate user bias in questionnaire-based assessment systems and to minimize false entries to the system. We measured the neural, behavioral, and physical manifestations of emotions with a mobile app and recognized emotional experiences from facial expressions. We used three different situations to test whether any factor other than the food could affect a person's mood: we asked users to watch videos, listen to music, or do nothing while eating. In this way, we found that not only food but also external factors play a role in emotional change. We employed three Convolutional Neural Network (CNN) architectures, fine-tuned VGG16, and Deepface to recognize emotional responses during eating. The experimental results demonstrated that the fine-tuned VGG16 provides remarkable results, with an overall accuracy of 77.68% for recognizing the four emotions. This system is an alternative to today's survey-based restaurant and food evaluation systems.
108
Tucci AA, Schroeder A, Noël C, Shvetz C, Yee J, Howard AL, Keshavan MS, Guimond S. Social cognition in youth with a first-degree relative with schizophrenia: A systematic scoping review. Psychiatry Res 2023; 323:115173. [PMID: 36989908 DOI: 10.1016/j.psychres.2023.115173] [Received: 10/17/2022] [Revised: 03/14/2023] [Accepted: 03/18/2023] [Indexed: 03/31/2023]
Abstract
Social-cognitive deficits are present in individuals at familial high-risk (FHR) for schizophrenia and may play a role in the onset of the illness. No literature review has examined the social-cognitive profiles of youth at FHR who are within the peak window of risk for developing schizophrenia, which could provide insight into the endophenotypic role of social cognition. This systematic scoping review (1) summarizes the evidence on social-cognitive deficits in youth at FHR, (2) explores brain correlates, and (3) describes associations between social-cognitive deficits and prodromal symptoms. We searched PsycInfo and PubMed for studies investigating social cognition in FHR youth aged 35 or younger and included 19 studies (FHR = 639; controls = 689). Studies report that youth at FHR have difficulty recognizing negative emotions, particularly fear. Youth at FHR also have difficulty performing complex theory of mind tasks. Abnormalities in corticolimbic and temporoparietal regions are observed in youth at FHR during social-cognitive tasks, but results are inconsistent. Finally, there is evidence of negative associations between prodromal symptoms and performance on emotion regulation and theory of mind tasks, but such research is scarce. This review highlights the need for studies of youth at FHR using longitudinal designs and extensive social-cognitive, brain imaging, and clinical measures.
109
Lohrasbi S, Moradi AR, Sadeghi M. Exploring Emotion Recognition Patterns Among Iranian People Using CANTAB as an Approved Neuro-Psychological Assessment. Basic Clin Neurosci 2023; 14:289-295. [PMID: 38107531 PMCID: PMC10719974 DOI: 10.32598/bcn.2022.3607.1] [Received: 08/15/2021] [Revised: 09/13/2021] [Accepted: 11/07/2021] [Indexed: 12/19/2023]
Abstract
Introduction Emotion recognition is the main component of social cognition and shows varying patterns across cultures and nationalities. The present study aimed to investigate emotion recognition patterns among Iranians using the Cambridge Neuropsychological Test Automated Battery (CANTAB), a validated neuropsychological test. Methods In this descriptive-analytical study, 117 males and females (mean±SD age 32.1±6.4 years) were initially assessed with the computerized RAVEN-2 progressive matrices intelligence test. The emotion recognition subtest of the CANTAB was then administered. Participants' correct responses to each of the six basic emotions, as well as their recognition times, were used for analysis. Results The maximum correct response rate was 75.83%, for happiness. The correct response rates for sadness, surprise, disgust, anger, and fear were 70%, 68.48%, 47.84%, 42.54%, and 38.26%, respectively. The shortest recognition time was for disgust, at 322 ms, while sadness (mean response time 1800 ms) and fear (1529 ms) showed the longest recognition times. In addition, participants recognized happiness (mean response time 1264 ms) better than other emotions; however, post-hoc t-test analyses showed that only the correct responses for sadness and surprise did not differ significantly (t(112) = -0.59, P = 0.55, d = 0.05). These results suggest that recognition accuracy varies across emotions, with the exception of sadness and surprise, which did not differ. Conclusion The findings of this study could be beneficial for evaluating cognitive elements, abilities, and disabilities in the Iranian population, and could be used for investigating social cognition in this population.
Highlights
- Emotion recognition patterns among Iranians were investigated using a valid neuropsychological test.
- Iranians showed higher accuracy in recognizing happiness and lower accuracy in recognizing fear.
- Disgust was recognized with the shortest response time, while sadness and fear had the longest recognition times.
- The findings highlight cultural differences in emotion recognition and can aid in evaluating cognitive abilities and social cognition in the Iranian population.
- The study emphasizes the importance of considering cultural factors in assessing and understanding emotion recognition.
Plain Language Summary
Understanding how people recognize emotions is crucial for effective communication and building social connections. However, the ability to recognize emotions can vary across cultures. This study aimed to investigate how Iranians recognize emotions using a reliable test. The researchers assessed 117 Iranian adults, both males and females, using a computer-based test. Participants were asked to identify six basic emotions (happiness, sadness, anger, disgust, fear, and surprise) displayed on a screen. The researchers measured the participants' accuracy in identifying each emotion and the time it took them to recognize it. The findings revealed that Iranians were most accurate in recognizing happiness and least accurate in recognizing fear. They were better at identifying positive emotions like happiness and surprise than negative emotions like disgust and anger. Participants took the least time to recognize disgust and the longest to recognize sadness and fear. These results show that Iranians have specific patterns in recognizing emotions, which can be influenced by cultural factors. Understanding these patterns is important for assessing cognitive abilities and social cognition in the Iranian population. These findings also have broader implications: they highlight the need to consider cultural differences in emotion recognition, as these can affect communication and social interactions. The study's outcomes can be valuable for various applications. For instance, they can aid in developing tests to assess emotion recognition difficulties in individuals with conditions such as autism or schizophrenia. They can also be useful for professionals, such as customer service employees or mental health providers, who need to interpret others' emotions accurately. By shedding light on cultural variations in emotion recognition, this research contributes to our understanding of human emotions and their role in interpersonal relationships.
110
Powell T, Plate RC, Miron CD, Wagner NJ, Waller R. Callous-unemotional Traits and Emotion Recognition Difficulties: Do Stimulus Characteristics Play a Role? Child Psychiatry Hum Dev 2023:10.1007/s10578-023-01510-3. [PMID: 36811753 DOI: 10.1007/s10578-023-01510-3] [Accepted: 02/12/2023] [Indexed: 02/24/2023]
Abstract
Emotion recognition difficulties are linked to callous-unemotional (CU) traits, which predict risk for severe antisocial behavior. However, few studies have investigated how stimulus characteristics influence emotion recognition performance, which could give insight into the mechanisms underpinning CU traits. To address this knowledge gap, children aged 7-10 years old (N = 45; 53% female, 47% male; 46.3% Black/African-American, 25.9% White, 16.7% Mixed race or Other, 9.3% Asian) completed an emotion recognition task featuring static facial stimuli from child and adult models and facial and full-body dynamic stimuli from adult models. Parents reported on CU traits of children in the sample. Children showed better emotion recognition for dynamic than static faces. Higher CU traits were associated with worse emotion recognition, particularly for sad and neutral expressions. Stimulus characteristics did not impact associations between CU traits and emotion recognition.
111
Ferrer-Cairols I, Ferré-González L, García-Lluch G, Peña-Bautista C, Álvarez-Sánchez L, Baquero M, Cháfer-Pericás C. Emotion recognition and baseline cortisol levels relationship in early Alzheimer disease. Biol Psychol 2023; 177:108511. [PMID: 36716987 DOI: 10.1016/j.biopsycho.2023.108511] [Received: 09/23/2022] [Revised: 01/09/2023] [Accepted: 01/26/2023] [Indexed: 01/30/2023]
Abstract
BACKGROUND Emotion recognition is often impaired in early Alzheimer's disease (AD) and can be evaluated using the Reading the Mind in the Eyes Test (RMET). Similarly, cortisol levels can affect cognition and could be considered a biomarker of AD. OBJECTIVES The aim of this study was to analyse the relationship between an emotion recognition task and cortisol levels in participants with early AD. METHODS Complex emotion recognition was assessed with the RMET, and plasma cortisol levels were determined by mass spectrometry, in participants classified into mild cognitive impairment (MCI) due to AD (n = 25), mild dementia (MD) due to AD (n = 20), MCI non-AD (n = 34), MD non-AD (n = 13), and healthy control (HC) (n = 16) groups. RESULTS Significantly lower positive emotion recognition was found in the MCI non-AD group (p = 0.02), and lower emotion recognition in the MD (AD and non-AD) groups (p < 0.01), compared to the healthy group. In addition, significant differences in cortisol levels and all RMET scores were observed among the MCI and MD groups (p < 0.01). A significant correlation was also obtained between total and neutral RMET scores and cortisol levels in the MD groups (p = 0.01). CONCLUSIONS These outcomes suggest that detection of positive emotion dysfunction could help to identify MCI non-AD patients. Furthermore, general impaired emotion recognition and high cortisol levels may be associated with cognitive impairment at the mild dementia level.
112
Shepherd JL, Rippon D. The impact of briefly observing faces in opaque facial masks on emotion recognition and empathic concern. Q J Exp Psychol (Hove) 2023; 76:404-418. [PMID: 35319298 PMCID: PMC9896299 DOI: 10.1177/17470218221092590] [Indexed: 02/06/2023]
Abstract
Since the outbreak of SARS-CoV-2 in 2019, global public health initiatives have advocated the community use of face masks to reduce spread of the virus. Although community use of facial coverings has been deemed essential for public health, there have been calls for enquiries to ascertain how face masks may impact non-verbal methods of communication. This study aimed to ascertain how brief observations of faces in opaque facial coverings could impact facial emotion recognition. It was also an aim to ascertain whether there was an association between levels of empathic concern and facial emotion recognition when viewing masked faces. An opportunity sample of 199 participants, who resided in the United Kingdom, were randomly assigned to briefly observe either masked (n = 102) or unmasked (n = 97) faces. Participants in both conditions viewed a series of facial expressions, from the Radboud Faces Database, with models conveying the emotional states of anger, disgust, fear, happiness, sadness, and surprise. Each face was presented to participants for 250 ms in the masked and unmasked conditions. A 6 (emotion type) × 2 (masked/unmasked condition) mixed ANOVA revealed that viewing masked faces significantly reduced facial emotion recognition of disgust, fear, happiness, sadness, and surprise. However, there were no differences in the success rate of recognising the emotional state of anger between the masked and unmasked conditions. Furthermore, higher levels of empathic concern were associated with greater success in facially recognising the emotional state of disgust. The results of this study suggest that significant reductions in emotion recognition when viewing faces in opaque masks can be observed even when people are exposed to facial stimuli for a brief period of time.
113
Ho FC, Lam CSC, Lo SK. Differences Between Students With Comorbid Intellectual Disability and Autism Spectrum Disorder and Those With Intellectual Disability Alone in the Recognition of and Reaction to Emotions. J Autism Dev Disord 2023; 53:593-605. [PMID: 32761303 DOI: 10.1007/s10803-020-04630-0] [Indexed: 10/23/2022]
Abstract
This study investigates whether students with intellectual disability (ID) alone differ from students with comorbid ID and autism spectrum disorder (ASD) in their recognition of emotions. The ability to recognise emotions does not mean that students automatically know how to react to those emotions, so differences in performance on recognition and reaction tasks were examined. Participants were 20 primary 6 students who had ID with ASD and 20 primary 6 students who had ID without ASD, from four special schools. The testing and training materials were adapted from a local teaching package. The results showed that both groups exhibited similar performance patterns in recognition tasks, but students with comorbid ASD exhibited inferior performance in tasks requiring reactions to complex emotions.
114
Social Cognition in Temporal and Frontal Lobe Epilepsy: Systematic Review, Meta-analysis, and Clinical Recommendations. J Int Neuropsychol Soc 2023; 29:205-229. [PMID: 35249578 DOI: 10.1017/s1355617722000066] [Indexed: 02/08/2023]
Abstract
OBJECTIVE Despite the importance of social cognitive functions to mental health and social adjustment, examination of these functions is absent from routine assessment of epilepsy patients. This review aims to provide a comprehensive overview of the literature on four major aspects of social cognition in temporal and frontal lobe epilepsy, a critical step toward designing new interventions. METHOD Papers from 1990 to 2021 were reviewed and examined for inclusion in this study. After deduplication, a systematic review and meta-analysis of 44 and 40 articles, respectively, involving 113 people with frontal lobe epilepsy and 1482 people with temporal lobe epilepsy, were conducted. RESULTS Our results indicated that while patients with frontal or temporal lobe epilepsy have difficulties in all aspects of social cognition relative to nonclinical controls, the effect sizes were larger for theory of mind (g = .95) than for emotion recognition (g = .69) in the temporal lobe epilepsy group. The frontal lobe epilepsy group exhibited significantly greater impairment in emotion recognition compared to the temporal lobe group. Additionally, people with right temporal lobe epilepsy (g = 1.10) performed more poorly than those with a left-sided (g = .90) seizure focus, specifically in the theory of mind domain. CONCLUSIONS These data point to a potentially important difference in the severity of deficits in emotion recognition and theory of mind abilities depending on the lateralization of the seizure focus. We also suggest a guide for the assessment of impairments in social cognition that can be integrated into multidisciplinary clinical evaluation for people with epilepsy.
|
115
|
Ikeda S. Development of Emotion Recognition from Facial Expressions with Different Eye and Mouth Cues in Japanese People. J Genet Psychol 2023; 184:187-197. [PMID: 36661090 DOI: 10.1080/00221325.2023.2168174] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 01/21/2023]
Abstract
Research has reported that Japanese people are more likely to focus on and look longer at eyes when reading emotions from facial expressions than their Western counterparts. However, how these tendencies develop and whether there is a relationship between the two tendencies (to focus on the eyes and to look longer at the eyes) is unclear. The present study examined emotion recognition and gaze patterns in Japanese preschool children (n = 51) and university students (n = 57), using facial expressions with different eye and mouth cues. The results showed developmental changes in emotion recognition, with adults being more sensitive to negative emotions, whereas gaze patterns showed no developmental changes. Furthermore, there was no relationship between emotion recognition and gaze patterns. This suggests that the implicit and explicit processing of emotion recognition develop at different times, and that there is no direct relationship between the two processes.
|
116
|
Cuzzocrea F, Gugliandolo MC, Cannavò M, Liga F. Emotion recognition in individuals wearing facemasks: a preliminary analysis of age-related differences. Curr Psychol 2023; 42:1-4. [PMID: 36684462 PMCID: PMC9843093 DOI: 10.1007/s12144-023-04239-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 01/31/2022] [Revised: 11/11/2022] [Accepted: 01/03/2023] [Indexed: 01/19/2023]
Abstract
COVID-19 is severely affecting individuals' lives worldwide. Previous research warned that facial occlusion may impair facial emotion recognition, whilst other findings suggested that age-related differences may be relevant in recognizing emotions in others' faces. However, studies observing individuals' ability to interpret others' facial mimicry are heterogeneous, thus precluding the generalizability of the findings. This preliminary study examined age-related differences and the influence of different covering types (with and without face masks) in determining different levels of facial emotion recognition. A total of 131 participants were split into three age groups (10-14, 15-17, and 20-25 years) and were asked to complete an emotion recognition task. Participants were better able to recognize facial emotions without any occlusion, and happiness was the most recognizable emotion. Moreover, the adolescent group performed better in recognizing anger and fear in stimuli depicting masked and unmasked faces. The current results suggest the importance of monitoring emotion recognition abilities in developing individuals during the COVID-19 pandemic.
|
117
|
Nayani FZ, Yuki M, Maddux WW, Schug J. Lay theories about emotion recognition explain cultural differences in willingness to wear facial masks during the COVID-19 pandemic. Curr Res Ecol Soc Psychol 2023; 4:100089. [PMID: 36685995 PMCID: PMC9839383 DOI: 10.1016/j.cresp.2023.100089] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/21/2022] [Revised: 12/15/2022] [Accepted: 01/12/2023] [Indexed: 06/08/2023]
Abstract
Given that mask-wearing proved to be an important tool to slow the spread of infection during the COVID-19 pandemic, investigating the psychological and cultural factors that influence norms for mask wearing across cultures is exceptionally important. One factor that may influence mask-wearing behavior is the degree to which people believe masks potentially impair emotion recognition. Based on previous research suggesting that there may be cultural differences in the facial regions that people in Japan and the United States attend to when inferring a target's emotional state, we predicted that Americans would perceive masks (which cover the mouth) as more likely to impair emotion recognition, whereas Japanese would perceive facial coverings that conceal the eye region (sunglasses) to be more likely to impair emotion recognition. The results showed that Japanese participants reported wearing masks more than Americans. Americans also reported higher expected difficulty in interpreting emotions of individuals wearing masks (vs. sunglasses), while Japanese reported the reverse effect. Importantly, expectations about the negative impact of facial masks on emotion recognition explained cultural differences in mask-wearing behavior, even after accounting for existing social norms.
|
118
|
Hasnul MA, Ab. Aziz NA, Abd. Aziz A. Augmenting ECG Data with Multiple Filters for a Better Emotion Recognition System. Arab J Sci Eng 2023; 48:1-22. [PMID: 36685996 PMCID: PMC9838506 DOI: 10.1007/s13369-022-07585-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/24/2022] [Accepted: 12/18/2022] [Indexed: 01/13/2023]
Abstract
A physiological-based emotion recognition system (ERS) with a unimodal approach such as an electrocardiogram (ECG) is less popular than a multimodal approach. However, a single modality has the advantage of lower development and computational cost. Therefore, this study focuses on a unimodal ECG-based ERS. The ECG-based ERS has the potential to become the next mass-adopted consumer application due to the wide availability of wearable and mobile ECG devices in the market. Currently, ECG-inclusive affective datasets are limited, and many of the existing datasets have small sample sizes. Hence, ECG-based ERS studies are stunted by the lack of quality data. A novel multi-filtering augmentation technique is proposed here to increase the sample size of the ECG data. This technique augments the ECG signals by cleaning the data in different ways. Three small ECG datasets labelled according to emotion state are used in this study. The benefit of the proposed augmentation technique is measured using the classification accuracy of five machine learning algorithms: k-nearest neighbours (KNN), support vector machine, decision tree, random forest, and multilayer perceptron. The results show that with the proposed technique, there is a significant improvement in performance for all the datasets and classifiers. The KNN classifier improved the most with the augmented data, with reported classification accuracies of over 90%.
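The multi-filtering idea above, cleaning the same recording in several different ways so each cleaned copy becomes an extra training sample with the same label, can be sketched as follows. The paper's actual filters are not specified here; simple moving averages of different widths stand in for them, so this is an illustration under that assumption only:

```python
import numpy as np

def moving_average(x, k):
    """Stand-in low-pass filter: k-point moving average (same-length output)."""
    return np.convolve(x, np.ones(k) / k, mode="same")

def augment_multifilter(ecg, window_sizes=(3, 5, 9)):
    """Return the raw signal plus one differently filtered copy per filter."""
    return [ecg] + [moving_average(ecg, k) for k in window_sizes]

# One noisy ECG-like trace becomes four training samples
rng = np.random.default_rng(0)
ecg = np.sin(np.linspace(0, 8 * np.pi, 500)) + 0.1 * rng.standard_normal(500)
augmented = augment_multifilter(ecg)
print(len(augmented))  # → 4
```

Each filtered copy keeps the emotion label of the original recording, multiplying the effective dataset size before classifier training.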
|
119
|
Bai Z, Liu J, Hou F, Chen Y, Cheng M, Mao Z, Song Y, Gao Q. Emotion recognition with residual network driven by spatial-frequency characteristics of EEG recorded from hearing-impaired adults in response to video clips. Comput Biol Med 2023; 152:106344. [PMID: 36470142 DOI: 10.1016/j.compbiomed.2022.106344] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/13/2022] [Revised: 10/31/2022] [Accepted: 11/21/2022] [Indexed: 12/03/2022]
Abstract
In recent years, emotion recognition based on electroencephalography (EEG) signals has attracted plenty of attention. Most of the existing works focused on normal or depressed people. Due to the lack of hearing ability, it is difficult for hearing-impaired people to express their emotions through language in their social activities. In this work, we collected the EEG signals of hearing-impaired subjects while they were watching six kinds of emotional video clips (happiness, inspiration, neutral, anger, fear, and sadness) for emotion recognition. The biharmonic spline interpolation method was utilized to convert the traditional frequency-domain features, Differential Entropy (DE), Power Spectral Density (PSD), and Wavelet Entropy (WE), into the spatial domain. The patch embedding (PE) method was used to segment the feature map into equal patches to obtain the differences in the distribution of emotional information among brain regions. For feature classification, a compact residual network with Depthwise convolution (DC) and Pointwise convolution (PC) is proposed to separate spatial and channel mixing dimensions to better extract information between channels. Subject-dependent experiments with 70% training sets and 30% testing sets were performed. The results showed that the average classification accuracies by PE (DE), PE (PSD), and PE (WE) were 91.75%, 85.53%, and 75.68%, respectively, improvements of 11.77%, 23.54%, and 16.61% over DE, PSD, and WE. Moreover, comparison experiments were carried out on the SEED and DEAP datasets with PE (DE), which achieved average accuracies of 90.04% (positive, neutral, and negative) and 88.75% (high valence and low valence). By exploring the emotional brain regions, we found that the frontal, parietal, and temporal lobes of hearing-impaired people were associated with emotional activity, whereas in normal-hearing people the main emotional brain area was the frontal lobe.
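Of the three frequency-domain features named above, differential entropy (DE) is the simplest to sketch: for an approximately Gaussian band-passed EEG segment it reduces to a function of the segment's variance. The spatial interpolation, patch embedding, and residual network are omitted; this is a minimal sketch of the DE feature only:

```python
import numpy as np

def differential_entropy(band_signal):
    """DE of an approximately Gaussian signal: 0.5 * ln(2 * pi * e * variance)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(band_signal))

# Unit-variance Gaussian noise should give roughly 0.5 * ln(2*pi*e) ≈ 1.419
rng = np.random.default_rng(1)
segment = rng.standard_normal(10_000)
print(differential_entropy(segment))
```

In practice the DE would be computed per channel and per frequency band, then interpolated onto a 2-D scalp map before classification.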
|
120
|
Paine AL, van Goozen SHM, Burley DT, Anthony R, Shelton KH. Facial emotion recognition in adopted children. Eur Child Adolesc Psychiatry 2023; 32:87-99. [PMID: 34228226 PMCID: PMC9908728 DOI: 10.1007/s00787-021-01829-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Received: 04/15/2021] [Accepted: 06/13/2021] [Indexed: 11/27/2022]
Abstract
Children adopted from public care are more likely to experience emotional and behavioural problems. We investigated two aspects of emotion recognition that may be associated with these outcomes, discrimination accuracy of emotions and response bias, in a mixed-method, multi-informant study of 4-to-8-year-old children adopted from local authority care in the UK (N = 42). We compared adopted children's emotion recognition performance to that of a comparison group of children living with their birth families, who were matched by age, sex, and teacher-rated total difficulties on the Strengths and Difficulties Questionnaire (SDQ, N = 42). We also examined relationships between adopted children's emotion recognition skills and their pre-adoptive histories of early adversity (indexed by cumulative adverse childhood experiences), their parent- and teacher-rated emotional and behavioural problems, and their parents' coded warmth during a Five Minute Speech Sample. Adopted children showed significantly worse facial emotion discrimination accuracy for sad and angry faces than non-adopted children. Adopted children's discrimination accuracy for scared and neutral faces was negatively associated with parent-reported behavioural problems, and discrimination accuracy for angry and scared faces was associated with parent- and teacher-reported emotional problems. Contrary to expectations, children who experienced more recorded pre-adoptive early adversity were more accurate in identifying negative emotions. Warm adoptive parenting was associated with fewer behavioural problems, and a lower tendency for children to incorrectly identify faces as angry. Study limitations and implications for intervention strategies to support adopted children's emotion recognition and psychological adjustment are discussed.
|
121
|
Pavez R, Diaz J, Arango-Lopez J, Ahumada D, Mendez-Sandoval C, Moreira F. Emo-mirror: a proposal to support emotion recognition in children with autism spectrum disorders. Neural Comput Appl 2023; 35:7913-7924. [PMID: 34642548 PMCID: PMC8497190 DOI: 10.1007/s00521-021-06592-5] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Received: 06/29/2021] [Accepted: 09/24/2021] [Indexed: 11/25/2022]
Abstract
Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by persistent difficulties in the socialization process. Health professionals have used traditional methods in therapy with the aim of improving patients' expression of emotions. However, these methods have not been sufficient for teaching patients to recognize the different emotions expressed on people's faces. Therefore, different artificial intelligence techniques have been applied to improve the results obtained in these therapies. In this article, we propose the construction of an intelligent mirror to recognize five basic emotions: angry, scared, sad, happy, and neutral. This mirror uses convolutional neural networks to analyze the images captured by a camera and compares the detected expression with the one the patient is asked to perform, thus supporting the therapies carried out by health professionals with children with ASD. The proposal presents the platform and computer architecture, as well as an evaluation by specialists under the technology acceptance model.
|
122
|
Wei Y, Liu Y, Li C, Cheng J, Song R, Chen X. TC-Net: A Transformer Capsule Network for EEG-based emotion recognition. Comput Biol Med 2023; 152:106463. [PMID: 36571938 DOI: 10.1016/j.compbiomed.2022.106463] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Received: 06/17/2022] [Revised: 11/30/2022] [Accepted: 12/19/2022] [Indexed: 12/24/2022]
Abstract
Deep learning has recently achieved remarkable success in emotion recognition based on Electroencephalogram (EEG), in which convolutional neural networks (CNNs) are the most widely used models. However, due to the local feature learning mechanism, CNNs have difficulty in capturing the global contextual information involving the temporal domain, frequency domain, intra-channel and inter-channel relationships. In this paper, we propose a Transformer Capsule Network (TC-Net), which mainly contains an EEG Transformer module to extract EEG features and an Emotion Capsule module to refine the features and classify the emotion states. In the EEG Transformer module, EEG signals are partitioned into non-overlapping windows. A Transformer block is adopted to capture global features among different windows, and we propose a novel patch merging strategy named EEG-PatchMerging (EEG-PM) to better extract local features. In the Emotion Capsule module, each channel of the EEG feature maps is encoded into a capsule to better characterize the spatial relationships among multiple features. Experimental results on two popular datasets (i.e., DEAP and DREAMER) demonstrate that the proposed method achieves state-of-the-art performance in the subject-dependent scenario. Specifically, on DEAP (DREAMER), our TC-Net achieves average accuracies of 98.76% (98.59%), 98.81% (98.61%) and 98.82% (98.67%) for the valence, arousal and dominance dimensions, respectively. Moreover, the proposed TC-Net also shows high effectiveness in multi-state emotion recognition tasks using the popular VA and VAD models. The main limitation of the proposed model is that it tends to obtain relatively low performance in the cross-subject recognition task, which is worthy of further study in the future.
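The first step described above, partitioning each EEG trial into non-overlapping windows before the Transformer block, can be sketched as a reshape; the channel count and window length below are hypothetical, not the paper's configuration:

```python
import numpy as np

def partition_windows(eeg, window_len):
    """Split a (channels, time) array into non-overlapping windows,
    returning shape (n_windows, channels, window_len); trailing
    samples that do not fill a window are dropped."""
    channels, time = eeg.shape
    n = time // window_len
    trimmed = eeg[:, : n * window_len]
    return trimmed.reshape(channels, n, window_len).transpose(1, 0, 2)

eeg = np.arange(32 * 128, dtype=float).reshape(32, 128)  # 32 channels, 128 samples
windows = partition_windows(eeg, 32)
print(windows.shape)  # → (4, 32, 32)
```

Each window then plays the role of a token for the Transformer block, analogous to patches in a vision transformer.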
|
123
|
Emotion is perceived accurately from isolated body parts, especially hands. Cognition 2023; 230:105260. [PMID: 36058103 DOI: 10.1016/j.cognition.2022.105260] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Received: 01/06/2022] [Revised: 08/16/2022] [Accepted: 08/17/2022] [Indexed: 11/21/2022]
Abstract
Body posture and configuration provide important visual cues about the emotion states of other people. We know that bodily form is processed holistically; however, emotion recognition may depend on different mechanisms, and certain body parts, such as the hands, may be especially important for perceiving emotion. This study therefore compared participants' emotion recognition performance when shown images of full bodies, or of isolated hands, arms, heads and torsos. Across three experiments, emotion recognition accuracy was above chance for all body parts. While emotions were recognized most accurately from full bodies, recognition performance from the hands was more accurate than for other body parts. Representational similarity analysis further showed that the pattern of errors for the hands was related to that for full bodies. Performance was reduced when stimuli were inverted, showing a clear body inversion effect. The high performance for hands was not due only to the fact that there are two hands, as performance remained well above chance even when just one hand was shown. These results demonstrate that emotions can be decoded from body parts. Furthermore, certain features, such as the hands, are more important to emotion perception than others. STATEMENT OF RELEVANCE: Successful social interaction relies on accurately perceiving emotional information from others. Bodies provide an abundance of emotion cues; however, the way in which emotional bodies and body parts are perceived is unclear. We investigated this perceptual process by comparing emotion recognition for body parts with that for full bodies. Crucially, we found that while emotions were most accurately recognized from full bodies, emotions were also classified accurately when images of isolated hands, arms, heads and torsos were seen. Of the body parts shown, emotion recognition from the hands was most accurate.
Furthermore, shared patterns of emotion classification for hands and full bodies suggested that emotion recognition mechanisms are shared for full bodies and body parts. That the hands are key to emotion perception is important evidence in its own right. It could also be applied to interventions for individuals who find it difficult to read emotions from faces and bodies.
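The representational similarity analysis mentioned above compares the pattern of classification errors across conditions. A minimal sketch: correlate the off-diagonal cells of two confusion matrices. The matrices below are hypothetical toy data, not the study's results:

```python
import numpy as np

def error_pattern_similarity(conf_a, conf_b):
    """Pearson correlation of the off-diagonal (error) cells of two
    confusion matrices (rows: true emotion, cols: predicted emotion)."""
    mask = ~np.eye(conf_a.shape[0], dtype=bool)
    return np.corrcoef(conf_a[mask].astype(float), conf_b[mask].astype(float))[0, 1]

# Hypothetical 3-emotion confusion matrices for full bodies and hands
full_body = np.array([[8, 1, 1], [2, 7, 1], [1, 3, 6]])
hands = np.array([[7, 2, 1], [2, 6, 2], [1, 3, 6]])
print(error_pattern_similarity(full_body, hands) > 0.5)  # → True
```

A high correlation of error patterns is the kind of evidence used to argue that hands and full bodies engage shared emotion-recognition mechanisms.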
|
124
|
The association between acute stress & empathy: A systematic literature review. Neurosci Biobehav Rev 2023; 144:105003. [PMID: 36535374 DOI: 10.1016/j.neubiorev.2022.105003] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Received: 06/07/2022] [Revised: 12/09/2022] [Accepted: 12/12/2022] [Indexed: 12/23/2022]
Abstract
Empathy is a fundamental component of our social-emotional experience. Over the last decade, there has been increased interest in understanding the effects of acute stress on empathy. We provide a first comprehensive and systematic overview identifying emerging patterns and gaps in this literature. Regarding affective empathy, there is abundant evidence for stress contagion, the 'spillover' of stress from a stressed target to an unstressed perceiver. We highlight contextual factors that can facilitate and/or undermine these effects. Fewer studies have investigated the effects of acute stress on affective empathy, revealing a nuanced picture: some evidence suggests acute stress can block contagion of others' emotions, but again contextual differences need to be considered. Regarding cognitive empathy, most studies find no conclusive effects for simplistic measures of emotion recognition; however, studies using more complex empathy tasks find that acute stress might affect cognitive empathy differentially for men and women. This review provides an important first step towards understanding how acute stress can impact social togetherness, and aims to aid future research by highlighting (in)congruencies and outstanding questions.
|
125
|
TERMS: textual emotion recognition in multidimensional space. Appl Intell 2023; 53:2673-2693. [PMID: 35578619 PMCID: PMC9094737 DOI: 10.1007/s10489-022-03567-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Accepted: 03/29/2022] [Indexed: 01/14/2023]
Abstract
Microblogs generate a vast amount of data in which users express their emotions regarding almost all aspects of everyday life. Capturing affective content from these context-dependent and subjective texts is a challenging task. We propose an intelligent probabilistic model for textual emotion recognition in multidimensional space (TERMS) that captures the subjective emotional boundaries and contextual information embedded in a text for robust emotion recognition. Discrete label assignment is implausible for such data; therefore, the model employs a soft assignment by mapping varying emotional perceptions into a multidimensional space and generating them as distributions via a Gaussian mixture model (GMM). To strengthen the emotion distributions, TERMS integrates a probabilistic emotion classifier that captures the contextual and linguistic information from texts. The integration of these aspects, the context-aware emotion classifier and the learned GMM parameters, provides complete coverage for accurate emotion recognition. Large-scale experimentation shows that, compared to baseline and state-of-the-art models, TERMS achieved better performance in terms of distinguishability, prediction, and classification. In addition, TERMS provides insights on the emotion classes, the annotation patterns, and the model's application in different scenarios.
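The soft assignment at the core of TERMS can be sketched as computing posterior responsibilities of Gaussian components in a 2-D valence-arousal space. The component means, covariances, and weights below are hypothetical stand-ins for the learned GMM, not the paper's parameters:

```python
import numpy as np

def gmm_soft_assign(point, means, covs, weights):
    """Posterior responsibility of each 2-D Gaussian component for one
    point: a soft (distributional) emotion assignment rather than a hard label."""
    dens = []
    for m, c, w in zip(means, covs, weights):
        diff = point - m
        norm = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(c)))
        dens.append(w * norm * np.exp(-0.5 * diff @ np.linalg.inv(c) @ diff))
    dens = np.array(dens)
    return dens / dens.sum()

# Two hypothetical emotion clusters in (valence, arousal) space
means = [np.array([0.8, 0.6]), np.array([-0.7, 0.5])]   # e.g. "joy", "anger"
covs = [np.eye(2) * 0.1, np.eye(2) * 0.1]
weights = [0.5, 0.5]
resp = gmm_soft_assign(np.array([0.6, 0.5]), means, covs, weights)
print(resp.argmax())  # → 0
```

Each text thus receives a distribution over emotion clusters, which the context-aware probabilistic classifier then refines.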
|