51. Huang K, Tian Z, Zhang Q, Yang H, Wen S, Feng J, Tang W, Wang Q, Feng L. Reduced eye gaze fixation during emotion recognition among patients with temporal lobe epilepsy. J Neurol 2024;271:2560-2572. [PMID: 38289536] [DOI: 10.1007/s00415-024-12202-w]
Abstract
OBJECTIVES To investigate facial scan patterns during emotion recognition (ER) using eye tracking (ET) during a dynamic facial expression task and The Awareness of Social Inference Test (TASIT), and to identify ET indicators that accurately depict the ER process as a supplement to existing ER assessment tools. METHODS Ninety-six patients with TLE and 88 healthy controls (HCs) were recruited. All participants watched the dynamic facial expression task and the TASIT with synchronized eye movement recording and identified the emotion displayed (anger, disgust, happiness, or sadness). ER accuracy was recorded, and the first fixation time, first fixation duration, dwell time, and fixation count were analyzed. RESULTS TLE patients exhibited ER impairment, especially for disgust (Z = -3.391; p = 0.001) and sadness (Z = -3.145; p = 0.002). TLE patients fixated less on the face, as evidenced by a reduced fixation count on the face (Z = -2.549; p = 0.011) and a significantly decreased fixation count rate (Z = -1.993; p = 0.046). During the dynamic facial expression task, TLE patients focused less on the eyes, as evidenced by decreased first fixation duration (Z = -4.322; p < 0.001), dwell time (Z = -4.083; p < 0.001), and fixation count (Z = -3.699; p < 0.001) for the eyes. CONCLUSION TLE patients had ER impairment, especially for negative emotions, which may be attributable to reduced fixation on the eyes during ER; their increased fixation on the mouth could be a compensatory effect to improve ER performance. Eye-tracking technology can provide process indicators of ER and is a valuable supplement to traditional ER assessment tasks.
52. Houssein EH, Hammad A, Emam MM, Ali AA. An enhanced Coati Optimization Algorithm for global optimization and feature selection in EEG emotion recognition. Comput Biol Med 2024;173:108329. [PMID: 38513391] [DOI: 10.1016/j.compbiomed.2024.108329]
Abstract
Emotion recognition based on electroencephalography (EEG) signals has garnered significant attention across diverse domains including healthcare, education, information sharing, and gaming. Despite its potential, the absence of a standardized feature set poses a challenge in efficiently classifying various emotions. Addressing the issue of high dimensionality, this paper introduces an advanced variant of the Coati Optimization Algorithm (COA), called eCOA, for global optimization and for selecting the best subset of EEG features for emotion recognition. Like other metaheuristic methods, COA suffers from local optima and imbalanced exploitation. The proposed eCOA combines the COA and RUNge Kutta Optimizer (RUN) algorithms: the Scale Factor (SF) and Enhanced Solution Quality (ESQ) mechanisms from RUN are applied to resolve these shortcomings of COA. The eCOA algorithm has been extensively evaluated using the CEC'22 test suite and two EEG emotion recognition datasets, DEAP and DREAMER. Furthermore, eCOA is applied to binary and multi-class classification of emotions in the dimensions of valence, arousal, and dominance using a multi-layer perceptron neural network (MLPNN). The experimental results revealed that eCOA has more powerful search capabilities than the original COA and seven well-known counterpart methods in terms of statistical, convergence, and diversity measures. Furthermore, eCOA can efficiently support feature selection to find the best EEG features, maximizing performance on four quadrant-based emotion classification problems compared with its counterparts. The suggested method obtains classification accuracies of 85.17% and 95.21% in the binary classification of low versus high arousal on two public datasets, DEAP and DREAMER, respectively, which are 5.58% and 8.98% higher than existing approaches on the same datasets.
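The wrapper-style feature-selection objective that metaheuristics like eCOA optimize can be sketched compactly. The sketch below is illustrative only: it uses a nearest-centroid holdout error, synthetic features, and plain random search in place of eCOA's coati/Runge-Kutta update rules; none of the names or parameter values come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_centroid_error(X, y, mask):
    # Holdout error of a nearest-centroid classifier restricted to the
    # features selected by the binary mask (a cheap wrapper evaluation).
    if mask.sum() == 0:
        return 1.0  # selecting no features is maximally unfit
    Xs = X[:, mask.astype(bool)]
    half = len(y) // 2
    Xtr, ytr, Xte, yte = Xs[:half], y[:half], Xs[half:], y[half:]
    classes = np.unique(ytr)
    centroids = np.stack([Xtr[ytr == c].mean(axis=0) for c in classes])
    pred = classes[np.argmin(((Xte[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)]
    return float((pred != yte).mean())

def fitness(X, y, mask, alpha=0.99):
    # Common wrapper objective: weighted sum of classification error and
    # selected-feature ratio (lower is better).
    return alpha * nearest_centroid_error(X, y, mask) + (1 - alpha) * mask.mean()

# Toy EEG-like features: only feature 0 carries class information.
y = np.repeat([0, 1], 50)
X = rng.normal(size=(100, 8))
X[:, 0] += 3.0 * y
perm = rng.permutation(100)
X, y = X[perm], y[perm]

candidates = rng.integers(0, 2, size=(200, 8))  # random search stands in for eCOA
best = min(candidates, key=lambda m: fitness(X, y, m))
```

Any population-based optimizer, eCOA included, would iterate on such masks; the informative feature should survive selection because masks containing it score a much lower error term.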
53. Vandervert L, Manto M, Adamaszek M, Ferrari C, Ciricugno A, Cattaneo Z. The Evolution of the Optimization of Cognitive and Social Functions in the Cerebellum and Thereby the Rise of Homo sapiens Through Cumulative Culture. Cerebellum 2024. [PMID: 38676835] [DOI: 10.1007/s12311-024-01692-z]
Abstract
The evolution of the prominent role of the cerebellum in the development of composite tools and cumulative culture, leading to the rise of Homo sapiens, is examined. Following Stout and Hecht's (2017) detailed description of stone-tool making, eight key repetitive involvements of the cerebellum are highlighted: (1) optimization of cognitive-social control, (2) prediction, (3) focus of attention, (4) automaticity of smoothness, appropriateness, and speed of movement and cognition, (5) refined movement and social cognition, (6) learning models from extended practice, (7) learning models of the Theory of Mind (ToM) of teachers, and (8) predominance in the acquisition of novel behavior and cognition that accrues from the blending of cerebellar models sent to conscious working memory in the cerebral cortex. Within this context, the evolution of generalization and blending of cerebellar internal models toward the optimization of social-cognitive learning is described. It is concluded that (1) repetition of movement and social cognition involving the optimization of internal models in the cerebellum during stone-tool making was the key selection factor for social-cognitive and technological advancement, (2) observational learning during stone-tool making was the basis for both technological and social-cognitive evolution and, through an optimizing positive feedback loop between the cerebellum and cerebral cortex, for the development of cumulative culture, and (3) the generalization and blending of cerebellar internal models related to the unconscious forward control of the optimization of imagined future states in working memory was the most important brain adaptation leading to intertwined advances in stone-tool technology, the cognitive-social processes behind cumulative culture (including the emergence of language and art), and, thereby, the rise of Homo sapiens.
54. Yang W, Xu K. [Research progress on emotion recognition by combining virtual reality environment and electroencephalogram signals]. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi (Journal of Biomedical Engineering) 2024;41:389-397. [PMID: 38686422] [PMCID: PMC11058485] [DOI: 10.7507/1001-5515.202310045]
Abstract
Emotion recognition refers to the process of determining an individual's current emotional state by analyzing signals such as voice, facial expressions, and physiological indicators. Using electroencephalogram (EEG) signals and virtual reality (VR) technology for emotion recognition research helps to better understand human emotional changes, enabling applications in areas such as psychological therapy, education, and training to enhance quality of life. However, there is a lack of comprehensive review literature summarizing combined research on EEG signals and VR environments for emotion recognition. This paper therefore summarizes and synthesizes relevant research from the past five years. First, it introduces the relevant theories of VR and of EEG-based emotion recognition. Second, it analyzes emotion induction, feature extraction, and classification methods for emotion recognition using EEG signals within VR environments. The article concludes by summarizing application directions and providing an outlook on future development trends, aiming to serve as a reference for researchers in related fields.
55. Guo Z, Yang M, Lin L, Li J, Zhang S, He Q, Gao J, Meng H, Chen X, Tao Y, Yang C. E-MFNN: an emotion-multimodal fusion neural network framework for emotion recognition. PeerJ Comput Sci 2024;10:e1977. [PMID: 38660191] [PMCID: PMC11041955] [DOI: 10.7717/peerj-cs.1977]
Abstract
Emotional recognition is a pivotal research domain in computer and cognitive science. Recent advancements have led to various emotion recognition methods, leveraging data from diverse sources like speech, facial expressions, electroencephalogram (EEG), electrocardiogram, and eye tracking (ET). This article introduces a novel emotion recognition framework, primarily targeting the analysis of users' psychological reactions and stimuli. It is important to note that the stimuli eliciting emotional responses are as critical as the responses themselves. Hence, our approach synergizes stimulus data with physical and physiological signals, pioneering a multimodal method for emotional cognition. Our proposed framework unites stimulus source data with physiological signals, aiming to enhance the accuracy and robustness of emotion recognition through data integration. We initiated an emotional cognition experiment to gather EEG and ET data alongside recording emotional responses. Building on this, we developed the Emotion-Multimodal Fusion Neural Network (E-MFNN), optimized for multimodal data fusion to process both stimulus and physiological data. We conducted extensive comparisons between our framework's outcomes and those from existing models, also assessing various algorithmic approaches within our framework. This comparison underscores our framework's efficacy in multimodal emotion recognition. The source code is publicly available at https://figshare.com/s/8833d837871c78542b29.
56. Fares-Otero NE, Halligan SL, Vieta E, Heilbronner U. Pupil size as a potential marker of emotion processing in child maltreatment. J Affect Disord 2024;351:392-395. [PMID: 38290582] [DOI: 10.1016/j.jad.2024.01.242]
57. Scarth M, Hauger LE, Thorsby PM, Leknes S, Hullstein IR, Westlye LT, Bjørnebekk A. Supraphysiological testosterone levels from anabolic steroid use and reduced sensitivity to negative facial expressions in men. Psychopharmacology (Berl) 2024;241:701-715. [PMID: 37993638] [DOI: 10.1007/s00213-023-06497-2]
Abstract
RATIONALE Anabolic-androgenic steroids (AAS) are used to improve physical performance and appearance, but have been associated with deficits in social cognitive functioning. Approximately 30% of people who use AAS develop a dependence, increasing the risk for undesired effects. OBJECTIVES To assess the relationship between AAS use (current/previous), AAS dependence, and the ability to recognize emotional facial expressions, and investigate the potential mediating role of hormone levels. METHODS In total 156 male weightlifters, including those with current (n = 45) or previous (n = 34) AAS use and never-using controls (n = 77), completed a facial Emotion Recognition Task (ERT). Participants were presented with faces expressing one out of six emotions (sadness, happiness, fear, anger, disgust, and surprise) and were instructed to indicate which of the six emotions each face displayed. ERT accuracy and response time were recorded and evaluated for association with AAS use status, AAS dependence, and serum reproductive hormone levels. Mediation models were used to evaluate the mediating role of androgens in the relationship between AAS use and ERT performance. RESULTS Compared to never-using controls, men currently using AAS exhibited lower recognition accuracy for facial emotional expressions, particularly anger (Cohen's d = -0.57, pFDR = 0.03) and disgust (d = -0.51, pFDR = 0.05). Those with AAS dependence (n = 47) demonstrated worse recognition of fear relative to men without dependence (d = 0.58, p = 0.03). Recognition of disgust was negatively correlated with serum free testosterone index (FTI); however, FTI did not significantly mediate the association between AAS use and recognition of disgust. CONCLUSIONS Our findings demonstrate impaired facial emotion recognition among men currently using AAS compared to controls. While further studies are needed to investigate potential mechanisms, our analysis did not support a simple mediation effect of serum FTI.
58. Ju X, Li M, Tian W, Hu D. EEG-based emotion recognition using a temporal-difference minimizing neural network. Cogn Neurodyn 2024;18:405-416. [PMID: 38699602] [PMCID: PMC11061074] [DOI: 10.1007/s11571-023-10004-w]
Abstract
Electroencephalogram (EEG) emotion recognition plays an important role in human-computer interaction. An increasing number of algorithms for emotion recognition have been proposed recently. However, it is still challenging to make efficient use of emotional activity knowledge. In this paper, based on prior knowledge that emotion varies slowly across time, we propose a temporal-difference minimizing neural network (TDMNN) for EEG emotion recognition. We use maximum mean discrepancy (MMD) technology to evaluate the difference in EEG features across time and minimize the difference by a multibranch convolutional recurrent network. State-of-the-art performances are achieved using the proposed method on the SEED, SEED-IV, DEAP and DREAMER datasets, demonstrating the effectiveness of including prior knowledge in EEG emotion recognition.
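The core quantity TDMNN minimizes — the maximum mean discrepancy (MMD) between EEG features at different times — has a short closed-form estimate. A minimal numpy sketch (the RBF bandwidth, window sizes, and toy data are assumptions for illustration; the paper's multibranch convolutional recurrent network is not reproduced):

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    # Biased estimate of squared maximum mean discrepancy with an RBF kernel.
    # X, Y: (n, d) feature matrices, e.g. EEG features from two adjacent
    # time windows of the same emotional trial.
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return float(k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean())

rng = np.random.default_rng(1)
win_a = rng.normal(0.0, 1.0, size=(64, 4))   # window t
win_b = rng.normal(0.0, 1.0, size=(64, 4))   # window t+1, same distribution
win_c = rng.normal(2.0, 1.0, size=(64, 4))   # a window whose features drifted
```

Under the prior that emotion varies slowly, a network can be trained with this MMD between adjacent windows added to the loss, pushing features from nearby times toward the same distribution.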
59. Hu L, Tan C, Xu J, Qiao R, Hu Y, Tian Y. Decoding emotion with phase-amplitude fusion features of EEG functional connectivity network. Neural Netw 2024;172:106148. [PMID: 38309138] [DOI: 10.1016/j.neunet.2024.106148]
Abstract
Decoding emotional neural representations from the electroencephalographic (EEG)-based functional connectivity network (FCN) is of great scientific importance for uncovering emotional cognition mechanisms and developing harmonious human-computer interactions. However, existing methods mainly rely on phase-based FCN measures (e.g., phase locking value [PLV]) to capture dynamic interactions between brain oscillations in emotional states, which fail to reflect the energy fluctuation of cortical oscillations over time. In this study, we initially examined the efficacy of amplitude-based functional networks (e.g., amplitude envelope correlation [AEC]) in representing emotional states. Subsequently, we proposed an efficient phase-amplitude fusion framework (PAF) to fuse PLV and AEC and used common spatial pattern (CSP) to extract fused spatial topological features from PAF for multi-class emotion recognition. We conducted extensive experiments on the DEAP and MAHNOB-HCI datasets. The results showed that: (1) AEC-derived discriminative spatial network topological features possess the ability to characterize emotional states, and the differential network patterns of AEC reflect dynamic interactions in brain regions associated with emotional cognition. (2) The proposed fusion features outperformed other state-of-the-art methods in terms of classification accuracy for both datasets. Moreover, the spatial filter learned from PAF is separable and interpretable, enabling a description of affective activation patterns from both phase and amplitude perspectives.
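The two connectivity measures fused here, PLV and AEC, are both derived from the analytic signal of band-limited channels. A minimal sketch under that assumption (the FFT construction mirrors the standard Hilbert-transform approach; the CSP stage and real EEG preprocessing are omitted, and the test signals are synthetic):

```python
import numpy as np

def analytic_signal(x):
    # FFT-based analytic signal (same construction as scipy.signal.hilbert).
    n = x.shape[-1]
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(np.fft.fft(x) * h)

def plv(x, y):
    # Phase locking value: consistency of the instantaneous phase difference.
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return float(np.abs(np.exp(1j * dphi).mean()))

def aec(x, y):
    # Amplitude envelope correlation: Pearson r of the two envelopes.
    ex, ey = np.abs(analytic_signal(x)), np.abs(analytic_signal(y))
    return float(np.corrcoef(ex, ey)[0, 1])

# Two channels sharing a 10 Hz rhythm (constant phase lag) and a common
# 1 Hz amplitude envelope, so both PLV and AEC should be close to 1.
t = np.linspace(0.0, 1.0, 512, endpoint=False)
env = 1.0 + 0.5 * np.sin(2 * np.pi * 1 * t)
ch1 = env * np.sin(2 * np.pi * 10 * t)
ch2 = env * np.sin(2 * np.pi * 10 * t + 0.7)
```

PLV captures phase coupling and AEC captures co-fluctuation of oscillation energy; a fusion framework like PAF combines both views of the same channel pair before spatial filtering.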
60. Pumphrey JD, Ramani S, Islam T, Berard JA, Seegobin M, Lymer JM, Freedman MS, Wang J, Walker LAS. Assessing multimodal emotion recognition in multiple sclerosis with a clinically accessible measure. Mult Scler Relat Disord 2024;86:105603. [PMID: 38583368] [DOI: 10.1016/j.msard.2024.105603]
Abstract
BACKGROUND Multiple sclerosis (MS) negatively impacts cognition and has been associated with deficits in social cognition, including emotion recognition. There is a lack of research examining emotion recognition from multiple modalities in MS. The present study aimed to employ a clinically available measure to assess multimodal emotion recognition abilities among individuals with MS. METHOD Thirty-one people with MS and 21 control participants completed the Advanced Clinical Solutions Social Perception Subtest (ACS-SP), BICAMS, and measures of premorbid functioning, mood, and fatigue. ANCOVAs examined group differences in all outcomes while controlling for education. Correlational analyses examined potential correlates of emotion recognition in both groups. RESULTS The MS group performed significantly worse on the ACS-SP than the control group, F(1, 49) = 5.32, p = .025. Significant relationships between emotion recognition and cognitive functions were found only in the MS group, namely for information processing speed (r = 0.59, p < .001), verbal learning (r = 0.52, p = .003) and memory (r = 0.65, p < .001), and visuospatial learning (r = 0.62, p < .001) and memory (r = 0.52, p = .003). Emotion recognition did not correlate with premorbid functioning, mood, or fatigue in either group. CONCLUSIONS This study was the first to employ the ACS-SP to assess emotion recognition in MS. The results suggest that emotion recognition is impacted in MS and is related to other cognitive processes, such as information processing speed. The results provide information for clinicians amidst calls to include social cognition measures in standard MS assessments.
61. Cheng C, Liu W, Fan Z, Feng L, Jia Z. A novel transformer autoencoder for multi-modal emotion recognition with incomplete data. Neural Netw 2024;172:106111. [PMID: 38237444] [DOI: 10.1016/j.neunet.2024.106111]
Abstract
Multi-modal signals have become essential data for emotion recognition since they can represent emotions more comprehensively. However, in real-world environments, it is often impossible to acquire complete multi-modal data, and missing modalities cause severe performance degradation in emotion recognition. This paper therefore represents the first attempt to use a transformer-based architecture to fill in modality-incomplete data from partially observed data for multi-modal emotion recognition (MER). Concretely, this paper proposes a novel unified model called the transformer autoencoder (TAE), comprising a modality-specific hybrid transformer encoder, an inter-modality transformer encoder, and a convolutional decoder. The modality-specific hybrid transformer encoder bridges a convolutional encoder and a transformer encoder, allowing the encoder to learn local and global context information within each modality. The inter-modality transformer encoder builds and aligns global cross-modal correlations and models long-range contextual information across modalities. The convolutional decoder decodes the encoded features to produce more precise recognition. In addition, a regularization term is introduced into the convolutional decoder to force the decoder to fully leverage both complete and incomplete data for emotion recognition with missing data. Accuracies of 96.33%, 95.64%, and 92.69% are attained on the available data of the DEAP and SEED-IV datasets, and accuracies of 93.25%, 92.23%, and 81.76% are obtained on the missing data. In particular, the model gains a 5.61% advantage with 70% missing data, demonstrating that it outperforms some state-of-the-art approaches in incomplete multi-modal learning.
62. McManimen SL, Hay J, Long C, Bryan CJ, Aase DM. Suicide-related cognitions and emotional bias performance in a community sample. J Affect Disord 2024;349:197-200. [PMID: 38190852] [DOI: 10.1016/j.jad.2024.01.005]
Abstract
BACKGROUND Suicide is theorized to be connected to social interactions and feelings of belongingness. Those with suicide-related cognitions (SRCs) demonstrate attentional bias toward negative or suicide-related words, which can lead to increased feelings of rejection or alienation. As social interactions employ both verbal and nonverbal cues, there is a gap in understanding how perception of emotional expressions can contribute to the development or exacerbation of suicidal ideation (SI). METHODS The current sample (N = 114, 60.5% female, 74.6% white) completed the Suicide Cognitions Scale-Revised (SCS-R) and Patient Health Questionnaire (PHQ-9) to assess SRCs and depression severity. The Emotional Bias Task (EBT) was used to assess emotional response latency. RESULTS Multiple regression analyses on EBT results showed that endorsement of SRCs and depression severity were not associated with any particular emotional response bias. However, presence of SRCs was associated with longer latencies to identify ambiguous emotional expressions, even when controlling for depressive symptoms and age. LIMITATIONS Measures were self-completed online. Relative homogeneity of the sample and the cross-sectional design limit interpretation of the results. CONCLUSIONS Those with more severe SRCs take longer to recognize positive nonverbal cues. Irregular processing of positive emotional stimuli combined with bias toward negative verbal cues could worsen feelings of rejection or alienation in social interactions, thereby increasing the risk of developing SI. This suggests that interventions focusing on allocating attentional resources to positive social cues may benefit those with SRCs by reducing severity and suicide risk.
63. Gandia-Ferrero MT, Adrián-Ventura J, Cháfer-Pericás C, Alvarez-Sanchez L, Ferrer-Cairols I, Martinez-Sanchis B, Torres-Espallardo I, Baquero-Toledo M, Marti-Bonmati L. Relationship between neuroimaging and emotion recognition in mild cognitive impairment patients. Behav Brain Res 2024;461:114844. [PMID: 38176615] [DOI: 10.1016/j.bbr.2023.114844]
Abstract
OBJECTIVE Dementia is a major public health problem with high needs for early detection, efficient treatment, and prognosis evaluation. Social cognition impairment could be an early indicator of dementia and can be assessed with emotion recognition tests. The purpose of this study is to investigate the link between different brain imaging modalities and cognitive status in mild cognitive impairment (MCI) patients, with the goal of uncovering potential physiopathological mechanisms based on social cognition performance. METHODS The relationship between the Reading the Mind in the Eyes Test (RMET) and clinical and biochemical variables ([18F]FDG PET-CT and anatomical MR parameters, neuropsychological evaluation, and CSF biomarkers) was studied in 166 patients with MCI using a correlational approach. RESULTS The RMET correlated with neuropsychological variables, as well as with structural and functional brain parameters obtained from MR and FDG-PET imaging. However, no significant correlations between the RMET and CSF biomarkers were found. DISCUSSION Different neuroimaging parameters were related to an emotion recognition task in MCI. This analysis identified potential minimally invasive biomarkers that provide insight into the physiopathological mechanisms of MCI.
64. Buisman RSM, Compier-de Block LHCG, Bakermans-Kranenburg MJ, Pittner K, van den Berg LJM, Tollenaar MS, Elzinga BM, Voorthuis A, Linting M, Alink LRA. The role of emotion recognition in the intergenerational transmission of child maltreatment: A multigenerational family study. Child Abuse Negl 2024;149:106699. [PMID: 38417291] [DOI: 10.1016/j.chiabu.2024.106699]
Abstract
BACKGROUND Understanding how child maltreatment is passed down from one generation to the next is crucial for the development of intervention and prevention strategies that may break the cycle of child maltreatment. Changes in emotion recognition due to childhood maltreatment have repeatedly been found and may underlie the intergenerational transmission of child maltreatment. OBJECTIVE In this study we therefore examined whether the ability to recognize emotions plays a role in the intergenerational transmission of child abuse and neglect. PARTICIPANTS AND SETTING A total of 250 parents (104 males, 146 females) who participated in a three-generation family study were included. METHOD Participants completed an emotion recognition task in which they were presented with series of photographs depicting the unfolding of facial expressions from neutrality to the peak emotions anger, fear, happiness, and sadness. Multi-informant measures were used to assess experienced and perpetrated child maltreatment. RESULTS A history of abuse, but not neglect, predicted a shorter reaction time to identify fear and anger. In addition, parents who showed higher levels of neglectful behavior made more errors in identifying fear, whereas parents who showed higher levels of abusive behavior made more errors in identifying anger. Emotion recognition did not mediate the association between experienced and perpetrated child maltreatment. CONCLUSIONS Findings highlight the importance of distinguishing between abuse and neglect when investigating the precursors and sequelae of child maltreatment. In addition, the effectiveness of interventions that aim to break the cycle of abuse and neglect could be improved by better addressing the specific emotion-processing problems of abusive and neglectful parents.
65. Zhang R, Guo H, Xu Z, Hu Y, Chen M, Zhang L. MGFKD: A semi-supervised multi-source domain adaptation algorithm for cross-subject EEG emotion recognition. Brain Res Bull 2024;208:110901. [PMID: 38355058] [DOI: 10.1016/j.brainresbull.2024.110901]
Abstract
Currently, most models in cross-subject EEG emotion recognition rarely consider the negative transfer problem. To address this, this paper proposes a semi-supervised domain adaptation algorithm based on a few labeled samples of the target subject, called multi-domain geodesic flow kernel dynamic distribution alignment (MGFKD). It consists of three modules: 1) GFK common feature extractor: projects the feature distributions of source and target subjects onto the Grassmann manifold and obtains latent common features of the two distributions through the GFK method. 2) Source domain selector: obtains pseudo-labels of the target subject through a weak classifier and finds "golden source subjects" using the few known labels of the target subject. 3) Label corrector: uses a dynamic distribution balance strategy to correct the pseudo-labels of the target subject. We conducted comparison experiments on the SEED and SEED-IV datasets, and the results show that MGFKD outperforms unsupervised and semi-supervised domain adaptation algorithms, achieving average accuracies of 87.51±7.68% and 68.79±8.25% on SEED and SEED-IV, respectively, with only one labeled sample per video for the target subject. When the number of source domains is set to 6 and the number of known labels to 5, accuracies increase to 90.20±7.57% and 69.99±7.38%, respectively. These results show that the proposed algorithm can efficiently improve cross-subject EEG emotion classification performance. Because it needs only a small number of labeled samples from new subjects, it has strong application value for future EEG-based emotion recognition applications.
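The source-domain-selector step (module 2) can be illustrated in isolation: score each source subject's classifier on the target's few labeled trials and keep the top scorers. This is a hedged sketch — the nearest-centroid "weak classifier", the toy subjects, and the shift parameter are all illustrative assumptions, and the GFK projection and label corrector are omitted:

```python
import numpy as np

rng = np.random.default_rng(2)

def centroid_fit(X, y):
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def centroid_predict(model, X):
    classes, cents = model
    d = ((X[:, None, :] - cents[None]) ** 2).sum(-1)
    return classes[np.argmin(d, axis=1)]

def select_golden_sources(sources, X_few, y_few, k=2):
    # Score each source subject's classifier on the target's few labeled
    # trials; keep the top-k scorers as "golden source subjects".
    scores = np.array([
        (centroid_predict(centroid_fit(Xs, ys), X_few) == y_few).mean()
        for Xs, ys in sources
    ])
    return np.argsort(scores)[::-1][:k], scores

def make_subject(shift):
    # Toy subject: feature 0 separates the two emotion classes; `shift`
    # models inter-subject distribution shift.
    y = np.repeat([0, 1], 40)
    X = rng.normal(size=(80, 5))
    X[:, 0] += 2.5 * y + shift
    return X, y

sources = [make_subject(0.0), make_subject(0.2), make_subject(5.0)]
X_t, y_t = make_subject(0.0)                   # target subject
X_few = np.vstack([X_t[:5], X_t[40:45]])       # few labeled target trials
y_few = np.concatenate([y_t[:5], y_t[40:45]])
golden, scores = select_golden_sources(sources, X_few, y_few)
```

The heavily shifted subject scores poorly on the labeled target trials and is excluded, which is exactly how discarding mismatched sources mitigates negative transfer.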
66. Díaz-Vázquez B, López-Romero L, Romero E. Emotion Recognition Deficits in Children and Adolescents with Psychopathic Traits: A Systematic Review. Clin Child Fam Psychol Rev 2024;27:165-219. [PMID: 38240937] [PMCID: PMC10920463] [DOI: 10.1007/s10567-023-00466-z]
Abstract
Children and adolescents with psychopathic traits show deficits in emotion recognition, but there is no consensus as to how far these deficits generalize or which variables moderate the process. The present systematic review brings together the existing scientific corpus on the subject and attempts to answer these questions through an exhaustive review of the literature following the PRISMA 2020 statement. Results confirmed the existence of pervasive deficits in emotion recognition and, more specifically, in distress emotions (e.g., fear), a deficit that transcends all modalities of emotion presentation and all emotional stimuli used. Moreover, results supported the key role of attention to areas that provide relevant emotional cues (e.g., the eye region) and pointed to differences according to the presence of disruptive behavior and the psychopathy dimension examined. This evidence could advance current knowledge on developmental models of psychopathic traits. Yet homogenization of research conditions in this area should be prioritized in order to draw more robust and generalizable conclusions.
|
67
|
Zhang Z, Peng Y, Jiang Y, Chen T. The pictorial set of Emotional Social Interactive Scenarios between Chinese Adults (ESISCA): Development and validation. Behav Res Methods 2024; 56:2581-2594. [PMID: 37528294 DOI: 10.3758/s13428-023-02168-4] [Accepted: 06/12/2023] [Indexed: 08/03/2023]
Abstract
Affective picture databases with a single facial expression or body posture per image have been widely used to investigate emotion. To date, however, there has been no standardized database of stimuli involving multiple emotional signals in social interactive scenarios. The current study therefore developed a pictorial set comprising 274 images depicting interactive scenarios between two Chinese adults conveying happiness, anger, sadness, fear, disgust, or a neutral state. Valence and arousal ratings of the scenes and categorizations of the scene and facial emotions are provided. Analyses of data collected from 70 undergraduate students suggested high reliability of the valence and arousal ratings and high judgmental agreement in categorizing the scene and facial emotions. The findings suggest that the present dataset is well constructed and could be useful for future studies investigating emotion recognition or empathy in social interactions in both healthy and clinical (e.g., ASD) populations.
|
68
|
Xie S, Lei L, Sun J, Xu J. [Research on emotion recognition method based on IWOA-ELM algorithm for electroencephalogram]. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi (Journal of Biomedical Engineering) 2024; 41:1-8. [PMID: 38403598 PMCID: PMC10894732 DOI: 10.7507/1001-5515.202303010] [Indexed: 02/27/2024]
Abstract
Emotion is a crucial physiological attribute in humans, and emotion recognition technology can significantly assist individuals in self-awareness. To address the challenge of significant differences in electroencephalogram (EEG) signals among subjects, we introduce a novel mechanism into the traditional whale optimization algorithm (WOA) to expedite its optimization and convergence. The improved whale optimization algorithm (IWOA) was then applied to search for the optimal training configuration of the extreme learning machine (ELM) model, encompassing the best feature set, training parameters, and EEG channels. Testing 24 common EEG emotion features, we found that the optimal features exhibited a certain level of subject specificity while also showing some commonality across subjects. The proposed method achieved an average recognition accuracy of 92.19% in EEG emotion recognition, significantly reducing the manual tuning workload and offering higher accuracy with shorter training times than the control method. It outperformed existing methods, providing superior performance and a novel perspective for decoding EEG signals, thereby contributing to the field of emotion research from EEG signals.
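The abstract does not describe the ELM itself; for background, a minimal extreme learning machine, the base classifier that IWOA would tune, can be sketched as follows. The toy two-class data, hidden-layer size, and tanh activation are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(2)

def elm_train(X, y_onehot, n_hidden=30):
    """Extreme learning machine: random, untrained input weights plus
    closed-form least-squares output weights via the pseudo-inverse
    of the hidden-layer activation matrix."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ y_onehot
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return (np.tanh(X @ W + b) @ beta).argmax(axis=1)

# Toy two-class problem standing in for two emotion classes.
X0 = rng.normal(-1.0, 0.5, size=(40, 8))
X1 = rng.normal(+1.0, 0.5, size=(40, 8))
X = np.vstack([X0, X1])
y = np.array([0] * 40 + [1] * 40)
Y = np.eye(2)[y]

model = elm_train(X, Y, n_hidden=30)
acc = (elm_predict(model, X) == y).mean()
print(acc)
```

Because only the output weights are fitted, and in closed form, training is very fast, which is why a metaheuristic such as IWOA can afford to evaluate many candidate feature sets and channel subsets.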
|
69
|
Israelashvili J, Dijk C, Fischer AH. Social anxiety is associated with personal distress and disrupted recognition of negative emotions. Heliyon 2024; 10:e24587. [PMID: 38317896 PMCID: PMC10839860 DOI: 10.1016/j.heliyon.2024.e24587] [Received: 02/21/2023] [Revised: 11/29/2023] [Accepted: 01/10/2024] [Indexed: 02/07/2024] Open
Abstract
Past research investigating the relation between social anxiety (SA), empathy, and emotion recognition is marked by conceptual and methodological issues. In the present study, we aim to overcome these limitations by examining whether individuals with high (n = 40) vs. low (n = 43) social anxiety differ across two facets of empathy (empathic concern and personal distress) and whether this is related to their recognition of emotions. We employed a naturalistic emotion recognition paradigm in which participants watched short videos of individuals (targets) sharing authentic emotional experiences. After each video, we measured participants' self-reported empathic concern and distress, as well as their ability to recognize the emotions expressed by the targets. Our results show that individuals with high social anxiety recognized the targets' emotions less accurately. Furthermore, highly socially anxious individuals reported more personal distress than their low-anxiety counterparts, whereas no significant difference was found for empathic concern. The findings suggest that reduced recognition of emotions among socially anxious individuals is better explained by the negative effects of social stress than by a general deficit in empathy.
|
70
|
Sun W, Liu Y, Li S, Tian J, Wang F, Liu D. Research on driver's anger recognition method based on multimodal data fusion. Traffic Inj Prev 2024; 25:354-363. [PMID: 38346170 DOI: 10.1080/15389588.2023.2297658] [Received: 09/12/2023] [Accepted: 12/18/2023] [Indexed: 03/23/2024]
Abstract
OBJECTIVES This paper addresses the challenge of low accuracy in single-modal driver anger recognition by introducing a multimodal recognition model. The primary objective is to develop a multimodal fusion method for identifying driver anger based on electrocardiographic (ECG) signals and driving behavior signals. METHODS Emotion-inducing experiments were performed on a driving simulator to capture ECG signals and driving behavior signals from drivers in both angry and calm moods. Characteristic relationships were analyzed and features extracted from the ECG and driving behavior signals related to driving anger. Seventeen effective feature indicators were chosen to construct a driver anger dataset, and a binary classification model for recognizing driving anger was developed using the Support Vector Machine (SVM) algorithm. RESULTS Multimodal fusion demonstrated significant advantages over single-modal approaches in emotion recognition. The SVM-DS model using decision-level fusion had the highest accuracy, 84.75%. Compared with models based on unimodal ECG features, unimodal driving behavior features, and multimodal feature-level fusion, accuracy increased by 9.10%, 4.15%, and 0.8%, respectively. CONCLUSIONS The proposed multimodal recognition model, incorporating ECG and driving behavior signals, effectively identifies driving anger. The results provide theoretical and technical support for building driver anger recognition systems.
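If, as the model name suggests, "DS" refers to Dempster-Shafer evidence theory, decision-level fusion of the two modalities might look like the following toy sketch. The mass values and the use of an explicit "unknown" mass to encode per-classifier uncertainty are illustrative assumptions, not the paper's calibration:

```python
def ds_fuse(m1, m2):
    """Combine two basic probability assignments over
    {anger, calm, unknown} with Dempster's rule of combination."""
    a = m1["anger"] * m2["anger"] + m1["anger"] * m2["unknown"] + m1["unknown"] * m2["anger"]
    c = m1["calm"] * m2["calm"] + m1["calm"] * m2["unknown"] + m1["unknown"] * m2["calm"]
    u = m1["unknown"] * m2["unknown"]
    # Conflicting evidence (one says anger, the other calm) is discarded
    # and the remaining mass is renormalized.
    conflict = m1["anger"] * m2["calm"] + m1["calm"] * m2["anger"]
    norm = 1.0 - conflict
    return {k: v / norm for k, v in {"anger": a, "calm": c, "unknown": u}.items()}

# Each modality's classifier emits a mass function; 'unknown' could be
# derived from that classifier's validation error.
ecg = {"anger": 0.60, "calm": 0.25, "unknown": 0.15}
behav = {"anger": 0.55, "calm": 0.30, "unknown": 0.15}
fused = ds_fuse(ecg, behav)
label = max(("anger", "calm"), key=lambda k: fused[k])
print(label, round(fused["anger"], 3))
```

When both weak sources lean toward the same class, the fused belief in that class exceeds either individual belief, which is the usual motivation for decision-level fusion.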
|
71
|
Abu-Nowar H, Sait A, Al-Hadhrami T, Al-Sarem M, Noman Qasem S. SENSES-ASD: a social-emotional nurturing and skill enhancement system for autism spectrum disorder. PeerJ Comput Sci 2024; 10:e1792. [PMID: 38435572 PMCID: PMC10909167 DOI: 10.7717/peerj-cs.1792] [Received: 08/28/2023] [Accepted: 12/12/2023] [Indexed: 03/05/2024]
Abstract
This article introduces the Social-Emotional Nurturing and Skill Enhancement System (SENSES-ASD) as an innovative method for assisting individuals with autism spectrum disorder (ASD). Leveraging deep learning technologies, specifically convolutional neural networks (CNN), our approach promotes facial emotion recognition, enhancing social interactions and communication. The methodology involves the use of the Xception CNN model trained on the FER-2013 dataset. The designed system accepts a variety of media inputs, successfully classifying and predicting seven primary emotional states. Results show that our system achieved a peak accuracy rate of 71% on the training dataset and 66% on the validation dataset. The novelty of our work lies in the intricate combination of deep learning methods specifically tailored for high-functioning autistic adults and the development of a user interface that caters to their unique cognitive and sensory sensitivities. This offers a novel perspective on utilising technological advances for ASD intervention, especially in the domain of emotion recognition.
|
72
|
Burgio F, Menardi A, Benavides-Varela S, Danesin L, Giustiniani A, Van den Stock J, De Mitri R, Biundo R, Meneghello F, Antonini A, Vallesi A, de Gelder B, Semenza C. Facial emotion recognition in individuals with mild cognitive impairment: An exploratory study. Cogn Affect Behav Neurosci 2024:10.3758/s13415-024-01160-5. [PMID: 38316707 DOI: 10.3758/s13415-024-01160-5] [Accepted: 01/10/2024] [Indexed: 02/07/2024]
Abstract
Understanding facial emotions is fundamental to interacting in social environments and modifying behavior accordingly. Neurodegenerative processes can progressively transform affective responses and affect social competence. This exploratory study examined the neurocognitive correlates of face recognition in individuals with two etiologies of mild cognitive impairment (MCI prodromal to dementia, or MCI consequent to Parkinson's disease, PD-MCI). Performance on the identification and memorization of neutral and emotional facial expressions was assessed in 31 individuals with MCI, 26 with PD-MCI, and 30 healthy controls (HC). Individuals with MCI exhibited selective impairment in recognizing faces expressing fear, along with difficulties in remembering both neutral and emotional faces. Conversely, individuals with PD-MCI showed no differences from the HC in either emotion recognition or memory. In MCI, no significant association emerged between memory for facial expressions and cognitive difficulties. In PD-MCI, regression analyses showed significant associations between the emotional memory task and higher-level cognitive functions, suggesting the presence of compensatory mechanisms. In a subset of participants, voxel-based morphometry revealed that performance on the emotional tasks correlated with regional changes in gray matter volume: performance in matching negative expressions was predicted by increased volume in thalamic nuclei and atrophy in the right parietal cortex, areas engaged in face and emotional processing. Future studies should leverage neuroimaging data to determine whether differences in emotion recognition are mediated by pathology-specific atrophic patterns.
|
73
|
Çelebi M, Öztürk S, Kaplan K. An emotion recognition method based on EWT-3D-CNN-BiLSTM-GRU-AT model. Comput Biol Med 2024; 169:107954. [PMID: 38183705 DOI: 10.1016/j.compbiomed.2024.107954] [Received: 10/22/2023] [Revised: 12/28/2023] [Accepted: 01/01/2024] [Indexed: 01/08/2024]
Abstract
Emotion recognition has become a significant study area in recent years because of its use in brain-machine interaction (BMI). The robustness of emotion classification is one of the most basic problems in improving the quality of emotion recognition systems. Of the two main branches of approaches to this problem, one extracts features through manual engineering, and the other is the artificial intelligence approach, which infers features from EEG data. This study proposes a novel method based on the artificial intelligence approach that considers the characteristic behavior of EEG recordings: EEG is a noisy, non-stationary, and non-linear signal. Using the Empirical Wavelet Transform (EWT) signal decomposition method, the signal's frequency components are obtained. Then frequency-based, linear, and non-linear features are extracted. The resulting features are mapped onto a 2-D plane according to the positions of the EEG electrodes, and by stacking these 2-D images, 3-D images are constructed. In this way, the frequency content of multichannel EEG recordings is combined with its spatial and temporal relationships. Lastly, a 3-D deep learning framework was constructed, combining a convolutional neural network (CNN), bidirectional long short-term memory (BiLSTM), and a gated recurrent unit (GRU) with self-attention (AT). This model is named EWT-3D-CNN-BiLSTM-GRU-AT. The result is a framework in which handcrafted features are generated and then cascaded through state-of-the-art deep learning models. The framework is evaluated on the DEAP recordings using a person-independent approach. The experimental findings demonstrate that the developed model achieves classification accuracies of 90.57% and 90.59% for the valence and arousal axes, respectively, on the DEAP database. Compared with existing cutting-edge emotion classification models, the proposed framework exhibits superior results for classifying human emotions.
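The channel-to-image mapping step common to methods like this can be illustrated with a toy sketch. The 3x3 electrode grid, the four frequency bands, and the scalar per-channel features are assumptions for illustration, not the paper's montage or feature set:

```python
import numpy as np

# Hypothetical 3x3 layout for nine 10-20 electrodes; a denser montage
# could leave some grid cells empty (None).
GRID = [
    ["F3", "Fz", "F4"],
    ["C3", "Cz", "C4"],
    ["P3", "Pz", "P4"],
]
BANDS = ["theta", "alpha", "beta", "gamma"]

def features_to_3d(band_features):
    """band_features: dict band -> dict electrode -> scalar feature.
    Returns a (rows, cols, bands) tensor: each band is a 2-D 'image'
    laid out by electrode position, and the bands are stacked."""
    rows, cols = len(GRID), len(GRID[0])
    cube = np.zeros((rows, cols, len(BANDS)))
    for b, band in enumerate(BANDS):
        for r in range(rows):
            for c in range(cols):
                name = GRID[r][c]
                if name is not None:
                    cube[r, c, b] = band_features[band].get(name, 0.0)
    return cube

rng = np.random.default_rng(1)
feats = {band: {e: float(rng.normal()) for row in GRID for e in row}
         for band in BANDS}
cube = features_to_3d(feats)
print(cube.shape)
```

The resulting tensor preserves spatial neighborhoods (adjacent electrodes are adjacent pixels), which is what lets a 3-D CNN exploit both spatial and spectral structure.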
|
74
|
van Dijl TL, Aben HP, Synhaeve NE, de Waardt DA, Videler AC, Kop WJ. Alexithymia and facial emotion recognition in patients with functional neurological disorder. Clin Neurol Neurosurg 2024; 237:108128. [PMID: 38325039 DOI: 10.1016/j.clineuro.2024.108128] [Received: 10/25/2023] [Accepted: 01/18/2024] [Indexed: 02/09/2024]
Abstract
OBJECTIVES Patients with functional neurological disorder (FND) are known to have difficulties recognizing and processing emotions. Problems recognizing internal emotional states (alexithymia) are common in FND, but little is known about recognizing emotions expressed by other people. This study investigates whether patients with FND have higher levels of alexithymia and reduced facial emotion recognition compared to healthy controls. METHODS Patients with FND (n = 31, mean age = 42.7 [SD = 14.8] years, 54.8% women) were compared to healthy controls (n = 33, mean age = 45.1 [SD = 16.2] years, 63.6% women). The Bermond-Vorst Alexithymia Questionnaire (BVAQ) was used for the assessment of alexithymia and the Ekman 60 Faces Test (EFT) for facial emotion recognition. RESULTS Patients with FND had higher levels of alexithymia than healthy controls (BVAQ = 71.8 [SD = 19.8] versus 59.3 [SD = 20.3], p = .02, Cohen's d = 0.62). Facial emotion recognition did not significantly differ between FND patients and controls (EFT total score FND: 46.1 [SD = 5.9], controls: 47.5 [SD = 5.5], p = .34, Cohen's d = 0.24). Only recognition of surprise differed between patients and controls (FND: 8.4 [SD = 1.8], controls: 9.2 [SD = 1.0], p = .03, Cohen's d = 0.56). Higher levels of alexithymia were associated with poorer facial emotion recognition, but this relationship was not statistically significant (FND: β = -0.20, p = .28; controls: β = -0.03, p = .87). CONCLUSIONS The current data confirm prior observations that patients with FND have higher alexithymia levels than controls without FND. Difficulties recognizing emotions among patients with FND primarily involve recognition of internal emotional states rather than recognition of facially expressed emotions by others. These findings require replication in a larger and more diverse sample.
|
75
|
Yang L, Tang Q, Chen Z, Zhang S, Mu Y, Yan Y, Xu P, Yao D, Li F, Li C. EEG based emotion recognition by hierarchical bayesian spectral regression framework. J Neurosci Methods 2024; 402:110015. [PMID: 38000636 DOI: 10.1016/j.jneumeth.2023.110015] [Received: 06/23/2023] [Revised: 10/22/2023] [Accepted: 11/16/2023] [Indexed: 11/26/2023]
Abstract
Spectral regression (SR), a graph-based regression model, can extract features from graphs to achieve efficient dimensionality reduction. However, because the SR method remains a regularized least-squares problem defined in L2-norm space, it cannot efficiently resist the effect of artifacts in EEG signals. In this work, to further improve the robustness of graph-based regression models, we propose to utilize prior distribution estimation in the Bayesian framework and develop a robust hierarchical Bayesian spectral regression framework (HB-SR) designed with hierarchical Bayesian ensemble strategies. In the proposed HB-SR, the impact of noise can be effectively reduced by adaptively adjusting model parameters in a data-driven manner. Specifically, three different distributions have been elaborately designed to enhance the universality of the proposed HB-SR: the Gaussian, Laplace, and Student-t distributions. To objectively evaluate the HB-SR framework, we conducted both simulation studies and emotion recognition experiments based on emotional EEG signals. Experimental results consistently indicated that, compared with other existing spectral regression methods, the proposed HB-SR can effectively suppress the influence of noise and achieve robust EEG emotion recognition.
|