1. You L, Zhong T, He E, Liu X, Zhong Q. Cross-subject affective analysis based on dynamic brain functional networks. Front Hum Neurosci 2025;19:1445763. PMID: 40297263; PMCID: PMC12034672; DOI: 10.3389/fnhum.2025.1445763.
Abstract
Introduction: Emotion recognition is crucial in facilitating human-computer emotional interaction. To enhance the credibility and realism of emotion recognition, researchers have turned to physiological signals, particularly EEG signals, as they directly reflect cerebral cortex activity. However, due to inter-subject variability and the non-stationarity of EEG signals, cross-subject generalization remains a challenge.
Methods: In this study, we propose a novel approach that combines time-frequency analysis with brain functional networks, constructing dynamic brain functional networks over sliding time windows. This integration of the time, frequency, and spatial domains captures features effectively, reduces inter-individual differences, and improves generalization. To construct the networks, we use mutual information to quantify the correlation between EEG channels and apply appropriate thresholds. We then extract three network attributes (global efficiency, local efficiency, and local clustering coefficient) to classify emotions from the dynamic network features.
Results: The proposed method is evaluated on the DEAP dataset through subject-dependent (trial-independent), subject-independent, and subject- and trial-independent experiments along both the valence and arousal dimensions. In all three settings, the dynamic brain functional network outperforms its static counterpart; in the subject-independent experiments it achieves classification accuracies of 90.89% on valence and 91.17% on arousal. In addition, per-region experiments show that the left and right temporal lobes focus on processing individual, subject-specific emotional information, whereas the remaining brain regions attend to basic emotional information.
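The pipeline this abstract outlines (mutual information between channels inside sliding windows, a threshold to binarize the network, then three graph attributes) can be sketched as follows. This is a minimal illustration, not the authors' code: the window length, step, bin count, and the relative threshold rule are assumed values.

```python
import numpy as np
import networkx as nx
from sklearn.metrics import mutual_info_score

def mi_matrix(window, n_bins=16):
    """Pairwise mutual information between channels of one window."""
    n_ch = window.shape[0]
    binned = [np.digitize(ch, np.histogram_bin_edges(ch, bins=n_bins)) for ch in window]
    mi = np.zeros((n_ch, n_ch))
    for i in range(n_ch):
        for j in range(i + 1, n_ch):
            mi[i, j] = mi[j, i] = mutual_info_score(binned[i], binned[j])
    return mi

def dynamic_network_features(eeg, fs, win_s=2.0, step_s=1.0, thr=0.3):
    """Slide a window over (channels, samples) EEG; return per-window
    [global efficiency, local efficiency, mean clustering coefficient]."""
    win, step = int(win_s * fs), int(step_s * fs)
    feats = []
    for start in range(0, eeg.shape[1] - win + 1, step):
        mi = mi_matrix(eeg[:, start:start + win])
        adj = (mi > thr * mi.max()).astype(int)  # threshold -> binary graph
        g = nx.from_numpy_array(adj)
        feats.append([nx.global_efficiency(g),
                      nx.local_efficiency(g),
                      np.mean(list(nx.clustering(g).values()))])
    return np.asarray(feats)

# Example: 32 channels, 10 s of synthetic EEG at 128 Hz (DEAP's sampling rate)
feats = dynamic_network_features(np.random.randn(32, 1280), fs=128)
print(feats.shape)  # (n_windows, 3)
```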
Affiliation(s)
- Lifeng You, School of Physics, South China Normal University, Guangzhou, China
- Tianyu Zhong, School of Social Sciences, Nanyang Technological University, Singapore
- Erheng He, School of Physics, South China Normal University, Guangzhou, China
- Xuejie Liu, School of Electronic Science and Engineering (School of Microelectronics), South China Normal University, Foshan, China
- Qinghua Zhong, School of Electronic Science and Engineering (School of Microelectronics), South China Normal University, Foshan, China
2. Hu F, He K, Qian M, Liu X, Qiao Z, Zhang L, Xiong J. STAFNet: an adaptive multi-feature learning network via spatiotemporal fusion for EEG-based emotion recognition. Front Neurosci 2024;18:1519970. PMID: 39720230; PMCID: PMC11666491; DOI: 10.3389/fnins.2024.1519970.
Abstract
Introduction: Emotion recognition using electroencephalography (EEG) is a key aspect of brain-computer interface research. Achieving high accuracy requires effectively extracting and integrating both spatial and temporal features. However, many studies focus on a single dimension, neglecting the interplay and complementarity of multi-feature information and the importance of fully integrating spatial and temporal dynamics.
Methods: We propose the Spatiotemporal Adaptive Fusion Network (STAFNet), a novel framework combining adaptive graph convolution and temporal transformers to enhance the accuracy and robustness of EEG-based emotion recognition. The model includes an adaptive graph convolutional module that captures brain connectivity patterns through spatial dynamic evolution, and a multi-structured transformer fusion module that integrates latent correlations between spatial and temporal features for emotion classification.
Results: Extensive experiments on the SEED and SEED-IV datasets show that STAFNet achieves accuracies of 97.89% and 93.64%, respectively, outperforming state-of-the-art methods. Interpretability analyses, including confusion matrices and t-SNE visualizations, examine how different emotions affect recognition performance. An investigation of varying GCN depths further shows that STAFNet mitigates the over-smoothing issue of deeper GCN architectures.
Discussion: The findings validate the effectiveness of STAFNet for EEG-based emotion recognition, emphasize the critical role of spatiotemporal feature extraction, and introduce an innovative framework for feature fusion.
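A hedged sketch of the two components this abstract names follows: a graph convolution whose adjacency matrix is learned end-to-end, feeding a standard transformer encoder over the time axis. Layer sizes, depths, and the pooling scheme are illustrative assumptions; STAFNet's actual architecture is specified in the paper.

```python
import torch
import torch.nn as nn

class AdaptiveGraphConv(nn.Module):
    def __init__(self, n_ch, in_dim, out_dim):
        super().__init__()
        self.adj = nn.Parameter(torch.randn(n_ch, n_ch) * 0.01)  # learned connectivity
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x):                    # x: (batch, channels, features)
        a = torch.softmax(self.adj, dim=-1)  # row-normalized adjacency
        return torch.relu(self.lin(a @ x))   # aggregate neighbors, then project

class SpatioTemporalNet(nn.Module):
    def __init__(self, n_ch=62, feat=5, hidden=64, n_cls=3):
        super().__init__()
        self.gconv = AdaptiveGraphConv(n_ch, feat, hidden)
        enc = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(enc, num_layers=2)
        self.head = nn.Linear(hidden, n_cls)

    def forward(self, x):  # x: (batch, time, channels, features)
        b, t, c, f = x.shape
        h = self.gconv(x.reshape(b * t, c, f)).mean(dim=1)  # pool over channels
        h = self.temporal(h.reshape(b, t, -1))              # attend over time
        return self.head(h.mean(dim=1))

# 62 channels with 5 per-band features, matching a common SEED feature layout
logits = SpatioTemporalNet()(torch.randn(8, 10, 62, 5))
print(logits.shape)  # torch.Size([8, 3])
```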
Affiliation(s)
- Fo Hu, College of Information Engineering, Zhejiang University of Technology, Hangzhou, China
- Kailun He, College of Information Engineering, Zhejiang University of Technology, Hangzhou, China
- Mengyuan Qian, College of Information Engineering, Zhejiang University of Technology, Hangzhou, China
- Xiaofeng Liu, College of Information Engineering, Zhejiang University of Technology, Hangzhou, China
- Zukang Qiao, Department of Tuina, The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- Lekai Zhang, School of Design and Architecture, Zhejiang University of Technology, Hangzhou, China
- Junlong Xiong, Department of Tuina, The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
3. Jiang C, Dai Y, Ding Y, Chen X, Li Y, Tang Y. TSANN-TG: temporal-spatial attention neural networks with task-specific graph for EEG emotion recognition. Brain Sci 2024;14:516. PMID: 38790494; PMCID: PMC11119157; DOI: 10.3390/brainsci14050516.
Abstract
Electroencephalography (EEG)-based emotion recognition is increasingly pivotal in the realm of affective brain-computer interfaces. In this paper, we propose TSANN-TG (temporal-spatial attention neural network with a task-specific graph), a novel architecture tailored for enhancing feature extraction and effectively integrating temporal-spatial features. TSANN-TG comprises three primary components: a block that encodes node features and constructs adjacency matrices, a graph-aggregation block, and a block that fuses graph features and performs classification. Leveraging the distinct temporal scales of EEG features, TSANN-TG incorporates attention mechanisms for efficient feature extraction. By constructing task-specific adjacency matrices, its graph convolutional network with attention captures dynamic changes in the dependencies between EEG channels. TSANN-TG further emphasizes feature integration at multiple levels, improving performance on emotion-recognition tasks. The method is applied to both our FTEHD dataset and the publicly available DEAP dataset. Compared with baseline algorithms, TSANN-TG yields significant gains in accuracy and F1 score on the two benchmark datasets across four types of cognitive tasks, underscoring its potential to advance EEG-based emotion recognition.
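The graph-aggregation idea (attention between channels restricted by a task-specific adjacency matrix) can be illustrated with a single masked graph-attention layer, sketched below. The random mask and the layer sizes are placeholders; how TSANN-TG actually builds its task-specific matrices is detailed in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedGraphAttention(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)
        self.att = nn.Linear(2 * out_dim, 1)  # scores concatenated node pairs

    def forward(self, x, adj):  # x: (channels, in_dim), adj: (c, c) with 0/1 entries
        h = self.proj(x)        # node embeddings
        c = h.shape[0]
        pairs = torch.cat([h.unsqueeze(1).expand(c, c, -1),
                           h.unsqueeze(0).expand(c, c, -1)], dim=-1)
        e = F.leaky_relu(self.att(pairs)).squeeze(-1)  # raw attention scores
        e = e.masked_fill(adj == 0, float('-inf'))     # task-specific mask
        alpha = torch.softmax(e, dim=-1)               # attention over allowed neighbors
        return torch.relu(alpha @ h)

adj = (torch.rand(32, 32) > 0.5).int()  # placeholder task-specific graph
adj.fill_diagonal_(1)                   # keep self-loops so no row is fully masked
out = MaskedGraphAttention(5, 16)(torch.randn(32, 5), adj)
print(out.shape)  # torch.Size([32, 16])
```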
Affiliation(s)
- Chao Jiang, School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China; College of Electronics and Information Engineering, Shanghai University of Electric Power, Shanghai 200090, China
- Yingying Dai, College of Electronics and Information Engineering, Shanghai University of Electric Power, Shanghai 200090, China
- Yunheng Ding, College of Electronics and Information Engineering, Shanghai University of Electric Power, Shanghai 200090, China
- Xi Chen, School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Yingjie Li, College of International Education, Shanghai University, Shanghai 200444, China; School of Life Sciences, Shanghai University, Shanghai 200444, China; Institute of Biomedical Engineering, Shanghai University, Shanghai 200444, China
- Yingying Tang, Shanghai Mental Health Center, Shanghai 200030, China
4. Hu L, Tan C, Xu J, Qiao R, Hu Y, Tian Y. Decoding emotion with phase-amplitude fusion features of EEG functional connectivity network. Neural Netw 2024;172:106148. PMID: 38309138; DOI: 10.1016/j.neunet.2024.106148.
Abstract
Decoding emotional neural representations from the electroencephalography (EEG)-based functional connectivity network (FCN) is of great scientific importance for uncovering the mechanisms of emotional cognition and developing harmonious human-computer interaction. However, existing methods rely mainly on phase-based FCN measures (e.g., the phase locking value [PLV]) to capture dynamic interactions between brain oscillations in emotional states, and these fail to reflect how the energy of cortical oscillations fluctuates over time. In this study, we first examined the efficacy of amplitude-based functional networks (e.g., the amplitude envelope correlation [AEC]) in representing emotional states. We then proposed an efficient phase-amplitude fusion framework (PAF) that fuses PLV and AEC, and used the common spatial pattern (CSP) to extract fused spatial topological features from PAF for multi-class emotion recognition. Extensive experiments on the DEAP and MAHNOB-HCI datasets showed that: (1) AEC-derived spatial network topological features can discriminate emotional states, and the differential network patterns of AEC reflect dynamic interactions among brain regions associated with emotional cognition; and (2) the proposed fusion features outperformed other state-of-the-art methods in classification accuracy on both datasets. Moreover, the spatial filter learned from PAF is separable and interpretable, enabling a description of affective activation patterns from both the phase and amplitude perspectives.
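Both connectivity measures the paper fuses have standard definitions, sketched below via the Hilbert transform: PLV as the mean resultant length of inter-channel phase differences, and AEC as the Pearson correlation of amplitude envelopes. The band-pass settings are illustrative; the PAF fusion rule and the CSP stage are not reproduced here.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def plv_aec(eeg, fs, band=(8, 13)):
    """Return (PLV, AEC) channel-by-channel matrices for one frequency band."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype='band')
    analytic = hilbert(filtfilt(b, a, eeg, axis=1), axis=1)
    phase, amp = np.angle(analytic), np.abs(analytic)
    # PLV: mean resultant length of pairwise phase differences over time
    plv = np.abs(np.exp(1j * (phase[:, None] - phase[None, :])).mean(axis=-1))
    # AEC: Pearson correlation between amplitude envelopes
    aec = np.corrcoef(amp)
    return plv, aec

plv, aec = plv_aec(np.random.randn(32, 8 * 128), fs=128)  # 8 s of 32-channel data
print(plv.shape, aec.shape)  # (32, 32) (32, 32)
```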
Affiliation(s)
- Liangliang Hu, College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China; West China Institute of Children's Brain and Cognition, Chongqing University of Education, Chongqing 400065, China
- Congming Tan, College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Jiayang Xu, School of Bioinformatics, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Rui Qiao, School of Bioinformatics, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Yilin Hu, School of Bioinformatics, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Yin Tian, College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China; School of Bioinformatics, Chongqing University of Posts and Telecommunications, Chongqing 400065, China; Institute for Advanced Sciences, Chongqing University of Posts and Telecommunications, Chongqing 400065, China; Chongqing Institute for Brain and Intelligence, Guangyang Bay Laboratory, Chongqing 400064, China
5. Jafari M, Shoeibi A, Khodatars M, Bagherzadeh S, Shalbaf A, García DL, Gorriz JM, Acharya UR. Emotion recognition in EEG signals using deep learning methods: a review. Comput Biol Med 2023;165:107450. PMID: 37708717; DOI: 10.1016/j.compbiomed.2023.107450.
Abstract
Emotions are a critical aspect of daily life and play a crucial role in human decision-making, planning, reasoning, and other mental states. As a result, they are considered a significant factor in human interactions. Human emotions can be identified from various sources, such as facial expressions, speech, behavior (gesture/posture), or physiological signals. Physiological signals can enhance the objectivity and reliability of emotion detection. Compared with peripheral physiological signals, electroencephalogram (EEG) recordings are generated directly by the central nervous system and are closely related to human emotions. EEG offers high temporal resolution, which facilitates the evaluation of brain function and makes it a popular modality in emotion recognition studies. Emotion recognition using EEG signals nonetheless presents several challenges, including signal variability due to electrode positioning, individual differences in signal morphology, and the lack of a universal standard for EEG signal processing. Moreover, identifying the appropriate features for emotion recognition from EEG data requires further research. Finally, there is a need for more robust artificial intelligence (AI) methods, spanning conventional machine learning (ML) and deep learning (DL), to handle the complex and diverse EEG signals associated with emotional states. This paper examines the application of DL techniques to emotion recognition from EEG signals and provides a detailed discussion of the relevant literature. It explores the significant challenges in EEG-based emotion recognition, highlights the potential of DL techniques to address them, and suggests directions for future research.
Affiliation(s)
- Mahboobeh Jafari, Data Science and Computational Intelligence Institute, University of Granada, Spain
- Afshin Shoeibi, Data Science and Computational Intelligence Institute, University of Granada, Spain
- Marjane Khodatars, Data Science and Computational Intelligence Institute, University of Granada, Spain
- Sara Bagherzadeh, Department of Biomedical Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Ahmad Shalbaf, Department of Biomedical Engineering and Medical Physics, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- David López García, Data Science and Computational Intelligence Institute, University of Granada, Spain
- Juan M Gorriz, Data Science and Computational Intelligence Institute, University of Granada, Spain; Department of Psychiatry, University of Cambridge, UK
- U Rajendra Acharya, School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Australia
6. Yu X, Li Z, Zang Z, Liu Y. Real-time EEG-based emotion recognition. Sensors (Basel) 2023;23:7853. PMID: 37765910; PMCID: PMC10534520; DOI: 10.3390/s23187853.
Abstract
Numerous studies have demonstrated that EEG can be applied to emotion recognition, and real-time operation is an important requirement in this setting. In this paper, the real-time problem of EEG-based emotion recognition is analyzed. Short time windows and attention mechanisms are designed on the EEG signals to follow emotional changes over time, and a long short-term memory network with an additive attention mechanism is used for recognition so that emotion estimates update promptly. The model is applied to the SEED and SEED-IV datasets to verify the feasibility of real-time emotion recognition. The results show that the model performs well in real time, with accuracies of 85.40% on SEED and 74.26% on SEED-IV, although accuracy falls short of the ideal because of data-labeling constraints and other losses incurred in the pursuit of real-time performance.
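The model family this abstract describes (an LSTM over short windows with additive attention pooling the hidden states before classification) can be sketched as follows. The input dimension assumes the common 62-channel x 5-band differential-entropy layout used with SEED; all sizes are illustrative, not the paper's settings.

```python
import torch
import torch.nn as nn

class AttentiveLSTM(nn.Module):
    def __init__(self, in_dim=310, hidden=128, n_cls=3):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.w = nn.Linear(hidden, hidden)
        self.v = nn.Linear(hidden, 1)  # additive (Bahdanau-style) attention score
        self.head = nn.Linear(hidden, n_cls)

    def forward(self, x):                      # x: (batch, time, features)
        h, _ = self.lstm(x)                    # hidden states: (batch, time, hidden)
        score = self.v(torch.tanh(self.w(h)))  # (batch, time, 1)
        alpha = torch.softmax(score, dim=1)    # attention weights over time steps
        ctx = (alpha * h).sum(dim=1)           # weighted sum of hidden states
        return self.head(ctx)

# 310 = 62 channels x 5 band features; 6 short windows per sample
logits = AttentiveLSTM()(torch.randn(4, 6, 310))
print(logits.shape)  # torch.Size([4, 3])
```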
Affiliation(s)
- Xiangkun Yu, College of Computer Science and Technology, Qingdao University, Qingdao 266071, China
- Zhengjie Li, School of Automation, Qingdao University, Qingdao 266071, China
- Zhibang Zang, College of Computer Science and Technology, Qingdao University, Qingdao 266071, China
- Yinhua Liu, School of Automation, Qingdao University, Qingdao 266071, China; Shandong Key Laboratory of Industrial Control Technology, Qingdao 266071, China; Institute for Future, Qingdao University, Qingdao 266071, China
7. Chao H, Cao Y, Liu Y. Multi-channel EEG emotion recognition through residual graph attention neural network. Front Neurosci 2023;17:1135850. PMID: 37559702; PMCID: PMC10407101; DOI: 10.3389/fnins.2023.1135850.
Abstract
In this paper, a novel EEG emotion recognition method based on a residual graph attention neural network is proposed. The method constructs a three-dimensional sparse feature matrix according to the relative positions of the electrode channels and feeds it into a residual network to extract high-level abstract features that encode electrode spatial information. In parallel, an adjacency matrix representing the connections between electrode channels is constructed, and the time-domain features of the multi-channel EEG are modeled as a graph. A graph attention network then learns the intrinsic relationships between EEG channels located in different brain regions from the adjacency matrix and the constructed graph-structured data. Finally, the high-level features extracted by the two networks are fused to predict the emotional state. Experiments on the DEAP dataset show that the spatial information of the electrode channels and the intrinsic relationships between channels carry salient information about emotional state, and that the proposed model effectively fuses this information to improve multi-channel EEG emotion recognition.
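The first step this abstract names, arranging per-channel features into a sparse grid that preserves relative electrode positions so a convolutional residual network can exploit spatial layout, can be sketched as below. The 9x9 grid and the handful of electrode coordinates shown follow a common convention for 10-20 montages and are not the paper's exact mapping.

```python
import numpy as np

GRID = 9
# Hypothetical subset of electrode -> (row, col) grid positions
POS = {'Fp1': (0, 3), 'Fp2': (0, 5), 'F3': (2, 2), 'F4': (2, 6),
       'C3': (4, 2), 'Cz': (4, 4), 'C4': (4, 6), 'P3': (6, 2),
       'P4': (6, 6), 'O1': (8, 3), 'O2': (8, 5)}

def to_sparse_grid(features):
    """features: dict electrode -> feature vector; returns (GRID, GRID, dim)."""
    dim = len(next(iter(features.values())))
    grid = np.zeros((GRID, GRID, dim))  # unoccupied cells stay zero (sparse)
    for name, vec in features.items():
        r, c = POS[name]
        grid[r, c] = vec
    return grid

feats = {name: np.random.randn(5) for name in POS}  # e.g., 5 band powers per channel
print(to_sparse_grid(feats).shape)  # (9, 9, 5)
```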
Affiliation(s)
- Hao Chao, College of Computer Science and Technology, Henan Polytechnic University, Jiaozuo, China
8. Zong J, Xiong X, Zhou J, Ji Y, Zhou D, Zhang Q. FCAN-XGBoost: a novel hybrid model for EEG emotion recognition. Sensors (Basel) 2023;23:5680. PMID: 37420845; DOI: 10.3390/s23125680.
Abstract
In recent years, artificial intelligence (AI) technology has advanced electroencephalogram (EEG) emotion recognition. However, existing methods often overlook the computational cost of EEG emotion recognition, and there is still room to improve its accuracy. In this study, we propose a novel EEG emotion recognition algorithm, FCAN-XGBoost, that fuses two components: FCAN and XGBoost. The FCAN module is a newly proposed feature attention network (FANet) that processes the differential entropy (DE) and power spectral density (PSD) features extracted from four frequency bands of the EEG signal, performing feature fusion and deep feature extraction. The deep features are then fed into the eXtreme Gradient Boosting (XGBoost) algorithm to classify four emotions. Evaluated on the DEAP and DREAMER datasets, the proposed method achieves four-class emotion recognition accuracies of 95.26% and 94.05%, respectively. It also reduces the computational cost of EEG emotion recognition by at least 75.45% in computation time and 67.51% in memory occupation. FCAN-XGBoost outperforms state-of-the-art four-class models and reduces computational cost without sacrificing classification performance.
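The feature front end this abstract describes (DE and PSD per frequency band, with the result classified by XGBoost) is sketched below; the FCAN deep feature extractor is omitted. Band edges, Welch settings, and XGBoost hyperparameters are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.signal import welch
from xgboost import XGBClassifier

BANDS = {'theta': (4, 8), 'alpha': (8, 13), 'beta': (13, 30), 'gamma': (30, 45)}

def de_psd_features(eeg, fs):
    """eeg: (channels, samples) -> 1-D vector of per-band DE and PSD features."""
    freqs, pxx = welch(eeg, fs=fs, nperseg=fs, axis=1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        band_power = pxx[:, mask].mean(axis=1)  # mean PSD in the band
        # DE of a Gaussian signal: 0.5 * ln(2 * pi * e * variance);
        # the band power stands in for the band-limited variance here.
        de = 0.5 * np.log(2 * np.pi * np.e * band_power)
        feats.extend([de, band_power])
    return np.concatenate(feats)

# Toy four-class training run on random data, just to show the interface
X = np.stack([de_psd_features(np.random.randn(32, 512), fs=128) for _ in range(40)])
y = np.random.randint(0, 4, size=40)
clf = XGBClassifier(n_estimators=50, max_depth=4).fit(X, y)
print(clf.predict(X[:5]))
```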
Affiliation(s)
- Jing Zong, Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
- Xin Xiong, Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
- Jianhua Zhou, Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
- Ying Ji, Graduate School, Kunming Medical University, Kunming 650500, China
- Diao Zhou, Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
- Qi Zhang, Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China