1. Jafari M, Shoeibi A, Khodatars M, Bagherzadeh S, Shalbaf A, García DL, Gorriz JM, Acharya UR. Emotion recognition in EEG signals using deep learning methods: A review. Comput Biol Med 2023; 165:107450. PMID: 37708717. DOI: 10.1016/j.compbiomed.2023.107450.
Abstract
Emotions are a critical aspect of daily life and play a crucial role in human decision-making, planning, reasoning, and other mental states; as a result, they are considered a significant factor in human interactions. Human emotions can be identified through various sources, such as facial expressions, speech, behavior (gesture/posture), or physiological signals. The use of physiological signals can enhance the objectivity and reliability of emotion detection. Compared with peripheral physiological signals, electroencephalogram (EEG) recordings are directly generated by the central nervous system and are closely related to human emotions. EEG signals have high temporal resolution, which facilitates the evaluation of brain function and makes them a popular modality in emotion recognition studies. Emotion recognition using EEG signals presents several challenges, including signal variability due to electrode positioning, individual differences in signal morphology, and the lack of a universal standard for EEG signal processing. Moreover, identifying the appropriate features for emotion recognition from EEG data requires further research. Finally, there is a need to develop more robust artificial intelligence (AI) methods, spanning conventional machine learning (ML) and deep learning (DL), to handle the complex and diverse EEG signals associated with emotional states. This paper examines the application of DL techniques to emotion recognition from EEG signals and provides a detailed discussion of relevant articles. It explores the significant challenges in emotion recognition using EEG signals, highlights the potential of DL techniques to address them, suggests directions for future research, and concludes with a summary of its findings.
Affiliation(s)
- Mahboobeh Jafari
- Data Science and Computational Intelligence Institute, University of Granada, Spain
- Afshin Shoeibi
- Data Science and Computational Intelligence Institute, University of Granada, Spain
- Marjane Khodatars
- Data Science and Computational Intelligence Institute, University of Granada, Spain
- Sara Bagherzadeh
- Department of Biomedical Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Ahmad Shalbaf
- Department of Biomedical Engineering and Medical Physics, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- David López García
- Data Science and Computational Intelligence Institute, University of Granada, Spain
- Juan M Gorriz
- Data Science and Computational Intelligence Institute, University of Granada, Spain; Department of Psychiatry, University of Cambridge, UK
- U Rajendra Acharya
- School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Australia
2. Bai Z, Hou F, Sun K, Wu Q, Zhu M, Mao Z, Song Y, Gao Q. SECT: A Method of Shifted EEG Channel Transformer for Emotion Recognition. IEEE J Biomed Health Inform 2023; 27:4758-4767. PMID: 37540609. DOI: 10.1109/jbhi.2023.3301993.
Abstract
Recently, electroencephalographic (EEG) emotion recognition has attracted attention in the field of human-computer interaction (HCI). However, most existing EEG emotion datasets consist primarily of data from subjects without hearing impairment. To enhance diversity, this study collected EEG signals from 30 hearing-impaired subjects while they watched video clips eliciting six different emotions (happiness, inspiration, neutral, anger, fear, and sadness). The frequency-domain feature matrices of the EEG signals, comprising power spectral density (PSD) and differential entropy (DE), were up-sampled using cubic spline interpolation to capture the correlation among different channels. To select emotion-representative information from both global and localized brain regions, a novel method called the Shifted EEG Channel Transformer (SECT) was proposed. The SECT method consists of two layers: the first uses the traditional channel Transformer (CT) structure to process information from global brain regions, while the second acquires localized information from centrally symmetrical and reorganized brain regions via a shifted channel Transformer (S-CT). In a subject-dependent experiment, the PSD and DE features reached accuracies of 82.51% and 84.76%, respectively, for six-class emotion classification. Moreover, subject-independent experiments on public datasets yielded accuracies of 85.43% (3-class, SEED), 66.83% (2-class on valence, DEAP), and 65.31% (2-class on arousal, DEAP).
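The PSD and DE features named in this abstract are standard and compact enough to sketch. The following is a minimal illustration, not the paper's code: the 128 Hz sampling rate, the band limits, and the synthetic data are all assumptions, and DE is computed under the usual Gaussian assumption as 0.5·log(2πe·σ²), with band power standing in for the variance.

```python
import numpy as np
from scipy.signal import welch

fs = 128  # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
eeg = rng.standard_normal((32, fs * 4))  # 32 channels x 4 s of synthetic EEG

# conventional frequency bands (Hz); limits are illustrative, not from the paper
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_features(signals, fs):
    """Return per-channel band power (PSD) and differential entropy (DE)."""
    freqs, psd = welch(signals, fs=fs, nperseg=fs)  # Welch PSD estimate per channel
    feats = {}
    for name, (lo, hi) in bands.items():
        mask = (freqs >= lo) & (freqs < hi)
        power = psd[:, mask].mean(axis=1)
        # DE of a Gaussian with variance sigma^2 is 0.5 * log(2*pi*e*sigma^2)
        de = 0.5 * np.log(2 * np.pi * np.e * power)
        feats[name] = (power, de)
    return feats

features = band_features(eeg, fs)
```

Each band then contributes one PSD value and one DE value per channel, which is the kind of channel-by-band feature matrix that cubic spline interpolation can up-sample.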
3. Hancer E, Subasi A. EEG-based emotion recognition using dual tree complex wavelet transform and random subspace ensemble classifier. Comput Methods Biomech Biomed Engin 2023; 26:1772-1784. PMID: 36367337. DOI: 10.1080/10255842.2022.2143714.
Abstract
Emotions are widely acknowledged as a key ingredient of meaningful interactions between humans and computers. Thanks to advancements in electroencephalography (EEG), especially the availability of portable and inexpensive wearable EEG devices, the demand for identifying emotions has increased enormously. However, the overall scientific knowledge concerning EEG-based emotion recognition is still limited. To address this gap, we introduce an EEG-based emotion recognition framework in this study. The proposed framework involves the following stages: preprocessing, feature extraction, feature selection, and classification. For the preprocessing stage, multiscale principal component analysis and a symlets-4 filter are used. A variant of the discrete wavelet transform (DWT), namely the dual-tree complex wavelet transform (DTCWT), is utilized for the feature extraction stage. To reduce the feature dimensionality, a variety of statistical criteria are employed. For the final stage, we adopt ensemble classifiers because of their promising performance in classification problems. The proposed framework achieves nearly 96.8% accuracy using a random subspace ensemble classifier. It can therefore be concluded that the proposed EEG-based framework performs well in identifying emotions.
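The random subspace idea behind the final classification stage can be illustrated independently of the DTCWT features: each ensemble member is trained on a random subset of the feature columns, and the members vote. This is a hedged numpy-only sketch under assumed data; a simple nearest-centroid learner stands in for the study's actual base classifiers.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_centroids(X, y):
    """Nearest-centroid base learner: one mean vector per class."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def predict_centroids(model, X):
    classes, centroids = model
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[dists.argmin(axis=1)]

def fit_random_subspace(X, y, n_members=11, frac=0.5):
    """Train each ensemble member on a random subset of feature columns."""
    k = max(1, int(frac * X.shape[1]))
    members = []
    for _ in range(n_members):
        cols = rng.choice(X.shape[1], size=k, replace=False)
        members.append((cols, fit_centroids(X[:, cols], y)))
    return members

def predict_random_subspace(members, X):
    votes = np.stack([predict_centroids(m, X[:, cols]) for cols, m in members])
    # majority vote over ensemble members, per sample
    return np.array([np.bincount(col).argmax() for col in votes.T])

# two well-separated synthetic classes as stand-ins for emotion feature vectors
X = np.vstack([rng.normal(0.0, 0.3, (20, 8)), rng.normal(3.0, 0.3, (20, 8))])
y = np.repeat([0, 1], 20)
pred = predict_random_subspace(fit_random_subspace(X, y), X)
```

Because each member sees only half the feature space, the ensemble is less sensitive to irrelevant or noisy dimensions, which is the usual rationale for random subspace methods.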
Affiliation(s)
- Emrah Hancer
- Department of Software Engineering, Bucak Technology Faculty, Mehmet Akif Ersoy University, Burdur, Turkey
- Abdulhamit Subasi
- Institute of Biomedicine, Faculty of Medicine, University of Turku, Turku, Finland
- Department of Computer Science, College of Engineering, Effat University, Jeddah, Saudi Arabia
4. Li M, Qiu M, Zhu L, Kong W. Feature hypergraph representation learning on spatial-temporal correlations for EEG emotion recognition. Cogn Neurodyn 2023; 17:1271-1281. PMID: 37786664. PMCID: PMC10542078. DOI: 10.1007/s11571-022-09890-3.
Abstract
The electroencephalogram (EEG) has become popular in emotion recognition for its capability of selectively reflecting real emotional states. Existing graph-based methods have made initial progress in representing pairwise spatial relationships, but they leave out higher-order relationships among EEG channels and within EEG series. Constructing a hypergraph is a general way of representing such higher-order relations. In this paper, we propose a spatial-temporal hypergraph convolutional network (STHGCN) to capture the higher-order relationships that exist in EEG recordings. STHGCN is a two-block hypergraph convolutional network in which feature hypergraphs are constructed over the spectral, spatial, and temporal domains to explore spatial and temporal correlations under specific emotional states, namely the correlations among EEG channels and the dynamic relationships among time stamps. Moreover, a self-attention mechanism is combined with the hypergraph convolutional network to initialize and update the relationships of EEG series. The experimental results demonstrate that the constructed feature hypergraphs can effectively capture the correlations among valuable EEG channels and within valuable EEG series, leading to the best emotion recognition accuracy among graph methods. In addition, compared with other competitive methods, the proposed method achieves state-of-the-art results on the SEED and SEED-IV datasets.
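The hypergraph convolution at the core of such models has a standard closed form, X' = D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X Θ, where H is the node-hyperedge incidence matrix. A minimal numpy sketch of one such layer follows; the toy sizes and random incidence matrix are assumptions, not the STHGCN implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_edges, d_in, d_out = 6, 3, 4, 2  # toy sizes: 6 channels, 3 hyperedges

# incidence matrix: H[v, e] = 1 if channel v belongs to hyperedge e
H = (rng.random((n_nodes, n_edges)) > 0.5).astype(float)
H[H.sum(axis=1) == 0, 0] = 1.0   # every node joins at least one hyperedge
H[0, H.sum(axis=0) == 0] = 1.0   # every hyperedge contains at least one node

w = np.ones(n_edges)                        # hyperedge weights
X = rng.standard_normal((n_nodes, d_in))    # node (channel) features
Theta = rng.standard_normal((d_in, d_out))  # learnable projection

deg_v = H @ w            # weighted node degrees
deg_e = H.sum(axis=0)    # hyperedge degrees
Dv_is = np.diag(1.0 / np.sqrt(deg_v))
De_inv = np.diag(1.0 / deg_e)
W = np.diag(w)

# one hypergraph convolution layer (nonlinearity omitted):
# X' = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} X Theta
X_out = Dv_is @ H @ W @ De_inv @ H.T @ Dv_is @ X @ Theta
```

Unlike an ordinary graph convolution, each hyperedge here mixes information across an arbitrary set of channels at once, which is what captures the higher-order relations the abstract refers to.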
Affiliation(s)
- Menghang Li
- College of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, 310018 China
- Key Laboratory of Brain Machine Collaborative Intelligence of Zhejiang Province, Hangzhou, 310018 China
- Min Qiu
- College of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, 310018 China
- Key Laboratory of Brain Machine Collaborative Intelligence of Zhejiang Province, Hangzhou, 310018 China
- Li Zhu
- College of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, 310018 China
- Key Laboratory of Brain Machine Collaborative Intelligence of Zhejiang Province, Hangzhou, 310018 China
- Wanzeng Kong
- College of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, 310018 China
- Key Laboratory of Brain Machine Collaborative Intelligence of Zhejiang Province, Hangzhou, 310018 China
5. Tasci G, Gun MV, Keles T, Tasci B, Barua PD, Tasci I, Dogan S, Baygin M, Palmer EE, Tuncer T, Ooi CP, Acharya UR. QLBP: Dynamic patterns-based feature extraction functions for automatic detection of mental health and cognitive conditions using EEG signals. Chaos Solitons Fractals 2023; 172:113472. DOI: 10.1016/j.chaos.2023.113472.
6. Yang L, Wang Y, Yang X, Zheng C. Stochastic weight averaging enhanced temporal convolution network for EEG-based emotion recognition. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2023.104661.
7. Dogan S, Tuncer I, Baygin M, Tuncer T. A new hand-modeled learning framework for driving fatigue detection using EEG signals. Neural Comput Appl 2023. DOI: 10.1007/s00521-023-08491-3.
8. Gul Y, Muezzinoglu T, Kilicarslan G, Dogan S, Tuncer T. Application of the deep transfer learning framework for hydatid cyst classification using CT images. Soft Comput 2023. DOI: 10.1007/s00500-023-07945-z.
9. Peng G, Zhao K, Zhang H, Xu D, Kong X. Temporal relative transformer encoding cooperating with channel attention for EEG emotion analysis. Comput Biol Med 2023; 154:106537. PMID: 36682180. DOI: 10.1016/j.compbiomed.2023.106537.
Abstract
Electroencephalogram (EEG)-based emotion computing has become a hot topic in brain-computer fusion. EEG signals have inherent temporal and spatial characteristics, yet existing studies have not fully considered both properties. In addition, the position encoding mechanism in the vanilla Transformer cannot effectively encode the continuous temporal character of emotion. A temporal relative (TR) encoding mechanism is proposed to encode temporal EEG signals for constructing temporality self-attention in the Transformer. To explore the contribution of each EEG channel (corresponding to an electrode on the cerebral cortex) to emotion analysis, a channel-attention (CA) mechanism is presented. The temporality self-attention mechanism cooperates with the channel-attention mechanism to exploit the temporal and spatial information of EEG signals simultaneously. Exhaustive experiments are conducted on the DEAP dataset, including binary classification on valence, arousal, dominance, and liking. Furthermore, a discrete emotion category classification task is conducted by mapping the dimensional annotations of DEAP into discrete emotion categories (5-class). Experimental results demonstrate that our model outperforms advanced methods on all classification tasks.
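The channel-attention idea, weighting each electrode's features by a learned relevance score, can be sketched in a few lines. This is an illustrative stand-in, not the paper's CA module: the scoring projection `w` is a hypothetical learned parameter, and the data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_times = 32, 256
feats = rng.standard_normal((n_channels, n_times))  # one feature row per channel

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

w = rng.standard_normal(n_times)   # hypothetical learned scoring vector
scores = feats @ w                 # one relevance score per EEG channel
alpha = softmax(scores)            # attention weights over channels, sum to 1
attended = alpha[:, None] * feats  # channels reweighted before later layers
```

In a trained model the weights in `alpha` indicate which electrodes the network finds most informative for the emotion task, which is the interpretability angle the abstract alludes to.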
Affiliation(s)
- Guoqin Peng
- Yunnan University, Kunming, 650500, China; Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, 310013, China; Department of Psychiatry of Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, 310013, China.
- Hao Zhang
- Yunnan University, Kunming, 650500, China
- Dan Xu
- Yunnan University, Kunming, 650500, China.
- Xiangzhen Kong
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, 310013, China; Department of Psychiatry of Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, 310013, China.
10. Wang X, Ren Y, Luo Z, He W, Hong J, Huang Y. Deep learning-based EEG emotion recognition: Current trends and future perspectives. Front Psychol 2023; 14:1126994. PMID: 36923142. PMCID: PMC10009917. DOI: 10.3389/fpsyg.2023.1126994.
Abstract
Automatic electroencephalogram (EEG) emotion recognition is a challenging component of human-computer interaction (HCI). Inspired by the powerful feature learning ability of recently emerged deep learning techniques, various advanced deep learning models have been employed increasingly to learn high-level feature representations for EEG emotion recognition. This paper aims to provide an up-to-date and comprehensive survey of EEG emotion recognition, with particular attention to the deep learning techniques used in this area. We provide the preliminaries and basic knowledge from the literature, briefly review the benchmark EEG emotion recognition datasets, and review deep learning techniques in detail, including deep belief networks, convolutional neural networks, and recurrent neural networks. We then describe state-of-the-art applications of deep learning techniques for EEG emotion recognition, analyze the challenges and opportunities in this field, and point out future directions.
Affiliation(s)
- Xiaohu Wang
- School of Intelligent Manufacturing and Mechanical Engineering, Hunan Institute of Technology, Hengyang, China
- Yongmei Ren
- School of Electrical and Information Engineering, Hunan Institute of Technology, Hengyang, China
- Ze Luo
- School of Intelligent Manufacturing and Mechanical Engineering, Hunan Institute of Technology, Hengyang, China
- Wei He
- School of Electrical and Information Engineering, Hunan Institute of Technology, Hengyang, China
- Jun Hong
- School of Intelligent Manufacturing and Mechanical Engineering, Hunan Institute of Technology, Hengyang, China
- Yinzhen Huang
- School of Computer and Information Engineering, Hunan Institute of Technology, Hengyang, China
11. Siddiqui F, Mohammad A, Alam MA, Naaz S, Agarwal P, Sohail SS, Madsen DØ. Deep Neural Network for EEG Signal-Based Subject-Independent Imaginary Mental Task Classification. Diagnostics (Basel) 2023; 13:640. PMID: 36832128. PMCID: PMC9955721. DOI: 10.3390/diagnostics13040640.
Abstract
BACKGROUND Mental task identification using electroencephalography (EEG) signals is required for patients with limited or no motor movement. A subject-independent mental task classification framework can identify the mental task of a subject for whom no training statistics are available. Deep learning frameworks are popular among researchers for analyzing both spatial and time-series data, making them well suited to classifying EEG signals. METHOD In this paper, a deep neural network model is proposed for classifying an imagined mental task from EEG signal data. Pre-computed features were obtained after the raw EEG signals acquired from the subjects were spatially filtered with a surface Laplacian. To handle the high-dimensional data, principal component analysis (PCA) was performed, which helps extract the most discriminating features from the input vectors. RESULT The proposed model is non-invasive and aims to extract mental-task-specific features from EEG data acquired from a particular subject. Training was performed on the average combined power spectral density (PSD) values of all but one subject. The performance of the proposed deep neural network (DNN) model was evaluated on a benchmark dataset, achieving 77.62% accuracy. CONCLUSION The performance and comparison analysis with related existing works validated that the proposed cross-subject classification framework outperforms the state-of-the-art algorithms in accurately identifying mental tasks from EEG signals.
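The PCA step described in the METHOD section, projecting high-dimensional PSD feature vectors onto the directions of largest variance, reduces to a centered SVD. A minimal sketch with synthetic shapes follows; the trial and feature counts are assumptions, not the paper's dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 100))  # 40 trials x 100 hypothetical PSD features

def pca_reduce(X, n_components):
    """Project rows of X onto the top principal components."""
    Xc = X - X.mean(axis=0)                       # center each feature column
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (S ** 2) / (S ** 2).sum()         # variance ratio per component
    return Xc @ Vt[:n_components].T, explained[:n_components]

Z, ratio = pca_reduce(X, 10)  # 100-dimensional features reduced to 10
```

Each row of `Z` is a low-dimensional trial representation; `ratio` shows how much variance each retained component explains, which is the usual criterion for choosing the number of components.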
Affiliation(s)
- Farheen Siddiqui
- Department of Computer Science and Engineering, School of Engineering Sciences and Technology, Jamia Hamdard, New Delhi 110062, India
- Awwab Mohammad
- Department of Computer Science and Engineering, School of Engineering Sciences and Technology, Jamia Hamdard, New Delhi 110062, India
- M. Afshar Alam
- Department of Computer Science and Engineering, School of Engineering Sciences and Technology, Jamia Hamdard, New Delhi 110062, India
- Sameena Naaz
- Department of Computer Science and Engineering, School of Engineering Sciences and Technology, Jamia Hamdard, New Delhi 110062, India
- Parul Agarwal
- Department of Computer Science and Engineering, School of Engineering Sciences and Technology, Jamia Hamdard, New Delhi 110062, India
- Shahab Saquib Sohail
- Department of Computer Science and Engineering, School of Engineering Sciences and Technology, Jamia Hamdard, New Delhi 110062, India
- Correspondence: (S.S.S.); (D.Ø.M.)
- Dag Øivind Madsen
- Department of Business, Marketing and Law, USN School of Business, University of South-Eastern Norway, 3511 Hønefoss, Norway
- Correspondence: (S.S.S.); (D.Ø.M.)
12. Zhong MY, Yang QY, Liu Y, Zhen BY, Zhao FD, Xie BB. EEG emotion recognition based on TQWT-features and hybrid convolutional recurrent neural network. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104211.
13. Multimodal EEG Emotion Recognition Based on the Attention Recurrent Graph Convolutional Network. Information 2022. DOI: 10.3390/info13110550.
Abstract
EEG-based emotion recognition has become an important part of human–computer interaction. To address the problem that single-modal features are incomplete, in this paper we propose a multimodal emotion recognition method based on an attention recurrent graph convolutional neural network, denoted Mul-AT-RGCN. The method explores the relationships among multiple modal feature channels of EEG and peripheral physiological signals, converts one-dimensional sequence features into two-dimensional map features for modeling, and then extracts spatiotemporal and frequency-space features from the resulting multimodal features. These two types of features are fed into a recurrent graph convolutional network with a convolutional block attention module for deep semantic feature extraction and emotion classification. To reduce differences between subjects, a domain adaptation module is also introduced for cross-subject experimental verification. The proposed method performs feature learning in the three dimensions of time, space, and frequency by exploiting the complementary relationships among different modal data, so that the learned deep emotion-related features are more discriminative. Tested on the multimodal DEAP dataset, the method reached average within-subject classification accuracies of 93.19% for valence and 91.82% for arousal, improvements of 5.1% and 4.69%, respectively, over the EEG-only modality, and superior to most current methods. The cross-subject experiment also obtained better classification accuracies, verifying the effectiveness of the proposed method for multimodal EEG emotion recognition.
14. Dogan A, Barua PD, Baygin M, Tuncer T, Dogan S, Yaman O, Dogru AH, Acharya RU. Automated accurate emotion classification using Clefia pattern-based features with EEG signals. Int J Healthc Manag 2022. DOI: 10.1080/20479700.2022.2141694.
Affiliation(s)
- Abdullah Dogan
- Department of Computer Engineering, Middle East Technical University, Ankara, Turkey
- Prabal Datta Barua
- School of Business (Information System), University of Southern Queensland, Toowoomba, Australia
- Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, Australia
- Mehmet Baygin
- Department of Computer Engineering, College of Engineering, Ardahan University, Ardahan, Turkey
- Turker Tuncer
- Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey
- Sengul Dogan
- Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey
- Orhan Yaman
- Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey
- Ali Hikmet Dogru
- Department of Computer Engineering, Middle East Technical University, Ankara, Turkey
- Rajendra U. Acharya
- Ngee Ann Polytechnic, Department of Electronics and Computer Engineering, Singapore
- Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore
- Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan
15. Abdulrahman A, Baykara M, Alakus TB. A Novel Approach for Emotion Recognition Based on EEG Signal Using Deep Learning. Appl Sci 2022; 12:10028. DOI: 10.3390/app121910028.
Abstract
Emotion can be defined as a voluntary or involuntary reaction to external factors. People express their emotions through actions such as words, sounds, facial expressions, and body language. However, the emotions expressed in such actions are sometimes manipulated, and real feelings cannot be conveyed clearly; therefore, understanding and analyzing emotions is essential. Recently, emotion analysis studies based on EEG signals have come to the foreground because of the more reliable data collected. In this study, emotion analysis based on EEG signals was performed and a deep learning model was proposed. The study consists of four stages. In the first stage, EEG data were obtained from the GAMEEMO dataset. In the second stage, the EEG signals were decomposed with both VMD (variational mode decomposition) and EMD (empirical mode decomposition), and a total of 14 IMFs (nine from EMD, five from VMD) were obtained from each signal. In the third stage, statistical features were extracted from the IMFs, using the maximum, minimum, and average values. In the last stage, both binary-class and multi-class classifications were performed. The proposed deep learning model was compared with kNN (k-nearest neighbors), SVM (support vector machines), and RF (random forest). The proposed deep learning method achieved an accuracy of 70.89% in binary-class classification and 90.33% in multi-class classification.
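The third-stage feature extraction, three statistics per IMF, reduces to a few array operations. A sketch with stand-in data (random arrays in place of actual EMD/VMD output, with an assumed signal length):

```python
import numpy as np

rng = np.random.default_rng(0)
imfs = rng.standard_normal((14, 512))  # stand-ins for the 9 EMD + 5 VMD IMFs

# maximum, minimum, and average value of each IMF, concatenated per signal
feature_vector = np.concatenate(
    [imfs.max(axis=1), imfs.min(axis=1), imfs.mean(axis=1)]
)
# 14 IMFs x 3 statistics = 42 features per EEG signal
```

The resulting fixed-length vector is what a downstream classifier (the proposed deep model, kNN, SVM, or RF) consumes, regardless of the original signal length.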
16. Aydemir E, Baygin M, Dogan S, Tuncer T, Barua PD, Chakraborty S, Faust O, Arunkumar N, Kaysi F, Acharya UR. Mental performance classification using fused multilevel feature generation with EEG signals. Int J Healthc Manag 2022. DOI: 10.1080/20479700.2022.2130645.
Affiliation(s)
- Emrah Aydemir
- Department of Management Information, College of Management, Sakarya University, Sakarya, Turkey
- Mehmet Baygin
- Department of Computer Engineering, College of Engineering, Ardahan University, Ardahan, Turkey
- Sengul Dogan
- Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey
- Turker Tuncer
- Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey
- Prabal Datta Barua
- School of Management & Enterprise, University of Southern Queensland, Darling Heights, Australia
- Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, Australia
- Subrata Chakraborty
- School of Science and Technology, Faculty of Science, Agriculture, Business and Law, University of New England, Armidale, Australia
- Center for Advanced Modelling and Geospatial Information Systems, Faculty of Engineering and IT, University of Technology Sydney, Sydney, Australia
- Oliver Faust
- School of Computing, Anglia Ruskin University, Cambridge, UK
- N. Arunkumar
- Department of Electronics and Instrumentation, SASTRA University, Thanjavur, India
- Feyzi Kaysi
- Department of Electronic and Automation, Vocational School of Technical Sciences, Istanbul University-Cerrahpasa, Istanbul, Turkey
- U. Rajendra Acharya
- Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore, Singapore
- Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore, Singapore
- Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan
17. CNN-XGBoost fusion-based affective state recognition using EEG spectrogram image analysis. Sci Rep 2022; 12:14122. PMID: 35986065. PMCID: PMC9391364. DOI: 10.1038/s41598-022-18257-x.
Abstract
Recognizing the emotional state of humans using brain signals is an active research domain with several open challenges. In this research, we propose a signal-spectrogram-image-based CNN-XGBoost fusion method for recognizing three dimensions of emotion, namely arousal (calm or excited), valence (positive or negative feeling), and dominance (without control or empowered). We used a benchmark dataset called DREAMER, in which EEG signals were collected during multiple stimuli along with self-evaluation ratings. In our proposed method, we first calculate the short-time Fourier transform (STFT) of the EEG signals and convert the results into RGB images to obtain the spectrograms. We then train a two-dimensional convolutional neural network (CNN) on the spectrogram images and retrieve features from a dense layer of the trained network. We apply an Extreme Gradient Boosting (XGBoost) classifier to the extracted CNN features to classify the signals along the arousal, valence, and dominance dimensions of human emotion. We compare our results with feature-fusion-based state-of-the-art approaches to emotion recognition. To do this, we applied various feature extraction techniques to the signals, including the fast Fourier transform, the discrete cosine transform, Poincaré features, power spectral density, Hjorth parameters, and some statistical features. Additionally, we used chi-square and recursive feature elimination techniques to select discriminative features. We formed feature vectors by feature-level fusion and applied support vector machine (SVM) and XGBoost classifiers to the fused features to classify different emotion levels. The performance study shows that the proposed spectrogram-image-based CNN-XGBoost fusion method outperforms the feature-fusion-based SVM and XGBoost methods, obtaining accuracies of 99.712% for arousal, 99.770% for valence, and 99.770% for dominance in human emotion detection.
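The first step of such a pipeline, turning an EEG trace into a spectrogram image, can be sketched as follows. The single-channel synthetic signal, the 128 Hz rate, and the STFT window are assumptions; the paper's actual parameters may differ.

```python
import numpy as np
from scipy.signal import stft

fs = 128  # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
eeg = rng.standard_normal(fs * 10)  # 10 s of synthetic single-channel EEG

f, t, Z = stft(eeg, fs=fs, nperseg=fs)      # short-time Fourier transform
spec_db = 20 * np.log10(np.abs(Z) + 1e-12)  # magnitude spectrogram in dB

# scale to 0-255 so the spectrogram can be saved as an 8-bit image for the CNN
lo, hi = spec_db.min(), spec_db.max()
img = ((spec_db - lo) / (hi - lo) * 255).astype(np.uint8)
```

`img` is a frequency-by-time array ready to be written out (or stacked into RGB channels) and fed to a 2D CNN, which is where the paper's fusion with XGBoost begins.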
18. Narendra R, Suresha M, Manjunatha Aradhya VN. COSLETS: Recognition of Emotions Based on EEG Signals. Brain Inform 2022. DOI: 10.1007/978-3-031-15037-1_4.