1. Gao Y, Zhang C, Huang J, Meng M. EEG multi-domain feature transfer based on sparse regularized Tucker decomposition. Cogn Neurodyn 2024; 18:185-197. [PMID: 38406207] [PMCID: PMC10881956] [DOI: 10.1007/s11571-023-09936-0]
Abstract
Tensor analysis of electroencephalogram (EEG) data can extract activity information and potential interactions between different brain regions. However, EEG data vary across subjects, and existing tensor decomposition algorithms cannot guarantee that features from different subjects lie in the same domain, which undermines the objectivity of the classification results and their analysis. In addition, traditional Tucker decomposition is prone to an explosion of feature dimensionality. To address these problems, and drawing on the idea of feature transfer, this paper proposes a novel EEG tensor transfer algorithm, Tensor Subspace Learning based on Sparse Regularized Tucker decomposition (TSL-SRT). In TSL-SRT, new EEG samples are treated as the target domain and the original samples as the source domain. Target features are obtained by projecting the target tensor onto the source feature space, ensuring that all features lie in the same domain. Furthermore, to counter the dimensionality explosion caused by TSL-SRT, a redundant-feature screening algorithm is adopted to eliminate redundant EEG features; the method achieves 77.8%, 73.2%, and 75.3% accuracy on three BCI datasets. Visualizing the spatial basis matrix of the feature space shows that TSL-SRT effectively extracts the features of the brain regions active in the BCI task and can simultaneously extract the multi-domain features of different subjects in the same domain, providing a new method for the tensor analysis of EEG.
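The core projection step, mapping a new subject's EEG tensor onto factor (basis) matrices learned from the source domain, can be sketched with mode-n products. This is a minimal illustration with hypothetical dimensions and random matrices standing in for the learned factors; the paper's sparse regularization and the actual fitting of the Tucker model are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical source-domain factor matrices from a Tucker decomposition:
# channels x spatial factors, bands x spectral factors, samples x temporal factors.
U_chan = rng.standard_normal((32, 4))   # spatial basis
U_freq = rng.standard_normal((5, 3))    # spectral basis
U_time = rng.standard_normal((128, 8))  # temporal basis

# A new target-domain EEG sample: channels x frequency bands x time points.
X_target = rng.standard_normal((32, 5, 128))

# Project the target tensor onto the source feature space via mode-n products
# (X x1 U_chan^T x2 U_freq^T x3 U_time^T), yielding a compact core feature tensor.
G = np.einsum('cft,ca,fb,td->abd', X_target, U_chan, U_freq, U_time)

features = G.ravel()  # 4*3*8 = 96 features, all expressed in the source feature space
print(G.shape, features.shape)
```

Because every subject's tensor is projected onto the same source bases, the resulting feature vectors are directly comparable across subjects, which is the point of the transfer step.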
Affiliation(s)
- Yunyuan Gao: College of Automation, Hangzhou Dianzi University, Hangzhou 310018, Zhejiang, People’s Republic of China; Zhejiang Key Laboratory of Brain Computer Collaborative Intelligence, Hangzhou Dianzi University, Hangzhou 310018, Zhejiang, People’s Republic of China
- Congrui Zhang: College of Automation, Hangzhou Dianzi University, Hangzhou 310018, Zhejiang, People’s Republic of China
- Jincheng Huang: HDU-ITMO Joint Institute, Hangzhou Dianzi University, Hangzhou 310018, Zhejiang, People’s Republic of China
- Ming Meng: College of Automation, Hangzhou Dianzi University, Hangzhou 310018, Zhejiang, People’s Republic of China; Zhejiang Key Laboratory of Brain Computer Collaborative Intelligence, Hangzhou Dianzi University, Hangzhou 310018, Zhejiang, People’s Republic of China
2. Liu R, Chao Y, Ma X, Sha X, Sun L, Li S, Chang S. ERTNet: an interpretable transformer-based framework for EEG emotion recognition. Front Neurosci 2024; 18:1320645. [PMID: 38298914] [PMCID: PMC10827927] [DOI: 10.3389/fnins.2024.1320645]
Abstract
Background: Emotion recognition using EEG signals enables clinicians to assess patients' emotional states with precision and immediacy. However, the complexity of EEG signal data poses challenges for traditional recognition methods. Deep learning techniques effectively capture the nuanced emotional cues within these signals by leveraging extensive data; nonetheless, most deep learning models lack interpretability despite their accuracy. Methods: We developed an interpretable end-to-end EEG emotion recognition framework rooted in a hybrid CNN and transformer architecture. Specifically, temporal convolution isolates salient information from EEG signals while filtering out potential high-frequency noise, and spatial convolution discerns the topological connections between channels. Subsequently, a transformer module processes the feature maps to integrate high-level spatiotemporal features, enabling identification of the prevailing emotional state. Results: Experimental results demonstrated that our model excels in diverse emotion classification, achieving an accuracy of 74.23% ± 2.59% on a dimensional model (DEAP) and 67.17% ± 1.70% on a discrete model (SEED-V). These results surpass the performance of both CNN- and LSTM-based counterparts. Through interpretive analysis, we ascertained that the beta and gamma bands in the EEG signals exert the most significant impact on emotion recognition performance. Notably, our model can independently learn a Gaussian-like convolution kernel, effectively filtering high-frequency noise from the input EEG data. Discussion: Given its robust performance and interpretative capabilities, our proposed framework is a promising tool for EEG-driven emotion brain-computer interfaces.
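The observation that the learned temporal kernels behave like Gaussian low-pass filters can be illustrated with a toy convolution. The kernel width, sampling rate, and signal below are illustrative assumptions, not the trained ERTNet weights:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy EEG channel: a 10 Hz rhythm plus high-frequency noise (256 Hz sampling).
fs = 256
t = np.arange(0, 2, 1 / fs)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Gaussian-like convolution kernel, similar in shape to those the model learns.
k = np.arange(-8, 9)
kernel = np.exp(-(k ** 2) / (2 * 2.0 ** 2))
kernel /= kernel.sum()  # unit gain at DC

smoothed = np.convolve(signal, kernel, mode='same')

# The broadband noise power drops while the slow 10 Hz rhythm is largely retained.
print(signal.var(), smoothed.var())
```

Such a kernel attenuates white noise roughly in proportion to the sum of its squared taps, while low-frequency rhythms pass almost unchanged, which is why a learned Gaussian-like kernel acts as an implicit denoiser.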
Affiliation(s)
- Ruixiang Liu: School of Intelligent Medicine, China Medical University, Shenyang, Liaoning, China
- Yihu Chao: School of Intelligent Medicine, China Medical University, Shenyang, Liaoning, China
- Xuerui Ma: School of Intelligent Medicine, China Medical University, Shenyang, Liaoning, China
- Xianzheng Sha: School of Intelligent Medicine, China Medical University, Shenyang, Liaoning, China
- Limin Sun: Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai, China
- Shuo Li: School of Life Sciences, China Medical University, Shenyang, Liaoning, China
- Shijie Chang: School of Intelligent Medicine, China Medical University, Shenyang, Liaoning, China
3. Olmez Y, Koca GO, Sengur A, Acharya UR. PS-VTS: particle swarm with visit table strategy for automated emotion recognition with EEG signals. Health Inf Sci Syst 2023; 11:22. [PMID: 37151916] [PMCID: PMC10160266] [DOI: 10.1007/s13755-023-00224-z]
Abstract
Recognizing emotions accurately in real life is crucial for human-computer interaction (HCI) systems. Electroencephalogram (EEG) signals have been extensively employed to identify emotions, and researchers have used several EEG-based emotion identification datasets to validate their proposed models. In this paper, we employ a novel metaheuristic optimization approach for accurate emotion classification, applying it to select both the channels and the rhythms of EEG data. We propose the particle swarm with visit table strategy (PS-VTS) metaheuristic technique to improve the effectiveness of EEG-based human emotion identification. First, the EEG signals are denoised using a low-pass filter, and rhythm extraction is performed using the discrete wavelet transform (DWT). The continuous wavelet transform (CWT) then converts each rhythm signal into a rhythm image. A pre-trained MobileNetV2 model is used for deep feature extraction, and a support vector machine (SVM) classifies the emotions. Two models are developed to select the optimal channel and rhythm sets. In Model 1, optimal channels are selected separately for each rhythm, and the global optimum is determined from the best channel sets of the rhythms. In Model 2, the best rhythms are first determined for each channel, and then the optimal channel-rhythm set is selected. Our proposed model obtained accuracies of 99.2871% and 97.8571% for the classification of HA (high arousal) versus LA (low arousal) and HV (high valence) versus LV (low valence), respectively, on the DEAP dataset, the highest classification accuracy compared with previously reported methods.
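As an illustration of the rhythm-extraction stage, the sketch below isolates the alpha band from a synthetic signal. It uses an ideal FFT band mask in place of the paper's DWT purely for brevity; the sampling rate, band edges, and test frequencies are assumptions:

```python
import numpy as np

fs = 128
t = np.arange(0, 4, 1 / fs)
# Synthetic EEG: alpha (10 Hz) plus gamma (40 Hz) activity.
x = np.sin(2 * np.pi * 10 * t) + 0.8 * np.sin(2 * np.pi * 40 * t)

def band_rhythm(x, fs, lo, hi):
    """Keep only spectral content between lo and hi Hz (ideal band-pass)."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    X[(freqs < lo) | (freqs > hi)] = 0
    return np.fft.irfft(X, n=x.size)

alpha = band_rhythm(x, fs, 8, 13)  # the alpha rhythm of this channel

# The extracted rhythm is dominated by the 10 Hz component.
freqs = np.fft.rfftfreq(x.size, 1 / fs)
peak = freqs[np.argmax(np.abs(np.fft.rfft(alpha)))]
print(peak)  # 10.0
```

In the paper each such rhythm signal is then rendered as a CWT image and passed to the pre-trained network, while PS-VTS searches over which channel-rhythm combinations to keep.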
Affiliation(s)
- Yagmur Olmez: Department of Mechatronics Engineering, University of Firat, 23119 Elazig, Turkey
- Gonca Ozmen Koca: Department of Mechatronics Engineering, University of Firat, 23119 Elazig, Turkey
- Abdulkadir Sengur: Department of Electrical and Electronics Engineering, University of Firat, 23119 Elazig, Turkey
- U. Rajendra Acharya: School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Australia
4. Sattiraju A, Ellis CA, Miller RL, Calhoun VD. An Explainable and Robust Deep Learning Approach for Automated Electroencephalography-based Schizophrenia Diagnosis. bioRxiv 2023:2023.05.27.542592. [PMID: 37398173] [PMCID: PMC10312438] [DOI: 10.1101/2023.05.27.542592]
Abstract
Schizophrenia (SZ) is a neuropsychiatric disorder that affects millions globally. Current diagnosis of SZ is symptom-based, which is difficult given the variability of symptoms across patients. To this end, many recent studies have developed deep learning methods for automated diagnosis of SZ, especially using raw EEG, which provides high temporal precision. For such methods to be productionized, they must be both explainable and robust. Explainable models are essential for identifying biomarkers of SZ, and robust models are critical for learning generalizable patterns, especially amidst changes in the deployment environment. One common example is channel loss during EEG recording, which can be detrimental to classifier performance. In this study, we developed a novel channel dropout (CD) approach to increase the robustness to channel loss of explainable deep learning models trained on EEG data for SZ diagnosis. We developed a baseline convolutional neural network (CNN) architecture and implemented our approach as a CD layer added to the baseline (CNN-CD). We then applied two explainability approaches to both models for insight into the learned spatial and spectral features and showed that applying CD decreases model sensitivity to channel loss. The CNN and CNN-CD achieved accuracies of 81.9% and 80.9% on test data, respectively. Furthermore, our models heavily prioritized the parietal electrodes and the α-band, which is supported by existing literature. It is our hope that this study motivates the further development of explainable and robust models and bridges the transition from research to application in a clinical decision-support role.
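The channel-dropout idea, zeroing entire EEG channels at random during training so the classifier cannot over-rely on any single electrode, can be sketched as a simple augmentation. The dropout rate and tensor layout below are illustrative assumptions, not the paper's exact CD layer:

```python
import numpy as np

def channel_dropout(batch, p=0.25, rng=None):
    """Randomly zero whole EEG channels of a (trials, channels, time) batch with prob p."""
    if rng is None:
        rng = np.random.default_rng()
    # One keep/drop decision per (trial, channel), broadcast over the time axis.
    keep = rng.random((batch.shape[0], batch.shape[1], 1)) >= p
    return batch * keep

rng = np.random.default_rng(42)
eeg = rng.standard_normal((8, 19, 256))   # 8 trials, 19 channels, 256 samples
augmented = channel_dropout(eeg, p=0.25, rng=rng)

# Dropped channels are exactly zero for their full time course; shape is unchanged.
dropped = (augmented == 0).all(axis=2).sum()
print(augmented.shape, int(dropped))
```

Training on such batches mimics the electrode losses that occur in deployment, which is what makes the learned features robust to missing channels.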
Affiliation(s)
- Abhinav Sattiraju: Tri-institutional Center for Translational Research in Neuroimaging and Data Science: Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, GA 30303, USA
- Charles A Ellis: Tri-institutional Center for Translational Research in Neuroimaging and Data Science: Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, GA 30303, USA
- Robyn L Miller: Tri-institutional Center for Translational Research in Neuroimaging and Data Science: Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, GA 30303, USA
- Vince D Calhoun: Tri-institutional Center for Translational Research in Neuroimaging and Data Science: Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, GA 30303, USA
5. Jafari M, Shoeibi A, Khodatars M, Bagherzadeh S, Shalbaf A, García DL, Gorriz JM, Acharya UR. Emotion recognition in EEG signals using deep learning methods: A review. Comput Biol Med 2023; 165:107450. [PMID: 37708717] [DOI: 10.1016/j.compbiomed.2023.107450]
Abstract
Emotions are a critical aspect of daily life and serve a crucial role in human decision-making, planning, reasoning, and other mental states. As a result, they are considered a significant factor in human interactions. Human emotions can be identified from various sources, such as facial expressions, speech, behavior (gesture/posture), or physiological signals. The use of physiological signals can enhance the objectivity and reliability of emotion detection. Compared with peripheral physiological signals, electroencephalogram (EEG) recordings are generated directly by the central nervous system and are closely related to human emotions. EEG signals offer the high temporal resolution needed to evaluate brain function, making them a popular modality in emotion recognition studies. Emotion recognition using EEG signals presents several challenges, including signal variability due to electrode positioning, individual differences in signal morphology, and the lack of a universal standard for EEG signal processing. Moreover, identifying the appropriate features for emotion recognition from EEG data requires further research. Finally, there is a need to develop more robust artificial intelligence (AI) methods, including conventional machine learning (ML) and deep learning (DL), to handle the complex and diverse EEG signals associated with emotional states. This paper examines the application of DL techniques to emotion recognition from EEG signals and provides a detailed discussion of relevant articles. The paper explores the significant challenges in emotion recognition using EEG signals, highlights the potential of DL techniques in addressing these challenges, and suggests the scope for future research in emotion recognition using DL techniques. The paper concludes with a summary of its findings.
Affiliation(s)
- Mahboobeh Jafari: Data Science and Computational Intelligence Institute, University of Granada, Spain
- Afshin Shoeibi: Data Science and Computational Intelligence Institute, University of Granada, Spain
- Marjane Khodatars: Data Science and Computational Intelligence Institute, University of Granada, Spain
- Sara Bagherzadeh: Department of Biomedical Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Ahmad Shalbaf: Department of Biomedical Engineering and Medical Physics, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- David López García: Data Science and Computational Intelligence Institute, University of Granada, Spain
- Juan M Gorriz: Data Science and Computational Intelligence Institute, University of Granada, Spain; Department of Psychiatry, University of Cambridge, UK
- U Rajendra Acharya: School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Australia
6. Fu B, Gu C, Fu M, Xia Y, Liu Y. A novel feature fusion network for multimodal emotion recognition from EEG and eye movement signals. Front Neurosci 2023; 17:1234162. [PMID: 37600016] [PMCID: PMC10436100] [DOI: 10.3389/fnins.2023.1234162]
Abstract
Emotion recognition is a challenging task, and the use of multimodal fusion methods for emotion recognition has become a trend. Fusion vectors provide a more comprehensive representation of changes in the subject's emotional state, leading to more accurate recognition results, and different fusion inputs or feature fusion methods affect the final outcome differently. In this paper, we propose a novel Multimodal Feature Fusion Neural Network model (MFFNN) that effectively extracts complementary information from eye movement signals and fuses it with features from EEG signals. We construct a dual-branch feature extraction module that extracts features from both modalities while ensuring temporal alignment. A multi-scale feature fusion module is introduced that uses cross-channel soft attention to adaptively select information from different spatial scales, enabling features at different spatial scales to be acquired for effective fusion. We conduct experiments on the publicly available SEED-IV dataset, and our model achieves an accuracy of 87.32% in recognizing four emotions (happiness, sadness, fear, and neutrality). The results demonstrate that the proposed model better exploits the complementary information in EEG and eye movement signals, thereby improving the accuracy and stability of emotion recognition.
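The cross-channel soft attention used for multi-scale fusion can be sketched as a softmax over per-scale channel descriptors, so that each channel adaptively weights how much each spatial scale contributes. The dimensions and the global-average-pooling descriptor below are assumptions; this is not the exact MFFNN module:

```python
import numpy as np

def softmax(z, axis=0):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(7)

# Feature maps from two spatial-scale branches: (channels, features).
scale_a = rng.standard_normal((16, 32))
scale_b = rng.standard_normal((16, 32))

# Channel descriptors via global average pooling, one per scale: (2, channels).
desc = np.stack([scale_a.mean(axis=1), scale_b.mean(axis=1)])

# Soft attention: for each channel, the weights over the two scales sum to one.
weights = softmax(desc, axis=0)

fused = weights[0, :, None] * scale_a + weights[1, :, None] * scale_b
print(fused.shape)  # (16, 32)
```

The soft (rather than hard) selection keeps the module differentiable, so the attention weights can be trained end-to-end with the rest of the network.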
Affiliation(s)
- Baole Fu: School of Automation, Qingdao University, Qingdao, China; Institute for Future, Qingdao University, Qingdao, China
- Chunrui Gu: School of Automation, Qingdao University, Qingdao, China; Institute for Future, Qingdao University, Qingdao, China
- Ming Fu: School of Automation, Qingdao University, Qingdao, China; Institute for Future, Qingdao University, Qingdao, China
- Yuxiao Xia: School of Automation, Qingdao University, Qingdao, China; Institute for Future, Qingdao University, Qingdao, China
- Yinhua Liu: School of Automation, Qingdao University, Qingdao, China; Institute for Future, Qingdao University, Qingdao, China; Shandong Key Laboratory of Industrial Control Technology, Qingdao, China
7. Nandini D, Yadav J, Rani A, Singh V. Design of subject independent 3D VAD emotion detection system using EEG signals and machine learning algorithms. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104894]
8. Gong L, Li M, Zhang T, Chen W. EEG emotion recognition using attention-based convolutional transformer neural network. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104835]
9. Puri DV, Nalbalwar SL, Nandgaonkar AB, Gawande JP, Wagh A. Automatic detection of Alzheimer’s disease from EEG signals using low-complexity orthogonal wavelet filter banks. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104439]
10. Bai Z, Liu J, Hou F, Chen Y, Cheng M, Mao Z, Song Y, Gao Q. Emotion recognition with residual network driven by spatial-frequency characteristics of EEG recorded from hearing-impaired adults in response to video clips. Comput Biol Med 2023; 152:106344. [PMID: 36470142] [DOI: 10.1016/j.compbiomed.2022.106344]
Abstract
In recent years, emotion recognition based on electroencephalography (EEG) signals has attracted considerable attention, but most existing work has focused on normal or depressed people. Because of their lack of hearing, it is difficult for hearing-impaired people to express their emotions through language in social activities. In this work, we collected EEG signals from hearing-impaired subjects while they watched six kinds of emotional video clips (happiness, inspiration, neutral, anger, fear, and sadness) for emotion recognition. The biharmonic spline interpolation method was used to map the traditional frequency-domain features, differential entropy (DE), power spectral density (PSD), and wavelet entropy (WE), into the spatial domain. The patch embedding (PE) method segmented the feature maps into equal-sized patches to capture differences in the distribution of emotional information among brain regions. For feature classification, a compact residual network with depthwise convolution (DC) and pointwise convolution (PC) is proposed to separate the spatial and channel mixing dimensions and better extract information between channels. Subject-dependent experiments with 70% training and 30% testing sets were performed. The average classification accuracies with PE (DE), PE (PSD), and PE (WE) were 91.75%, 85.53%, and 75.68%, respectively, improvements of 11.77%, 23.54%, and 16.61% over DE, PSD, and WE alone. Moreover, comparison experiments on the SEED and DEAP datasets with PE (DE) achieved average accuracies of 90.04% (positive, neutral, and negative) and 88.75% (high valence and low valence). By exploring the emotional brain regions, we found that the frontal, parietal, and temporal lobes of hearing-impaired people were associated with emotional activity, whereas in normal people the main emotional brain area is the frontal lobe.
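The best-performing feature here, differential entropy, has a closed form for a band-filtered EEG segment under a Gaussian assumption: DE = ½ ln(2πeσ²). A minimal sketch (segment length and scale are illustrative):

```python
import numpy as np

def differential_entropy(segment):
    """DE of a (band-filtered) EEG segment under a Gaussian assumption: 0.5*ln(2*pi*e*var)."""
    return 0.5 * np.log(2 * np.pi * np.e * segment.var())

rng = np.random.default_rng(3)
segment = rng.standard_normal(512) * 2.0  # toy band-filtered segment, std approx. 2

de = differential_entropy(segment)
print(de)
```

One DE value per channel and frequency band gives a channel-wise feature map, which the paper then interpolates onto a 2-D scalp grid before patch embedding.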
Affiliation(s)
- Zhongli Bai: Tianjin Key Laboratory for Control Theory and Applications in Complicated Systems, School of Electrical Engineering and Automation, Tianjin University of Technology, Tianjin 300384, China
- Junjie Liu: Tianjin Key Laboratory for Control Theory and Applications in Complicated Systems, School of Electrical Engineering and Automation, Tianjin University of Technology, Tianjin 300384, China
- Fazheng Hou: Tianjin Key Laboratory for Control Theory and Applications in Complicated Systems, School of Electrical Engineering and Automation, Tianjin University of Technology, Tianjin 300384, China
- Yirui Chen: Tianjin Key Laboratory for Control Theory and Applications in Complicated Systems, School of Electrical Engineering and Automation, Tianjin University of Technology, Tianjin 300384, China
- Meiyi Cheng: Tianjin Key Laboratory for Control Theory and Applications in Complicated Systems, School of Electrical Engineering and Automation, Tianjin University of Technology, Tianjin 300384, China
- Zemin Mao: Technical College for the Deaf, Tianjin University of Technology, Tianjin 300384, China
- Yu Song: Tianjin Key Laboratory for Control Theory and Applications in Complicated Systems, School of Electrical Engineering and Automation, Tianjin University of Technology, Tianjin 300384, China
- Qiang Gao: Tianjin Key Laboratory for Control Theory and Applications in Complicated Systems, TUT Maritime College, Tianjin University of Technology, Tianjin 300384, China
11. Lin Y, Wing-Kuen Ling B, Wang W, Hu L, Xu N, Zhou X. Fusion of electroencephalograms at different channels and different activities via multivariate quaternion valued singular spectrum analysis for intellectual and developmental disorder recognition. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104256]
12. Awan AW, Usman SM, Khalid S, Anwar A, Alroobaea R, Hussain S, Almotiri J, Ullah SS, Akram MU. An Ensemble Learning Method for Emotion Charting Using Multimodal Physiological Signals. Sensors (Basel) 2022; 22:9480. [PMID: 36502183] [PMCID: PMC9739519] [DOI: 10.3390/s22239480]
Abstract
Emotion charting using multimodal signals is in great demand for stroke-affected patients, for psychiatrists examining patients, and for neuromarketing applications. Multimodal signals for emotion charting include electrocardiogram (ECG), electroencephalogram (EEG), and galvanic skin response (GSR) signals. EEG, ECG, and GSR are physiological signals that can be used to identify human emotions, and because they are generated autonomously by the human nervous system they are hard to bias, which has made them a strong focus of recent research. Researchers have developed multiple methods for classifying these signals for emotion detection. However, owing to their non-linear nature and the noise introduced during recording, accurate classification of physiological signals remains a challenge for emotion charting. Valence and arousal are two important dimensions for emotion detection; therefore, this paper presents a novel ensemble learning method based on deep learning for classifying four emotional states: high valence and high arousal (HVHA), low valence and low arousal (LVLA), high valence and low arousal (HVLA), and low valence and high arousal (LVHA). In the proposed method, the multimodal signals (EEG, ECG, and GSR) are preprocessed using bandpass filtering and independent component analysis (ICA) for noise removal in the EEG signals, followed by a discrete wavelet transform for conversion from the time domain to the frequency domain. The discrete wavelet transform yields spectrograms of the physiological signals, from which features are extracted using stacked autoencoders. A feature vector obtained from the bottleneck layer of the autoencoder is fed to three classifiers, SVM (support vector machine), RF (random forest), and LSTM (long short-term memory), followed by majority voting as ensemble classification. The proposed system was trained and tested on the AMIGOS dataset with k-fold cross-validation, obtained a highest accuracy of 94.5%, and showed improved results compared with other state-of-the-art methods.
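The final ensemble step, majority voting over the SVM, RF, and LSTM predictions, can be sketched directly. The per-sample predictions below are hypothetical, chosen only to exercise the vote:

```python
from collections import Counter

def majority_vote(*predictions):
    """Per-sample majority vote across classifiers; ties go to the first-seen label."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions)]

# Hypothetical per-sample predictions from the three classifiers for the four
# valence/arousal states used in the paper.
svm_pred  = ['HVHA', 'LVLA', 'HVLA', 'LVHA']
rf_pred   = ['HVHA', 'LVLA', 'LVHA', 'LVHA']
lstm_pred = ['LVLA', 'LVLA', 'HVLA', 'HVHA']

print(majority_vote(svm_pred, rf_pred, lstm_pred))
# ['HVHA', 'LVLA', 'HVLA', 'LVHA']
```

Voting lets the heterogeneous base learners correct each other's isolated errors, which is why the ensemble can outperform any single classifier.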
Affiliation(s)
- Amna Waheed Awan: Department of Computer Engineering, Bahria University, Islamabad 44000, Pakistan
- Syed Muhammad Usman: Department of Creative Technologies, Faculty of Computing and AI, Air University, Islamabad 44000, Pakistan
- Shehzad Khalid: Department of Computer Engineering, Bahria University, Islamabad 44000, Pakistan
- Aamir Anwar: School of Computing and Engineering, The University of West London, London W5 5RF, UK
- Roobaea Alroobaea: Department of Computer Science, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
- Saddam Hussain: School of Digital Science, Universiti Brunei Darussalam, Jalan Tungku Link, Gadong BE1410, Brunei
- Jasem Almotiri: Department of Computer Science, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
- Syed Sajid Ullah: Department of Information and Communication Technology, University of Agder (UiA), N-4898 Grimstad, Norway; Department of Electrical and Computer Engineering, Villanova University, Villanova, PA 19085, USA
- Muhammad Usman Akram: College of Electrical and Mechanical Engineering (E&ME), National University of Science and Technology (NUST), Islamabad 44000, Pakistan
13. Deniz E, Sobahi N, Omar N, Sengur A, Acharya UR. Automated robust human emotion classification system using hybrid EEG features with ICBrainDB dataset. Health Inf Sci Syst 2022; 10:31. [PMID: 36387749] [PMCID: PMC9649575] [DOI: 10.1007/s13755-022-00201-y]
Abstract
Emotion identification is an essential task for human-computer interaction systems, and electroencephalogram (EEG) signals have been widely used in emotion recognition. Several EEG-based emotion recognition datasets have been used to validate developed models; in this work, we use the new ICBrainDB EEG dataset to classify angry, neutral, happy, and sad emotions. Signal-processing features based on the wavelet transform (WT) and tunable Q-factor wavelet transform (TQWT), and image-processing features based on the histogram of oriented gradients (HOG), local binary patterns (LBP), and a convolutional neural network (CNN), are extracted from the EEG signals. The WT extracts the rhythms from each channel of the EEG signal, and the instantaneous frequency and spectral entropy are computed from each rhythm signal. The mean and standard deviation of the instantaneous frequency and the spectral entropy of each rhythm form the final feature vector. In the second signal-based method, the spectral entropy of each EEG channel after the TQWT is used to create the feature vector. For the image-based features, each EEG channel is transformed into a time-frequency plot using the synchrosqueezed wavelet transform, and feature vectors are then constructed individually from windowed HOG and LBP features. Each channel of the EEG data is also fed to a pretrained CNN to extract features. The ReliefF feature selector is employed in the feature selection process. Various classification algorithms, namely k-nearest neighbors (KNN), support vector machines, and neural networks, are used for automated classification of the angry, neutral, happy, and sad emotions. Our model obtained an average accuracy of 90.7% using HOG features and a KNN classifier with a tenfold cross-validation strategy.
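The spectral entropy feature used for both the WT rhythms and the TQWT sub-bands can be sketched as the Shannon entropy of the normalized power spectrum. The normalization and the synthetic test signals are assumptions of this sketch:

```python
import numpy as np

def spectral_entropy(x):
    """Shannon entropy (in bits) of the normalized power spectral density."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    p = psd / psd.sum()
    p = p[p > 0]  # avoid log(0)
    return -(p * np.log2(p)).sum()

fs = 128
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(5)

tone = np.sin(2 * np.pi * 10 * t)     # energy concentrated in a single frequency bin
noise = rng.standard_normal(t.size)   # energy spread across all bins

# A pure rhythm has low spectral entropy; broadband noise has high entropy.
print(spectral_entropy(tone), spectral_entropy(noise))
```

This contrast is what makes the feature discriminative: emotional states that sharpen or flatten the spectral content of a rhythm move its entropy in opposite directions.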
Affiliation(s)
- Erkan Deniz: Electrical and Electronics Engineering Department, Technology Faculty, Firat University, Elazig, Turkey
- Nebras Sobahi: Department of Electrical and Computer Engineering, King Abdulaziz University, Jeddah, Saudi Arabia
- Naaman Omar: Department of Information Technology Management, Administration Technical College, Duhok Polytechnic University, Duhok, Iraq
- Abdulkadir Sengur: Electrical and Electronics Engineering Department, Technology Faculty, Firat University, Elazig, Turkey
- U. Rajendra Acharya: Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore 599489, Singapore; Biomedical Engineering, School of Science and Technology, SUSS University, Singapore; Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan
14. Dogan A, Barua PD, Baygin M, Tuncer T, Dogan S, Yaman O, Dogru AH, Acharya RU. Automated accurate emotion classification using Clefia pattern-based features with EEG signals. International Journal of Healthcare Management 2022. [DOI: 10.1080/20479700.2022.2141694]
Affiliation(s)
- Abdullah Dogan: Department of Computer Engineering, Middle East Technical University, Ankara, Turkey
- Prabal Datta Barua: School of Business (Information System), University of Southern Queensland, Toowoomba, Australia; Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, Australia
- Mehmet Baygin: Department of Computer Engineering, College of Engineering, Ardahan University, Ardahan, Turkey
- Turker Tuncer: Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey
- Sengul Dogan: Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey
- Orhan Yaman: Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey
- Ali Hikmet Dogru: Department of Computer Engineering, Middle East Technical University, Ankara, Turkey
- Rajendra U. Acharya: Ngee Ann Polytechnic, Department of Electronics and Computer Engineering, Singapore; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore; Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan
|
15
|
Kaklauskas A, Abraham A, Ubarte I, Kliukas R, Luksaite V, Binkyte-Veliene A, Vetloviene I, Kaklauskiene L. A Review of AI Cloud and Edge Sensors, Methods, and Applications for the Recognition of Emotional, Affective and Physiological States. Sensors (Basel) 2022; 22:7824. PMID: 36298176; PMCID: PMC9611164; DOI: 10.3390/s22207824.
Abstract
Affective, emotional, and physiological states (AFFECT) detection and recognition by capturing human signals is a fast-growing area, which has been applied across numerous domains. The research aim is to review publications on how techniques that use brain and biometric sensors can be used for AFFECT recognition, consolidate the findings, provide a rationale for the current methods, compare the effectiveness of existing methods, and quantify how likely they are to address the issues/challenges in the field. In efforts to better achieve the key goals of Society 5.0, Industry 5.0, and human-centered design, the recognition of emotional, affective, and physiological states is progressively becoming an important matter and offers tremendous growth of knowledge and progress in these and other related fields. In this research, a review of AFFECT recognition brain and biometric sensors, methods, and applications was performed, based on Plutchik's wheel of emotions. Due to the immense variety of existing sensors and sensing systems, this study aimed to provide an analysis of the available sensors that can be used to define human AFFECT, and to classify them based on the type of sensing area and their efficiency in real implementations. Based on statistical and multiple criteria analysis across 169 nations, our outcomes identify a connection between a nation's success, its number of Web of Science articles published, and its frequency of citation on AFFECT recognition. The principal conclusions present how this research contributes to the big picture in the field under analysis and explore forthcoming study trends.
Affiliation(s)
- Arturas Kaklauskas: Department of Construction Management and Real Estate, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
- Ajith Abraham: Machine Intelligence Research Labs, Scientific Network for Innovation and Research Excellence, Auburn, WA 98071, USA
- Ieva Ubarte: Institute of Sustainable Construction, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
- Romualdas Kliukas: Department of Applied Mechanics, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
- Vaida Luksaite: Department of Construction Management and Real Estate, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
- Arune Binkyte-Veliene: Institute of Sustainable Construction, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
- Ingrida Vetloviene: Department of Construction Management and Real Estate, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
- Loreta Kaklauskiene: Department of Construction Management and Real Estate, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
|
16
|
Kumar G. S. S., Arun A, Sampathila N, Vinoth R. Machine Learning Models for Classification of Human Emotions Using Multivariate Brain Signals. Computers 2022; 11:152. DOI: 10.3390/computers11100152.
Abstract
Humans can portray expressions contrary to their emotional state of mind, so it is difficult to judge a person's real emotional state simply from physical appearance. Although researchers are working on facial expression analysis, voice recognition, and gesture recognition, the accuracy of such analyses is much lower and the results are not reliable. Hence, it becomes vital to have a realistic emotion detector. Electroencephalogram (EEG) signals are unaffected by the external appearance and behavior of the human and help ensure accurate analysis of the state of mind. The EEG signals from electrodes in different scalp regions are studied for performance. Hence, EEG has gained attention over time for the accurate classification of emotional states in human beings, both for human-machine interaction and for designing programs with which an individual could perform a self-analysis of their emotional state. In the proposed scheme, we extract power spectral densities (PSDs) of multivariate EEG signals from different sections of the brain. From the extracted PSDs, the features that are better suited for classification are selected and classified using long short-term memory (LSTM) and bi-directional long short-term memory (Bi-LSTM) networks. A 2-D emotion model is studied over the frontal, parietal, temporal, and occipital regions, and region-based classification is performed by considering positive and negative emotions. The performance of our previous models, artificial neural network (ANN), support vector machine (SVM), K-nearest neighbor (K-NN), and LSTM, was compared, and 94.95% accuracy was achieved using Bi-LSTM with four prefrontal electrodes.
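The PSD feature-extraction step described above can be sketched as a Welch-style averaged periodogram in plain NumPy; the sampling rate, window length, and test signal here are illustrative assumptions rather than the study's settings.

```python
import numpy as np

def welch_psd(x, fs, nperseg=256):
    """Averaged-periodogram (Welch-style) PSD estimate with a Hann window, 50% overlap."""
    step = nperseg // 2
    win = np.hanning(nperseg)
    segs = [x[i:i + nperseg] * win for i in range(0, len(x) - nperseg + 1, step)]
    psds = [np.abs(np.fft.rfft(s)) ** 2 / (fs * (win ** 2).sum()) for s in segs]
    freqs = np.fft.rfftfreq(nperseg, 1.0 / fs)
    return freqs, np.mean(psds, axis=0)

def band_power(freqs, psd, lo, hi):
    """Integrate the PSD over [lo, hi) Hz with a rectangle rule."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

fs = 128                                             # assumed sampling rate (Hz)
t = np.arange(fs * 8) / fs
x = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.default_rng(1).standard_normal(t.size)

freqs, psd = welch_psd(x, fs)
alpha = band_power(freqs, psd, 8, 13)    # band containing the 10 Hz component
beta = band_power(freqs, psd, 13, 30)
```

Band powers computed this way per electrode are one common form of the PSD features that an LSTM or Bi-LSTM would then consume as a sequence.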
|
17
|
Agarwal R, Andujar M, Canavan S. Classification of emotions using EEG activity associated with different areas of the brain. Pattern Recognit Lett 2022. DOI: 10.1016/j.patrec.2022.08.018.
|
18
|
Wang L, Huang X, Ren L, Zhan Q. Signal analysis and classification of a novel active brain-computer interface based on four-category sequential coding. Biomed Signal Process Control 2022; 78:103857. DOI: 10.1016/j.bspc.2022.103857.
|
19
|
Priyasad D, Fernando T, Denman S, Sridharan S, Fookes C. Affect recognition from scalp-EEG using channel-wise encoder networks coupled with geometric deep learning and multi-channel feature fusion. Knowl Based Syst 2022. DOI: 10.1016/j.knosys.2022.109038.
|
20
|
Tripathy RK, Paternina MA, de la O Serna JA. Editorial: Machine Learning and Deep Learning for Physiological Signal Analysis. Front Physiol 2022; 13:887070. PMID: 35492610; PMCID: PMC9043552; DOI: 10.3389/fphys.2022.887070.
Affiliation(s)
- Rajesh Kumar Tripathy (corresponding author): Birla Institute of Technology and Science (BITS) Pilani, Hyderabad, India
- Mario Arrieta Paternina: Department of Electrical Engineering, National Autonomous University of Mexico, México City, Mexico
- José Antonio de la O Serna: Department of Electrical Engineering, Autonomous University of Nuevo León, San Nicolás de los Garza, Mexico
|
21
|
Muralidharan N, Gupta S, Prusty MR, Tripathy RK. Detection of COVID19 from X-ray images using multiscale Deep Convolutional Neural Network. Appl Soft Comput 2022; 119:108610. PMID: 35185439; PMCID: PMC8842414; DOI: 10.1016/j.asoc.2022.108610.
Abstract
The Coronavirus disease 2019 (COVID19) pandemic has led to a dramatic loss of human life worldwide and poses a tremendous challenge to public health. Immediate detection and diagnosis of COVID19 are of lifesaving importance for both patients and doctors. Demand for COVID19 tests increased significantly in many countries, resulting in a limited availability of laboratory test kits. Additionally, the Reverse Transcription-Polymerase Chain Reaction (RT-PCR) test for the diagnosis of COVID19 is costly and time-consuming, so X-ray imaging is widely used for the diagnosis of COVID19. Because the manual investigation of X-ray images is a tedious process, computer-aided diagnosis (CAD) systems are needed for the automated detection of COVID19 disease. This paper proposes a novel approach for the automated detection of COVID19 using chest X-ray images. The Fixed Boundary-based Two-Dimensional Empirical Wavelet Transform (FB2DEWT) is used to extract modes from the X-ray images; in this study, a single X-ray image is decomposed into seven modes. The evaluated modes are used as input to a multiscale deep Convolutional Neural Network (CNN) to classify X-ray images into no-finding, pneumonia, and COVID19 classes. The proposed deep learning model is evaluated using X-ray images from two publicly available databases, where database A consists of 1225 images and database B consists of 9000 images. The results show that the proposed approach obtained maximum accuracies of 96% and 100% for the multiclass and binary classification schemes on dataset A with a 5-fold cross-validation (CV) strategy. For dataset B, accuracies of 97.17% and 96.06% are achieved for the multiclass and binary classification schemes with 5-fold CV. The proposed multiscale deep learning model has demonstrated a higher classification performance than the existing approaches for detecting COVID19 using X-ray images.
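The FB2DEWT used in the paper is a specialized empirical transform; purely as a generic illustration of splitting an image into sub-band "modes" for a multiscale classifier, here is a single-level 2-D Haar decomposition (an assumed stand-in, not the authors' method, on a dummy image).

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar decomposition into LL, LH, HL, HH sub-bands."""
    a = img[0::2, 0::2].astype(float)   # top-left pixel of each 2x2 block
    b = img[0::2, 1::2].astype(float)   # top-right
    c = img[1::2, 0::2].astype(float)   # bottom-left
    d = img[1::2, 1::2].astype(float)   # bottom-right
    ll = (a + b + c + d) / 4.0          # approximation (block averages)
    lh = (a - b + c - d) / 4.0          # horizontal detail
    hl = (a + b - c - d) / 4.0          # vertical detail
    hh = (a - b - c + d) / 4.0          # diagonal detail
    return ll, lh, hl, hh

rng = np.random.default_rng(2)
xray = rng.integers(0, 256, size=(64, 64))   # dummy 64x64 grayscale image
subbands = haar2d(xray)                      # four 32x32 sub-band images
```

Each sub-band (or "mode") could then be fed as a separate input scale to a CNN, which is the general idea behind the paper's multiscale design.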
|
22
|
Maithri M, Raghavendra U, Gudigar A, Samanth J, Murugappan M, Chakole Y, Acharya UR. Automated emotion recognition: Current trends and future perspectives. Comput Methods Programs Biomed 2022; 215:106646. PMID: 35093645; DOI: 10.1016/j.cmpb.2022.106646.
Abstract
BACKGROUND Human emotions greatly affect the actions of a person. Automated emotion recognition has applications in multiple domains such as health care, e-learning, and surveillance. The development of computer-aided diagnosis (CAD) tools has enabled the automated recognition of human emotions. OBJECTIVE This review provides an insight into the various methods employed using electroencephalogram (EEG), facial, and speech signals, coupled with multi-modal emotion recognition techniques; most of the state-of-the-art papers published on this topic are reviewed. METHOD This study considered the various emotion recognition (ER) models proposed between 2016 and 2021. The papers were analysed based on the methods employed, the classifiers used, and the performance obtained. RESULTS There is a significant rise in the application of deep learning techniques for ER; they have been widely applied to EEG, speech, facial expression, and multimodal features to develop accurate ER models. CONCLUSION Most of the proposed machine and deep learning-based systems have yielded good performance for automated ER in a controlled environment; however, high performance must still be achieved for ER in uncontrolled environments.
Affiliation(s)
- M Maithri: Department of Mechatronics, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- U Raghavendra: Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Anjan Gudigar: Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Jyothi Samanth: Department of Cardiovascular Technology, Manipal College of Health Professions, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India
- Murugappan Murugappan: Department of Electronics and Communication Engineering, Kuwait College of Science and Technology, 13133, Kuwait
- Yashas Chakole: Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- U Rajendra Acharya: School of Engineering, Ngee Ann Polytechnic, Clementi 599489, Singapore; Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore
|
23
|
Bhanumathi KS, Jayadevappa D, Tunga S, Hu F. Feedback Artificial Shuffled Shepherd Optimization-Based Deep Maxout Network for Human Emotion Recognition Using EEG Signals. Int J Telemed Appl 2022; 2022:1-14. PMID: 35282409; PMCID: PMC8904914; DOI: 10.1155/2022/3749413.
Abstract
Emotion recognition is very important for humans to enhance self-awareness and react correctly to the actions around them. Owing to the complexity and variety of emotions, EEG-based emotion recognition is still a difficult problem. Hence, an effective emotion recognition approach is designed using the proposed feedback artificial shuffled shepherd optimization- (FASSO-) based deep maxout network (DMN) for recognizing emotions from EEG signals. The proposed technique incorporates the feedback artificial tree (FAT) algorithm and the shuffled shepherd optimization algorithm (SSOA). A median filter is used for preprocessing to remove the noise present in the EEG signals. Features such as DWT coefficients, spectral flatness, logarithmic band power, fluctuation index, spectral decrease, spectral roll-off, and relative energy are extracted for further processing. Based on the data-augmented results, emotion recognition is accomplished using the DMN, whose training is performed with the proposed FASSO method. Furthermore, the experimental results and performance analysis of the proposed algorithm show efficient performance with respect to accuracy, specificity, and sensitivity, with maximal values of 0.889, 0.89, and 0.886, respectively.
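The median-filter preprocessing step mentioned above can be sketched in a few lines of NumPy; the synthetic trace, the injected artifacts, and the window length of 5 samples are illustrative assumptions.

```python
import numpy as np

def median_filter(x, k=5):
    """Sliding-window median filter (odd k); edges are handled by reflection padding."""
    pad = k // 2
    xp = np.pad(x, pad, mode="reflect")
    windows = np.lib.stride_tricks.sliding_window_view(xp, k)
    return np.median(windows, axis=1)

eeg = np.sin(np.linspace(0, 4 * np.pi, 200))   # clean synthetic 'EEG' trace
noisy = eeg.copy()
noisy[::20] += 5.0                             # inject impulsive artifacts
cleaned = median_filter(noisy, k=5)
```

A median filter is chosen over a mean filter here because the median of a window ignores a single large outlier, which is exactly the behavior wanted for impulsive EEG artifacts.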
|
24
|
Tuncer T, Dogan S, Baygin M, Rajendra Acharya U. Tetromino pattern based accurate EEG emotion classification model. Artif Intell Med 2022; 123:102210. PMID: 34998511; DOI: 10.1016/j.artmed.2021.102210.
Abstract
Nowadays, emotion recognition using electroencephalogram (EEG) signals is becoming a hot research topic. The aim of this paper is to classify the emotions of EEG signals with high accuracy using a novel game-based feature generation function. Hence, a multileveled handcrafted-feature automated emotion classification model using EEG signals is presented. A novel textural feature generation method called Tetromino, inspired by the Tetris game, is proposed in this work; Tetris is one of the most famous games worldwide and is built around four-square pieces called tetrominoes. First, the EEG signals are subjected to the discrete wavelet transform (DWT) to create various decomposition levels. Then, novel features are generated from the decomposed DWT sub-bands using the Tetromino method. Next, the maximum relevance minimum redundancy (mRMR) feature selection method is utilized to select the most discriminative features, and the selected features are classified using a support vector machine classifier. Finally, each channel's results (validation predictions) are obtained, and a mode-function-based voting method is used to obtain the overall results. We have validated our developed model using three databases (DREAMER, GAMEEMO, and DEAP), attaining 100% accuracy on the DREAMER and GAMEEMO datasets and over 99% classification accuracy on the DEAP dataset. Thus, our developed emotion detection model has yielded the best classification accuracy rate compared to the state-of-the-art techniques and is ready to be tested for clinical application after validation with more diverse datasets. These results show the success of the presented Tetromino-pattern-based EEG signal classification model, validated using three public emotional EEG datasets.
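The final channel-fusion step described above, mode-based voting over per-channel predictions, can be sketched with the standard library; the label strings below are hypothetical.

```python
from collections import Counter

def mode_vote(channel_predictions):
    """Fuse per-channel predictions into one label by majority (mode) voting."""
    return Counter(channel_predictions).most_common(1)[0][0]

# Hypothetical per-channel emotion labels for a single EEG trial.
preds = ["happy", "happy", "sad", "happy", "neutral"]
final = mode_vote(preds)   # "happy" wins with 3 of 5 votes
```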
Affiliation(s)
- Turker Tuncer: Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey
- Sengul Dogan: Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey
- Mehmet Baygin: Department of Computer Engineering, Faculty of Engineering, Ardahan University, Ardahan, Turkey
- U Rajendra Acharya: Ngee Ann Polytechnic, Department of Electronics and Computer Engineering, 599489, Singapore; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore; Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan
|
25
|
Li R, Ren C, Zhang X, Hu B. A novel ensemble learning method using multiple objective particle swarm optimization for subject-independent EEG-based emotion recognition. Comput Biol Med 2022; 140:105080. PMID: 34902609; DOI: 10.1016/j.compbiomed.2021.105080.
Abstract
Emotion recognition is a vital but challenging step in creating passive brain-computer interface applications. In recent years, many studies on electroencephalogram (EEG)-based emotion recognition have been conducted, and ensemble learning has been widely used because of its superior accuracy and generalization. In this study, we proposed a novel ensemble learning method based on multiple objective particle swarm optimization for subject-independent EEG-based emotion recognition. First, we used a 4 s sliding time window with a 2 s overlap to extract 13 different features from the EEG signals and construct a feature vector, then employed L1 regularization to select effective features. Second, a model selection method was applied to choose the optimal basic submodels. Third, we proposed an ensemble operator that converts the classification results of a single model from discrete values to continuous values to better characterize the classification results, and multiple objective particle swarm optimization was adopted to determine the optimal parameters of the ensemble learning model. Finally, we conducted extensive experiments on two public datasets, DEAP and SEED. Considering the generalization of the model, we applied leave-one-subject-out cross-validation to evaluate its performance. The experimental results demonstrate that the proposed method achieves a better recognition performance than single methods, commonly used ensemble learning methods, and state-of-the-art methods. The average accuracies for arousal and valence are 65.70% and 64.22%, respectively, on the DEAP database, and the average accuracy on the SEED database is 84.44%.
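The segmentation step described above (4 s windows with 2 s overlap) can be sketched as follows; the sampling rate and dummy signal are assumptions for illustration.

```python
import numpy as np

def sliding_windows(x, fs, win_s=4.0, overlap_s=2.0):
    """Segment a 1-D signal into win_s-second windows overlapping by overlap_s seconds."""
    win = int(win_s * fs)
    step = int((win_s - overlap_s) * fs)
    return [x[i:i + win] for i in range(0, len(x) - win + 1, step)]

fs = 128                             # assumed sampling rate (Hz)
signal = np.arange(fs * 10)          # 10 s dummy signal
segs = sliding_windows(signal, fs)   # 4 s windows with a 2 s hop
```

A 10 s signal yields four overlapping 4 s windows (starting at 0, 2, 4, and 6 s); each window would then produce one 13-feature vector in the paper's pipeline.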
Affiliation(s)
- Rui Li: Gansu Provincial Key Laboratory of Wearable Computing, School of Information Science and Engineering, Lanzhou University, Lanzhou, China
- Chao Ren: Gansu Provincial Key Laboratory of Wearable Computing, School of Information Science and Engineering, Lanzhou University, Lanzhou, China
- Xiaowei Zhang: Gansu Provincial Key Laboratory of Wearable Computing, School of Information Science and Engineering, Lanzhou University, Lanzhou, China
- Bin Hu: Gansu Provincial Key Laboratory of Wearable Computing, School of Information Science and Engineering, Lanzhou University, Lanzhou, China
|
26
|
Khodatars M, Shoeibi A, Sadeghi D, Ghaasemi N, Jafari M, Moridian P, Khadem A, Alizadehsani R, Zare A, Kong Y, Khosravi A, Nahavandi S, Hussain S, Acharya UR, Berk M. Deep learning for neuroimaging-based diagnosis and rehabilitation of Autism Spectrum Disorder: A review. Comput Biol Med 2021; 139:104949. PMID: 34737139; DOI: 10.1016/j.compbiomed.2021.104949.
Abstract
Accurate diagnosis of Autism Spectrum Disorder (ASD) followed by effective rehabilitation is essential for the management of this disorder. Artificial intelligence (AI) techniques can aid physicians to apply automatic diagnosis and rehabilitation procedures. AI techniques comprise traditional machine learning (ML) approaches and deep learning (DL) techniques. Conventional ML methods employ various feature extraction and classification techniques, but in DL, the process of feature extraction and classification is accomplished intelligently and integrally. DL methods for diagnosis of ASD have been focused on neuroimaging-based approaches. Neuroimaging techniques are non-invasive disease markers potentially useful for ASD diagnosis. Structural and functional neuroimaging techniques provide physicians substantial information about the structure (anatomy and structural connectivity) and function (activity and functional connectivity) of the brain. Due to the intricate structure and function of the brain, proposing optimum procedures for ASD diagnosis with neuroimaging data without exploiting powerful AI techniques like DL may be challenging. In this paper, studies conducted with the aid of DL networks to distinguish ASD are investigated. Rehabilitation tools provided for supporting ASD patients utilizing DL networks are also assessed. Finally, we will present important challenges in the automated detection and rehabilitation of ASD and propose some future works.
Affiliation(s)
- Marjane Khodatars: Dept. of Medical Engineering, Mashhad Branch, Islamic Azad University, Mashhad, Iran
- Afshin Shoeibi: Faculty of Electrical Engineering, FPGA Lab, K. N. Toosi University of Technology, Tehran, Iran; Computer Engineering Department, Ferdowsi University of Mashhad, Mashhad, Iran
- Delaram Sadeghi: Dept. of Medical Engineering, Mashhad Branch, Islamic Azad University, Mashhad, Iran
- Navid Ghaasemi: Faculty of Electrical Engineering, FPGA Lab, K. N. Toosi University of Technology, Tehran, Iran; Computer Engineering Department, Ferdowsi University of Mashhad, Mashhad, Iran
- Mahboobeh Jafari: Electrical and Computer Engineering Faculty, Semnan University, Semnan, Iran
- Parisa Moridian: Faculty of Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Ali Khadem: Department of Biomedical Engineering, Faculty of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran
- Roohallah Alizadehsani: Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Victoria, 3217, Australia
- Assef Zare: Faculty of Electrical Engineering, Gonabad Branch, Islamic Azad University, Gonabad, Iran
- Yinan Kong: School of Engineering, Macquarie University, Sydney, 2109, Australia
- Abbas Khosravi: Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Victoria, 3217, Australia
- Saeid Nahavandi: Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Victoria, 3217, Australia
- U Rajendra Acharya: Ngee Ann Polytechnic, Singapore, 599489, Singapore; Dept. of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan; Dept. of Biomedical Engineering, School of Science and Technology, Singapore University of Social Sciences, Singapore
- Michael Berk: Deakin University, IMPACT, the Institute for Mental and Physical Health and Clinical Translation, School of Medicine, Barwon Health, Geelong, Australia; Orygen, The National Centre of Excellence in Youth Mental Health, Centre for Youth Mental Health, Florey Institute for Neuroscience and Mental Health and the Department of Psychiatry, The University of Melbourne, Melbourne, Australia
|
27
|
Abstract
Emotions are closely related to human behavior, family, and society. Changes in emotions cause differences in electroencephalography (EEG) signals, which reflect different emotional states and are not easy to disguise. EEG-based emotion recognition has been widely used in human-computer interaction, medical diagnosis, the military, and other fields. In this paper, we describe the common steps of an EEG-based emotion recognition algorithm, from data acquisition, preprocessing, feature extraction, and feature selection to classification. Then, we review the existing EEG-based emotion recognition methods and assess their classification performance. This paper will help researchers quickly understand the basic theory of emotion recognition and provide references for the future development of EEG. Moreover, emotion is an important representation of safety psychology.
Affiliation(s)
- Haoran Liu: The Boiler and Pressure Vessel Safety Inspection Institute of Henan Province, Zhengzhou, China
- Ying Zhang: Patent Examination Cooperation (Henan) Center of the Patent Office, CNIPA, Zhengzhou, China
- Yujun Li: The Boiler and Pressure Vessel Safety Inspection Institute of Henan Province, Zhengzhou, China
- Xiangyi Kong: The Boiler and Pressure Vessel Safety Inspection Institute of Henan Province, Zhengzhou, China
|
28
|
Bilucaglia M, Duma GM, Mento G, Semenzato L, Tressoldi PE. Applying machine learning EEG signal classification to emotion-related brain anticipatory activity. F1000Res 2021; 9:173. PMID: 37899775; PMCID: PMC10603316; DOI: 10.12688/f1000research.22202.2.
Abstract
Machine learning approaches have been fruitfully applied to several neurophysiological signal classification problems. Considering the relevance of emotion in human cognition and behaviour, an important application of machine learning has been found in the field of emotion identification based on neurophysiological activity. Nonetheless, results in the literature vary widely depending on the neuronal activity measurement, the signal features, and the classifier type. The present work aims to provide new methodological insight into machine learning applied to emotion identification based on electrophysiological brain activity. For this reason, we analysed previously recorded EEG activity measured while emotional stimuli of high and low arousal (auditory and visual) were presented to a group of healthy participants. Our target signal to classify was the pre-stimulus-onset brain activity. The classification performance of three different classifiers (LDA, SVM and kNN) was compared using both spectral and temporal features. Furthermore, we also contrasted the performance of static and dynamic (time-evolving) approaches. The best static feature-classifier combination was the SVM with spectral features (51.8%), followed by LDA with spectral features (51.4%) and kNN with temporal features (51%). The best dynamic feature-classifier combination was the SVM with temporal features (63.8%), followed by kNN with temporal features (63.70%) and LDA with temporal features (63.68%). The results show a clear increase in classification accuracy with temporal dynamic features.
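As an illustration of the kNN classifier included in the comparison above, here is a minimal Euclidean-distance implementation on hypothetical two-dimensional feature vectors (not the study's EEG features, and k=3 is an assumption):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority label among its k nearest training points (Euclidean)."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Two toy feature clusters standing in for two classes of trials.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0.0, 0.5, (20, 2)), rng.normal(3.0, 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
pred = knn_predict(X, y, np.array([2.9, 3.1]))   # query near the second cluster
```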
Affiliation(s)
- Gian Marco Duma: Department of Developmental and Social Psychology (DPSS), Università degli Studi di Padova, Padova, Italy
- Giovanni Mento: Department of General Psychology, Università degli Studi di Padova, Padova, Italy
- Luca Semenzato: Department of General Psychology, Università degli Studi di Padova, Padova, Italy
- Patrizio E. Tressoldi: Science of Consciousness Research Group, Studium Patavinum, Università degli Studi di Padova, Padova, Italy
|
29
|
Dogan A, Akay M, Barua PD, Baygin M, Dogan S, Tuncer T, Dogru AH, Acharya UR. PrimePatNet87: Prime pattern and tunable q-factor wavelet transform techniques for automated accurate EEG emotion recognition. Comput Biol Med 2021; 138:104867. PMID: 34543892; DOI: 10.1016/j.compbiomed.2021.104867.
Abstract
Nowadays, many deep models have been presented to recognize emotions using electroencephalogram (EEG) signals. These deep models are computationally intensive and take a long time to train. It is also difficult to achieve high classification performance for emotion classification using conventional machine learning techniques. To overcome these limitations, we present a hand-crafted EEG emotion classification network. In this work, novel prime pattern and tunable q-factor wavelet transform (TQWT) techniques are used to develop an automated model to classify human emotions. The proposed cognitive model comprises feature extraction, feature selection, and classification steps. TQWT is applied to the EEG signals to obtain sub-bands. The prime pattern and a statistical feature generator are employed on the generated sub-bands and the original signal to produce 798 features. Half of these (399 of 798) are selected using the minimum redundancy maximum relevance (mRMR) selector, and the misclassification rate of each signal is evaluated using a support vector machine (SVM) classifier. The proposed network generates 87 feature vectors; hence, the model is named PrimePatNet87. In the last step of feature generation, the best 20 feature vectors, selected according to the calculated misclassification rates, are concatenated. The resulting feature vector is subjected to feature selection, and the most significant 1000 features are selected using the mRMR selector. These features are then classified using an SVM classifier. In the final phase, iterative majority voting is used to generate the overall result. Three publicly available datasets, namely DEAP, DREAMER, and GAMEEMO, were used to develop the proposed model. The presented PrimePatNet87 model reached over 99% classification accuracy on all datasets with leave-one-subject-out (LOSO) validation. Our results demonstrate that the developed prime pattern network is accurate and ready for real-world applications.
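The final iterative-majority-voting step described above can be sketched as follows: given per-feature-vector SVM predictions ordered by increasing misclassification rate, take the majority vote over the best k vectors for growing odd k and keep the most accurate combination. The data and helper names are hypothetical illustrations, not the paper's code.

```python
from collections import Counter

def majority_vote(preds):
    """Most common label among a list of predictions."""
    return Counter(preds).most_common(1)[0][0]

def iterative_majority_voting(pred_matrix, labels):
    """pred_matrix[i] = predictions of the i-th best learner (best first)."""
    best_acc, best_preds = -1.0, None
    for k in range(1, len(pred_matrix) + 1, 2):        # odd k avoids vote ties
        voted = [majority_vote([pred_matrix[i][t] for i in range(k)])
                 for t in range(len(labels))]
        acc = sum(v == y for v, y in zip(voted, labels)) / len(labels)
        if acc > best_acc:
            best_acc, best_preds = acc, voted
    return best_preds, best_acc

preds = [[1, 0, 1, 1], [1, 1, 1, 0], [0, 0, 1, 1]]     # 3 learners, 4 trials
labels = [1, 0, 1, 1]
final, acc = iterative_majority_voting(preds, labels)
print(final, acc)   # [1, 0, 1, 1] 1.0
```

Ordering the learners by misclassification rate before voting means the loop only ever adds progressively weaker voters, so the best k is found with a single pass.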
30
Islam MR, Islam MM, Rahman MM, Mondal C, Singha SK, Ahmad M, Awal A, Islam MS, Moni MA. EEG Channel Correlation Based Model for Emotion Recognition. Comput Biol Med 2021; 136:104757. [PMID: 34416570 DOI: 10.1016/j.compbiomed.2021.104757] [Citation(s) in RCA: 29] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2021] [Revised: 08/05/2021] [Accepted: 08/06/2021] [Indexed: 11/26/2022]
Abstract
Emotion recognition using Artificial Intelligence (AI) is a fundamental prerequisite for improving Human-Computer Interaction (HCI). Recognizing emotion from the Electroencephalogram (EEG) has been globally accepted in many applications such as intelligent thinking, decision-making, social communication, feeling detection, affective computing, etc. Nevertheless, because the amplitude of the EEG signal varies only slightly over time, properly recognizing emotion from this signal is highly challenging. Usually, considerable effort is required to identify the proper feature or feature set for an effective feature-based emotion recognition system. To reduce the manual effort of feature extraction, we propose a deep-learning model based on a Convolutional Neural Network (CNN). First, the one-dimensional EEG data are converted to Pearson's Correlation Coefficient (PCC) feature images of the channel correlations of the EEG sub-bands. The images are then fed into the CNN model to recognize emotion. Two protocols were conducted: protocol-1 to identify two levels, and protocol-2 to recognize three levels, of the valence and arousal that characterize emotion. We found that using only the upper triangular portion of the PCC feature images reduced the computational complexity and memory footprint without hampering model accuracy. Maximum accuracies of 78.22% on valence and 74.92% on arousal were obtained using the internationally authorized DEAP dataset.
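The channel-correlation feature at the core of this model can be sketched in a few lines: Pearson's correlation coefficient between every pair of EEG channels, keeping only the strictly upper triangle since the matrix is symmetric. This is a pure-Python stand-in for illustration; the paper builds such correlation images per sub-band before feeding them to a CNN.

```python
def pearson(x, y):
    """Pearson's correlation coefficient between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def upper_triangle_pcc(channels):
    """Flatten the strictly upper triangular PCC entries: C*(C-1)/2 values."""
    c = len(channels)
    return [pearson(channels[i], channels[j])
            for i in range(c) for j in range(i + 1, c)]

chans = [[1.0, 2.0, 3.0, 4.0],
         [2.0, 4.0, 6.0, 8.0],    # perfectly correlated with channel 0
         [4.0, 3.0, 2.0, 1.0]]    # perfectly anti-correlated with channel 0
feats = upper_triangle_pcc(chans)
print(len(feats))                 # 3 pairs for 3 channels
```

Keeping only the upper triangle halves the input size without losing information, which is exactly the memory/compute saving the abstract reports.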
Affiliation(s)
- Md Rabiul Islam
- Electrical and Electronic Engineering, Bangladesh Army University of Engineering & Technology, Natore, 6431, Bangladesh; Electrical and Electronic Engineering, Khulna University of Engineering & Technology, Khulna, 9203, Bangladesh.
- Md Milon Islam
- Computer Science and Engineering, Khulna University of Engineering & Technology, Khulna, 9203, Bangladesh.
- Md Mustafizur Rahman
- Electrical and Electronic Engineering, Jashore University of Science and Technology, Jashore, 7408, Bangladesh.
- Chayan Mondal
- Electrical and Electronic Engineering, Khulna University of Engineering & Technology, Khulna, 9203, Bangladesh.
- Suvojit Kumar Singha
- Electrical and Electronic Engineering, Khulna University of Engineering & Technology, Khulna, 9203, Bangladesh.
- Mohiuddin Ahmad
- Electrical and Electronic Engineering, Khulna University of Engineering & Technology, Khulna, 9203, Bangladesh.
- Abdul Awal
- Electronics and Communication Engineering, Khulna University, Khulna, 9208, Bangladesh.
- Md Saiful Islam
- School of Information and Communication Technology, Griffith University, Gold Coast, Australia.
- Mohammad Ali Moni
- School of Health and Rehabilitation Sciences, The University of Queensland, St Lucia, QLD, 4072, Australia.
31
Shanmuga Priya K, Vasanthi S. Emotion classification using EEG signal for women safety application based on deep learning. J Intell Fuzzy Syst 2023. [DOI: 10.3233/jifs-221825] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/03/2023]
Abstract
An emotion is a conscious logical response that varies across different situations in women's lives. These mental responses involve physiological, cognitive, and behavioral changes. Gender-based violence undermines women's participation in decision-making, resulting in a decline in their quality of life. More accurate and automatic classification of women's emotions can enhance human-computer interfaces and real-time security. Some wearable technologies and mobile applications claim to ensure the safety of women; however, they rely on limited social action and are ineffective at ensuring women's safety when and where it is needed. In this work, a novel CDB-LSTM network is proposed to accurately classify women's emotions into seven classes. The electroencephalogram (EEG) offers a non-invasive means of identifying emotions. First, the EEG signals are preprocessed and converted into images via a Time-Frequency Representation (TFR): a smoothed pseudo-Wigner-Ville distribution (SPWVD) is employed to convert the time-domain EEG signals into input images. These images are then given as input to a Convolutional Deep Belief Network (CDBN) to extract the most relevant features. Finally, a bidirectional LSTM classifies the emotions of women into seven classes, namely: happy, relax, sad, fear, anxiety, anger, and stress. The proposed CDB-LSTM network achieves a high accuracy of 97.27% in the validation phase and improves the overall accuracy by 6.20%, 32.98%, 6.85%, and 3.30% over CNN-LSTM, a multi-domain feature fusion model, GCNN-LSTM, and CNN with SVM and DT, respectively.
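The SPWVD itself involves smoothing kernels beyond the scope of a short sketch; as a hedged stand-in, the following illustrates the general signal-to-time-frequency-image step with a naive windowed DFT magnitude (an STFT-style transform). It only shows the shape of the EEG-to-image conversion the pipeline relies on, not the authors' SPWVD.

```python
import cmath

def stft_image(signal, win=4, step=2):
    """Rows = time windows, cols = DFT magnitude bins 0..win//2."""
    image = []
    for start in range(0, len(signal) - win + 1, step):
        seg = signal[start:start + win]
        row = [abs(sum(seg[n] * cmath.exp(-2j * cmath.pi * k * n / win)
                       for n in range(win)))
               for k in range(win // 2 + 1)]
        image.append(row)
    return image

sig = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]   # pure tone, one cycle per window
img = stft_image(sig)
print(len(img), len(img[0]))   # 3 time rows, 3 frequency bins
```

Each row of the resulting image localizes energy in frequency for one time window (here all energy lands in the middle bin), and it is such 2-D arrays that a convolutional feature extractor like the CDBN consumes.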